Flux.1 D - V1.0
After some fiddling around with Kohya and some, to my surprise, successful tests with pretty strange concept LoRAs for Flux, I thought I would give this one a go to see if it works. Getting a Flux version wasn't really my goal, since Flux does a good job with contrast ( for the most part ). My main goal was pretty much to get rid of this dreaded artificial/fake look, and I thought a dataset this "undefined" would do the trick. Well, it kinda does sometimes, and sometimes it makes things even worse.
Like the SDXL version, it's more something to be used with other LoRAs, especially the ones that add a lot of color.
Sorry for being a bit lazy with the prompts and images. Used pretty much the same stuff as I did in V.3.0, but it should do the trick for showcase reasons for now. Sometimes it kills the details and doesn't add anything particularly interesting ( the jellyfish or parrot images, for example ).
Same seed for corresponding images
The first 2 images are with and without the LoRA; after that it's reversed, so first without, then with the LoRA ( sometimes at different strengths ).
Some have the trigger words at the start, but I kinda doubt those are needed in Flux.
Based on the V1.0 dataset, not the new one ( didn't want to use too many images ).
Not sure if I wanna do more LoRAs for Flux, but I will test quite a lot. It's really confusing at the moment with all the different versions, and it doesn't feel worth spending a lot of time on something that could be outdated 10 hours later.
And, just a side note: this was trained on a 4060 Ti ( 16 GB ), with 14.4 to 15.2 GB max VRAM usage, 3.95 - 5.15 s/it, 512px, rank 4... so, no 4090 needed here. Of course it would be faster, but if you can just fire and forget and do something else in the meantime it's not really a problem, unless you wanna test something.
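If you're wondering what that s/it range means in actual waiting time, here's a minimal Python sketch; the step count in it is only a placeholder example, not the actual number of steps this LoRA was trained for.

```python
# Rough wall-clock estimate from the numbers above ( 4060 Ti, 512px, rank 4 ).
# The s/it range is taken from this post; the step count is just a placeholder.

def training_hours(steps: int, sec_per_it: float) -> float:
    """Total training time in hours for a given step count and seconds/iteration."""
    return steps * sec_per_it / 3600

example_steps = 4000  # placeholder, not the real step count for this LoRA
for s_it in (3.95, 5.15):
    hours = training_hours(example_steps, s_it)
    print(f"{example_steps} steps @ {s_it} s/it ~= {hours:.1f} h")
```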
Like always, I will use it here and there to see if I can get anything meaningful out of it. More likely it's just something to sate my curiosity.
V.3.0
After my little blunder with V2.0 I've made sure that this time it's the real deal, without merging or whatever. I was a bit skeptical at first that, because of the number of pictures added, it would stray too much from the first version, but fortunately it didn't.
I'm really happy that a lot of people found a use for this LoRA and made so many pictures. Considering it was never intended to do what it is doing right now, it's kinda nice to see it has another purpose, or let's say it can be used for something else ( like all LoRAs ).
This will probably ( most likely ) be the last version for quite some time, because just adding more pictures will not change the main effect. Most of it comes from using different models, LoRA combinations and prompts anyway.
I will focus more on actually creating images to find more combinations ( and to relax a bit ), making new weird LoRAs, fixing old LoRAs, etc. Also, I've downloaded so much other stuff from people and haven't had the chance to use most of it.
Thx again for all the images posted so far, and I hope to see more, even though every LoRA has a shelf life in this fast-moving AI sector and the fancy factor will inevitably come to an end. :)
V2.0
Little announcement about V2.0 ( 02.04 or 04.02, depending on your country I guess ).
Well, I'm officially an idiot.
I had started training another LoRA when I noticed nothing was happening in the sample images even after 2000 steps, which is pretty much impossible. So I went searching for why and found a setting I didn't change back ( or in this case set to the right value ): LR warmup. Normally I don't use warmup steps, but before this I was testing what it really does, on a dataset of about 4 images. I always use a setting of 100 epochs, but it's set to spit out a model every 500 steps, not per epoch. Warmup was set to 32%, so 4 images x 10 repeats x 100 epochs = 4000 steps ( 32% warmup = 1280 steps where not much is really happening ).
Now, this LoRA had the same settings, just with 60 images ( 60 x 10 x 100 = 60,000 steps, of which 32% warmup = 19,200 steps )... sooo, even after 12,000 steps it wasn't even getting started, which means I pretty much trained nothing but hot air. Explains a lot, actually. Since I fell asleep I didn't notice it, even though it was a bit baffling that it took so long to have an effect in the first place, which I thought was because of the images themselves.
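For anyone who wants to double-check the math from the two paragraphs above, here's a minimal sketch; the repeats, epochs and warmup percentage are just the values mentioned above, nothing else about the trainer config is implied.

```python
# Step math from the announcement above:
# total steps = images * repeats * epochs, warmup steps = 32% of total
# ( the part where not much is really happening ).

def step_math(images: int, repeats: int = 10, epochs: int = 100, warmup_ratio: float = 0.32):
    total = images * repeats * epochs
    warmup = int(total * warmup_ratio)
    return total, warmup

for n_images in (4, 60):
    total, warmup = step_math(n_images)
    print(f"{n_images:>2} images -> {total:>5} total steps, {warmup:>5} warmup steps")
# Output:
#  4 images ->  4000 total steps,  1280 warmup steps
# 60 images -> 60000 total steps, 19200 warmup steps
```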
Yeah... so this one is a dud, and most of what it does comes from merging with V1.0.
I will train it again on the weekend when I get back from work, this time with the right settings.
I'm really sorry about this. Maybe it was just an unintentional April Fools' joke :) .... :(
Added a few more pics because I'm using it quite a lot lately and I haven't uploaded a new LoRA for some time, which is pretty much a combination of having too much to do at work and choosing only nerve-wracking, complicated subjects that also "have" to be made in the most convoluted of ways... everything else would be boring.
Also, a lot of time went into taking Pony apart, which surprised me in some regards. If you set all the furry, anime and general porn stuff aside, it's almost more basic than the base XL model, which is good for adding concepts and styles via LoRA training, but it knows a lot more complex things like positioning, emotions, weird angles, etc... but that's a completely different story.
Don't know how much more training I will get done this year. Maybe I will do some easier things in between, but who knows. AI is moving fast, so there might be some new shit in a month... or a week.
Tested with the same seed and the same model as previously generated images. The change is quite subtle in some cases and extreme in others ( and now I know why: read the announcement ).
Also, sorry for not making more variations of the showcased images. Sometimes it takes longer to prepare pictures than to train the LoRA, but it shouldn't matter too much in this case. More things will follow naturally anyway.
V1.0
Tried to push the AI a bit to see what it would pick up from training images that are almost completely black and only have a faint shape.
It was almost impossible to prepare the dataset in Photoshop because I pretty much saw nothing. Of course every person has different monitor settings, etc., so it's hard to say if those pictures were really this dark, but it put a lot of strain on my eyes. Even now, looking at the generated images is kinda hard, though they are not as dark as the training images.
I was actually surprised the model picked up anything at all. I will put it aside for now till I figure out what to do with it. It certainly does some weird stuff. Somehow, feeding generated pictures into the ControlNet IP-Adapter enhances everything and produces random results.
Tips for generating:
I have no clue... good luck.