Uff...
Even after ludicrous amounts of training for a LoRA, testing here and testing there, changing settings and whatever, it didn't go as planned. If anything, it showed me a lot of flaws in my approach, but also opportunity. Failure doesn't mean you can't learn anything from it. I'm also someone who is more practically oriented than theoretical. I could read 1000 articles about a subject and not learn anything; do it myself and 'baam', gotcha.
It doesn't do what I originally envisioned, but it certainly does something, especially when it comes to details. Just too bad I couldn't keep up with my usual shenanigans misusing other people's LoRAs, but this one here took all my free time.
Some words in the example pictures don't even have an impact on the image or subject, but it looked kinda cool.
What I did, what I learned and how to "maybe" use it:
Has an extreme tendency to put a normal-looking male face on inanimate objects
More pictures of not just complete beards, but also separate mustaches, messy hair and a combination of all of them.
Tried to only use trigger words for all separate images, but given the source images the training went haywire ( actually as expected )
Merged epochs together ( and lost the metadata while doing so )
Instead of following other people's tutorials or tips I read more about what each setting does and what it means, then made changes and tests accordingly.
I could already see that it wouldn't work the way I wanted because I set Kohya to spit out an image every 100 steps, but despite that I kept going.
Before I used any other model I did some tests on base SDXL first. Since the LoRA was trained on this model, it revealed a lot about why things went wrong. Other models are good at hiding stuff, so there is no, let's say, "raw" output. ( the same may apply if a LoRA was trained on another model )
On SDXL, captions make all the difference. Just changing one word can make or break everything. Just a style or one character is no problem, but anything more complex and it's a constant fight against the AI.
Some tips that may work:
As I said on a few occasions, a beard, a mustache and hair are mostly associated with humans ( or in this particular case with men ). It's also something that focuses on the face region, so portraits are more likely to happen. Putting a few words into the negative prompt that may have a connection to something else can help, even words like beard and mustache themselves.
It might also help to bring in words that are associated with the subject, like:
Dragon - Fire, Fantasy, Wings, Horns etc.
Cat/Dog - That is usually no problem. Common animals are well known to pretty much every model
Dolphin - Water. Also, it's an animal that isn't known to have hair, which makes it kinda difficult.
Vegetable - Garden, Food, Kitchen
Objects - Well, that depends i guess
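The association idea above can be sketched as a small lookup that appends helper words to the prompt. This is just an illustration; the word lists and the build_prompt helper are made up, and "full beard" stands in for whatever trigger word your LoRA actually uses.

```python
# Hypothetical sketch: append "helper" association words to a prompt so a
# subject that rarely has hair or beards is easier for the model to form.
# The word lists mirror the examples above; tune them per model.

ASSOCIATIONS = {
    "dragon": ["fire", "fantasy", "wings", "horns"],
    "dolphin": ["water"],
    "vegetable": ["garden", "food", "kitchen"],
}

def build_prompt(subject: str, trigger: str = "full beard") -> str:
    """Combine the trigger word, the subject and its helper words."""
    helpers = ASSOCIATIONS.get(subject.lower(), [])
    return ", ".join([trigger, subject] + helpers)

print(build_prompt("dragon"))
# full beard, dragon, fire, fantasy, wings, horns
```

Subjects not in the table ( like cat or dog ) just pass through without helpers, since common animals usually don't need them.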
The model used, the steps, CFG, strength of the LoRA and the weights of the words all have quite an impact. The more steps, the more time the AI has to "think". If you set your preview window to show at least a few steps, you can see what is happening. A lower CFG scale gives the model more freedom interpreting the prompt ( as far as I know ), so lower CFG can help. You can use a word like "fire" in the positive prompt if you want to help the AI generate a dragon; if you don't actually wanna see fire in the picture, lower its weight, even into the negative ( Automatic1111: (fire:-0.5) ). It may be negative, but it still helps to create a dragon while avoiding fire.
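To make the "(word:weight)" idea concrete, here is a very simplified sketch of how that syntax can be read into (token, weight) pairs. This is not Automatic1111's actual parser ( which also handles nesting, "[word]" de-emphasis and more ), just an illustration of the format.

```python
import re

# Simplified reader for Automatic1111-style "(word:weight)" tokens.
# Plain comma-separated words get the default weight 1.0.
WEIGHTED = re.compile(r"\(([^:()]+):(-?\d+(?:\.\d+)?)\)")

def parse_prompt(prompt: str) -> list[tuple[str, float]]:
    tokens = []
    for part in prompt.split(","):
        part = part.strip()
        m = WEIGHTED.fullmatch(part)
        if m:
            tokens.append((m.group(1), float(m.group(2))))
        elif part:
            tokens.append((part, 1.0))
    return tokens

print(parse_prompt("dragon, (fire:-0.5), fantasy"))
# [('dragon', 1.0), ('fire', -0.5), ('fantasy', 1.0)]
```

So "(fire:-0.5)" keeps fire in the prompt but pushes its influence below zero, which is the trick described above.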
Using plugins ( Automatic1111 ) will most certainly get you much better results, and faster, but I try to avoid using them. ControlNet IP-Adapter is pretty much a golden egg.
Some of that stuff is just common knowledge, I think, and it's not something restricted to this LoRA.
I could go on and on about it. Most people just wanna use something and want it to work from the get-go, which I think is both a good and a bad thing. Casual Andy will be happy, but I personally need a bit of a challenge, which most of the time leads to pictures that are anything but generic... maybe not super sharp, high resolution, over-the-top overdrawn stuff, but interesting and random.
More pics and descriptions will follow after testing. I need some more time to do crazy stuff with it; for now only the basics.
New approach for me on how to train a model and how to prepare a dataset ( very time-consuming and a lot of Photoshop )
Concept is known to a lot of models and will just change the outcome for better or worse
Makes beards fuller and fur more fluffy
At higher strength it tends to humanize everything ( beard and mustache are associated with male humans, so it's tricky to put them on something else )
Depends on the model used ( I always go through like 20 checkpoints with the LoRA on/off )
Putting things like human / man into the negative prompt helps if you have trouble creating something that isn't a human ( animal, vegetable, object ). Also try lowering the LoRA strength to around 0.5 - 0.6.
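That tip could be wrapped up in a tiny helper that decides the negative prompt and LoRA strength based on whether the subject is human. Everything here ( the word list, the 0.55 default, the generation_settings name ) is made up for illustration; the dict is the kind of thing you would feed into whatever UI or pipeline you use.

```python
# Hypothetical helper applying the tip above: push "human" / "man" into the
# negative prompt and drop the LoRA strength when the subject isn't human.
NON_HUMAN_NEGATIVES = ["human", "man", "portrait"]

def generation_settings(subject: str, human: bool) -> dict:
    negatives = [] if human else list(NON_HUMAN_NEGATIVES)
    return {
        "prompt": subject,
        "negative_prompt": ", ".join(negatives),
        "lora_strength": 1.0 if human else 0.55,  # roughly the 0.5 - 0.6 range
    }

print(generation_settings("bearded cactus", human=False))
# {'prompt': 'bearded cactus', 'negative_prompt': 'human, man, portrait', 'lora_strength': 0.55}
```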
Will probably try to add wild and messy hair sometime and let you control what you wanna change ( only beard, beard+mustache, only mustache, only hair etc. )