This is my first attempt at creating an embedding (long live Textual Inversion!) for Stable Diffusion, and the result of a steep learning curve over the last month or so.
It works with "vanilla" Stable Diffusion 2.1 768, but also with a multitude of other 2.1 models. It does not affect animals (much), only people, and can produce quite extreme results in many cases.
***
Therefore, I have included three versions of this embedding:
d3caricature-beta-x - a 1-vector, evenly distributed version (lowest impact)
d3caricature-beta-y - a 6-vector, "half weight", evenly distributed version (medium impact)
d3caricature-beta-z - a 6-vector, full-strength, evenly distributed version (high impact)
Experiment by using them at the beginning of the prompt, in the middle, or at the end, with higher or lower weights, or higher/lower CFG.
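As a concrete starting point, the placement and weighting experiments might look like this in A1111 prompt syntax (the surrounding prompt text and weight values are purely illustrative):

```
d3caricature-beta-z, portrait of an old fisherman, detailed, sharp focus
portrait of an old fisherman, d3caricature-beta-y, detailed, sharp focus
portrait of an old fisherman, detailed, sharp focus, (d3caricature-beta-x:1.3)
```

The `(token:1.3)` form increases the embedding's weight; values below 1.0 reduce it.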
They have been thoroughly tested with stable-diffusion-2.1-768, but I also highly recommend trying them with other 2.1 models such as perpetualDiffusion10_v10Moon (and sun) and illuminatiDiffusionV1_v11.
***
All versions of the embedding are based on the same training set of 68 hand-picked and fine-tuned images, trained on the following schedule:
500 steps @ lr 0.375-0.5 (linear with warmup)
1000 steps @ lr 0.075-0.1 (linear with warmup)
1000 steps @ lr 0.015-0.02 (linear with warmup)
2000 steps @ lr 0.003-0.004 (linear)
2500 steps @ lr 0.0006-0.0008 (linear)
2500 steps @ lr 0.00016
All training was done with 4 gradient_accumulation_steps in InvokeAI 2.3.5; the resulting embeddings were then combined in A1111 using the Embedding Inspector extension.
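For anyone who wants to reproduce or tweak the schedule, here is a rough sketch of the staged learning-rate plan as a single function. It assumes each phase interpolates linearly between its listed lower and upper rate; InvokeAI's actual "linear with warmup" scheduler may shape the curve differently, so treat this as an approximation rather than the exact implementation:

```python
# Staged learning-rate plan from the training notes above.
# Each entry: (steps in phase, starting LR, ending LR).
# ASSUMPTION: each phase ramps linearly from its lower to its upper
# value; the real InvokeAI scheduler may differ in detail.
PHASES = [
    (500,  0.375,   0.5),      # linear with warmup
    (1000, 0.075,   0.1),      # linear with warmup
    (1000, 0.015,   0.02),     # linear with warmup
    (2000, 0.003,   0.004),    # linear
    (2500, 0.0006,  0.0008),   # linear
    (2500, 0.00016, 0.00016),  # constant
]

def lr_at(step: int) -> float:
    """Approximate learning rate at a given global training step."""
    start = 0
    for length, lo, hi in PHASES:
        if step < start + length:
            frac = (step - start) / length  # progress within this phase
            return lo + frac * (hi - lo)
        start += length
    return PHASES[-1][2]  # past the last phase: hold the final rate

total_steps = sum(length for length, _, _ in PHASES)  # 9500 steps in total
```

Plotting `lr_at` over `range(total_steps)` gives a quick visual check that the phases line up with the list above.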
***
I really do hope that you find this embedding useful and that you have loads of fun with it! This embedding is in a beta state, and I have no idea if it will ever reach a "release candidate" state, nor do I necessarily expect it to.
It was created simply because I had an idea that I could not let go of and just had to realize ;)