NOTE: this TI is old and not very good, but it has served its purpose, as there are now some good quality models out there for Alanah. Check out the LoCon by malcomrey for a better model.
I trained a Textual Inversion embedding on one of my favorite YouTubers turned game designers, Alanah Pearce, and after some tinkering I was able to get some decent gens out of it, so it's time to share.
I seem to get the best results by using the keyword as ((alanah_pearce:0.89)), at least with fitCorderMixV2.1 as my checkpoint. (NOTE: ((alanah_pearce_face:0.89)) also works and gives slightly different results, so try it out.) Both are pretty touchy and tend to either overburn or look nothing like her. YMMV with other models; GalaxyTimeMachineV3 also produced some decent results.
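As a rough starting point, a prompt using the keyword might look something like the line below. The surrounding tags are just illustrative filler, not a tested recipe, and the sampler, steps, and CFG are whatever you normally use:

portrait photo of ((alanah_pearce:0.89)), looking at camera, soft lighting, detailed face

with fitCorderMixV2.1 (or GalaxyTimeMachineV3) loaded as the checkpoint.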
The negative embedding is optional; it may help with some models and hinder you with others. Try it with and without, and adjust the strength of the negative embed against the strength of the positive embedding until something comes together decently, as in the example below.
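Purely as an illustration (in A1111 an embedding's keyword is its filename in the embeddings folder, so swap in whatever you named the negative embedding), a starting balance might be:

Positive prompt: ((alanah_pearce:0.89)), ...
Negative prompt: (negative-embedding-keyword:0.8), ...

then nudge the two weights against each other until the likeness comes through without overburning.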
This was trained on a set of 32 images pulled from public sources (Reddit, Instagram, YouTube) using A1111's "Train" tab for embeddings. I unfortunately don't recall the specific settings used, but there's definitely still room for improvement on this one.