
Greetings! YiffAI (YAI) is finetuned from the base SD 2.x 768-v model using the finetune variant of the kohya repo, housing all sorts of vectors for creating a myriad of (anthro) furs! This model has been tuned for over 400 hours in total on a lone 3090 to produce quite dazzling results in a wide range of styles, at least those that SD 2.x can manage on its own.
1/26/23: Training has now shifted over to SD 2.0 768-v as the base model, starting with YAI 2.3.22. Unlike the SD 2.1 768-v based models, this model does not explicitly require xformers, but everything else below still applies. If you wish to use the SD 2.1-based models, make sure you read up on the xformers requirement further down in this description!
Generations are recommended at 768x768, or a few steps up or down. 512x512 and below doesn't give results nearly as good, as with most models based on the 768-v series.
Extra information, prompts, guides, and more are available on the Discord server this model originates from: the Furry Diffusion Discord.
Importantly, this model does not use artist tags beyond those naturally available in SD 2.x itself. No artists have been added in. Instead, use actual, bona fide style terms to great effect, such as watercolor, countershade, rim lighting, or even kemono! Of course, it does also know a fair amount of more... particularly furry topics for image ideas. It wouldn't bear the mark otherwise.
Assorted Notes:
You will require either the latest version of Automatic1111's WebUI or a similarly capable interface to use this model. If you have run Auto1111's UI before, you will almost assuredly need to delete the /venv folder in the installation directory. It will be recreated on your next launch; this is needed so that all your dependencies are updated to load an SD 2.x model.
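The venv reset above can be done from the command line. A minimal sketch, assuming a default Auto1111 install layout:

```shell
# Run from the root of your stable-diffusion-webui installation.
# (Windows cmd equivalent: rmdir /s /q venv)
rm -rf venv
# On the next launch of webui.sh / webui-user.bat, the venv is
# rebuilt with the updated dependencies SD 2.x models need.
```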
You will require xformers to run the 2.1-based models, or you must use the --no-half command-line argument (or a similar one) in your batch file to run at full precision instead. If you use neither xformers nor --no-half, your images will come out all black! Heed this warning. If you are willing to reinstall, or need to install an interface, https://www.reddit.com/r/StableDiffusion/comments/zpansd/automatic1111s_stable_diffusion_webui_easy/ makes this very easy to do.
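For Auto1111 specifically, either option is set through COMMANDLINE_ARGS in the launcher script (webui-user.bat on Windows, webui-user.sh elsewhere). A sketch; pick one line, not both:

```shell
# Option A: enable xformers (required for the 2.1-based models):
export COMMANDLINE_ARGS="--xformers"
# Option B: run at full precision instead (slower, uses more VRAM):
# export COMMANDLINE_ARGS="--no-half"
```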
You must keep the included YAML file(s) in the same folder that you extract the model to, or wherever the model resides. For Auto1111, this is usually the "/models/Stable-diffusion" folder. The YAML must also have the same filename as the model!
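To illustrate the naming rule, here is a tiny Python sketch (the helper name and example filename are hypothetical) showing the YAML path the WebUI expects next to a given model file:

```python
from pathlib import Path


def matching_yaml(model_path: str) -> str:
    """Return the config path expected alongside a model:
    same folder, same filename, .yaml extension."""
    return str(Path(model_path).with_suffix(".yaml"))


# Hypothetical model filename for illustration:
print(matching_yaml("models/Stable-diffusion/yai-2.3.22.safetensors"))
```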
This model, like SD 2.x itself, has been trained with clip skip! Your results can vary greatly between clip skip and no clip skip, so if you do not wish to use clip skip, or want a taste of the crazier side, rename your model's filename to end in "noskip" so that it matches the other YAML file. Otherwise, do not touch this file's naming scheme! Addendum: multiple YAMLs cannot be uploaded at present, so the only available config is the one with clip skip.
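If the no-clip-skip YAML does become available, the rename might look like this (filenames are hypothetical; the touch line just stands in for your real model file):

```shell
# Stand-in for your actual downloaded model file:
touch "yai-2.3.22.safetensors"
# Rename so "noskip" sits at the end of the name,
# matching the no-clip-skip YAML's filename.
mv "yai-2.3.22.safetensors" "yai-2.3.22-noskip.safetensors"
```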


