This is a Latent Consistency model (kind of*). Read the instructions below.
This model uses the LCM sampler, which is available in AUTOMATIC1111/stable-diffusion-webui through the [animatediff extension] or the [sd-webui-lcm extension].
In case LCM is new to anyone: simply put, it allows generating images in just 4-8 steps.
My model, though, works best at a CFG of 1.5-2 and 6-12 steps.
Super fast generation.
It generates 768x1024 without any issues.
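If you prefer running outside the WebUI, the same settings translate to diffusers with its LCMScheduler. A minimal sketch, assuming the checkpoint has been downloaded as a .safetensors file (the path below is a placeholder, not the actual filename):

```python
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

# Placeholder path: point this at the downloaded checkpoint file.
pipe = StableDiffusionPipeline.from_single_file(
    "./model.safetensors",
    torch_dtype=torch.float16,
)
# Swap the default scheduler for the LCM sampler.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

image = pipe(
    prompt="portrait photo of a woman, best quality, hi res, absurd res",
    negative_prompt="worst quality, low quality, low res, bad anatomy, deformed",
    num_inference_steps=8,   # 6-12 steps recommended for this model
    guidance_scale=1.5,      # 1.5-2 CFG recommended
    width=768,
    height=1024,
).images[0]
image.save("output.png")
```

Note that the A1111-style `(…:1.0)` prompt weighting from the recommendations below is not parsed natively by diffusers, so the sketch passes plain tags instead.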
For higher resolutions I have used the [Kohya Hires.fix extension] with the parameters shown in the images uploaded below.
Recommended (but not necessary) negative prompt:
(worst quality, low quality, low res:1.0), (bad anatomy, deformed, long body, tall, extra leg, extra arm, extra limb, duplicate, copy, clone, twin:1.0), (3d, render, flat art, drawing, illustration, sketch, painting, fanart, anime, comic, fine-art, video game, sculpture, blender \(software\):1.0), (monochrome, desaturated, sepia, vignette, dull colors:1.0)
Recommended suffix for the positive prompt:
(best quality, hi res, absurd res:1.0)
For realism (the model is versatile and uses a custom text encoder, a.k.a. CLIP), add:
(real, photorealism, real life, photography \(artwork\):1.0)
- The model is a hybrid that uses some of the LCM weights without them being fully trained, so using higher step counts works fine and can be beneficial if you want to experiment with the CFG rescale extension or the CADS extension (see the sketch below).
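For reference, CFG rescale has a built-in counterpart in diffusers via the `guidance_rescale` argument (CADS has no built-in equivalent there). A hedged sketch at a higher step count, reusing the same placeholder checkpoint path as above:

```python
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

# Placeholder path, as in the earlier sketch.
pipe = StableDiffusionPipeline.from_single_file("./model.safetensors", torch_dtype=torch.float16)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# Higher step count with CFG rescale to tame over-saturation at larger guidance values.
image = pipe(
    prompt="real, photorealism, real life, photography, portrait of a woman, best quality",
    negative_prompt="worst quality, low quality, low res",
    num_inference_steps=16,   # higher than the usual 6-12 is fine for this hybrid
    guidance_scale=2.0,
    guidance_rescale=0.7,     # CFG rescale strength; 0 disables it
    width=768,
    height=1024,
).images[0]
image.save("output_rescaled.png")
```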