analog photo, (space fly, open space, the spacer, deep space, star field, milky way, shooting stars, black hole, astro suit, the crew, starship, dynamic space fly:1.4), colorful polaroid with vibrant colors, dramatic, high contrast, high saturation, hyperpunk scene with purple and yellow out of focus details, vintage, faded film, film grain, (daft punk iridescent helmet:1.2)
(man, grizzled soldier, smoking a cigarette, crouched, serious face, black hair, leaning against building), background (world war future battlefield)
masterpiece, best quality, portrait of a 18yo woman , (underground alchemy laboratory),  color photo, cinematic, cinematic lighting, (whimsical witch, (small witch hat)), (potions, alchemy beakers, lab cauldron), rainbow colors, anime, gorgeous 18-year-old woman, perfect eyes, graceful, landscape shot,  upper body, looking at viewer, standing, happy, enthusiastic

Model Description by Creator

This model is highly experimental, but it's still probably my favorite of my models. Get ready to crank up that CFG, steps, and noise, because this model eats noise for lunch when sampling with Karras 2M/3M SDE. Even glitches and oddities from bad LoRAs seem to resolve into interesting results, as long as the model can follow your prompt.

For v1.1, I DARE-TIES merged 3D Animation Diffusion, Bionic Apocalypse, ComicMix, Fazz, RadioIllustrated, Poltergeist Mix, DarkClip, Inkpunk Diffusion, and DPO into v1.0 at an 80% drop rate. The catch: I'd been thinking about CALM's claim that Q alignment is what drives model intercompatibility, so I scaled all Q weights by the corresponding layer norm of a base model (fp32 SD 1.5 pruned, for my initial experiment), then scaled K by the inverse to balance the equation. I think it made things a lot cleaner, but it's just a theory.
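The "balance the equation" part can be sketched in a few lines. Everything below is illustrative, not the actual merge script: the shapes and the `gain` vector standing in for a base model's LayerNorm gain are assumptions. The point is that scaling Q by some per-feature factor and K by its inverse leaves the attention logits unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W_q = rng.normal(size=(d, d))          # stand-in Q projection weights
W_k = rng.normal(size=(d, d))          # stand-in K projection weights
gain = rng.uniform(0.5, 2.0, size=d)   # stand-in for a base model's LayerNorm gain

# Scale each Q output feature by the gain and each K output feature by the
# inverse; the factors cancel in Q K^T, so attention logits are preserved
# while Q is pulled toward the base model's scale.
W_q_scaled = gain[:, None] * W_q
W_k_scaled = (1.0 / gain)[:, None] * W_k

x = rng.normal(size=(3, d))
logits = (x @ W_q.T) @ (x @ W_k.T).T
logits_scaled = (x @ W_q_scaled.T) @ (x @ W_k_scaled.T).T
# np.allclose(logits, logits_scaled) holds: the rescaling is logit-preserving
```

Whether rescaled Q weights actually merge more cleanly across models is the creator's conjecture; the invariance above only shows the trick is safe for a single model's attention.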

For v1.0, I merged two models:

My nightmare model - the base SD 1.5 plus a pool of 57 of the best and most varied models from my collection. From this pool I ran 4000 passes, each merging a random model from the pool into the base, sampling 5% of parameters with DARE-TIES per pass at a 90% merge ratio. This model is really creative and ultra-stable, but the outputs are muddy (since the out blocks govern visual style, and that's where the model categories differed most).
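A single pass of that pool-merge loop might look like the sketch below. The function name and the 50% DARE drop rate are assumptions; the description only fixes the 5% parameter sample and the 90% merge ratio.

```python
import numpy as np

def pool_merge_pass(base, donor, sample_frac=0.05, drop_rate=0.5,
                    merge_ratio=0.90, rng=None):
    """One hypothetical pass: touch ~5% of parameters, DARE-drop part of
    the donor's delta, rescale the survivors, blend in at 90% strength."""
    rng = rng or np.random.default_rng()
    delta = donor - base
    sampled = rng.random(base.shape) < sample_frac   # the 5% parameter sample
    kept = rng.random(base.shape) >= drop_rate       # DARE random drop
    rescaled = delta / (1.0 - drop_rate)             # DARE rescale of survivors
    return np.where(sampled & kept, base + merge_ratio * rescaled, base)
```

Repeating something like this ~4000 times, each time with a donor drawn at random from the pool, gives the described procedure.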

My cybernetic model - an early handcrafted (and crappy) merge that I wanted for making 2D sci-fi. I weighted this one more towards the out blocks; I believe the ratio was [.8, .5, .2].
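Read as per-block interpolation weights, that ratio might be applied like this. The mapping of [.8, .5, .2] onto input/middle/output block groups, the direction of the interpolation, and the state-dict key names are all assumptions:

```python
import numpy as np

# Assumed mapping of the [.8, .5, .2] ratio onto SD 1.5 UNet block groups.
BLOCK_WEIGHTS = {"in": 0.8, "mid": 0.5, "out": 0.2}

def block_weighted_merge(a, b, weights=BLOCK_WEIGHTS):
    """Interpolate each parameter of state dict `a` toward `b`, with the
    ratio chosen by which UNet block group the parameter belongs to."""
    merged = {}
    for name, w in a.items():
        if "input_blocks" in name:
            t = weights["in"]
        elif "middle_block" in name:
            t = weights["mid"]
        else:
            t = weights["out"]
        merged[name] = (1.0 - t) * w + t * b[name]
    return merged
```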

A custom-built CLIP - starting from the base v1.5 CLIP, I DARE-TIES merged the CLIPs from four models with a variety of domain knowledge and training. I hadn't heard of anyone doing this before, but it worked really well; as it should, since the base SD 1.5 was trained against a fixed CLIP.
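The CLIP merge is multi-donor, which is where the TIES half of DARE-TIES matters: donors vote on the sign of each parameter delta before averaging. Below is a hedged sketch of that sign election; the function name and 80% drop rate are assumptions, not the creator's settings.

```python
import numpy as np

def dare_ties_merge(base, donors, drop_rate=0.8, rng=None):
    """Hypothetical multi-donor DARE-TIES: drop-and-rescale each donor's
    delta, elect the dominant sign per parameter, then average only the
    deltas that agree with that sign."""
    rng = rng or np.random.default_rng(0)
    deltas = []
    for donor in donors:
        delta = donor - base
        kept = rng.random(base.shape) >= drop_rate     # DARE random drop
        deltas.append(np.where(kept, delta / (1.0 - drop_rate), 0.0))
    deltas = np.stack(deltas)
    elected = np.sign(deltas.sum(axis=0))              # TIES sign election
    agree = np.sign(deltas) == elected
    counts = np.maximum(agree.sum(axis=0), 1)          # avoid divide-by-zero
    return base + np.where(agree, deltas, 0.0).sum(axis=0) / counts
```

The sign election is what keeps four disagreeing text encoders from averaging each other out, which is plausibly why the merged CLIP stayed usable.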

The result? I don't know what to say except that I'm a bit blown away. Try it; I'm curious to know its limits. As always, hearts / 5-stars are a great way to show your appreciation and return my focus to a model, and any images posted will be used to DPO my models in the future.

Disclaimer: No CSAM. This model is not intended to create such images and cannot be used to create them.

