V2 notes: Added 10k more images to the base dataset and trained for 100 epochs. The trigger keywords are "GTv2" and "goldentrig". You'll see the difference in the comparison images I posted for the model; I just added the keyword at the beginning of the prompt and boom, better quality :)
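The "keyword at the beginning of the prompt" trick is just string prepending. Here's a minimal sketch; the helper name and the comma separator are my own choices, not anything the model requires:

```python
# Hypothetical helper: prepend the v2 trigger keyword to an existing prompt.
# The ", " separator is an assumption; the point is just that the trigger
# word goes at the very start of the prompt.
def add_trigger(prompt: str, keyword: str = "GTv2") -> str:
    return f"{keyword}, {prompt}"

print(add_trigger("portrait of a knight, dramatic lighting"))
# GTv2, portrait of a knight, dramatic lighting
```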
Training: 250 hours on an RTX 3060. Used OneTrainer with DAdapt Lion this time :).
Recommended params: same as v1 (see below).
Here's my first SD1.5 model: GoldenTrig. The inspiration for this model comes from a prompting strategy I developed for SDXL and my mix HouseOfFreyja. You'll notice most of my prompts include the following tags: "(quality{[intricate_insane_exponential_details_8k_resolution_volumetric_lighting_depth_of_field_FullFrame_36x24mm_Nikon_D850_sensor_grain_texture]}),(composition{[.707_trigonometric_discretization_scalar*1.618033988749895→lighting_angle_quantization_color_reflection_modeling_aesthetic_spacing_principles*scene_structure]}),".

So what I decided to do was find a lot of prompts. I mean a lot: https://huggingface.co/datasets/daspartho/stable-diffusion-prompts. I unpacked the parquet file and got 3.2 GB of text worth of prompts, roughly 1.8 million. Then I appended those tags to every single one of those prompts.

The base mix of this model is (INSERT MIX LIST HERE LATER DUE TO ADHD). This model was trained on the first 10k images generated from those prompts. I used https://github.com/Nerogar/OneTrainer to train it, with Prodigy as the optimizer and REX as the LR scheduler. Additionally used xformers.
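The tag-appending step above boils down to one pass over the prompt list. This is a sketch under my own assumptions: the parquet unpacking itself (e.g. pandas' `read_parquet`) is omitted, prompts are treated as a plain list of strings, and the `tag_prompts` name and the ", " joiner are illustrative:

```python
# The quality/composition tag block from the prompting strategy, verbatim.
QUALITY_TAGS = (
    "(quality{[intricate_insane_exponential_details_8k_resolution_"
    "volumetric_lighting_depth_of_field_FullFrame_36x24mm_Nikon_D850_"
    "sensor_grain_texture]}),"
    "(composition{[.707_trigonometric_discretization_scalar*1.618033988749895"
    "→lighting_angle_quantization_color_reflection_modeling_aesthetic_"
    "spacing_principles*scene_structure]}),"
)

def tag_prompts(prompts):
    # Append the tag block to every prompt in the corpus.
    return [f"{p}, {QUALITY_TAGS}" for p in prompts]
```

For the real 1.8M-prompt dataset you'd stream this over the unpacked text rather than hold everything in one list, but the transformation per prompt is the same.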
Recommended params:
Samplers: Euler a, DPM++ SDE, DPM++ 3M SDE, Restart
Steps: 30-50 (66 for Restart)
CFG: 5-9 (higher CFG = more steps)
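If you want a quick rule of thumb for the "higher CFG = more steps" advice above, here's one way to read it. The linear CFG-to-steps mapping is my own illustration of the recommended ranges, not an exact formula:

```python
# Rough rule of thumb for the recommended params above. The linear
# CFG -> steps mapping is illustrative, not a formula from testing.
def recommended_steps(cfg: float, sampler: str = "Euler a") -> int:
    if sampler == "Restart":
        return 66                      # Restart wants more steps
    cfg = min(max(cfg, 5.0), 9.0)      # clamp to the recommended CFG range
    # CFG 5 -> 30 steps, CFG 9 -> 50 steps, linear in between
    return round(30 + (cfg - 5.0) * 5)

print(recommended_steps(7))             # 40
print(recommended_steps(5, "Restart"))  # 66
```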
Happy generating!!