I wanted to try out konyconi's stunning technique for creating styles, but with more variable or complex objects. It works more or less well with candy, though it could definitely be improved by inpainting each training image to create more detailed individual candies. I also tried a model with toys -- it sort of works, but here you'd really want to inpaint any generated training images, or you mostly just get toy-like blobs.
A few more observations:
If possible, find real-world art rather than using DALL-E-generated training images like in konyconi's tutorial. The reason is that DALL-E only generates proper details some of the time AND it doesn't apply objects with much artistry. For example, actual Candyland characters have more thought put into them than my training candy images -- however, it's hard to find enough real images with similar enough styles and candy to avoid confusing the LoRA training.
DALL-E as accessed through Bing applies a watermark. I find you generally want to crop all the images closer and upscale them anyway to get rid of the white space, which also takes care of the watermark.
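As a rough sketch of that prep step, here's how you might batch-crop and upscale with Pillow. The folder names, crop fraction, and 1024px target are my own assumptions, not part of konyconi's tutorial -- adjust them to whatever your images actually need.

```python
from pathlib import Path
from PIL import Image

SRC = Path("training_images_raw")      # hypothetical input folder
DST = Path("training_images_prepped")  # hypothetical output folder
DST.mkdir(exist_ok=True)

CROP_FRACTION = 0.08  # assumption: trim ~8% from every edge (enough to drop the corner watermark)
TARGET = 1024         # assumption: upscale back to 1024x1024 for training

for path in sorted(SRC.glob("*.png")):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    dx, dy = int(w * CROP_FRACTION), int(h * CROP_FRACTION)
    cropped = img.crop((dx, dy, w - dx, h - dy))           # crop in from all four edges
    upscaled = cropped.resize((TARGET, TARGET), Image.LANCZOS)
    upscaled.save(DST / path.name)
```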
Small "mistakes" will impact the whole model. I let dall-e generate a couple tanks with mint leaves rather than peppermint candies, and now mint leaves sometimes pop up in the gens.
On a given seed, this style of model will vary pretty widely in terms of what parts of the image it affects (is the whole image made of candy or just the clothes, etc.). So, cherry pick.
I used about 100 training images and set my repeats to 20 per epoch (so 5 epochs = running through each image 100 times). This gives more granular outputs to choose from. I found epochs 6-7 (120-140 passes per image) were about the sweet spot, though with some models you might have to dial down the LoRA strength.
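To make that arithmetic explicit, here's a tiny script that reproduces those numbers. Interpreting "20 repeats per epoch" as the kohya-ss-style folder repeat count (a dataset folder named something like `20_candystyle`) and assuming batch size 1 are my assumptions, not something spelled out in the tutorial.

```python
# Sanity check of the repeat/epoch arithmetic described above.
num_images = 100        # training images
repeats_per_epoch = 20  # assumption: kohya-ss folder repeat count

for epoch in range(1, 8):
    passes_per_image = epoch * repeats_per_epoch
    total_steps = num_images * passes_per_image  # cumulative steps at batch size 1
    print(f"epoch {epoch}: {passes_per_image} passes per image, {total_steps} total steps")
# epoch 5 -> 100 passes per image; epochs 6-7 -> 120-140, the "sweet spot" mentioned above
```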