I finally managed to train some almost decent LoRAs with my Nvidia 2070 8 GB GPU.
There are some caveats, though.
First, it was trained on 768x768 buckets so the quality is not top notch.
Second, I had to use a small dataset to train each LoRA individually and then merge them together using Supermerger in A1111.
Third, I didn't train the text encoder and no captions were used; the activation token was automatically taken from the folder name, so you'll need to use all three tokens from the datasets to get the full effect.
The tokens are: flatee, shadee, r4y
!!! You may have to fiddle a bit with the LoRA weight; if the output gets too crazy, lower the weight !!!
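For example, in A1111 you can combine the three tokens and set the LoRA weight directly in the prompt using the standard `<lora:name:weight>` syntax (the filename `my_merged_lora` is a placeholder for whatever you named the merged file):

```
flatee, shadee, r4y, 1girl, <lora:my_merged_lora:0.6>
```

If the style comes out distorted, try stepping the weight down (e.g. 0.6 → 0.4) until it looks right.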
Training LoRAs requires a lot of coffee. If you think that what I do is useful and want to help me keep doing this, you can buy me a coffee here: https://ko-fi.com/clumsy_trainer