Introduction:
Recently I created some works using LoHa, but I realized that many people may not be familiar with the correct way to use it. I have therefore trained a LoRA model on the same training set.
Keep "chibi" in the positive prompt.
If needed, you can also try the EasyNegative embedding in the negative prompt to handle common problem cases.
You may want to raise or lower the LoRA weight to taste; I lowered the LoRA weight during training to make the model easier to use.
Suggestions are very welcome, and I hope you will share your feedback and creations in the comments section. Thank you very much!
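The notes above assume a WebUI-style workflow. For anyone who prefers a scripted workflow, here is a minimal sketch using the diffusers library; the base checkpoint, the file names, the extra prompt tags, and the 0.7 weight are placeholder assumptions of mine, not values specified above.

```python
# Minimal sketch, assuming an SD 1.5 base model and placeholder file names
# for the LoRA and the EasyNegative embedding.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed base model; use your usual checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Load the LoRA file downloaded from this page (hypothetical filename).
pipe.load_lora_weights("./loras", weight_name="chibi_v3.safetensors")

# Optional: load the EasyNegative embedding and reference it in the negative prompt.
pipe.load_textual_inversion("./embeddings/EasyNegative.safetensors", token="EasyNegative")

image = pipe(
    prompt="chibi, 1girl, solo, smile",                 # keep "chibi" in the positive prompt
    negative_prompt="EasyNegative, lowres, bad anatomy",
    cross_attention_kwargs={"scale": 0.7},              # raise or lower the LoRA weight here
    num_inference_steps=28,
).images[0]
image.save("chibi_sample.png")
```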
Differences between v1, v2, and v3:
Training set: Different versions may have been trained on different training sets. The size, quality, and diversity of the training set have a large impact on the model's performance, so differences here can lead to performance differences between versions.
Hyperparameter tuning: Hyperparameters are settings chosen manually during training, such as the learning rate, batch size, and number of layers. Different versions may use different hyperparameter choices to optimize performance (see the illustrative example after this section).
Plugin usage: Different versions may have been trained with different plugins or extensions that extend the model's capabilities.
In summary, just use v3.
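For context on the hyperparameter point above, these are the kinds of settings that typically differ between LoRA training runs. The values below are purely illustrative; the actual settings used for v1, v2, and v3 are not published here.

```python
# Purely illustrative example of hyperparameters that commonly differ between
# LoRA training runs; these are NOT the actual settings used for v1/v2/v3.
hyperparameters_example = {
    "learning_rate": 1e-4,    # optimizer step size
    "train_batch_size": 2,    # images processed per training step
    "network_dim": 32,        # LoRA rank, i.e. the adapter's capacity
    "network_alpha": 16,      # scaling factor applied to the LoRA weights
    "max_train_epochs": 10,   # passes over the training set
    "resolution": 512,        # training image resolution
}
```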