Two LoRAs of American actress Carrie Fisher, the first of which is exclusively trained on real images of her dressed in the iconic 'metal bikini', as Jabba's slave in Return of the Jedi (1983). The second is trained on the same images, but also with an additional 90 'synthetic' images (see details below).
For obvious reasons, these models are practically impossible to illustrate here. They do not feature any bathrobes, in spite of the accompanying images for this entry.
Now that Civit has a chat function, I would be most interested to hear about posts or videos etc. where this model is being used. I do not care about credit.
Please do not post 'default' renders from these models, which will feature the actress Carrie Fisher wearing the costume.
And do not post renders that have used these LoRAs to dress other actors or real people in the costume. This is against Civit's TOS.
But...
I am pleased to see that many people use these models to add the Slave Leia costume to other characters (i.e., without rendering Carrie Fisher or any other real person), often in anime and other styles of fictitious people.
Any such renders CAN be posted at Civit. Commenting on this Slave Leia LoRA, official Eurotaku said: 'leia's iconic slave outfit is not a violation of our tos unless it's used on a real person or a minor.'
As of Friday, January 26, 2024, there are now two versions of this LoRA: SlaveLeia LoRA V1.0 and the new SlaveLeia LoRA V2 SYNTH.
IMPORTANT: Version 2 is not necessarily a direct 'upgrade' from version 1. Rather it is an 'alternative' version, and you may obtain the best results by using both for different parts of a workflow.
DATA: Version 1
Version 1 was trained entirely on 90 real images of Carrie Fisher dressed in the 'slave' costume: production stills, frame captures, behind-the-scenes photos, etc.
REAL DATA IS ALWAYS BETTER!
However, real data is also limited to the poses and content that it contains, which can sometimes constrain what the model is able to do.
DATA: Version 2
Version 2 was trained on the same (real) 90 images as version 1, but also with an additional 90 synthetic or semi-synthetic images.
These synthetic images used the V1 LoRA either to alter or convert existing photos (such as real cosplay photos, fan artwork and/or CGI renders of a DAZ Slave Leia model) to resemble Carrie Fisher as she appeared in Return of the Jedi.
This means that V2 can often do more, in terms of posing and obeying prompts, than V1.
However, it also means that it can be subject to the Photocopier effect that can occur when you train a model on output from another model.
Both versions use the multi-resolution training-and-merge technique outlined in this Reddit post.
Version 1 was trained twice, on 90 512x512 images, and the same 90 images resized to 512x768 (see above Reddit link for explanation).
Version 2 was trained three times, on 180 512x512 images, the same 180 images resized to 512x768, and the same 180 images resized to 768x512.
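The resized image sets described above can be prepared with a short script. This is a minimal sketch, assuming Pillow is available; the folder layout, file glob, and `make_resolution_variants` helper name are illustrative, not the author's actual pipeline (note that direct resizing changes aspect ratio, which may or may not match the original preprocessing):

```python
from pathlib import Path

from PIL import Image

# Hypothetical helper: create the resized dataset copies used for
# multi-resolution training (square, portrait, and - for V2 - landscape).
def make_resolution_variants(src_dir, out_root,
                             sizes=((512, 512), (512, 768), (768, 512))):
    created = []
    for w, h in sizes:
        out_dir = Path(out_root) / f"{w}x{h}"
        out_dir.mkdir(parents=True, exist_ok=True)
        for img_path in sorted(Path(src_dir).glob("*.png")):
            img = Image.open(img_path).convert("RGB")
            # LANCZOS gives good quality for downscaling photos.
            img.resize((w, h), Image.LANCZOS).save(out_dir / img_path.name)
            created.append(out_dir / img_path.name)
    return created
```

Each resulting folder would then be used as the dataset for one training run, per the Reddit post.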
This means...
Version 1 requires a LoRA strength of 0.4.
Version 2 requires a LoRA strength of 0.1.
Things will go very wrong if you deviate more than half a point or so from these recommended strengths.
You may find that text-to-image generations using the more versatile V2 model can do more things, and it can do many of them better. When inpainting faces, it can sometimes produce a superior result to V1. But remember that V1 was trained only on real data, while V2 includes many synthetic faces that may not be entirely 'realistic', and this can sometimes show in the output.
Therefore I recommend experimenting with both these models.
I also recommend using these LoRAs with the default SD V1.5 pruned checkpoint.
The V2 model was trained on double the data, but at very similar settings to the V1 model. Therefore it may produce worse text-to-image results for any one CFG value, compared to V1.
If you adjust the CFG upwards or downwards for V2, you may see a noticeable improvement, depending on what you're trying to achieve.
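For readers unfamiliar with why the CFG value matters here: classifier-free guidance blends the model's unconditional and prompt-conditioned noise predictions, and the scale controls how hard the result is pushed toward the prompt. A minimal sketch of the standard formula, using plain floats in place of real noise tensors:

```python
def cfg_combine(uncond, cond, scale):
    """Classifier-free guidance: uncond + scale * (cond - uncond).
    scale=1.0 reproduces the conditional prediction exactly; higher
    values push the output further toward the prompt (at the risk of
    artifacts), lower values pull it back toward the unconditional."""
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]
```

This is why nudging CFG up or down for V2 can compensate for the model being trained on double the data at similar settings.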
The purple front-skirt can usually be removed in the V2 model only by adding the positive prompt nopelviccurtain. This token does not remove the lower gold-plated bikini, but version 2 can remove that via standard prompts.
This LoRA uses a technique first shared on Reddit in late September 2023 by the user shootthesound. Please see the above link for details, but the long and short of it is that you create two versions of the same training data (one square and one portrait, e.g., 512x512 and 512x768) and train a LoRA for each of them.
You then pick the best trained checkpoint from each and merge them in Kohya at 100% strength each. See the original post for comments from a machine learning expert as to why this massively improves the quality of the LoRA, but suffice to say that the merged LoRA now has the best of both worlds.
The fact that they are merged at 100% each is why you need to use LoRAs made with this technique at around 0.4 (V1) or 0.1 (V2) strength, because technically the combined contributing LoRAs represent a 200% strength.
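The arithmetic behind this can be sketched with toy numbers. Real LoRAs are low-rank tensor pairs and the actual merge happens in Kohya, so the plain floats and function names below are purely illustrative, but they show why a 100%+100% merge roughly doubles the weight delta and therefore needs a lower application strength:

```python
def merge_loras(delta_a, delta_b, weight_a=1.0, weight_b=1.0):
    """Toy merge of two LoRA weight deltas (dicts of floats standing
    in for low-rank tensors), each at the given merge weight."""
    return {k: weight_a * delta_a[k] + weight_b * delta_b[k] for k in delta_a}

def apply_lora(base, delta, strength):
    """Apply a (possibly merged) delta to base weights at a strength."""
    return {k: base[k] + strength * delta[k] for k in base}

# Two LoRAs trained on the same subject produce similar deltas;
# merging both at 100% roughly doubles the combined delta, so the
# merged LoRA is applied at a much lower strength than usual.
merged = merge_loras({"w": 0.5}, {"w": 0.5})  # combined delta: 1.0
```

Applying `merged` at strength 0.4 then contributes what a single unmerged LoRA would at roughly 0.8.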
I generally use these models in complex workflows, inpainting faces after initial T2Is and using ControlNet extensively. So if you're hoping for one-click prompt magic, be aware that my models weren't data-curated with that in mind; they are intended as tools for traditional workflows that use Photoshop and other older methods.
No fake data or nude data was used in training the version 1 model, so in theory it will not output any by-the-book nudity. However, it is certain to reproduce the subject wearing the arguably NSFW 'metal bikini' costume from Return of the Jedi.
The added synthetic data for the version 2 model means it is more likely to produce generally NSFW content.