B-LoRA sliced twice: Content (composition) and Style. I was bored AF and cooked up a ComfyUI workflow that has proven extremely efficient and flexible; I think there is no better way to do style transfer. The chain: separate LoRA loads for composition and style, sampling, VAE decoding of the latents, then straight into IPAdapter Style & Composition SDXL, a second sampling pass, and (only if you're a megalomaniac like me) a third LoRA applied on top, trained separately with the Dreambooth method (trigger + class). I will share the ComfyUI workflow; make sure you know how to rename your CLIP Vision and IPAdapter models, because I use the Unified Loader for IPAdapter.
All captions used for training + ComfyUI workflows here:
https://drive.google.com/drive/folders/1FFS4CnX3RI4B1yhzwqrlEwLd_QGpuYCZ?usp=sharing
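For anyone who prefers code to node graphs, here is a minimal diffusers sketch of the first stage only: loading the two B-LoRA halves as separate, independently weighted adapters, the same idea as the two LoRA loaders in the ComfyUI graph. This is not my workflow itself, it skips the IPAdapter stage entirely, and the file names, weights, and prompt are all placeholders.

```python
# Minimal sketch: two B-LoRA halves (content + style) loaded as separate
# adapters in diffusers, weighted independently. Paths and weights are
# hypothetical; swap in your own trained B-LoRA files.

import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical file names: the B-LoRA split into its content (composition)
# half and its style half.
pipe.load_lora_weights("blora_content.safetensors", adapter_name="content")
pipe.load_lora_weights("blora_style.safetensors", adapter_name="style")

# Weight each half on its own, like two separate LoRA loaders in ComfyUI.
pipe.set_adapters(["content", "style"], adapter_weights=[1.0, 0.8])

# Sample and decode; in the full workflow this image would then feed into
# IPAdapter Style & Composition SDXL for a second sampling pass.
image = pipe(
    prompt="a portrait photo",  # placeholder prompt
    num_inference_steps=30,
).images[0]
image.save("styled.png")
```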