The original author of this model is "@Lykon". LiblibAI is committed to promoting the sharing and exchange of original models, so we have reposted this model for non-commercial exchange and learning. Users of this model must comply with the usage license declared by the original author.

If you are the original author of this model, please contact us (email: liblibai@163.com; WeChat: Liblibaijiang). We look forward to having you join the platform and will transfer the model to your account as soon as possible. If you do not wish to share this model on LiblibAI, we will respect your wishes and take it down immediately.

LiblibAI respects every original model author and looks forward to growing together with every model creator!

Disclaimer: if this reposted model gives rise to an intellectual-property dispute or other infringement, LiblibAI will take the model down immediately and will not hold the original author liable.

AnyLoRA

Add a ❤️ to receive future updates.
Do you like what I do? Consider supporting me on Patreon 🅿️ or feel free to buy me a coffee

Remember to use the pruned version when training (less VRAM and no baked VAE).

I made this model to ensure my future LoRA training is compatible with newer models, and to get a model with a style neutral enough to reproduce accurate styles with any style LoRA. Training on this model is much more effective compared to NAI, so in the end you might want to adjust the weight or offset (I suspect that's because NAI is now much diluted in newer models). I usually find good results at 0.65 weight, which I later offset to 1.
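For instance, in the AUTOMATIC1111 WebUI the LoRA weight is set directly in the prompt; a sketch of the 0.65 weight mentioned above (the LoRA filename is a placeholder):

```text
masterpiece, 1girl, solo, <lora:your_style_lora:0.65>
```

After offsetting/rescaling the LoRA, the same result should come through at weight 1, i.e. `<lora:your_style_lora:1>`.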

This is good for inference (again, especially with styles) even though I made it mainly for training. It ended up being super good for generating pics and is now my go-to anime model. It also eats very little VRAM.

The first version I'm uploading is an fp16-pruned checkpoint with no baked VAE, which is less than 2 GB, meaning you can get up to 6 epochs in the same batch on a Colab.

Just make sure you use CLIP skip 2 and booru style tags when training.
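As a sketch, with the kohya-ss sd-scripts trainer the CLIP skip 2 recommendation maps to the `--clip_skip=2` option; everything else below (paths, resolution, network module) is a placeholder you would set for your own dataset:

```shell
# Hypothetical kohya-ss sd-scripts invocation; paths and hyperparameters
# are placeholders -- only --clip_skip=2 reflects the recommendation above.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="./anylora_fp16_pruned.safetensors" \
  --train_data_dir="./dataset" \
  --output_dir="./output" \
  --network_module=networks.lora \
  --resolution=512 \
  --clip_skip=2
```

Caption your training images with booru-style tags (e.g. `1girl, solo, smile`) rather than natural-language sentences.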

Remember to use a good VAE when generating, or images will look desaturated. I suggest the WD VAE or FT MSE. Or you can use the baked VAE version.
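In the AUTOMATIC1111 WebUI you can select an external VAE in Settings → SD VAE, or point the launcher at one on startup; a sketch, assuming the `--vae-path` launch flag and with a placeholder file name:

```shell
# Hypothetical launch command; the VAE file name is a placeholder
# for whichever VAE (e.g. the FT MSE one) you downloaded.
python launch.py --vae-path "models/VAE/vae-ft-mse.safetensors"
```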