The original author of this model is "@FapMagi". LiblibAI is committed to promoting the sharing and exchange of original models, so we have reposted this model for non-commercial exchange and learning. Users of this model must comply with the usage license declared by the original author.

If you are the original author of this model, please contact us (email: liblibai@163.com; WeChat: Liblibaijiang). We look forward to your joining, and we will transfer the model to your account as soon as possible. If you do not wish to share this model on LiblibAI, we will respect your wishes and take it down immediately.

LiblibAI respects every original model author and looks forward to growing together with every model creator!

Disclaimer: if this reposted model gives rise to an intellectual-property dispute or other infringement, LiblibAI will take the model down immediately and will not pursue liability against the original author.

This embedding will tell you what is REALLY DISGUSTING 🤢🤮

So please put it in the negative prompt 😜

 

[Update:230120] What does it do?

This embedding learns what disgusting compositions and color patterns are, including faulty human anatomy, offensive color schemes, upside-down spatial structures, and more. Placing it in the negative prompt can go a long way toward avoiding these things.

-

 

 

What are 2T, 4T, 16T, 32T?

The number of vectors per token used by each embedding variant.
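For intuition, a textual-inversion embedding for SD 1.x is essentially an N×768 matrix, where N is the "vectors per token" count above and 768 is the CLIP text encoder's hidden size; a 75T embedding fills nearly the whole 75-usable-token prompt window. A minimal sketch (illustrative shapes only, not the author's code):

```python
# Sketch: a textual-inversion embedding is an (num_vectors, hidden) matrix.
# For SD 1.x the CLIP text-encoder hidden size is 768; each row occupies
# one slot in the (75 usable) prompt-token window.
import numpy as np

def make_dummy_embedding(num_vectors, hidden=768):
    """Placeholder embedding tensor of the right shape (zeros, untrained)."""
    return np.zeros((num_vectors, hidden), dtype=np.float32)

for n in (2, 4, 16, 32, 64, 75):
    emb = make_dummy_embedding(n)
    print(f"{n}T -> shape {emb.shape}")  # each variant uses n token slots
```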

 

[Update:230120] What is 64T 75T?

64T: Trained for over 30,000 steps on mixed datasets.

75T: The maximum embedding size (75 vectors is the prompt-token limit); trained for 10,000 steps on a special dataset (generated by many different SD models with special reverse processing).

 

Which one should you choose?

  • 75T: The most "easy to use" embedding, trained on an accurate dataset created in a special way, with almost no side effects. It contains enough information to cover various usage scenarios. However, it may struggle to take effect on some well-trained models, and the changes it makes may be subtle rather than drastic.

  • 64T: Works with all models, but has side effects, so some tuning is required to find the best weight. Recommended: [(NG_DeepNegative_V1_64T:0.9):0.1]

  • 32T: Useful, but the effect may be too strong.

  • 16T: Reduces the chance of drawing bad anatomy, but may draw ugly faces. Suitable for raising the architecture level.

  • 4T: Reduces the chance of drawing bad anatomy, but has a little effect on light and shadow

  • 2T: "Easy to use" like 75T, but with only a slight effect.
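To make the recommended 64T weighting concrete, here is how it reads in AUTOMATIC1111 WebUI syntax (assuming the embedding file is named NG_DeepNegative_V1_64T):

```text
Negative prompt: [(NG_DeepNegative_V1_64T:0.9):0.1]
```

The inner (…:0.9) scales the embedding's attention weight to 0.9, and the outer […:0.1] is prompt editing, which introduces the embedding only after the first 10% of sampling steps. Both numbers are starting points to tune per model.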

 

Suggestion

Because this embedding learned how to create disgusting concepts, it cannot accurately improve picture quality on its own, so it is best used together with negative prompts such as (worst quality, low quality, logo, text, watermark, username).

Of course, it is completely fine to use with other similar negative embeddings.
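Putting the suggestion together, a typical full negative prompt might look like this (using the 75T variant as an example; the token name must match your downloaded file):

```text
Negative prompt: NG_DeepNegative_V1_75T, (worst quality, low quality, logo, text, watermark, username)
```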

 

More examples and tests

 

How does it work?

I tried to make SD learn what is really disgusting using the DeepDream algorithm. The dataset is imagenet-mini (1,000 images chosen randomly from that dataset).

deepdream is REALLLLLLLLLLLLLLLLLLLLLY disgusting 🤮 and the process of training this model really made me experience physical discomfort 😂
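The DeepDream idea behind the dataset can be sketched minimally: instead of training weights, you run gradient ascent on the *input* so that it increasingly excites a chosen layer, which is what produces those nauseating textures. This toy sketch substitutes a fixed random linear layer for a pretrained CNN; every name here is illustrative, not taken from the author's pipeline.

```python
# Toy DeepDream sketch: gradient ascent on the input to maximize a
# layer's activation energy. A real run uses a pretrained CNN layer;
# here the "layer" is just a fixed random matrix W.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))    # fixed "layer" weights (frozen)
x = rng.standard_normal(64) * 0.01   # the "image" being dreamed

def activation_energy(x):
    a = W @ x
    return 0.5 * np.sum(a ** 2)      # objective: excite the layer

e0 = activation_energy(x)
for _ in range(100):
    grad = W.T @ (W @ x)             # gradient of 0.5 * ||W x||^2 w.r.t. x
    x += 0.1 * grad / (np.linalg.norm(grad) + 1e-8)  # normalized ascent step
e1 = activation_energy(x)

print(e1 > e0)  # the input now excites the layer far more than before
```

In the real procedure the "dreamed" images over imagenet-mini become the training data the embedding is inverted from.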

 

What next?

sd-2.x embedding training~

 

Looking forward to your reviews and suggestions 🤗

-

my discord server, find me here:

https://discord.gg/v5HFg47J6U

-

put it in negative prompts