Hey, I'm the author of the SDA768 Embed, glad you like it! It's actually one of my earlier embeds; I should probably run a V2 on it at some point. Thanks for the shoutout! 👍🏼
@cat3c3a3tcatc3at 2 years ago
AI "art" will never replace human art. What you're doing is illegal
@jeff_clayton 2 years ago
The courts decide that -- and what's legal differs from country to country. This type of battle happens EVERY SINGLE TIME there is a big new technology. Not everything created by AI uses copyrighted work, either. There are bezillions of free and public domain works (meaning not copyrighted, or out of copyright) that can be used with these new technologies even if the rest gets ruled unlawful later.
@jeff_clayton 2 years ago
I for one am dying to see how it all pans out, or if it doesn't just continue to go back and forth in courts for the next several decades. In other big cases there have been varying results... settling out of court, laws changing, or a real loss - but sometimes the losing party moved to a country where their thing was not against the law. Internet companies can do that, so something may be deemed illegal where you are, but NOT where they are.
@ArisenProdigy 2 years ago
I love your textual inversions. I haven't gotten quite to the point of training one myself, but I really want a negative text/words/letters inversion ;)
@generalawareness101 2 years ago
I love making embeddings; I've released a few now, and I'm working on purely negative-prompt embeddings with amazing results.
@lewingtonn 2 years ago
if you have any cool embeddings pls share on the discord!
@meanwhiles432 2 years ago
Just wanted to point out that Empire also has a negative embedding file you can download. That is likely the reason for the difference; for some reason the negative embedding improves my outputs.
@poisenbery 2 years ago
Dude, thank you for going over the technical details. I was raging with confusion as to why textual inversion takes massively more time than making a new model. It makes sense that training something essentially from scratch would take longer than building on existing knowledge. I also was not aware that Dreambooth is a destructive process. It's very obvious now that you've said it, but WOW, I did not make that connection before.
@jordandavis406 a year ago
You're one of the best YouTubers of all time. Don't change a thing.
@cryptidsNstuff 2 years ago
Nice work as always.
@swannschilling474 2 years ago
Thanks for the recap!! 😇
@MultiMam12345 2 years ago
Amazing tools and workflow ---> spend time trying to replicate a seed with copy-paste, only to think it's garbage because it's not a copy-paste result of the embedding example. Great art, you must have an amazing paintbrush.
@techviking23 2 years ago
Thanks for the great explanation!
@ismailtibba 2 years ago
2.0 embeddings work great with the 2.1 model
@ratside9485 2 years ago
Donates a Windows key to the man.
@juanjesusligero391 2 years ago
I was looking for exactly this kind of comment XD
@lkewis 2 years ago
Great video and explanation! Though note that Textual Inversion came before Dreambooth: it was originally the only way to easily teach a new concept to Stable Diffusion with a limited dataset, and Dreambooth came out afterwards and was implemented for SD instead of Imagen.
@Z10T10 2 years ago
@koiboi Please make a video about training an embedding. There are already some videos out there about TI and how to train one, but no one gives detailed info about settings, learning rate, steps, the ideal number of images and why, or TI templates for a person's face or a style. There is so much to be mentioned, but everyone just sets the learning rate to 0.005 and steps to 15000, and the results are horrible.
@reijin999 a year ago
thanks for reading the paper
@abuelos84 2 years ago
If anything, watching the images generated as the TI is being trained is pretty funny. Especially if you're training your own face.
@simonbronson 2 years ago
Nice one... Looking forward to a how-to for Textual Inversion 😃
@p_p 2 years ago
10:00 maybe it's the size, 'cause it wasn't a square... I dunno
@lewingtonn 2 years ago
lol, maybe, there are a lot of ways to make it NOT work
@p_p 2 years ago
@@lewingtonn hjahahah true buddy
@bobbob9821 a year ago
Textual inversion - best for training one very specific object or person that you'd like to use on multiple models. Models - Best for training a larger "class" of persons or objects or a certain style.
@Nalestech 2 years ago
Great explanation! I am finally understanding how it all works. I have made heaps of successful ckpt models, but embeddings have been a challenge. They only produce the images I train them on: I ask for something different and it spits out the same images. You mentioned using 3-5 images, while I have been using ~20. Perhaps that is my issue? 🤪
@lewingtonn 2 years ago
there are a lot of ways you can do training wrong sadly, definitely have a go with 3-5 images, maybe that will help
@Nalestech 2 years ago
@@lewingtonn I went with 5 and it worked much better. I also went with fewer steps. 7500 worked better than 20k. Rather counterintuitive but I'm thinking I overtrained the earlier attempts.
@lewingtonn 2 years ago
@@Nalestech that's siiiiiiick!!!!
@SteveWarner 2 years ago
Keep in mind that your model has a massive impact on the TI/embed, both in terms of its creation and its end use. You're using the base SD1.5 model, which is more or less garbage. A decent model will significantly improve your results with any embed you add on top of it.
@UnderstandingCode 2 years ago
230 would like to hear more on these
@kesar 2 years ago
Thoughts about Textual Inversion vs LoRA?
@zerodefcts a year ago
Thanks very much for such an excellent video as usual. A question for you: in your description of how DreamBooth works, how do the regularization images fit into that explanation? Thanks!!
@RexelBartolome 2 years ago
I really like the thought of a few KBs' worth of data that's easily shareable, but the quality just isn't the same as Dreambooth or general fine-tuning... So for now, I'll have to make do with 2GB models, or look into merging to cut them down a bit.
@juanjesusligero391 2 years ago
I read somewhere that textual inversion works much better on Stable Diffusion 2.0 and 2.1 (I haven't tried though). Maybe on future model versions its quality will improve even more?
@RexelBartolome 2 years ago
@@juanjesusligero391 that's true, 2.1 embeddings are already powerful, but for my use case (specific art styles) they're still not as good as a 1.5 Dreambooth/finetune. Hopefully it gets even better though :)
@KeinNiemand 2 years ago
What about hypernetworks? Also, what's the difference between Dreambooth and real fine-tuning?
@garychap8384 a month ago
How does that work with token limits? I'm not sure the teapot could be captured in 75 tokens or less... are token limits still applicable? In particular, does the 'new' metatoken consume more than one of your valuable token slots (like a macro that must be expanded before processing), or just fill one token slot, like an actual new token concept? Anyone?
@poisenbery 2 years ago
7:48 I should mention that those details are user-generated during upload. CivitAI doesn't auto-generate them from special data; it's the user's responsibility to put in the correct info when they upload, and a lot of users input incorrect settings. One glaring example is people who make NAI-based merges and recommend a clip skip of 2, but list "SD 1.5" as the base model. EDIT: Yeah, the user clearly did not put the correct checkpoint. There's no fkn way they got that with SD 1.5; it's very obviously an NAI-based model they used.
@haydenmartin5866 2 years ago
I seem to be having issues creating hypernetworks in 2.1... as I monitor my textual-inversion images, they are all identical, even though I have a training set and everything else set up.
@tag_of_frank 11 months ago
Are they trained for a specific sampler? If so, how?
@Peppermint_juice 2 years ago
Can you explain how we can add a different checkpoint in the Automatic1111 Google Colab?
@darshitgoswami 2 years ago
Can it replace Dreambooth? Can you compare it on a real person, not just a style?
@pipinstallyp 2 years ago
They fundamentally work in different ways: textual inversion doesn't add any new images to the model itself. Instead, it finds the piece of text that correlates with the features of an image. It's like this: you give a bunch of images to the TI script, TI identifies the features of the images and then makes associations with the text prompt/keyword. Dreambooth, on the other hand, burns new images into the model itself, by converting images to noise and then having the model turn that noise back into the images, so the model has a new concept introduced. TI is just a token, that's it. So fundamentally, since TI doesn't introduce new images, it's pretty much not recommended for training a non-celebrity person. The native SD model simply doesn't have our pictures. Though if you want to introduce new art styles, sure! SD has a lot of art styles in it, and TI only makes better connections/word associations.
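The distinction above can be sketched in a few lines of toy NumPy (all names, shapes, and the "loss" are invented stand-ins, not real SD code): textual inversion runs gradient descent on a single new embedding vector while the model weights stay frozen, whereas Dreambooth would update the model weights themselves.

```python
import numpy as np

# Toy sketch: `frozen_model` stands in for the frozen diffusion model and
# `target` for "reconstruct the training images well". Only `new_token`
# (one embedding vector) is ever updated -- that's textual inversion.
rng = np.random.default_rng(0)
dim = 768                                    # CLIP text-encoder width in SD 1.x
frozen_model = rng.standard_normal((dim, dim)) / np.sqrt(dim)  # never touched
target = rng.standard_normal(dim)
new_token = np.zeros(dim)                    # the ONLY trainable parameter

def loss_and_grad(vec):
    err = frozen_model @ vec - target
    return (err ** 2).mean(), 2.0 * frozen_model.T @ err / dim

loss_before, _ = loss_and_grad(new_token)
for _ in range(500):                         # plain gradient descent on the token
    _, grad = loss_and_grad(new_token)
    new_token -= 0.5 * grad
loss_after, _ = loss_and_grad(new_token)

# Dreambooth, by contrast, would optimize frozen_model itself ("burning in"
# the new images), which is why it can overwrite what the model knew before.
```

Because only the token moves, TI can never express a concept the frozen model has no features for -- which is exactly the point made above about non-celebrity faces.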
@techviking23 2 years ago
@@pipinstallyp 🙏 thanks for the explanation! Are you saying that TI works better for styles than for a personal character, and Dreambooth is better at characters than styles?
@khirondb a year ago
I might be dumb, but I think your first test was off because of the height x width.
@juanjesusligero391 2 years ago
How much VRAM do I need to create textual embeddings? (I've got an Nvidia card with 8GB VRAM, I hope I can! ^^)
@Z10T10 2 years ago
I'm doing it on an RTX 2060 (6 GB)
@juanjesusligero391 2 years ago
@@Z10T10 That's really good news for me! Thank you very much! :D Are you using the automatic1111 repo to create them, or another method?
@Z10T10 2 years ago
@@juanjesusligero391 I'm using the typical a1111 method (the standard Train tab). To decrease VRAM usage, go to Settings and, under Training, check "Move VAE and CLIP to RAM when training if possible. Saves VRAM." and "Use cross attention optimizations while training". When training, set "Save an image to log directory every N steps, 0 to disable" to 0; that way it won't generate any images while training, which saves some VRAM. For the other settings I wouldn't claim to be an expert, but I found 15,000 steps with an embedding learning rate of 0.00005 fine, though not the best.
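Collected in one place, the setup described in that comment looks like the dict below (a summary only -- the keys are descriptive labels, not Automatic1111's internal setting names, which vary by version):

```python
# Summary of the VRAM-saving TI training setup from the comment above.
# Keys are illustrative labels, not A1111's actual config keys.
ti_training_setup = {
    "move_vae_and_clip_to_ram": True,       # Settings -> Training checkbox
    "cross_attention_optimizations": True,  # Settings -> Training checkbox
    "save_image_every_n_steps": 0,          # 0 disables previews, saving VRAM
    "steps": 15_000,                        # "fine but not the best"
    "embedding_learning_rate": 5e-5,        # i.e. 0.00005
}
```

Note how this learning rate is 100x smaller than the 0.005 complained about earlier in the thread, which may explain the difference in results people report.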
@MultiMam12345 2 years ago
I love using AI to create, using my own material to build models. But using existing IP without permission and then selling it is no different from stealing bread, or thinking that tickets to a concert by your favorite band should be free. These models exist because the artists who made them possible could buy and eat bread. Let's wait until AI gets hungry; I trust it will be smart enough to know whom to eat first 😎