HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE for more AI updates. Thanks!
@Sandel99456 1 year ago
This is a stupid guide for lazy people, nothing more. Also, cropping is better, just not with brime. And don't use a token unless you are planning on using SDXL only for your LoRA.
@TheCriticalMastermind 1 year ago
Thank you for this explanation. I just wish the Stable Diffusion community would train more models on objects like plastic parts and craft objects, the everyday things we see in product design, and also more animal and nature models. There are so many female characters on these model websites; when you look for models that are more object-, animal-, or landscape-based, I feel the community is getting lost in home-made female characters. Please, guys, can we start training more objects, crafts, tools, and 3D models so we can start getting concepts and ideas for being more productive?
@jamesclow108 1 year ago
@Sandel99456 What would be the non-lazy way?
@Sandel99456 1 year ago
@jamesclow108 Non-lazy would be reading up on every setting in Kohya and choosing settings based on knowledge, rather than copying some stupid bot JSON file onto your PC 🙄
@jamesclow108 1 year ago
@Sandel99456 At least it's a reasonable way for folks to start becoming familiar with the concepts. A bit like buying pre-made pizza dough rather than making the dough yourself: sure, in the long run, if you want more control over the end pizza, you'll probably want to learn how to make the dough, but pre-made will at least let a few concepts be learned in a hands-on way. Can you post links to the documentation you recommend that explains all of the settings, so folks can go to it if they have any questions after the video?
@mactheo2574 1 year ago
Give a man a LoRA and you feed him for a day. Teach a man to train a LoRA and you feed him for a lifetime. Much appreciated, K!
@0AThijs 1 year ago
Yep, been training without luck for months now 🥲
@mikazukiaugus3435 1 month ago
Hi, is this video outdated? I just want to try making a LoRA with my PC.
@hanmin8346 3 months ago
An updated version of this tutorial would be amazing, as many things have changed in the Kohya UI and it's very confusing! Thank you for all the hard work ♥
@MysteryGuitarMan 1 year ago
Thank you @Aitrepreneur! I love that this tutorial dispels some myths about LoRAs, especially the random-token thing... starting all the way back with "sks" and now "omhw". When you take Lensa and other apps like that into account, think of how many millions of GPU-hours have been wasted (they could have started from "person" or "portrait"). Only one thing to mention: you don't need regularization images unless you plan on merging your LoRA into your checkpoint, or for some other pretty specific use cases, like de-overfitting a specific person/character/etc. That should speed up your training even more.
@Aitrepreneur 1 year ago
Absolutely! Thank you so much for your help! ;)
@OriBengal 1 year ago
@Aitrepreneur @MysteryGuitarMan I ran some tests side by side... HUGE difference in how many steps were saved by using your celeb trick (which you taught, though with less certainty, back in the day). No reg images. Superb results. Also... nice to know that I look like Patrick Dempsey :)
@jamesclow108 1 year ago
I just don't understand how omhw became the go-to rare token if you want to use one. I started looking for a list of rare tokens that can be used with SDXL and found nothing :-(
@OliNorwell 1 year ago
@jamesclow108 It dates back to the SD 1.5 days, when it was shown to be a rare token; I'm not sure there's any evidence out there that it's necessarily a rare token for SDXL.
@plejra 1 year ago
Thanks a lot! I was also a little confused by the regularization data. Anyway, I'm looking for a way to optimize the settings for my old GTX 1080 Ti with only 11 GB of VRAM.
@TransformXRED 1 year ago
Edit: chapters are here now ---- Don't get me wrong, I'm very grateful for your videos, but you need to add chapters, especially for long videos like this one. People will come back to it more easily, multiple times, to check the tutorial... Double win.
@SteveGamesOnline 1 year ago
You mean timestamps?
@TransformXRED 1 year ago
@SteveGamesOnline "Video chapters" is what YouTube calls the feature. It's the same thing ;)
@noeltock 1 year ago
Assuming he wants to increase engagement/watch time.
@TransformXRED 1 year ago
@noeltock Those are the most important metrics for a video to perform well on YouTube, and YouTubers want their videos to be watched and shared by many people. I only mentioned it because chapters are beneficial for the creator too; chapters are the best thing added to YouTube. Aitrepreneur has a really good channel, and I watch almost everything posted here. But I generally don't come back if there is another tutorial out there on the same subject that has chapters, even if it's less polished (that's me, but I know others do that too). Same for podcasts: I watch them in full, but I never come back if I can't easily navigate (roughly) to the part I would like to listen to again.
@MrGTAmodsgerman 1 year ago
The video does have chapters...
@kallamamran 1 year ago
Great video, as always, BUT... the focus is still on people. This is quite limiting, since most models are already great at creating images of people, specific or non-specific. What I miss is a training video on how to train styles like "my own art style", poses like "yoga/contorted/lying down poses", or actions like "playing football/fishing/line dancing". Just training a person (portrait/likeness) is what everyone and her mother has been doing since training was introduced.
@EH21UTB 1 year ago
Exactly, that's what I want also
@LonelionZK 1 year ago
Same here. All I see is training on faces
@originaltasan 2 months ago
I know I'm a year late, but my understanding is that you can just do the exact same thing - just caption your images differently. Use images of the concepts you want to train and caption them accordingly.
@OriBengal 1 year ago
Glad to see you doing visual stuff again, not just LLMs. I support you pursuing all your passions, of course, but you were one of the best at creating really useful visual tutorials.
@MonkeChillVibes 1 year ago
That's what I told him lol, good to get back to SD.
@OriBengal 1 year ago
@@MonkeChillVibes K was one of the best.... IS one of the best... as this video clearly demonstrates! :)
@MonkeChillVibes 1 year ago
@@OriBengal Yeah 100%
@TheKuzmann 1 year ago
From my experience, captions should be used in the following situations and in the following manner: use them when you want to generate a specific scene, subject, or concept. This of course depends on the dataset you're training on; if you're training an item, you need the dataset to consist only of that item with different backgrounds. If you are training a person's face or half body and want to generate images of that person, for example, dancing, then training with captions that never mention the person dancing (or standing in a pose that implies movement, with hands in the air, etc.) will make it much more difficult or impossible for the model to generate the trained person dancing. On the other hand, if your dataset consists of images of a person dancing, using captions makes the desired "concept" (a certain person dancing, i.e. standing in a pose that implies movement) a variable (I've also seen the term "pruned" caption used for this) which is easy to call up.
In terms of style, on the other hand, training the text encoder is undesirable, because you want to transfer a visual identity to the model and, most importantly, you want it "printed" onto every possible prompt. In that case, only the class (style or aesthetics) is trained. The most common mistake is to train a style with regularization images plus the text encoder (which I did for an absurdly long time when training styles in DreamBooth). Such a model is literally unusable and generates random images. Even training a textual inversion for style using captions can make it less flexible.
I'm writing all this from my personal experience and from all the tutorials that exist on the internet and YouTube, and I've gone through ALL of them, including yours :-) I can't even count how many failed models I've trained, and that's necessary to learn how to train a neural network.
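(For anyone new to kohya-style training: captions normally live in a .txt file with the same name as each image. The file names and caption wording below are purely illustrative of the "describe the pose so it becomes a variable" idea above, not settings taken from the video.)

    dance_01.png
    dance_01.txt  ->  ohwx woman dancing, arms raised, motion blur, outdoors
    dance_02.png
    dance_02.txt  ->  ohwx woman mid-spin, one leg lifted, studio lighting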
@HestoySeghuro 1 year ago
Styles + captions works. For styles + no captions, you need to disable the text encoder. Styles + regularization images is something I never tried.
@khush7233 1 year ago
The important points are as follows:
1. Captions should be used strategically when training models to generate specific scenes, subjects, or concepts.
2. The effectiveness of captions depends on the nature of the dataset. For example, training on a dataset of images of a person dancing requires captions that explicitly mention the person's actions to achieve the desired results.
3. Using captions can make a desired concept (e.g., a person dancing) more accessible for the model to generate.
4. Training the text encoder for a style is discouraged, because the goal is to transfer a visual identity and have it apply across all prompts.
5. Combining style training with regularization images and the text encoder can result in an unusable model that generates random images.
6. Even training a textual inversion for style using captions can reduce flexibility.
@elias9725 1 year ago
Wow this video did not feel like ~1 hour - thanks for making such a comprehensive guide K!
@Aitrepreneur 1 year ago
Glad you enjoyed it! It definitely felt like an eternity making it 🤣
@Ecker00 1 year ago
wait... it was that long? I was so absorbed! Awesome research
@elias9725 1 year ago
@@Aitrepreneur Haha I can imagine! 😂😂
@Always.Smarter 1 year ago
AI-generated comment
@joepark81 1 year ago
First of all, there is no sufficient explanation of the configuration files. It seems the only way to get them is to be your patron. Customized configuration files for your patrons, well, that's okay. But this whole video could have been much more useful if you had mentioned where to get other configuration files, or at least how to make one. Second, the GUI version you are using must be outdated, or it must have some add-on that you didn't mention. I've just done a fresh install of Kohya_ss and THERE IS NO DEPRECATED TAB UNDER LORA -> TOOLS!!! I've since found out that in my GUI the tab is named "Dataset Preparation."
@phily8020 11 months ago
It's like a half-cooked tutorial.
@vilainsinge5282 9 months ago
Ever heard of "updates", my friend?
@mixxfish 9 months ago
Did you ever get an answer?
@testales 1 year ago
The deprecated section is probably labeled that way because training with regularization images is more or less obsolete, or has only very specific use cases. The model has already learned millions of things and can probably take a few more images. For new concepts you may not even be able to generate regularization images in the first place, because the concept isn't known yet. By overriding the training of a celebrity you are damaging the model intentionally, which regularization is supposed to prevent; but because the LoRA is only applied temporarily, this doesn't matter anyway.
@CrazyCat-RU 1 year ago
I'm writing through a translator, but in my opinion the regularization folder is greatly underestimated. In SD 1.5 I tried putting photos of children playing in the park in the dataset, and photos of steam locomotives in the regularization folder. ;-) As a result, the LoRA drew a great children's railroad in the park and children playing with steam locomotives. :-)
@David-Codes 11 months ago
But then how do you give your new person a new codename, like "zwx person"?
@juggz143 1 year ago
@Aitrepreneur I just wanted to point out two settings that it seems you may have misunderstood. At around 30:00 you mention that the "cache text encoder outputs" option is broken and suggest not using it for now; then later at 32:50 you mention the "--network_train_unet_only" parameter, say the difference is negligible, and suggest people not use it either. BUT if you use the "--network_train_unet_only" command, it fixes the "cache text encoder outputs" command. Together they use significantly less VRAM and make training much faster. So the quality difference is negligible and the training is way faster if you use them in combination. Give it a try and you may recommend the opposite of your conclusions from testing them separately.
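(For reference, if you drive kohya's sd-scripts from the command line instead of the GUI, combining the two options above might look roughly like the sketch below. The two flags are the ones named in this comment; the script name is kohya's SDXL LoRA trainer as I understand it, and every path and model value is a placeholder rather than the video's configuration.)

    accelerate launch sdxl_train_network.py \
      --pretrained_model_name_or_path "sd_xl_base_1.0.safetensors" \
      --train_data_dir "dataset/img" \
      --output_dir "output" \
      --network_module networks.lora \
      --network_train_unet_only \
      --cache_text_encoder_outputs \
      --cache_latents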
@ESGamingCentral 1 year ago
Is it normal for a 4090 to do 2.45 s/it? I'm trying this tutorial, but I was expecting the card to be faster.
@HO-cj3ut 1 year ago
It could be.
@nikoleifalkon 1 year ago
@ESGamingCentral He has a 3090, not a 4090.
@tazztone 1 year ago
@ESGamingCentral Mine is pretty fast now (1.5 s/it) on a 3090 with 9 training images: Network Rank (Dimension) 128, added --network_train_unet_only, and checked "cache text encoder outputs".
@ESGamingCentral 1 year ago
@@tazztone what drivers?
@marcelschuberth9709 11 months ago
Just a little hint, since reopening the file to check the status is pretty annoying and inefficient: use tail -f instead. It prints the last 10 lines of a file (by default; the count can be specified with -n), and -f sets a flag so it updates whenever new lines are added. It even handles progress bars correctly, instead of printing a new line for each update.
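(A minimal example of the command described above; the log file name is just a placeholder for wherever your training run writes its output.)

    tail -n 20 -f lora_training.log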
@doingtime20 8 months ago
The best guide for Lora training, thanks!
@BetzVRz 1 year ago
Best results I have ever gotten, thanks to this tutorial. Amazing stuff.
@amin5127 1 year ago
Hey it would be nice to see a style training guide for SDXL 1.0
@Aitrepreneur 1 year ago
What I show in this video should be enough but I could make a specialized video just for this
@think.feel.travel 1 year ago
@Aitrepreneur Yes, it would be very appreciated, as I suppose a LoRA could be way more useful than replicating a celebrity (I don't really know why one would use a LoRA for that, haha) 😂 Thank you a lot for your videos!
@lennylein 1 year ago
Yes please 😊
@robxsiq7744 1 year ago
from frustration to making amazing loras...thanks man!
@wholeness 1 year ago
This whole tutorial was legendary. Became a Patreon member without hesitation and never looked back. These one click installs are incredible!
@ayanechan-yt 1 year ago
Thank you, I was looking for a way to train a character with SDXL! This renewed my interest in Stable Diffusion :-)
@Aitrepreneur 1 year ago
Great to hear!
@ayanechan-yt 1 year ago
By the way, I have been meaning to ask... Are there any differences in picture quality between using a Lora vs using a fine-tuned model?
@mistercapitale 1 year ago
This is a fantastic video. I will be a Patreon supporter just because of this video. Very smart. The marketing is strong with this one.
@ChrisR88 10 months ago
I've tried so many tutorials to create a LoRA and the results were always subpar. With your guide and settings, I finally managed to make a proper LoRA that works (almost) flawlessly! It isn't very flexible in terms of styles (it keeps things photographic and realistic), and at a network rank of 256 it comes in at 1.7 GB, but it's the first LoRA that reproduces the face perfectly more than 90% of the time, which is amazing! Thank you, @Aitrepreneur! Also, the RunPod template was a time saver!
@BruceDailey 1 year ago
Thank you. I've been trying to get a lora to work for months. This is the first video that worked. The link for an awesome runpod template was really appreciated.
@lumaceon3863 1 year ago
I shudder to think how many hours I wasted cropping images manually. Thanks, this is insanely helpful!
@TheRemarkableN 1 year ago
You are doing the AI gods’ work 🙏. Thank you good sir. You also have excellent taste in celebrities.
@ApexArtistX 1 year ago
Capital G is wrong grammar
@TheRemarkableN 1 year ago
@@ApexArtistX Thanks! 👍
@ccelik97 1 year ago
> "You also have excellent taste in celebrities." History repeats itself I guess lol. E.g. "Lenna".
@Gardiance 1 year ago
Thank you, Bro. Loads of hours and $ used to create this video. You always do a good job ❤
@Aitrepreneur 1 year ago
Much appreciated!
@ericruffy2124 1 year ago
Thank God you're BACK... 😙
@MarcSpctr 1 year ago
Although the SD team is right that training is faster and easier with regular tokens rather than RANDOM TOKENS, that advantage becomes useless if you want to use the trained LoRAs on a different base model. So say tomorrow RealisticVision or a similar base model is released for SDXL: using these LoRAs will give inferior quality compared to LoRAs that were trained from scratch on a rare token. So I would suggest that if you plan to use other base models (which of course everyone does), you use RANDOM TOKENS like ohwx, ab12, or anything random.
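(For context, in kohya-style training that instance token usually appears in the dataset folder name and then again in your generation prompts. The layout below is only a sketch; the repeat count, class word, and paths are illustrative, not recommendations from the video.)

    dataset/
      img/
        40_ohwx woman/    <- "<repeats>_<instance token> <class>"; training images (and optional .txt captions) go here
      reg/
        1_woman/          <- optional regularization images for the class

At generation time the same token is then used in the prompt, e.g. "ohwx woman standing on a beach".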
@jaoltr 1 year ago
This is a really good point. HOWEVER: Are you speculating or do you have real world results? If you're speculating, it would be great if someone who has done some testing could weigh in and confirm or refute the assertion...
@leucome 1 year ago
@jaoltr Just logic, seriously. If you use a token that already exists, your LoRA is going to be built on top of it. If that token looks different in another model, then the LoRA will also look different. I had some real issues with the token "vio" turning my character into a violin or a violet color on certain checkpoints. So now I don't take any chances and use a weird rare token like "bnhanlwx" to avoid ending up with a broken LoRA.
@MysteryGuitarMan 1 year ago
@MarcSpctr That's not right, unless you use a very common token like "orange" or, even worse, a word fragment like "vio". "ohwx" will also exist in RealisticVisionXL or whatever XL community models come out. Since you started so much farther away from your final target, you run an even higher risk of having to retrain your LoRA.
@leucome 1 year ago
@MysteryGuitarMan Yeah, if everybody uses the same rare token then it's not rare anymore. I haven't had an issue with this yet, but it's definitely possible.
@jaoltr 1 year ago
@leucome Thank you for sharing your experience, that's what I was looking for. I see the logic (that's why I thought it was a good point). But "logical" only means you have a hypothesis that needs to be tested; it doesn't mean you've found the truth. Concluding that something is true because it's logical is both a trap and a paradox, since it's an illogical method for reaching such a conclusion. As Deming said, "In God we trust. All others must bring data."
@bobdelul 1 year ago
OK, that's it, I've become a patron now. Having access to your example LoRAs will save me tons of time figuring out what works and what doesn't. Such a good idea!
@technocore1591 1 year ago
Big thanks! I joined your patreon for the files!!! THE FILES!!!! Lol thanks, dude. What does dreambooth for SDXL look like?
@vokuh 1 year ago
haha just 2 days after joining your patreon, you saved me 10 hours of work
@ScottTheis 1 year ago
Missed you. Good to have you back. Thanks for all your work.
@abdelhakkhalil7684 1 year ago
You know, you can keep the less-trained LoRA as your main LoRA and use the more-trained one in the positive prompt for ADetailer. This way, you get both flexibility and detail.
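(In A1111-style UIs that trick might look roughly like the prompts below; the LoRA file names and weights are invented for illustration, the idea being that ADetailer gets its own positive prompt for the face pass.)

    Main prompt:       photo of ohwx woman hiking in a forest, <lora:mychar-e04:0.8>
    ADetailer prompt:  ohwx woman, detailed face, <lora:mychar-e10:0.9>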
@touchdownchef 10 months ago
The video was very helpful, thank you. If there are any differences between working on LoRA and checkpoint models, even slight ones, what would those differences be? Additionally, do you have plans to upload a video tutorial on creating checkpoints for the SDXL version? There seem to be no related tutorials on YouTube, and it would be greatly beneficial.
@noelbachinimiliti1760 2 months ago
Hi man, amazing video. I'm learning to train a LoRA, so this is very useful. I have a question: is it possible to train the LoRA on Pony instead of base Stable Diffusion? Thank you so much.
@natsuschiffer8316 1 year ago
When is SDXL DreamBooth coming?! Thanks for the video!
@mihoilo2276 4 months ago
Can you please make an updated guide to creating LoRAs for SDXL?
@SooNmus 1 year ago
Thank you immensely for your insights 🙌. We previously approached some of these parameters from a distinct perspective. However, after implementing your suggestions, the outcomes were genuinely remarkable. Interestingly, several of our projects faced setbacks due to resolution concerns, and it never occurred to us that cropping was the culprit. Exceptional video content! 🌟
@ElGalloUltimo 9 months ago
On my first try, I just dumped 56 images into the training folder, thinking it would help. I ended up with 22,000 training steps, and it was going to take 14 hours on a 4090. After figuring out how long it was going to take, I promptly went back, followed the video's advice to the letter with only 12 images, and had a training time similar to the video's.
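(For anyone wondering why the step count balloons like that: in kohya the total is roughly images × repeats × epochs ÷ batch size. With purely hypothetical values of 40 repeats, 10 epochs, and batch size 1, which are not necessarily the video's settings, 56 images gives about 56 × 40 × 10 = 22,400 steps, while 12 images gives about 12 × 40 × 10 = 4,800.)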
@OriBengal 1 year ago
Hey K, great tutorial. I've been watching a bunch of LoRA tutorials recently... you've definitely simplified it. Question for you: where did that RunPod Kohya image come from? They don't have it listed in their dropdown of images. This is way better than manually installing it, etc.
@c0nsumption 1 year ago
This man is the GOAT
@NeonXXP 1 year ago
Haven't played with stable diffusion in months. Thought I'd hit you up and see if there were new fast and easy ways to train on specific people.
@EpochEmerge 1 year ago
I very much approve of the work done; I've also tested different parameters myself and understand how much time it takes. The only remark I'd make is about the Seed parameter (19:39, next to cache latents): if you need to test different parameters, they should be tested with a single fixed seed, otherwise the training will be different every time. You can check this by making two LoRAs with the same settings but different seeds. Otherwise, great video as well.
@deadlymarmoset2074 1 year ago
I am able to train at 1024 on 8 GB of VRAM by using a network rank of 4, plus obviously all the low-VRAM settings, and an alpha of 1 (but I don't know if that matters). The LoRA turned out pretty good.
@shiftyjmusic9170 1 year ago
Could you post the .json somewhere? It would be much appreciated!
@ForkFaced 1 year ago
I second the json posting, would really be appreciated.
@theboyjohnny123 1 year ago
Which GPU do you have? Mine is a 2070 Max-Q with 8 GB and I'm not able to get any image out of it.
@deadlymarmoset2074 1 year ago
@theboyjohnny123 A 2080. Though, are you talking about training with Kohya or using SDXL to actually generate images?
@shiftyjmusic9170 1 year ago
Thank you very much! 💯 amazing work there btw!
@keller2me 1 year ago
Thank you very much. Your videos are fantastic and very detailed. If I may make a request, if it's in your plans or possible: could you give a little insight into training "themes" as well as the characters in picture training? It would be great to understand whether there are substantial differences, and I think the public would be grateful to you (at least I would be). Congratulations again on the excellent job explaining everything, and see you next time.
@jacquesbynens3816 1 year ago
You are a true sensei... infinite thanks for all these tutorials. U da man!
@DromaticGnome 10 months ago
Thank you! I just joined your Patreon - looking forward to digging through all that you've created!
@Axodus 1 year ago
Now do an ultimate tutorial for LLM LoRAs! :D (If you need advice, ask me; I know a LITTLE bit about LLM LoRAs.)
@kofteburger 1 year ago
I've been looking forward to this.
@CronoBJS 1 year ago
I missed you Aitrepreneur! This is one of the most needed videos!
@jamesclow108 1 year ago
Thank you; thanks to this video I've been able to take my first step in LoRA training. I decided to make my first attempt with your Margot Robbie set and a batch size of 2, as I have 24 GB of VRAM and wanted to see the speed. I'm looking at about two and a half hours and 17.6 GB of VRAM. It's gotten me curious, though, about the level of detail and the best way to maximize it. The training images and regularization images are JPGs. If you wanted the highest quality possible, would it be better to use PNGs, or would the difference be so negligible that it isn't worth it? The reason I ask is that I've noticed the trend of waxy-looking, low-skin-detail people in generated images out there, and wondered whether only using training and regularization images with decent skin detail would solve that issue.
@jrobertsz71 1 year ago
All I can say is Wow!
@aiviistudio 1 year ago
Thank you @Aitrepreneur! I really love your content. You have very deep knowledge of what you are doing and explain it very well. Can't wait for your checkpoint training video ☺️
@Aitrepreneur 1 year ago
I appreciate that! ;)
@jaoltr 1 year ago
🔥 A one-hour MASTERCLASS on how to train SDXL LoRAs. You rock! 🔥
@ssjgokillo 1 year ago
This was really helpful, but I wish you had also included information on settings for styles (like what instance/class prompts to use). Also, what if we're doing a LoRA based on a cartoon character that there isn't a celebrity likeness for?
@thedesigngraphik 1 year ago
I feel your pain. Over a year now of many great videos from Stable Diffusion YouTubers, but it's always, and yes 100% always, about training people. It's like nobody is using AI with their own artwork as the training source.
@BlackMita 1 year ago
Truuue
@anonymousanonim7615 1 year ago
@thedesigngraphik I want to train a clothing style; for now I'm still stuck on SD 1.5 lol
@TranshumanVideos 1 year ago
Those Milly Alcock SDXL base model generations 😂😂🪿
@AlterMax24-YouTube 11 months ago
This is simply amazing. I have a little question: is XL necessary, or does it work with SD 1.5? Same question for object and clothing LoRAs! THANK YOU!
@exiacyn4621 8 months ago
Fantastic video. I'd really love to see a tutorial on how to do LoRA training using OneTrainer, which seems to have a much better interface and more useful features, like masking.
@LegionAI.Online 1 year ago
Is this why you were gone for so long? I missed you! My god, I was getting worried about you again!
@Wasted-GTA 10 months ago
Are the LoRAs supposed to look like the celebrity or the training subject afterward? I completed all the steps, but now my model looks like Margot Robbie instead of the blonde I created. Also, you said to use a seed near the end; where do I get the right seed number? Thanks man, you have a real talent.
@rogerioshigo6751 1 year ago
You are the best, man. Keep up the good work 😄
@iamCryptobulls 1 year ago
Such an amazing video! This topic has been so confusing, so this video was very helpful!
@bricenuzzo7747 1 year ago
This is pure gold. Thank you and congratulations, you've won my Patreon subscription!
@BlueScorpioZA 1 year ago
As far as flexibility is concerned with LoRA models, one could always use a model that has more training, which has a photorealistic look when used at full strength, but simply reduce the strength of that LoRA when attempting to apply a non-photorealistic style to it. At full strength it would look like a photo, but at a lower strength it would still give you a good resemblance to your character while being flexible enough to allow non-photorealistic styles to be applied to it.
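(In A1111-style prompt syntax that strength is the last number in the LoRA tag; the file name and values below are only illustrative.)

    Full strength:     <lora:mychar:1.0> photo of ohwx woman
    Reduced strength:  <lora:mychar:0.6> watercolor painting of ohwx woman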
@osojii 10 months ago
Thank you so much! This tutorial helped me immensely :)
@bartmeeus9033 1 year ago
Thanks for the in depth explanation and the hard work to create this video!
@mckachun 1 year ago
masterclass~!! thanks for sharing~!!
@yoniattlan3870 6 months ago
Perfect! Thank you! Can I install the Kohya GUI tool on a MacBook?
@maxp7984 1 year ago
Very useful and detailed! Thanks a lot.
@Aitrepreneur 1 year ago
Glad you enjoyed it!
@itzpaco5539 1 year ago
Thank you K ❤
@noipowszystkim 1 year ago
The number of new models on Civitai will increase after this video, for sure.
@K-A_Z_A-K_S_URALA 15 days ago
Can I ask you a question: what is the maximum number of photos you need to upload for training on a real person, full body? I have 250 photos of my wife; I trained on a 1.5 model and everything is cool. I used "150_gen" and top quality, and now I'm busy with SDXL and I'm curious whether that many photos ends up training a style rather than a character, or whether you need to cut it down to around 100 photos???? Thanks
@ejaykniep 1 year ago
Can't wait! 😁
@cedtala 1 year ago
Hi :D Thanks again... is there a way to stop training and then resume the same training later?
@Aitrepreneur 1 year ago
Yes you can: just stop the training, then input the path to the last trained LoRA model in the "LoRA network weights" box ("Path to an existing LoRA network weights to resume training from"), located at the top of the parameters tab.
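(For anyone scripting this instead of using the GUI, the equivalent sd-scripts argument appears to be --network_weights; the file path and the other values below are placeholders, and the rest of the command would simply be whatever you used for the original run.)

    accelerate launch sdxl_train_network.py \
      --network_weights "output/my_lora-000004.safetensors" \
      --pretrained_model_name_or_path "sd_xl_base_1.0.safetensors" \
      --train_data_dir "dataset/img" \
      --output_dir "output" \
      --network_module networks.lora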
@graylife_ 1 year ago
Thanks man! I really appreciate the hard work; you've done an incredible job. I like how much you've progressed over the past year. Keep up the good work.
@spookywaves 1 year ago
Great video!
@OliNorwell 1 year ago
I wish everyone good luck and thanks of course for the excellent tutorial. But I'm going to be the one who stands up and disagrees with some things that have been said here. (Please don't shoot me!) The problem with the 'celeb trick' is that unless the likeness is very similar, you end up with a somewhat strange middle ground where yes it looks like your target person, but you can clearly see the underlying facial shape/features of the 'original celeb'. This is apparent in this tutorial. Personally I'm not a fan of that end result one bit. I've used 'ohwx' myself training on 20 images and have gotten very good results. Certainly comparable in terms of 'quality' to the results shown here. I also believe you should crop your images to remove useless information, and achieve a more zoomed in image where that is desired. I guess all I would encourage people to do is experiment for themselves.
@codefire88 1 year ago
Can you do a video on how to do LoRA training for LLaMA 2?
@AgustinCaniglia1992 1 year ago
Amazing work. Thank you.
@DivinityIsPurity 1 year ago
Oh LORA!!! -Steve Urkel
@russellmm 1 year ago
Hopefully RunPod sponsors you, as you are one of the very best training channels.
@Aitrepreneur 1 year ago
They were supposed to at one point, then they stopped responding to my emails 🤣
@LonelionZK 1 year ago
Where is the SDXL.json file? Thanks
@mikepp9588 1 year ago
Another great video & tutorial! Question: "always use a celebrity that looks like..." What about dogs? If I'm training a dog, should I search for famous dogs, or does that not apply to non-humans?
@Kujamon 10 months ago
Does this still install every file into the local windows drive, no matter where you run it from? I couldn't use it before because there was not enough space on the windows drive.
@spearcy 1 year ago
Your hugest YT vid ever!
@tanjabeckers9478 1 year ago
Bravo for this MASTERCLASS! 💥💥💥💥💥
@gammakaph 1 year ago
Does it work for you?
@celebAIdance 1 year ago
Can someone answer this? If I'm generating full-body photos, do I also need full-body photos during LoRA training, or is it only the face that matters?
@velly027 1 year ago
Great work! Really helpful 👌
@BetzVRz 11 months ago
Could you redo this training? It seems something has changed on the RunPod side.
@ashish-lk9lx 1 year ago
Is having regularization images at 768x768 or 1024x1024 important? I have random images at very high resolution, so can I use random sizes?
@MathisDaudebourg 1 year ago
Thank you for this very comprehensive tutorial. I have a question: my PC is not powerful enough to generate a LoRA with Kohya, so I used the RunPod method to generate my LoRA. Once the work in Kohya is done, do we still need the pod system for Stable Diffusion, or not?
@stanpikaliri1621 1 year ago
I personally would just train it in CPU-only mode with high parameters, because I have 128 GB of DDR RAM. It should also be much simpler, with less stuff to set up, and I wouldn't need an RTX card or RunPod. To be honest, I expected to see how to do that in this video, but he only shows us how to do it using GPUs.
@flonixcorn 1 year ago
Yes let's goo finally a good lora tut 🎉❤
@HO-cj3ut 1 year ago
Thank you so much, I like this channel, the best.
@TheKuzmann 1 year ago
33:33 ...For style training, what you definitely don't want is for certain words (tokens) in the captions to become associated with the dataset, so that images from the dataset pop up whenever those words appear in the prompt. For this reason, style training is done with little or no text-encoder tuning, and really smart captions.
@coulterjb22 1 year ago
Simply amazing. 🤯
@021tks5 6 months ago
This video is amazing. The regularization images shown were realistic, but is it possible for them to also support anime-style LoRAs?
@augustolacerda3560 1 year ago
Mr. (?) K, your videos are always amazing. I'd like to suggest some more in-depth content on the smaller things related to training, like assembling a set of regularization images. I have also been looking for information on training models for text AI (Oobabooga models and so on), but I couldn't find the text-AI community or information related to training.
@phillipberenz4284 1 year ago
This is great! Is it possible to use this method to create a high quality Lora for a completely fictional person?
@rashedulkabir6227 1 year ago
This LoRA-trained Cillian Murphy looks more accurate than in other SD versions.
@aslansamarkanov6603 1 month ago
Thank you so much for the video! That's a lot of hard work. I have a question: I want to make a comic using AI (ComfyUI, Stable Diffusion). Is it possible to train all the materials into one model? Can multiple characters, objects, and styles be collected in one model by assigning different tags to each of them?
@OttoMaticInc 1 year ago
K! Brother, your work is amazing, not just here but in general. I have learned next to everything I know about AI from you, and just when I think I must be in the top 10% of AI users, you come along and shatter that limit again. Thank you so much; this was exactly what I needed to proceed with my own work here on YouTube. Also, just now I was doubting whether tutorial-style videos are a solid game plan for YouTube, and yet here you are, crushing it again! Know that you have my respect, and please do keep going. Cheers!