🔗 Full Instructions and Links Written Post (the one used in the tutorial) ⤵ ▶ www.patreon.com/posts/110879657
🔗 SECourses Discord Channel to Get Full Support ⤵ ▶ discord.com/servers/software-engineering-courses-secourses-772774097734074388
🔗 FLUX SwarmUI Instructions Post (public no need login) ⤵ ▶ www.patreon.com/posts/106135985
🔗 FLUX Models 1-Click Robust Auto Downloader Scripts ⤵ ▶ www.patreon.com/posts/109289967
🔗 Main Windows SwarmUI Tutorial (Watch To Learn How to Use) ⤵ ▶ kzbin.info/www/bejne/fny7aZJ8Zqqlldk
🔗 Cloud SwarmUI Tutorial (Massed Compute - RunPod - Kaggle) ⤵ ▶ kzbin.info/www/bejne/jne4i6Kca7ieodk
🔗 SECourses Reddit ⤵ ▶ www.reddit.com/r/SECourses/
🔗 SECourses GitHub ⤵ ▶ github.com/FurkanGozukara/Stable-Diffusion
@BikingWIthPanda2 ай бұрын
8gb GPU lora will never look good lol needs more than batch size of 1. so many training sessions done for no reason
@LouisGedo2 ай бұрын
👋
@SECourses2 ай бұрын
Batch size 1 actually yields the best results, so they will look good but take longer. The only downside of 8 GB is that training is limited to 512px.
@SECourses2 ай бұрын
hello thanks for comment
@nhldesktop2 ай бұрын
I am not able to subscribe to your Patreon because my card is getting declined (I am from India, and Indian cards are currently facing this issue with every software service due to the new e-mandate rules by RBI). I can pay you on PayPal or something; I just need those files to run Kohya SS on RunPod. Please help.
@flintandsteel44572 ай бұрын
Perfect timing! Yesterday I spent hours on your blog post, finally training my first very slow Flux Lora last night! The whole time I was wishing you had a video on all of this. LOL. Thanks!
@SECourses2 ай бұрын
awesome
@devnull_2 ай бұрын
Thanks! I haven't yet gotten my setup fully configured, but this is very extensive. I did do quite a bit of LoRA, DB and TI training with SD 1.5, but then lost interest, so this is really nice for getting back up to speed. I don't quite see how anyone could complain about this kind of stuff, you've put a ton of your precious time into this.
@SECourses2 ай бұрын
@@devnull_ thank you so much
@ronnykhalil2 ай бұрын
what a resource!! thank you for always deep diving and bringing back the treasure
@SECourses2 ай бұрын
thank you so much
@yjf-eb1gz2 күн бұрын
Hello! I have encountered a problem, could you tell me how to fix it? Thank you very much. When I run the GUI on a cloud computing platform and load a config file, the parameters and args remain unchanged. I can save the parameters to a new config file, but I cannot load any.
@SECoursesКүн бұрын
It could be due to giving the path inaccurately. On RunPod, paths start with /workspace.
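For anyone hitting this on RunPod, the usual suspect is a Windows-style or relative path left inside the config. A quick sketch of what the paths should look like (folder names below are just placeholders):

    /workspace/kohya_ss/my_flux_lora_config.json    <- config file you load/save
    /workspace/training_data/img/1_ohwx man/        <- image folder
    /workspace/training_output/                     <- output folder

If the JSON was saved on Windows with C:\... paths, re-enter every path field with /workspace-based paths before loading or saving again.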
@johnnyapbeats2 ай бұрын
Your effort is amazing; just by captioning a 1-hour video you won my like and subscribe. Looking forward to seeing more amazing tutorials from you!
@SECourses2 ай бұрын
Thank you so much. I am editing the Massed Compute + RunPod tutorial at the moment. I fully caption all my videos manually.
@theironneon2 ай бұрын
Thank you so much! i've been watching your channel and keep waiting for this update
@SECourses2 ай бұрын
thank you so much. cloud tutorial also almost ready
@HolidayAtHomeАй бұрын
watched the whole thing. I'm glad that you finally work on a dataset with emotions ;)
@SECoursesАй бұрын
You are welcome. Yes, I have a 256-image dataset now and I am also experimenting with it on fine-tuning for the fine-tuning tutorial.
@marhensa2 ай бұрын
22:38 I read some intensive research on Reddit saying that captioning images for a Flux LoRA should be avoided if you are only training for facial likeness (not style, pattern, or photo angle); captioning faces only makes it worse. I tried it, and it turned out better for me with my simple dataset (2400 steps: 10 pictures, 10 repeats, 24 epochs).
@SECourses2 ай бұрын
You are using different parameters; maybe you can get better results with mine. Have you compared? I make comparisons with exactly the same setup.
@marhensa2 ай бұрын
@@SECourses It's linked in civitai articles/6982; here's the part that says: "I had joked to myself at the time, 'Hehe, it's like it doesn't even matter how you caption your images… you don't even have to caption at all!' Well, it turns out that's not a joke. It's exactly how it works." The finding is that there is no need to complicate captioning, because FLUX is already smart about concepts thanks to its T5-XXL and CLIP text encoders.
@Elwaves29252 ай бұрын
I also read that article and tried no captions with training on Replicate. I didn't do a comparison with captions but the results of my two captionless loras were fantastic, especially as some dataset images weren't great. They were both around 20 images and whatever the default settings are. I'm definitely going captionless for Flux training from now on.
@quercus32902 ай бұрын
@@marhensa I would take that with a pinch of salt.
@SECourses2 ай бұрын
in the video i say the same :) because flux has internal captioning + text encoder mechanism. so images are fully captioned
@1Know1tHurts2 ай бұрын
Furkan, thanks again for all the knowledge you share with the community! You are number 1 creator in AI space, in my opinion. I trained my first model today and results are great. I am looking forward to full model training. I trained 30 images and found out an interesting thing in my test grid: at 175 epochs model slightly opens her mouth much more than at 150 and 200 epochs🙂 Overall 125-200 epochs are all good but there are instances where 50 and 75 epochs do really well too. Flux is really impressive. Imo, it is better than Midjourney.
@SECourses2 ай бұрын
thank you so much. yes FLUX rules. hopefully full model training is my next research
@1Know1tHurts19 күн бұрын
Hi Furkan! Can you please confirm that using bf16 for Accelerate (7:45 mark in the video) is the best option? We later use fp16 in Kohya. Shouldn't we use bf16 or fp16 in both cases? Thanks.
@SECourses19 күн бұрын
When training, fp16 doesn't work; yes, I know it sounds ridiculous, but it comes down to hardware architecture, weight scales, and how mixed precision training handles them :)
@1Know1tHurts19 күн бұрын
@@SECourses Thanks a lot for letting me know how it works🤝
@SECourses18 күн бұрын
you are welcome
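For reference, a minimal sketch of the bf16 setup being discussed, assuming the standard Hugging Face Accelerate CLI (the training script name and --config_file value are placeholders; the Kohya GUI builds the real command for you):

    accelerate launch --mixed_precision bf16 flux_train_network.py --config_file my_flux_lora.toml

If the fp16 in Kohya refers to the save precision of the output file, the two settings do not conflict: bf16 is the compute precision during training, while fp16 only affects how the finished weights are written to disk.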
@sternkrieger1950Ай бұрын
The LoRA I trained with Kohya keeps coming out as an 8.4 GB file and cannot be used. Forge says it is corrupted with "HeaderTooLarge". My dataset is 3,000 images. I'm using the latest Flux branch, NOT the Dreambooth tab (the LoRA tab), and used the same parameters as you. Do you or anyone know what the issue could be? It's driving me nuts! I subscribed to your Patreon BTW.
@SECoursesАй бұрын
Hello, welcome. Please use my configs; the LoRA will come out at around 2.3 GB. You can reduce the size by saving as FP16. Also, fine-tuning will be even better, check this out: kzbin.info/www/bejne/fKfTiKxnrZqYqq8
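A sketch of the size-reduction step mentioned above, assuming the GUI's "Save precision" option maps to the usual kohya sd-scripts flag:

    --save_precision fp16

In the GUI this is the Save precision dropdown under the training parameters; it only changes how the finished LoRA is stored (roughly half the size of float/fp32), not how it is trained.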
@randysipe7952 ай бұрын
I'm not complaining just asking, are the links behind a pay wall? I see links on the page but nothing specifically for training flux Loras.
@SECourses2 ай бұрын
yes the link is behind paywall on patreon : www.patreon.com/posts/106135985
@mmilerngruppe2 ай бұрын
@@SECourses sorry, I would not use patreon for this, but what are the state of the art subject croppers nowadays?
@SECourses2 ай бұрын
subject croppers are really getting better
@mmilerngruppe2 ай бұрын
@@SECourses can you name some of them?
@SECourses2 ай бұрын
@@mmilerngruppe The latest YOLOv8, I think, to detect subjects.
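If anyone wants to try that, here is a minimal subject-cropping sketch using the Ultralytics YOLOv8 package; the model file, class filter, and padding margin are example choices, not settings from the video:

    from pathlib import Path
    from PIL import Image
    from ultralytics import YOLO  # pip install ultralytics

    model = YOLO("yolov8n.pt")  # small pretrained COCO model; class 0 is "person"
    src, dst = Path("raw_images"), Path("cropped_images")
    dst.mkdir(exist_ok=True)

    for img_path in src.glob("*.jpg"):
        result = model(str(img_path))[0]
        person_boxes = [b for b in result.boxes if int(b.cls) == 0]
        if not person_boxes:
            continue
        best = max(person_boxes, key=lambda b: float(b.conf))  # highest-confidence person
        x1, y1, x2, y2 = map(int, best.xyxy[0].tolist())
        img = Image.open(img_path)
        pad_x, pad_y = int(0.1 * (x2 - x1)), int(0.1 * (y2 - y1))  # 10% margin so the crop isn't too tight
        crop = img.crop((max(0, x1 - pad_x), max(0, y1 - pad_y),
                         min(img.width, x2 + pad_x), min(img.height, y2 + pad_y)))
        crop.save(dst / img_path.name)

You would still resize or center-crop the results to 1024x1024 (or let bucketing handle the aspect ratios) before training.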
@1Know1tHurts2 ай бұрын
Guys, if you have this error, let it be, it should not affect training (people already asked about this but I leave my comment also so those who have this issue find their answer faster): ERROR: Could not find a version that satisfies the requirement xformers==0.0.27.post2 (from versions: none) ERROR: No matching distribution found for xformers==0.0.27.post2
@SECourses2 ай бұрын
Hey, step 2 fixes it. Also, since we don't use xformers, it doesn't matter.
@candymancls8 күн бұрын
Thank you for this great tutorial. I have a question: I followed everything and used 54 pictures for the LoRA training. I have an RTX 4090 and chose Rank_3_T5_XXL_23500MB_11_35_Second_IT as the config file. I started the training 15 hours ago and I'm now on step 430/10800, so this will take forever. Is this normal, or should I choose a different config file or reduce the 200 epochs? In your video the steps go much faster; with these settings it takes 2 minutes or more per step for me.
@SECourses8 күн бұрын
That config requires huge VRAM and is not suitable for an RTX 4090, so you must be spilling into shared VRAM. Also, hopefully today I will update the configs and they will work faster. You should use rank 3 without T5 training. After today's update this T5 config may work fast as well, stay tuned.
@candymancls7 күн бұрын
@@SECourses Awesome, thanks for your reply
@SECourses7 күн бұрын
@@candymancls you are welcome
@candymancls7 күн бұрын
@@SECourses When I use the Rank 3 config without T5 training it gives me this error: ValueError: T5XXL is trained, so cache_text_encoder_outputs cannot be used. I'm trying to make the LoRA for the getphat FLUX Reality model from CivitAI, which is a Dreambooth checkpoint I believe. Do you know what causes this error? I've turned Cache Text Encoder and Cache Text Encoder Outputs to Disk off, and also Memory Efficient Save, and the training seems to start.
@SECourses7 күн бұрын
@@candymancls yes it is true. by the way currently i am doing huge tests to update configs. so many cases to test. if you join discord i can show :D
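For anyone hitting the same ValueError: it comes from two settings that cannot be combined. A rough sketch of the conflict, assuming the usual kohya sd-scripts FLUX options (the exact flag and checkbox names are my assumption and may differ slightly between branches):

    --network_args "train_t5xxl=True"     (the LoRA also trains the T5-XXL text encoder)
    --cache_text_encoder_outputs          (pre-computes text encoder outputs, which requires them to stay frozen)

If you want to train T5-XXL, turn off both "Cache text encoder outputs" options, as the commenter did; if you are not training the text encoders, leave caching on because it saves VRAM.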
@moon47usaco2 ай бұрын
Your videos are getting much better. Much easier to understand. I know you have a lot to say and it's exciting. =] The slower pace is easier to listen to, much appreciated... =]
@SECourses2 ай бұрын
thank you so much. i will try to improve further
@EternalAI-v9bАй бұрын
Hello, I remember a Stability AI guy telling you once on Reddit that using ohwx was not the best, and that sometimes it was better to use normal words? Do you remember, and what do you think?
@SECoursesАй бұрын
yes i know. for flux it really doesnt matter much you can use either :)
@dconcorde6677Ай бұрын
Thank you for the tutorial! I have a question: I make a LoRA with float weights and it comes out at around 1 GB. There is a tool in Kohya, the LoRA resize tool, that can compress the file from 1 GB to 36 MB. My question is: have you tested it, and what settings would you recommend? Thx.
@SECoursesАй бұрын
That is in my plans, I haven't tested it yet. But FP16 saving works great, and I have tested network dimension (rank); lowering it reduces quality while training. I also tested exporting a LoRA from fine-tuning: Detailed LoRA extraction guide and tests from FLUX fine-tuned models: www.patreon.com/posts/112335162
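Since the resize tool came up, here is a rough sketch of how it is typically invoked in kohya sd-scripts; the script path, rank, and precision values are illustrative rather than recommended settings, and whether it handles FLUX LoRAs as cleanly as SD ones is worth verifying:

    python networks\resize_lora.py ^
      --model my_flux_lora.safetensors ^
      --save_to my_flux_lora_resized.safetensors ^
      --new_rank 8 ^
      --save_precision fp16 ^
      --device cuda

Lower --new_rank values shrink the file further but throw away more of the learned detail, which matches the quality drop mentioned above for low ranks.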
@pk.94362 ай бұрын
Wow im impressed, this is a lot of information, amazing and thank you for this 💪
@SECourses2 ай бұрын
@@pk.9436 thank you so much
@EternalAI-v9bАй бұрын
You are saying that torch 2.5.0 could make it faster? Do we simply need to update torch, or do we have to modify a lot of stuff? (I mean, can I just update to a new torch and that's it?)
@SECoursesАй бұрын
100% on Windows it is way faster compared to torch 2.4
Ай бұрын
Great as always. I have not watched the whole video yet, but I will. Just a quick question: can I use the trained character as a model to generate specific poses with ControlNet? Did you cover that in this video? I am working on an NSFW project and I need particular poses, can you kindly answer? Thank you.
@SECoursesАй бұрын
Thanks. If you have a working ControlNet for FLUX it would work, regardless of what you want to generate.
Ай бұрын
@@SECourses Actually i am working on SD 1.5 so can I train and generate?
@SECoursesАй бұрын
yes sd 1.5 works but its quality is nothing like flux
@BabylonBaller2 ай бұрын
Just signed up to support you brother. It is very much worth it, saved me a ton of valuable time.
@SECourses2 ай бұрын
awesome ty so much. also just updated configs for 8, 10, 12, 16, 24 and 48 gb gpus
@BabylonBaller2 ай бұрын
@@SECourses Geez, I just started the training according to everything you mention in this video. On my 3090, 15 images, 200 epochs, it says it's going to take 8 hours! How much faster would it be on Massed Compute on something like an H100 or similar? Maybe that chip is too pricey.
@SECourses2 ай бұрын
@@BabylonBaller on massed compute rent 4x A6000, use 4x_GPU_Rank_1_FAST_Lower_Quality and it takes 1 hour for 3000 steps :) great results. costs 1.25 USD
@BabylonBaller2 ай бұрын
@@SECourses awesome I'll look into it
@SECourses2 ай бұрын
@@BabylonBaller great ty
@WatchShadowHeadАй бұрын
Hello, I have a question. Does this training process, or FLUX, work well for training Asian characters? I noticed that the SDXL worked very well for training Western characters, but the results for training Asian characters, whether male or female, were not very good.
@SECoursesАй бұрын
You are absolutely right about SDXL and Asian faces. Sadly, I don't know for FLUX yet; I haven't had any Asian client to train for yet.
@WatchShadowHeadАй бұрын
@@SECourses ok,thanks.
@cyberbolАй бұрын
Can I use this LoRA model of my face later in Stable Diffusion, or must it be Forge, Comfy, or Swarm?
@SECoursesАй бұрын
Once the Stable Diffusion web UI starts supporting FLUX you can use it there; I think it still doesn't support it yet. I assume you mean the Automatic1111 web UI.
@ihsasss2 ай бұрын
You are always doing amazing ❤❤❤
@SECourses2 ай бұрын
thank you so much
@VigilenceАй бұрын
The huggingface section appears in the bookmarks but not in the actual video, do you have a video covering this?
@SECoursesАй бұрын
yes i have a video : kzbin.info/www/bejne/jma6h41mg7KUisk also this video shows : kzbin.info/www/bejne/Y6bLfWWkjJx3mtk
@VigilenceАй бұрын
@@SECourses Ty!
@SECoursesАй бұрын
@@Vigilence you are welcome
@TomiTom12342 ай бұрын
Amazing tutorial and awesome work man, thank you! Just a Q: What does the sentence in the prompt do? It adds more time to generate a photo, is it necessary?
@SECourses2 ай бұрын
@@TomiTom1234 It masks the face and inpaints it with 70% denoise, like ADetailer. Not mandatory, but it improves the face.
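For readers wondering what that looks like in practice: SwarmUI's automatic face inpainting is driven by a <segment:...> tag inside the prompt. A rough example follows; the 0.7 value mirrors the 70% denoise mentioned above, but the exact parameter order is from memory, so check the SwarmUI prompt-syntax docs:

    photo of ohwx man wearing a suit in an office <segment:face,0.7> detailed photo of ohwx man's face

Everything after the tag is used as the prompt for the masked face region, which is why it helps likeness when the face is small in the frame.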
@christofferbersau6929Ай бұрын
Looks like a great tutorial! However, I'm not able to pick FLUX as an option on the LoRA page in kohya_ss?
@SECoursesАй бұрын
When you select the FLUX dev model path on your device, it will be enabled. I show this specifically in the new tutorial, hopefully coming soon: full fine-tuning / DreamBooth.
@nomorejustice25 күн бұрын
tysm for all of your effort man, GBU always!
@SECourses25 күн бұрын
ty so much. check out fine tuning tutorial too it is even better : kzbin.info/www/bejne/fKfTiKxnrZqYqq8
@nomorejustice24 күн бұрын
@SECourses thanks for the link man 🙏
@SECourses24 күн бұрын
@@nomorejustice you are welcome thanks for comment
@user-nb6kx3qn3pАй бұрын
Sorry to ask, but I trained a character LoRA on Fal and used it in their playground, and it worked perfectly, with the character resembling 100%. However, when I use the LoRA with the trigger word in my locally installed ComfyUI FLUX-dev model, it resembles only 60% at LoRA strength 7. If I keep the LoRA strength at the normal 1, it shows a generic character and the resemblance to my trained character is 0%. Can you tell me what is wrong with the model trained on Fal and why I am having such issues?
@SECoursesАй бұрын
no idea. train on massed compute cheaper and better
@raffactb102 ай бұрын
This video is gold👏🏾
@SECourses2 ай бұрын
thank you so much
@csp0t7992 ай бұрын
I appreciate all the hard work you do, empowering more people who don't have as much time or just simply need guidance. You're a legend
@SECourses2 ай бұрын
Thank you so much for the comment
@AgustinCaniglia19922 ай бұрын
Your videos are very good I will for sure try your settings..
@SECourses2 ай бұрын
thank you so much
@EvgenyCh-th8dcАй бұрын
In the updated version of Kohya there are no Network Rank and Network Alpha parameters, please comment.
@SECoursesАй бұрын
I see they are there. Please do a fresh install, since it has moved to Gradio 5. The fields are Network Rank (Dimension) and Network Alpha (for LoRA weight scaling).
@eriskendaj21402 ай бұрын
The link in the description points to another SwarmUI Tutorial, not the one you're following in your video. Do I have to become a patreon member to access that?
@SECourses2 ай бұрын
i just fixed the link thank you so much here : www.patreon.com/posts/110879657
@WorkAtHome-RemoteJobs2 ай бұрын
Do you have a tutorial on the downloaded version of Flux Webui that is downloaded from Pinokio? I'm looking to train my lora on my computer.
@SECourses2 ай бұрын
no i dont have any tutorial for Pinokio . but you can see how to use flux in this tutorial : kzbin.info/www/bejne/mKbTg5iGirR0Z5o
@tazztone2 ай бұрын
just in time as i was looking for a guide
@SECourses2 ай бұрын
Awesome. Thank you so much for the comment
@1Know1tHurts2 ай бұрын
Furkan, thanks for the configs for full model training. I haven't thoroughly tested Lora yet and you already have configs for the full model. Thanks a lot! I really appreciate your hard work, man🙏
@SECourses2 ай бұрын
thank you so much as well
@yashrami253118 күн бұрын
Will it work well on garments or any other product?
@SECourses18 күн бұрын
Yes, it works with everything as long as your dataset is good. You can read our style training tutorial here: huggingface.co/MonsterMMORPG/3D-Cartoon-Style-FLUX. Exactly the same workflow was used and it worked perfectly.
@guangyuniu7852 ай бұрын
Great tutorial!!!! Thx a lot, here is one question: I notice that there is a "segment face" tag in the prompt, is that a unique feature of SwarmUI? If I want to use a similar feature in ComfyUI, do I need a FaceDetailer node, or are there better ways to do that? Because I notice the likeness drops when the face is not very big in the image.
@SECourses2 ай бұрын
yes it is feature of SwarmUI and works amazing. probably same can be done with comfyui but don't know how to :)
@UnlimitedGRenemy2 ай бұрын
Hey, I posted a comment before that might have been removed because I used a link. Anyway, the question: a guy suggests that when training a character you should caption only the things that change, like expressions, direction of the head and body, or gaze. Do you know if that works better than just using the ohwx man caption?
@SECourses2 ай бұрын
i am doing a huge expression training right now and i didnt mention any expression and it understood my expressions and now does them according to the prompt even if i dont mention. sharing results on this post : www.patreon.com/posts/training-flux-so-111891669
@UnlimitedGRenemy2 ай бұрын
@@SECourses oh great even less effort from us !! Thanks
@SECourses2 ай бұрын
@@UnlimitedGRenemy yep 100%
@vieighnsche2 ай бұрын
I have 3090 24G to train lora with, I'm using rank_3_slow, like you suggested. I still get OOM. 0MB of VRAM is used before starting training because I use my second GPU for display. Do you maybe know why this is? Edit: Rank_4_fast also gives me OOM; Edit2: Even Rank_9 is giving me OOM, so there must be something wrong with the configuration, or the latest build of Kohya Edit3: It's working now. My mistake is that I was in the Dreambooth tab, and not in the Lora path. Really love the course !
@SECourses2 ай бұрын
yes i was gonna tell you that dreambooth tab error glad you fixed :) also i got a better config hopefully updating the patreon post today.
@RANJEET39392 ай бұрын
Hi, why didn't you use comfy UI and AI toolkit for training and generating images?
@SECourses2 ай бұрын
Because Kohya and SwarmUI are easier and working perfect
@divye.ruhela2 ай бұрын
When generating using Flux model on Swarm UI, does the Flux1-dev model not need selection of VAE, etc.? I ask because we selected VAE and Clip during Kohya training. Is it not needed here?
@SECourses2 ай бұрын
i think it auto downloads VAE and Clip L. also auto downloads FP8 T5 XXL
@divye.ruhela2 ай бұрын
@@SECourses It auto-downloaded clip_l & t5xxl_enconly, but I had to provide 'ae' VAE manually. It was throwing an error without the VAE.
@SECourses2 ай бұрын
@@divye.ruhela i see thanks for the info.
@chrisdvo9910Ай бұрын
Really, I don't get it. I followed all the tutorials. I even used the new, optimized JSON files. I have a set of 100 pics. It trains, but ALL of the tensor files come out at about 48 GB and ALL of them produce nonsense. I have the pics with background and without background. When it comes to SwarmUI, nothing works. So it's better to pay for this?
@SECoursesАй бұрын
well 100s of people trained amazing models with my config :) and i did over 120 trainings so far
@lahiruwijesinghe11462 ай бұрын
Appreciate your efforts on this video 👌🏼 I have rtx 2060 laptop with 6gb of vram. Is there any point in me trying this? I tried kohya ss following your guide. It works perfectly
@SECourses2 ай бұрын
nope you need to use cloud to train : kzbin.info/www/bejne/Y6bLfWWkjJx3mtk
@PranavVarma-e3e2 ай бұрын
For me after step one, when i double click the run file the cmd terminal opens but closes in 2 seconds. Also when I hit 5 it says "OSError: [WinError 126] The specified module could not be found. Error loading "F:\Kohya_GUI_Flux_Installer_21\kohya_ss\venv\lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies"
@SECourses2 ай бұрын
i have added how to fix fbgemm dll error to the windows requirements section. you need to download dll. it is torch error. happens when you dont have c++ tools and visual studio
@ArtificialHorizonsАй бұрын
Are you planning to create a Colab notebook? Would really help!
@SECoursesАй бұрын
only pro colab would work. massed compute way cheaper and faster
@ArtificialHorizonsАй бұрын
@@SECourses I have colab pro so wouldn't mind. Would rather run that than set it up on my own. Too complicated and lengthy process to be honest
@SECoursesАй бұрын
@@ArtificialHorizons i am pretty sure you can use my runpod installer should be easy to use
@andrikurniawan5312 ай бұрын
awesome work man, will try with my 3060ti, finger crossed
@SECourses2 ай бұрын
awesome. make sure to reduce your vram usage to like 400 500 mb before starting training since that gpu has only 8 gb vram :/
@teeteetuu942 ай бұрын
Unfortunately, I was unable to get it working with my 3070 Ti (laptop, 8GB) and 16GB sys RAM, even after creating a 64GB swap space for it to spill over. It just throws this error: "getting error RuntimeError: unable to mmap 23xxxxxxxxx bytes from file : Cannot allocate memory (12) " before it even fills up any of the memory space available.
@SECourses2 ай бұрын
sadly your 16 GB RAM is the limitation. You need to have more :/ I think at least 32 GB RAM is necessary
@teeteetuu942 ай бұрын
@@SECourses Even with a sizable swap? Does it need the entire model loaded in one single chunk?
@SECourses2 ай бұрын
@@teeteetuu94 no actually it does swap and split. can you download fp8 base model and enable fp8 unet option and try again?
@Pstkolade2 ай бұрын
excellent tutorial and amazing results, please can you make a video for training LoRa using MacBooks
@SECourses2 ай бұрын
you can't train on MAC please use massed compute : kzbin.info/www/bejne/Y6bLfWWkjJx3mtk
@เหงียนดึงไข่Ай бұрын
Please teach me how to make a 1950s-style short film where the characters have a vintage look but a full-course charm.
@SECoursesАй бұрын
just collect that dataset and train as a style with following this tutorial : kzbin.info/www/bejne/fKfTiKxnrZqYqq8
@deonix952 ай бұрын
OSError: [WinError 126] Specified module not found. Error loading "D:\Programs\kohya_ss-master\kohya_ss\venv\lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies. help me pleas
@SECourses2 ай бұрын
yes the solution is posted under windows requirements section of the patreon post please check it. it fixes that error
@anshulsingh83262 ай бұрын
I wanna make Lora but damn this video is long. It will take me days to do all these
@SECourses2 ай бұрын
if you are short on time i give private lecture 1 hour 125$
@MrGTAmodsgerman2 ай бұрын
When it's bad to caption an object or whatever, how do I then train it in a way where I control what type of variation it generates from the dataset? For example, when training a person with several specific outfits, those were normally captioned so you could trigger that specific type of clothing from the training images.
@SECourses2 ай бұрын
You should have the person wear that clothing all the time in the dataset, no different outfits. Then after training, when generating images, if you don't describe the clothing, whatever he wore during training should come through.
@shineson44192 ай бұрын
I watched your video and followed the steps exactly, but when I start the training, it ends in just 5 seconds without displaying any error messages or creating any files. This also happens when I switch the pretrained model to SD1.5 or SDXL. What should I check?
@SECourses2 ай бұрын
If there is no error, that likely means you are running out of RAM. Set virtual memory to 50 GB, restart the PC, and let me know the results. Also, what is your GPU?
@shineson44192 ай бұрын
@@SECourses It looks like it worked! After increasing the virtual memory to 50GB, the training has been running now. However, it still feels a bit slow. (It's been stuck at epoch 1/200 over 10 minutes.) My computer's CPU is AMD Ryzen 5600-6Core and NVIDIA RTX 4070super.
@shineson44192 ай бұрын
@@SECourses It shows steps: 0%| | 2/2000 [14:31
@SECourses2 ай бұрын
Yes, it is using shared VRAM. I assume you have 8 GB. Can you verify your VRAM usage before starting to train? It should preferably be less than 500 MB.
@Elwaves29252 ай бұрын
Perfect timing, especially as it's possible on 12Gb GPU's. I've done a little Flux training on Replicate and CivitAI and was going to try some others to find the one that suited me best. They worked brilliantly but to be able to train them myself would be even better....and cheaper. Thank you kindly.
@beatemero67182 ай бұрын
How long would it Take to train with a 12gb rtx3060? Would it even be worth it, or work at all? I always avoided training loras because my Hardware is relatively weak. I'm glad that I can barely use flux1-dev
@SECourses2 ай бұрын
thank you so much. i compared my workflow with CivitAI default and ours way better
@SECourses2 ай бұрын
It depends on the number of steps, but let's say you train 2000 steps; it should be done within about 12 hours. With the arrival of torch 2.4.1 I expect it to be faster. You can also use the torch 2.5 nightly version, which speeds it up.
@Elwaves29252 ай бұрын
@@beatemero6718 No idea yet, I'm still listening to the tutorial (thanks to interruptions on my end), so I'll likely give it a go at the weekend. Flux seems a lot easier than earlier models to get great results. I have the same GPU as you and I'm expecting it to be slow. So like you, it's whether it's worth the time investment vs the cost of online services.
@SECourses2 ай бұрын
So true. Way easier to get amazing Realism
@Vigilence2 ай бұрын
What is the best website to rent the cheapest GPU? 48+ GB. I read that Chinese modded 4090s with 48 GB VRAM cost pennies to rent per hour on Chinese servers.
@SECourses2 ай бұрын
yes i saw that too but i don't know how :/ currently massed compute with our coupon : kzbin.info/www/bejne/Y6bLfWWkjJx3mtk
@GihasyАй бұрын
Thanks for your effort!
@SECoursesАй бұрын
thank you so much for the comment
@CanCan-nl4ce2 ай бұрын
Hello Furkan, first of all thank you very much for your tutorial. I also want to do FLUX training, but what should I pay attention to when selecting photos? I mostly used photos from a single angle and one of my trainings didn't give good results. I'd really appreciate your help regarding variety and size.
@SECourses2 ай бұрын
For FLUX I'd say include as many poses, angles, and facial expressions as possible, and also use different clothing and different backgrounds. First train with everything at 1024x1024, then try bucketing afterwards if you want.
@MyAmazingUsername2 ай бұрын
This is incredible, thank you so much for teaching us in such detail! Does anyone know the rough training time estimate for a 24 GB RTX 3090 GPU?
@SECourses2 ай бұрын
If you use torch 2.5 it will be faster. Very likely under 5 hours, depending on the number of images you have.
@MyAmazingUsername2 ай бұрын
@@SECourses Thank you, that's inspirational. I was afraid that it would be 1 day per test! I'll be following your guide in detail. :)
@SECourses2 ай бұрын
@@MyAmazingUsername yep testing is expensive :)
@MyAmazingUsername2 ай бұрын
@@SECourses Yeah I can imagine the electricity already. :) By the way, OneTrainer is close to having FLUX LoRA training. It allows you to do things such as masked training (to separate subject from background for better character likeness), and also built-in captioning. Keep an eye out for that and try it later. :)
@SECourses2 ай бұрын
@@MyAmazingUsername ye the workflow will directly run there i expect :D
@KirillKulakov-iv1pm2 ай бұрын
You mentioned something about torch not being as performant at 2.4.0, but I cannot seem to find anything in the change logs that provides more details about what fix was issued in 2.4.1. Can you share more details or a link?
@SECourses2 ай бұрын
here check the threads. torch 2.5 definitely fixes and i hope torch 2.4.1 will too : github.com/pytorch/pytorch/milestone/47?closed=1
@Jaysunn2 ай бұрын
do the training images have to be 1024 x 1024 ? what about bigger? smaller?
@SECourses2 ай бұрын
It supports all sizes with bucketing, but I say do 1024x1024 first to have a baseline, then do bucketing and compare. 1024x1024 works best.
@M1cler2 ай бұрын
Dr., you are seriously a lifesaver! Cheers!
@SECourses2 ай бұрын
thank you so much for the comment
@NDR0082 ай бұрын
Is it possible to make a LoRA with 2 different characters of the class man? I tried preparing the dataset as "1_johnQ1 man" and "1_bobQ1 man", but no matter whether I use the trigger word johnQ1 or bobQ1, the result looks like a blend of the 2 characters. Is it simply not possible?
@SECourses2 ай бұрын
People are reporting the same issue. Your best chance is to train them like this: person A is ohwx and person B is bbuk, and don't use a class prompt. If you do, can you let me know the results? Also, I am adding text encoder training to the configs in about an hour, and that can improve your results. Check the post again once it is updated.
@NDR0082 ай бұрын
@@SECourses will give it a try
@SECourses2 ай бұрын
Great. Sorry that the configs are still not updated; I am still trying to decide on the best ones.
@NDR0082 ай бұрын
@@SECourses I tried that, it did not work. I wonder if the issue is that the prompt is not unique enough? They are something like JohnZ and MatthewZ maybe I need something more dramatic like JZ20X and MZ30Y?
@SECourses2 ай бұрын
@@NDR008 Sadly we don't know how T5 tokenizes. Also, when training, FLUX has internal encoding, so it knows you are training the man class in both image sets. I added a CLIP-L config; can you try it with ohwx and bbuk? It may still bleed though. Maybe fine-tuning can fix this.
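To make the suggested layout concrete, here is a sketch of how the two-identity dataset could be organized without a shared class word; the folder-name prefix is the repeat count in Kohya's folder convention, and the token names are just the placeholders used above:

    img/
      1_ohwx/   <- all photos of person A, trigger word "ohwx"
      1_bbuk/   <- all photos of person B, trigger word "bbuk"

As noted in the thread, even with distinct rare tokens the two identities can still bleed into each other inside a single LoRA, so training one LoRA per person (or fine-tuning) remains the safer route.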
@kiransurwade35762 ай бұрын
🙏🏻Is this possible with 6GB card? GTX 1060 ?.... please reply 🙏🏻
@SECourses2 ай бұрын
sadly not possible. but you can train on cloud, full tutorials coming for this, and use on your gpu
@P.L.8082 ай бұрын
Thank you Furkan 😊 Much appreciated 😊 Haven’t had much time with Flux or training it yet but this is a very valuable video. Thanks 🙏
@SECourses2 ай бұрын
thank you so much
@Yojimbo-h8r2 ай бұрын
You're a superhero
@SECourses2 ай бұрын
thank you so much for the comment
@exelyugure2 ай бұрын
downloaded all the models, but for some reason kohya only gives me V2, V_parameterization, and SDXL, no option for flux1
@SECourses2 ай бұрын
are you on flux branch? did you load into lora tab? once you select model from your computer it enables flux
@exelyugure2 ай бұрын
@@SECourses Ah, my bad, I hadn't switched to the flux branch. Missed that part. Thank you for the reply!
@SECourses2 ай бұрын
@@exelyugure sure you are welcome
@ericgormly59902 ай бұрын
You are an amazing hard working resource for us all. Thank you.
@SECourses2 ай бұрын
thank you so much I appreciate your comment
@amird84002 ай бұрын
How do I use 4 RTX 3070 GPUs together on a local PC? Just configure Kohya to use multi-GPU?
@SECourses2 ай бұрын
Yes, and also reduce the number of epochs by 4x. I showed and explained this in detail in the cloud tutorial; the video is hopefully coming today or tomorrow.
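A minimal sketch of what the multi-GPU launch boils down to, assuming the Accelerate-based command the Kohya GUI normally generates (script name and config file are placeholders):

    rem 4 GPUs process 4 images per optimizer step, so dividing the epoch count by 4
    rem keeps the total number of weight updates comparable to single-GPU training.
    accelerate launch --multi_gpu --num_processes 4 --mixed_precision bf16 ^
      flux_train_network.py --config_file my_flux_lora.toml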
@nguyenhongduong29062 ай бұрын
Hello, in the first installation step I see a message like this: ERROR: Could not find a version that satisfies the requirement xformers==0.0.27.post2 (from versions: none) ERROR: No matching distribution found for xformers==0.0.27.post2 Does it have any effect?
@SECourses2 ай бұрын
It doesn't have any effect, as I said in the video, since we don't use it during training. It is due to the torch versions used, but it will also get fixed eventually; no issues.
@nguyenhongduong29062 ай бұрын
@@SECourses Yes, thank you for this video it helped me to install and train lora successfully on my machine, you are awesome! before that i fumbled for 2 days to install but still failed!
@SECourses2 ай бұрын
awesome. hopefully cloud tutorial coming too very soon
@mr.entezaee2 ай бұрын
When the installation is done, it is still showing Kohya_ss GUI version: v24.1.7. What should I do to update?
@SECourses2 ай бұрын
Aren't you using our installer? It installs the correct branch, sd3 flux 1; the current version is 24.2.0.
@mr.entezaee2 ай бұрын
@@SECourses Unfortunately, I can't get a subscription from you.. I couldn't get access. And there is no other tutorial on how to update it😢😢
@SECourses2 ай бұрын
@@mr.entezaee watch this tutorial you will learn it : kzbin.info/www/bejne/Y3_Nf6xtlsuCh5I
@김태형-h6t1c2 ай бұрын
Thank you so much for making the video easy and simple. My question is: I'm trying to do DreamBooth training, not LoRA, through diffusers, but I think OOM is happening. Have you tried it? I tested it on 3x A100 80 GB.
@SECourses2 ай бұрын
wow it is huge. i plan to do dreambooth via kohya with as low as 12GB hopefully this week :) probably you are missing a lot of optimizations. let me research this hopefully this week
@김태형-h6t1c2 ай бұрын
@@SECourses Wow! This is a really cool project. I look forward to it. Thank you.
@SECourses2 ай бұрын
@@김태형-h6t1c thank you
@SouthbayCreations2 ай бұрын
Fantastic video! Thank you for sharing the knowledge!
@SECourses2 ай бұрын
thank you so much
@quercus32902 ай бұрын
With regards to repeats and multiple concept folders: if you leave repeats at 1 for all folders but have an image imbalance, say 100 images in folder 1 and 20 images in folder 2, surely adding 20 (different) images to folder 2 would be the preferred solution versus setting repeats to, say, 2 or 5 on that folder (to match folder 1) and training the same images as duplicates. If no more images can be made available or sourced, would augmentations then not be the next best course of action to effectively pad out the imbalance? I feel a high number of repeats on low image counts can severely hamper generalization and should maybe be used only as a last resort. I could of course be absolutely wrong.
@SECourses2 ай бұрын
It should be used as a last resort. Try to collect the same amount of images for the most balanced training.
@quercus32902 ай бұрын
@@SECourses Is it possible to offset any such imbalance with a different LR for separate folders, or does the LR have to apply to the LoRA as a whole?
@SECourses2 ай бұрын
I don't know if Kohya has such a feature; you should ask the Kohya project.
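For a concrete picture of how the folder-prefix repeats interact with the imbalance discussed above (numbers are illustrative, following Kohya's repeats_conceptname folder convention):

    img/
      1_conceptA/   100 images x 1 repeat  = 100 samples per epoch
      5_conceptB/    20 images x 5 repeats = 100 samples per epoch

Repeats only rebalance how often each folder is sampled; they add no new information, which is why gathering more (different) images, or augmenting, is generally preferable when possible.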
@harshdurgude99002 ай бұрын
These are great tutorials. Does anyone know any other channels like this that are focused on text or diffusion models?
@Najdmie2 ай бұрын
Thank you on behalf of us poor peasants with low VRAM GPU.
@SECourses2 ай бұрын
you are welcome. just updated latest configs. has a specific config for 8, 10, 12, 16, 24 and 48 gb gpus :)
@cmeerdo2 ай бұрын
Incredible content - thank you
@SECourses2 ай бұрын
thank you so much
@bollihotshots1752 ай бұрын
"Great tutorial! I learned a lot from this video. However, I'm still running into problems during my training. My setup includes 64 GB RAM, a 24 GB Nvidia 4090 GPU, and I've disabled Windows GPU acceleration. I'm also using an up-to-date version of kohya_ss. Despite setting the batch size to 1, I keep getting 'CUDA out of memory' errors. Additionally, the first step takes an unusually long time and reads: | 1/10200 [12:48
@SECourses2 ай бұрын
This happens when you load your config even once into the DreamBooth tab; it gets corrupted. Load a fresh config from the zip file, load it into the LoRA tab, and set everything up again. Also, how much VRAM are you using before starting to train?
@Archilives_UE5_Ai2 ай бұрын
Thanks for sharing useful information about Vram
@SECourses2 ай бұрын
you are welcome thanks for comment
@sevincrz32 ай бұрын
Hello Furkan. First of all, thank you for such an excellent lesson. I installed Kohya and it opens, but I get an "Exception in ASGI application" error, for example when trying to select a folder. What can I do? Since I'm not a developer, I'm having a hard time.
@SECourses2 ай бұрын
Download our latest zip file, reinstall, and run step 2. Also, Kohya has fixed the bug too. The library called FastAPI broke all Gradio apps.
@chamathzoysa2 ай бұрын
After installing the Kohya GUI I get "ERROR: Exception in ASGI application." Any solution?
@SECourses2 ай бұрын
@@chamathzoysa Yep, use v30 and step 2 fixes it. It is a global FastAPI error that broke all apps, but I fixed it.
@ThefrypodiPod2 ай бұрын
Swarm fails if I keep 11.8 cuda installed. I even installed 12.6 and lowered 11.8 in the variables. I had to remove it entirely to test swarmui. ComfyUI execution error: _scaled_mm_out_cuda is not compiled for this platform.
@SECourses2 ай бұрын
SwarmUI uses its own precompiled Python, not your system install. I use CUDA 11.8 on my system and it works perfectly, but I am going to ask the SwarmUI developer about this error now.
@SECourses2 ай бұрын
We got an answer: remove --fast or reinstall. "that's the error from trying to use --fast on an outdated torch iirc"
@ThefrypodiPod2 ай бұрын
@@SECourses Thanks, I actually did that by guessing randomly. Made another lora today and forgot to reinstall cuda 11.8. The lora training worked with cuda 12.6. I still had visual studio 11.8 integration installed though.
@SECourses2 ай бұрын
@@ThefrypodiPod great
@tungstentaco4952 ай бұрын
Does training a lora on a 22Gb model still work with 8, 12, and 16Gb GPUs? I thought it had to load the model into vram to train the lora, so 22Gb model would obviously not work for those cards.
@SECourses2 ай бұрын
The trainer has a split-training option: it swaps model weights in and out and trains single layers at a time. That is how it works. It also slows training down 3-5x, sadly. For 16 GB I have a slightly lower quality fast training option though, so 16 GB can get the best of both worlds; 24 GB works perfectly.
@tungstentaco4952 ай бұрын
@@SECourses Ah, that info helps. I've been using a 11Gb model to generate images, but when I use it for training with your tutorial, I get errors. I believe I'm using the appropriate vae, clip and T5xxl for the model I have, but they are not the ones you're using in this tutorial. I do have a 4060ti 16Gb so that helps. I'm patient, so I'm good with slow for quality. I just can't get it to train at the moment.
@SECourses2 ай бұрын
please use the downloader i have in zip file and set paths as i have shown in the tutorial. this should fix your errors
@tungstentaco4952 ай бұрын
@@SECourses Yep, that was it. Using rank 5, my 4060Ti 16Gb is going 19.4s/it, but it is training now. Thanks for your help!
@SECourses2 ай бұрын
awesome. hopefully i will update configs. you can also use fast 16 GB config
@travislittle43812 ай бұрын
I will look forward to seeing how you might do this using Colab.
@SECourses2 ай бұрын
you need a paid colab. i have recorded runpod and massed compute tutorial. hopefully it will be published tomorrow
@genPackman2 ай бұрын
Thanks for the great video! Is there a way to run Kohya in Google Colab? I have only 6 GB VRAM =(
@SECourses2 ай бұрын
if you have pro colab yes.
@stevietee38782 ай бұрын
When I select option 1 the terminal flashes and then closes without installing anything. (v. 21) I know you specified to use Python 3.10.11 but most of my venvs (including ComfyUI) have been created using Python 3.10.6, will a Python update to 3.10.11 break those venv installs ?
@SECourses2 ай бұрын
It should work with Python 3.10.6. Can you open a cmd, run the installer bat file in that cmd, and tell me what you see? Better if you join Discord. But you can upgrade to 3.10.11, and if the other venvs get broken you need to recreate them; they will work after that.
@stevietee38782 ай бұрын
@@SECourses thanks, I'll test the installer bat file shortly.
@stevietee38782 ай бұрын
@@SECourses after opening a cmd and running the bat file here are the results, it seems I do require Python 3.10.11:
    21:53:56-650604 INFO Kohya_ss GUI version: v24.2.0
    21:53:56-668562 INFO Python version is 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
    21:53:56-676438 ERROR The current version of python (sys.version_info(major=3, minor=10, micro=6, releaselevel='final', serial=0)) is not appropriate to run Kohya_ss GUI
    21:53:56-684246 ERROR The python version needs to be greater or equal to 3.10.9 and less than 3.11.0
    E:\Kohya_GUI_Flux_Installer_21\kohya_ss>
@SECourses2 ай бұрын
@@stevietee3878 awesome
@panwar98482 ай бұрын
Thanks for the very informative content. How can I run all of this on Replicate, is there any guide?
@SECourses2 ай бұрын
I prepared a Massed Compute and RunPod tutorial; hopefully it will be published today.
@GeneralKenobi694202 ай бұрын
I'd be more interested in how to make a proper full finetune, especially since Kohya apparently supports it now. I have millions of images, I don't want to use some weird tokens for every little thing like "a ohbx man in a sozednl car" or whatever, I just want to use natural text.
@SECourses2 ай бұрын
Full fine-tuning is my next research. With my multi-GPU JoyCaption editor, and after I find the best config, hopefully you will be able to do a full fine-tuning with multi-GPU via Kohya. I can even give you a private lecture on this once my fine-tuning workflow is ready.
@divye.ruhela2 ай бұрын
@@SECourses I would love to hear more about this. Hope I don't miss out when the workflow is ready! Tuned in!
@SECourses2 ай бұрын
thanks. yes i will share progress too
@GomezBro2 ай бұрын
Thank you brother! LFG!
@SECourses2 ай бұрын
thank you so much. editing cloud video
@sumitmamoria2 ай бұрын
How do the resource usage and results compare to AI Toolkit?
@SECourses2 ай бұрын
I haven't tested it, but with Kohya as little as 7.5 GB is possible.
@mojay_6192 ай бұрын
Sir, will u publish procedure for doing it via kaggle?
@SECourses2 ай бұрын
It is impossible on Kaggle because the GPUs on Kaggle don't support BF16. FLUX training fails on FP16-only cards.
@thonghoang1792 ай бұрын
Hello! I have followed you and tried to train a FLUX LoRA with a 2080 Ti GPU (11 GB VRAM), but I get an error when starting training. I chose the rank 5 config for my GPU and the error appears when I click the start button.
@SECourses2 ай бұрын
Hello. Is your GPU supporting BF16? what is the error message?
@thonghoang1792 ай бұрын
@@SECourses I have fixed it! But please give me a contact so I can direct message you. I see your config runs at 800 s/it, which is very slow compared to the ComfyUI FLUX trainer or FluxGym. Please check your settings! Thanks.
@SECourses2 ай бұрын
@@thonghoang179 My rank 5 config requires 11700 MB, so of course it would be 800 seconds/it on your card :) You should use rank 7 or rank 8. By the way, to contact me: monstermmorpg@gmail.com
@PiotrGarryWysocki2 ай бұрын
That single "ohwx man.txt" in the dataset, does it contain anything or is it only an empty text file?
@SECourses2 ай бұрын
It was used for OneTrainer; it is not important and not used here :)
@mrstratau65132 ай бұрын
Amazing work
@SECourses2 ай бұрын
thank you so much. cloud tutorial coming today hopefully if i can complete
@YogeshMali-l2y2 ай бұрын
Please make a video on how to train a LoRA model for FLUX in Colab using our own dataset that contains multiple subfolders, one for each character. Please show how to do it with Kohya without its GUI.
@SECourses2 ай бұрын
You need a paid Colab for this. Don't pay for Colab; it is expensive and has poorer performance. Massed Compute is better.
@YogeshMali-l2y2 ай бұрын
@@SECourses Is there any method other than Kohya to train a LoRA model in Colab? Platforms like CivitAI don't give full freedom to train a LoRA, so I want to train it on my own. I have a very large dataset.
@SECourses2 ай бұрын
@@YogeshMali-l2y use massed compute :D you cant train flux on a free colab forget it
@jasonhemphill85252 ай бұрын
How can I manually update the pytorch version?
@SECourses2 ай бұрын
Activate the venv and pip install; see: kzbin.info/www/bejne/Y3_Nf6xtlsuCh5I
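A rough sketch of the manual update on Windows, assuming the installer's default folder layout and a CUDA 12.x wheel (adjust the folder, versions, and index URL to your setup):

    cd kohya_ss
    .\venv\Scripts\activate
    pip install --upgrade torch torchvision --index-url https://download.pytorch.org/whl/cu124
    pip list | findstr torch

The last command confirms the new torch version landed inside the venv. If other packages then complain about the torch version, re-running the installer is the safer route.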
@RaunitMishra-k7pАй бұрын
I am getting a different message after choosing option 5. Kindly tell me the details of what to do.