Great tutorial. Following your steps I was able to create images of a character I know.
@TheFutureThinker · a month ago
Thanks
@niccolon8095 · 21 days ago
Followed the steps and I'm only getting one .txt file even though I have many images. In the CMD prompt it describes all my images, but it only saves one .txt file that describes one random image... any idea?
@ojikutu · a month ago
Thanks. Good work. The AI-Toolkit repo is not in your description; it would have been helpful.
@TheFutureThinker · a month ago
Oh yes, sorry, I was too focused on the information and forgot to put that repo link in. It's updated now, thanks for the reminder. :)
@jayrony69 · 24 days ago
How do I copy the token into the folder?
@yngeneer · a month ago
Yeah, exactly. What if I have 16GB of VRAM plus another 32GB of RAM shared with VRAM via a BIOS setting, so 48GB in total? Does that count as 48GB of VRAM?
@ttgboi6734 · a month ago
Please make a guide on how to do this on the Google Colab free tier.
@TheFutureThinker · a month ago
Free Google Colab? It doesn't even have enough VRAM to run this.
@Veto2090 · a month ago
Any advice on running this with 12GB of VRAM?
@TheFutureThinker · a month ago
Very hard to do so. Honestly...
@sanducodrin1488 · a month ago
What if I have 16GB of VRAM?
@terriermonisgod · a month ago
Just use RunPod.
@sanducodrin1488 · a month ago
@terriermonisgod Successfully used RunPod last night. Thanks.
@wereldeconomie1233 · a month ago
@terriermonisgod People are so stubborn, even when you tell them this can't be run on their shitty PC.
@keanodaley512 · 12 days ago
@wereldeconomie1233 Be nice.
@chefodsicakeshop5947 · a month ago
No chance with 8GB of VRAM.
@gateopssss · a month ago
No chance. It's already a struggle to generate images on 8GB of VRAM (if it's even possible), and I don't think LoRA training will be possible on 8GB of VRAM unless the community does some insane optimization; it's a big model. I'm still struggling to find a way to train a Flux LoRA with 12GB of VRAM, let alone 8GB.
@PunxTV123 · a month ago
@gateopssss Did you find a way? Can 12GB of VRAM train it?
@Point.Aveugle · a month ago
Please cover Kohya next; they've got it producing good results with 500 steps. All my LoRAs have been good (way better than SDXL and Pony) with default settings using ai-toolkit, but they take hours to get to 2000-3000 steps. As someone mentioned below, fiddle with high weights; 1.5-2 has worked for me. Also comment out most of those sample prompts when training; they take unnecessarily long. I want to bump up the learning rate but I'm not sure how high to go; I read somewhere that people have been using 4e-4, but I tried 1e-3 and it went crazy after 500 steps, so I was thinking even that would be too high.
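For reference, a minimal sketch of where those knobs live in ai-toolkit's example Flux LoRA YAML (e.g. train_lora_flux_24gb.yaml). The key names below come from the sample config at the time of writing and may differ in newer versions, and the prompt text is purely illustrative:

    config:
      process:
        - type: 'sd_trainer'
          train:
            steps: 2000        # LoRAs are often usable well before this
            lr: 1e-4           # sample-config default; 4e-4 reported by some, 1e-3 reportedly diverged
          sample:
            sample_every: 250  # raise this, or trim the prompts list, to spend less time sampling
            prompts:
              - "photo of [trigger] in a forest"   # illustrative; keep only one or two prompts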
@elizagarcia8799 · a month ago
Kohya can do Flux LoRAs?
@brianmonarchcomedy · 14 days ago
Should the text files have the trigger word first? It didn't seem like yours did. Is it helpful to have it first in each text file? Thanks!
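For anyone wondering where the trigger word is configured: ai-toolkit's example config exposes a trigger_word field (commented out by default). Whether you also put it at the start of each caption is a convention rather than a requirement, as far as I can tell; a rough sketch, with a hypothetical token and caption, and key names that may differ by version:

    config:
      process:
        - type: 'sd_trainer'
          trigger_word: "0hwx woman"   # hypothetical rare token for the character
          datasets:
            - folder_path: "/path/to/dataset"
              caption_ext: "txt"
    # example caption file (dataset/img_001.txt), trigger first by convention:
    # 0hwx woman, blonde hair, blue jacket, standing outdoors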
@tex1297 · 12 days ago
Is it possible to train a LoRA with AI studio using custom models based on Flux? I tried it with a safetensors file, but it seems to look for a Hugging Face repository format.
@pastuh · a month ago
Why would you resize the images if the LoRA trainer is supposed to resize them automatically?
@TheFutureThinker · a month ago
Oh really? This AI Toolkit resizes them? Well, I'm just used to it; it's one of the practical steps I did for SD and SDXL LoRAs. Update: yes, the dataset prep section does mention it: github.com/ostris/ai-toolkit?tab=readme-ov-file#dataset-preparation
@TheFutureThinker · a month ago
P.S.: but if I resize before training, it looks like the script skips the resize step, and I don't have to wait as long.
@pastuh · a month ago
Yes, if you're experimenting multiple times with the same images...
@pastuh · a month ago
I think this line is critical: "Images with different dimensions will be trained for different aspect ratios." As we know, the assignment is based on the longest side of the image. I believe that different dimensions will result in a different final outcome (more combinations without overtraining). For example, a 768x1024 resolution might produce one result, while 896x1024 might yield another, because the model is trained on a different ratio, potentially resulting in better quality.
@TheFutureThinker · a month ago
That's why for the new IPAdapter they mentioned 50k steps at 512 and 25k steps at 1024. Maybe if they don't prep the dataset the old way, it might turn out better.
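On the resize/bucketing point, the dataset section of the example Flux config looks roughly like the following (field names taken from the sample YAML, so verify against your version). The trainer buckets and downscales images to these resolutions itself, which is why pre-resizing mainly just saves preprocessing time:

    datasets:
      - folder_path: "/path/to/images/folder"
        caption_ext: "txt"
        cache_latents_to_disk: true
        resolution: [512, 768, 1024]   # aspect-ratio buckets are built per listed resolution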
@DJTripleRRR · 5 hours ago
I'm confused. Was it because you used too general a trigger word that it didn't initially generate the woman you trained on? "A woman" generated a redhead when you trained on a blonde-haired woman. To actually get it to activate the model you trained, I noticed you had to bring up the blue clothing first.
@TheFutureThinker · 4 hours ago
A general word as the trigger keyword only gets results that are similar to the dataset, so in some images it has the freedom to take elements from the base model. Another training style focuses on the character only and forces it to take the style only from the dataset: kzbin.info/www/bejne/Z3Xcl4OeftCLbZosi=5dxGilp7iUIx4Vfu
@basemmgtow7954 · a month ago
Is there any chance I can do that with 12GB of VRAM?
@saurabhsswami · a month ago
Yeah! Same :(
@wereldeconomie1233 · a month ago
Yes, burn your computer if you want. How stupid people are, when it's already stated that 24GB of VRAM is the bottom line. 😂😂😂
@oblivionmad82 · a month ago
No
@zazaza2217 · a month ago
@wereldeconomie1233 lol, you can't burn your GPU just because you don't have enough VRAM. Why you are so "smart" I can't understand; the script will just fail with an OOM error, that's all.
@maxh8574 · a month ago
@wereldeconomie1233 Plenty of people are training Flux LoRAs with 12GB of VRAM; maybe research before dismissing things lol.
@nickolaygr3371 · a month ago
Tell me, friend, do I need to write captions if the text encoder is not trained? (train_text_encoder: false # probably won't work with flux)
@TheFutureThinker · a month ago
Some say they don't do captioning and just submit images to the trainer, but I guess that will take a longer time to train. I just like to prepare everything nicely in the dataset before training. The same goes for resizing images; as in the other comment, I know the trainer does resizing, but again it takes longer in the whole process.
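A minimal sketch of the caption-related settings, assuming the stock ai-toolkit Flux example config (check the key names against your copy): captions are optional .txt files next to each image, and the text encoder stays frozen for Flux:

    datasets:
      - folder_path: "/path/to/images/folder"
        caption_ext: "txt"            # image.jpg pairs with image.txt; leave the files out to train caption-less
        caption_dropout_rate: 0.05    # occasionally drops captions during training
    train:
      train_unet: true
      train_text_encoder: false       # "probably won't work with flux", per the config comment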
@Beauty.and.FashionPhotographer · a month ago
What was used for your final animation, the image-to-video where she walks?
@lennygarcia3059 · a month ago
Yes, please. I would like to know too.
@rageshantony2182 · a month ago
It takes 1.5 hours with 24GB of VRAM. So if I use a 48GB Quadro, does that decrease the time?
@TheFutureThinker · a month ago
48GB of VRAM? If so, yes, it does.
@technoprincess95 · 25 days ago
HF_TOKEN=xxxx. You missed that; it gave errors.
@TheFutureThinker · 25 days ago
Thank you for the reminder. Yes, save the .env file; it's just a text file, and store the token key in there.
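For anyone hitting the same error, a minimal sketch of that file, assuming the HF_TOKEN variable name used in the ai-toolkit README (double-check the README for your version); it is plain text and lives in the ai-toolkit root folder:

    # .env
    HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxx   # your Hugging Face read token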
@kalakala4803 · a month ago
OMG! This model develops so fast! ControlNet just launched, and LoRA training is already ready.
@TheFutureThinker · a month ago
Yes, it is. Very fast development.
@kalakala4803 · a month ago
@TheFutureThinker Looking forward to the IPAdapter.
@Bpmf-g3u · 15 days ago
I also tried Flux on MimicPC; the proficiency made for a surprisingly good graphics experience. The LoRA adjustment produces very realistic visual details.
@CyberPhonkMusic · a month ago
I did LoRA training for Flux. When I put more than one person in the image, it duplicates the LoRA face onto all the characters that appear in the image. What can I do to avoid duplicating the face from the LoRA training?
@theaorora4365 · a month ago
Can I use my custom Flux model in the model path for training? I don't want to use the default model from Black Forest Labs.
@VfxVictor · a month ago
What is the extension for saving images? I would like to be able to control the file name format and other things.
@coffeepod1 · 24 days ago
Only for 24GB of VRAM? Sadly, I've only got 16GB. Will it still work?
@FJKMIsotryFitiavanaSiteWeb · a month ago
What if I already have the model tensors locally and don't want to download them again from Hugging Face?
@forifdeflais2051 · a month ago
@FJKMIsotryFitiavanaSiteWeb You can edit the .yaml file to specify other paths where the models are located. Also, if you are on Windows, you could use the mklink command to create a link between different folders.
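A rough sketch of the model section for pointing at a local copy instead of re-downloading; the name_or_path key is from the example config, while the local folder path is a hypothetical example. A diffusers-format folder is most likely to work here, whereas a single .safetensors checkpoint may not, which would match the repository-format error mentioned above:

    model:
      name_or_path: "black-forest-labs/FLUX.1-dev"   # default: Hugging Face repo id
      # name_or_path: "C:/models/FLUX.1-dev"         # assumption: a local diffusers-format folder
      is_flux: true
      quantize: true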
@michaelNguyen914 · a month ago
Thank you so much bro, keep on fire!
@TheFutureThinker · a month ago
Thank you🙏
@RagonTheHitman · a month ago
Why should I train "privately" and locally on my PC if Hugging Face / Black Forest Labs get my data anyway and know everything? And I think I have to be online for training (to start), right?!
@michaelNguyen914 · a month ago
But you'll have a model customized according to your personal preferences.
@michaelNguyen914 · a month ago
But how can Black Forest Labs access your data if you train locally? The access token in the environment is only for accessing and pulling the model to your PC and ensuring it's not used for commercial purposes.
@timothywells8589 · a month ago
Thanks, I've been wanting to try a Flux LoRA locally after trying for weeks to train a character LoRA in SDXL without much success. In SDXL, when I use the LoRAs, each picture has elements of the original reference images but doesn't really look like the character, and this is even at 10,000+ steps with Prodigy set to an LR of 1 😔
@TheFutureThinker · a month ago
Try this one; the AI Toolkit also includes other training types. And it's script-based, which for me is more customizable.
@massibob2004 · a month ago
Good job, man! Do you know how to set up the YAML to use two identical graphics cards? So I would have device 0 and device 1.
@TheFutureThinker · a month ago
I think this script supports one GPU per config; I am not sure. But technically, yes, the index starts from 0.
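As far as I can tell, the example config selects the GPU with a single device entry, so the practical workaround for two cards is one config (and one training process) per card; this is an assumption worth checking against the repo docs:

    device: cuda:0   # first GPU; point a second copy of the config at cuda:1 and launch it separately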
@SoloMetal · a month ago
Without an eGPU/GPU, can I run it on my laptop? Can you do a video about the rig or the prerequisite requirements for all of this?
@gateopssss · a month ago
The absolute minimum is 16GB of VRAM on a GPU; integrated ones are out of the question, and it's not possible whatsoever. I'm guessing a GPU with 24GB of VRAM and 64GB of RAM with NVMe storage is a necessity to train a LoRA on Flux; otherwise it's going to be a pain with lower specs, or literally impossible.
@Pauluz_The_Web_Gnome · a month ago
Hi, I am having problems with port 11434 (cannot access), and there is also no llava:latest model?
@TheFutureThinker · a month ago
Learn how to use it: kzbin.info/www/bejne/e4K9iKykbsp2fKc. I put this link in the description and already mentioned it in the video.
@wqeerwqeer1375 · a month ago
When I start running the training process, I get this error :( "ImportError: cannot import name 'apply_rope' from 'diffusers.models.attention_processor' (C:\Users\Frozen\AppData\Local\Programs\Python\Python312\Lib\site-packages\diffusers\models\attention_processor.py)". I've reinstalled everything, but I get the same error.
@MilesBellas · 27 days ago
11:07 Benji plugs in the guitar.
@UnchartedWorlds · a month ago
Flux capacitor!!!! Benji, where we're going, we won't need capacitors!!!
@TheFutureThinker · a month ago
Alright, let's do some cool stuff again.
@MilesBellas · 27 days ago
SimpleTuner on Linux next?
@jonjoni518 · a month ago
I find it impossible to download the models. It starts downloading the model at 30 MB/s, then it drops to just a few kilobytes and stays at 99%. I have tried with different Hugging Face tokens (write, read, fine-grained...). I also leave the .yaml at its defaults except for the path where I point to the directory of my dataset. By the way, I have a 14900K, a 4090, 128GB of RAM and Windows 11.
@TheFutureThinker · a month ago
Looks like their network is jammed. I got 2 MB/s downloading another AI model.
@jamesluc007 · a month ago
Could you find a solution to this by any chance? I'm facing the same scenario.
@massibob2004 · a month ago
Hello guys, why do we need a LoRA if we can use ControlNet or IPAdapter etc. without any training? Better quality? Speed?
@jasonwu4262 · a month ago
More like capability. ControlNet and IPAdapter require you to copy an image; a LoRA lets you generate completely novel images. You can use LoRAs to do what ControlNet does, but you can't use ControlNet to do what a LoRA can do.
@massibob2004 · a month ago
Thanks 👍👍
@bobsapp4119 · a month ago
One thing I'm struggling with is how to copy the access token into the directory. There appears to be no way of copying it from the webpage.
@bobsapp4119 · a month ago
I have managed to copy it when I create it; however, it's just a string of characters, not a file.
@bobsapp4119 · a month ago
I'm getting the error message 'No module named dotenv'. There is no .env file in the ai-toolkit directory, just a folder called 'venv', which doesn't appear to be in your directory in the example.
@bearbro6375 · a month ago
What GPU are you using in this video?
@TheFutureThinker · a month ago
Nvidia 4090
@pastuh · a month ago
Are you going to fine-tune the model? A tutorial would be helpful :)
@TheFutureThinker · a month ago
I was thinking about it. Any suggestions for the type of image style to fine-tune?
@pastuh · a month ago
@TheFutureThinker I would say deep shadows and hard/harsh light would improve photos, plus a paper-carving style if you want something stylized :D
@TheFutureThinker · a month ago
So a LoRA of this can be trained and run with Flux.1 Dev.
@pastuh · a month ago
@TheFutureThinker I mean fine-tuning the model, just like with the SD1.5 "realism" models, where they create mixes from each other. Some people focus on mixing, while others create their own models using a large number of images. Or is this still not possible?
@daetojekf5973 · a month ago
Comfy refuses to apply my LoRA, as if it's ignoring it, even though I followed your video 1:1.
@timothywells8589 · a month ago
@daetojekf5973 For Flux you sometimes need to turn the LoRA weight really high; when you think it's too high, keep going. Some of the ones I've downloaded from Civitai don't seem to have any effect until a weight of 2, 3 or even 4. And also, maybe a dumb suggestion, but make sure you're using the correct base model, e.g. Dev for a Dev LoRA, Schnell for... well, you get the picture 🍀
@TheFutureThinker · a month ago
@timothywells8589 Exactly, so by default I use 1.0, and there is still very little effect from it.
@daetojekf5973 · a month ago
Thanks everybody, I hadn't updated Comfy =) That was the problem.