OneTrainer, Lora Made on 8gig RTX Trained in 1 to 2 hours

4,888 views

Streamtabulous

1 day ago

Comments: 73
@streamtabulous 5 months ago
Note: BLIP-2 will error if you don't run BLIP first. It seems BLIP-2 needs files from BLIP-1 but doesn't download them if it's run by itself first.
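A minimal workaround sketch, assuming OneTrainer fetches its captioning models from Hugging Face; the model IDs below are the common public BLIP checkpoints, not confirmed to be the exact ones OneTrainer uses:
```python
# Hedged sketch: pre-download BLIP-1 and BLIP-2 weights so a BLIP-2 run
# doesn't trip over files it expects a BLIP-1 run to have fetched already.
# Model IDs are assumptions (common public checkpoints).
from huggingface_hub import snapshot_download

snapshot_download("Salesforce/blip-image-captioning-base")  # BLIP-1
snapshot_download("Salesforce/blip2-opt-2.7b")              # BLIP-2
```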
@AceOnlineMath 5 months ago
You can use Stability Matrix to install several tools (ComfyUI, OneTrainer, Automatic1111, etc.) with a shared model directory, and each package gets its own venv, so they won't break each other.
@streamtabulous 5 months ago
kzbin.info/www/bejne/mpy8hpxrnL-Xd5Isi=yz46Udd69YPQFrwg For most, you can point it to a directory; Krita AI does not have it in its settings, so it's done as in this video: just do the VAE etc. Hope this is what you mean.
@MyAmazingUsername 2 months ago
Oh, thanks for teaching me about Stability Matrix. I had made something similar myself using the CLI, but this is way better.
@baheth3elmy16 5 months ago
Thank you for the tutorial. Much needed. I had tried Dr. Furkan's settings and had no luck either... Welcome back! Wishing you good health!
@streamtabulous 5 months ago
Thanks. Still not fully recovered, but I needed to get it up because of the last video. Yeah, the Dr. said it's fine-tuning, but I followed that methodically and it was a no-go. The old-school AdamW 8-bit I believe I used in Kohya is clearly still the best. I'll be retraining my face once I don't look so sick and can get a bunch of photos lol. I'm wondering what LoRAs to train; I finally got my art into a LoRA. My settings work fantastically, I'm finding. Scared to try and tweak them lol
@baheth3elmy16 5 months ago
@streamtabulous I'll watch your channel for the realistic LoRA. I just started training my LoRA using your settings. I'm training on a real person's photos. Let's see how it goes.
@morganandreason 23 days ago
Just want to chime in and thank you for the JSON file with your settings. They worked wonders. VRAM usage was perfectly fine for my 12GB RTX 3060, and training finished fast with about 28-30 images. The resulting LoRA is very flexible and works a treat!
@mikazukiaugus3435 19 days ago
Hello, may I ask: does this JSON file work for Pony too? Or do you know any way to create a Pony LoRA in OneTrainer?
@morganandreason 19 days ago
@mikazukiaugus3435 The JSON settings work for any SDXL base model, and therefore Pony. Use booru tagging.
@mikazukiaugus3435 19 days ago
@morganandreason I see. Do you think it's better to use batch size 2 instead of 1? I've got 8GB of VRAM 😅
@morganandreason 19 days ago
@mikazukiaugus3435 Just try 2 to begin with, and keep track of your VRAM usage in the system monitor. If it turns out you start offloading memory to RAM/cache, interrupt and restart with batch size 1.
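A minimal sketch of that kind of VRAM check, assuming PyTorch with CUDA (running nvidia-smi in a separate terminal gives the same picture from outside the process):
```python
# Minimal sketch, assuming PyTorch with CUDA: print how much VRAM the
# training process is using, so you can tell when batch size 2 starts
# spilling into system RAM.
import torch

def report_vram(tag: str = "") -> None:
    gib = 1024 ** 3
    allocated = torch.cuda.memory_allocated() / gib  # tensors currently alive
    reserved = torch.cuda.memory_reserved() / gib    # held by the caching allocator
    total = torch.cuda.get_device_properties(0).total_memory / gib
    print(f"{tag} allocated={allocated:.1f} GiB, reserved={reserved:.1f} GiB, total={total:.1f} GiB")
    # When `reserved` creeps up to `total`, the driver may start paging to
    # system RAM and iterations slow sharply: time to drop the batch size.

report_vram("after one training step:")
```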
@itanrandel4552 5 months ago
Excellent tutorial, thank you for sharing your knowledge.
@gabrieljuchem 5 months ago
Thanks for another great video, brother. Is the settings file in the description your best version for 8GB VRAM so far?
@streamtabulous 5 months ago
Yes, so far these settings are my best, at 1 to 2 hours per LoRA.
@cmdr_stretchedguy 3 months ago
25:40 I strongly prefer comma-separated values for image generation as well. "Natural human language" just leaves an opening for "feelings" that image generation cannot understand. It is not going to understand that "gloomy" means different things to different people: some may think of rain, some of fog, some may think of just being overcast. At this point it is just too early to train 500 variations for a single word, much less expand that to thousands of "feelings-based" descriptors in models that are only a few GB in size.
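An illustrative contrast (both prompts are made up): the tag style spells out each concrete element, while the natural-language style leans on a word the model may read differently than you do.
```
Tag style:      night street, rain, fog, overcast sky, muted colors, wet pavement
Natural style:  a gloomy street at night
```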
@streamtabulous 3 months ago
It also depends on the LoRA and model and the trigger words used in them. It's definitely a dance to find the right triggers for what's been used, and to test different models etc. Pairing the right model with the trained LoRA, and stacking LoRAs, helps when it comes to those feeling-style looks. Also, with feelings prompts, it's best to put in as many triggers as I can think of based on my definition of a feeling; that way, if a trigger word isn't in a model, I increase the chances of triggering what I want by having lots of trigger words.
@braintify 5 months ago
Thank you very much for this video! For me, this is the first set of settings that really works. But I changed several parameters for my hardware: I changed bf to fp and reduced the batch size from 2 to 1. I have an 8GB 2070 Super. Now I'm experimenting with masking and epochs. My set: 17 not-very-successful selfies.
@streamtabulous 5 months ago
Yeah, 30 seems to be the sweet spot, especially when reducing the batch size; maybe 20 epochs at batch 2, or set the image repeats to 1.5 with a batch of 1 and 20 epochs, but that increases the time. The fallback is for when fp16 has issues, as far as I'm aware; bf16 is meant to be less RAM-hungry and faster. I'm yet to set everything to bf16 and see what happens, since most say that's best, but that might be why the Dr's settings don't work for me; maybe bf is a no-go, but I'll have to test. Anything with tensor cores should do bf16. To not get the tensor error, run cmd and install TensorFlow:
pip install tensorflow
python.exe -m pip install --upgrade pip
@braintify 5 months ago
As far as I know, the 20-series video cards do not support bf. I reduced the batch because this parameter depends on the CUDA cores; my video card does not have enough CUDA cores to support 2 threads. Maybe I'm wrong, but I took this knowledge from this old video: kzbin.info/www/bejne/d2KYfmeZl7qAa80
@streamtabulous 5 months ago
@braintify Oh wow, yeah, hard to find, but the RTX 2080 Ti does not support bf16, so you are correct. So odd: since it has tensor cores, I assumed all RTX cards did, but no. I'll have to mention that in a video. What speeds are you getting? It was a laptop, from memory, that you use?
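A quick way to check what your own card reports, assuming PyTorch is installed (a sketch, not authoritative: some newer PyTorch versions count emulated bf16 as supported):
```python
# Hedged check: ask PyTorch whether the current GPU supports bf16.
# Turing cards (RTX 20xx) are generally reported as unsupported; note that
# newer PyTorch builds may report True where bf16 is only emulated (slowly),
# so a True on a 20-series card deserves suspicion.
import torch

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    print("bf16 supported:", torch.cuda.is_bf16_supported())
else:
    print("No CUDA device found")
```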
@braintify 5 months ago
@streamtabulous Something like 2 seconds per iteration, since everything fits into the 8 gigabytes of video memory. But this is with a batch size of 1. I tried it once with a batch size of 2 and got an error at the end of the training, but I'm not sure if that's related to the batch size, because there was something about an error saving to the SSD. LoRA trains well, but fine-tuning is full of errors due to fp.
@streamtabulous 5 months ago
@braintify Might try 1 and see how time and quality are hit. On my settings I'm at 9.5 GB, so 1.5 GB goes to system RAM, but an hour to two hours I'm fine with, as long as the quality is good.
@siddharthmishra8283 4 months ago
Thank you so much for the SDXL JSON. Please do share an SD1.5 JSON too, and a video if possible :) Awaiting your response.
@beragis3 5 months ago
I am glad I found your videos. I have an AMD card, an RX 6700 XT, and OneTrainer is able to use ZLUDA. It does pretty well; some steps run on the GPU, some on the CPU, but I can at least now attempt to train. Have you tried training a checkpoint yet? I tried a simple test with 40 images to get an idea of how fast it runs, and ran it overnight; it was still running when I woke up, having made it to around epoch 20 when I stopped it. The file it saved was huge, 13 GB compared to the typical 6 GB that most checkpoints are. Even with only 40 images it was able to get a slight idea of the types of images I was trying to create, and the images from epoch 1 to epoch 20 showed some improvement.
@streamtabulous 5 months ago
Oh nice. What speed do you get for a LoRA? I feel AMD needs to get on board: most programs use a compatibility layer to talk to the card, and I'd love to see AMD work on their drivers to just work with the Nvidia stuff and handle it directly, rather than through a software layer made by open-source creators. It seems AMD has lots to offer, especially for the price, but I never get to talk to people running AMD to learn their speeds and how well it does. I have avoided doing a full model; I'm not even sure RTX 4090 owners use their cards for that. Mostly I hear of renting cloud GPUs, and even then it takes a lot of time.
@Hey-Its-Retro 5 months ago
Thanks for another great video... I'm going to give this a try when I get back on my PC! I'm just wondering how you managed to train a LoRA on your own art style? This is what I'd like to do too, and train it on my own artwork. I'm curious how you caption your artwork; I've never really seen a video here on KZbin that has explained this part fully. How do you get the LoRA to understand your actual art style? Maybe this would make another video idea for you, and I know that I, for one, would really love to see your take on it and how your own art style turned out. Keep up the great work, and hope you're feeling a wee bit better soon! Best wishes from Scotland! 🏴󠁧󠁢󠁳󠁣󠁴󠁿
@streamtabulous 5 months ago
Same way as the Animal one. I took photos of the canvas paintings, and others were done in ibis Paint X and PaintShop Pro, so they were digital. I have 109 images; some were cropped, so there are doubled images, but I just wanted some close-ups added for certain parts. All large quality. Then I did the same as in this video with the auto text generation; I edited some, but most I left. Even with the child-like colours of purple trees and green skies I use, the AI text scraper picked up what's a tree and a person. That's the key: if the text gen is picking up what's in your art, then it works. And of course I have never painted a car or certain other things, but the base model has that information, so the AI looks at the LoRA style, says "I imagine it would look like this", and it works fantastically; of course, sometimes it's over-detailed and better than the original in many aspects. The higher your prompt weight on your LoRA when using it, the more like what it learnt it will be; the lower, the more the base model comes through, so you can find the balance. Also, certain models (i.e. Colorful) work best when I'm doing a prompt with my LoRA on my paintings, because that Colorful model has lots of training on, well, colorful art styles. Got a flu now lol; got meds for the body and caught the flu when getting the scripts. No fun.
@Hey-Its-Retro 5 months ago
@streamtabulous Thank you for such a detailed reply, it's very much appreciated! Just a quick question and something I've never really found out or understood: when writing the text captions for your art style, do you actually mention the medium? i.e. "painting in watercolour" or "illustration in oil paint"? I really would like to train some of my art style, but my works are done as pen-and-ink, black-and-white line drawings. Just wondering if I should include the "line drawing of a SUBJECT" or "line illustration of a SUBJECT" bit, or just caption the SUBJECT that actually appears in the artwork? I think I read somewhere that you're not meant to mention the style and just caption what appears in the artwork; that way the model assumes that everything in its "world" is rendered in that style and just concentrates on what appears in the image. And... hey! Don't worry about a quick reply when you've ended up with the flu... take it easy and no worries about replying. Anyway... cheers for being so helpful, and get well soon!
@streamtabulous 5 months ago
@Hey-Its-Retro I do write my style, yes: so "child-like painting", "brush marks", "wild colours", "thick acrylic paint", etc., as they help trigger the LoRA and parts of the LoRA. Also, if there's something I want left out, it sometimes works better in the negative prompt. For yours, I personally would use "pen drawing", "ink colours", etc.
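So a caption for one of those pen-and-ink pieces might look like this (a made-up example of the comma-separated style described above, not taken from either artist's actual dataset):
```
pen-and-ink line drawing, black and white, cross-hatching, a cottage beside a river, trees, clouds
```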
@Hey-Its-Retro 5 months ago
@streamtabulous Thank you! You've been incredibly helpful... I'll give it a shot when I can get onto the PC in a couple of days! You ROCK and that's OFFICIAL!
@streamtabulous 5 months ago
@Hey-Its-Retro Thanks, I love sharing what I learn; people like yourself keep me going.
@Roachesneedlovetoo 5 months ago
Thanks for the detailed walkthrough, it's been very helpful! Personally, I'm finding it very hard to judge the quality based on the sample outputs alone; every single sample output looks like straight-up garbage. It's not until I add the LoRA to my prompt within Auto1111 that I'm able to see the quality of the results. The problem I have with that is it's hard to gauge which version of my saved training is the best. As of right now, 30 does seem like a sweet spot, but maybe it could do with more... or maybe less. I guess I just feel like if the sample outputs during training were better, it would help me understand more clearly. I'm not really sure what I'm doing wrong as far as my training sample outputs are concerned.
@streamtabulous 5 months ago
They do look horrible. There is no workflow for the samples; they're basic, and just like making an image, you get bad ones. And it's certainly hard finding settings, because it's one to two hours, try the LoRA, repeat. For myself, over 40 epochs was worse, as it started overpowering the base model and was messy. I get great results at the settings linked, as per my CivitAI images, where the weight controlling the LoRA is 0.6 to 1.3. But yeah, watching the samples as it goes looks horrible, like two steps forward, two steps back. It's not till you test the LoRA that there's that sigh of relief.
@streamtabulous 5 months ago
You could set the backup at 1 epoch; that makes a LoRA in the backup dir, so you could then test the LoRA from every epoch and see.
@maxp7984 2 months ago
Very informative. Thank you.
@duphasdan 3 months ago
Good tutorial. My only problem is that the prompts are not being built into the LoRA, even though I have it set to do so. And the names are matched, as I used the same setup to make another LoRA a while back.
@lechefski 4 months ago
Is the resolution variable supposed to be set to the largest image in your dataset, or do images automatically get resized to match the resolution? Also, when your dataset is small, does quality benefit from adding image copies at different scales, due to bucketing?
@streamtabulous 4 months ago
I will be doing a video on just images and text files, as I get asked a lot. OK, so forget the output resolution of 1024 for a tic. Remember there are no images in the LoRA at all, only information. Think of it like the images in your head: it's just information, so it's not compression or anything like that; it's why it's referred to as a neural net of information. So bigger images in the dataset are better, and they don't have to be a set resolution, because it doesn't matter. What happens is the AI looks at the image and, from its training, says "hey, that looks like an eye", then tries to learn the eye in the image you put in, and so on ("hey, that sort of looks like a face, but it's red and furry"), and slowly does that till it builds the neural net that's your LoRA. So large images are more defined, just as they are to you and me, so it can learn much better and the training results are better. So don't crop, don't clip; let the AI do its thing. The 1024 in the settings is just saying: "OK AI, when you're learning from the reference data, I want you to work at a minimum of 1024 for images or parts of images." Of course, you can then use that LoRA at whatever size.

As for small versus large datasets, the difference is how many references it has to learn from; more is always better. I.e., if I show you, say, only the front of someone for the first time, and you have never seen a person, then you have no idea what the back of a person looks like and might assume it looks like the front. A larger dataset just gives more information to learn from: what the subject is like close up, far away, from the side, etc. For example, I did my face but only close up, so at a distance it doesn't work, because it has no idea what I look like further away; it can only give good results on a close-up rendering. I hope, with my dyslexia and grammar and spelling issues, this in some way helps. I will hopefully say this in a video on Friday and show how I do a character dataset, with tips.
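On the bucketing half of the question, here is a rough sketch of how aspect-ratio bucketing is commonly implemented (an assumption about trainers in general, not OneTrainer's actual code): each image gets a bucket of roughly resolution-squared pixels matching its aspect ratio, which is why nothing needs cropping to a square.
```python
# Hedged sketch of aspect-ratio bucketing: map a source image to a training
# bucket of about base*base pixels that matches its aspect ratio. A generic
# illustration, not OneTrainer's actual implementation.
def pick_bucket(width: int, height: int, base: int = 1024, step: int = 64) -> tuple[int, int]:
    aspect = width / height
    target_area = base * base
    # Choose bucket dimensions (multiples of `step`) whose area is ~target_area
    bucket_w = round((target_area * aspect) ** 0.5 / step) * step
    bucket_h = round((target_area / aspect) ** 0.5 / step) * step
    return bucket_w, bucket_h

print(pick_bucket(3000, 2000))  # landscape photo -> (1280, 832)
print(pick_bucket(2000, 3000))  # portrait photo  -> (832, 1280)
print(pick_bucket(2048, 2048))  # square scan     -> (1024, 1024)
```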
@ILYA-zz4rf 1 month ago
Thanks for the video! Has anyone had the feeling that the LoRA does not work, and the pictures are all the same during sampling?
@streamtabulous 1 month ago
I find they need more weight with OneTrainer, but no issues. Link for one I did, with some images made using it: civitai.com/models/518425/animal-muppet
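For reference, raising a LoRA's weight in an A1111-style prompt looks like this (an illustrative prompt; the LoRA filename here is hypothetical):
```
a felt puppet playing drums <lora:animal-muppet:1.2>
```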
@streamtabulous 1 month ago
If the sample image doesn't change, then something is wrong; the Dr's settings did that to me, the samples did not change.
@lechefski 4 months ago
Thanks so much for the preset file! What settings should I tweak if I have 12GB of VRAM?
@hex1c 1 month ago
I get CUDA out of memory regardless of what settings I use. I also have a 3060 Ti 8GB. What can I do?
@streamtabulous 1 month ago
That is odd; something in the background might be using it. Also, install the CUDA Toolkit.
@hex1c 1 month ago
@streamtabulous I got it to work when I tried batch size 1. I have a question, though: before this I used Civitai's LoRA trainer, and there you can download each epoch to see which gives the best results. Can I do that here as well?
@hex1c 1 month ago
@streamtabulous I'm sorry for spamming you, but what I am really trying to do here is make LoRAs for Pony XL. Do you have any config files for that, or settings to use? I've searched like a fool on the net without success.
@mikazukiaugus3435 19 days ago
@hex1c Hi mate, have you found a way to make a Pony LoRA with OneTrainer? I've been searching for it too.
@insurancecasino5790 3 months ago
It's really hard to find a basic vid on installing a LoRA. OMG, everybody has their own software now. Good for them, but most folks just need the basics first.
@streamtabulous 3 months ago
kzbin.info/www/bejne/j3S0p6R_qtKFmposi=-6krdfZTk-vfS74Q You want the older videos, but you also didn't mention what program or site etc. you use. I use Krita locally on my system, so my videos are mostly on that and on how I made my own LoRAs.
@insurancecasino5790 3 months ago
@streamtabulous Thanks. I have SD portable on a laptop. It works, but very slowly. I did find some info on a basic LoRA install, and I've got that far now. I just got some robot and dragon LoRAs from Civitai. I'm working on a comic book for fun and needed those images. Now I'm learning ControlNet for dynamic poses for the dragons, which hasn't been done in comics or TV/movies. I need SD to help design a dragon that can do that. I will check out your vids. But super-basic vids really do help; many how-to vids go overboard to me.
@Stable_Confuzion 5 months ago
Great video and tutorial, many thanks! Those stable_confuzion images in the LoRA gallery look incredible!!! lol. I removed the original metadata call to the LoRA in those older "Elmo" images to prevent someone else from copying the wrong thing, so that's why you did not see the old LoRA "filename" on those.
@mm_33 1 month ago
With 1.5, would the settings remain the same?
@contrastingrealities4882 1 month ago
Thank you for this tutorial. I've tried training SDXL on Kohya, and the program crashed every time. I have tried this method with Pony and found that it does work, but the LoRAs I make have to be at a high weight to work; that could be because I didn't format the datasets properly, though. One issue I have is that it takes 10 hours for me to finish training one LoRA. I have an Nvidia 4060 GPU, an 8GB one, and I'm not sure if there's something wrong with it (since I have to reset it occasionally or else SDXL will take 10 minutes to generate a single image) or if I'm just using the wrong hardware.
@streamtabulous 1 month ago
I have an RTX 3060 8GB. Installing the CUDA Toolkit made a massive difference for me.
@contrastingrealities4882 1 month ago
@streamtabulous Thank you very much, it's a lot faster now.
@luis-bejarano 2 months ago
Thanks, great tutorial.
@mm_33 1 month ago
Why such a high LoRA alpha?
@stefanoangeliph 4 months ago
OK, I followed all the hints from the three videos, and I also installed all the software as suggested by @SECourses (Python, Git, CUDA, etc.), but I still get the "CUDNN_STATUS_NOT_SUPPORTED (....Conv_v8.cpp:919.)" error. Any idea how to fix this? Should I change the CUDA version (I installed CUDA 11.8)? Should I change the PyTorch version (I have 2.3.1)? Python is v3.10.11. I run OneTrainer on Windows 11 with an RTX 4070. Thanks in advance to anyone willing to help me.
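A quick diagnostic sketch for errors like this, assuming PyTorch is importable: print the CUDA/cuDNN versions the installed wheel was actually built against, since a mismatch between the system CUDA toolkit and the PyTorch build is a common culprit.
```python
# Hedged diagnostic sketch: report what this PyTorch build was compiled
# against. If torch.version.cuda disagrees with the installed toolkit
# (e.g. a 12.x wheel vs. a CUDA 11.8 toolkit), reinstalling a matching
# PyTorch wheel is a reasonable first thing to try.
import torch

print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```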
@suveniro4ka 4 months ago
There is useful information here, definitely, but the author really loves to ramble. There's 5 minutes' worth of content, drawn out over 40 minutes of talk.
@Skyn3tD1dN0th1ngWr0ng 9 days ago
1:40 Speed-running "the video is 40 minutes"... just shows how difficult this is going to be... Pain... 30:41 The skip here to a "finished product" was terrible: no clue how to continue after getting the first samples (which are inaccurate), or how to formalize the working session into a file; no info on how to save the training data or anything, really. I know it's a free tutorial, but it feels more like self-promotion.
@streamtabulous 8 days ago
How is it self-promotion? I'm not monetized, it's ad-free, and I put up the settings that I use, for free, on cloud storage that I pay for. No one is going to watch nothing for an hour while the program builds the LoRA in the background. There is literally nothing I'm selling; if there were, I'd not be running a very old computer.
@Stable_Confuzion 5 months ago
Oh yeah, training the text encoder gives the LoRA better text capabilities for forming coherent words. For example, in your prompt you might include something like: a puppy holding a card that reads "please help". And if you batch that about 36 times, you will probably get one that is grammatically correct :)))
@streamtabulous 5 months ago
Thanks. That's the only reason I wanted SD3: to save editing in PaintShop.
@jonasprintzen9508 5 months ago
Why do I get "Could not find text_encoder_2.text.projection in the given object!" when trying the LoRA in EasyDiffusion?
@streamtabulous 5 months ago
Sadly, EasyDiffusion dropped its updates; I recommend moving to Krita with AI Diffusion. It's due to EasyDiffusion simply not being up to date: they never finished the SDXL support, and now there are so many versions of SDXL it's not funny. Fooocus would be my next recommendation. I loved EasyDiffusion, but it just doesn't have the compatibility, and it works best with SD1.5.
@jonasprintzen9508 5 months ago
@streamtabulous Thanks for helping me avoid wasting time, then. I'll check the alternatives 🙂
@streamtabulous 5 months ago
@jonasprintzen9508 It's sad; Easy was great, especially for photo restoration, so I still use it, but it's only good with SD1.5. I recommend Krita with Acly's AI Diffusion add-on: all free, and you won't look back. Also faster.