And tonight I nailed it: I worked out another SDXL setting that's perfect. Less than 2 hours training a model with over 100 reference images, and it is fantastic. Will get it to you all Monday, I hope. Be sure to be watching if you own an 8GB RTX card.
@Stable_Confuzion (5 months ago)
Thanks for the upload. Looking forward to the updated version.
@gabrieljuchem (5 months ago)
Is this new SDXL setting you mention the file linked in the video description? Thanks!
@streamtabulous (5 months ago)
@gabrieljuchem Not yet, I have to upload it to my cloud. I'm sick at the moment and on my phone; I had a doctor's appointment today and more appointments tomorrow, so I'm planning on Friday at this point. I also got to tune the settings more, so I was getting better times when I ran tests 4 nights ago and hit 1 hour. That model is on Civitai and I've been yacking since. But if you don't mind fiddling with settings: AdamW8 is what I'm using. Epoch 50 was over-trained; 30 is the sweet spot, and that should hit 1 to 2 hours max on 8GB of VRAM. Set backups to every 2 epochs as before, and samples at every 1 epoch. Watch the sample images: if it over-trains, the images will start looking like they're going backwards. If that happens, just test the earlier LoRAs made by the every-2-epochs backup setting. Nothing else really needs changing; play with those.
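For anyone skimming the thread, the settings described in the comment above can be jotted down as a rough sketch. The key names here are my own shorthand, not OneTrainer's exact field names:

```python
# Hedged summary of the commenter's recommended 8GB-VRAM SDXL LoRA settings.
# Key names are illustrative only; map them to the matching OneTrainer UI fields.
onetrainer_8gb_sketch = {
    "optimizer": "AdamW8",        # AdamW 8-bit; much faster than plain AdamW here
    "epochs": 30,                 # 50 over-trained in the commenter's tests
    "backup_every_n_epochs": 2,   # keeps rollback points if training regresses
    "sample_every_n_epochs": 1,   # watch sample images for signs of over-training
}
```

If the samples start "going backwards", roll back to the most recent 2-epoch backup instead of the final LoRA.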
@streamtabulous (5 months ago)
@gabrieljuchem Kicked out a quick video, uploading now.
@gabrieljuchem (5 months ago)
@streamtabulous Thank you, brother. I wish you a quick recovery.
@sizlax (11 days ago)
If no one has said this yet: take off your glasses when taking pictures of yourself for AI. Same if you wear hats, scarves, or anything that covers the part of your body you want the model to learn. Glasses can be added to the image during generation. Alternatively, you could have a set of images specifically in which you wear the glasses, adjusting your head position and angle relative to the camera and shooting in different lighting, then add those to the dataset while specifically tagging things like lighting, position, and angle. As another alternative, you could use those images to create a second (glasses) LoRA to use with the first, and adjust its strength as needed. In that LoRA you would avoid tagging any hair or facial features (theorizing with this), so the two LoRAs don't get confused and the second focuses primarily on the glasses.

That's the beauty of doing a LoRA of yourself: you are your own IP, so if you're willing to put in the time, you can diversify the hell outta the dataset and have it create a perfect 'you' every time. Setting up the camera, if you're using a smartphone and don't want the 'selfie' pose every time, is a different challenge in itself.

Edit: sorry, it's 3am and I was only half paying attention. I went back over part of the video and realized that you already mentioned the glasses thing.
@Mranshumansinghr (5 months ago)
I have a 2080TI. I could not get it working the first time. But once I changed Train Data Type to Float16 and Fallback to Float32 it worked. I still have to see the results.
@streamtabulous (5 months ago)
Yes, I found out from another user that the RTX 20 series doesn't do bf16, so just change to fp16. I believe you can also set the fallback to fp16, which keeps everything at 16-bit and makes sure RAM use stays low. I might set them to fp16 and put up a separate download link for RTX 20 users for future people finding my video, and I'll definitely write it in the description tomorrow since the flu is less bad; it's needed. Thank you for letting me know your settings.
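The hardware distinction the replies are circling around can be sketched as a rule of thumb: bfloat16 needs an Ampere-or-newer GPU (compute capability 8.x), while Turing cards like the RTX 20 series (7.5) should use float16 for both the train data type and the fallback. The function below is illustrative, not OneTrainer's actual API:

```python
def pick_train_dtype(compute_capability: tuple) -> str:
    """Rough rule of thumb for the 'Train Data Type' / 'Fallback' settings.

    Ampere (compute capability 8.x) and newer GPUs support bfloat16;
    Turing cards such as the RTX 20 series (7.5) do not, so everything
    should stay at float16 there to keep VRAM use low.
    """
    major, _minor = compute_capability
    return "bfloat16" if major >= 8 else "float16"
```

With PyTorch installed you could feed this `torch.cuda.get_device_capability()`, or just check `torch.cuda.is_bf16_supported()` directly.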
@streamtabulous (5 months ago)
kzbin.info/www/bejne/l2GWc4GsebuFmrMsi=GJAA12GkOdIR3kYN Did you download and set up the faster settings from this video? If not, just change from AdamW to AdamW8 for a massive speed increase. And of course fp16, not bf16, for you.
@cmdr_stretchedguy (3 months ago)
I wish I had a fast enough GPU to run a batch test per LoRA, say 10 images at 512x512. With SDXL (Fooocus) my image creation times are typically around 25-30 sec per image using a 12GB RTX 2060.
@streamtabulous (3 months ago)
Install the CUDA Toolkit and try the NVIDIA 3D settings: set the CUDA Sysmem Fallback Policy to Prefer No Sysmem Fallback. You have to put it back to default when running games etc.
@LynxGenisys (2 months ago)
Does it refuse to load/recognize images if you rename them, for anyone else? (It also doesn't load images into the concepts space. Gonna try and tweak settings to use new txts with the old file names; hopefully it'll just load them sequentially or something.) I get PIL.UnidentifiedImageError: cannot identify image file 'C:\\AIworkspace\\1_lynxTEST\\0001.png' if I rename the photos. Guess I'll go generate new prompts for the old filenames.
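One thing worth noting on that error: renaming a file can't corrupt its contents, so `PIL.UnidentifiedImageError` usually means the bytes themselves aren't a real image (a zero-byte file, or something like an HTML error page saved with a `.png` extension). A quick stdlib-only sketch to check the magic bytes of a suspect file (the helper name is my own, not from any library):

```python
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"   # first 8 bytes of every valid PNG
JPEG_MAGIC = b"\xff\xd8\xff"       # first 3 bytes of every valid JPEG

def looks_like_image(header: bytes) -> bool:
    # Pass in the first few bytes of the file; if neither signature matches,
    # PIL will refuse it no matter what the filename or extension says.
    return header.startswith(PNG_MAGIC) or header.startswith(JPEG_MAGIC)
```

Usage would be something like `looks_like_image(open(path, "rb").read(8))`; if that returns False for the renamed files, the problem is the file contents, not the rename.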
@generalawareness101 (4 months ago)
I see you used repeats, but did not explore the drop-down called samples. It gives an exact number, so 137 samples, etc.
@streamtabulous (4 months ago)
With AdamW and AdamW8 it automatically does the repeat steps, so I leave it at the default and get fantastic results and great run times. Depending on how many images are in the dataset, it changes itself. I like that, to be honest; one less thing I have to worry about. So I leave balancing set to repeats because of that, and that's why I have not changed to samples. Adafactor was very different and needed that adjustment, otherwise 8 images would be 8 steps.
@generalawareness101 (4 months ago)
@streamtabulous Adam never did repeats for me: with 12 images at BS1 and 1 repeat I get 12 steps per epoch.
@streamtabulous (4 months ago)
@generalawareness101 With 11 photos, both Adams did it automatically at 27 steps. Kohya did not do it automatically, but OneTrainer does. I wonder if something in the Python installation could affect the way it works; I would not think so, but it's odd. I have screenshots to show in a video solely on my image dataset setup. Gonzo is 58 images and it automatically did 27 steps.
@generalawareness101 (4 months ago)
@streamtabulous I just tried this and got 12 steps. I tried 18 images and got 18 steps per epoch until I adjusted repeats.
@streamtabulous (4 months ago)
@generalawareness101 Very odd; it has to be something with Python and the way it communicates. I ended up reinstalling Python and the CUDA Toolkit and running a few CMD lines to get rid of the tensor error etc., and I'm wondering if that plays a part. Does yours show an error when running to make a LoRA of any type?
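For what it's worth, the step counts being compared in this exchange usually come down to simple arithmetic. This is a sketch of the common kohya-style formula; OneTrainer's concept balancing option can change the effective repeat count, which may explain why the two setups disagree:

```python
import math

def steps_per_epoch(num_images: int, repeats: int = 1, batch_size: int = 1) -> int:
    # Typical LoRA-trainer arithmetic: each image is seen `repeats` times per
    # epoch, and images are grouped into batches of `batch_size`.
    return math.ceil(num_images * repeats / batch_size)
```

So 12 images at batch size 1 with 1 repeat gives 12 steps per epoch, matching one commenter; a trainer that silently adjusts repeats (or balances concepts) would report a different number for the same dataset.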
@RobertJene (a month ago)
Video title says part 2 but I can't find a part 1 in your videos
@streamtabulous (a month ago)
kzbin.info/www/bejne/pX_Cg2pngMSsq6c
@RobertJene (a month ago)
@streamtabulous Could you please put it in the video description?
@PlayerGamesOtaku (4 months ago)
Hi, could you tell me how to create checkpoints for Pony XL in OneTrainer?
@streamtabulous (4 months ago)
To the best of my knowledge, Pony is just SDXL with better training methods, so you do it the same way but use a Pony model as the base learning model. The best place to ask is at this link. Also, most checkpoints are trained on cloud GPUs, so I'm not sure of the hardware requirements or how long it would take. github.com/Nerogar/OneTrainer/discussions
@guijiao (4 months ago)
How do you auto-tag the images?
@katbikst9161 (5 months ago)
The author looks like Garik Kharlamov. Comedian from Russia. 😉
@streamtabulous (5 months ago)
My face is not that pudgy and he is better looking lol
@chrisdvo9910 (2 months ago)
Thank you. After a hard time with Kohya I became kind of frustrated, because I set up my hardware for this. I learned the basics first, and at present it is learning a big set of pictures of myself from photo shoots made for this purpose. I hope this time it will work. It is way faster than Kohya was with a 4080 16GB; CPU usage is decent and temperatures are cool. I just get the warning "known incorrect sRGB profile", but it runs, and from other tools I know I can probably ignore this. Any ideas? And just btw... ANYONE SEEING THIS CAT ON HIS SHOULDER? Soooo cute!
@streamtabulous (2 months ago)
I've never had that warning, so I'm not sure about it. I always make every image a PNG; the only thing I can think of is that it's something to do with some images' colour profiles.
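On that specific warning: "known incorrect sRGB profile" comes from libpng complaining about a broken embedded colour profile (an iCCP chunk), and re-saving the affected PNGs in an image editor usually clears it. A small stdlib-only sketch to spot which files in a dataset carry the chunk, assuming the images are PNGs (the function name is my own):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def has_iccp_chunk(png_bytes: bytes) -> bool:
    # Walk the PNG chunk list: each chunk is a 4-byte big-endian length,
    # a 4-byte type, the data, then a 4-byte CRC. A present iCCP chunk is
    # what triggers libpng's "known incorrect sRGB profile" warning when
    # the embedded profile is malformed.
    if not png_bytes.startswith(PNG_SIGNATURE):
        return False
    pos = 8  # skip the 8-byte signature
    while pos + 8 <= len(png_bytes):
        length, ctype = struct.unpack(">I4s", png_bytes[pos:pos + 8])
        if ctype == b"iCCP":
            return True
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
    return False
```

Running this over the dataset (reading each file's bytes) would narrow the warning down to the specific images; re-saving just those should silence it without touching the rest.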