
SDXL 1.0 Tips in A1111 Low VRAM and other Errors and Refiner use cases for Stable Diffusion XL

15,529 views

How to

Comments: 70
@zuthmani9955 · 7 months ago
People are complaining about 8GB and here I am looking for ways to run it at 2GB 😂
@3diva01 · 1 year ago
Thank you so much for this! I haven't really been able to use SDXL on my machine due to low vram, so I think these tips will help me. I appreciate you posting this video and helping us with the great tips!
@photobackflip · 1 year ago
A Kohya SDXL LoRA training video would be greatly appreciated if you have figured it out. Been trying to get it working but only fails so far.
@redradar3366 · 1 year ago
Good tips. Thanks
@AI-HowTo · 1 year ago
Glad it was helpful!
@HO-cj3ut · 1 year ago
Hello, you are very good in your field; we follow you, and you provide very good information to the sector. Could you show how to make a model (checkpoint) style with SDXL as a lesson in the next video?
@AI-HowTo · 1 year ago
Thank you for your input, will check if I can.
@thedevilgames8217 · 11 months ago
I'm using Vlad Diffusion and the SDXL model doesn't load, so I can't use the model.
@AI-HowTo · 11 months ago
Whatever works, go with it; Vlad Diffusion is a good option, and so is ComfyUI. But usually with the --medvram or --lowvram --xformers --no-half-vae options it should work on A1111 too, and there is now a new option called --medvram-sdxl in A1111 version 1.6.
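For reference, a minimal sketch of where those flags go in A1111's webui-user.bat (the PYTHON/GIT/VENV_DIR lines mirror the stock file; everything other than the flags named above is just the usual launcher boilerplate, so adjust to your install):

```bat
@echo off
rem webui-user.bat in the A1111 folder; flags go on the COMMANDLINE_ARGS line
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers --no-half-vae --medvram-sdxl
rem swap --medvram-sdxl for --lowvram (or --medvram on pre-1.6 versions) if OOM persists
call webui.bat
```
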
@iloveshibainu9003 · 1 year ago
Hi, I'm not able to load XL 1.0 and I don't know what to do. I'm getting this error: size mismatch for model.diffusion_model.out.2.weight: copying a param with shape torch.Size([4, 384, 3, 3]) from checkpoint, the shape in current model is torch.Size([4, 320, 3, 3]).
@AgustinCaniglia1992 · 1 year ago
Have you updated Auto1111?
@AI-HowTo · 1 year ago
It doesn't seem your A1111 is updated. Check the version at the bottom of the screen (1.5.1) after running A1111. If it's older, use git pull as explained in the video; hopefully that resolves it. Your torch version may also be outdated, but git pull usually updates torch too. If all that fails, reinstall A1111 as a last resort.
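The update sequence described above boils down to a couple of commands, assuming A1111 was installed via git clone (if it came from a zip without git, reinstalling is the simpler route):

```bat
rem From the parent folder of the A1111 clone:
cd stable-diffusion-webui
git pull
rem Relaunch; the launcher upgrades Python dependencies (torch included) as needed
webui-user.bat
```
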
@iloveshibainu9003 · 1 year ago
@AgustinCaniglia1992 no I didn't :]
@iloveshibainu9003 · 1 year ago
@AI-HowTo ok, thanks
@WillFalcon · 10 months ago
On my laptop --medvram does not work but --lowvram does (RTX 2050, 4 GB), and a 1024x1024 image generated... oops... it did not. At the final steps I got a CUDA Out Of Memory error.
@AI-HowTo · 10 months ago
There is now --medvram-sdxl in the recent A1111 1.6 version. If you used the --xformers --no-half-vae --medvram-sdxl or --lowvram options and it still didn't work out, then I suggest you switch to ComfyUI; it can run even on a CPU.
@WillFalcon · 10 months ago
@AI-HowTo will try, thank you
@WillFalcon · 10 months ago
@AI-HowTo Unfortunately, with --medvram-sdxl it doesn't even start to generate; I get CUDA Out Of Memory right after clicking generate.
@mitteos · 6 months ago
I just checked it several times: --lowvram on a 2070 Super with 8 GB of video memory works faster (1.3 min) than --medvram-sdxl (5 min).
@AI-HowTo · 6 months ago
Interesting, this is not supposed to happen, but with open source it does! On my 3070 8GB, --medvram-sdxl is faster than --lowvram based on several tests, which is what is supposed to happen, but with open source anything happens. In general, steps are generated much faster with --medvram-sdxl on my card, and although the conversion to pixels takes a while longer, overall I still get faster generation. However, it's best to use ComfyUI for SDXL; it gives better performance than A1111, especially for 8GB cards.
@WifeWantsAWizard · 11 months ago
(1:30) "...and all of them had the correct anatomy..." Yeah, but is that because the 1.5 version didn't have enough horses fed into the dataset? Any model will look bad if the dataset it's pulling from is minimal.
@AI-HowTo · 11 months ago
Definitely. Bad anatomy of animals/humans comes mainly from having fewer training images and lower-resolution images; higher-resolution images make it possible to learn more features with fewer images too, as in SDXL. While the structural changes are minimal, the magic in AI is always in the data; everything else is just a distraction.
@MrDebranjandutta · 1 year ago
I have a 3060 w/ 12 GB RAM; ComfyUI runs pretty fast with XL.
@AI-HowTo · 1 year ago
True, it's even faster for 8GB; the only problem is it's not as user friendly or easy to use as A1111. There is also now a refiner script extension that allows loading the Refiner along with the base model as a pipeline, which makes it slightly faster than before.
@sidheart8905 · 1 year ago
Yeah, I also end up using the refiner only, from 640x640 then converting to 1024x1024. I have 10GB VRAM, but it hangs my laptop for some time when switching to the base model. Sometimes I use the refiner on old 1.5-generated images too.
@AI-HowTo · 1 year ago
Yes, switching is really annoying and wastes time. SD 1.5 is still very practical in terms of speed, which is why the refiner model is more practical for me to use standalone, without the SDXL 1.0 base model. Hopefully A1111 will soon add a pipeline for faster refiner usage, but even then we will apparently suffer a lot with lower GPUs; SD 1.5 will most likely be more useful and faster in many use cases until then.
@sathien9158 · 1 year ago
Hi, thanks for the video! Nice work! My question: is full 8GB/8GB, 100% VRAM usage, "safe"? And normal? Or do I need to optimize more? I get no errors and fine generation times, but I simply don't like that 100% usage.
@AI-HowTo · 1 year ago
You are welcome. I think it's totally fine; the system will borrow from system RAM when it needs more VRAM past that point. As long as it works, let the system handle it; no need to further optimize or use --lowvram, for instance, because it will only get slower. In my case it goes to 100% usage when generating images, then goes down; it doesn't stay at 100% all the time, only when generating a new image, often ranging between 66% and 100%. If it is always 100%, that might even be better, since it might be a BIOS setting that allows a higher GPU clock speed, I think.
@k.puanpinta · 1 year ago
thx bro
@AI-HowTo · 1 year ago
Welcome
@schopenhauer408 · 8 months ago
It does not load the sd_xl_base_1.0.safetensors model in Automatic1111.
@schopenhauer408 · 8 months ago
Failed to load checkpoint, restoring previous
@AI-HowTo · 8 months ago
It's possible that your A1111 needs an update, because SDXL requires a new A1111 version to run. And if your VRAM is low, you should use the --no-half-vae --medvram-sdxl options in the webui-user.bat startup parameters, after the set COMMANDLINE_ARGS= line. Hopefully that fixes it. If you've already done that, another possibility is that your SDXL file is corrupted, which rarely happens.
@kevinehsani3358 · 1 year ago
Thanks for the video. --medvram helped some, but when I run it a few times and increase the number of pictures to 3 I get memory issues again. I also use 8 GB; I wonder if you run into the same problem.
@AI-HowTo · 1 year ago
You are welcome. I didn't face this; I generated many pictures without a problem, one at a time though. Possibly memory leaks from A1111 tools. As you can see, there are many errors with extensions too; it's still not very practical to use effectively compared to SD 1.5. ComfyUI is good, but not user friendly in comparison to A1111.
@kevinehsani3358 · 1 year ago
@AI-HowTo I was wondering if you could help me with something. I have been trying to find code or a model that does facial expression transfer, face2face; it used to be around a few years back, but I cannot find anything on GitHub or anywhere else that runs TensorFlow 2.0 and above. Everything seems so old that you cannot even install the libraries anymore!
@AI-HowTo · 1 year ago
Sorry, I cannot help with that, but if you want face-to-face expression transfer you can check ControlNet; it does that too. Just use the (reference only) model. It works in img2img and text-to-image as well, so you can use it on existing faces or SD-generated ones.
@kevinehsani3358 · 1 year ago
@AI-HowTo Sorry for taking up your time, but what do you mean by "check ControlNet"? I'm not sure what ControlNet is.
@AI-HowTo · 1 year ago
I have not prepared a video on this topic, so you can check others, such as here: kzbin.info/www/bejne/qnPanpWKrLKrnLM which explains this feature. Or search for ControlNet in Stable Diffusion; it is an extension that allows us to mimic the poses/facial features of existing images.
@user-fo2wy1mj6l · 11 months ago
I am planning to upgrade my old GPU to a second-hand RTX 3080 Ti. Is its 12GB VRAM enough for SDXL?
@AI-HowTo · 11 months ago
Yes, you can both train and generate images with a 12GB RTX. But if you are buying, be patient and try to go for a 3090; 24GB is the way to go, because 12GB might no longer be sufficient very soon, and 24GB will give you peace of mind for quite some time.
@lI-_Yxyrio_-Il · 1 year ago
Is there any chance to make it work on my 3GB VRAM / 16GB RAM? I know it is not even supposed to be enough for SD 1.5, but that works with some optimizations.
@AI-HowTo · 1 year ago
I think it is possible. If you fail to run it with --lowvram, I suggest you install ComfyUI github.com/comfyanonymous/ComfyUI and follow the installation instructions (just download github.com/comfyanonymous/ComfyUI/releases/download/latest/ComfyUI_windows_portable_nvidia_cu118_or_cpu.7z ), extract it, copy models into your checkpoint folder or update the config file to point at an existing Stable Diffusion installation folder, and run. ComfyUI can run using the GPU or CPU; it is a lot slower on the CPU though, so it won't be practical. In general, with very low VRAM, it may be best to use Google Colab; you can google that too.
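The portable-build steps above can be sketched as follows (file names like run_nvidia_gpu.bat, run_cpu.bat, and extra_model_paths.yaml are from the portable build as I recall; verify against the README that ships in the archive):

```bat
rem Extract the portable archive linked above (7-Zip assumed installed), then:
7z x ComfyUI_windows_portable_nvidia_cu118_or_cpu.7z
cd ComfyUI_windows_portable
rem Copy your .safetensors checkpoints into ComfyUI\models\checkpoints\
rem (or edit extra_model_paths.yaml to reuse an existing A1111 models folder)
run_nvidia_gpu.bat
rem On very low VRAM, run_cpu.bat works too, just much slower
```
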
@endecoder · 1 year ago
How much DDR4 system RAM does SDXL consume? Is it true that 16 GB of RAM is insufficient?
@AI-HowTo · 1 year ago
16GB is enough. I run on 16GB RAM, and my GPU was 8GB while recording this video too; more than enough.
@apnavu007 · 3 months ago
I'm thinking of buying a laptop with 8GB VRAM. Will I be able to run a Stable Diffusion XL model?
@AI-HowTo · 2 months ago
Yes, it is possible, but it will be slightly slower than you hope for; it can take 20 seconds or more for a 1024x1024 image using Forge UI or ComfyUI. Currently, with where AI stuff is heading, if you plan to buy something, you'd better save up and buy 24GB VRAM. It is very expensive, but it is the only option that lets you run everything, such as AnimateDiff models, without suffocating on memory or suffering slow generation.
@apnavu007 · 2 months ago
@AI-HowTo Then I'll just buy a PC. Good choice.
@AI-HowTo · 2 months ago
Yes, a PC is a lot more practical, cheaper, and more powerful. Avoid laptops unless you really need to move around often; even a lower-VRAM RTX 3060 for PC is a lot more powerful than its laptop counterpart and has more VRAM.
@ManDogAndCows · 4 months ago
I want to run this off a server I have; in it is a GT 1030 with only 2GB. Will it work? It also has 64GB RAM and 2x 10-core CPUs. Render time is no issue for me, since the server works while I do something else; I just want to utilize my server for something other than storage. Also, a Quadro P2000 fits in my server; I'm thinking about upgrading, as it has 5 GB.
@AI-HowTo · 4 months ago
It will be impractical to run on 2GB; with A1111 this might not work properly, but Forge has better automatic memory management: github.com/lllyasviel/stable-diffusion-webui-forge This repository provides the same things as A1111 with the same UI, but has better memory management and can run SD 1.5 on 2GB. It might run SDXL too, but it will then use the CPU, which will be slow.
@ManDogAndCows · 4 months ago
@AI-HowTo Yes, slow these days is unusable. The GT 1030 was the dumbest purchase I have ever made; 2GB, and I don't know if it's the drivers, but I can't get it to render or transcode anything. I found a Quadro P2000 for cheap, so I will run with that. Thank you for the fast response.
@AI-HowTo · 4 months ago
You are welcome. These days, RTX graphics cards are game changers; they are the way to go for AI/gaming/3D. They are expensive, but they seem like the only option to save time and stay up to date with the technology. Best of luck.
@WallyMahar · 1 year ago
Will I be able to figure out how to do this in Invoke?
@AI-HowTo · 1 year ago
This video was made for A1111 when SDXL 1.0 was first released. Invoke, as far as I know, can also run it, but I'm not sure about its config. On the other hand, ComfyUI automatically detects graphics card settings and adjusts its run parameters to match your card, so no settings are required, and it's faster than both, but less user friendly.
@Nakasasama · 6 months ago
Heck, I have problems with 12GB of video RAM.
@AI-HowTo · 6 months ago
SDXL is much slower than SD 1.5; still, 12GB is good enough to train, generate, and do everything. It's even better if you use ComfyUI, as it gives a better pipeline and performance compared to A1111, but A1111 is more familiar and easier to use in many cases.
@fallguyjames · 1 year ago
Has anyone tried with a 1650 Ti? I think it has only 4GB VRAM.
@AI-HowTo · 1 year ago
With --lowvram it might work, or in ComfyUI, which can even run on a CPU and detect the best settings based on your GPU automatically. But it will definitely be very slow and not practical.
@fallguyjames · 1 year ago
@AI-HowTo Actually that is okay, as long as it will not crash. In the Google Colab free version, it will crash. I actually have Stable Diffusion on my AWS EC2 server, and 20 iterations take 7-8 hours. But that is fine :)
@generalawareness101 · 1 year ago
How much PC/system ram do you have?
@AI-HowTo · 1 year ago
GPU RAM is 8GB (RTX 3070 Laptop) with only 16GB system RAM.
@generalawareness101 · 1 year ago
@AI-HowTo That would be why; SDXL requires 32GB of system RAM, so I can see why it took so long. XL is mean on specs, as my card can't run it.
@hadiadot2471 · 1 year ago
Can my GTX 1070 run it? It's 8GB VRAM.
@AI-HowTo · 1 year ago
You need to try it; use the --lowvram option if --medvram didn't work. Generally speaking, it requires a strong GPU, so even if you get it running, it's unlikely you will use it much, considering it is slower than SD 1.5, for instance.
@AI-HowTo · 1 year ago
Or use ComfyUI; it automatically finds the best settings for your GPU.
@zigma5706 · 1 year ago
For poor people in non-first-world countries :) like me who want a taste of heaven, we need a two-stage separate SDXL ComfyUI workflow: 1) for the base, then 2) for the refiner (using the saved image from the base output). Running both in a single sequence screws up the whole thing.
@AI-HowTo · 1 year ago
True, A1111 doesn't pipeline the refiner, so it is slower. With a lower GPU, it is indeed best, as you said, to generate using the base model alone, then later use the refiner for the best images only. The base model alone can also give good results even without the refiner in most cases. Comfy is slightly faster, but less used because it doesn't have a user-friendly GUI like A1111.
@vinnybane-ki6eq · 1 year ago
Do you have an email I can reach you at?