Converting civitai models to ONNX -> kzbin.info/www/bejne/mXXVmqV7qdJ4p7s
@nomanqureshi1357 a year ago
Thank you, I was just looking for it 😍
@_JustCallMeRex_ a year ago
Hello. I would like to ask something about the installation process, at the point where it begins creating the venv folder in Stable Diffusion. I have an AMD graphics card, specifically an RX 580. I accidentally updated Stable Diffusion by adding the git pull command to the webui text file, and it broke Stable Diffusion because apparently it had installed torch version 2.0.1. I tried deleting everything and starting fresh by following your guide, but for some reason it keeps installing torch 2.0.1. How do I prevent this from happening? Is there any way to tell it to install torch 2.0.0 again? Thank you.
@OneTimePainter a year ago
Finally a tutorial that makes sense and doesn't reference 3 other unnamed videos. Thank you!
@FE-Engineer a year ago
Glad you liked it. I try to boil things down and go start to finish completing a task.
@Mewmew-y4m 11 months ago
I know which YouTuber you're referring to HAHAHA
@2ndGear 9 months ago
All these other tutorials had me installing Python from GitHub for my AMD GPU. I didn't realize there was a tutorial on AMD's own site for A1111! Well, time to start over and do it your way. Radeon 6600 XT and all I get for speed is 2 it/s while you're getting 20+. I have to start over. Thanks for your tutorials!
@CahabaCryptid a year ago
This new process is significantly easier to get SD running on AMD GPUs than it was even 6 months ago. Thanks for the video!
@FE-Engineer a year ago
You are welcome! And I agree. It is a lot easier than before. And with ROCm on Linux you get to do everything. Hopefully they will finish getting ROCm onto windows.
@dumiicris2694 8 months ago
@@FE-Engineer Is the VRAM requirement on AMD as high as before, or comparable with NVIDIA now?
@bhaveshsonar7558 5 months ago
@@dumiicris2694 VRAM doesn't work like that.
@dumiicris2694 5 months ago
@@bhaveshsonar7558 Are you saying it occupies fewer bytes? You're talking about speed, that's why you say that. Yeah, VRAM has a bus of 192, 256, or 512 bits, but it still works the same: imagine that instead of the 64 bits you need, the video card fetches the whole line. But my man, it works the same; it's just that the clocked technology needs more bytes of RAM, because that's the way VRAM works. RAM needs a different driver to be used as VRAM, so it has to be bigger so it doesn't split the line because of the slower speed.
@iskiiwizz536 7 months ago
I get the 'launch.py: error: unrecognized arguments: --onnx --backend directml' error at 9:23 even if I put in the two lines of code.
@FE-Engineer 7 months ago
The code has been updated. Lots of changes.
@mgwach a year ago
Hey FE-Engineer!! Thank you so much for this tutorial. Glad to see people helping out the AMD crowd. :) I do have a question though.... how come when you run the webui.bat initial setup command you don't get the "Torch is not able to use GPU; add --skip-torch-cuda-test" error? I get that every time I try to install it.
@FE-Engineer a year ago
Because before I even get that error, I add directml to the requirements file for pip to install. I do this specifically because I know this error is coming.
@Jyoumon a year ago
@@FE-Engineer Mind telling us how you do that? I'm extremely new to this stuff.
@xt_raven8842 a year ago
I'm getting the same torch error. How can we fix it? @@FE-Engineer
@ИскандерКубышкин a year ago
@@FE-Engineer Really, why don't you talk about it? And how can we do this?
@hobgob321 11 months ago
Hey, did you figure it out? I have the same issue. I tried editing webui-user.bat/sh but I still get the same error. @@ИскандерКубышкин
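For those asking how the requirements-file trick works, here is a minimal sketch. The package name torch-directml and the file name requirements_versions.txt are assumptions (FE-Engineer only says "add directml to the requirements file"), so check the names in your own checkout before first launch:

```shell
# Append the DirectML backend package to the webui's pip requirements,
# so the first run installs it and the torch CUDA check never trips.
# Run this from the stable-diffusion-webui-directml folder.
echo torch-directml >> requirements_versions.txt
# Confirm the last line is the one we just added:
tail -n 1 requirements_versions.txt
```

After that, launching webui.bat as shown in the video should pick the package up during environment setup.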
@LadyIno a year ago
I'm so gonna try this when I'm home. Just recently I tried running Stable Diffusion on my XTX (took me half an evening to set it up) and was immediately frustrated by how slow everything was. It took around 10 minutes to create 4 batches. I'm a total beginner when it comes to AI art, but your guide is very well explained. I think I can copy your homework 😅 Ty for the video!
@LadyIno a year ago
Quick update: This worked perfectly! I can create 4 batches in less than half a minute. Sir, you are a genius. Thanks so much ❤
@FE-Engineer a year ago
🙃 I'm glad it helped and worked without issue. As I state in the video, there are a lot of things, like inpainting, that do not work properly. Right now, unfortunately, to get "everything" you really need to run it on Linux. But full Windows support with ROCm should be coming soon-ish. So hopefully, by the time you want the other pieces, ROCm will work on Windows and switching over should be easy! Have fun! And thank you for watching and the kind words!
@CreepyManiacs a year ago
I didn't have the ONNX and Olive tabs; I just added --onnx to the command-line args and it seems to work XD
@FE-Engineer a year ago
Strange. Well if it is working correctly then that is all you can ask for.
@JamesAville a year ago
Thanks for your time. Sadly, as a new RX 7700 XT owner, I'm going to sell it back; no tutorial worked out for me. I got so many errors during installation that aren't shown in any video or blog.
@FE-Engineer a year ago
Read the first few lines of the video description… they changed the code a few days ago and everything broke. There's an updated video showing how to get it to work with the current code.
@EverCreateStudio 12 days ago
How do I enter Auto1111 every time I start Anaconda? You know, like your fast-boot bat file for Comfy. Sorry, I'm brand new, like 5 days into this. ComfyUI is nothing but issues, so I'm deciding on Auto1111. Thanks.
@cmdr_stretchedguy 5 months ago
Only if you have an RX 7900 XT; ROCm support is limited for the rest of the line, and they run at half the speed or less. For example, my RTX 3050 produces 800x1200 images in 20-30 seconds per image (native CUDA), versus my RX 7600 doing the same in 220-250 seconds per image (via DirectML). In gaming the RX 7600 has almost twice the performance of the 3050, but Stable Diffusion and many AI tools still rely on CUDA.
@Lumpsack a year ago
Thanks so much for this. I followed the guide, got the same error, and fixed it with the text in the description. Top man, this has saved me from having (yet another) fight with Linux :) Also, top tip on being patient; not my strong suit. Thankfully for her, my wife's at work, so I had to just pester the kids instead! Now, I too am on the 7900 XTX and not getting quite the same speeds, around 17 it/s, but still a big jump up, so thank you, and I look forward to more of your vids. Incidentally, the nice thing here too is not seeing GPU RAM perma-maxed!
@FE-Engineer a year ago
Yeah, with previous runs a few months or a year ago, the RAM was always maxed out and would just randomly go "out of VRAM", which drove me crazy, having to constantly kill and restart if I made one mistake with a button. Glad it helped! And 17 it/s is still really fast overall. That's still 100 steps in under 10 seconds easily, and probably only about 7 seconds.
@FE-Engineer a year ago
To get up to 20 iterations per second, just a thought: you might consider undervolting your GPU slightly, like -10 or so. I think mine is at -10.
@Lumpsack a year ago
@@FE-Engineer That's cool, I'll take the slightly slower speed, but thanks. I get the difference now.
@Justin141-w3k 11 months ago
9:07 I get the Torch is not able to use GPU error here.
@FE-Engineer 11 months ago
Check the newer video. Details are in the video description. Like the first line…
@Justin141-w3k 11 months ago
Isn't SHARK garbage? @@FE-Engineer
@FE-Engineer 11 months ago
Shark is… not awesome, in my opinion. But for the code changes and fixes for Automatic1111 DirectML, there's a long comment about it and a link to the video showing how to fix the errors that come up in the video description.
@thelaughingmanofficial a year ago
I have to launch the WebUI from Explorer, otherwise it doesn't install. If I try from Miniconda I get a "couldn't launch python" error when using the --onnx --backend directml options. Rather frustrating.
@FE-Engineer a year ago
Sounds like Python problems: potentially multiple versions of Python, or not checking the add-to-PATH option when installing. Hard to say for sure though. :-/ Sorry, that is irritating.
@thelaughingmanofficial a year ago
@@FE-Engineer I only have one version of Python installed, and it's 3.10.6, because that's the only version it seems to work with.
@Sod1es 7 months ago
The ONNX and Olive tabs aren't showing.
@FnD4212 10 months ago
I got "RuntimeError: Torch is not able to use GPU;" after the 8:56 step.
@FE-Engineer 10 months ago
Read the video description
@ZecVitaly a year ago
My PC keeps shutting off when I'm optimizing the model using Olive at 15:06. Any fix for this?
@FE-Engineer a year ago
You might be running out of actual ram. How much ram is on your machine?
@ZecVitaly a year ago
32 GB DDR4 @@FE-Engineer
@Rain_Zima 9 months ago
Just a heads up from the future here: you still need to use Python 3.10.6, and you need to install the version of Anaconda that supports it. Some people get a "cannot find cmake in PATH" error; usually "conda install cmake" fixes this.
@NicoPlayGames96 a year ago
Dude, thank you so much, you helped me a lot. I'm from Germany and this video was still very understandable, and thanks to you I can now have fun with Stable Diffusion :)
@FE-Engineer a year ago
You are very welcome! I'm glad it helped! The next tutorial is for running with ROCm on Ubuntu.
@optimisery 10 months ago
Great tutorial, thank you very much! One thing worth mentioning: a conda virtual env (like any Python venv) is not really a "virtual machine", but rather a bunch of env variables set/activated for the current shell, so that when you run anything in this context, binaries/libs are searched for within it. Nothing is really "virtualized".
@rwarren58 9 months ago
I am a rank beginner. I would appreciate an explanation of what you mean by "virtualized". Thanks if you reply; it's a month-old thread.
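To make optimisery's point concrete: "activating" an environment is nothing but editing environment variables in the current shell, mainly prepending the env's bin directory to PATH so its binaries win lookups. A minimal sketch (/opt/fake-env is a made-up prefix, not a real conda env):

```shell
# Save the current PATH, then do what "conda activate" effectively does:
OLD_PATH="$PATH"
export PATH="/opt/fake-env/bin:$PATH"
# The env's bin dir is now searched first for every command lookup:
echo "${PATH%%:*}"
# "conda deactivate" is just the reverse: restore the variables.
export PATH="$OLD_PATH"
```

No process isolation, no virtual machine; close the shell and the "environment" is gone.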
@uxot 4 months ago
Followed this guide, but the ONNX tab isn't showing afterwards :/
@FE-Engineer 4 months ago
Updated code. Read the video description.
@Cuentaedits98 a year ago
I'm at minute 9:29, and from there I don't know how to proceed. I'm sorry, I don't know much English.
@stuff_and_things_and_stuff a year ago
After the error, run venv\Scripts\activate and pip install httpx==0.24.1, then run again.
@Cuentaedits98 a year ago
@@stuff_and_things_and_stuff ty
@LeLeader00 a year ago
Very good video. I was having trouble installing SD on my AMD PC; thank you.
@FE-Engineer a year ago
I honestly got tired of trying to get it up and running, coming back to find it entirely broken, and figuring out how to get it back up. I figured others might appreciate skipping the junk (hopefully) and having a straightforward guide to just get it up and running! I'm super glad it helped, and hopefully it was easy and got you up and running quickly!
@zekkzachary a year ago
When trying to Olive-optimize, it always crashes with "ERROR:onnxruntime.transformers.optimizer:There is no gpu for onnxruntime to do optimization. Click here to continue". Which version of torch do you use? I always get "You are running torch 1.13.1+cpu. The program is tested to work with torch 2.0.0." I manually update it to 2.0.0, but SD automatically downgrades it to 1.13.1. Do you have any leads?
@FE-Engineer a year ago
Yes, I have seen all of these errors. If you simply do nothing, it should continue on and work. Be patient; optimizing the model will take a while.
@zekkzachary a year ago
@@FE-Engineer That's the problem, it doesn't continue. It closes Stable Diffusion, preventing the optimization from continuing.
@FE-Engineer a year ago
For torch I'm using the same version as you. I'm surprised; a lot of people have followed this and have not had any real problems. The "click here to continue" is a bit unusual. Is that the last error you see before the "click here to continue"?
@zekkzachary a year ago
@@FE-Engineer Yes. Here's a screenshot: drive.google.com/file/d/1DdUJl_B_5N6ahJtjRpvJi4d7uPgxWO2U/view I have 32 GB of RAM and a Radeon RX 6800 XT; it should be enough.
@zekkzachary a year ago
As a follow-up, in case anyone has the same problem I had: I made sure no memory-heavy applications were running (even in the tray), then tried again and it works. However, I found that the output with DirectML renders a lot faster but is of lower quality than PyTorch. I made much more stunning images with my old NVIDIA than with my brand-new Radeon. Let's hope AMD fixes this compatibility issue quickly...
@SyntheticSoundAI a year ago
If anyone wants a simple batch file to automatically start SD without typing it all in, here you go. Just replace the cd directory with the location of your Stable Diffusion WebUI:

@echo off
call conda activate sd_olive
cd C:\Users\YOURUSER\sd-test\stable-diffusion-webui-directml
webui.bat --onnx --backend directml
@langinable a year ago
Thank you! This helped.
@FE-Engineer a year ago
;)
@toketokepass 10 months ago
I get "runtime error: found no nvidia driver on your system" in the console and GUI. I also don't have the ONNX tab. *Sigh*
@FE-Engineer 10 months ago
Check out the new video. Just finished recording; it should be up in less than 24 hours.
@NínGhéin 11 months ago
It still tells me "torch is unable to use GPU", despite the fact that this was designed to use an AMD GPU and I have an AMD GPU.
@FE-Engineer 11 months ago
Yep. Code changes over time. Read the video description. --use-directml
@Drunkslav_Yugoslavich 11 months ago
Is there any way to put the main folder somewhere other than C:? conda create --prefix /path/to/directory makes a directory in the needed path, but when I do git clone it just downloads everything to my user folder on C: :/
@FE-Engineer 11 months ago
Go to the directory you want to clone the repo into, then run git clone there.
@Drunkslav_Yugoslavich 11 months ago
@@FE-Engineer I can do that only through cmd, not conda. Sorry, I'm not really into any kind of programming, so it's kind of hard for me. I cloned it through cmd and did everything you showed in the vid next, but it just gives me "Torch is not able to use GPU" and gives me the command to ignore CUDA.
@TokkSickk 11 months ago
@@Drunkslav_Yugoslavich Use --use-directml, not --backend directml.
@Drunkslav_Yugoslavich 11 months ago
@@TokkSickk It doesn't work in the command line for webui-user.bat: "launch.py: error: unrecognized arguments: not --backend-directml"
@TokkSickk 11 months ago
Huh? The currently working DirectML flag is --use-directml, not the backend one. @@Drunkslav_Yugoslavich
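Putting this thread together, a webui-user.bat sketch for the refactored code might look like the following. This is a guess at the stock webui-user.bat layout, not a verified config; the only point taken from the thread is that --use-directml replaces the old --onnx --backend directml pair:

```shell
@echo off
rem webui-user.bat sketch (assumed layout; adjust to your checkout)
rem --use-directml replaces the old "--onnx --backend directml" flags
set COMMANDLINE_ARGS=--use-directml
call webui.bat
```

With this in place, double-clicking webui-user.bat should launch with the DirectML backend without typing flags each time.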
@PCproffesorx a year ago
I have an NVIDIA GPU but have still looked into ONNX. My main problem with it is that it doesn't have LoRA support yet; you have to merge the LoRAs into your model first. If LoRAs are ever properly supported in the ONNX format, I'd switch immediately.
@FE-Engineer a year ago
Interesting take. My understanding of the underlying differences between the formats is pretty limited, so it is definitely curious to me that ONNX, while lacking some almost rudimentary functionality, is that appealing. I'll have to dig in a bit more when I have some time.
@Galova 3 months ago
How do I integrate this method with the Photoshop Stable Diffusion plugin?
@_gr1nchh 8 months ago
Any update on this? I just got a 6600 last night (as a test card; I was planning on going with a 2070 Super instead for a cheaper price), but I like this card and all of AMD's tools more than NVIDIA's. If I can get decent results out of this card I'll just keep it. Wondering if there have been any major updates regarding SD on AMD.
@Mr.Every1 a year ago
I get the following error when I try to optimize. What can I do? AssertionError: No valid accelerator specified for target system. Please specify the accelerators in the target system or provide valid execution providers. Given execution providers: ['DmlExecutionProvider']. Current accelerators: ['gpu']. Supported execution providers: {'cpu': ['CPUExecutionProvider', 'OpenVINOExecutionProvider'], 'gpu': ['DmlExecutionProvider', 'CUDAExecutionProvider', 'ROCMExecutionProvider', 'TensorrtExecutionProvider', 'CPUExecutionProvider', 'OpenVINOExecutionProvider'], 'npu': ['QNNExecutionProvider', 'CPUExecutionProvider']}.
@FE-Engineer a year ago
I have never seen that error. Try reinstalling. Not really sure, because no one else has mentioned that error.
@Namelles_One a year ago
Any chance you could list the models that can run with this? I tried Stable Diffusion XL and always get an "assertion error", so a list of usable models would be very helpful; on a slower connection it's a waste of time to download and try blindly. Thank you!
@FE-Engineer a year ago
I'll have a new video coming out basically outlining which programs can do what with AMD cards, because honestly it is all over the board.
@FE-Engineer a year ago
As a quick note: I have not been able to get Stable Diffusion XL working with this setup on Windows.
@16thSD a year ago
I got the error "FileNotFoundError: [Errno 2] No such file or directory: 'footprints\\safety_checker_gpu-dml_footprints.json' Time taken: 1 min. 56.9 sec." Not sure what I did wrong here...
@FE-Engineer a year ago
I have no idea either. The safety checker is usually used when optimizing, if I remember correctly, but I have not seen this error.
@TrippyRiddimKid a year ago
Trying to get this running on a 5600 XT, but no matter what I do I get "Torch is not able to use GPU". I could skip the torch test, but from what I can tell that will just end up using my CPU. I know the 5xxx series can do it, as I've seen others mention it working. Any help?
@FE-Engineer a year ago
Yep. Read the video description at the top. It will provide the help you need…
@mmeade9402 11 months ago
I get a different error message. When running the webui.bat --onnx --backend directml command, it runs through and I end up with: RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
@mmeade9402 11 months ago
This is all just too much. I'm vaguely computer literate, but I'm certainly no programmer. Until somebody makes this stuff more user friendly for someone who just wants to download the software on Windows, click install, and start messing with it, I'm going to throw in the towel. I've gone through 4 different A1111 forks today, and they all toss errors at me while I'm following the instructions. I'm sure my 7900 XTX is probably making things more complicated, but that's ridiculous.
@FE-Engineer 11 months ago
You can use Shark from nod.ai. It is pretty much a one-button install, and it just kind of works. It's just super slow: it compiles shaders to use, so it's quite fast to generate an image, but if you change models it recompiles shaders. If you change image size, it recompiles shaders.
@mgbspeedy 8 months ago
I had to add the skip-CUDA-test flag and it worked. But when I try to create an image it still fails and says there is no NVIDIA GPU; it doesn't seem to recognize my AMD card. Is an AMD RX 580 8 GB too old to be recognized in Stable Diffusion?
@FE-Engineer 8 months ago
The code has changed pretty significantly since I made this video. ZLUDA does not work for the RX 580 because it is not supported by the HIP SDK, but I believe the DirectML fork should work. You might need the argument --use-directml.
@FE-Engineer 8 months ago
Remove anything about ONNX.
@mgbspeedy 8 months ago
Thanks for the reply. I’ll give it a shot.
@leeyong414 a year ago
Hi, so after the "socket_options" error, I followed the venv\Scripts\activate step and pasted the code but still get the same error. What am I doing wrong?
@FE-Engineer a year ago
The wrong version of httpx is installed.
@FE-Engineer a year ago
Change the version; I think it is 0.24.1. You can change it in the requirements.txt file. But the directions in the video do work, so you must have something else going on: you are not in a conda environment correctly, or you have the wrong version of Python. There are lots of ways for things to go sideways; you have to follow the directions closely.
@LeLeader00 a year ago
What does this mean? 😢 OSError: Cannot load model DreamShaper: model is not cached locally and an error occurred while trying to fetch metadata from the Hub. Please check out the root cause in the stacktrace above.
@FE-Engineer a year ago
It seems unable to load the model. Did you download it and then optimize it?
@Andee... a year ago
Works so far! However, hires fix isn't working at all; it just does nothing. Any idea what that could be? I've made sure to put an upscale model in the correct folder.
@FE-Engineer a year ago
Honestly, with this DirectML ONNX + Olive setup, a lot of things don't seem to work properly. I'm currently looking at a bunch of alternatives, like using normal A1111 with ROCm, and also using SD.Next, still with DirectML and ONNX. So far I don't see many things that are nearly as fast, though. Still working on it.
@richkell1653 a year ago
Hi, I followed everything, got it running, and downloaded the model you use in the vid. However, I am getting this error: models\ONNX-Olive\DreamShaper\text_encoder\model.onnx failed. File doesn't exist. My text_encoder folder is on my Z: drive, and in it are a config.json and a model.safetensors file. Any ideas? BTW, thanks for your work helping us poor AMDers out :)
@richkell1653 a year ago
Managed to optimize another model and it works perfectly! Jumped from 2-3 it/s to 12.36 it/s!!! You, SIR, do ROCK!!!
@FE-Engineer a year ago
If it says file not found, it means it is looking for a specific file that is not there. Why it thinks the file should be there is harder to figure out. You might just try optimizing again; during optimization it should put the file there.
@bluevaro505 10 months ago
Well, I followed the steps and my first runtime error was: RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check. So I added it and ran webui.bat --onnx --backend directml --skip-torch-cuda-test. Then it gave me the error: launch.py: error: unrecognized arguments: --onnx --backend directml. At a loss as to what to do.
@FE-Engineer 10 months ago
Use --use-directml. Remove --onnx, and definitely remove skipping the torch CUDA test.
@DarkShadow686 a year ago
For me it still doesn't show the Olive/ONNX tab in the bar. Do you have any idea how to fix it? I did the check in Miniconda.
@FE-Engineer a year ago
Something is wrong with Automatic1111 right now. I'm looking into what the fix is, or what broke.
@DarkShadow686 a year ago
@@FE-Engineer Okay, thanks a lot. At the moment it does render, but it takes ages; I did it before with another installation and it was like 100 times faster.
@FE-Engineer a year ago
A new video showing how to get around all the current problems is up. :)
@AdemArmut-g5p a year ago
Nice video. But I keep getting the error "Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check", and then it uses the CPU only. I have an RX 6800. I found a lot of people with the same issue, but no one has a solution. Do you have any idea how to fix it?
@FE-Engineer a year ago
Not currently. This just happened within the last few days. I'm working on figuring out what's happening and how to fix it.
@htenoh5386 11 months ago
Any luck finding a fix? Getting this too... @@FE-Engineer
@petrinafilip96 a year ago
What's considered fast? I do inpainting with batches of 4 pics (so I can pick the best one), and it usually takes 3-4 minutes for one batch with an RX 6800.
@FE-Engineer a year ago
I would say 8-10+ iterations per second is quite fast. Are you using Olive-optimized models? Are you increasing resolution when you do this? How many steps are you doing? I would expect a 6800 XT to perform a bit better, to be honest.
@xIndustrialShadoWx a year ago
Thanks man!! Questions: how do you update the repository safely while keeping all your models and extensions? Also, how do you reset the entire environment if things go tits up?
@FE-Engineer a year ago
Just git pull to update; the folders for models and such should not have any problems. For resetting the entire environment, deleting the venv folder will blow away the virtual environment; then you just need to reinstall those tools to get back to a hopefully working environment. In really bad cases you can, of course, move or copy your models and delete everything, but that would be if you were really having problems you could not fix.
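The reset flow above can be sketched with a toy layout. The directory names below mimic the webui tree but are stand-ins created just for the demo; the point is only that deleting venv removes the Python environment while models survive:

```shell
# Fake repo layout standing in for stable-diffusion-webui-directml:
mkdir -p sdwebui-demo/venv sdwebui-demo/models/Stable-diffusion
touch sdwebui-demo/models/Stable-diffusion/model.safetensors
# The "reset": only the virtual environment goes away...
rm -rf sdwebui-demo/venv
# ...and the model checkpoint is still there:
ls sdwebui-demo/models/Stable-diffusion
```

In the real repo, the next webui.bat launch rebuilds venv from the requirements files.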
@OriolLlv 9 months ago
I'm getting an error after executing webui.bat --onnx --backend directml: fatal: No names found, cannot describe anything. Any idea how to fix it?
@FE-Engineer 9 months ago
Read the video description
@OriolLlv 9 months ago
Which part? I followed all the steps. @@FE-Engineer
@Kii230 8 months ago
@@OriolLlv Having the same issue. It's because lshqqytiger refactored ONNX, so --onnx no longer works. Idk how to fix it.
@sierranoble3451 a year ago
I got "There is no gpu for onnxruntime to do optimization." when running the optimizations. If I can't figure it out, I'll submit an issue on the GitHub repo.
@FE-Engineer a year ago
It's using the CPU for optimization. There isn't supposed to be a GPU for ONNX optimization.
@FE-Engineer a year ago
If you let it go, it will finish and optimize anyway.
@mehmetonurlu a year ago
I'm kinda new to all of this. When I close it and reopen the webui-user file, the standard Stable Diffusion opens up, not this one. I opened an Anaconda prompt, manually changed all the directories, and pasted the "webui.bat --onnx --backend directml" command, and it works, but is there an easier way?
@FE-Engineer a year ago
Yep. I have a video showing how to open and run Anaconda all from a single prompt.
@kenbismarck4999 11 months ago
Hello, great comprehensive tutorial video, nicely done man :) Timestamping the vid would be awesome, e.g. a 2:00 timestamp after the intro for the beginning of the main part of the vid :) Best regards
@FE-Engineer 11 months ago
Oh, you mean timestamping in the video description? YouTube pretty much automagically separates videos fairly well into sections, which is crazy convenient.
@vexillen1877 a year ago
I got 3.5 it/s on a 6700 XT, which is 2x faster than the default. This is without using any run commands.
@Justin141-w3k 11 months ago
How the heck do you get it to work with your AMD GPU?
@DoomDeer 9 months ago
It would be kick4$$ if you could do a comparison on the XTX between Windows with the Microsoft Olive toolset and SD running on Linux with only ROCm! I want to know what runs better on my 7900 XT.
@FE-Engineer 9 months ago
ONNX/Olive > Linux full ROCm > Windows ROCm ZLUDA > Windows DirectML without ONNX. But ONNX has a lot of limitations. From my testing, that is what I have seen. For a Windows user I would recommend ZLUDA due to the ONNX issues; for Linux I would just use full ROCm.
@DoomDeer 9 months ago
@@FE-Engineer I'm buying an SSD to use MS Olive on Windows (my current SSD is 512 GB, RIP), but meanwhile I am installing a Linux distro to use full ROCm. Do you have a guide for installing SD there? I'm going to use the Automatic1111 guide to install it on Arch Linux.
@Limmo1337 2 months ago
It does not work for me... I get RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check. I have a 7900 XTX. I added the command to webui-user.bat and still get the same error.
@FE-Engineer 2 months ago
This is very old code. Check the video description for links to updated code.
@Limmo1337 2 months ago
@@FE-Engineer I did the updated version and it works now, but it won't go into quick mode; it just goes into slow mode and takes forever.
@팟-i1r a year ago
When I click (Optimize model using Olive) I get this error: "AttributeError: 'NoneType' object has no attribute 'lowvram'"
@FE-Engineer a year ago
Yep. Something broke in it in the last few days. Trying to figure out what's broken.
@팟-i1r a year ago
@@FE-Engineer Thanks ❤. I bypassed the other error about "this GPU can't use torch" or something by adding (--skip cuda check) to my .bat file. However, I am completely stumped on this new lowvram error; I tried adding either "--lowvram" or "--midvram" to my .bat file, but it didn't help. BTW, there is a newer article on the AMD website: you still need an optimized model, but you no longer run the ONNX backend thing. Now it's the same interface and features as NVIDIA, DPM++ and everything, but since I can't get the optimized model to work, it's using my CPU instead.
@FE-Engineer a year ago
New video showing how to get it working!
@mustafaselimavci4713 10 months ago
I got an error. Help me please, I don't know what to do; I followed your every step. raise RuntimeError( RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
@FE-Engineer 10 months ago
Check the video description
@tomaslindholm9780 a year ago
Seems like "Torch is not able to use GPU" is another common issue that's not resolved. It relates to the CUDA version.
@FE-Engineer a year ago
Perhaps. I've been out and about with family and saw this crop up in the last few days. I have to dig in and see if I can figure out what's going on with it.
@FE-Engineer a year ago
The new video shows how to get it working.
@tomaslindholm9780 a year ago
Sooo much appreciated! I dug in right now and initially confirmed my Miniconda environment was set up with Python 3.10.13. Guess I got it updated by accident before the failed attempt to build SD. @@FE-Engineer
@FE-Engineer a year ago
I ditched Miniconda in favor of reduced complexity this time. I like Anaconda, but for this I wanted to just use default stuff and cut out as much spaghetti as possible.
@TanMan07 a year ago
So I was able to get this going... but now how would I run it again without redoing the steps? New to all of this. I would like a shortcut on my desktop I could just click to run all this.
@FE-Engineer a year ago
Check my other videos for the one about activating conda and running SD in one script.
@evetevet7874 a year ago
It says "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check"
@evetevet7874 a year ago
venv "C:\Users\xgevr\sd-test\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Version: 1.7.0
Commit hash: 25205c9e114a3773f2ce38379f85d18304c34988
Traceback (most recent call last):
  File "C:\Users\xgevr\sd-test\stable-diffusion-webui-directml\launch.py", line 48, in <module>
    main()
  File "C:\Users\xgevr\sd-test\stable-diffusion-webui-directml\launch.py", line 39, in main
    prepare_environment()
  File "C:\Users\xgevr\sd-test\stable-diffusion-webui-directml\modules\launch_utils.py", line 560, in prepare_environment
    raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Press any key to continue . . .
@FE-Engineer a year ago
Hmm, I wonder if something changed in the code.
@evetevet7874 a year ago
@@FE-Engineer I've been trying to set up Stable Diffusion on my AMD GPU since 2021 💀 Please help.
@1684biolab a year ago
@@evetevet7874 After updating to WebUI ver. 1.7.0, I found this comment from lshqqytiger in the SD WebUI DirectML discussion on GitHub; maybe this will help: "As the upstream added --use-ipex, the --backend directml was changed to --use-directml, and the default value was changed to CPU. Please run with --use-directml". So I changed and added COMMANDLINE_ARGS with --use-directml.
@mrmorephun a year ago
I have a (noobish) question... when I close the program and want to restart Stable Diffusion later, what command should I use?
@FE-Engineer a year ago
Check my other videos. I have one specifically about this; I think it's about 3 minutes total. It's very short.
@lordiii1231 a year ago
It does not work for me. I followed your guide step by step, but in my case it won't use or find the GPU and runs torch on the CPU every time. This means I can't use ONNX or Olive at all. Do you know how I could fix this problem? Output when starting webui.bat with --onnx and --backend directml:
No SDP backend available, likely because you are running in pytorch versions < 2.0. In fact, you are using PyTorch 1.13.1+cpu. You might want to consider upgrading. ... You are running torch 1.13.1+cpu.
This is at the bottom of the WebUI: version: 1.6.1 • python: 3.10.6 • torch: 1.13.1+cpu • xformers: N/A • gradio: 3.41.2 • checkpoint:
This leads to an assertion error when using Olive.
@FE-Engineer Жыл бұрын
DirectML is not really using torch. Mine also says torch 1.13+cpu, and mine also shows "no SDP backend available". Those are things that come up for everyone with AMD on this setup.
Alternatives if you are not able to get it working properly: try nod.ai Shark -- I have a video for that. Or set up Linux as a dual boot and run actual ROCm; then you don't have to deal with ONNX and Olive at all and get all of the automatic1111 features. Or simply wait: ROCm 6 is supposed to be out this month, and while it has not been announced, following along with the GitHub progress for ROCm, it looks like they are in fact getting close to being able to run on Windows. With ROCm on Windows, again, you would not need to fiddle with ONNX or DirectML at all.
@lordiii1231 Жыл бұрын
Thank you very much for your time responding to my message :D I figured out why my conversion failed: I used Dreamshaper XL with the XL conversion tool, and not Dreamshaper with the regular conversion. With the regular model it did work. Do you know how to convert those XL models? @@FE-Engineer
@bysamuelneves Жыл бұрын
Remember: the Miniconda version must be the same version of Python installed (both must be 3.10.6)
@FE-Engineer11 ай бұрын
I'm not sure that statement is accurate, but I'm not really big into conda, so it is potentially true. I have just never had a reason to care which version of Anaconda I had; I mostly just needed to make environments inside Anaconda with specific versions of Python to get things to work appropriately.
@jostafa4438 Жыл бұрын
Already installed it successfully, then I upgraded to Win 11 (clean install) and tried to do it again but got this problem:
(sd_olive) C:\Users\Jostafa\stable-diffusion-webui-directml>webui.bat --onnx --backend directml
venv "C:\Users\Jostafa\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Version: 1.7.0
Commit hash: a675a0a5107f39292709a78a2802425f67e4c6c4
Traceback (most recent call last):
  File "C:\Users\Jostafa\stable-diffusion-webui-directml\launch.py", line 48, in <module>
    main()
  File "C:\Users\Jostafa\stable-diffusion-webui-directml\launch.py", line 39, in main
    prepare_environment()
  File "C:\Users\Jostafa\stable-diffusion-webui-directml\modules\launch_utils.py", line 552, in prepare_environment
    run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
UnboundLocalError: local variable 'torch_command' referenced before assignment
@FE-Engineer Жыл бұрын
I'm uploading an alternative here shortly. Sorry, I have to dig into this more to figure out if something has changed, but I'll have an alternative video up on the channel shortly.
@Not_Hans Жыл бұрын
Every time I do this I get an error that says:
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Press any key to continue . . .
I have tried everything (uninstalling, reinstalling, cleaning my directories) and it just keeps giving me this error.
@FE-Engineer Жыл бұрын
It looks like something has broken or changed. I’m not sure what it is yet. I have other alternative videos for shark and comfyui on windows. And rocm on Linux. For now try those because they all should work while I figure out what changed and get a fix or video up about it in the next few days after the holidays
@Not_Hans Жыл бұрын
@@FE-Engineer I am trying to roll back my A1111 right now; we are on version 1.7 and I am wondering if that's what caused the issue. Unfortunate all around. Thanks for the reply, and hopefully we can find a fix.
@1684biolab Жыл бұрын
@@FE-Engineer After updating to webui ver 1.7.0, I found this comment from Ishqqytiger on the GitHub SD webui directml discussion; maybe this will help: "As the upstream added --use-ipex, the --backend directml was changed to --use-directml, and the default value was changed to CPU. Please run with --use-directml" So I added --use-directml to COMMANDLINE_ARGS and it works again, using my GPU.
@Not_Hans Жыл бұрын
@@FE-Engineer I hope you read this! It looks like the issue was with torch-directml. You must edit your requirements_versions.txt file and add the line "torch-directml". After this, add "--use-directml" to your batch command line. You will also have to delete your venv folder and have it redownload. This fixed the issue and I am able to run SD 1.7 with an AMD GPU.
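Spelled out, the steps described in that fix look roughly like this, run from the stable-diffusion-webui-directml folder in a Windows command prompt. This is a sketch: the file and flag names are taken from the comments here (requirements_versions.txt is the usual A1111 name) and may differ on other versions:

```shell
:: 1. Add torch-directml to the pinned requirements file
echo torch-directml>>requirements_versions.txt
:: 2. Delete the venv so it gets rebuilt with torch-directml installed
rmdir /s /q venv
:: 3. Relaunch with the new flag
webui.bat --onnx --use-directml
```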
@Joni17 Жыл бұрын
@@1684biolab THX this helped me!!!
@leandrovargas615 Жыл бұрын
Thank you bro for the fix!!! A big hug!!!
@FE-Engineer Жыл бұрын
Glad it helped!
@JahonCross7 ай бұрын
Is this like a beginner guide to SD? I have an AMD GPU and CPU.
@AdmiralPipito Жыл бұрын
RX 580 works: 20 sampling steps, 512x512 in 15-20 sec. You can add your own args to make it work a little faster; 1024x512 or 1024x640 takes 40-50 sec.
@FE-Engineer Жыл бұрын
That’s awesome! Glad it is working! :)
@BhillyJhordyRamadhan Жыл бұрын
I have a Nitro+ RX 580 SE, but my PC always restarts when I try to generate an image.
@AmandaPeach Жыл бұрын
@@BhillyJhordyRamadhan Open your AMD drivers, go to optimizations, and try lowering your frequency and power bars to -15%, then try again. I had to "underclock" and "undervolt" my GPU to stop it restarting my PC and spinning my GPU fans like an airplane.
@BhillyJhordyRamadhan Жыл бұрын
@@AmandaPeach It works, thanks bro.
@caseydwayne11 ай бұрын
I'm jealous. 2x 8gb RX 570s and I've just torched an entire week trying to get this running ... Tried Windows, Docker, Linux. Closest I've managed is 5-6s/it with pure CPU on Linux.
@TAS2OO6 Жыл бұрын
Could someone please help me? Every time I try to optimize a model in the Optimize ONNX model tab, I keep getting this error:
File "C:\Users\Admin\sd-test\stable-diffusion-webui-directml\modules\sd_olive_ui.py", line 363, in optimize
assert conversion_footprint and optimizer_footprint
AssertionError
I have an RX 6600 8GB, by the way.
Update: I found out that it eats all my RAM, and because of that I get this error. So now I have another question: how can I reduce RAM usage while optimizing the model?
@FE-Engineer Жыл бұрын
Try using the --medvram setting. Is it using up all of your RAM or VRAM?
@TAS2OO6 Жыл бұрын
It's using up all of my RAM, not VRAM. How can I reduce RAM usage while optimizing the model? @@FE-Engineer
@FE-Engineer Жыл бұрын
Oh, I see. You can look through the flags on the GitHub for that repo. I think there is a --lowram flag you can add.
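For reference, the memory-related flags mentioned in this thread are passed on the same command line as the others. A sketch, assuming the launch command used in the video; which flags actually help depends on your hardware and on how the directml fork implements them:

```shell
:: --medvram / --lowvram trade speed for lower VRAM use;
:: --lowram loads checkpoint weights into VRAM instead of system RAM
webui.bat --onnx --backend directml --medvram --lowram
```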
@4MERSAT Жыл бұрын
Why can't I change the image size in the Optimize tab? I can only select 512.
@FE-Engineer Жыл бұрын
You do not need to change it in that tab. The models you are optimizing are most likely trained on 512x512 anyway.
@duckybcky773211 ай бұрын
Every time I get to collecting torch==2.0.1 my computer freezes: I can't move my cursor and the clock on the computer is stuck. Is that normal?
@FE-Engineer11 ай бұрын
No. That is not normal or at least does not happen to me. Sounds like resource constraints maybe?
@Slewed Жыл бұрын
when I try to optimize the model it says ERROR:onnxruntime.transformers.optimizer:There is no gpu for onnxruntime to do optimization.
@FE-Engineer Жыл бұрын
See the multiple comments below about this exact error. That's normal.
@Slewed Жыл бұрын
It doesn't work it says error on the website before it finishes optimizing the model @@FE-Engineer
@ultralaggerREV1 Жыл бұрын
Hey man, I got Stable Diffusion installed on my PC, but here's a massive problem... AUTOMATIC1111 stated that on AMD GPUs, the argument "--medvram" will work for GPUs with 4GB to 6GB and "--highvram" for GPUs with 8GB and above. I own an RX 6600 with 8GB of video memory, but whenever I test the AI with the "--medvram" argument, I end up with a not-enough-memory error. How do I solve this?
@FE-Engineer Жыл бұрын
Try no arguments. Try it with the lowvram argument. The version you are using here uses DirectML and ONNX, so it is a bit different from the normal automatic1111 as far as memory is concerned. You will have to test them out to see which ones work for you.
@ultralaggerREV1 Жыл бұрын
@@FE-Engineer I've been using the lowvram argument and it does work, but the problem is that even with 35 sampling steps, a CFG of 8, and the Karras sampling method, I don't get images as good as the ones my buddies generate. :( I tried without arguments and it gives me an error.
@mareck694611 ай бұрын
@@ultralaggerREV1 Optimizing your models to fp16 helps, but with 8GB, depending on the model you use, you are cutting it awfully close.
@phelix88 Жыл бұрын
Anyone know if training isn't supposed to work on this version? I get an error immediately after it finishes preparing the dataset...
@FE-Engineer Жыл бұрын
I don't know for sure. On the DirectML version there are a lot of things that do not work appropriately. You can also swap over to Linux, where AMD has the ROCm drivers that work. Or you can wait another 2-3 months or so until AMD hopefully finishes getting ROCm ported over to Windows.
@YanTashikan7 ай бұрын
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
@YanTashikan7 ай бұрын
I did as it asked: webui.bat --onnx --backend directml --skip-torch-cuda-test
@YanTashikan7 ай бұрын
And got this:
(Automatic1111_olive) C:\Users\User\stable-diffusion-webui-directml>webui.bat --onnx --backend directml --skip-torch-cuda-test
venv "C:\Users\User\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Version: v1.9.3-amd-12-gf4b8a018
Commit hash: f4b8a018cc47289502587eb05826dec9b1e5127e
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
usage: launch.py [-h] [--update-all-extensions] [--skip-python-version-check] etc.
@Robert306gti11 ай бұрын
Swede here with a problem. I just installed the latest Miniconda because I thought that was what I was supposed to do, but in the end I got an error that it wanted 3.10.6, and I can't find an installer for that. The only one I find is for Python. Am I doing something wrong?
@FE-Engineer11 ай бұрын
I run it on Python 3.10.6. I'm fairly sure the 3.10.6 refers to the Python version.
@FE-Engineer11 ай бұрын
And for the record, I have mostly stopped using Miniconda.
@shakewait7612 Жыл бұрын
Excited for new content! Well done! How do I relaunch the Stable Diffusion URL without reinstalling all over again? Also, about the sampling methods: where are the other samplers like DPM++ 2M Karras?
@FE-Engineer Жыл бұрын
Unfortunately right now, I don't know how many improvements will be made with this repo. It is still actively worked on, but as you can see, some of the samplers are simply missing. I also saw the person who built and maintains this repository is also helping out with SD.Next. I tried SD.Next...and did not find it working as well as I would like, but it is a bit simpler in some respects.
@shakewait7612 Жыл бұрын
Like you I have a 7900 XTX and I REALLY enjoy the speed boost, thanks! Just wish there were more working features inside the UI. You had commented somewhere about the best of both worlds. Excited to see what's in store @@FE-Engineer
@FE-Engineer Жыл бұрын
Same here. As a quick update: I tried installing ROCm on Windows Subsystem for Linux. Cool, it worked. But you essentially can't get the graphics card passed through, or at least not really. So then I was like, well, what about Ubuntu desktop? Then we at least have a working GUI, basically. Ran into a lot of problems there. Blew away the dual-boot Ubuntu desktop. Accidentally wiped an entire hard drive that I was using... and now I'm doing it straight through Ubuntu Server. That's why I have mostly been quiet for a few days: working through some issues so that hopefully I can get a good clean tutorial up showing something worth seeing!
@el_khanman Жыл бұрын
@@FE-Engineer please let us know if you have any success with dual booting. I got it working nicely on my first try, then accidentally broke it, and have not been able to get it working again (even after countless fresh reinstalls).
@Mr.Kat3 Жыл бұрын
So from my understanding, unless I'm missing something, so far I can't get any of my old "Embeddings (Textual Inversions)" to work? And I am assuming they don't work in this version, which is a huge downside for me. Any info you have on this?
@FE-Engineer Жыл бұрын
Run ROCm in Linux if you want to be able to do everything. No optimizations or anything in Linux. Just pure regular automatic1111 and everything works.
@Code_String Жыл бұрын
How does this compare to A1111 with ROCm on Linux? I tried to run the Olive optimization on my G15AE's RX6800m but it never picked that up. Was wondering if it's worth going through after getting a simple Ubuntu setup going.
@FE-Engineer Жыл бұрын
The 7900 XTX just recently got ROCm support. I'm going to try it out and see how they compare. I'm trying to get to that today if I can find enough time.
@deepanyai Жыл бұрын
How was it? I am thinking of buying the card, but there aren't fair comparisons on the internet; all of them test the XTX on DirectML, whereas I am looking for comparisons with Linux ROCm. @@FE-Engineer
@CESAR_CWB Жыл бұрын
Unfortunately it's not working for me anymore either. All of a sudden all I get is this error:
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Press any key to continue . . .
I hope they fix this, and if you know a way to get it working again please let us know.
@FE-Engineer Жыл бұрын
In the meantime. You can run either shark or comfyui on windows.
@CESAR_CWB Жыл бұрын
Yeah yeah, nice tutorials btw. Hope to find something to fix this issue tho. Thx dude @@FE-Engineer
@athrunsblade846 Жыл бұрын
What is the reason for having far fewer sampling methods on the AMD version? Or is there a way to install more? Thanks for the help :)
@FE-Engineer Жыл бұрын
I have new videos coming out that should help with this. As for the specific question you asked: I'm not sure how much support this DirectML fork of automatic1111 is receiving these days. I know the person who built it is also helping out with the SD.Next project, hence why I'm not entirely sure how much more support this fork is really getting. I hope this information helps. Also, I have new videos coming out about running natively with ROCm on AMD cards.
@zackbum9159 Жыл бұрын
This worked for me. However, I've got some problems with extensions and models. Extensions like "ReActor", "FaceSwapLab", and "Agent Scheduler" are not working correctly. And I have downloaded a new model from Civitai; it's a safetensor file. Where do I need to copy this file, and how can I optimize it? Do I get these problems because of this special version of SD or because of the AMD card? Do you have experience with this?
@FE-Engineer Жыл бұрын
Yes. Unfortunately ONNX models have a lot of known issues, and many of the parts do not work appropriately on DirectML. If you want the full set of features, at the moment the only real way is running on ROCm in Linux. AMD is working to get ROCm on Windows, but that could easily be a few months away still.
@zackbum9159 Жыл бұрын
@@FE-Engineer Ok, thanks. So I could install Linux as a second OS and run ROCm there? And then I can use any model without the optimizing process, and the mentioned extensions will also work? Same compatibility as an Nvidia card? I have an AMD 7800 XT; what would you prefer to do, run ROCm on Linux, or buy an Nvidia card and sell the AMD? My main concern is that I can do everything with it that you can do with the GeForce, at similar speed.
@kkryptokayden4653 Жыл бұрын
@@zackbum9159 I run SD on Linux only for generating images, zero issues. I ran ComfyUI on Linux for animations, also zero issues. Give ComfyUI a try, it is amazing.
@djust270 Жыл бұрын
I've followed this guide to a T, but I keep getting this error when trying to optimize a model: "ERROR:onnxruntime.transformers.optimizer:There is no gpu for onnxruntime to do optimization." I've tried searching this error but haven't found a solution yet. Have you encountered it before? I'm using a Radeon 6950 XT with the latest driver, 23.11.1.
@FE-Engineer Жыл бұрын
Yep. It says it for me as well. Just let it continue it will optimize anyway.
@djust270 Жыл бұрын
@@FE-Engineer ok thanks I'll try again. I was only getting 4-5 iterations per second and the images generated were distorted, particularly faces. I'm going to start from scratch and try again.
@FE-Engineer Жыл бұрын
I understand. I have had that happen. Make sure to double check settings, and make sure you are in the text-to-image tab; image-to-image can get really wonky if you click the wrong boxes. When optimizing a model, some basically will just work; others will not (from my testing and playing with it). To start out, definitely try some of the ones I suggested, just to make sure everything is working properly if you can. During optimization you will see the specific error that you mentioned, that ONNX does not have a GPU to use. That is OK; let it finish optimizing. It should work even with that error. It definitely takes several minutes, though.
@FE-Engineer Жыл бұрын
If faces look really weird. Try changing the sampler to Euler or Euler-ancestral and do that once or twice. Sometimes some of the other sampling methods from my experience make horrible faces…
@djust270 Жыл бұрын
@@FE-Engineer Thank you for the tip. Changing the sampler did indeed fix my issue! Also, these videos are great. Thank you.
@Blue_Razor_ Жыл бұрын
Downloading models using the ONNX tab is super slow, and stops about halfway through. Is there a way I can download the file off of huggingface and just copy and paste it into the ONNX-Olive folder? I tried it with a dreamshaper model I already had downloaded but it didn't recognize it.
@FE-Engineer Жыл бұрын
I have not had those problems. I’ve found those tabs to be really finicky and easily get messed up. Sorry I can’t be much more help. Keep trying though.
@SkronkJappleson Жыл бұрын
Thanks, I got it going a lot faster on my RX 6600 because of this
@FE-Engineer Жыл бұрын
That’s awesome! Glad to hear it helped!
@thaido1750 Жыл бұрын
how many it/s does your RX 6600 have?
@SkronkJappleson Жыл бұрын
@@thaido1750 After using it a bit I decided to just use my other machine with an RTX 3060. I could get 2.5 it/s with the 6600 (a little more if I overclocked), and then you have to use their crappier sampling method as well. For comparison, the RTX 3060 gets around 7 it/s with xformers installed, without trying to overclock.
@evetevet7874 Жыл бұрын
@@FE-Engineer help, i get "Torch is not able to use GPU" error plss
@jakeblargh Жыл бұрын
How do I optimize safetensors models I've downloaded from CivitAI using this new WebUI?
@FE-Engineer Жыл бұрын
How to convert civitai models to ONNX! AMD GPU's on windows can use tons of SD models! kzbin.info/www/bejne/mXXVmqV7qdJ4p7s
@yozari4 Жыл бұрын
There is a way to put a model that I have already downloaded?
@FE-Engineer Жыл бұрын
Most models will work. I converted safetensor models from civitai. I even made a video on how to do it. kzbin.info/www/bejne/mXXVmqV7qdJ4p7ssi=h2nK_Ez4-TwXKR6m
@uffegeorgsen372 Жыл бұрын
Thanks, friend. I was about to throw out my AMD card, but then I came across your video. Everything worked; sincerely, thank you!
@FE-Engineer Жыл бұрын
That is fantastic! I am glad it worked! :)
@morgoroth92 Жыл бұрын
During model optimization I always get the same error, "There is no gpu...", and can't do anything. I have a 6900 XT; any suggestion to fix it? I even tried putting models downloaded from civitai into Stable Diffusion's models folder, but it makes weird images.
@FE-Engineer Жыл бұрын
Hard to say. Make sure you have the latest drivers for your video card. Potentially try uninstalling and reinstalling video card drivers. Sorry I haven’t had very many people talk about having that type of issue.
@morgoroth92 Жыл бұрын
@@FE-Engineer I think it's a problem with the model, because I tried with another one and it works. Is there no way to optimize the one downloaded from civitai?
@FE-Engineer Жыл бұрын
There is; I actually just got one converted correctly, and I am making a video about it right now. The optimization process does not use your GPU; it is CPU only. That is both why it takes a while and why you get the "no gpu" error, but if you let it keep going it should be fine. Make sure you have plenty of hard drive space and RAM available: the optimization process blows through a ton of hard drive space temporarily, and eats a ton of RAM.
@morgoroth92 Жыл бұрын
@@FE-Engineer thanks I just saw your new video! I'll give it a try asap
@animarkzero3 ай бұрын
Which is faster, this or ZLUDA with ROCm? 🤔🤔
@lenoirx11 ай бұрын
"Torch is not able to use GPU" any help? Im using an RX 5600 XT
@FE-Engineer11 ай бұрын
The code changed. The command line argument is now --use-directml instead of the --backend directml piece.
@sa2bin90911 ай бұрын
This was the best tutorial I could find at this time for AMD. One question: did you manage to get Stable Diffusion XL models to work? If I put them in the Stable Diffusion folder inside the models folder, the WebUI does not show them.
@FE-Engineer11 ай бұрын
Only when using ROCm. I have not tried with shark. But on windows I have not gotten SDXL to work. :-/
@sa2bin90911 ай бұрын
@@FE-Engineer Me neither. I ended up installing Ubuntu 22.04; it works even better and can use SDXL models.
@ELIASVV Жыл бұрын
Hi, how can I start SD after leaving the anaconda terminal?
@FE-Engineer Жыл бұрын
Automatically activate conda and run your SD from one bat file! Super easy! kzbin.info/www/bejne/rHysopdre6l_pJI Like this :)
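The one-bat-file approach from that video looks roughly like this. A sketch only: the Miniconda install path, the environment name sd_olive, and the webui folder are assumptions based on this tutorial and may differ on your machine:

```shell
:: run-sd.bat (sketch) -- activate the conda env, then launch the webui
call %USERPROFILE%\miniconda3\Scripts\activate.bat sd_olive
cd /d %USERPROFILE%\sd-test\stable-diffusion-webui-directml
call webui.bat --onnx --backend directml
```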
@Bordinio11 ай бұрын
So the optimization limited the sampling methods, right? Karras etc. are gone.
@FE-Engineer11 ай бұрын
Sort of. ONNX doesn't have the ability to use those other samplers, so it's more of an ONNX-format problem than the optimization really. You can run without ONNX mode and then you will have them, but the performance hit is pretty big.
@Bordinio11 ай бұрын
@@FE-Engineer aye, thx for the reply!
@diamondlion47 Жыл бұрын
Good vid man, gotta show support for open source non ngreedia ai. Nice punk btw.
@FE-Engineer Жыл бұрын
Haha thank you. I worked for a crypto company and the designers made punks for everyone who worked there. It’s on some chain, I don’t remember which one though to be honest. And yea. Nvidia cards are good. No doubts there. Their prices are just too high for me to stomach personally. :-/
@evetevet7874 Жыл бұрын
Can I do it on my 6000 series (6900 XT) and get the speed that you get?
@FE-Engineer Жыл бұрын
Yes, it works on a 6900 XT. Speeds will not be as fast, but it will be faster than any other way on Windows, other than maybe Shark.
@yodashi5 Жыл бұрын
It worked, thanks. Now there is another problem: I don't know how to open it again. Every time I use the webui bat I don't get the Olive window. Could you help?
@FE-Engineer Жыл бұрын
Take a look at this. Automatically activate conda and run your SD from one bat file! Super easy! kzbin.info/www/bejne/rHysopdre6l_pJI
@bysamuelneves Жыл бұрын
Here it says: RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
@FE-Engineer Жыл бұрын
Yes, and I updated the video description yesterday to let people know something was wrong, ideally so I did not get comments saying "doesn't work" or "this is broken". I also went to the trouble of linking alternative videos that DO currently work. For now, you will have to wait for me to finish editing the video describing how to get around these errors.
@kidcoal33 Жыл бұрын
Comparing the download steps: using the command, you can see in the video at 9:06 that it installs torch-directml among other stuff (torchvision etc.). I followed everything as described, but this particular package is missing, and I assume it has something to do with the problem. Is there a way to force it to install this package too?
@bysamuelneves Жыл бұрын
@@FE-Engineer bro, I have an AMD Radeon Vega 5. I've been having trouble for more than a week trying to use SD. What can I do?
@raystyles93262 ай бұрын
How do you install the error updates in cmd? I'm so new to this.
@FE-Engineer2 ай бұрын
Error updates is not a package or thing to be installed. When a program hits an error it usually sends out an error message to help users have an idea of what went wrong.
@raystyles93262 ай бұрын
@@FE-Engineer This is where I got to and the error I got... not sure if my GPU can run it... if you have a fix just let me know:
(sd_olive) C:\Users\DJ Dubbii\sd-test\stable-diffusion-webui-directml>webui.bat --onnx --backend directml
venv "C:\Users\DJ Dubbii\sd-test\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.9 | packaged by Anaconda, Inc. | (main, Mar 8 2023, 10:42:25) [MSC v.1916 64 bit (AMD64)]
Version: v1.10.1-amd-11-gefddd05e
Commit hash: efddd05e11d9cc5339a41192457e6ff8ad06ae00
Traceback (most recent call last):
  File "C:\Users\DJ Dubbii\sd-test\stable-diffusion-webui-directml\launch.py", line 48, in <module>
    main()
  File "C:\Users\DJ Dubbii\sd-test\stable-diffusion-webui-directml\launch.py", line 39, in main
    prepare_environment()
  File "C:\Users\DJ Dubbii\sd-test\stable-diffusion-webui-directml\modules\launch_utils.py", line 592, in prepare_environment
    raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
@Benji.v29 ай бұрын
hey there! it says cannot use Gpu.. help me pls
@FE-Engineer9 ай бұрын
Read the video description
@renkun8090 Жыл бұрын
I get an AssertionError during the optimization process. Any ideas?
@FE-Engineer Жыл бұрын
Hard to say. Double check that you did it the way I did it in the video and don’t flip over to other tabs as that sometimes causes problems.
@Kudoxh Жыл бұрын
So do I always need to download models from huggingface AND into the ONNX folder? Does it also work if I simply download a model and place it into models/stable-diffusion? I'm kinda new to this, sorry if it seems like a dumb question.
Edit: another question. To start the webui, is it required to start it via anaconda using "...\webui.bat --onnx --backend directml", or can I simply start it by clicking on the webui-user batch file? And if so, I probably need to add --onnx --backend directml into the arguments section...?
@FE-Engineer Жыл бұрын
Anaconda is required if that is how you set it up (that is how I did it in my video, because it significantly reduces problems and provides consistency; plus, if you do not use it and make any type of mistake, it is time consuming to try to fix). You can download models from just about anywhere, but not every model works 100% of the time, due to the different ways that people configure and encode some models. See the pinned message on this video to get a better idea of how to convert models from civitai, for example.