Thanks so much for this video, much appreciated! Finally a tutorial that actually got me past the "Torch is not able to use GPU" error. For programmers that might all be easy and self-explanatory, but for everyone else it's a real hassle to stand in front of these errors that tell us nothing if we don't speak code. What I can't wrap my mind around is why a multi-billion dollar company like AMD doesn't attach a fix like this at the bottom of their Stable Diffusion tutorial. They must be aware there are issues for many users during install. Anyway, we luckily have helpers like FE-Engineer.
@FE-Engineer 1 year ago
You are very welcome! Thank you for the kind words and support on KZbin! I am hoping to one day have a working relationship with AMD so I can help folks even better with AI things as the software changes in the fast-moving world of AI. Maybe one day? :)
@MrRyusuzaku 1 year ago
Tbh even programmers might not get it in one go, especially if Python is not their thing. I'm one of them; I had a tiny clue, but this video helps a lot.
@kampkrieger 11 months ago
@@MrRyusuzaku Even if Python is their thing, you don't just know how this is supposed to work. I get an error that it cannot find venv/lib/site-packages/pip-22.2.1-dist-info/metadata; I have no folder site-packages and I don't know what it is or where it comes from.
@joncrepeau3510 1 year ago
This is the only way on Windows with an AMD GPU. Other tutorials get Stable Diffusion running, but only on the CPU. I was seriously about to give up hope until I watched this. Thank you.
@FE-Engineer 1 year ago
Glad it worked for you and you were able to get up and running! Thanks for watching!
@ml-qq5ek 10 months ago
Just found out about Olive/ONNX. Thanks for the easy-to-follow guide; unfortunately it doesn't work anymore. Looking forward to the updated guide.
@lurkmoar4 1 year ago
Thanks for the tutorial, it's the best one I've seen so far and everything works great
@FE-Engineer 1 year ago
You are welcome. The code changed a few days ago and most people's setups broke. Depending on what you had, it could be fixed several ways, but this seemed the most bulletproof way to make a video saying "do this and it should work."
@chris99171 1 year ago
Thank you @FE-Engineer for taking the time to make this tutorial. It helped!
@FE-Engineer 1 year ago
Glad that it helped! Thank you for watching and supporting my work. It means the world to me!
@adognamedcat13 11 months ago
I was wondering if you could help me with an interesting issue. After following the steps, it kept telling me that --onnx was an unknown argument. I heard somewhere that with the newest update onnx didn't need to be included as an argument, so I deleted it from the webui-user.bat args line. To my surprise the webui booted as normal, though there was no sign of Olive or, predictably, ONNX. Now I'm getting around 1.5 it/s and I have the same exact card as you. On the plus side I have DPM++ 2M Karras now, and it does *technically* work, but the speeds are ridiculously slow. Thanks for any/all help, and thanks a million for making this series, you're the man! Update: to clarify, the error I get if I try to launch it the way you described is 'launch.py: error: unrecognized arguments: -'
@Vasolix 11 months ago
I have the same error, how do I fix it?
@FE-Engineer 11 months ago
Remove --onnx. They changed the code; it is no longer necessary.
@williammendes119 11 months ago
@@FE-Engineer But when SD starts we don't have the Olive tab
@whothefislate 11 months ago
@@FE-Engineer But how do you get the ONNX and Olive tabs then?
@tomlinson4134 11 months ago
@@FE-Engineer I have the exact same issue. Do you know a fix?
@scronk3627 1 year ago
Thanks for this! I ended up not having to comment out the lines in the last step; the optimization worked without it.
@FE-Engineer 1 year ago
You are very welcome! And that is awesome. I'm seeing mixed comments about it: some people still run into it, others don't. Probably differences in what code people have pulled. But I'm glad it worked for you and you didn't have to put in that hacky fix. Thank you for watching!
@patdrige 1 year ago
You, Sir, are the MVP. You not only showed how to install but also how to troubleshoot errors step by step. Thanks!
@FE-Engineer 1 year ago
You are welcome! I’m glad it helped. Thank you for watching!
@patdrige 1 year ago
@@FE-Engineer Do you have, or plan to make, a guide for text2text AI on AMD?
@zengrath 11 months ago
Dude, you have no idea how long I've been trying to get Automatic1111 on Windows with my 7900 XTX; the conclusion everywhere has been "use Linux." I'd seen AMD's post about how it works on Windows with Olive, yet it wouldn't work for me after hours of trying. Your video finally got it working for me. The key part was not using the skip-CUDA command; nothing I'd seen anywhere showed how to properly fix this until your video. Funnily enough, I didn't have some of the errors you did, but maybe they updated some things since this video, or I had already installed some of those things, not sure. Thank you so much. I've been using Shark and it's such a pain to use: every model change, every resolution change, every LoRA requires recompiling. It's a nightmare, and it doesn't appear to have as many options as Automatic1111. I hear we still can't do LoRA training, but hopefully that comes later.
@FE-Engineer 11 months ago
Yea, honestly, I love that Shark kind of just works, but I cannot stand using it. It takes forever. If you want to load one model, keep one image size, and just generate image after image, it's OK. But if you want to jump around, change models, change image sizes, then Shark is crazy slow. You are very welcome! I'm glad you got it working, thank you so much for watching!
@zengrath 11 months ago
@@FE-Engineer I actually switched to ComfyUI thanks to your other video, and while it may be a little slower, it's still good enough on the 7900 XTX, and inpainting, img2img, LoRAs, and all that work, which they didn't in Automatic1111. So much better for me than Automatic1111 on Windows so far. I'm hoping it improves even more; I noticed some plugins not working when following a tutorial, but at least the basics work.
@dangerousdavid8535 1 year ago
You're a life saver! I couldn't get the ONNX optimization to work, but now it's all good, thanks!
@FE-Engineer 1 year ago
Yea, I suddenly started getting a lot of comments about things being broken. So as soon as I could really dig in and figure out how to at least get people up and running, I tried to get something out to give people at least a shot at a working setup for now.
@on.the.contrary 11 months ago
Hi, I did just as the video shows and I got the problem "launch.py: error: unrecognized arguments: --onnx". Has anyone hit and fixed this?
@CANDLEFIELDS 11 months ago
Been reading all the comments for the past half hour... somewhere above FE-Engineer says that it is not needed and you should delete it. I quote: "Remove --onnx. They changed code. It is no longer necessary."
@nangelov 10 months ago
@@CANDLEFIELDS If I remove --onnx, I no longer have the ONNX and Olive tabs and can't optimize the models.
@ca4999 10 months ago
@@nangelov Same problem, sadly.
@nangelov 10 months ago
@@ca4999 I surrendered and decided to buy a used 3090. There are plenty available in Europe for about 600 euros, and it is like 30 times faster, if not more.
@ca4999 10 months ago
@@nangelov The sad thing is, I somehow got it to work after 5 hours just to realize that the hires fix doesn't currently work with ONNX. Should've gone the Linux route from the beginning. That's a very solid price for a 3090, congrats ^^ Just out of curiosity, since I'm also located in Europe, where exactly did you buy it?
@xCROWNxB00GEY 1 year ago
You are honestly my hero. I am still getting a lot of weird errors, but everything is working.
@FE-Engineer 1 year ago
Yea, I mean, fair warning: this literally disables some logic for the lowvram flag. For real, stuff could break. But maybe "some things potentially breaking" seems better than "it straight up won't work" 😂
@xCROWNxB00GEY 1 year ago
@@FE-Engineer I do prefer it running with constant warnings over errors that prevent me from running it at all. Do you still use it this way, or are you using an alternative? I just started with AI images and could use any input. Because I have a 7900 XTX, I feel like there are no options.
@EscaExcel 1 year ago
Thanks, this was really helpful. It was hard to find a tutorial that actually gets rid of the torch problem.
@FE-Engineer 1 year ago
Glad this helped and worked! I agree, it's difficult to find good information and things that actually work.
@ЛамтТ 1 year ago
Thank you! After 2 days of struggling, the problem is gone!
@FE-Engineer 1 year ago
I’m glad it helped! Thank you for watching!
@le_crispy 1 year ago
I never comment on videos, but you fixed my issue of Stable Diffusion not using my GPU. I love you.
@FE-Engineer 1 year ago
I’m glad it helped and fixed your problems! Thank you so much for watching!
@LeitordoRedditOficial 11 months ago
If you get the error "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check", then add --use-directml --reinstall-torch to the COMMANDLINE_ARGS in the webui-user.bat file (open it with Notepad). This way SD will run off your GPU instead of your CPU. After one launch, remove --reinstall-torch. Remember, the arguments go in without quotation marks. Please share this in more videos to help more people.
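To make the edit concrete, here is a rough sketch of what the relevant part of webui-user.bat could look like after that change. This is an untested example, not taken from the video; the empty `set` lines are just the file's usual defaults:

```shell
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem First launch only: --reinstall-torch forces pip to swap in the DirectML torch build
set COMMANDLINE_ARGS=--use-directml --reinstall-torch

call webui.bat
```

After one successful launch, edit the line back to `set COMMANDLINE_ARGS=--use-directml` so torch is not reinstalled on every start.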
@TPkarov 9 months ago
Thanks friend, you're a true friend!
@LeitordoRedditOficial 9 months ago
@@TPkarov You're welcome, friend. To be honest, it's best to generate 512x512 images. I have an RX 6800 XT, and often when I try anything bigger it errors out at 99%, and I've waited all that time for nothing, hahaha. But if yours is an AMD 7000-series card, it may work with bigger images.
@FostPhore 4 months ago
@@LeitordoRedditOficial Quick question: should I still add --onnx? Because otherwise it seems I can't use Stable Diffusion.
@yannbarral7242 1 year ago
Super helpful, thanks a lot!! The --use-directml in COMMANDLINE_ARGS was what I was missing for so long. You helped a lot here. If it helps others with random errors during installation and "Exit with code 1": what worked for me was turning off the antivirus for an hour.
@FE-Engineer 1 year ago
Interesting about the antivirus. Which antivirus do you use? Glad this helped. Most folks could probably just swap their command-line arguments to --use-directml and it would probably work. Unfortunately, when I make a video, in order to avoid a mountain of "doesn't work" comments, I try to balance between what will fix it for most folks and including the information that should fix it entirely for 99.99% of folks. And of course, people have different code from different points in time, different systems, different Python versions, etc. So I try hard to make sure that, if nothing else, blowing it away and starting over should work and fix your problems. Hence why even when a video could be 1 minute with 1 small change, it can easily become 10+ minutes with the handful of "and if you happen to see this..." pieces. :-/ It is a difficult balancing act.
@FE-Engineer 1 year ago
Thank you for the kind words, I am glad this helped you. Thank you for watching!
@lenoirx 1 year ago
Thanks! After 3 days of trying workarounds, this guide finally worked!
@FE-Engineer 1 year ago
Yea, the changes they made were really kind of irritating, and while they are documented, a lot of people didn't really see how to fix it easily.
@NewHaven321 11 months ago
I get the following error when running the webui-user.bat file: "launch.py: error: unrecognized arguments: --onnx". I can still run if I remove the --onnx parameter, but then I have no Olive or ONNX tab in the interface. Appreciate any input here.
@MattStormage 11 months ago
Same here
@IJN-Yamato 11 months ago
Hi! An update has been released, and ONNX support is now installed automatically; it no longer requires an argument in webui-user.
@Omen09 11 months ago
@@IJN-Yamato But it doesn't show ONNX in the GUI
@GenericYoutuber1234 11 months ago
@@Omen09 You can fix this now with git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25. This will go back to the old version before the update. You may need to delete your requirements file with the change that adds torch-directml before doing it. Then run webui-user.bat after changing it to include the command-line parameters --use-directml --onnx. This will give you the ONNX tab like before, where you can follow the video from around the 8-minute mark.
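The rollback described above would look roughly like this from a command prompt inside the webui folder. The commit hash is the one quoted in the comment; treat the rest as an untested sketch (in particular, discarding local edits to requirements.txt with `git checkout --` is one way to avoid git refusing the checkout because of local changes):

```shell
cd stable-diffusion-webui-directml
rem discard local edits to the requirements file so the checkout is not blocked
git checkout -- requirements.txt
rem roll the working tree back to the pre-update commit
git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25
rem relaunch after setting COMMANDLINE_ARGS=--use-directml --onnx in webui-user.bat
webui-user.bat
```

Note this leaves the repository in a detached-HEAD state; `git checkout master` (or the branch you were on) would return you to the latest code later.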
@FranciscoSalazar-qi4mw 11 months ago
I have the same error; has anyone been able to fix it? As they say in the comments, it is supposed to be automatic now, but the ONNX tab does not appear.
@rikaa7056 1 year ago
Thank you man, all the other tutorials on YouTube were useless. My CPU was at 99%; now, thanks to your fix, my RX 6600 XT GPU is doing the heavy lifting.
@FE-Engineer 1 year ago
Nice! Glad it helped! Thank you for watching!
@Djangots 6 months ago
Many thanks! Your guide was very helpful; just the first 10 minutes were enough.
@magnusandersen8898 10 months ago
I've followed all your steps up until the 8:00 minute mark, where, after running the webui-user.bat file, I get an error saying "launch.py: error: unrecognized arguments: --onnx". Any ideas how to fix this?
@FE-Engineer 10 months ago
Remove --onnx
@Thomas_Leo 1 year ago
Thank you so much! This was the only video that helped me. Liked and subscribed. 👌
@FE-Engineer 1 year ago
I’m glad this helped! Thank you so much for your support!
@ktoyaaaaaa 9 months ago
Thank you! It worked
@FE-Engineer 9 months ago
:):) Glad you got it working! Thank you for watching!
@nickraeyzej578 11 months ago
This worked great in 12/2023. The latest automatic conversion changes simply do not work and end up corrupted at random. Even when it does work, it redoes the conversion for every single change you make to the image resolution. Is there a way to git clone the project version from when this method was perfectly fine, back when we had the ONNX/Olive conversion tab and one conversion per safetensor covered all resolutions on its own?
@tomaslindholm9780 1 year ago
You were quick in some parts, but for the "entire" server restart (Terminate batch job (Y/N)?), just hit Ctrl+C. Thank you so much for this fix-the-guide guide. Hero!
@FE-Engineer 1 year ago
😂😂 I was not going to make a video, but I decided to start from scratch and figure out all the trouble spots, and I thought, mmmm, I'll get too many comments about people having weird troubles, and it's hard to explain some of it over text. And yea, I try not to go too fast, but I also try to avoid pointlessly lingering. I tend to record and get a bit too in-depth and off-topic, and in editing I usually cut most of that out. It's just the way I naturally talk versus the cleanest way to do a how-to. It's a process. I really am trying to make it more of a reflex, more natural for me, to do these without going too far off and also not going too fast. :-/
@tomaslindholm9780 1 year ago
@@FE-Engineer Well, as a former system engineer, I understand you must have a great deal of confidence to do what you did, considering the promising title of your video. Brave and good! Thank you for sharing your skill with the rest of us kamikaze engineers. (BTW, doing it inside a VM and just making or breaking it seems like a good approach.)
@MasterCog999 1 year ago
This guide worked great, thank you!
@FE-Engineer 1 year ago
You are welcome! Thank you for watching!
@СергейКозлов-л4п 1 month ago
Thanks so much for this video!!!
@AdemArmut-g5p 1 year ago
Thank you so much. You have helped so many people with this video!
@FE-Engineer 1 year ago
I’m glad it helped you!! Thanks so much for watching!
@BOIWHATmusic 11 months ago
I'm stuck on the "installing requirements" line; it's taking a really long time. Is this normal?
@FE-Engineer 11 months ago
Depends on your internet connection and some other things, but yes, it is not exactly fast.
@nourel-deenel-gebaly3722 10 months ago
Thanks a lot for the tutorial. It worked, but without the ONNX stuff, unfortunately. Patiently waiting for your new video on this matter.
@FE-Engineer 10 months ago
It’s so much better too!
@FE-Engineer 10 months ago
Sorry about the wait, though. Sick daughter, sick son, surgery for my son, hospitalization for my son. It's... busy. Plus work and life and all that. Still, I do apologize wholeheartedly for the wait.
@nourel-deenel-gebaly3722 10 months ago
@@FE-Engineer No need to apologize, you're literally amazing. Hope all goes well for you. I'll still be using this old and slow method, since the new video is for higher-end cards and I have more of a potato than a GPU 😅, but hopefully I upgrade soon and benefit from this ❤️
@Reaperdarkhorse 1 year ago
Thanks man, you helped a lot. Much appreciated for your time and effort.
@FE-Engineer 1 year ago
You are welcome! Thanks so much for watching!
@pack9694 11 months ago
Thank you for helping me fix the Olive issue, you are amazing
@FE-Engineer 11 months ago
I’m glad this helped! Thank you so much for watching!
@RobertJene 11 months ago
10:20 Use Ctrl+G to jump to a specific line in Notepad
@faridabdurrahman6025 11 months ago
Help me, I got an error when I use the argument --onnx; it says "launch.py: error: unrecognized arguments: --onnx"
@FE-Engineer 11 months ago
Remove --onnx. They changed the code again.
@Spaceguy 11 months ago
@@FE-Engineer It works, thank you
@mrsir92 11 months ago
It starts up for me but throws a bunch of errors in the UI. I try enabling ONNX but it doesn't do anything; I'm not able to see an ONNX tab. "ERROR: Exception in ASGI application". Any ideas?
@IJN-Yamato 11 months ago
Errors in the interface are a bug in the new version. Unfortunately, the author of webui-directml has nothing to do with this. They do not interfere with use, but they are an interface defect. About the ONNX tab: I have the same problem. Rolling back to the previous version will help.
@mrsir92 11 months ago
@@IJN-Yamato Thanks! This helped. I'd been fighting with this for a couple of days now 😅
@FE-Engineer 11 months ago
Yes, I have started getting random reports from folks saying things are not working. I always know the code changed when I start getting several of these comments a day. :-/
@nielsjanssen2422 11 months ago
@@IJN-Yamato Hey man, I can't seem to figure out HOW to fall back to an earlier version 😅 Can you explain? Is it on the GitHub of automatic1111-directml? I can't seem to find an "earlier" version.
@amGerard0 1 year ago
This is great! Thanks for the excellent video. I went from ~4 s/it to ~2 it/s on a 5700 XT, so *much* faster!
@FE-Engineer 1 year ago
Yay! I’m glad it helped! Thanks so much for watching!
@sanchitwadehra 11 months ago
My 6600 XT went from 1.75 it/s to 2 it/s. Did you do something else? Could you please give me some recommendations on how you increased it so much?
@amGerard0 11 months ago
@@sanchitwadehra Make sure you have no other versions of Python, only 3.10.6. When I had other versions it just didn't work; maybe if you have another version it's slowing it down? Other than that I'm not sure. I only use: set COMMANDLINE_ARGS=--use-directml --onnx. If you're using medvram or something, remove it and try again. Depending on the model it can be slower; if you're using a really big model, that can affect it, and certain sampling methods are faster than others too. Likewise, if you are trying to generate images bigger than 512x512 (e.g. 768x512), it will struggle. Try another model and see if it's just that, then try every sampling method available (about 5 worked for me; the others were a total artifact-ridden mess).
@sanchitwadehra 11 months ago
@@amGerard0 Maybe it's the Python version problem, as my PC has the latest Python version and I installed A1111 in a conda environment with Python 3.10.6. I also have ComfyUI on my PC in a different conda environment with Python 3.10.12. Maybe I will try the whole process again after deleting everything from my PC. Thanks for sharing.
@NA-oe5jj 11 months ago
You solved the exact problems I had. Thanks for the true best tutorial.
@FE-Engineer 11 months ago
You are welcome, I am glad it helped! Thanks for watching
@NA-oe5jj 11 months ago
@@FE-Engineer Woke up today to it no longer working. Why are computers like this? :D When I attempt to use webui-user.bat it says "installing requirements", then "*** could not load settings", then it tries to launch anyway and starts complaining about xformers and CUDA. I think this settings load is the issue. I'll fiddle at lunch, and then after work tonight I'll do a complete reinstall again using your handy guide.
@orestogams 1 year ago
Thank you so much, could not get this maze to work otherwise!
@FE-Engineer 1 year ago
You are welcome! Glad it helped! Thanks for watching and supporting my work!
@Hozokauh 1 year ago
At 7:00 you finally got it to skip the torch/CUDA test error. For me, however, it did not resolve the issue. I went back and followed the steps twice over, same result: still getting the torch CUDA test failure. Any ideas?
@FE-Engineer 1 year ago
--use-directml in your startup script
@FE-Engineer 1 year ago
I did not skip the torch and CUDA test. From my experience, if you are having problems and skip it, it will never work, because that test is designed simply to check whether it thinks it can run on the GPU.
@Hozokauh 1 year ago
@@FE-Engineer Thank you for the timely feedback! You are the best. Will try this out!
@jordan.ellis.hunter 1 year ago
This helped a lot to get it running. Thanks!
@FE-Engineer 1 year ago
You are very welcome! Thank you so much for watching. Glad it helped!
@evilivy4044 1 year ago
Great tutorial, thank you. How do you go about using "regular" models with the --onnx argument? Do I need to convert them, or should I look for and use only ONNX models?
@FE-Engineer 1 year ago
You have to convert them, basically. Occasionally you can find some models in ONNX format, but it is not really super common…
@Azure1Zero4 1 year ago
Thanks a lot. Something to note: if you don't want ONNX mode enabled, just exclude it from the arguments.
@FE-Engineer 1 year ago
This is true. Removing --onnx allows the other samplers to be used. But for AMD users, the performance hit is a big one.
@Azure1Zero4 1 year ago
@@FE-Engineer That's true. When I try running ONNX-converted models, it won't let me adjust the size of the image for some reason, and they don't seem to produce results nearly as good as non-converted ones.
@Azure1Zero4 1 year ago
@@FE-Engineer I think I might have figured out my issue. I think I'm maxing out my RAM and it's crashing the CMD prompt mid-optimization. Could you do me a favor and tell me roughly how much system RAM you use when going through the optimization process? Going to upgrade and need to know how much.
@Azure1Zero4 1 year ago
In case anyone needs to know, I required 32 GB of RAM to optimize models. So if you don't have that much, you're going to need to upgrade or download an already-optimized model. Something I had to learn the hard way. Hope this helps someone.
@nagilacarla325 11 months ago
I have followed all these steps, but when I click to start the webui bat it gives me this error: "stderr: ERROR: Could not install packages due to an OSError: [WinError 5] Access denied: 'C:\\Users\\a\\stable-diffusion-webui-directml\\venv\\Lib\\site-packages\\onnxruntime\\capi\\onnxruntime_providers_shared.dll' Check the permissions." I have already removed --onnx since I saw it's no longer necessary, but it keeps giving me this error. Could someone help me?
@baka5148 11 months ago
Having the same issue here... really hope a solution turns up soon
@nagilacarla325 11 months ago
@@baka5148 I believe I solved it. I saw some topics with similar problems, and most of them deleted the venv folder and ran webui.bat again, letting cmd recreate the venv folder from scratch. I did it and it solved it; it opened right away.
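For anyone hitting the same WinError 5, the fix described above amounts to roughly the following from a command prompt. An untested sketch assuming the default folder name; note this deletes all packages installed into the venv, which get re-downloaded on the next launch:

```shell
cd stable-diffusion-webui-directml
rem remove the broken virtual environment entirely
rmdir /s /q venv
rem relaunch; the script recreates venv and reinstalls the requirements from scratch
webui-user.bat
```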
@ALKSYM 11 months ago
Add --reinstall-torch to the args and launch webui-user.bat; after the UI has launched, delete the --reinstall-torch arg. Hope it helps.
@nangelov 10 months ago
Sorry to bother you. I've done everything so far, except that when I start the webui, the interface loads but there are no ONNX or Olive tabs. Everything is slow on the RX 6800 XT (1.3 s/it). If I enable ONNX in the settings, I get a "missing positional arguments" error and I can't generate anything. Someone mentioned rolling back to an older UI version, but I don't see how to do that; there are no different versions for this fork.
@Daxter250 1 year ago
That was... the best AND ONLY tutorial I found that worked. My 5700 XT had no problems with Stable Diffusion half a year ago, and then suddenly, poof, some BS about tensor cores which I don't even have. All those wannabes on the internet simply said to delete venv and it would sort itself out. NO IT DOESN'T. This tutorial here does! Thanks for the work you put in! BTW, with those ONNX and Olive models I even went from multiple seconds per iteration to 2 iterations per second O.o, while also increasing the image size!
@DGCEO_ 1 year ago
I also have a 5700 XT; just curious, what it/s are you getting?
@Daxter250 1 year ago
@@DGCEO_ 2 it/s, as written in the last sentence. The image is 512x512.
@FE-Engineer 1 year ago
I’m glad this helped! Thank you so much for the kind words! :) and thank you for watching!
@Verlaine_FGC 11 months ago
I keep getting this error: "launch.py: error: unrecognized arguments: --onnx"
@FE-Engineer 11 months ago
--onnx is no longer needed. They changed the code. Just omit it from the command arguments.
@metaphysgaming7406 11 months ago
Thanks so much for this video, much appreciated!
@FE-Engineer 11 months ago
You are welcome I hope it helped! Thanks for watching!
@chaitanyamore8786 10 months ago
Can you show how to use Dreambooth with AMD? 😅 It's not supporting xformers/torch; I've been trying on different versions many times and it's getting tougher and tougher.
@FE-Engineer 10 months ago
I used Dreambooth on Linux with AMD. You might be able to do it with ZLUDA. Maybe.
@chaitanyamore8786 10 months ago
@@FE-Engineer Does that work on torch 1.31.1? On Windows with AMD? Old Dreambooth?
@mobas07 1 year ago
Whenever I try to optimize any model for Olive, it gets to this part and then gives an error: "[2024-01-07 16:19:04,824] [INFO] [engine.py:929:_run_pass] Running pass optimize:OrtTransformersOptimization Press any key to continue . . ." Anyone know how to fix it?
@FE-Engineer 1 year ago
Strange. I think someone else mentioned this. I might have to dig in and see what's going on, or whether I can recreate it. I finally got the things I wanted done on my website, so that is in a reasonably good place and I can now start getting back to making videos!
@zerohcrows 1 year ago
Did you ever fix this?
@mobas07 1 year ago
Nope
@dr.bernhardlohn9104 11 months ago
So cool, many, many thanks!
@FE-Engineer 11 months ago
Glad it helped! Thank you for watching!
@davados1 11 months ago
Thank you for the tutorial. I got the webui to load up, but I don't have the ONNX and Olive tabs at the top; they're just not there, oddly. Would you know why? Has the webui changed and removed them?
@miosznowak8738 1 year ago
That's the only solution I found that actually works, thanks :))
@FE-Engineer 1 year ago
I’m glad it helped and got it running :). Thanks so much for watching!
@markdenooyer 1 year ago
Has anyone gotten past the 77-token limit on the prompt with ONNX DirectML? I really miss my super-long prompts. :(
@FE-Engineer 1 year ago
Not with this version on Windows yet. :-/
@Maizito 11 months ago
I finally managed to run SD with your tutorial. I have an RX 7000-series card. It didn't let me run with --onnx; I saw in the comments that that argument is no longer necessary, so I removed it from webui-user.bat and SD opens, but it runs very slowly, between 1.5 and 2.5 it/s. Any solution to make it go faster?
@W00PIE 11 months ago
That's exactly my problem at the moment with a 7900 XTX. Really disappointing. Did you find a solution?
@Maizito 11 months ago
@@W00PIE No, I haven't found a solution yet :(
@DrMacabre 11 months ago
Hi, for some unknown reason I'm getting "launch.py: error: unrecognized arguments: --onnx". Everything was working yesterday. I reinstalled Windows and Stable Diffusion on a new SSD and now I'm getting this error. No typo in the bat file.
@DrMacabre 11 months ago
Luckily, I saved yesterday's install and it's working. No idea why today's install doesn't; that's kind of weird. Has anyone managed to load SDXL models with this?
@IJN-Yamato 11 months ago
Hi! An update has been released, and ONNX support is now installed automatically; it no longer requires an argument in webui-user.
@IJN-Yamato 11 months ago
@@DrMacabre Is there any chance of getting your previous version of stable diffusion, i.e. the one you currently have installed? After today's update, I can't use the new version.
@DrMacabre 11 months ago
@@IJN-Yamato Sure, I'll check the size to see if I can upload it somewhere.
@Omen09 11 months ago
You can get the old one with git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25
@astarwolfe1411 1 year ago
I'm not sure if this is a batch issue or a computer issue, but after getting the error and the "Press any key to continue...", when I press any key the prompt closes immediately and doesn't let me type anything in.
@FE-Engineer 1 year ago
I saw someone else mention something similar. I'll go in and take a look when I get some time. I'm not sure if something maybe changed?
@semirvin 1 year ago
I found a solution for that. When you double-click and run webui-user.bat, the console will immediately close. Try running it from a cmd prompt, exactly like in this video. That way the console won't close.
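In other words, open a Command Prompt first and launch the script from there, so the window stays open and the error remains readable. The install path below is an example:

```shell
rem change to wherever the webui was cloned (example path)
cd C:\stable-diffusion-webui-directml
rem launching from an open prompt keeps the output visible after "Press any key"
webui-user.bat
```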
@JustisKai 11 months ago
Everything runs fine until the final launch, where I get "launch.py: error: unrecognized arguments: -onnx". Any advice?
@IJN-Yamato 11 months ago
ONNX support is now installed automatically and does not require an argument in webui-user
@FE-Engineer 11 months ago
Thanks!
@Ranfiel04 11 months ago
If you're having problems with the ONNX tab missing, use this command in the Stable Diffusion folder: git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25. That reverts the new update that has the problem with ONNX.
@tmsenioropomidoro7243 11 months ago
This actually helped. You have to load into your created virtual environment (mine is automatic1111_olive), then go to the folder path with cd (mine is F:\stable... etc.), then use git checkout d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25 in that folder. Then you have to do everything shown in the video again (it will be much faster because most of the stuff is downloaded already, but requirements and webui-user.bat need to be edited again).
@nielsjanssen2422 11 months ago
You two fine gentlemen have gained my respect. THANK YOU, bro, I struggled for hours.
@NielsJanssen-m1n 11 months ago
@@tmsenioropomidoro7243 Well, I thought it worked; the ONNX and Olive tabs are back, but now, when I try to generate in txt2img, I'm getting the error "onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running MatMul node. Name:'MatMul_460' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(2476)\onnxruntime_pybind11_state.pyd!00007FFE8EC9B33F: (caller: 00007FFE8EC9CAA1) Exception(6) tid(1a7c) 80070057 The parameter is incorrect."
@tmsenioropomidoro7243 11 months ago
@@NielsJanssen-m1n Well, I got a similar issue; it's not generating yet and shows some errors. Trying to figure out what is wrong.
@Wujek_Foliarz 11 months ago
stderr: ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'C:\\Users\\igorp\\Desktop\\crap\\stable-diffusion-webui-directml\\venv\\Lib\\site-packages\\onnxruntime\\capi\\onnxruntime_providers_shared.dll' Check the permissions.
@ALKSYM 11 months ago
Add --reinstall-torch to the args and launch webui-user.bat; after the UI has launched, delete the --reinstall-torch arg. Hope it helps.
@rivariola 6 months ago
Hello sir, I keep getting an error that is driving me nuts: "DLL load failed while importing onnxruntime_pybind11". Do you know what it means?
@hoangduong206511 ай бұрын
Pls help me, I'm stuck on this after following your video: "Failed to import transformers.modeling_tf_utils because of the following error (look up to see its traceback): No module named 'keras.__internal__'"
@2ubyme11 ай бұрын
Same here. I gave up after trying for 10 hours. I hope that someone finds out how to fix it. If I ever get it working, I won't change anything again.
@chilldesigns525611 ай бұрын
any fix?
@2ubyme11 ай бұрын
@@chilldesigns5256 Nope. I stopped trying until there are more Google results concerning "No module named 'keras.__internal__' ".
@wybo11 ай бұрын
same problem, just commenting in the hopes of an answer
@karikaturdigital612311 ай бұрын
Try this. Open cmd in the SD webui directory, then:
venv\Scripts\activate
pip install onnxruntime-directml
pip install torch-directml
pip install keras
pip install tensorflow
@Meatbix75 Жыл бұрын
Thanks for the tutorial. It certainly got SD working for me, which is excellent. However, the Olive optimisation doesn't seem to have any effect. I could run the optimisation even without modifying sd_models, but it made no difference to performance; I'm getting around 3.3 it/s with either the standard or optimised checkpoint. I've since modified sd_models, but to no effect. GPU is an RX 6700 10GB, CPU is an i5 12400F, 32GB RAM.
@FE-Engineer Жыл бұрын
Hard to say. I’ve found a lot of issues with the optimization. It’s tricky to even get it to work a lot of the time. But if you aren’t seeing any performance increase with it running then my guess is that the model is optimized. If you grab other models you might end up seeing the performance boost. It just probably is that the one you have is already optimized. You are welcome, thank you so much for watching. Sorry I don’t have a better answer to this.
@michaelbuzbee5123 Жыл бұрын
I was having trouble with my A1111 being slow so searching around I found your fix video and decided to do just a clean install. I already downloaded a bunch of models though, how does one run them through onnx? And I am assuming I can no longer just add the models to the stable diffusion folders anymore? I think my PC specs are the same as yours.
@FE-Engineer Жыл бұрын
So you need to optimize them for Olive and ONNX. I have a pretty short video about this. You should be able to just optimize them from your normal models folder. Once optimized they will be in onnx or olive-cache I think are the folder names. But yes you can use them. Just not SDXL models. I have yet to get SDXL to work correctly with directML and ONNX. :-/
@lucianoanaquin4527 Жыл бұрын
Thanks for the amazing tutorial bro! I only have one question, watching other videos I noticed that they have more sampler options, what do I have to do to have them too?
@FE-Engineer Жыл бұрын
The other samplers don’t work in this version with onnx and directml. So options are. Run ROCm on Linux. Or wait for ROCm on windows when we can just use the normal automatic1111 without needing directML and onnx.
@aadilpatel6591 Жыл бұрын
Great guide. Thanks. What are the chances that we will be able to use reactor (face swap) or animatediff with this repo?
@FE-Engineer Жыл бұрын
You are welcome! Thank you for watching! My guess is not very good…most of the extensions don’t play well with ONNX and directml. Plus my guess is that no one is really working on trying to get them to work with ONNX and directml really. :-/ You can always try. I just have had very little luck with very many extensions that like “do things”.
@aadilpatel6591 Жыл бұрын
@@FE-Engineer will they be usable once ROCm is ready for windows?
@Oren918611 ай бұрын
I'm at the step at 6:54, and when I run webui-user.bat I get this error: "launch.py: error: unrecognized arguments: --onnx". How can I fix it? I can run it without the "--onnx" argument and it starts up fine. Any help fixing this issue would be appreciated.
@canalpan911 ай бұрын
same problem
@FE-Engineer11 ай бұрын
Code changed. Just remove --onnx. It is not necessary anymore.
@wassup647211 ай бұрын
@@FE-Engineer When removing --onnx, should we also skip the ONNX part where you optimize? Because I did remove --onnx, and Olive doesn't show up in stable diffusion
@pankeczap9 ай бұрын
@@FE-Engineer Glad I found this, I was a little confused. You should probably edit the description and leave a note for others less fortunate. Ty
@Grendel43011 ай бұрын
Thank you!
@FE-Engineer11 ай бұрын
No problem! Thanks for watching!
@PuMa10w10 ай бұрын
I get "launch.py: error: unrecognized arguments: --onnx" on the final step. What should I do now?
@FE-Engineer10 ай бұрын
Remove --onnx
@DJ_Kie6 ай бұрын
@@FE-Engineer Bruv, it took me like 20 minutes to work out what you meant haha. @PuMa10w you need to edit the webui-user.bat file and remove the --onnx part from "--use-directml --onnx". If you are a dum dum like me, I hope this helped
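To spell out the fix from this thread: open webui-user.bat in a text editor and delete --onnx from the COMMANDLINE_ARGS line, leaving everything else untouched:

```shell
rem before (launch.py now rejects this):
rem set COMMANDLINE_ARGS=--use-directml --onnx

rem after:
set COMMANDLINE_ARGS=--use-directml
```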
@waltherchemnitz7 ай бұрын
What do you do if, when you run venv, you get the message "cannot be loaded because running scripts is disabled on this system"? I'm running the terminal as Administrator, but it won't let me run venv.
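That message is PowerShell's execution policy refusing to run the venv activation script (Activate.ps1); it is not about Administrator rights. A common workaround (at your own discretion, since it loosens a security setting) is to allow locally created scripts for your user, or only for the current session:

```shell
# PowerShell: allow locally written scripts for the current user
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

# or, less invasively, only for this PowerShell session:
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process
```

Alternatively, activate the venv from plain cmd with venv\Scripts\activate.bat, which sidesteps the PowerShell policy entirely.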
@mgwach Жыл бұрын
Thanks!! Got everything up and running. Question though.... do you know if LoRAs are supposed to work with Olive yet?
@FE-Engineer Жыл бұрын
No idea. My guess would be no. And to be clear. I am 99% sure ONNX does not care but automatic1111 with directML is probably not setup to support it most likely.
@mgwach Жыл бұрын
@@FE-Engineer Gotcha. Okay, thanks for the response. :) Yeah it seems that whenever I select a LoRA it's not recognizing it at all and none of the prompts make any difference for it.
@matthieu3967 Жыл бұрын
Thanks for the video but do you know how to add sampling methods ?
@FE-Engineer Жыл бұрын
Don’t use ONNX or go ROCm in Linux. You can’t use the other samplers with ONNX.
@sanchitwadehra11 ай бұрын
wow thanks dhanyavad
@FE-Engineer11 ай бұрын
You are very welcome! Thanks so much for watching!
@halilyldrm74511 ай бұрын
Bro, thank you so much, but I have a problem. It does not use my graphics card and gives me this: 2024-02-03 18:56:37.0992612 [W:onnxruntime:, session_state.cc:1166 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
@FE-Engineer11 ай бұрын
Is this during onnx optimization? If it is during optimization that is ok. It uses cpu only. But image rendering is done with the GPU.
@carterstechnology810511 ай бұрын
Also curious how to optimize my iterations per second. Currently running 3.08 it/s (AMD Ryzen 9 7940HS w/ Radeon 780M graphics, 4001 MHz, 8 cores, 16 logical processors, 64 GB RAM)
@FE-Engineer11 ай бұрын
Using your GPU is the first step to optimizing. Arguments like --no-half are required for some people or some models, but will usually hurt performance. Remember that even the top-of-the-line AMD 7900 XTX gets about 20 it/s currently, so 3 is not necessarily bad, and depending on the resolution of the images it might be very good
@Doomedjustice Жыл бұрын
Hello! Thank you very much for the tutorial, it really helped. I wanted to ask is there any way to use generic sampling methods that are usual for Automatic1111?
@FE-Engineer Жыл бұрын
You have to drop ONNX. But you will take a big performance hit. Or use ROCm on Linux.
@lake370810 ай бұрын
An excellent guide, but I have a question: there's a .safetensors checkpoint that has a config attached in the .yaml format. After optimization, the program stops seeing the config and generates noise. Do you have any idea how to fix this problem?
@FE-Engineer10 ай бұрын
Ohh. Not sure on that one. But I have a new video coming out with a much better way of doing this!
@nienienie75679 ай бұрын
Hey man! Great tutorial! Got any ideas for VRAM usage optimization on AMD? I'm using a modified BAT like below:
set PYTHON=
set GIT=
set VENV_DIR=
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128
set COMMANDLINE_ARGS=--use-directml --medvram --always-batch-cond-uncond --precision full --no-half --opt-split-attention --opt-sub-quad-attention --sub-quad-q-chunk-size 512 --sub-quad-kv-chunk-size 512 --sub-quad-chunk-threshold 80 --disable-nan-check --use-cpu interrogate gfpgan codeformer --upcast-sampling --autolaunch --api
set SAFETENSORS_FAST_GPU=1
It helps a lot, but I still wanna squeeze out more. I'm using an RX 7600 with 8GB VRAM, 32GB RAM
@NXMT0711 ай бұрын
Thanks for the tutorial, it really did work with my RX 580, albeit very slowly. Can you please make a tutorial on how to use huggingface diffusers with automatic1111? I've tried to find the safetensors file and even converted the diffusers into one, but to no avail.
@FE-Engineer11 ай бұрын
Last I knew. Most of the additional pieces of automatic 1111 will not work with ONNX. They might work with only directml. But it has a big performance penalty. Overall for AMD. Your best bet right now is ROCm on Linux. Slightly slower than onnx and olive but all the functionality works correctly. Also nice that you don’t have to fiddle with converting to onnx and the headache that comes with all of that and what does and does not work etc. :-/
@NXMT0711 ай бұрын
@@FE-Engineer Well, I heard that ZLUDA is enabling CUDA on AMD GPUs, so ONNX shouldn't be a problem after a period of development on Windows. I have managed to play around with it and can confirm it does indeed work with CUDA-related programs; I haven't got it to work with Automatic1111 though. Still, my trouble with the huggingface diffusers remains unsolved. I think it is an entirely new problem
@EricFluffy Жыл бұрын
Is there a fix for AssertionError when trying to optimize SDXL models? It works perfectly fine for SD and ONNX models, but it can't seem to optimize SDXL models
@FE-Engineer Жыл бұрын
I have never gotten sdxl models to optimize correctly. So no fix that I am aware of. And I tried a fair number of things. :-/
@EricFluffy Жыл бұрын
@@FE-Engineer I see. Is there any chance you could make a tutorial on how to convert Civitai models based on AMD's latest AI blog post where they outline using Olive and the DirectML extension? It seems like Olive can optimize SDXL models, but it currently doesn't work with the extension, and for the life of me I can't figure out how to make it work with locally downloaded/Civitai models. It seems like more tedium, but if it can convert SDXL models, I'm alright with it.
@EricFluffy Жыл бұрын
Also, I'm running into a weird issue where embeddings aren't showing up at all in the textual inversion tab. I tried removing them all from my device and from a different drive, but the same message telling me where to put embeddings shows up.
@arcadiandecay1654 Жыл бұрын
This has been a lifesaver, thanks! One thing I did notice after I got this working (perfectly, actually) is that some sampling methods are missing, like DPM++ SDE Karras. Do you know if that's something that could be manually installed? I tried doing a git clone of the k-diffusion repo and a git pull, but that didn't get them to show up.
@FE-Engineer Жыл бұрын
Yea. They don’t work with ONNX. :-/
@arcadiandecay1654 Жыл бұрын
Oof lol. Thanks! Well, I'm going to count my blessings, since I was floundering before finding this tutorial. I have Linux on a couple other disks and one of them is Ubuntu, so I'm going to install it on that, too.
@raystyles93263 ай бұрын
Thanks a lot, I got it to work without downloading ONNX... ONNX was giving problems
@raystyles93263 ай бұрын
have a good day really appreciate it
@FE-Engineer2 ай бұрын
You are welcome. There have been a ton of updates and code changes. Most folks use zluda or rocm for running AMD cards for stable diffusion. So ONNX is no longer as necessary as it was before.
@obiforcemaster8 ай бұрын
This no longer works, unfortunately. The --onnx command line argument was removed.
@Phoenix-e3h11 ай бұрын
Thanks for the video. May I ask what's your GPU and how's the performance? Cheers!
@@FE-Engineer I'm interested to know whether you really need an Nvidia GPU or whether AMD is enough. Perhaps a good video to make in the future where you compare the two GPU makers? Thanks!
@gkoogz9877 Жыл бұрын
Great video. Any tips to use more than 77 tokens with this method? It's a critical limitation.
@FE-Engineer Жыл бұрын
You can use it without ONNX but performance takes a big hit. Or run it with ROCm on Linux. Or wait for ROCm on windows whenever that will be.
@chrisc4299 Жыл бұрын
Hello, thank you very much for the video. I have a question: how can I use a VAE with the optimized models? Do you have to transform it too? I'd appreciate your help, since placing the VAE in the regular folder does not apply it to the generation
@FE-Engineer Жыл бұрын
You will need to run ROCm in Linux to get full functionality like that.
@livb413911 ай бұрын
Can you make a vid on how to make Ollama run on an RX 7900 XTX?
@terraqueojj Жыл бұрын
Good evening, thanks for the Video, but the problems with ControlNet and Image Dimensions continue. Do you know if there is any update for this in the pipeline?
@FE-Engineer Жыл бұрын
I do not, although I am somewhat unsure how much more support this fork of automatic1111 will ultimately get. I think it's just a bit of a waiting game for ROCm on Windows
@Justin141-w3k Жыл бұрын
This is the only tutorial that has worked.
@Justin141-w3k Жыл бұрын
New issues. I managed to generate an image of a car though.
@FE-Engineer Жыл бұрын
You are seeing new issues?
@Justin141-w3k Жыл бұрын
@@FE-Engineer Regarding the only valid links being Hugging Face.
@Justin141-w3k Жыл бұрын
After optimizing I receive this error: InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from C:\AI\stable-diffusion-webui-directml\models\ONNX-Olive\stable-diffusion-v1-5\unet\model.onnx failed:Protobuf parsing failed.@@FE-Engineer
@JosephSmith-v1p11 ай бұрын
Thanks so much for the video! I wonder why I need an Internet connection when converting "normal" models (with the .safetensors file extension). Due to my poor network, Python always raises a "ReadTimeout" error whenever I click the "Convert & Optimize checkpoint using Olive" button. Do I need to download something else to convert a model? I thought I only needed my own GPU for the compute.
@FE-Engineer11 ай бұрын
That is interesting. I did not know it needed to get anything from the internet. I am not sure to be honest. Are you running it on like an old spinner hard drive? Is it possible that the read timeout is from your disk drive?
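A possible network-side explanation: the first conversion of a checkpoint typically pulls tokenizer and config files from Hugging Face, and on a slow connection that fetch can time out. If the download goes through huggingface_hub (which is an assumption about this webui's optimizer), you can raise its download timeout with an environment variable before launching:

```shell
rem Windows cmd -- give Hugging Face downloads more time (value in seconds)
set HF_HUB_DOWNLOAD_TIMEOUT=120
webui-user.bat
```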
@carterstechnology810511 ай бұрын
Would you be willing to create a video of the same type for running a local LLM, like a local ChatGPT? Is that possible?
@FE-Engineer11 ай бұрын
What do you mean by a local LLM. Do you mean training one? Or just running local LLM models?
@acho97x5 ай бұрын
In cmd, when I press any key it closes the window. Which key do you press?
@mojlo4ko998 Жыл бұрын
legend
@FE-Engineer Жыл бұрын
😂 thank you! I hope this helped!
@hifidrache53665 ай бұрын
I had been considering giving you a thumbs up, but since this guide doesn't work at all, I'd rather not. There are so many things that don't work; --onnx is just one example. (There isn't a single guide on the entire internet using dml and olive. But surely you have a special version. No, your version is definitely not in the Git repo, otherwise I'd ask for the exact branch.) I get so many errors with this guide. Then I launch without onnx, because that doesn't work, and get speeds you'd want to run away from. Really a shame that you don't offer your version for download. (--use-directml --no-half --opt-sub-quad-attention --disable-nan-check --autolaunch) Yeah, 6 it/s instead of 19. Really great... slow
@Krautrocker Жыл бұрын
Soooo, I initially installed automatic1111 using your first video on the matter, which was troubleshooting the official guide. Before I tear that down and reinstall the whole jazz: what exactly is different? Does this fix lift the limitations (like high-res stuff not working), or is it 'just' about running more stably?
@FE-Engineer Жыл бұрын
No, there was an update recently -- for many folks it broke. In the video description I tried to be clear saying if your setup works fine, don't bother with any of this. This is just to get things working for folks who got a new github update to the code and everything entirely broke and they were not able to use it at all.
@_JustCallMeRex_11 ай бұрын
Greetings, FE-Engineer. I would like to ask something in regards to using prompts longer than 77 tokens. So I was able to follow the steps of this whole video but, there's been an issue that I've been encountering constantly and I do not know how to fix it. "The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens"
@FE-Engineer11 ай бұрын
Textual inversion can help. But ultimately if you want to run everything you should consider rocm on Linux. There is no bypass for that. It’s a hard limit with directML and onnx.
@_JustCallMeRex_11 ай бұрын
@@FE-Engineer Damn, that is unfortunate. I only run Stable Diffusion using an AMD RX 580 Graphics Card, and if I remember correctly, it doesn't support rocm. Truly unfortunate. But thank you for answering my question. Looks like I gotta stick with the normal Stable Diffusion hahaha.
@FE-Engineer11 ай бұрын
Sorry. I am not sure if that is supported or not but I think you are correct and that it may not be supported.
@_JustCallMeRex_11 ай бұрын
@@FE-Engineer Yeah no that is fine, no need to apologize. And yeah no, it is not supported unfortunately, but it's fine. I can still use DirectML normally than the Onnx/Olive version. Still, a very informative video! Thank you very much for trying to help out, truly appreciated!
@piyaphumL.11 ай бұрын
Thanks for the fixing tip, but now I've found a new problem. The images I generate come out as random noise instead of images. I changed all my checkpoints but still get the same problem. How should I fix this, please?
@FE-Engineer11 ай бұрын
Try the normal SD checkpoints: convert them to ONNX and try those at the default settings. My guess is that either your models are not converted properly, something weird is going on, or your models simply don't work. SDXL models won't work at all.
@tmiss17 Жыл бұрын
Thanks!!
@FE-Engineer Жыл бұрын
You are very welcome! Thanks for watching!
9 ай бұрын
No method works for me; I have this error: AttributeError: module 'onnxruntime' has no attribute 'SessionOptions'
@TheBrainAir8 ай бұрын
I did all the steps and get: AttributeError: module 'torch' has no attribute 'dml'