I just wanted to say that I got TensorRT installed in Stable Diffusion, and WOW WOW WOW what a difference it makes. Your instructions were crystal clear and I noticed a -significant- increase in it/s. I'm getting above 30 it/s now on my 3090 Ti (with 24 GB of VRAM). Glad I can now better use that beast under the hood. WOW. Thanks!
@ferluisch11 ай бұрын
I got about a 3.5x speedup with my 2080, from 2.1 it/s to 7.5 it/s. Such a huge boost!
@higon999 ай бұрын
Thank you for the clear instructions. At the current state, I just had to 'pip install polygraphy importlib_metadata' before installing the extension on the A1111 dev branch. It's working for me, with the caveat that it doesn't load any LoRA from the Lycoris folder at all.
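For anyone hitting missing-module errors like that, a minimal sketch of the fix, run from the stable-diffusion-webui root folder (this assumes the default Windows venv layout; the package names are the ones mentioned above):

rem activate the WebUI's own Python environment
venv\Scripts\activate
rem install the two packages the extension expects
pip install polygraphy importlib_metadata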
@YakaBita Жыл бұрын
I wish we had upscaler presets for 2x and 4x with a similar TensorRT speed boost.
@puzzles626 Жыл бұрын
Ey, it's Dimitri from CS:GO surf! Keep up the good work, my dude.
@TroubleChute Жыл бұрын
Wasn't expecting you here
@danielhejira899 Жыл бұрын
When I try to export the default engine it says 'No ONNX file found. Exporting ONNX... Please check the progress in the terminal.' Anyone know a fix?
@Heldn1008 ай бұрын
Same, did you find any fix?
@nandoPluister7 ай бұрын
@Heldn100 I deleted the venv folder, restarted my PC, opened SD again, and it worked.
@wingofwinter888 Жыл бұрын
Sadly it doesn't work with ControlNet on my PC, and it also gives me an error with ReActor. Still, it's a huge boost in speed; I'm praying NVIDIA keeps ironing out the errors and makes it more compatible with other modules. I'm OK with converting the checkpoint, it doesn't take too long, and 2 GB is less than a 4K movie, so I wouldn't call it a negative when the speed boost is really this huge.
@DeViciousOfficial Жыл бұрын
I don't want to be that guy, but I am going to be that guy... This works, your video is fantastic, and you are doing a great job. However, TensorRT comes with no safety guard rails for your card: it just keeps maxing the card out uncontrollably and causes it to overheat. People with RTX 2xxx cards won't run into issues, but if you have a 3090 or 4090 and have run into black screens / max fan speed before, you will almost certainly hit this. I reproduced it on three rigs with 3090s and 4090s, all of which have excellent cooling. Maxing out these cards is no joke; this can cause serious damage. If you run an xx90 card, I'd sit out round one until this is fixed; image generation isn't slow for you anyway, upscaling is.
@3d_visuals__motion Жыл бұрын
Yes, it did that to my 3090 initially. I've now just dropped the GPU power limit to 70% and it's working without any serious overheating issues. I've run it constantly for more than an hour of image rerolls and my GPU never crossed 65 degrees. Let me know if this helps.
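For reference, a power cap can also be set from an elevated command prompt with nvidia-smi instead of a tuning tool; the 280 W below is only an illustrative value, so check your card's default and allowed range first:

rem show the current, default, and min/max power limits
nvidia-smi -q -d POWER
rem set a lower power limit in watts (requires administrator rights)
nvidia-smi -pl 280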
@DeViciousOfficial Жыл бұрын
@3d_visuals__motion Oh yeah, sure, I know how to prevent it, thanks. I actually went back up to maximum power and started cooling my case with a fan, which is the cheapest and most efficient cooling system I have ever had 😀
@valter987 Жыл бұрын
Should I be worried about my 3060?
@DeViciousOfficial Жыл бұрын
@valter987 No need to worry if you've never run into overheating issues before. The symptom is that the PC keeps running and the fans go to 100%, but the screen goes black. Keep an eye on the temperature; you should be fine, that's mostly a 3090 problem.
@petec73711 ай бұрын
Imagine thinking your card breaks just because you see usage jump to 100%, lol.
@orianonicolau6253 Жыл бұрын
Thank you for the tutorial! Maybe you can help me. I'm getting this message when generating, and render times get terribly slow: "CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization." How do I activate that CUDA lazy loading? Thanks!
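One commonly suggested way to enable it is through the CUDA_MODULE_LOADING environment variable; a sketch, assuming a standard webui-user.bat launcher on Windows, is to add this line near the top of the file before the webui call:

rem enable CUDA lazy loading before the WebUI starts
set CUDA_MODULE_LOADING=LAZY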
@Heldn1008 ай бұрын
I have this problem: "No ONNX file found".
@Painjusu Жыл бұрын
Can't wait for my 4090 next month, god.
@waltervolbers3443 Жыл бұрын
Great, thanks for explaining; it's faster now.
@DinoFancellu Жыл бұрын
Doesn't work for me. I did all the steps, then got "Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)". No problems at all without TensorRT (RTX 4090), using juggernautXL_version6Rundiffusion.
@darkjanissary5718 Жыл бұрын
I have the same error. It is so buggy, completely unusable atm.
@imresomodi4961 Жыл бұрын
You used an SDXL LoRA for SD 1.5. ;) Good video, thanks.
@west1778 Жыл бұрын
Does this work with SDXL models as well?
@daemoniax37888 ай бұрын
Not for the last 2-4 weeks; before that yes, now no, unless you have a really strong GPU with a lot of VRAM, like 24 GB. With the new update the conversion tries to use a lot more memory, and if you don't have it, it shows an "ONNX parse error".
@ksk50586 ай бұрын
What's this green extension in your prompt?
@christianblinde Жыл бұрын
Very nice, thank you. It would be great if there were something similar for ComfyUI.
@Rimbo2811 ай бұрын
Hey man... do you have any multi-ControlNet workflow that works?
@dhonta40david3 Жыл бұрын
Huge boost, but it doesn't work with ControlNet, unfortunately.
@substandard649 Жыл бұрын
Thanks for the tutorial. Does this work with hires fix? What about ControlNet?
@Painjusu Жыл бұрын
This is for overall generation lol.
@BrunoMartinho Жыл бұрын
Is it possible to build TensorRT engines at high resolutions? I get an error; I was going for 870x1305.
@ThatGuyNamedBender8 ай бұрын
I built the default engine, but when I render at anything other than 512, or if I go 512 and then hires-fix to a slightly higher res, the render fails. With hires fix it does the standard steps but fails on the hires-fix steps. Any ideas?
@skimmingdeathАй бұрын
You need to build engines for all resolutions. If you want 768x768, build an engine for 768x768. Same for hires: if you want to hires by 2, you should build an engine for 1024x1024. Sucks, but the speeds are worth it.
@chiemfishery8 ай бұрын
Does it support ControlNet?
@DSLDARTH Жыл бұрын
I still get an error, but I can still launch Automatic1111. When I go to the TensorRT tab and click Export in the exporter, it says "No ONNX file found. Exporting ONNX... Please check the progress in the terminal." It runs its script, but at the end nothing happens, and when I click Export again it tries to pull ONNX again but can't.
@Gwenyria11 ай бұрын
I had the same issue, but it was fixed for me when I deleted the --medvram command-line argument. Maybe you should try starting A1111 without it and see if that works. Also, I selected an automatic VAE and created one standard image first (maybe to satisfy something I don't understand), and afterwards I started the TensorRT export with a model I liked and it worked (you have to wait a while after clicking Export Engine before it starts).
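In practice, dropping --medvram means editing the COMMANDLINE_ARGS line in webui-user.bat; a sketch of the change, where the --xformers flag is only an illustrative stand-in for whatever other flags you already use:

rem before: set COMMANDLINE_ARGS=--medvram --xformers
rem after, with the memory flag removed:
set COMMANDLINE_ARGS=--xformers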
@DSLDARTH11 ай бұрын
@Gwenyria Unfortunately it doesn't work at all for me. I downloaded and installed all the dependencies, but it always fails when trying to load TensorRT. This is on a 3090.
@scyence Жыл бұрын
When installing it, I get the error "ModuleNotFoundError: No module named 'importlib_metadata'"
@scyence Жыл бұрын
Also, deleting the venv folder broke a1111 for me. Just ended up reinstalling.
@KratomSyndicate Жыл бұрын
Do you have to be on the dev branch of A1111 for this to work? I'm just getting "cpu and cuda:0" errors.
@TroubleChute Жыл бұрын
No. You can use the normal release. Just make sure it's up to date. Some have reported better compatibility with dev.
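Updating the normal release is just a pull from the WebUI folder; a sketch, assuming the usual git clone install and that the folder is named stable-diffusion-webui:

rem fetch the latest release code for the WebUI
cd stable-diffusion-webui
git pull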
@Duckers_McQuack Жыл бұрын
With just 512x512 at 20 steps, I went from 7.16 it/s to 20 it/s, so roughly 3x the speed on a 3090 :D The downside is that you sadly need a TRT engine per resolution.
@PhilippSeven11 ай бұрын
But the 3090 should give about 17 it/s without this extension. 7 it/s is the 3060.
@ICE012410 ай бұрын
If anyone else still gets errors after a reinstall, run these commands. To run them, go into the Auto1111 Stable Diffusion root folder and type "cmd" (no quotation marks) in the path bar, or copy the path, open Command Prompt, and type "cd PathHere". Then run:
venv\Scripts\activate
python -m pip uninstall -y nvidia-cudnn-cu11
Then open the web UI again and hope that fixed it.
@ThePolyakovv10 ай бұрын
When I'm trying to create the default model: "Failed to parse ONNX model." Error on "Clean SD Automatic". What could it be? According to this guide, everything was fine before. UPD: remove the --medvram or --lowram args and it works!
@12uniflew7 ай бұрын
God Bless you kind sir/ma'am!
@pastuh Жыл бұрын
I hope that Apple will enter the gaming or AI industry.. Just imagine a generation inside the headset, like an artist with a paintbrush :)
@LFXMusicNoCopyright Жыл бұрын
How do you update the venv folder?! Very critical, thank you.
@tsmakrakis3211 ай бұрын
I think you just delete the folder (or rename it) and run stable diffusion again (the .bat file). It will create a new venv folder and re-download whatever is needed.
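A sketch of that reset on Windows, renaming rather than deleting so you can roll back (run from the stable-diffusion-webui root folder; the backup name is arbitrary):

rem keep the old environment around in case something breaks
ren venv venv_backup
rem relaunch; this recreates venv and re-downloads the dependencies
webui-user.bat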
@leandrozanardo104610 ай бұрын
It is really fast, but the results look nothing like the original model's. They can sometimes be nice, but in general, if you are using LoRAs, it loses a lot of detail...
@dannywoods39288 ай бұрын
Shout out to all the SA youtubers!
@weirdscix Жыл бұрын
I installed this but it was a pain to get working as the a1111 extension installer is bugged, so I had to do it manually.
@Jet_Set_Go Жыл бұрын
Give it 2 or 3 days and it will surely be fixed, or in this case even improved.
@TroubleChute Жыл бұрын
And the errors, oh, the errors. I followed an issue on NVIDIA's GitHub to fix them, and it would work after that. It seems to work fine if you turn a blind eye, so hey, I'll take improvements where I can get them.
@___x__x_r___xa__x_____f______ Жыл бұрын
It would have been perfect if you had converted SDXL. I was not able to get it installed for SDXL, unfortunately.
@jamesclow108 Жыл бұрын
Not sure where I went wrong, but after creating an optimized model and then generating an image I get: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0!
@Rambo.... Жыл бұрын
It's a very new extension and it still has a lot of bugs. I get this error when using ControlNet; currently it doesn't support ControlNet. 😥
@pascaltatipata Жыл бұрын
Same here but only on XL models.
@ratside9485 Жыл бұрын
Thanks for the info, but this still looks pretty buggy. I'll wait a few more days before I test it.
@liquidmind11 ай бұрын
Has anyone had any luck with an RTX card with 6 GB of VRAM?
@andrejlopuchov797211 ай бұрын
I wish this worked with AnimateDiff.
@procrastonationforever552111 ай бұрын
Yeah, yeah... But what about hires-fix? Upscaling? Compatibility? No? Oh boy...
@crazysteve8088 Жыл бұрын
You don't need to restart after deleting venv; it's just a virtual environment.