Comments
@yiluwididreaming6732 3 days ago
comfy?
@NextTechandAI 3 days ago
@yiluwididreaming6732 Same ControlNet options can be used with ComfyUI, but in the vid I've used Automatic1111 because it is a bit more common - or what's your question?
@yiluwididreaming6732 2 days ago
@@NextTechandAI Sorry, just asking about a ComfyUI workflow for this? Thanks
@semenderoranak2603 6 days ago
I have an RX 6800, yet I keep getting the errors "Building PyTorch extensions using ROCm and Windows is not supported" and "Model not loaded"
@NextTechandAI 5 days ago
Does your command window show an output entry similar to this one: "Device: device=AMD Radeon RX 6800 [ZLUDA] n=1 arch=compute_37 cap=(8, 8) cuda=11.8 cudnn=8700 driver="? Have you used the --use-zluda option? Are there additional error entries when starting up/generating? Have you used a new Conda/venv environment? Are you using different drives for SD.next and ZLUDA? Are you using this ZLUDA version: github.com/lshqqytiger/ZLUDA (there have been updates)?
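The log check above can also be automated. A quick, hypothetical sketch (the parsing is my own, not part of SD.next) that verifies the startup "Device:" line reports a GPU tagged with [ZLUDA]:

```python
import re

def zluda_device_active(log_line: str) -> bool:
    """Return True if an SD.next 'Device:' log line reports a GPU that
    was picked up through ZLUDA (device name present, [ZLUDA] tag set)."""
    match = re.search(r"device=([^\[]+)\[ZLUDA\]", log_line)
    return match is not None and match.group(1).strip() != ""

line = ("Device: device=AMD Radeon RX 6800 [ZLUDA] n=1 arch=compute_37 "
        "cap=(8, 8) cuda=11.8 cudnn=8700 driver=")
print(zluda_device_active(line))       # True: ZLUDA sees the RX 6800
print(zluda_device_active("Device: ")) # False: empty device line means CPU fallback
```

An empty "Device:" line (as some commenters report below) makes the function return False, which matches the symptom of SD.next silently falling back to the CPU.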
@user-fg6mq3dg3d 8 days ago
I got tired of Windows' bullshit so I came to Arch Linux, thanks for this video
@NextTechandAI 7 days ago
@user-fg6mq3dg3d Thanks a lot for your feedback, I'm happy that the video was useful.
@vargaalexander 8 days ago
Hi. My ControlNet doesn't work in SD1.5. Can you upload your SD1.5 with ControlNet installed, without the models, just the files? Please
@NextTechandAI 8 days ago
@vargaalexander What exactly is the problem? The video is about Automatic1111 and SDXL.
@vargaalexander 8 days ago
@@NextTechandAI
Error running process: F:\ST\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
  File "F:\ST\stable-diffusion-webui\modules\scripts.py", line 825, in process
    script.process(p, *script_args)
  File "F:\ST\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1222, in process
    self.controlnet_hack(p)
  File "F:\ST\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1207, in controlnet_hack
    self.controlnet_main_entry(p)
  File "F:\ST\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 898, in controlnet_main_entry
    Script.check_sd_version_compatible(unit)
  File "F:\ST\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 827, in check_sd_version_compatible
    raise Exception(f"ControlNet model {unit.model}({cnet_sd_version}) is not compatible with sd model({sd_version})")
Exception: ControlNet model control_v11p_sd15_seg [e1f51eb9](StableDiffusionVersion.SD1x) is not compatible with sd model(StableDiffusionVersion.SDXL)
@NextTechandAI 7 days ago
@vargaalexander I pointed this out in the video. You can only combine SDXL with SDXL and SD1.5 with SD1.5. You are using an SDXL checkpoint with an SD1.5 ControlNet model. That does not work.
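If you are unsure which family a checkpoint belongs to, its state-dict key names give it away. A rough heuristic sketch; the key prefixes are common checkpoint conventions I'm assuming, not an official API (with a .safetensors file you could read the key names via the safetensors library without loading the weights):

```python
def guess_sd_version(state_dict_keys) -> str:
    """Guess the Stable Diffusion family from checkpoint key names.
    SDXL checkpoints carry two text encoders under 'conditioner.embedders.',
    while SD1.5 uses a single 'cond_stage_model.' text encoder."""
    keys = list(state_dict_keys)
    if any(k.startswith("conditioner.embedders.") for k in keys):
        return "SDXL"
    if any(k.startswith("cond_stage_model.") for k in keys):
        return "SD1.5"
    return "unknown"

# Synthetic examples of the two key layouts:
print(guess_sd_version(["conditioner.embedders.0.transformer.text_model.w"]))  # SDXL
print(guess_sd_version(["cond_stage_model.transformer.text_model.w"]))         # SD1.5
```

This mirrors the kind of check the ControlNet extension performs before raising the incompatibility exception quoted above.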
@vargaalexander 7 days ago
@@NextTechandAI Wow, thanks, I didn't know that my checkpoint was SDXL... THANKS a LOT
@NextTechandAI 7 days ago
Thanks for confirming the solution.
@NextTechandAI 9 days ago
What are your areas of interest?
@J2thaBgeerie 10 days ago
SD.next does feel a bit buggy sometimes, definitely after changing some settings and restarting the UI/server. But man is it a massive difference over DirectML. On my 6700 XT I used to get about 1.5 it/s, now I'm getting about 9.5-10 it/s, so that's about the 8x increase. Using DeepCache as well and the optimised ROCm libs for gfx1031
@NextTechandAI 10 days ago
Thank you very much for your feedback, I'm happy to see such a speed increase. If you want to use Automatic1111 instead of SD.next, this can be done quite fast following this short: kzbin.infoYzFRlsEYyEE
@J2thaBgeerie 10 days ago
@@NextTechandAI Couldn't get it to work, it doesn't recognize the --use-zluda option. Not sure what's going on and I'm tired of debugging for a while :D. SD.next does have some nice functions, but the UI is indeed a lot worse.
@NextTechandAI 10 days ago
@J2thaBgeerie What a pity. It must be a fairly recent installation of A1111, but I can understand that you want to enjoy the speed of SD.next first and don't want to configure it any further :)
@nghilam2205 10 days ago
OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\Admin\ZLUDA\automatic\venv\lib\site-packages\torch\lib\caffe2_nvrtc.dll" or one of its dependencies.
@NextTechandAI 11 days ago
What is your best practice to speed up the workflow for Stable Diffusion?
@NextTechandAI 11 days ago
What is your preferred method for generating perfect hands in Stable Diffusion/SDXL?
@__-fi6xg 12 days ago
Just too complicated. AMD just dropped the ball; switching to Nvidia.
@NextTechandAI 12 days ago
I can understand that.
@__-fi6xg 12 days ago
@@NextTechandAI I'm sorry, I tried it for a week, I'm just not good at this... got frustrated
@NextTechandAI 12 days ago
@@__-fi6xg Honestly, I can understand that. When I bought my RX 6800, I didn't know that I would get so involved with AI. Inevitably I'm making the best of it now, but with an NVIDIA GPU everything would be a lot easier :)
@alanreynolds4262 13 days ago
Hello, when I try to generate an image I get this error: "Building PyTorch extensions using ROCm and Windows is not supported". I have followed everything perfectly. Could it be because I have an RX 7900 GRE, which isn't listed at all?
@NextTechandAI 12 days ago
@alanreynolds4262 Hello, frankly speaking, I don't know. The architecture of your GPU is very similar to the RX 7900 XT, but e.g. for the RX 6xxx series below the 6800, special libraries are required for ROCm HIP as described in my vid. Maybe this is the case for the GRE versions, too. Currently there is an open issue in the SD.next main branch leading to a "Diffusers failed loading" error. So in case you get one step further, you might have to wait for an update or use an older commit of SD.next.
@ffwrude 14 days ago
Hello, I guess that like all SD hands videos, it only works where the hands are very visible and in an easy position. If the hands have fingers touching each other it will still not work, right?
@NextTechandAI 13 days ago
@ffwrude Hi, indeed fingers touching each other are more difficult and push especially OpenPose to its limits. In fact SD can generate even such hands in good quality, it's just less likely. So I would inpaint as described and generate several batches. Another option is to take a picture of your hands (or a template you have generated) in the desired position and use depth or canny, which increases the chances for good hands a lot.
@soraphop99 14 days ago
How do I fix "AssertionError: Torch not compiled with CUDA enabled"?
@NextTechandAI 14 days ago
@soraphop99 Have you already successfully generated images with ZLUDA on SD.next? When running SD.next, do you have an output similar to this one: "Device: device=AMD Radeon RX 6800 [ZLUDA] n=1 arch=compute_37 cap=(8, 8) cuda=11.8 cudnn=8700 driver="?
@soraphop99 14 days ago
@@NextTechandAI Device: device=Radeon RX 570 Series [ZLUDA] n=1 arch=compute_37 cap=(8, 8) cuda=11.8 cudnn=8700 driver=
@soraphop99 14 days ago
@@NextTechandAI SD.next error: OSError: Building PyTorch extensions using ROCm and Windows is not supported
@NextTechandAI 14 days ago
@soraphop99 Your RX 570 is not officially supported by ROCm, see the website containing the list in my vid. You could check whether you can find suitable libraries similar to the ones from brknsoul, but in the comments nobody so far has reported getting the installation running with less than an RX 6xx0.
@user-ei8kk8vp7k 16 days ago
Updated SD.next today. It worked fine before on a 7800 XT with ZLUDA. Now it says: "OSError: Building PyTorch extensions using ROCm and Windows is not supported." Do you know by any chance what the matter is and how to fix it?
@NextTechandAI 16 days ago
I can only guess. Maybe you have to update your ZLUDA files. In any case the start and the very first image generation will take very long again after an update. When starting SD.next, was there a line like this: "Device: device=AMD Radeon RX 6800 [ZLUDA] n=1 arch=compute_37 cap=(8, 8) cuda=11.8 cudnn=8700 driver="? At which point did you get the error?
@user-ei8kk8vp7k 15 days ago
@@NextTechandAI The error appears when loading models. And now there is another error: "OSError: [WinError 126] The specified module could not be found. Error loading "F:\Stable Diffusion\SD.Next\venv\lib\site-packages\torch\lib\caffe2_nvrtc.dll" or one of its dependencies." Even on a fresh SD.next install. Something is wrong with the Python modules:
"F:\Stable Diffusion\SD.Next\venv\lib\site-packages\torch\__init__.py:141 in <module>
  140   err.strerror += f' Error loading "{dll}" or one of its dependencies.'
> 141   raise err
  142"
The initialization before looks like this:
"Using VENV: F:\Stable Diffusion\SD.Next\venv
10:41:56-762871 INFO Starting SD.Next
10:41:56-765373 INFO Logger: file="F:\Stable Diffusion\SD.Next\sdnext.log" level=INFO size=29636 mode=append
10:41:56-767377 INFO Python 3.10.9 on Windows
10:41:56-846333 INFO Version: app=sd.next updated=2024-05-07 hash=e081f232 branch=master url=github.com/vladmandic/automatic/tree/master
10:41:57-383979 INFO Platform: arch=AMD64 cpu=AMD64 Family 25 Model 97 Stepping 2, AuthenticAMD system=Windows release=Windows-10-10.0.22631-SP0 python=3.10.9
10:41:57-389487 INFO AMD ROCm toolkit detected
10:41:57-402589 WARNING ZLUDA support: experimental
10:41:57-404095 INFO Using ZLUDA in F:\Stable Diffusion\ZLUDA
10:41:57-457633 INFO Extensions: disabled=['sd-webui-controlnet']
10:41:57-458635 INFO Extensions: enabled=['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg'] extensions-builtin
10:41:57-461632 INFO Extensions: enabled=['prompt_translator', 'sd-webui-mosaic-outpaint'] extensions
10:41:57-464036 INFO Startup: quick launch
10:41:57-464036 INFO Verifying requirements
10:41:57-470543 INFO Verifying packages
10:41:57-471543 INFO Extensions: disabled=['sd-webui-controlnet']
10:41:57-472673 INFO Extensions: enabled=['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg'] extensions-builtin
10:41:57-475185 INFO Extensions: enabled=['prompt_translator', 'sd-webui-mosaic-outpaint'] extensions
10:41:57-479188 INFO Command line args: ['--use-zluda', '--autolaunch'] autolaunch=True use_zluda=True
┌──────────────────────────────────────────────────────────────────── Traceback (most recent call last)"
@NextTechandAI 15 days ago
@user-ei8kk8vp7k In one of the comments there was a hint that it doesn't work on external drives. In order to avoid any problems related to the path I've put my installations of SD.next, ZLUDA etc. on the same drive; the best option is C:\. Another possibility: you have older installations of SD.next/A1111 or ZLUDA and they interfere with your current installation.
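The same-drive rule of thumb is easy to check up front. A small illustrative script (the paths are placeholders for your own install locations):

```python
import ntpath  # Windows path rules, so the check also runs on other platforms

def same_drive(path_a: str, path_b: str) -> bool:
    """True if both Windows paths live on the same drive letter."""
    drive_a = ntpath.splitdrive(path_a)[0].lower()
    drive_b = ntpath.splitdrive(path_b)[0].lower()
    return drive_a != "" and drive_a == drive_b

print(same_drive(r"C:\KI\SD.Next", r"C:\KI\ZLUDA"))             # True
print(same_drive(r"F:\Stable Diffusion\SD.Next", r"C:\ZLUDA"))  # False
```

If the second case is yours, moving both installations onto one drive matches the advice above.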
@piccoloimmun1577 16 days ago
Hi there, I'm using this for my RVC Mangio. I put the files into its torch folder and did everything else, though I start the app through the webui.bat file:
runtime\python.exe infer-web.py --pycmd runtime\python.exe --port 7897
pause
How do I change the code to let it run with ZLUDA? I don't have the terminal option in my Windows, so I guess I need to change it there? Thank you very much
@Painbeas 16 days ago
Is ZLUDA compatible with AnimateDiff? I installed Automatic1111 with ZLUDA since I have an AMD RX 6800. Installing AnimateDiff is not a problem, but the generation uses my CPU instead of the GPU. Do you know a solution? Thanks!
@NextTechandAI 16 days ago
As far as I know it should work. There's currently an issue with AnimateDiff v3, you could try AnimateDiff v2.
@Painbeas 15 days ago
@@NextTechandAI Indeed, it seems to work with the v2 model. Thanks for the advice.
@NextTechandAI 15 days ago
@@Painbeas I'm happy that it's working now. Thanks for sharing.
@kittykisses2257 17 days ago
Thanks for the video. I went with AMD this go-around, not realizing that I would want to get into AI image generation. One little critique on an otherwise excellent video: there were some steps where you skipped parts, and also some command prompts were not available in the description. Overall, an easy-to-follow video. However, I seem to get stuck on "text to image starting". I left it for over an hour and it never progressed. Have you tried using ZLUDA with SDXL models? UPDATE: I swapped to the same model as you and it says it will take 700 minutes... Any idea why?
@NextTechandAI 16 days ago
Thanks for your feedback. Which steps have been skipped in your opinion? I had to fast-forward sometimes in order to keep the video short, but my goal was to include all necessary steps. Which AMD GPU are you using? Have you interrupted the very first image generation with ZLUDA? With my RX 6800 it took nearly half an hour, but not longer. When starting SD.next, was there a line like this: "Device: device=AMD Radeon RX 6800 [ZLUDA] n=1 arch=compute_37 cap=(8, 8) cuda=11.8 cudnn=8700 driver="? Yes, ZLUDA works with SDXL models. All images in my latest video about ControlNet with SDXL have been created using my ZLUDA installation.
@kittykisses2257 16 days ago
@@NextTechandAI Thank you for getting back to me. I will comb through your video and find the missing step(s). In the meantime, here are the answers to your other questions. I am using a 7900 XTX and the Devices line says this: "Devices: ". I checked in the system information tab of SD.Next and there is no GPU listed in the GPU section. So maybe it is defaulting to my CPU? I thought I followed the instructions perfectly. Any thoughts?
@kittykisses2257 16 days ago
@@NextTechandAI Upon rewatching, I found only one step that I remember having to pause and google. At 5:30 you talk about editing path variables but don't show how to get to the appropriate menus.
@NextTechandAI 16 days ago
Well, at 5:30 I'm hinting at a document which describes how to edit the path in case you don't know. Nevertheless, if your device is not listed, then your ZLUDA installation is not complete and, yes, in this case it's using the CPU. It's difficult to guess which step is missing, but according to one comment you should make sure to have ZLUDA and SD.next on the same drive (in the best case C:\).
@kittykisses2257 16 days ago
@@NextTechandAI I restarted from scratch, followed every step in the video exactly, and my device is still not showing up. I guess I will have to wait for new solutions. Sad stuff.
@lucianodaluz5414 17 days ago
Every time I try to use it, it starts to download "clip_vision_h" and after it finishes nothing happens.
@jiuvk8393 17 days ago
Hello, two or three questions. #1: I have a Radeon RX 5600M, will this work for it? (I already made ComfyUI work, but only for 1.5; it doesn't work for SDXL or Stable Cascade etc.) #2: For brknsoul/ROCmLibs, do I just download the one from the link or do I have to download a specific one for my GPU? Also, there are two now instead of one: Optimised_ROCmLibs_gfx1031.7z (optimised libs for gfx1031) and Optimised_ROCmLibs_gfx1032.7z. Thank you very much.
@NextTechandAI 17 days ago
@jiuvk8393 Regarding #2: When I created the vid, there was only ROCmLibs.7z. Choose this if you don't have gfx1031 or gfx1032, or Optimised_ROCmLibs_gfx1031.7z / Optimised_ROCmLibs_gfx1032.7z if you have one of these. If this doesn't work, I would try ROCmLibs.7z and additionally overwrite it with Optimised_ROCmLibs_gfx1031.7z or Optimised_ROCmLibs_gfx1032.7z depending on your GPU. Alas, the readme doesn't say whether the optimised archives include all required files or only the customized ones. Regarding #1: I'm sorry, but so far no one has reported in the comments that they have successfully gotten it to work with a GPU below an RX 6xx0.
@xLovelty 18 days ago
Hi, I have an RX 580; I think this is not for me, right?
@NextTechandAI 18 days ago
Hi, I'm sorry, but so far no one has reported in the comments that they have successfully gotten it to work with a GPU below an RX 6xx0.
@FoxTroupe 18 days ago
I'm running into a hiccup here. I was able to follow your video steps, but when you get to the part where you are activating the scripts in the venv (with activate), nothing happens. My pip list doesn't show any new packages, and the first part of my command prompt still shows (zluda) where yours shows (venv) (zluda).
@NextTechandAI 18 days ago
That's strange, the script should activate SD.next's venv. Does SD.next run successfully including ZLUDA on your PC? Are there any anomalies in the cmd window?
@FoxTroupe 17 days ago
@@NextTechandAI Yes. I was able to get around it by running "activate.ps1", and it let me launch ComfyUI. Weirdly, after the most recent reboot and update for Windows, it now works just like it did in your video. Very weird. I now have another question: where do I put arguments in the ZLUDA ComfyUI launch to force low VRAM mode? I'm trying to keep it from spilling over into shared GPU memory and slowing to a crawl.
@NextTechandAI 16 days ago
Run `python main.py -h` and you get a list of all available options.
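For the low-VRAM question, the memory-related switches shown by that help output include the ones below; treat the exact flag names as assumptions and confirm them against your own `python main.py -h`, since options change between ComfyUI versions:

```shell
# Sketch of possible ComfyUI launch variants (verify flags via main.py -h):
python main.py --lowvram    # more aggressive weight offloading to system RAM
python main.py --novram     # last resort: keep weights out of VRAM entirely
```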
@avaallein 19 days ago
Like FaceMod!!!
@yumiibrahim5513 19 days ago
A few questions: I noticed a considerable decrease in quality using ControlNet with the models that are commonly used with SDXL (e.g. the ones you used) in comparison with 1.5 ControlNet models. Is there a way to rectify that? Second, SDXL ControlNet takes considerably longer to generate than its 1.5 counterparts. Any fixes for that?
@NextTechandAI 19 days ago
@yumiibrahim5513 Well recognized. SDXL allows higher resolutions and shorter prompts, but that comes at a price. As I said in the video, there is no official ControlNet repository for SDXL, so the models are customized accordingly. I also noticed the difference in quality, but I can't prove it. The running times are clearly too long for me to run longer test series. Which brings us to your second point: I still have to generate a lot of images to get satisfactory results, and this takes too long for me with SDXL. I use SD1.5 (usually with ControlNet) and upscale as described in my other video. The result can then be used as a template for SDXL with depth or canny if I really need it.
@yumiibrahim5513 19 days ago
@@NextTechandAI Thank you for your response. Good idea to use 1.5 and then use the result as a template. Thank you.
@NextTechandAI 19 days ago
@yumiibrahim5513 Thanks for your feedback :)
@SINEWEAVER- 20 days ago
How do I add my models from A1111?
@NextTechandAI 20 days ago
The directory structure is the same as in Automatic1111, so put them in models\Stable-diffusion.
@jasondsouza3555 22 days ago
Can I run LLMs locally using ROCm/ZLUDA? I was looking to make a RAG chatbot on custom data with something like Llama-2
@NextTechandAI 22 days ago
Why not use GPT4All locally with Llama-2 or Llama-3? It supports AMD GPUs and has a server mode and an API. See my vid: kzbin.info/www/bejne/ipS7q6yrqcughdk
@jasondsouza3555 22 days ago
@@NextTechandAI Oh, I didn't know about this. I was only making RAG chatbots on Kaggle and wanted to shift to local. Do you think an RX 6600 can work with GPT4All?
@NextTechandAI 22 days ago
@jasondsouza3555 GPT4All uses the Vulkan API for GPU support, so yes, I think it should work with an RX 6600.
@taffyware1059 23 days ago
I don't think it's AMD's fault. From what I heard, the torch developers just don't want to. (Also, for language-model AIs, ROCm has been supported for a long time.)
@NextTechandAI 22 days ago
Which source are you referring to with this statement? You might have a look at my linked vid. There you can even see the related Git issues; it's obvious the ROCm AI libraries are still missing. Torch, A1111, etc. have open Git issues for the implementation and are still waiting for ROCm.
@taffyware1059 7 days ago
@@NextTechandAI Just something I heard from the devs that supported ROCm for language models on Windows, so not really a reliable source
@10xHealthyLifeStyle 23 days ago
Thank you so much, this video helped me a lot. Do you think you could do a tutorial on installing "Stable Diffusion WebUI AMDGPU Forge"? 🙏
@NextTechandAI 22 days ago
Thank you very much for the feedback. I'll add it to my list; I hadn't expected this demand for Forge.
@pilotdawn1661 23 days ago
Very helpful. Liked and Subscribed. Thanks!
@NextTechandAI 22 days ago
Thanks a lot for the like and the sub. I'm happy that my vid was helpful.
@hny4uable 26 days ago
I am getting this error: "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check. Press any key to continue . . ." Would you help me out please?
@NextTechandAI 26 days ago
Does SD.next with Zluda run on your machine? At which point do you get this message? Are there other errors? Which GPU are you using?
@hny4uable 26 days ago
@@NextTechandAI I tried long ago with SD.next as well; it used to say ZLUDA support is experimental, and I also got a message that the ROCm toolkit was found, but it was using my CPU all the time. I have a 7800 XT AMD GPU
@NextTechandAI 26 days ago
@hny4uable Now I'm irritated. The basic requirement for installing Automatic1111 according to this short is a running installation of SD.next with ZLUDA as described in my corresponding video.
@hny4uable 25 days ago
@@NextTechandAI I understand you, but SD.next should use my GPU instead of the CPU when using ROCm or ZLUDA?
@NextTechandAI 25 days ago
@hny4uable Sure, SD.next should use the GPU with ZLUDA; that's why we're doing the whole thing.
@wiyye 26 days ago
Can GPT4All also be configured to serve the answers via an API instead of a chat interface?
@NextTechandAI 26 days ago
Yes, there's a server mode and a Python API.
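As an illustration of the server route: GPT4All's server mode speaks an OpenAI-style chat API on localhost. A hedged sketch; the port (4891 is the desktop app's default, configurable in its settings), endpoint path, and model name are assumptions to adapt to your setup:

```python
import json
import urllib.request

GPT4ALL_URL = "http://localhost:4891/v1/chat/completions"  # assumed default

def build_chat_request(prompt: str, model: str = "Llama 3 Instruct",
                       max_tokens: int = 200) -> dict:
    """Build an OpenAI-style chat-completion payload for GPT4All's server mode."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask(prompt: str) -> str:
    """POST the payload to the local GPT4All server and return the answer text."""
    req = urllib.request.Request(
        GPT4ALL_URL,
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the endpoint mimics the OpenAI schema, the same payload shape works for a RAG loop that prepends retrieved context to the user message.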
@weishanlei8682 27 days ago
There is another space under ControlNet where we can drop an image. What is that for?
@jatinnagar5257 28 days ago
SD.next from your last video: I installed it and it is using the CPU instead of the GPU. I ran it three times, the same thing happened.
@jatinnagar5257 28 days ago
Using the CPU instead of the GPU, help please
@NextTechandAI 27 days ago
It's not helpful if you post your requests multiple times or with different accounts. Which GPU and which options do you run? You should have an entry similar to this one; what does it look like? "Device: device=AMD Radeon RX 6800 [ZLUDA] n=1 arch=compute_37 cap=(8, 8) cuda=11.8 cudnn=8700 driver=" Which additional errors occur in your command window?
@jatinnagar5257 25 days ago
@@NextTechandAI Sorry for disturbing you
@Rishi-ch6jo 28 days ago
Using the CPU instead of the GPU
@DaehaKim 28 days ago
10:37:32-465225 INFO Device: device=AMD Radeon RX 5700 XT [ZLUDA] n=1 arch=compute_37 cap=(8, 8) cuda=11.8 cudnn=8700 driver=
10:37:32-469226 DEBUG Migrated styles: file=styles.csv folder=models\styles
10:37:32-497233 DEBUG Load styles: folder="models\styles" items=288 time=0.03
10:37:32-499233 DEBUG Read: file="html\reference.json" json=36 bytes=21493 time=0.000
rocBLAS error: Cannot read C:\Program Files\AMD\ROCm\5.7\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1010
rocBLAS error: Could not initialize Tensile host: regex_error(error_backref): The expression contained an invalid back reference.
How can I fix it?
@NextTechandAI 27 days ago
I don't have good news. Do you remember the part in the video where I list the supported GPUs? For the 6600 to 6750 XT, which are not directly supported, there are adapted libraries on the linked website. You could see if something like this also exists for the 5700 XT; I haven't heard of it yet.
@weishanlei8682 28 days ago
Where can I get the prompt at 5:08?
@NextTechandAI 28 days ago
You stop the video and type the prompt into an editor - or what do you mean?
@weishanlei8682 28 days ago
@@NextTechandAI I am not willing to do anything more difficult than copy and paste. Could you help me?
@NextTechandAI 27 days ago
Seriously? Then I think such complex topics are not for you.
@weishanlei8682 27 days ago
@@NextTechandAI Come on! An easy copy-and-paste can save time!
@weishanlei8682 28 days ago
Where should I put the .pth file? OK, understood! Watch at 4:23: starting at 3:45 it explains where the four files should be put.
@NextTechandAI 28 days ago
I'm happy that you found it in my vid.
@QuackCow144 29 days ago
Mine doesn't have "DPM++ 2M SDE Karras" for the sampling method. Mine has "DPM++ 2M SDE" and "DPM++ 2M SDE Heun". Which one of these should I use?
@NextTechandAI 28 days ago
It depends on your version of Automatic1111. You could try to update if this doesn't hurt your installation. In the latest versions you have to select e.g. Karras in a list to the right of the sampling method, or leave it on "automatic". If this is no option for you, start with DPM++ 2M SDE.
@faredit-cq2xl 1 month ago
Thanks. Do you know how I can use this method in Forge UI? In Forge the preprocessor is `InsightFace+CLIP-H` and the model is `ip-adapter-plus-face_sd15`; I don't know how to use `ip-adapter-clip_sd15` as the preprocessor!
@IshanJaiswal26 1 month ago
(base) C:\Users\ishan>cd \ki
The system cannot find the path specified.
@NextTechandAI 1 month ago
As I said in the vid: you have to create an appropriate directory and enter it. I put my SD installations in KI; you can put yours anywhere on your PC.
@yo-kaiwatchfan3417 1 month ago
Hi, do you know if ZLUDA can be used in Real-ESRGAN?
@IshanJaiswal26 1 month ago
Does it work with an RX 6600 with 8 GB VRAM?
@NextTechandAI 1 month ago
I don't own an RX 6600, but it should work by setting the target to gfx1032 instead of gfx1030. Moreover, I would stick to SD1.5 models, as SDXL requires much more VRAM.
@matiosjed 1 month ago
Do you know if Applio (text-to-speech) can work with ZLUDA? I think it used to work with AMD via DirectML, but my program just starts on the CPU 😢
@ajphilippineexpat 1 month ago
Hi, I followed your instructions right to the end, but Automatic requires Python 3.11. I've tried plenty of things to roll back from 3.12 but can't make it work. Ugh. Using AI on my CPU is hopeless when I've got an AMD 7900 XTX to use instead. If I can't roll back Python and continue the last step, then I've got to wait for ROCm on Windows...
@NextTechandAI 1 month ago
I'm sorry, but there was a reason that I created a Conda environment with Python 3.10 in the vid. You have to start over with a new (e.g. Conda) env based on Python 3.10, or indeed wait for ROCm on Windows.
@Plutonium.239 25 days ago
@@NextTechandAI I have this same problem, and I did create the environment exactly as you did with the 3.10 option, but it still shows as 3.12 and "Incompatible version" when running the webui --use-zluda --debug --autolaunch command. I even tried again and started over and ran "conda install python=3.10" and created a new environment, and nothing seems to change the Python version to prevent this incompatible-version error.
@Plutonium.239 25 days ago
Actually, it looks like the problem may be the version of Miniconda that I installed. There are older versions with Python 3.10.
@NextTechandAI 25 days ago
SD.next uses the first Python version it finds in the path and creates a venv with this version. You could try to delete (or rename, to have a backup) the venv folder of your SD.next. Make sure to have an active Python 3.10 in your path when starting SD.next after this modification.
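The rename-and-rebuild step can be scripted. A hypothetical Windows cmd sketch; the folder names are examples, adapt them to your own install:

```shell
:: Back up the stale venv so SD.next recreates it on next start
cd C:\KI\automatic
ren venv venv_backup
:: Confirm the Python that is first in PATH reports 3.10.x
python --version
:: Relaunch; SD.next builds a fresh venv with that interpreter
webui.bat --use-zluda
```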
@Adsen83 1 month ago
Cannot get ComfyUI running with SD.next now after an accidental update. "Device: cuda:0 AMD Radeon RX 7900 XTX [ZLUDA] : cudaMallocAsync" is displayed, and model_management.py no longer mentions AMD anywhere, as it's been overwritten with an updated version which includes apparent changes.
@NextTechandAI 1 month ago
Have you already tried modifying the two .py files again?
@PThomas22 1 month ago
What is the reason for using the SD.next venv? Could we copy certain files or just the whole folder into A1111 instead?
@NextTechandAI 1 month ago
The installation of SD.next has already taken care of all of ZLUDA's needs, so we hardly have to do anything manually. The venv therefore already contains all dependencies, like the appropriate PyTorch. Of course you can do everything manually, but installing SD.next really saves a lot of work and immediately gives you an Automatic1111 fork that can run with ZLUDA.
@PThomas22 1 month ago
@@NextTechandAI Thanks for the reply, and this works great! If you need to update or reload the venv, would you just delete the SD.next version, run SD.next, and then be good to go in A1111?
@NextTechandAI 1 month ago
Well, you could just "git pull" and start SD.next for an update. But yes, for major updates, to be on the safer side you could remove SD.next and install it again in order to use the updated venv (the risk is very low, as SD.next is a fork).
@matjav1 1 month ago
Thank you very much!
@NextTechandAI 1 month ago
I'm happy that the video is useful :)
@SimonLange 1 month ago
I got everything up and running. SD even recognizes ROCm AND the ZLUDA installation. BUT PyTorch still uses the CPU, and I've got no idea why. Here is part of the startup:
20:23:33-506037 DEBUG Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
20:23:33-509538 DEBUG Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
20:23:33-523036 DEBUG Package not found: torch-directml
20:23:33-525537 INFO AMD ROCm toolkit detected
20:23:34-014298 DEBUG ROCm agents detected: ['gfx1030']
20:23:34-016797 DEBUG ROCm agent used by default: idx=0 gpu=gfx1030 arch=navi2x
20:23:34-224965 DEBUG ROCm version detected: 5.7
20:23:34-226965 WARNING ZLUDA support: experimental
20:23:34-230047 INFO Using ZLUDA in D:\AI\zluda
20:23:34-231967 DEBUG Installing torch: torch==2.2.1 torchvision --index-url download.pytorch.org/whl/cu118
20:23:34-342515 DEBUG Repository update time: Sun Apr 21 14:25:50 2024
As you can see, it looks fine. BUT then this:
20:23:46-687652 INFO Load packages: {'torch': '2.3.0+cpu', 'diffusers': '0.27.0', 'gradio': '3.43.2'}
20:23:47-866616 DEBUG Read: file="config.json" json=30 bytes=1350 time=0.001
20:23:47-874117 INFO Engine: backend=Backend.DIFFUSERS compute=cpu device=cpu attention="Scaled-Dot-Product" mode=no_grad
20:23:47-878616 INFO Device:
20:23:47-880615 DEBUG Read: file="html\reference.json" json=36 bytes=21493 time=0.000
20:23:48-465794 DEBUG ONNX: version=1.17.3 provider=CPUExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']
As you can see, PyTorch uses the CPU version. No f**n idea why. So what is missing?! I octa-checked your instructions and my installation meanwhile. It makes no difference whether it's run via PowerShell or cmd, as admin or as a normal user, or with or without a restart. All drivers are updated, so why is torch not using it?! Why the fallback to CPU? Ideas?!
@SimonLange 1 month ago
Starting with exactly your phrase gets me this:
20:32:28-812973 INFO Base: class=StableDiffusionPipeline
20:33:01-831983 DEBUG Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cpu'), 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 512, 'height': 512, 'parser': 'Full parser'}
As you can see, it starts with device type cpu. Weird. Where is the secret switch so it utilizes my GPU?
@CASTmpChannel 27 days ago
Same issue here. I have a 6900 XT and no errors in the logs. I copy only the relevant lines here:
>> Platform: arch=AMD64 cpu=AMD64 Family 25 Model 97 Stepping 2, AuthenticAMD system=Windows release=Windows-10-10.0.22631-SP0 python=3.10.14
ROCm agent used by default: idx=0 gpu=gfx1030 arch=navi2x <<
but:
>> Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cpu'), 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 512, 'height': 512, 'parser': 'Full parser'} <<
Did you manage to make it work?