Will you try the GGUF SD3.5 models or are you sticking with FLUX? UPDATE: City96 released his own versions of SD3.5 Medium GGUF models: huggingface.co/city96/stable-diffusion-3.5-medium-gguf
@czaro8006 · a month ago
Hi, are you using AMD for your workflow? I just saw your latest video. I need to buy a new GPU and I have been considering AMD. I wanted to try out Stable Diffusion as well, and since I have used Linux before, I thought I wouldn't have much of an issue. And AMD provides so much VRAM compared to NVIDIA.
@NextTechandAI · a month ago
@@czaro8006 Yes, I (still) use an AMD RX6800 for workflows. My new video on running ComfyUI on Linux with AMD will be coming soon. Nevertheless, I would think twice about buying an AMD graphics card if you want to use it primarily for AI.
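For reference, a Linux setup for that combination is roughly the following sketch. The ROCm version in the wheel index URL is an assumption; check pytorch.org for the currently supported one.

```shell
# Sketch only: install the ROCm build of PyTorch (the rocm6.1 index is an
# assumption -- pick the version pytorch.org currently lists for your GPU).
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.1

# Get ComfyUI, install its remaining dependencies, and start it.
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
pip install -r requirements.txt
python main.py
```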
@czaro8006 · 21 days ago
@@NextTechandAI I don't live in the US, and I just don't know whether I should go NVIDIA or AMD. I am debating the 4070 Super or a 16 GB card from AMD. I could up my budget, but again, I can get 16 GB from NVIDIA or 20 GB from AMD. But at 4070 Ti Super prices I am getting uncomfortable spending that much on a GPU. I can afford it, but it just feels bad. I think I will mostly be gaming, and I fear ray tracing will feel mostly like a gimmick. And as I said, I want to try out Stable Diffusion, Flux and all the others. I wish I could make a small side hustle making AI images or content in my area. I did my TensorFlow thesis on an RTX 2060 6 GB, but it didn't get me a job in the IT or AI industry :( Everything might not pan out and I might just use the GPU for gaming only. Ugh, decisions. And just saying, I can afford the 4070 Ti Super, I'm just afraid I won't utilize it. I am not afraid of Linux either.
@NextTechandAI · 21 days ago
@czaro8006 If you are using the GPU mainly for gaming and are not interested in ray tracing, then choose AMD. If you plan to use AI a lot, then you will regret not choosing NVIDIA. I expect my next GPU to be an NVIDIA.
@czaro8006 · 21 days ago
@@NextTechandAI Thank you for that answer. I guess there is no point in trying to sail upwind. GPUs have become too versatile, and I can't justify spending so much just to play video games on it, I guess.
@yousifradio · 2 months ago
I Like It
@NextTechandAI · 2 months ago
Thank you!
@wolfgangterner7277 · 2 months ago
The GGUF loader does not work. I always get this error message: ( `newbyteorder` was removed from the ndarray class in NumPy 2.0. Use `arr.view(arr.dtype.newbyteorder(order))` instead. ) What do I have to do to load the GGUF files? Thanks
@NextTechandAI · 2 months ago
Have you updated both the GGUF extension and ComfyUI itself? Which GPU are you using?
@wolfgangterner7277 · 2 months ago
@NextTechandAI I have updated everything and have a 12 GB RTX 3060
@NextTechandAI · 2 months ago
@wolfgangterner7277 There is an issue in the GGUF GitHub repo that suggests several solutions: github.com/city96/ComfyUI-GGUF/issues/7 I think downgrading numpy, as suggested at the bottom of that issue, is the easiest solution.
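For reference, the quick fix is pinning NumPy below 2.0 (e.g. `pip install "numpy<2"`). The error message itself also shows the API change: NumPy 2.0 removed `ndarray.newbyteorder()`, but the dtype-level call still exists in both versions. A minimal sketch with illustrative values:

```python
import numpy as np

# NumPy 2.0 removed ndarray.newbyteorder(); the dtype-level call works in 1.x and 2.x.
arr = np.array([1, 2, 3], dtype=">i4")           # big-endian int32, values for illustration
# Old (NumPy 1.x only):  swapped = arr.newbyteorder("<")
# New (NumPy 1.x and 2.x):
swapped = arr.view(arr.dtype.newbyteorder("<"))  # same bytes, reinterpreted with swapped byte order
print(swapped.dtype)
```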
2 months ago
Since when is 12 GB "low VRAM"? 😅 I always considered 4-6 GB as low, 8-12 GB medium and 16+ GB as high VRAM.
@NextTechandAI · 2 months ago
VRAM refers to the GPU's memory, not the file size 😉 Some have already gotten FLUX with GGUF to work with 4-6 GB of VRAM, and I expect the same for SD3.5. With 8 GB or less, GGUF definitely makes sense; I also use it with 16 GB.
@Gaming_Legend2 · 2 months ago
It's a good amount, but not for AI generation; I keep crashing the card when running SDXL models on an RX 6600 lol. 12 GB is definitely on the edge of low VRAM for this type of stuff.
@kaleb51 · a month ago
Hello, you have a nice channel. I had a quick question: how can I use TensorFlow on my AMD RX 6800 XT?
@AberrantArt · 2 months ago
I still can't get flux to run on my Radeon 5500 XT. I love your channel BTW. Thank you for what you do.
@NextTechandAI · 2 months ago
Thank you very much!
@Kostya10111981 · a month ago
Hello Patrick! Have you tried the Amuse generator for AMD? If so, what can you say about it?
@NextTechandAI · a month ago
Hello Kostya! Although I haven't used it yet, I think Amuse is a reasonable option for trying out image generation. However, I think AMD should have put the resources into ROCm for Windows; ComfyUI, Forge and A1111 are proven open source tools that offer significantly more options. In my opinion, we don't need another proprietary tool that is also based on ONNX.
@srikantdhondi · a month ago
My PC has: total VRAM 8192 MB, total RAM 32637 MB, pytorch version 2.5.1+cu124, VRAM state NORMAL_VRAM, device cuda:0 NVIDIA GeForce RTX 3050 (cudaMallocAsync). Even flux1-schnell-fp8.safetensors based workflows are not working on my PC; ComfyUI keeps reconnecting and pausing. Any suggestions on how to fix this issue?
@srikantdhondi · a month ago
I tried your workflow, "SD3_5-Large-Turbo-NextTechAndAI-Patreon.json", but ComfyUI keeps reconnecting and is not able to run prompts.
@PenkWRK · 2 months ago
Hello, I got a problem like this: "the size of tensor a (1536) must match the size of tensor b (2304) at non-singleton dimension 2". I've tried everything, but I still get this.
@NextTechandAI · 2 months ago
Hi, which of my workflows and which model files do you use?
@PenkWRK · 2 months ago
@NextTechandAI I use the SD 3.5 Medium FP16 GGUF models and manually created the same workflow as in this video, but I'm still getting the error. I talked to ChatGPT and updated torch, transformers and diffusers, and also set torch float16 in my Python; that's not working as well.
@NextTechandAI · 2 months ago
@@PenkWRK Could you please use the suggested models and my workflows? You can download them from my Patreon for free. In case this doesn't help, try the original SD3.5 Medium model without GGUF.
@Med2402 · 2 months ago
Thanks
@NextTechandAI · 2 months ago
I'm glad you liked the vid.
@93simongh · 2 months ago
Thanks, but it doesn't help if just the 3 text encoder files are almost 15 GB in total... My ComfyUI crashes my PC (99% RAM use) while loading the 3 CLIPs, before even attempting to load the GGUF.
@NextTechandAI · 2 months ago
For very low VRAM I've suggested the FP8 T5 in the video, which is below 5 GB; clip_g and clip_l together are about 1.5 GB. You can even use the GGUF T5 encoders linked at the bottom of City96's GitHub, which go down to 2 GB, but they have a bigger impact on quality than the quantized UNet models. Hence I'd try the FP8 T5 first. Use the runtime parameters --use-split-cross-attention and --lowvram, or even --novram.
@93simongh · 2 months ago
@NextTechandAI Thanks, I will have to try. Where do you use the parameters you mentioned? Do I use them when launching ComfyUI?
@NextTechandAI · 2 months ago
@@93simongh Yes, in the batch file directly after 'main.py'.
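For reference, the launch line might look like this sketch (the path to Python and the exact flag combination depend on your install; on Windows this goes in the run batch file, on Linux straight in the terminal):

```shell
# Illustrative low-VRAM ComfyUI launch; adjust to your own setup.
python main.py --use-split-cross-attention --lowvram

# If it still runs out of memory, the more aggressive variant:
# python main.py --use-split-cross-attention --novram
```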
@darkman237 · 2 months ago
What about Forge?
@NextTechandAI · 2 months ago
From what I have seen, there should be a release soon. Forge with SD3.5 Medium seems to be broken; they probably want to fix this first.
@forg2x · 2 months ago
I tried SD 3.5, and Flux wins, at least in low VRAM on a 3060 12 GB.
@NextTechandAI · 2 months ago
Regarding quality or speed? Are you using both with GGUF?