One thing to help users distinguish which format of model to use: bf16 (bfloat16) can only be used effectively by Nvidia 30 and 40 series cards (Ampere and Ada), which have native hardware support for reading that format. The "full sized" models are in fp32 format, which any modern GPU since 2010 can use, but of course they take up more VRAM.

Even though the bf16 models are "only" 16 bits per parameter and the full sized fp32 models are 32 bits per parameter, both are of almost the same quality in terms of the range of numbers they can represent. The only difference is that bf16 doesn't have as much precision (fewer "decimal places") as fp32. Don't confuse bf16 with fp16, another number format that comes up with AI datasets and GPU capabilities, which has a much smaller range of representable values. As far as image quality goes, you can think of bf16 as having almost the same dynamic range as fp32, while fp16 is nowhere near the range of bf16.

The "lite" vs full versions should be self explanatory: lite has far fewer parameters than the full version, and therefore less information encoded in its neural net (smaller capacity).

Nvidia 30 and 40 series card users can mix and match between the bf16/fp32 and lite/full variations. If you don't have a 30 or 40 series card, you're stuck with the lite/full versions of the fp32 models.

FYI, the SC nodes as they're set up do NOT keep the models in memory after they are used, so your peak VRAM usage isn't the sum of all the models. The largest full model I've loaded takes up around 8-9 GB of VRAM, so anyone with a 12 GB card should be able to comfortably run the full fp32 models.
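If you want to see the range-vs-precision tradeoff for yourself, here's a quick sketch using PyTorch's finfo (just an illustration, assumes you have torch installed):

```python
# Compare the numeric limits of the three formats discussed above.
import torch

for dtype in (torch.float32, torch.bfloat16, torch.float16):
    info = torch.finfo(dtype)
    # bf16 reaches roughly the same max as fp32 (~3.4e38) but with a much
    # larger eps (less precision); fp16 tops out around 65504, which is
    # where its "range" problem comes from.
    print(dtype, "max:", info.max, "eps:", info.eps)
```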
@MichauxJHyatt 8 months ago
I appreciate your insight!
@FiveBelowFiveUK 8 months ago
thanks for the detailed write up! I was using a 3060 12GB and upgraded to a 4090 24GB, so I guess I never noticed. I'll be sure to mention this in future :)
@fabiotgarcia2 8 months ago
So... for Mac M2 we need to use the fp32 format, right? Where can we find it for download?
@glenyoung1809 8 months ago
@fabiotgarcia2 search Hugging Face for stable cascade; the first page should have the safetensors files. You want the ones marked stage a, b and c.
@FiveBelowFiveUK 8 months ago
huggingface.co/stabilityai/stable-cascade @fabiotgarcia2 they should all be there in safetensors format
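If you'd rather script the download, here's a minimal sketch using the huggingface_hub library (pip install huggingface_hub); the exact filenames are an assumption on my part, so check the repo's file list first:

```python
# Download the three Stable Cascade stages from the Hugging Face repo.
from huggingface_hub import hf_hub_download

# Filenames assumed from the repo layout; verify before running.
for name in ("stage_a.safetensors", "stage_b.safetensors", "stage_c.safetensors"):
    # Files are cached locally; the returned path points into the cache.
    path = hf_hub_download(repo_id="stabilityai/stable-cascade", filename=name)
    print(path)
```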
@gimperita3035 8 months ago
I'm running full stage C and B on a 4080. No errors so far generating 2K-ish, even 2560x1440.
@Loutchianooo 8 months ago
Thanks man, really helpful! All them folders are a bit confusing; SDXL was checkpoints, now it's unet...
@electronicmusicartcollective 8 months ago
Thanx man! Good explanation and a handy workflow
@RalFingerLP 8 months ago
Nice video, Drift!
@styrke9272 8 months ago
cool and pretty concise!!
@magenta6 8 months ago
Excellent and concise!
@AI_Creatives_Toolbox 8 months ago
Thanks for the excellent video! An unrelated question - How do you get the lower bar with the renders? Thanks!
@FiveBelowFiveUK 8 months ago
in the latest Comfy you can click a button in the bottom left corner; adjusting to fewer images at a bigger size will get you what I have set in mine :)
@LuckyWabbitDesign 8 months ago
@FiveBelowFiveUK not seeing any 'button in bottom left corner'. Pretty certain I've got the latest ComfyUI. Could you describe further, or post a screenshot? Thanks
@FiveBelowFiveUK 8 months ago
I can post a screenshot on the channel feed, so look there ;) @LuckyWabbitDesign
@kofteburger 8 months ago
Tried this with a Radeon 6700 10 GB on Ubuntu using the lite models. It worked once I replaced the VAE decoder with the tiled one. However, the second KSampler (stage B?) is painfully slow at the default resolution, over 20 s/iteration slow. 1024x1024 is quite good.
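For anyone wondering why the tiled decoder helps on 10 GB: it decodes the latent in patches instead of all at once, so peak VRAM is roughly one tile's worth. A conceptual sketch only; the vae.decode interface here is a stand-in rather than ComfyUI's actual API, and real tiled decoders also overlap and blend tiles to hide seams:

```python
import torch

def tiled_decode(vae, latent, tile=64):
    # latent: (batch, channels, height, width); decode one tile at a time
    # so only a single tile's activations live on the GPU at once.
    _, _, h, w = latent.shape
    rows = []
    for y in range(0, h, tile):
        cols = []
        for x in range(0, w, tile):
            patch = latent[:, :, y:y + tile, x:x + tile]
            cols.append(vae.decode(patch))   # peak VRAM ~ one tile
        rows.append(torch.cat(cols, dim=3))  # stitch along width
    return torch.cat(rows, dim=2)            # stitch along height
```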
@FiveBelowFiveUK 8 months ago
interesting. See if the newer method released later yesterday works. I have people on
@magimyster 8 months ago
Is it possible to make it work with just the CPU?
@sickvr7680 8 months ago
yes
@FiveBelowFiveUK 8 months ago
yes, but you will need to install the PyTorch build marked CPU for your system, not the CUDA version
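A quick way to check which build you ended up with (a minimal sketch, assuming PyTorch is already installed):

```python
import torch

print(torch.__version__)          # pip CPU wheels usually end in "+cpu"
print(torch.cuda.is_available())  # False on a CPU-only install
# ComfyUI can then be started in CPU mode, e.g.: python main.py --cpu
```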
@equilibrium964 8 months ago
Good job, dude. Thank you very much! Do you know if it is possible to use Stable Cascade with SDUpscale, and which model I should use in this case?
@FiveBelowFiveUK 8 months ago
upscaling is done on the images, so you can go ahead and use any upscaler or upscaling nodes. It already produces very high resolution images, so it will be interesting to see how large they can go.
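Since it's plain image-space upscaling, anything works; here's a minimal sketch using Pillow's Lanczos resampling as a stand-in for a dedicated upscale model (the filenames are hypothetical):

```python
from PIL import Image

img = Image.open("cascade_output.png")  # hypothetical output file
big = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)
big.save("cascade_output_2x.png")
```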
@djivanoff13 8 months ago
next video img2img plz (Stable Cascade)
@FiveBelowFiveUK 8 months ago
the effnet encoder isn't supported in Comfy yet; I'm watching the commits for an update. It's needed to encode loaded images :D We will have it up as soon as it lands!
@dasistdiewahrheit9585 8 months ago
Thanks for the info. One point: what you wrote on that website is pretty hard to read. Please add some newlines or something.
@FiveBelowFiveUK 8 months ago
if you meant Civit, I wrote it up a bit more clearly. Hope that helps; I can add more if needed
@dasistdiewahrheit9585 8 months ago
@FiveBelowFiveUK That was exactly my point. Thank you, very helpful information.
@juanchogarzonmiranda 8 months ago
Thanks!!
@jonmichaelgalindo 8 months ago
You don't have to match B and C! Use C bf16 and B lite bf16 for best results. "Fully supported"? Try doing img2img or inpainting. Oh wait, you can't. It works in the SAI repo with a Python script only.
@FiveBelowFiveUK 8 months ago
uhm, hold up there a minute haha. Img2img is supported; you are watching old videos ;) We've had Vision and img2img in SDXL since day one. This video was the day-one ComfyUI support launch; the big giveaway is that it was released before Stability even let checkpoints out. Since then I've released workflows that let ANY diffusion model, with all the CNET etc., bridge into Cascade. You should check the more recent videos ;) before you comment. The AI space moves fast; it's not waiting around. Also, bf16 is for Ampere/Ada cards only, so 30xx & 40xx; I try not to make assumptions when I give out advice
@jonmichaelgalindo 8 months ago
@FiveBelowFiveUK This video is for Stable Cascade. It's a completely different model architecture from SDXL. No img2img support for Stable Cascade in Comfy as of the morning of Feb 24, 2024. (In SDXL, to do img2img you only need to run the RGB pixels through a VAE to get a latent. In Stable Cascade, you first run the RGB through a VAE to get one latent, then compress that to a second latent using a diffusion model called "stage B". It's this compressed latent that Stable Cascade's main pipeline, "stage C", diffuses.)
@FiveBelowFiveUK 8 months ago
@jonmichaelgalindo hi there, you seem to be running on day-old info. The effnet_encoder is a VAE which is built into the stage C encoder; you run your image into that VAE encoder to perform img2img, no different in principle from the VAE encoder used with SDXL. In early workflows (last week) we were doing this before the checkpoints were released (see the Argus-v18 img2img workflow). Please check the recent videos; if you watch more than a day late, it's likely outdated already.

We have had img2img support in ComfyUI since Feb 19th, and I released workflows to help people on the day ComfyUI added the code to support the feature. As a matter of fact, I have demonstrated SD1.5, SD2.1 and SDXL, all with txt2img, img2img and, in the case of Cascade, the Vision stack, with and without LoRA loading on both sides. The specific nodes that aid Cascade img2img were introduced here (Feb 19th): github.com/comfyanonymous/ComfyUI/commit/a31152496990913211c6deb3267144bd3095c1ee Like I said, I'm here to bridge the gap between the new tools and artists. That means putting things in terms everyone will understand.
@jonmichaelgalindo 8 months ago
@FiveBelowFiveUK Thanks so much, I'll try again to find the info. I honestly haven't found a single video or workflow anywhere enabling img2img; I tried using the VAE as an encoder on the 17th. I'm amazed posting that GitHub link didn't get your comment deleted! I've found a workflow that adds images as conditioning?
@rsunghun 8 months ago
Does it work with ControlNet?
@FiveBelowFiveUK 8 months ago
there is a Canny model; however, it's not yet implemented. I'll cover this as soon as it arrives :D
@MuffShuh_PA 8 months ago
keep it up
@appolonius4108 8 months ago
The workflow is not the one you show in the video.
@FiveBelowFiveUK 8 months ago
If you look at the Argus page, there are more than 10 versions already; each was created for a purpose. The early workflows do not use the checkpoints released on day three after the Cascade launch; we were using Cascade on day one with these. All Argus versions up to v18 did not use the newer, current checkpoint method. You can learn more about that in the next video ;) "stable cascade comfy checkpoint update". I also have an all-in-one version of Argus, "Argus Cascade Studio"; once it's complete, I will release it.
@LouisGedo 8 months ago
👋
@AgustinCaniglia1992 8 months ago
Too many models. My PC freezes every time it loads one. Not practical.
@FiveBelowFiveUK 8 months ago
well, lucky for you, they released two proper checkpoints to make it all easier. A new video and workflows came out yesterday