Flux Model Update: Smaller, Faster, and Just as Powerful!

11,387 views

Code Crafters Corner

1 day ago

Comments
@jcboisvert1446 5 months ago
Thanks
@CodeCraftersCorner 5 months ago
Thank you!
@premium2681 5 months ago
Good video! Because of the different extension, at first I thought the new, smaller models were going into a traditional workflow with the normal checkpoint loader, but we're staying with the UNet stuff, I guess. Great that they managed to bring the size down this much without hurting the quality! Keep the videos coming!
@CodeCraftersCorner 5 months ago
Glad it was helpful!
@tetsuooshima832 5 months ago
Very well explained, I like this. Thumbs up!
@CodeCraftersCorner 5 months ago
Thank you!
@sunlightlove1 5 months ago
Thanks for such quick updates!
@kylepoole3580 5 months ago
Thanks so much for your videos! Just a quick correction for anyone trying your trick of combining Flux and SDXL: you can't feed the latent from Flux directly into the KSampler for SDXL. You have to use a VAE Encode node on the image output from the Flux step, then put that latent into the SDXL KSampler. It's a pretty good combo for getting the scene composition right with Flux and then using a good SDXL checkpoint for the style!
@CodeCraftersCorner 5 months ago
Thanks for sharing! Really appreciate the correction.
@panonesia 5 months ago
@@CodeCraftersCorner Looking forward to a tutorial combining Flux and SDXL, please!
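For anyone who wants to try the Flux-to-SDXL handoff described in the correction above outside of ComfyUI, here is a minimal Python sketch using the diffusers library. It is not the video's workflow: the model IDs, step counts, and strength value are illustrative assumptions. The key point matches the correction, though: Flux and SDXL latents are incompatible, so the Flux result is decoded to an image and re-encoded by SDXL's VAE (passing a PIL image to the img2img pipeline does that encode internally, the equivalent of ComfyUI's VAE Encode node).

import torch
from diffusers import FluxPipeline, StableDiffusionXLImg2ImgPipeline

prompt = "a cozy cabin in a snowy forest at dusk"

# Stage 1: Flux (schnell) for scene composition.
flux = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")
base_image = flux(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]

# Stage 2: SDXL img2img for style; the pipeline VAE-encodes the PIL image.
sdxl = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
styled = sdxl(prompt, image=base_image, strength=0.5).images[0]
styled.save("flux_plus_sdxl.png")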
@swannschilling474 5 months ago
Thanks for those super tight updates!! 😊
@CodeCraftersCorner 5 months ago
Glad you like them!
@jjhon8089 5 months ago
Thank you
@CodeCraftersCorner 5 months ago
Thank you for watching!
@Gabriecielo 5 months ago
The FP8 version is really good; there's no obvious quality difference from the dev version, but it's much faster. 11 GB vs 23 GB really helps users who don't have a 4090 with 24 GB of VRAM.
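On loading the smaller fp8 checkpoint outside ComfyUI: diffusers supports single-file loading of Flux transformers, so something like the hedged sketch below should work. The file path is a placeholder and the bfloat16 upcast is an assumption; in ComfyUI itself you would instead pick the fp8 file in the UNet/diffusion-model loader as shown in the video.

import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Load the ~11 GB fp8 single-file checkpoint in place of the ~23 GB dev transformer.
transformer = FluxTransformer2DModel.from_single_file(
    "flux1-dev-fp8.safetensors",  # placeholder path to the fp8 file
    torch_dtype=torch.bfloat16,   # compute in bf16; fp8 is a storage format here
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")
image = pipe("a red fox in tall grass", num_inference_steps=28).images[0]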
@devnull_ 3 months ago
Thanks. I know this video is basically from when Flux.1-dev and Schnell were released, but I still wonder whether they trained that bad-looking female face into Schnell on purpose, or whether it comes from the distillation. You almost always get this exact same plastic-looking face; its shading (no matter the skin color) looks like what you get when you texture a 3D model with a photo, or like airbrush art. And in Flux.1-dev, by default (unless you use LoRAs and a lower guidance value), you of course get that same cleft chin, square jaw and thin nose - if you dare to use the keyword "beautiful" :D
@CodeCraftersCorner 3 months ago
Thanks for sharing! I've noticed the same pattern. It does seem like it could be due to Schnell being a distilled version.
@rosetyler4801 5 months ago
Why does this model require the UNet loader instead of the traditional checkpoint loader? I'm sure it's because that's the way it was designed, but I'm wondering: is there an advantage to this? And if not, is it possible for someone to make regular checkpoint versions, or will we always have to stick with the UNet loader for any version of this model?
@CodeCraftersCorner 5 months ago
Hello, it is now available. Just published a video on how to use the built-in nodes with the model here: kzbin.info/www/bejne/lZe2c3WMl9mKfKM
@nourghafarji 5 months ago
Great stuff, man! Just discovered your channel; you explain very well. Is there a way to use a negative prompt with the Flux model?
@CodeCraftersCorner 5 months ago
Thanks for the sub! For now, it does not work with negative prompts (the released dev and schnell models are guidance-distilled, so they run without classifier-free guidance, and there is nothing for a negative prompt to act on). We'll have to wait for more updates.
@sertocd 5 months ago
How do you make the node lines cornered instead of the default spaghetti version?
@CodeCraftersCorner 5 months ago
Hello. These are the steps: go to Settings (the gear icon above the Queue Prompt button), look for "Link Render Mode", and change it to "Straight". There is an option to hide the lines too.
@sertocd 5 months ago
@@CodeCraftersCorner thanks
@CodeCraftersCorner 5 months ago
👍
@aurelienb9109 5 months ago
What about using ControlNet with an SDXL second pass? Wouldn't it preserve more of the initial composition and shapes of Flux outputs than inpainting with masks?
@CodeCraftersCorner 5 months ago
Yes, I am preparing a video on the topic.
@aurelienb9109 5 months ago
@@CodeCraftersCorner Cool. One idea, for example: generate a "line art drawing" with Flux, then use ControlNet on the lineart to guide SDXL.
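A hedged sketch of that idea in diffusers, substituting a canny-edge ControlNet for a lineart one (the model IDs and parameters are assumptions, not from the video): extract edges from a Flux render, then let SDXL repaint the scene while the edge map pins down composition and shapes.

import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image
from PIL import Image

# Edge map from a previously saved Flux render (the "line art" guide).
flux_render = load_image("flux_render.png")
edges = cv2.Canny(np.array(flux_render), 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# SDXL restyles the scene; the edge map preserves composition and shapes.
result = pipe(
    "oil painting, warm colors",
    image=control_image,
    controlnet_conditioning_scale=0.7,
).images[0]
result.save("sdxl_controlnet_pass.png")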
@ImAlecPonce 5 months ago
I tried them out, and I've noticed there is more detail in the larger dev file.
@CodeCraftersCorner 5 months ago
Yes, true! The dev model gives better detail.
@4thObserver 5 months ago
Quick question: can it be used without ComfyUI?
@CodeCraftersCorner 5 months ago
At the moment, you can use it in ComfyUI and on a Hugging Face Space. There are feature requests for Automatic1111 and Fooocus, but it's not clear when they will be implemented.
@Geonade 5 months ago
Are you able to use other models, or just the basic Flux ones from the setup?
@CodeCraftersCorner 5 months ago
There are only the Flux dev and Flux schnell versions for now.
@DevasistheContentCreator 5 months ago
Please help me. I am trying to run Flux AI and followed all the steps, but it didn't work in the CMD. It writes "Got prompt" and points to "load diffusion model". Please help!
@CodeCraftersCorner 5 months ago
Hello, can you try updating your ComfyUI and see if that fixes it?
@Mehdi0montahw 5 months ago
Thanks
@CodeCraftersCorner 5 months ago
Thank you too!
@dbmann22 5 months ago
Thanks! How long does it take to render an image with the fp8 version compared to the full model?
@CodeCraftersCorner 5 months ago
The full model takes about 10 minutes; fp8 is a little over 7 minutes on my GTX 1650. You may or may not see a speedup depending on your card.
@IntiArtDesigns 5 months ago
Flux can't do realistic images? Uh, yes it can, pretty well actually. What nonsense is this? lol. It might not be the best, since it's a general-purpose model that isn't specialized in anything (it's not fine-tuned), but despite that it can make some incredibly realistic images if you know how to prompt for them. It's not terrible by any stretch of the imagination. I mean, SD3 struggles with even basic human anatomy. And let's not forget that the vast majority of images online, especially of women, are touched up anyway to smooth skin, remove blemishes, etc. It really depends on what you consider natural and/or realistic, because a lot of the photos we see these days are not that realistic. And that's not even bringing makeup into the equation. Some of the outputs I've gotten from Flux look about as "realistic" as your average IG photo. I don't think you can ask for much more without a fine-tune for your specific "realistic" needs. Also, are we just talking about human-subject realism, or realism in general, including objects and scenery? It's a bold statement in any case.
@CodeCraftersCorner 5 months ago
Agree!
@Elwaves2925 5 months ago
@@CodeCraftersCorner If you agree, why did you say it was true after reading their comment? I'm not having a go or anything, just curious. @IntiArt I agree, and I was about to post my own comment when I saw yours. The vast majority of my generations have been realistic (without any workaround) and they look great, even better with a skin-detailer workflow. As you say, realism also applies to non-humans, and I have some generations that are indistinguishable from photos. The fact that the commenter could only call it "trash" says a lot, IMO. I suspect that by "realistic" what they really meant was nudies. :-)
@CodeCraftersCorner 5 months ago
Before you posted your comment, I had responded similarly to a previous comment (the one shown in the video), agreeing that this is a general-purpose model. I assume you're using the dev model. The Schnell model tends to produce photoshoot-type images unless otherwise specified in the prompt, whereas the same simple prompt in the dev model generates more realistic images. Of course, as you mentioned, what one considers realistic can vary from person to person.
@jonmichaelgalindo 5 months ago
I've been using this since day 1, though.
@jaouadbentaguena8340 5 months ago
If you don't need text in the image, why use ComfyUI instead of Fooocus? I don't yet see a clear advantage to switching from Fooocus to ComfyUI. Any tips on Fooocus for a realistic face swap? I find that Fooocus always gives the same woman's face as the one you showed in the video 😅
@CodeCraftersCorner 5 months ago
Hello, only ComfyUI supports the Flux model for now. There is a feature request in Fooocus, but it has not been implemented yet. I usually do face swapping in ComfyUI with FaceID V2. You can try using LoRAs to get different characters.
@jaouadbentaguena8340 5 months ago
@@CodeCraftersCorner Thank you, I appreciate your feedback 🙏
@CodeCraftersCorner 5 months ago
👍
@AlexUnder_BR 5 months ago
I would like you to share where it is stated that commercial use is allowed, sir, because the very Hugging Face page shown in your own video clearly states that it is for non-commercial use.
@CodeCraftersCorner 5 months ago
Hello. The dev model is for research and non-commercial purposes only. The schnell version is under the Apache-2.0 license, which allows non-commercial, research, and commercial use. I made a video when the model came out; here is the link: kzbin.info/www/bejne/hZK2apt8ecSrsJo In the description of that video, you will find links to the models' original Hugging Face repos with the licenses.
@manchumuq 5 months ago
I really wish a more intuitive interface would come out soon for FLUX, and I believe it will. On the other hand, ComfyUI is indeed powerful and flexible, but I just don't see it as the overall AI generation tool for creative jobs. I switched from Automatic1111 to Comfy, and then, once Fooocus got really handy and practical, I never went back to Comfy. It's just too complicated for simple jobs; I would rather spend the time refining images with inpainting than spend lots of time designing nodes every time I need to do anything creative. It kills creativity from the accessibility end. It's way too complicated and techy, more suited to coders and geeks than to the mass majority of creatives.
@CodeCraftersCorner 5 months ago
Thanks for sharing your experience. I usually go for the UI that is easier to use. Right now, I'd say I'm using ComfyUI about 40% and Fooocus about 60% for my image generations.
@manchumuq 5 months ago
@@CodeCraftersCorner We're probably the same; the Fooocus inpaint engine is just so good. I am trying out FLUX right now, and it has literally blown me away.
@CodeCraftersCorner 5 months ago
@manchumuq Yes, same.
@Elwaves2925 5 months ago
It works on SwarmUI, apparently. Not StableSwarmUI, but the newer SwarmUI. While that still uses Comfy, it's all in the background.
@manchumuq 5 months ago
@@Elwaves2925 Thank you, sir, I'll try it now.
@francoisneko 5 months ago
Great video! Just discovered your channel and subscribed, as it looks very instructive! I have an RTX 3060 laptop (6 GB only) and 16 GB of RAM; can it run Flux reasonably fast? Also, I am thinking of upgrading my RAM to 32 GB if it helps generate faster (it's an investment, so I would only do it if it's worth it). What would you recommend for an okay rendering time? Thank you for your input!
@CodeCraftersCorner 5 months ago
Thank you for the sub! I can't say for sure. For reference, I am running the Flux model on a GTX 1650 with 4 GB of VRAM and 32 GB of RAM. Try to see if you can run it first, and if you feel the need to upgrade, then you can.
@СанжарАльжанов-г4ь 5 months ago
Hi! Can it work with FaceID?
@CodeCraftersCorner 5 months ago
Hello, unfortunately not for now.
@Nid_All 5 months ago
How much VRAM do I need to run this?
@cedricweber4141 5 months ago
In low-VRAM mode, you are supposed to be able to run it with 12 GB.
@totempow 5 months ago
Got it running with 8 GB, though it takes about 9 minutes per image on the dev version and 6 or so on schnell.
@pajeetkumar1645 5 months ago
​@@totempowOut of sheer curiosity, might I inquire as to the specific model of graphics card upon which you have chosen to execute the flux model?
@PunxTV123 5 months ago
When I ran it on an RTX 3060 12 GB, I got an error. Tried SwarmUI and Comfy, no luck.
@shareeftaylor3680 5 months ago
@@totempow How did you get it running on 8 GB of RAM? Every time I try to run it, it crashes at the UNet step. I'm trying to run it on CPU.
@CGFUN829 5 months ago
Yes, Flux has waxy skin textures, which is no good for realism, but some people have already fixed this with SDXL.
@serasmartagne 5 months ago
Using the Flux Guidance node set to around 2.3 gives very realistic skin textures. To enhance further, do a second pass with Flux Guidance around 1.8.
@CodeCraftersCorner 5 months ago
The dev model is better but slower. Yes, an SDXL second pass can give great results.
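For readers outside ComfyUI, the two-pass guidance trick above translates roughly to the following diffusers sketch. ComfyUI's Flux Guidance node corresponds to the guidance_scale argument here (the dev model's distilled guidance). The guidance values mirror the comment (2.3, then 1.8); the img2img strength is an assumption.

import torch
from diffusers import FluxImg2ImgPipeline, FluxPipeline

prompt = "close-up portrait photo of an elderly fisherman, natural light"

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# Pass 1: lower guidance (~2.3) for more natural skin texture.
first = pipe(prompt, guidance_scale=2.3, num_inference_steps=28).images[0]

# Pass 2: a light img2img refinement at guidance ~1.8, reusing the same weights.
refiner = FluxImg2ImgPipeline(**pipe.components)
final = refiner(
    prompt, image=first, strength=0.35,
    guidance_scale=1.8, num_inference_steps=28,
).images[0]
final.save("realistic_skin.png")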
@johndebattista-q3e 5 months ago
Just to let you know, they're in the Manager now.
@CodeCraftersCorner 5 months ago
Thanks for letting me know.
@johndebattista-q3e 5 months ago
@@CodeCraftersCorner My pleasure, you helped me a lot.
@CodeCraftersCorner 5 months ago
👍
@der-zerfleischer 5 months ago
The MPS error can be suppressed if you set the weight_dtype to default. Then there is no error, and you can generate images with a MacBook Pro M3 Max. A really good model with great results. But think of us Mac users who unfortunately can't use NVIDIA ;-) In German: kzbin.info/www/bejne/op6Ucq1jaL6lkNU
@CodeCraftersCorner 5 months ago
Thanks for sharing!
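The root cause (visible in the traceback posted further down): PyTorch's MPS backend has no support for the Float8_e4m3fn dtype, so fp8 weights must fall back to a supported dtype on Apple Silicon, which is what the "weight_dtype: default" setting does. A tiny illustration of the check involved (the bfloat16 fallback choice is an assumption):

import torch

# fp8 storage works on recent CUDA builds, but MPS cannot represent
# Float8_e4m3fn, so choose a wider dtype when running on Apple Silicon.
if torch.backends.mps.is_available():
    weight_dtype = torch.bfloat16
else:
    weight_dtype = torch.float8_e4m3fn

print(f"loading model weights as {weight_dtype}")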
@bladechild2449 5 months ago
Now we just need somewhere to use it that isn't Comfy.
@CodeCraftersCorner 5 months ago
At the moment, you can use it in ComfyUI and on a Hugging Face Space. There are feature requests for Automatic1111 and Fooocus, but it's not clear when they will be implemented.
@leonardogoncalves8341 5 months ago
Sorry, but you destroyed the image.
@CodeCraftersCorner 5 months ago
Yes, I know! It's difficult to get things right when recording. A more appropriate image of what I was trying to show is this video's thumbnail, made using the same method.
@bomsbravo 5 months ago
ComfyUI is really slow. I hope there will be a simpler interface, like Swarm or Fooocus, so it will be much faster.
@poppi3362 5 months ago
Generally, an interface has very little, if anything, to do with processing speed. It's there for you, not for the computer.
@CodeCraftersCorner 5 months ago
I generally go for the interface that works for me.
@hadbildiren123 5 months ago
Flux makes faces and skin look like a plastic doll.
@CodeCraftersCorner 5 months ago
I think so too.
@Elwaves2925 5 months ago
I disagree that they look plastic. If you don't prompt otherwise, I'd say adults have too much of a supermodel look, too much of a professional photoshoot look, but that's been the case with all the base models.
@der-zerfleischer 5 months ago
Error occurred when executing SamplerCustomAdvanced:

Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.

File "/Users/markusrossler/ComfyUI/execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
[... long stack trace through execution.py, nodes_custom_sampler.py, samplers.py, k_diffusion/sampling.py, model_base.py, ldm/flux/model.py, and ops.py, ending in ...]
File "/Users/markusrossler/ComfyUI/comfy/ops.py", line 24, in cast_to
    return weight.to(device=device, dtype=dtype, non_blocking=non_blocking)
@IntiArtDesigns 5 months ago
Are you on Mac or Windows? How much RAM do you have? What version of PyTorch are you running? Is your ComfyUI fully up to date? Have you tried using a different workflow?
@nirsarkar 5 months ago
It will not work on macOS, since the Float8_e4m3fn dtype is not supported on the MPS backend.
@der-zerfleischer 5 months ago
@@IntiArtDesigns MacBook Pro M3 Max, always the latest ComfyUI. The workflow is identical to the one in the ComfyUI examples. Meanwhile, the model runs with weight_dtype = default; about 13 minutes later, the image is ready. The quality is fantastic, but at that speed it's not really usable for any Mac user :-(
@CodeCraftersCorner 5 months ago
@der-zerfleischer Thanks for the follow-up.