Great tutorial. Very clear and concise. Many thanks for explaining every step and not going a million miles an hour. Just subscribed.
@CodeCraftersCorner 21 days ago
Thanks for the sub! Glad it was helpful!
@dadekennedy9712 a month ago
I know that some people aren't happy with Flux's lack of control, but I appreciate you going into detail with this. I also still use SDXL and others for the options. I use Flux like a refiner.
@CodeCraftersCorner a month ago
Thank you! I will probably do some SDXL videos too.
@RandMpinkFilms 11 days ago
Fantastic video!! Super clear!! Thank you for this!
@CodeCraftersCorner 8 days ago
Thanks for watching!
@UnclePapi_2024 a month ago
Still use SDXL and SD1.5... just getting into Flux thanks to videos like yours, so keep up the excellent work!!
@CodeCraftersCorner a month ago
Thank you!
@chillsoft a month ago
I'd love the SDXL stuff as well. Your channel is so helpful, thanks for everything!
@CodeCraftersCorner a month ago
Thank you!
@bordignonjunior a month ago
As always, great videos. Nobody explains and shows how things work as well as you do.
@CodeCraftersCorner a month ago
Thank you!
@8tan a month ago
As always, the best tutorials. Very concise, yet with great explanations that never get too technical. Thanks a lot!
@CodeCraftersCorner a month ago
Glad you like them!
@DarkGrayFantasy a month ago
Amazing work! Short, to the point, and informative!
@CodeCraftersCorner a month ago
Glad it was helpful!
@NazarovaNataliya a month ago
Thank you very much for your work. Good luck and good mood!
@CodeCraftersCorner a month ago
Thank you! You too!
@TheGalacticIndian a month ago
Thank you for your clear English!💛💛
@CodeCraftersCorner a month ago
Thank you!
@Lestad4 5 days ago
I'm getting an error in the DualCLIPLoader: required input is missing. I don't have either of your clip_names 1 and 2. How do I get them, and in what folder should I put them? Thanks in advance.
@CodeCraftersCorner a day ago
Hello, I made a video on how to get all the models for Flux here: kzbin.info/www/bejne/fqvNeamafZqVe5o. This should help you get all the missing models.
@brave3d a month ago
macOS M1 Max (32GB RAM) freezes when loading the UNet model. Is there a way to use a GGUF model instead?
@CodeCraftersCorner a month ago
Sorry, I do not own this system to test on. Can you try changing the dtype weight in the Load Diffusion Model node to default and try again?
@jakkalsvibes a month ago
Thank you for the detailed explanation. Any idea why I get this error when I hit Queue? Error occurred when executing ControlNetLoader: MMDiT.__init__() got an unexpected keyword argument 'image_model'
@CodeCraftersCorner a month ago
Are you sure you are using the Comfy Core version of the Advanced ControlNet node? For now, only the default nodes will work. There is an issue here that may be of help: bit.ly/3Ums2If
@jakkalsvibes a month ago
@@CodeCraftersCorner Thank you will check it out
@Aleksandrlllmakov a month ago
Hi, why does my 'Apply ControlNet (OLD Advanced)' not have a 'vae' input?
@CodeCraftersCorner a month ago
Hello, I am using the latest ComfyUI version. Try updating yours and then try again.
@sergeykurkcuoglu2808 a month ago
Thank you for your work. I downloaded diffusion_pytorch_model.safetensors for ComfyUI. Where do I paste that file? There are many different directories inside the models directory.
@CodeCraftersCorner a month ago
In the models folder, go inside the controlnet folder and paste the safetensors file there. You can rename it and try again.
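The steps above can be sketched in a few shell commands. The install path and the renamed filename here are assumptions for illustration (a default ComfyUI checkout in your home directory); adjust them to your setup, and point the `mv` at your actual download location:

```shell
# Where ComfyUI looks for ControlNet weights (path is an assumption;
# set COMFYUI_DIR to your real install location).
COMFYUI_DIR="${COMFYUI_DIR:-$HOME/ComfyUI}"
CONTROLNET_DIR="$COMFYUI_DIR/models/controlnet"
mkdir -p "$CONTROLNET_DIR"

# Simulate the downloaded file for this example, then move it into
# the controlnet folder with a recognizable name so it is easy to
# pick out in the ControlNetLoader dropdown:
touch "$HOME/diffusion_pytorch_model.safetensors"
mv "$HOME/diffusion_pytorch_model.safetensors" \
   "$CONTROLNET_DIR/flux_upscaler_controlnet.safetensors"
ls "$CONTROLNET_DIR"
```

After moving the file, refresh or restart ComfyUI so the loader node re-scans the folder and the new name appears in its dropdown.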
@dribidelka7863 a month ago
Hi, why is the output just a black square? The upscaler is in the diffusion_models folder, t5xxl and clip_l are in the clip folder, and the VAE is in vae.
@CodeCraftersCorner a month ago
Hello, are you perhaps using a Mac? If so, change the "Load Diffusion Model" node's "dtype_weight" to default. If you are on Windows and still getting black images, try bypassing the ControlNet nodes and see if you are able to generate images with the Flux model alone.
@dribidelka7863 a month ago
@@CodeCraftersCorner I'm on Windows, 12GB VRAM. What do I do to bypass ControlNet? Disconnecting it does not help.
@rmeta3391 a month ago
This Jasper upscaler changes the face of my subject too much. If you don't care about that, it's a good upscaler. I care, so I won't use it on humans. I think it would be fine on animals. Thanks for a good workflow.
@CodeCraftersCorner a month ago
Good point! Thanks for sharing.
@philippebourin3505 a month ago
An SDXL video would be interesting.
@CodeCraftersCorner a month ago
Thanks! Will see what I can do.
@aegisgfx a month ago
Can this workflow be modified to use the GGUF models?
@CodeCraftersCorner a month ago
Hello. Yes, you can.
@aegisgfx a month ago
@@CodeCraftersCorner I tried it; the results are way off, almost like it's doing a creative upscale or something.
@bordignonjunior a month ago
You mentioned in the video that this is not supposed to be used for 4k upscaling. Could you make a video with a good upscaling method to get up to 4k images?
@CodeCraftersCorner a month ago
Yes, this model was trained to upscale low-resolution (320px) images to higher resolutions. It's not meant for 4k upscaling; most likely you will run into an Out Of Memory error. I'll see if I can make a video on 4k upscaling.
@yct6956 19 days ago
Don't know which part has the error; when I press Queue Prompt, I get a Python error.
@CodeCraftersCorner 19 days ago
Hello, usually the error message appears in the terminal (CMD). It will tell you if anything is missing.
@yct6956 18 days ago
@@CodeCraftersCorner python.exe has quit; the CMD only shows `Using split attention in VAE`, with no error code left behind.
@2PeteShakur a month ago
Nice! Yeah, this is a good alternative to SUPIR, which is also a great upscaler. I have yet to find an upscaler for enlarging small text; most results/outputs are made-up gibberish! lol
@CodeCraftersCorner a month ago
Thanks for the tips!
@tariq2812p 26 days ago
👍
@CodeCraftersCorner 22 days ago
Thank you so much!
@VintageForYou a month ago
Rendered out after 20 minutes with an image size of 1024x1600, scale 2.0, and 20 steps, on a 12GB graphics card with 32GB of RAM. This can be produced in seconds on a free online upscale app.🤔
@CodeCraftersCorner a month ago
Thanks for sharing your findings. For me, it takes a little less than a minute (~40 seconds after loading the model) to go from 320x320 to 1024x1024. However, when I tried going from 1280x720 to 1920x1080, it took 22 minutes to complete.
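For anyone comparing render times, the jump largely tracks the output pixel count, which grows fast with resolution. A back-of-the-envelope comparison (illustrative arithmetic only, not a benchmark):

```python
def megapixels(width: int, height: int) -> float:
    """Total output pixels, in millions."""
    return width * height / 1_000_000

# The trained use case: small inputs upscaled to ~1 MP outputs.
print(megapixels(1024, 1024))  # 1.048576
# A 1080p output is roughly double the pixels of the trained case...
print(megapixels(1920, 1080))  # 2.0736
# ...and a 4k output is ~8x, which is where Out Of Memory errors
# and very long render times become likely.
print(megapixels(3840, 2160))  # 8.2944
```

Actual runtime also depends on VRAM headroom and whether the model spills into system RAM, which is why times vary so much between machines.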
@emptyslot6972 a month ago
When did Tuvok come to Earth and start doing videos about AI imagery?
@CodeCraftersCorner a month ago
Lol! First time someone has made that reference to me. Only OGs will know about this!