*Edit: uploaded an updated workflow to include an image resize node.* So far I've found the canny LoRA controlnet isn't that good compared to the full version, but at 1.24GB that's to be expected. The depth LoRA controlnet, on the other hand, works really well compared to the others currently out there. What has your experience been?
@giuseppedaizzole7025 • 17 days ago
Yes, canny is crap but depth is quite good. Thanks!
@SouthbayJay_com • a month ago
Such a great tutorial! Super easy to follow, and the workflow is so nicely done! The layout is perfect, and having a 2-in-1 deal is a great idea!! Thanks for the video and the workflow! 🙌🙌
@MonzonMedia • a month ago
Cheers bud!
@LuciiFlynn • 28 days ago
This is amazing 😍 Thank you 🙏🏻
@MonzonMedia • 28 days ago
Welcome! 😊
@Maylin-ze6qx • a month ago
So lovely ❤❤❤
@muhammadakbary688 • a month ago
Hello there, thank you so much for all the effort and kindness you poured into this video. For me, the depth results were as expected, but the canny results were unfortunately terrible.
@MonzonMedia • a month ago
This workflow is for Flux, so make sure you download the correct model. In the video I'm using the all-in-one version that includes the text encoders and VAE, which can be placed in your usual checkpoints folder. huggingface.co/lllyasviel/flux1_dev/blob/main/flux1-dev-fp8.safetensors
@dkamhaji • a month ago
Flux tools are so great! How do you control the size of your end image? Like if you want to use a different size than the ref image.
@MonzonMedia • a month ago
Absolutely! You just have to add an aspect-ratio node and connect it to the latent input of the KSampler. I can add it to the workflow I made and upload that version to Google Drive... not sure why I didn't put it in in the first place! hahaha! 😊 *Edit: just updated the workflow.*
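(For readers wiring this up by hand, here's a minimal sketch of the relevant fragment in ComfyUI's API-format JSON, written as a Python dict. The node ids, resolution, and sampler settings are assumptions for illustration, not the exact values from the video's workflow.)

```python
# Sketch of a ComfyUI API-format workflow fragment (not the video's exact
# graph; node ids and settings are placeholders). The "EmptyLatentImage"
# node sets the output resolution independently of the reference image.
workflow_fragment = {
    "5": {
        "class_type": "EmptyLatentImage",
        "inputs": {"width": 1216, "height": 832, "batch_size": 1},
    },
    "3": {
        "class_type": "KSampler",
        "inputs": {
            # model / positive / negative links omitted for brevity
            "latent_image": ["5", 0],  # output 0 of node "5" above
            "seed": 0,
            "steps": 20,
            "cfg": 1.0,  # Flux is usually sampled at low CFG
            "sampler_name": "euler",
            "scheduler": "simple",
            "denoise": 1.0,
        },
    },
}
```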
@captainpike3490 • a month ago
There is a GGUF version of the dev and canny models on HF. I haven't tested it yet.
@MonzonMedia • a month ago
Oh cool! Will give it a go!
@madballdesign • 11 days ago
This works for me. When I used the XLabs controlnet I couldn't load other LoRAs, and the results were strange.
@pawelthe1606 • a month ago
Great video as always, thumbs up. Can you use canny for anything? What is it for?
@MonzonMedia • a month ago
Of course! It detects edges and outlines, so if you want something like line art, or the exact shape of a person or object, you can use canny. However, I find the LoRA quality isn't the greatest, but using it in conjunction with depth can give you good results.
@op12studio • a month ago
You rock! You showed using an Any Switch in your workflow, but Any Switch just uses the first active connection and ignores the rest, so how do you use both depth and canny at the same time?
@MonzonMedia • a month ago
Toggle on both LoRAs and enable both workflows. Any Switch will use both: it detects the first open node but will still read the next node as long as it's enabled.
@Larimuss • a day ago
I'm still curious: what are the good use cases for canny? What can we really do with it? Sketch-to-image looks cool, but I can't get it to work.
@MonzonMedia • a day ago
Canny uses outlines and edges, so it's very good when you want the structure or outline of a person or object to be precise. Architecture is a good use case, and for people and characters I sometimes use it in combination with depth or pose to get an accurate result. That being said, the canny controlnets for Flux are still not great and need to mature, whereas SDXL canny controlnets work a lot better.
@klb1113 • a month ago
Is the full version basically a non-starter for you? I don't know ComfyUI well enough to be confident adding a LoRA to the full version of the Canny model, but I'm wondering if it would be good for posing a specific LoRA, for example. For that, I'd need to add a LoRA node to it somehow, and I'm too dumb to do it! By the way, I greatly appreciate you sharing your workflows, unlike some other YT content creators who often put them behind a Patreon paywall or elsewhere. You give a really nice explanation of things, too. Hopefully I'll get more confident with ComfyUI, as with Flux being so dominant now it's pretty hard to avoid using! Assuming you live in the US (I think you do?), I hope you have a happy Thanksgiving.
@MonzonMedia • a month ago
Sorry, not sure what you mean. Can you elaborate on the full version being a non-starter? I'm in Canada, so our Thanksgiving was last month, but thanks anyway! 😁
@klb1113 • a month ago
@MonzonMedia My apologies! I meant: is the fact that you can't use the 21GB unet file a non-starter for you, with your 8GB of VRAM? I asked because of your update about the canny LoRA controlnet not being that good. (And, selfishly, I have a 12GB card and wonder if it could handle it. Of course, trying to create my own workflow in ComfyUI is, well... a fool's errand for me!)
@MonzonMedia • 25 days ago
Ooooh I see, that's what you meant. Yeah, it's just too big for my system since I only have 8GB of VRAM and 32GB of system RAM; it would probably run really slowly even if it loaded. I am looking to get more system RAM soon, though. ComfyUI does pretty decent RAM swapping, so if you have a 12GB GPU and 32GB of system RAM or more, you should be able to run it. As long as the model's total size is less than your VRAM plus system RAM, with a little space left over for everything else, you're fine.
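(A rough way to sanity-check that rule of thumb, as a tiny Python sketch; the 4GB overhead figure is my assumption for the OS, text encoders, and activations, not a number from the video.)

```python
def model_fits(model_gb: float, vram_gb: float, sys_ram_gb: float,
               overhead_gb: float = 4.0) -> bool:
    """Rule-of-thumb check for ComfyUI's RAM offloading: the model plus
    some headroom must fit within VRAM + system RAM combined."""
    return model_gb + overhead_gb <= vram_gb + sys_ram_gb

print(model_fits(21, 8, 32))   # True, but expect heavy offloading on 8GB VRAM
print(model_fits(21, 12, 32))  # True: a 12GB card + 32GB RAM should manage it
```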
@marhensa • a month ago
What custom node is the resource status display you have up there? The VRAM, RAM, etc.
@MonzonMedia • a month ago
It's called Crystools. Very handy to have. You can install it through the ComfyUI Manager, but if you want to read more about it you can find it here: github.com/crystian/ComfyUI-Crystools
@noonesbiznass5389 • a month ago
Thanks for the video. Do you know any way to use the full canny dev model (not the LoRA) together with an img2img workflow? i.e., so it uses the canny to drive the image, as well as the original image to drive colors, textures, etc.? I can do it with the LoRA, but as you noted it's not as good as the full model.
@MonzonMedia • a month ago
You can use the "diffusion models loader" and put the controlnet models in ComfyUI/models/diffusion_models.
@Neuromindart • 28 days ago
Is it possible to combine controlnet and Redux?
@MonzonMedia • 28 days ago
Sure!
@baheth3elmy16 • a month ago
👍👍👍👍
@MonzonMedia • a month ago
🙌 Thank you for watching!
@Shaolinfool_animation • 4 days ago
Is it possible to use controlnet with a batch of images instead of one single image?
@MonzonMedia • 4 days ago
Do you mean with multiple controlnets, or just increasing the quantity of images generated?
@Shaolinfool_animation • 3 days ago
Just increasing the quantity of images.
@MonzonMedia • 3 days ago
@Shaolinfool_animation Ahhhh... ok, there should be a node called "Empty Latent Image". Change the batch size to whatever you want and off you go.
@MonzonMedia • 3 days ago
@Shaolinfool_animation Ahhhh, I just realized that in the video the empty latent node was hidden behind the sampler, but if you download the file I provided in the description you should see it. Just note there is also a standard generation workflow in there as well; you can toggle it off when you're not using it.
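(If you drive ComfyUI through its HTTP API rather than the UI, the same batch-size change looks roughly like this. A sketch only: the filename and the node id "5" are assumptions; export your own workflow with "Save (API Format)" and check which id your Empty Latent Image node actually has.)

```python
import json
import urllib.request

# Load an API-format workflow exported from ComfyUI, bump the batch size
# on the EmptyLatentImage node, and queue it on a local server.
with open("flux_controlnet_workflow_api.json") as f:  # hypothetical filename
    workflow = json.load(f)

workflow["5"]["inputs"]["batch_size"] = 4  # "5" assumed to be EmptyLatentImage

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",  # ComfyUI's default local address
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```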
@joghjnglus • 16 days ago
I'm a graphic designer just starting out with ComfyUI, and I'm following your tutorials. I have a problem: Flux creates very good images for my work, but the hands and fingers come out badly. How can I fix this, please? I tried Flux dev inpainting, but it didn't fix the fingers and hands either 😵💫
@Raiden.savitar • 27 days ago
I'm getting an "It seems that models and clips are mixed and interconnected between SDXL Base, SDXL Refiner, SD1.x, and SD2.x. Please verify." error, but I didn't change anything in the workflow.
@MonzonMedia • 27 days ago
Make sure you're using the correct Flux models, and note that you have to manually select all the CLIP and VAE files.
@giuseppedaizzole7025 • 21 days ago
Check the DualCLIPLoader: in the third and last field, select "flux". I think that might be it.
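(For reference, here's roughly what a correctly configured DualCLIPLoader looks like in API-format JSON, as a Python dict. The clip filenames are common choices for Flux but are assumptions; use whatever encoder files you actually downloaded.)

```python
# Sketch of a DualCLIPLoader node configured for Flux (filenames assumed)
dual_clip_loader = {
    "class_type": "DualCLIPLoader",
    "inputs": {
        "clip_name1": "t5xxl_fp8_e4m3fn.safetensors",  # T5 text encoder
        "clip_name2": "clip_l.safetensors",            # CLIP-L text encoder
        "type": "flux",  # the setting the comment above refers to
    },
}
```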
@giuseppedaizzole7025 • 17 days ago
Hi, I'm using this workflow quite a lot, thanks. I have a question: I edit the generated depth or canny map to make some adjustments, but when I click generate it deletes the changes and creates a new one... How do I keep the changes? Thanks.