Oh my golly, FINALLY a real teacher who actually EXPLAINS what is happening behind the scenes. Liked, subbed, and loved. I’m soooo tired of the millions of bs tuts out there that tell you nothing. Thanks a ton!!!
@aysenkocakabak7703 • 1 month ago
I sincerely follow every one of your videos; your artistic approach amazes me every time. We are so lucky to have you here. The way you open-source your knowledge is amazing.
@risunobushi_ai • 1 month ago
thank you for the kind words!
@baheth3elmy16 • 2 months ago
Welcome back! Congratulations on the new job.
@risunobushi_ai • 2 months ago
thank you!
@JoelB71 • 2 months ago
We missed you! Thanks for another beautifully informative tutorial, and congratulations on your new position! They're lucky to have you :)
@risunobushi_ai • 2 months ago
thank you! I missed doing videos too
@abaj006 • 2 months ago
Very good tutorial, thanks for explaining the specific nodes.
@DanDanTheAiMan • 2 months ago
Congrats on the new job!
@risunobushi_ai • 2 months ago
Thanks!
@zerobase9858 • 2 months ago
Hi! I really like your creative and meticulous workflow and your attitude towards licensing. Glad to see you back in action.
@risunobushi_ai • 2 months ago
thank you!
@bregsma • 2 months ago
Thank you, as always, for sharing your insight. Everyone is congratulating you on your new job, so congratulations from me as well!
@Lily-wr1nw • 1 month ago
Learned a lot! Thanks, master.
@antichitati.si.trandafiri • 2 months ago
Congrats on your new job! I have been using Photoshop for 20 years, so I am looking to learn Flux as well to expand my art techniques. Thank you for the tutorials!
@risunobushi_ai • 2 months ago
thank you! while PS is great for ease of use, I think creating automated pipelines in comfy is better for large volumes that always need the same logic applied
@Mranshumansinghr • 2 months ago
Exactly what I was looking for. It's like you read my mind.
@runebinder • 2 months ago
Really nice detailed overview and clearly explained, thanks :)
@JohanAlfort • 2 months ago
Really nice workflow and explanation, thanks :)
@prodmas • 2 months ago
Look for the Inpaint crop and stitch nodes. They do the same thing as your advanced workflow, but much easier.
@Neotrixstdr • 2 months ago
Great work!
@kallamamran • 2 months ago
"Load & Resize Image" from KJNodes handles loading and resizing/scaling (with a multiple-of option). It can replace your complete Input group 😊 Thanks for another great video
@Zampano2 • 2 months ago
Congratulations on the new job! Hope they appreciate your knowledge... thanks for the workflow. Looks like it's time to finally download that fat union ControlNet model... my SSD is crying...
@risunobushi_ai • 2 months ago
Thank you! As suggested in another comment, you could use the Alimama inpainting ControlNet for Flux, but it works differently and it's not as "catch-all" as depth or other ControlNets in my testing.
@mauriziogastoni9779 • 1 month ago
Great stuff and a great explanation! I normally use "Prepare Image for Inpaint" to crop and then the "Overlay" node to stitch it back, but I noticed that it keeps the original image's proportions for the bounding box, losing resolution. That doesn't seem to be the case here, so I will probably update my workflows with this =) Thanks!
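For anyone curious what "cropping to the mask's own bounding box" means in practice, here is a minimal NumPy sketch (a hypothetical helper, not any node's actual code): the crop keeps the masked region's own proportions instead of forcing the full image's aspect ratio onto it.

```python
import numpy as np

def mask_bbox(mask: np.ndarray, padding: int = 0):
    """Tight bounding box of the nonzero mask region, optionally padded.

    Returns (y0, y1, x0, x1) suitable for image[y0:y1, x0:x1],
    clamped to the mask's dimensions.
    """
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min() - padding, ys.max() + 1 + padding
    x0, x1 = xs.min() - padding, xs.max() + 1 + padding
    h, w = mask.shape
    return max(y0, 0), min(y1, h), max(x0, 0), min(x1, w)

mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 3:7] = 1  # region to inpaint
y0, y1, x0, x1 = mask_bbox(mask)
# crop = image[y0:y1, x0:x1] keeps the region's own 3x4 proportions
```

Cropping this way, inpainting the crop at full resolution, and pasting it back is the essence of crop-and-stitch approaches.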
@Mranshumansinghr • 2 months ago
IC-Light v2 is out. Can't wait for your next video.
@defidigest9 • 2 months ago
I needed this
@serasmartagne • 2 months ago
I use the Apply Advanced Controlnet node in ComfyUI-Advanced-ControlNet by Kosinkadink, as that has an optional mask to control which regions are influenced by the depth map conditioning. In your example of inpainting large flowers over small ones, I would provide the inverted inpainting mask as an input mask to the Apply Advanced Controlnet node. The effect is that the masked conditioning helps the inference understand the context around the target inpaint area, but ignores the existing content inside the area.
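Conceptually, what that optional mask does can be sketched in a few lines of NumPy (an illustration of the idea, not the node's actual implementation): the ControlNet's per-pixel influence is scaled by the inverted inpaint mask, so the depth conditioning guides the surrounding context while ignoring the region being repainted.

```python
import numpy as np

def masked_controlnet_strength(strength: float, inpaint_mask: np.ndarray) -> np.ndarray:
    """Conceptual sketch: per-pixel ControlNet strength.

    inpaint_mask is 1.0 inside the area to repaint, 0.0 outside.
    Using its inverse means the conditioning stays active in the
    surrounding context but drops to zero over the inpainted area.
    """
    return strength * (1.0 - inpaint_mask)

mask = np.zeros((4, 4), dtype=np.float32)
mask[1:3, 1:3] = 1.0  # area to inpaint (the flowers)
weights = masked_controlnet_strength(0.8, mask)
# weights are 0.8 in the surrounding context, 0.0 inside the inpainted area
```

The real node applies the mask to conditioning in latent space, but the intuition is the same.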
@MaxRohowsky • 1 month ago
dude, thanks for these videos! Really helped! Do you have any idea how I could change the view outside a window? I would like to keep the window and everything around it the same - just change the view... any idea?
@risunobushi_ai • 1 month ago
if you can create a mask in something like Photoshop, you can import the mask separately. As long as it lines up with its image, you can inpaint over a separately loaded mask instead of drawing one in the Open in Mask Editor window. Create a mask only inside the window; after loading the image and mask, adjust the ControlNets' strength to taste and inpaint only inside the window.
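The one thing that tends to trip people up with a separately loaded mask is alignment. A quick sanity check, sketched with Pillow/NumPy (hypothetical helper, not part of the workflow): load the mask as grayscale, verify it matches the image dimensions, and normalize it to 0..1.

```python
import numpy as np
from PIL import Image

def load_aligned_mask(image_path: str, mask_path: str) -> np.ndarray:
    """Load a grayscale mask and verify it lines up with the image."""
    image = Image.open(image_path)
    mask = Image.open(mask_path).convert("L")
    if mask.size != image.size:
        raise ValueError(f"mask {mask.size} does not line up with image {image.size}")
    # normalize to 0..1; white (255) = area to inpaint
    return np.asarray(mask, dtype=np.float32) / 255.0
```

A mismatched size is the usual cause of masks landing in the wrong spot after a resize step.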
@MaxRohowsky • 1 month ago
@@risunobushi_ai hey, thanks for the quick reply! The thing is, I'm programming a web app which needs to do all this automatically. I was hoping there would be a ready-to-use model on Replicate, but it looks like I'll need to create a custom model for this :D
@baheth3elmy16 • 1 month ago
Thanks again; I'm returning to your video. A question, please: what setting do I change in the lower groups (Flux and SDXL) so that the generated preview/save image is identical in size to the one I loaded and masked in the Input group? Thank you!
@ArnaudSteinmetz • 2 months ago
Very informative as usual! I'm wondering, why not directly use inpainting ControlNets like the one from Alimama?
@risunobushi_ai • 2 months ago
I debated showing them as well, but ultimately I decided against it because:
- they're not as straightforward to understand in terms of how they work (with depth it's much easier to understand from the preprocessed image)
- they're not always as good as a custom ControlNet setup (for example, I had mixed results using them with face LoRA / garment LoRA combos)
- they're not always available for all models, or they might not be released as quickly, so it wouldn't have been a "catch-all", easy solution
But yeah, they're a valid alternative depending on the use case
@ayakakamisato-ls8nu • 2 months ago
great project
@ralfschwarzfischer3525 • 1 month ago
Hey, nice video. Have you checked whether the aspect ratio of the extracted area influences quality? And have you tested the workflow with 3.5?
@ValorantNexus • 2 months ago
thanks for the great info
@DarioToledo • 2 months ago
I have seen some inpaint ControlNets, like the Alimama inpaint alpha (now beta) for Flux. Any idea how they should be implemented? Are they an alternative to the InpaintModelConditioning node?
@risunobushi_ai • 2 months ago
hi! Alimama's inpainting ControlNet, AFAIK, doesn't need a preprocessor, and in my testing the higher the strength, the more it forces the inpainting over the original image. But then again, I'm not an expert on inpaint ControlNets, mainly because I find them too specific to what they were trained for, and I'd rather use fewer tools that are more suited to general use
@d4veejones53 • 2 months ago
Another great workflow by the looks of it! Although I get a KSampler error: 'mat1 and mat2 shapes cannot be multiplied (1x768 and 2816x1280)'. Is this due to the original picture size, or something being wrong with the Math 1 and 2 nodes?
@risunobushi_ai • 2 months ago
this is the error you get when you're trying to use a ControlNet with a different model than it was designed for - an SDXL ControlNet with a FLUX model, for example
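For what it's worth, the error itself is just PyTorch refusing to multiply matrices whose inner dimensions don't match. A standalone illustration (plain PyTorch, not the actual ComfyUI internals) reproducing the same failure:

```python
import torch

# A (1, 768) conditioning tensor from one model family hitting weights
# that expect a 2816-dim input (another family) fails exactly like this.
cond = torch.randn(1, 768)
weight = torch.randn(2816, 1280)

try:
    cond @ weight  # inner dims 768 vs 2816 do not match
except RuntimeError as err:
    print(err)
```

Swapping in a ControlNet trained for the checkpoint you are actually sampling with makes the dimensions line up and the error disappear.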
@NinoLouLeChenadec • 2 months ago
Hi Andrea, Comfy is really great for flexibility between LoRAs and models, but for inpainting I prefer to use InvokeAI (local UI). Have you tried it? Thanks for your work 🙌
@risunobushi_ai • 2 months ago
I don’t use Invoke in my stack, mostly because the clients I work for like to implement comfy rather than anything else, or straight up use the API versions of the json files
@salomahal7287 • 2 months ago
Hey, I like the idea, but I have a problem with it: only 1 out of 3 seeds gives me what I asked for, in both SDXL and Flux. I don't know how that happens - maybe the models? Flux gives me really random results. I was also trying to implement the new Detail Daemon node with a custom advanced sampler, which also didn't inpaint as wanted. Is there a way to implement that sampler as an extra node in the standard KSampler used in your workflow?
@risunobushi_ai • 2 months ago
Did you test it before adding Detail Daemon, or did you use it alongside from the start? I haven't tested Detail Daemon yet, and AFAIK it works by using model shifts, which is a much more invasive approach than usual - so I wouldn't trust it to work properly with this kind of pipeline straight out of the box
@salomahal7287 • 2 months ago
@@risunobushi_ai I ran the workflow as-is, with Flux Dev and an inpaint model on the SDXL side. I wanted to inpaint red dots on a person's cap - I don't know if that's a difficult task, but both sides do whatever they want with the instruction: black logos or nothing at all. It's kind of weird. OmniGen was somewhat able to achieve it, but after some tries it seems to me that in your workflow the sampler just doesn't care about the text. Maybe it's just me, though... so it doesn't seem to be a Detail Daemon problem
@artemnikolski3197 • 1 month ago
KSampler freezes and reports an error... any known solution for that?
@casperd2100 • 2 months ago
Hi, sorry, I'm super new at this. When loading the graph I get missing node errors: "Missing Node Types - the following node types were not found":
- UnetLoaderGGUF
- GetImageSize+
- DepthAnythingV2Preprocessor
- SimpleMath+
- ImageResize+
- GrowMaskWithBlur
Do I have to install some extensions to get these nodes to work?
@risunobushi_ai • 2 months ago
hi! You need to go into the Manager (if you don't have it installed, get it from here: github.com/ltdrdata/ComfyUI-Manager) and install the missing custom nodes. Once that's done, install any model you're missing - for example, the GGUF node will be missing a quantized version of Flux Dev, found here: huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q4_0.gguf. Usually, if you load a workflow, look up the missing models on Google, and check their docs, you should be able to find them and place them where they belong.
@casperd2100 • 2 months ago
I found the extension needed for each node type:
- UnetLoaderGGUF: ComfyUI-GGUF
- GetImageSize+, ImageResize+: Image Resize for ComfyUI
- DepthAnythingV2Preprocessor: ComfyUI's ControlNet Auxiliary Preprocessors
- SimpleMath+: SimpleMath
- GrowMaskWithBlur: ComfyUI-KJNodes
@4etam • 2 months ago
hi, please tell me how I can make the VAE visible. I downloaded the .safetensors file and placed it in the models/vae folder, but the node still doesn't see it
@4etam • 2 months ago
and can I invert the mask and replace the background in a full-length portrait shot?
@risunobushi_ai • 2 months ago
hi! Did you refresh Comfy after placing the models? You can invert masks by using an Invert Mask node, or by using the Grow Mask With Blur node's "inverted mask" output
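Under the hood, inverting a mask is simple arithmetic on the mask values. A minimal sketch of what an invert step does, assuming masks normalized to the 0..1 range:

```python
import numpy as np

def invert_mask(mask: np.ndarray) -> np.ndarray:
    """Flip a 0..1 mask: the masked area becomes background and vice versa."""
    return 1.0 - mask

subject_mask = np.array([[0.0, 1.0], [0.25, 0.75]], dtype=np.float32)
background_mask = invert_mask(subject_mask)
# background_mask == [[1.0, 0.0], [0.75, 0.25]]
```

Inverting a subject mask this way is exactly how you turn "inpaint the subject" into "replace the background".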
@panonesia • 2 months ago
Can we add a LoRA to speed up the process - a turbo LoRA to make it 8 steps? Where should it go: before or after the Differential Diffusion node?
@risunobushi_ai • 2 months ago
yes you can, and usually you can apply it wherever, before or after Differential Diffusion. The only time I've had issues with the placement of Differential Diffusion was with specific versions of Comfy while using IPAdapter Advanced, in which case Differential Diffusion should go either before or after the IPAdapter, I don't remember which
@oonefilms • 2 months ago
I'm a bit lost about inpainting itself - do you just paint an area of the image with a solid color like black and then open it in Comfy?
@risunobushi_ai • 2 months ago
hi! In order to inpaint, you can either:
- input an image and open it with the Mask Editor (right click on the image), then draw your mask, like in this video, or
- input an image and input a custom mask (in this case you'd need to rewire the mask pipeline to account for that)
@erikdias9604 • 2 months ago
Question: first, thank you for your video and your explanations. In Photoshop, if I have an extra arm or something else, I select it and click Generate without doing anything else. In Flux/ComfyUI, I am confused. I am a beginner, and I would have liked to select the part to delete like in PS, but I am not sure I understood from your video whether that is possible (I have trouble understanding, so it's not your fault ^^; ). Thanks again for your work; it helps me a lot.
@risunobushi_ai • 2 months ago
Hi! In your specific case, you’d want to use a very low ControlNet strength, because you don’t want to follow the underlying picture too much - otherwise, if you did the opposite, you would always get something following the depth of the extra arm. It’s possible, it just takes a bit of time adjusting to it!
@titanoplastik • 1 month ago
Hello, I'm encountering the following error right at the beginning:
"Prompt outputs failed validation
SimpleMath+: - Return type mismatch between linked nodes: a, INT != INT,FLOAT
SimpleMath+: - Return type mismatch between linked nodes: a, INT != INT,FLOAT"
Can you give me a tip on how to fix this?
@titanoplastik • 1 month ago
I solved it by simply using the Utils Math Expression node instead.
@antronero5970 • 2 months ago
Yeah!
@FEILIU-m6c • 2 months ago
👍👍👍
@generalawareness101 • 1 month ago
Do text. I don't mean on a sign; I mean: Image 1 plus the text "Hello", and out comes Image 1 with the "Hello" text that FLUX created overlaid.
@AndroKarpo • 6 days ago
Don't mislead people - your video has nothing to do with classic inpainting; you just have a workflow with a ControlNet
@Art13eck • 2 months ago
but it's not a full-blown inpaint, it's just replacing one thing with another; it's a very simple thing...
@Lily-wr1nw • 1 month ago
What do you mean? Can you please explain? I am a noob, sorry
@oonefilms • 2 months ago
Sorry, one more noob question: I've downloaded Depth Anything V2, but it keeps giving me this error even though I have a file in that folder: [Errno 2] No such file or directory: 'D:\\ComfyUI_windows_portable_nvidia\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\depth-anything\\Depth-Anything-V2-Large\\.cache\\huggingface\\download\\depth_anything_v2_vitl.pth.a7ea19fa0ed99244e67b624c72b8580b7e9553043245905be58796a608eb9345.incomplete'
@risunobushi_ai • 2 months ago
it looks like the node can't properly download the Depth Anything V2 model into its folder. Try selecting a different Depth Anything model in the dropdown menu, like the S version, or change the preprocessor to another depth estimator model (like MiDaS, Marigold, Zoe, etc.)