Easy Inpainting for ANY model (SDXL, Flux, etc)

5,710 views

Andrea Baioni

1 day ago

Comments: 74
@risunobushi_ai 2 months ago
I'm back, sorry for the wait!
@Lexie-bq1kk 2 months ago
hi
@1lllllllll1 2 months ago
Oh my golly, FINALLY a real teacher who actually EXPLAINS what is happening behind the scenes. Liked, subbed, and loved. I'm soooo tired of the millions of bs tuts out there that tell you nothing. Thanks a ton!!!
@aysenkocakabak7703 1 month ago
Sincerely, I follow each of your videos; your artistic approach amazes me every time. We are so lucky to have you here. Your open-sourcing of your knowledge is amazing.
@risunobushi_ai 1 month ago
thank you for the kind words!
@baheth3elmy16 2 months ago
Welcome back! Congratulations on the new job.
@risunobushi_ai 2 months ago
thank you!
@JoelB71 2 months ago
We missed you! Thanks for another beautifully informative tutorial, and congratulations on your new position! They're lucky to have you :)
@risunobushi_ai 2 months ago
thank you! I missed doing videos too
@abaj006 2 months ago
Very good tutorial, thanks for explaining the specific nodes.
@DanDanTheAiMan 2 months ago
Congrats on the new job!
@risunobushi_ai 2 months ago
Thanks!
@zerobase9858 2 months ago
Hi! I really like your creative and meticulous workflow and your attitude towards licensing. Glad to see you back in action.
@risunobushi_ai 2 months ago
thank you!
@bregsma 2 months ago
Thank you as always for sharing your insight. Everyone is congratulating you on your new job, so congratulations as well!
@Lily-wr1nw 1 month ago
learned a lot! Thanks master.
@antichitati.si.trandafiri 2 months ago
Congrats on your new job! I have been using Photoshop for 20 years, so I am looking to learn Flux as well to expand my art techniques. Thank you for the tutorials!
@risunobushi_ai 2 months ago
thank you! While PS is great for ease of use, I think creating automated pipelines in Comfy is better over large volumes that always need the same logic applied
@Mranshumansinghr 2 months ago
Exactly what I was looking for. It's like you read my mind.
@runebinder 2 months ago
Really nice detailed overview and clearly explained, thanks :)
@JohanAlfort 2 months ago
Really nice workflow and explanation, thanks :)
@prodmas 2 months ago
Look for the Inpaint Crop and Stitch nodes. They do the same thing as your advanced workflow, but much more easily.
@Neotrixstdr 2 months ago
Great work!
@kallamamran 2 months ago
"Load & Resize Image" from KJNodes does loading and resizing/scaling (with multiple). It can replace your complete Input group 😊 Thanks for another great video
@Zampano2 2 months ago
Congratulations on the new job..! Hope they appreciate your knowledge... thanks for the workflow, looks like it's time to finally download that fat union-CN model... my SSD is crying...
@risunobushi_ai 2 months ago
Thank you! As suggested in another comment, you could use the Alimama inpainting ControlNet for Flux, but it works differently and it's not as "catch-all" as depth or other ControlNets in my testing.
@mauriziogastoni9779 1 month ago
Great stuff and a great explanation! I normally use the "prepare image for inpaint" node to crop and then the "overlay" node to stitch it back, but I noticed that it keeps the original image proportions for the bounding box, losing resolution. It doesn't look like that's the case here, so I will probably update my workflows with this =) Thanks!
@Mranshumansinghr 2 months ago
IC-Light v2 is out. Can't wait for your next video.
@defidigest9 2 months ago
I needed this
@serasmartagne 2 months ago
I use the Apply Advanced ControlNet node from ComfyUI-Advanced-ControlNet by Kosinkadink, as it has an optional mask to control which regions are influenced by the depth map conditioning. In your example of inpainting large flowers over small ones, I would provide the inverted inpainting mask as the input mask to the Apply Advanced ControlNet node. The effect is that the masked conditioning helps the inference understand the context around the target inpaint area, but ignores the existing content inside the area.
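The idea described above - letting the depth hint shape everything outside the inpaint area while ignoring it inside - can be sketched in a few lines of plain Python. This is a hypothetical standalone illustration of the masking math, not the actual node code:

```python
def gate_depth_hint(depth_map, inpaint_mask):
    """Zero out a depth hint inside the inpaint area.

    depth_map:    rows of floats in [0, 1] from a depth preprocessor
    inpaint_mask: rows of floats, 1.0 where new content should be generated

    Multiplying by the inverted mask (1.0 - m) keeps the depth conditioning
    only in the surrounding context, so the sampler is free inside the mask.
    """
    return [[d * (1.0 - m) for d, m in zip(drow, mrow)]
            for drow, mrow in zip(depth_map, inpaint_mask)]

# toy 2x2 example: the bottom row is masked for inpainting
depth = [[0.8, 0.6],
         [0.4, 0.2]]
mask = [[0.0, 0.0],
        [1.0, 1.0]]
print(gate_depth_hint(depth, mask))  # [[0.8, 0.6], [0.0, 0.0]]
```

In the real node graph this per-pixel weighting happens inside the Advanced ControlNet apply step; the sketch only shows why the inverted mask is the right input.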
@MaxRohowsky 1 month ago
dude, thanks for these videos! Really helped! Do you have any idea how I could change the view outside a window? I would like to keep the window and everything around it the same - just change the view... any idea?
@risunobushi_ai 1 month ago
if you can create a mask in something like Photoshop, you can import the mask separately. As long as it lines up with its image, you can inpaint over a separately loaded mask instead of drawing one in the Open in Mask Editor window. Create a mask only inside the window, and after loading the image and mask you'd need to adjust the ControlNets' strength to taste and inpaint only inside the window.
@MaxRohowsky 1 month ago
@@risunobushi_ai hey, thanks for the quick reply! The thing is that I'm programming a web app which needs to do all this automatically. I was hoping there would be a ready-to-use model on Replicate, but it looks like I'll need to create a custom model for this :D
@baheth3elmy16 1 month ago
Thanks again, I'm returning to your video. I have a question, please: what setting do I change in the lower groups (Flux and SDXL) to make the generated preview/save image identical in size to the one I loaded and masked in the Input group? Thank you!!!!!!
@ArnaudSteinmetz 2 months ago
Very informative as usual! I'm wondering, why not directly use inpainting ControlNets like the one from Alimama?
@risunobushi_ai 2 months ago
I debated showing them as well, but ultimately I decided against it because:
- they're not as straightforward to understand in terms of how they work (with depth it's much easier to understand from the preprocessed image)
- they're not always as good as a custom ControlNet setup (for example, I had mixed results using them with face LoRA / garment LoRA combos)
- they're not always available for all models, or they might not be released as quickly, so it wouldn't have been a "catch-all", easy solution
But yeah, they're a valid alternative depending on the use case
@ayakakamisato-ls8nu 2 months ago
great project
@ralfschwarzfischer3525 1 month ago
Hey, nice video. Have you checked whether the aspect ratio of the extracted area influences quality? And have you tested the workflow with 3.5?
@ValorantNexus 2 months ago
thanks for the great info
@DarioToledo 2 months ago
I have seen some inpaint ControlNets, like the Alimama inpaint alpha (now beta) for Flux. Any idea how they should be implemented? Are they an alternative to the InpaintModelConditioning node?
@risunobushi_ai 2 months ago
hi! Alimama's inpainting ControlNet, AFAIK, doesn't need a preprocessor, and in my testing the higher the strength, the more it forces the inpainting over the original image. But then again, I'm not an expert on inpaint ControlNets, mainly because I find them too specific to what they were trained for, and I'd rather use fewer tools that are more suited to general use
@d4veejones53 2 months ago
Another great workflow by the looks of it! Although I get a KSampler error - 'mat1 and mat2 shapes cannot be multiplied (1x768 and 2816x1280)'? Is this due to the original picture size, or is something wrong with the SimpleMath nodes?
@risunobushi_ai 2 months ago
this is the error you get when you're trying to use a ControlNet with a different model than it was designed for - an SDXL ControlNet with a FLUX model, for example
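The message itself is just a matrix-multiplication shape check failing deep inside the sampler: an embedding sized for one model family hits a weight matrix sized for another, and the inner dimensions don't line up. A hypothetical pure-Python illustration of the rule (the specific numbers are just the ones from the error above):

```python
def matmul_shapes(a_shape, b_shape):
    """Return the output shape of A @ B, or raise the way torch does.

    A (ar x ac) @ B (br x bc) only works when ac == br.
    """
    (ar, ac), (br, bc) = a_shape, b_shape
    if ac != br:
        raise ValueError(
            f"mat1 and mat2 shapes cannot be multiplied ({ar}x{ac} and {br}x{bc})")
    return (ar, bc)

# embedding from one model family fed into weights sized for another: fails
try:
    matmul_shapes((1, 768), (2816, 1280))
except ValueError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (1x768 and 2816x1280)

# matching inner dimensions: fine
print(matmul_shapes((1, 2816), (2816, 1280)))  # (1, 1280)
```

So the fix is never about image size or the math nodes - it's about pairing the ControlNet with the checkpoint family it was trained for.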
@NinoLouLeChenadec 2 months ago
Hi Andrea, Comfy is really great for flexibility between LoRAs and models, but for inpainting I prefer to use Invoke AI (local UI). Have you tried it? Thanks for your work 🙌
@risunobushi_ai 2 months ago
I don't use Invoke in my stack, mostly because the clients I work for like to implement Comfy rather than anything else, or straight up use the API versions of the JSON files
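For context on "API versions of the JSON files": ComfyUI can run a workflow headlessly by POSTing the workflow saved in API format to its /prompt endpoint. A minimal sketch using only the standard library - the default host/port and the node IDs in the fragment are assumptions for illustration, not part of the video's workflow:

```python
import json
import urllib.request

def build_prompt_request(workflow: dict, host: str = "127.0.0.1", port: int = 8188):
    """Wrap an API-format workflow dict into a request for ComfyUI's /prompt endpoint."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# hypothetical workflow fragment: a single text-encode node with id "6"
workflow = {"6": {"class_type": "CLIPTextEncode",
                  "inputs": {"text": "red flowers", "clip": ["4", 1]}}}
req = build_prompt_request(workflow)
# urllib.request.urlopen(req) would queue the job on a running ComfyUI server
print(req.full_url)  # http://127.0.0.1:8188/prompt
```

This is what makes Comfy pipelines easy to drop into client automation: the graph you build in the UI is the same JSON you submit programmatically.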
@salomahal7287 2 months ago
Hey, I like the idea, but I've got a problem with it: only 1 out of 3 seeds gives something I asked for, in both SDXL and Flux. I don't know why - maybe the models? Flux gives me really random results. Also, I was trying to implement the new Detail Daemon node with a custom advanced sampler, which also didn't really inpaint as wanted. Is there a way to implement that sampler as an extra node in the standard KSampler used in your workflow?
@risunobushi_ai 2 months ago
Did you test it before adding Detail Daemon, or did you use them together from the start? I haven't tested Detail Daemon yet, and AFAIK it works by using model shifts, and that's a much more invasive approach than usual - so I wouldn't trust it to work properly with this kind of pipeline straight out of the box
@salomahal7287 2 months ago
@@risunobushi_ai I ran the workflow as-is, with Flux Dev and an inpaint model on the SDXL side. I wanted to inpaint red dots on a person's cap - I don't know if that's a difficult task, but both sides do whatever they want with the instruction: black logos or nothing at all. It's kinda weird. OmniGen was somewhat able to achieve it, but after some tries it seems to me that in your workflow the sampler just doesn't care about the text. Maybe it's just me, though... so it's not a Detail Daemon problem, it seems
@artemnikolski3197 1 month ago
KSampler freezes and reports an error... any known solution for that?
@casperd2100 2 months ago
Hi, sorry but I'm super new at this. I'm getting missing node errors:
---
Missing Node Types
When loading the graph, the following node types were not found:
UnetLoaderGGUF
GetImageSize+
DepthAnythingV2Preprocessor
SimpleMath+
ImageResize+
GrowMaskWithBlur
---
Do I have to install some extensions to get these nodes to work?
@risunobushi_ai 2 months ago
hi! You need to go into the Manager (if you don't have it installed, get it from here: github.com/ltdrdata/ComfyUI-Manager ) and install the missing custom nodes. Once that's done, you should install any model that you're missing - for example, in the GGUF node you'll be missing a quantized version of Flux Dev, found here: huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q4_0.gguf Usually, if you load a workflow, look up the missing models on Google, and check their docs, you should be able to find them and place them where they belong
@casperd2100 2 months ago
I found the extension needed for each node type:
UnetLoaderGGUF - ComfyUI-GGUF
GetImageSize+, ImageResize+ - Image Resize for ComfyUI
DepthAnythingV2Preprocessor - ComfyUI's ControlNet Auxiliary Preprocessors
SimpleMath+ - SimpleMath
GrowMaskWithBlur - ComfyUI-KJNodes
@4etam 2 months ago
hi, please tell me how I can make the VAE visible. I downloaded the .safetensors file and placed it in the models/vae folder, but the node still doesn't see it
@4etam 2 months ago
and can I invert the mask and replace the background in a full-length portrait shot?
@risunobushi_ai 2 months ago
hi! Did you refresh Comfy after placing the models? You can invert masks by using an Invert Mask node, or by using the Grow Mask With Blur "inverted mask" output
@panonesia 2 months ago
can we add a LoRA to speed up the process? a turbo LoRA to make it 8 steps? Where to place it - before the Differential Diffusion node or after?
@risunobushi_ai 2 months ago
yes you can, and usually you can apply it wherever, before or after Differential Diffusion. The only times I've had issues with the placement of Differential Diffusion were with specific versions of Comfy while using IPAdapter Advanced, in which case Differential should go either before or after the IPAdapter - I don't remember which
@oonefilms 2 months ago
I'm a bit lost about inpainting itself - do you just paint any area of the image with a solid color like black and then open it in Comfy?
@risunobushi_ai 2 months ago
hi! In order to inpaint, you can either:
- input an image and open it with the Mask Editor (right-click on the image), then draw your mask, like in this video, or
- input an image and input a custom mask (in this case you'd need to rewire the mask pipeline to account for that)
@erikdias9604 2 months ago
Question: first, thank you for your video and your explanations. In Photoshop, if I have an extra arm or something else: I select it and click generate without doing anything else. In Flux/ComfyUI, I am confused. I am a beginner, and I would have liked to be able to select the part to delete like in PS, but I am not sure I understood from your video whether that is possible (I have problems understanding, so it does not come from you ^^; ). Thanks again for your work, it helps me a lot.
@risunobushi_ai 2 months ago
Hi! In your specific case, you'd want to use a very low ControlNet strength, because you don't want to follow the underlying picture too much - otherwise, if you did the opposite, you would always get something following the depth of the extra arm. It's possible, it just takes a bit of time adjusting to it!
@titanoplastik 1 month ago
Hello, I'm encountering the following error right at the beginning:
Prompt outputs failed validation
SimpleMath+: - Return type mismatch between linked nodes: a, INT != INT,FLOAT
SimpleMath+: - Return type mismatch between linked nodes: a, INT != INT,FLOAT
Can you give me a tip on how to fix this?
@titanoplastik 1 month ago
I solved it by simply using the Utils Math Expression node instead.
@antronero5970 2 months ago
Yeah!
@FEILIU-m6c 2 months ago
👍👍👍
@generalawareness101 1 month ago
Do text. I don't mean on a sign - I mean: Image 1, Text "Hello", and out comes Image 1 with the "Hello" text that FLUX created overlaid.
@AndroKarpo 6 days ago
Don't mislead people - your video has nothing to do with classic inpainting, you just have a workflow with a ControlNet
@Art13eck 2 months ago
but it's not a full-blown inpaint, it's just replacing one thing with another, it's a very simple thing....
@Lily-wr1nw 1 month ago
What do you mean? Can you please explain? I am a noob, sorry
@oonefilms 2 months ago
Sorry, one more noobie question: I've downloaded Depth Anything V2, but it keeps giving me this error even though I have a file in that folder: [Errno 2] No such file or directory: 'D:\\ComfyUI_windows_portable_nvidia\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\depth-anything\\Depth-Anything-V2-Large\\.cache\\huggingface\\download\\depth_anything_v2_vitl.pth.a7ea19fa0ed99244e67b624c72b8580b7e9553043245905be58796a608eb9345.incomplete'
@risunobushi_ai 2 months ago
it looks like the node can't properly download the Depth Anything V2 model into its folder. Try selecting a different Depth Anything model in the dropdown menu, like the S version, or change the preprocessor to another depth estimation model (like MiDaS, Marigold, Zoe, etc.)