Super... so how will this work selectively (as an interactive tool) in the ComfyUI dev mode/API section? Development towards this would be perfect.
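For context, here is a minimal sketch of what driving a workflow like this through ComfyUI's HTTP API could look like. The POST /prompt endpoint and the API-format JSON are standard ComfyUI; the InpaintCrop/InpaintStitch class names and their inputs are assumptions here, so export your own workflow with "Save (API Format)" in dev mode to get the exact graph.

```python
# Minimal sketch of queueing a crop-and-stitch inpaint workflow through
# ComfyUI's HTTP API (the standard POST /prompt endpoint). The
# "InpaintCrop" / "InpaintStitch" class names and their inputs are
# assumptions; enable dev mode and use "Save (API Format)" to export
# the exact graph for your own workflow.
import json
import urllib.request

prompt = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "source.png"}},
    "2": {"class_type": "InpaintCrop",    # assumed class name
          "inputs": {"image": ["1", 0], "mask": ["1", 1]}},
    # ...checkpoint loader, KSampler, VAE decode, etc. operating on the crop...
    "9": {"class_type": "InpaintStitch",  # assumed class name
          "inputs": {"stitch": ["2", 0], "inpainted_image": ["8", 0]}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": prompt}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # JSON with the queued prompt_id
```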
@illbrat · 3 days ago
How do you get that progressive preview image in the KSampler node?
@elezetamusic · 3 days ago
@illbrat Oh, enable previews somewhere in the config; the ComfyUI Manager config, I believe!
@illbrat · 3 days ago
@elezetamusic Thank you!
@offmybach · 9 days ago
Can you do this with an existing image to add to the main image, not just prompting?
@elezetamusic · 9 days ago
@offmybach I don't get the question, can you explain further? Thanks!
@offmybach · 9 days ago
@elezetamusic Say you already have a character image and, for example, a living room that you also created. You wouldn't want to be prompting for the living room that you already perfected.
@offmybach · 7 days ago
@elezetamusic Does that make sense?
@zGenMedia · 11 days ago
I assume this will work for Flux?
@elezetamusic · 11 days ago
@zGenMedia There's an example for Flux in the repository :)
@zGenMedia · 9 days ago
@elezetamusic Thanks, boss
@yvan_sellest · 26 days ago
These nodes are absolutely insane. However, I struggle a lot inpainting in 'plain color' areas. For example, I have a perfume bottle on a beige background. It usually fails to generate something in the 'empty area'. Any idea why?
@elezetamusic · 25 days ago
@yvan_sellest Are you setting denoise to 1? Are you masking a large enough area? For this use case, are you encoding with VAE Encode (for Inpainting)?
@Scrapemist · 1 month ago
Does it work with ControlNet? I get weird outputs with black noise. Have you tried?
@elezetamusic · 1 month ago
@Scrapemist It should. Just plug things in correctly; the sampler and ControlNet don't even know these nodes are part of the workflow!
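To see why that's the case: these nodes only crop before sampling and stitch after, so whatever runs in between (KSampler, ControlNet, anything) just sees a regular image and mask. A rough conceptual sketch in Python/numpy, not the actual node code:

```python
# Conceptual sketch (not the actual node implementation) of why the
# middle of the workflow is model- and ControlNet-agnostic: everything
# between crop and stitch sees an ordinary image and mask.
import numpy as np

def crop(image: np.ndarray, mask: np.ndarray, pad: int = 32):
    """Crop image and mask to the mask's bounding box plus some context."""
    ys, xs = np.nonzero(mask)
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, image.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, image.shape[1])
    return image[y0:y1, x0:x1], mask[y0:y1, x0:x1], (y0, y1, x0, x1)

def stitch(original: np.ndarray, inpainted: np.ndarray, box) -> np.ndarray:
    """Paste the inpainted crop back into the original image."""
    y0, y1, x0, x1 = box
    out = original.copy()
    out[y0:y1, x0:x1] = inpainted
    return out

# Any sampler (with or without ControlNet) slots in between, unaware of
# the crop/stitch around it:
#   cropped_img, cropped_mask, box = crop(img, mask)
#   result = run_sampler(cropped_img, cropped_mask)  # hypothetical call
#   final = stitch(img, result, box)
```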
@chaios · 1 month ago
Wow, this is what I've been looking for. I tried Impact and Masquerade, but they were a bit too fiddly. This looks more to the point.
@fr34ky5 · 1 month ago
Thanks a lot for this! This was the only thing still keeping me from doing my complete workflow in ComfyUI 🥰
@elezetamusic · 1 month ago
@fr34ky5 Thank you!!
@---Nikita-- · 1 month ago
Like a FaceDetailer, but for a hand-painted mask + outpaint. Thanks. +New sub
@elezetamusic · 1 month ago
@---Nikita-- Thank you!
@spinear · 1 month ago
Thank you for making this. I hate that ComfyUI lacks basic things.
@chrislongley · 1 month ago
I have been using your nodes for a couple of weeks now and I think they are fantastic.
@olekXDDDD · 1 month ago
It does not work with the Eff. KSampler (Advanced); I get an error. In case someone is having the same problem: "Input and output sizes should be greater than 0, but got input (H: 1, W: 1) output (H: 0, W: 0)"
@elezetamusic · 1 month ago
@olekXDDDD It should work, though, as you'd basically still be sampling on an image and mask. You must be plugging something in incorrectly.
@李云-f1b · 1 month ago
The work is great, but it hasn't been updated for a long time.
@AInfectados · 1 month ago
First, thanks for this wonderful workflow; I was searching for this for a long time so I could stop using AUTOMATIC1111 just for the inpainting part. Can you adapt this for *FLUX* and also add the LoRA Power Loader node, please? PS: We now have an inpaint model for FLUX (ControlNet): *FLUX.1 dev Controlnet Inpainting Alpha*
@elezetamusic · 1 month ago
@AInfectados These nodes should be completely independent of the model you're using! Just plug in your samplers etc. on the cropped image.
@imonutiy · 2 months ago
Looks like blend_pixels just adds more pixels to the context, so it is the same as the first option.
@elezetamusic · 1 month ago
@imonutiy Not really! Blend adds more pixels to have enough context to blend, but it also does a gradual blend of the newly generated area into the original image, so that the transition is less abrupt. You see more context because it is required for blending.
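In other words, roughly this (a minimal sketch of the principle, assuming a simple Gaussian-feathered alpha composite; the node's actual blend implementation may differ):

```python
# Sketch of the principle: feather the mask edge, then alpha-composite
# the generated pixels over the original. Assumes a Gaussian feather;
# the node's actual blend implementation may differ.
import numpy as np
from scipy.ndimage import gaussian_filter

def feathered_blend(original: np.ndarray,   # HxWx3, float in [0, 1]
                    generated: np.ndarray,  # HxWx3, float in [0, 1]
                    mask: np.ndarray,       # HxW, 1 inside the inpaint area
                    blend_pixels: float = 16.0) -> np.ndarray:
    # Soften the hard mask edge; blend_pixels controls the falloff width.
    alpha = gaussian_filter(mask.astype(np.float32), sigma=blend_pixels / 3.0)
    alpha = np.clip(alpha, 0.0, 1.0)[..., None]
    # Gradual transition instead of a hard seam at the mask boundary.
    return generated * alpha + original * (1.0 - alpha)
```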
@imonutiy · 1 month ago
@elezetamusic Thank you for the answer. By the way, what are the other options for inpainting/sketching nodes in ComfyUI? It feels really wonky compared to the web UI.
@jonjoni518 · 2 months ago
Fantastic!!!
@NywlMac · 2 months ago
THANK YOU!! I was looking for a tutorial so I could understand how the nodes in an inpaint workflow work, and all the ones I found were those HUUGE workflows. This one is simple and works, and now that I understand how it works, I can improve it as I please. Great tutorial!
@matthallett4126 · 2 months ago
This is a great solution, I love it. Thank you so much!
@Vigilence · 2 months ago
I used this node successfully with a regular mask. However, if I use the invert mask option with the previous mask and the same base image, the inpainted image is much lower in quality. The same settings are used as in the successful inpaint, so I'm not sure what the issue is. Can you confirm by testing with the latest ComfyUI that the invert mask option is working properly?
@elezetamusic · 2 months ago
@Vigilence Hi, this is because the inpainting area is so much larger if you do the whole image, so the node downscales it to fit the target resolution, and therefore you lose resolution. A good solution is to apply a detailer afterwards. I'm figuring out whether I could implement some node that does that detailing for very large images in a way that's easy to set up with these nodes.
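A quick back-of-the-envelope check of the effect being described (the 1024 target resolution below is illustrative, not necessarily the node's default):

```python
# Back-of-the-envelope: how much resolution a masked region loses when
# its crop is squeezed down to a fixed sampling size. The 1024 target
# is illustrative, not necessarily the node's default.

def downscale_factor(crop_w: int, crop_h: int, target: int = 1024) -> float:
    """Scale applied to the crop so it fits the sampling resolution."""
    return min(target / crop_w, target / crop_h, 1.0)

print(downscale_factor(512, 512))    # 1.0   -> small mask, no resolution loss
print(downscale_factor(3072, 2048))  # ~0.33 -> inverted whole-image mask, heavy loss
```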
@boobak · 2 months ago
Great nodes! Thank you for sharing. It definitely optimizes the time required for inpainting. I have a question: is it possible to run these nodes with a latent batch, for example 4 or more?
@ronnykhalil · 2 months ago
So glad I came across this in a Reddit thread. Super helpful.
@Vigilence · 2 months ago
I have a photo of a woman. I created a mask for the ear and also masked some of the area below it so I can add an earring. However, when inpainting, no earring is added. Any tips?
@elezetamusic · 2 months ago
@Vigilence Try setting a context mask with the whole face and writing a prompt such as "a woman with earrings". That probably works.
@Vigilence · 2 months ago
@elezetamusic I will try! Ty
@MrPer4illo · 2 months ago
Thank you for the video. Can this approach be implemented for AnimateDiff (animating only certain areas of an image)?
@elezetamusic · 2 months ago
Totally; I am aware of some users using it for that purpose.
@李云-f1b · 2 months ago
Very good, thank you very much.
@PanKazik · 2 months ago
Very nice tutorial. The only problem I have is that whatever model I use (I tried a few inpainting models), the result is always a black square in place of the mask. Any idea why this happens? (I am using Comfy on Mac.)
@elezetamusic · 2 months ago
@PanKazik Do inpainting models work for you without my nodes? If not, then the issue is unrelated to the nodes and I don't think I can help. If they work without my nodes, I'd suggest loading the workflow from GitHub and trying it with different models without changing anything. That should work. I also use a Mac. There's an issue with some Mac updates where the GPU only generates black images, but that would affect all models, not only inpainting. Check if that's the case.
@PanKazik · 2 months ago
@elezetamusic I tried a simple inpainting workflow and it worked. However, I found the cause of the problem: the number of steps. Anything over 16 produced a black square. With that in mind, everything works flawlessly. Thanks for your response :)
@derrickpang4304 · 3 months ago
This is exactly what I need. Thank you so much!!!!!
@elezetamusic · 3 months ago
@derrickpang4304 Thank you!!
@InaKilometrosX1TUBO · 3 months ago
Hi, thanks for your videos. I have an error; maybe you could help me: "Prompt outputs failed validation. UnetLoaderGGUF: Value not in list: unet_name: 'None' not in []"
@elezetamusic · 3 months ago
@InaKilometrosX1TUBO This doesn't seem related to my nodes but to other nodes that I don't know. Sorry, I can't help.
@subashchandra9557 · 3 months ago
Now I just need to know how to do the Fill/Original/Latent Noise/Latent Nothing options in ComfyUI!
@marcinchaciej · 3 months ago
This should be the default in Comfy. You made a great, great contribution and we all really appreciate it.
@elezetamusic · 3 months ago
@marcinchaciej Thank you!!
@RemiStardust · 3 months ago
Excellent work! This is something I really wanted! Very simple workflow; it's even easy to stitch the updated area back into the original image! Thank you!
@elezetamusic · 3 months ago
@RemiStardust Thank you!
@user-yg4qo9zg3u · 3 months ago
You are a wonderful person, thank you for sharing! This is what I've been looking for! Just perfect! 🔥
@KevinScandinavia · 3 months ago
Yeah, no kidding, and including the JSON file was just the icing on the cake <3 Thank you so much!
@weaze5583 · 3 months ago
Is there no path setting for the output? Where are these images saved? Output/Temp/Input is empty, and only the examples folder in the custom nodes is there. Huh? No sign of an HTML file or folder.
@elezetamusic · 3 months ago
@weaze5583 They should be exported to output/ as an HTML file plus a folder with images. Are you using the example workflow with the node to export the gallery as well? Otherwise, can you file an issue via GitHub and provide a screenshot of your workflow and some info on your setup, OS, etc.? This should work with the provided example workflow.
@weaze5583 · 3 months ago
@elezetamusic Excuse me, I think there was some problem on my end. I don't know why, but the sampler ran through and didn't produce an image. I've never had this before; it just sampled forever.
@Milo_Estobar · 3 months ago
Thank you for your contribution 👍🏽 (app extension creator + tutorial)...
@EmeranceLN13 · 3 months ago
Subbed! Thank you for such a straightforward explanation!
@Damian151614 · 3 months ago
Do you know why I get mismatched colors? I set denoise to 0.00 on purpose because I wanted to get the same output as the input image, but somehow I get faded colors in the masked area.
@elezetamusic · 3 months ago
@Damian151614 That's the encoding and decoding process of the VAE. A different VAE may produce more accurate colors for that specific image.
@Damian151614 · 3 months ago
@elezetamusic It was a model problem. I tested it with other models (with and without a baked-in VAE) and only that one caused problems.
@hphector6 · 3 months ago
I have an issue where the mask is still very visible, like a gray mask over the inpainted area.
@elezetamusic · 3 months ago
@hphector6 Hi, I'd bet that you're using VAE Encode (for Inpainting) with a denoise lower than 1. If you want to use a denoise lower than 1, use InpaintModelConditioning instead of VAE Encode (for Inpainting).
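For reference, the swap being suggested looks roughly like this in ComfyUI's API format. VAEEncodeForInpaint and InpaintModelConditioning are the core node class names; the node-id placeholders and exact wiring below are illustrative:

```python
# Fragment in ComfyUI API format showing the suggested swap. The
# "…_node" id placeholders are illustrative; "VAEEncodeForInpaint" and
# "InpaintModelConditioning" are core ComfyUI node class names.

# Only suitable for denoise = 1.0: zeroes out the masked latent area.
vae_encode_for_inpaint = {
    "class_type": "VAEEncodeForInpaint",
    "inputs": {"pixels": ["cropped_image_node", 0],
               "mask": ["cropped_mask_node", 0],
               "vae": ["vae_node", 0],
               "grow_mask_by": 6},
}

# Works with denoise < 1.0: keeps the original latent under the mask
# and feeds inpaint conditioning to the model.
inpaint_model_conditioning = {
    "class_type": "InpaintModelConditioning",
    "inputs": {"positive": ["positive_prompt_node", 0],
               "negative": ["negative_prompt_node", 0],
               "pixels": ["cropped_image_node", 0],
               "mask": ["cropped_mask_node", 0],
               "vae": ["vae_node", 0]},
}
```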
@hphector6 · 3 months ago
@elezetamusic I was using InpaintModelConditioning as per the default workflow. I think it might have been the model I'm using, which is a Pony variant; no issues with Epic Real XL.
@henryphillips6167 · 4 months ago
How do we modify this workflow to work with premade masks? I would also like to use another image as the fill for the masked areas. Could you detail how I would go about this?
@elezetamusic · 4 months ago
@henryphillips6167 I get what you're asking, but this is out of the scope of this video and these nodes. I'd suggest you continue learning ComfyUI and you'll eventually figure out how to do this. Sorry, I can't offer tailored ComfyUI support.
@treksis · 4 months ago
Thank you so much, you saved my life. I tried to copy AUTOMATIC1111's inpainting myself; this is much better.
@fromloveandlifestory · 4 months ago
Thank you!!! Please create a node that can import a high-resolution image and split it into specified sections. Then it should run the img2img (inpaint) process on each section and finally combine all the split sections back into a complete image. The goal is to be able to process high-resolution images without having to manually split them in Photoshop and edit each section individually.
@elezetamusic · 4 months ago
@fromloveandlifestory There is a tiled sampler for that!
@fromloveandlifestory · 4 months ago
Could you please share a similar workflow with me? Thank you very much!
@davoodice · 4 months ago
Thank you
@NeonSparks · 4 months ago
This is exactly what I have been looking for! Amazing, can't wait to try this. Thanks
@elezetamusic · 4 months ago
@NeonSparks Thank you!! Enjoy!
@skycladsquirrel · 4 months ago
Perfect. Thank you. Subscribed!
@831digital · 4 months ago
Instead of manually painting the mask in, do you have an example of this working with a detector that generates masks? That would make it more useful for animation.
@elezetamusic · 4 months ago
No, but you can easily put it together :) Give it a go!
@831digital · 4 months ago
@elezetamusic I tried doing it with SEGS, but SEGS resizes the video and throws an error when trying to feed the mask back to the original. If it's super easy, please share an example.
@natlrazfx · 4 months ago
Brilliant, thank you so much
@elezetamusic · 4 months ago
Thank you!
@mahilkr · 4 months ago
Hi @Elezeta, excellent work! How can I generate multiple variations of a stitch? Currently, it only works with the repeater set to a value of 1.
@elezetamusic · 4 months ago
Hey, please provide more details on what you want to do. If you want to generate multiple images, you could enqueue the job multiple times or, in ComfyUI's advanced options, set a higher batch number. If you want to use a repeater node, you'd have to repeat both the image and the masks (I'm not sure the repeater node can do masks). Not sure if I answered your question. If not, please let me know what you mean by "it only works..." Does it give an error?
@arkelss4 · 5 months ago
Is it possible to use this with LoRAs? The real question is whether there is a way to test multiple LoRAs within a prompt or combination. Would I need a LoRA stack to include all the LoRAs that may be used and triggered, or can I just include the tags in the prompt so they trigger automatically? What do you believe is available? Overall, I want to use this and include different LoRAs for different prompts. Thank you in advance.
@elezetamusic · 5 months ago
This node seems to do exactly what you want: github.com/badjeff/comfyui_lora_tag_loader Use it with Prompt Combinator! Please check and validate that the node is legit; I haven't written it or used it.
@arkelss4 · 5 months ago
@elezetamusic I appreciate it and will give it a try and report back.
@arkelss4 · 5 months ago
@elezetamusic I confirm it does work. Using "Load LoRA Tag" incorporated into your workflow works; several LoRAs can be used at once in many different combinations. I appreciate you creating this node. I plan to further incorporate an InstantID consistent-character workflow with this for consistency across all prompts, and possibly run several of your nodes in one workflow: for consistency, for ControlNet with an image cycler, and maybe one more for a feature yet to be determined.
@elezetamusic · 4 months ago
@arkelss4 Awesome!!!
@ARAI96969 · 5 months ago
Thank you so much for this node! Before this, I'd never found a way to inpaint one area without sending the whole image to the encoder and spoiling the overall quality. This is a godsend!! But I do have one query: which settings should I use if I wish to inpaint a large area (for example, 1/4 of the whole image, masking a whole character) and change the character entirely to another LoRA character, thus creating two unique characters interacting? If I mask a large area, the output is usually very bad: it lacks detail and is distorted. Should I upscale the area, downscale it, or perhaps increase the size of the mask? Thanks for any tips; I wish to keep my workflow simple and avoid using segmentation and auto-detection to mask and repaint characters to my LoRA. I prefer to choose and mask them myself for more control.
@ARAI96969 · 5 months ago
Hi, do you have any tips for inpainting larger areas? I used your settings but it generates distortions; any advice on good settings would be greatly appreciated.
@elezetamusic · 4 months ago
@ARAI96969 Well, for larger areas I'd suggest inpainting the whole area first, then detailing the key areas in it with several passes. You could also consider a tiled sampler. There's no magic solution for sampling at higher resolutions with high detail in a single, fast pass.
@Daralima. · 5 months ago
These nodes make inpainting in Comfy super convenient and easy to adjust to your needs. Thank you!
@elezetamusic · 4 months ago
Thank you!
@Kikoking-y9b · 5 months ago
Hi, very nice node, thank you a lot. I have a question: how can I do upscaling before sampling? What does that mean, and how can I do it? I'm not sure if you mean what I think. I'm thinking of cutting out an area around a face, upscaling only the face, and stitching the upscaled face back in. Can you help me?
@elezetamusic · 5 months ago
If you set mode to ranged size or forced size, the cropped image is automatically upscaled (or downscaled) to fit that resolution. Then you sample on it, and during stitching it is returned to the original size. So you don't have to worry; the node takes care of it for you! You can check this by previewing the cropped image and checking its size.
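A sketch of the resize behavior being described, assuming a simple fit-to-range rule (the node's exact sizing and rounding may differ):

```python
# Sketch of the "ranged size" idea: scale the crop so its longer side
# lands inside [min_side, max_side], sample at that size, and scale the
# result back when stitching. The node's exact rounding may differ.

def ranged_size(crop_w: int, crop_h: int,
                min_side: int = 512, max_side: int = 1024):
    longer = max(crop_w, crop_h)
    scale = 1.0
    if longer < min_side:
        scale = min_side / longer   # upscale small crops (e.g. a face)
    elif longer > max_side:
        scale = max_side / longer   # downscale huge crops
    # Round to multiples of 8 so the VAE latent dimensions stay valid.
    w = int(round(crop_w * scale / 8)) * 8
    h = int(round(crop_h * scale / 8)) * 8
    return w, h, scale

print(ranged_size(256, 200))  # (512, 400, 2.0): a small face crop gets upscaled
```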
@dfhdgsdcrthfjghktygerte · 5 months ago
I want to erase something from skin and flood-fill it with just one color that matches the surroundings. Is this possible? When I try to use a "skin" or "color" prompt, it inserts faces or random stuff into the masked area.
@elezetamusic · 5 months ago
Hi! Extend the context area enough so you can see where the skin is (e.g., an arm, a leg, whatever) and then type in "an arm" or "a leg", or even extend the context area further to show there's a person and then type a prompt like "a person". That will give the sampler enough context to fill in the gap seamlessly.
@elezetamusic · 5 months ago
Also, use an inpainting model; they work much better and don't add random stuff in the masked area.