Comments
@haliskurguyan · 1 hour ago
Super... So how would this work selectively (as an interactive tool) in the ComfyUI dev mode/API section? Development in that direction would be perfect.
@illbrat · 3 days ago
How do you get that progressive preview image in the KSampler node?
@elezetamusic · 3 days ago
@illbrat Oh, enable previews somewhere in the config; in the ComfyUI Manager config, I believe!
@illbrat · 3 days ago
@elezetamusic Thank you!
@offmybach · 9 days ago
Can you do this with an existing image to add to the main image, not just prompting?
@elezetamusic · 9 days ago
@offmybach I don't get the question, can you explain further? Thanks!
@offmybach · 9 days ago
@elezetamusic If you already have a character image and, for example, a living room that you also created, you wouldn't be prompting for the living room that you already perfected.
@offmybach · 7 days ago
@elezetamusic Does that make sense?
@zGenMedia · 11 days ago
I assume this will work for Flux?
@elezetamusic · 11 days ago
@zGenMedia There's an example for Flux in the repository :)
@zGenMedia · 9 days ago
@elezetamusic Thanks, boss
@yvan_sellest · 26 days ago
These nodes are absolutely insane. However, I struggle a lot inpainting in plain-color areas. For example, I have a perfume bottle on a beige background; it usually fails to generate anything in the "empty" area. Any idea why?
@elezetamusic · 25 days ago
@yvan_sellest Are you setting denoise to 1? Are you masking enough area? For this use case, are you encoding with VAE Encode (for Inpainting)?
@Scrapemist · 1 month ago
Does it work with ControlNet? I get weird outputs with black noise. Have you tried?
@elezetamusic · 1 month ago
@Scrapemist It should. Just plug things in correctly; the sampler/ControlNet don't even know these nodes are part of the workflow!
@chaios · 1 month ago
Wow, this is what I've been looking for. Tried Impact and Masquerade, but they were a bit too fiddly. This looks more direct to the point.
@fr34ky5 · 1 month ago
Thanks a lot for this! This was the only thing still keeping me from doing my complete workflow in ComfyUI 🥰
@elezetamusic · 1 month ago
@fr34ky5 Thank you!!
@---Nikita-- · 1 month ago
Like a FaceDetailer, but for a hand-painted mask + outpaint. Thanks. +New sub
@elezetamusic · 1 month ago
@---Nikita-- Thank you!
@spinear · 1 month ago
Thank you for making this. I hate that ComfyUI lacks basic things.
@chrislongley · 1 month ago
Have been using your nodes for a couple of weeks now and I think they are fantastic.
@olekXDDDD · 1 month ago
It does not work with the Efficiency KSampler (Advanced); I get an error. In case someone is having the same problem: "Input and output sizes should be greater than 0, but got input (H: 1, W: 1) output (H: 0, W: 0)"
@elezetamusic · 1 month ago
@olekXDDDD It should, though, since you'd basically still be sampling on an image and a mask. You must be plugging things in incorrectly.
@李云-f1b · 1 month ago
The work is great, but it hasn't been updated in a long time.
@AInfectados · 1 month ago
First, thanks for this wonderful workflow; I had been searching for this for a long time so I could stop using AUTO1111 just for the inpainting part. Can you adapt this for *FLUX* and also add the LORA POWER LOADER node, please? PS: We now have an inpaint model for FLUX (ControlNet): *FLUX.1 dev Controlnet Inpainting Alpha*
@elezetamusic · 1 month ago
@AInfectados These nodes should be completely independent of the model you're using! Just plug in your samplers etc. on the cropped image.
@imonutiy · 2 months ago
Looks like blend_pixels just adds more pixels to the context, so it is the same as the 1st option.
@elezetamusic · 1 month ago
@imonutiy Not really! Blend adds more pixels to have enough context to blend, but it also does a gradual blending of the newly generated area into the original image, so that the transition is less abrupt. You see more context because it is required for blending.
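A minimal sketch of what that gradual blending could look like, shown in 1-D for clarity. This is illustrative only, not the node's actual code: it assumes a simple box-feathered mask used as a per-pixel linear interpolation weight.

```python
# Illustrative sketch (not the node's actual implementation): blending a
# newly generated region back into the original image with a feathered mask.
# A hard 0/1 mask is softened over `blend` pixels, then used as a per-pixel
# interpolation weight, so the transition is gradual rather than a hard seam.

def feather_1d(mask, blend):
    """Soften a 1-D 0/1 mask by box-averaging over a (2*blend+1) window."""
    n = len(mask)
    soft = []
    for i in range(n):
        window = mask[max(0, i - blend):min(n, i + blend + 1)]
        soft.append(sum(window) / len(window))
    return soft

def blend_1d(original, generated, mask, blend):
    """Linear blend: weight 1 keeps the generated pixel, 0 the original."""
    weights = feather_1d(mask, blend)
    return [o * (1 - w) + g * w
            for o, g, w in zip(original, generated, weights)]

original  = [10, 10, 10, 10, 10, 10]
generated = [90, 90, 90, 90, 90, 90]
mask      = [0, 0, 1, 1, 0, 0]   # inpainted region in the middle

# Pixels ramp smoothly from original (10) toward generated (90) and back.
print(blend_1d(original, generated, mask, blend=1))
```

With `blend=0` the weights stay hard 0/1 and the seam is abrupt; increasing `blend` widens the ramp, which is why the node needs extra context pixels around the mask.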
@imonutiy · 1 month ago
@elezetamusic Thank you for the answer. By the way, what are the other options for inpainting sketching nodes in ComfyUI? It feels really wonky compared to the web UI.
@jonjoni518 · 2 months ago
fantastic!!!
@NywlMac · 2 months ago
THANK YOU!! I was looking for a tutorial so I could understand how the nodes in an inpaint workflow work, and all the ones I found were already those HUUGE workflows. This one is simple and works, and now that I understand how it works, I can improve it as I please. Great tutorial!
@matthallett4126 · 2 months ago
This is a great solution, I love it. Thank you so much!
@Vigilence · 2 months ago
I used this node successfully with a regular mask. However, if I use the invert mask option with the previous mask and the same base image, the inpainted image is much lower in quality. The same settings are used as in the successful inpaint, so I'm not sure what the issue is. Can you confirm by testing with the latest ComfyUI that the invert mask option is working properly?
@elezetamusic · 2 months ago
@Vigilence Hi, this is because the inpainting area is so much larger if you do the whole image: the node downscales it to fit the target resolution, and therefore you lose resolution. A good solution is to apply a detailer afterwards. I'm figuring out whether I could implement a node that does that detailing for very large images in a way that's easy to set up with these nodes.
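The resolution loss described here is easy to see with a bit of arithmetic. This is an assumed model of the behavior, not the node's exact code: a crop larger than the target sampling resolution must be scaled down to fit, and the stitched result is scaled back up, so detail is lost in proportion to the scale factor.

```python
# Illustrative arithmetic (assumed behavior, not the node's exact code):
# when the cropped area exceeds the target sampling resolution, it is
# downscaled to fit before sampling and upscaled back when stitching,
# so fine detail is lost proportionally to the scale factor.

def fit_scale(crop_w, crop_h, target_w, target_h):
    """Uniform scale factor that fits the crop inside the target size."""
    return min(target_w / crop_w, target_h / crop_h)

# Small mask: the crop is sampled at or above native resolution.
print(fit_scale(512, 512, 1024, 1024))    # 2.0 -> no detail lost

# Whole-image mask on a large picture: heavy downscale before sampling.
print(fit_scale(3072, 2048, 1024, 1024))  # 1/3 -> 3 source pixels become 1
```

This is why inverting the mask over a large image looks soft: the whole frame is squeezed into the sampling resolution, and a follow-up detailer pass is needed to recover fine detail.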
@boobak · 2 months ago
Great nodes! Thank you for sharing. It definitely cuts down the time required for inpainting. I have a question: is it possible to run these nodes with a latent batch of, for example, 4 or more?
@ronnykhalil · 2 months ago
So glad I came across this on a Reddit thread. Super helpful.
@Vigilence · 2 months ago
I have a photo of a woman. I created a mask for the ear and also masked some of the area below it so I can add an earring. However, when inpainting, no earring is added. Any tips?
@elezetamusic · 2 months ago
@Vigilence Try setting a context mask with the whole face and writing a prompt such as "a woman with earrings". That will probably work.
@Vigilence · 2 months ago
@elezetamusic I will try! Ty
@MrPer4illo · 2 months ago
Thank you for the video. Can this approach be used with AnimateDiff (to animate only certain areas of an image)?
@elezetamusic · 2 months ago
Totally; I'm aware of some users using it for that purpose.
@李云-f1b · 2 months ago
Very good, thank you very much.
@PanKazik · 2 months ago
Very nice tutorial. The only problem I have is that regardless of the model I use (I tried a few inpainting models), the result is always a black square in place of the mask. Any idea why that happens? (I am using Comfy on a Mac.)
@elezetamusic · 2 months ago
@PanKazik Do inpainting models work for you without my nodes? If not, then the issue is unrelated to the nodes and I don't think I can help. If they work without my nodes, I'd suggest loading the workflow from GitHub and trying it with different models without changing anything; that should work. I also use a Mac. There's an issue with some Mac updates where the GPU only generates black images, but it would affect all models, not only inpainting ones. Check if that's the case.
@PanKazik · 2 months ago
@elezetamusic I tried a simple inpainting workflow and it worked. However, I found the reason: the problem was the number of steps. Anything over 16 produced a black square. With that in mind, everything works flawlessly. Thanks for your response :)
@derrickpang4304 · 3 months ago
This is exactly what I need. Thank you so much!!!!!
@elezetamusic · 3 months ago
@derrickpang4304 Thank you!!
@InaKilometrosX1TUBO · 3 months ago
Hi, thanks for your videos. I have an error; see if you could help me: "Prompt outputs failed validation. UnetLoaderGGUF: Value not in list: unet_name: 'None' not in [ ]"
@elezetamusic · 3 months ago
@InaKilometrosX1TUBO This doesn't seem related to my nodes but to other nodes that I don't know. Sorry, I can't help.
@subashchandra9557 · 3 months ago
Now I just need to know how to do the Fill/Original/Latent Noise/Latent Nothing options in ComfyUI!
@marcinchaciej · 3 months ago
This should be a default in Comfy. You made a great, great contribution and we all really appreciate it.
@elezetamusic · 3 months ago
@marcinchaciej Thank you!!
@RemiStardust · 3 months ago
Excellent work! This is something I really wanted! Very simple workflow; it's even easy to stitch the updated area back into the original image! Thank you!
@elezetamusic · 3 months ago
@RemiStardust Thank you!
@user-yg4qo9zg3u · 3 months ago
You are a wonderful person, thank you for sharing! This is what I've been looking for! Just perfect! 🔥
@KevinScandinavia · 3 months ago
Yeah, no kidding, and including the JSON file was just the icing on the cake <3 Thank you so much!
@weaze5583 · 3 months ago
Is there no path for the output? Where are these images saved? Output/Temp/Input are empty, and only the examples folder in the custom nodes directory is there. Huh? No sign of an HTML file or folder.
@elezetamusic · 3 months ago
@weaze5583 They should be exported to output/ as an HTML file plus a folder with images. Are you using the example workflow with the node that exports the gallery as well? Otherwise, can you file an issue via GitHub and provide a screenshot of your workflow and some info on your setup, OS, etc.? This should work with the provided example workflow.
@weaze5583 · 3 months ago
@elezetamusic Excuse me, I think there was some problem on my end. I don't know why, but the sampler ran through without producing an image. Never had this before; it just sampled forever.
@Milo_Estobar · 3 months ago
Thank you for your contribution 👍🏽 (app extension creator + tutorial)...
@EmeranceLN13 · 3 months ago
Subbed! Thank you for such a straightforward explanation!
@Damian151614 · 3 months ago
Do you know why I get mismatched colors? I set denoise to 0.00 on purpose because I wanted to get the same output as the input image, but somehow I get faded colors in the masked area.
@elezetamusic · 3 months ago
@Damian151614 That's the encoding and decoding process of the VAE. A different VAE may produce more accurate colors for that specific image.
@Damian151614 · 3 months ago
@elezetamusic It was a model problem. I tested it on other models (with and without a baked-in VAE) and only that one caused problems.
@hphector6 · 3 months ago
I have an issue where the mask is still very visible, like a gray overlay on the inpainted area.
@elezetamusic · 3 months ago
@hphector6 Hi, I'd bet that you're using VAE Encode (for Inpainting) with a denoise lower than 1. If you want to use a denoise lower than 1, use InpaintModelConditioning instead of VAE Encode (for Inpainting).
@hphector6 · 3 months ago
@elezetamusic I was using InpaintModelConditioning as per the default workflow. I think it might have been the model I'm using, which is a Pony variant; no issues with epic real xl.
@henryphillips6167 · 4 months ago
How do we modify this workflow to work with premade masks? I would also like to use another image as the fill for the masked areas. Could you detail how I would go about this?
@elezetamusic · 4 months ago
@henryphillips6167 I get what you're asking, but this is out of the scope of this video and these nodes. I'd suggest you continue learning ComfyUI and you'll eventually figure out how to do this. Sorry, I can't offer tailored ComfyUI support.
@treksis · 4 months ago
Thank you so much, you saved my life. I tried to copy AUTO1111's inpainting myself; this is much better.
@fromloveandlifestory · 4 months ago
Thank you!!! Please create a node that can import a high-resolution image and split it into specified sections, run the img2img (inpaint) process on each section, and finally combine all the split sections back into a complete image. The goal is to be able to process high-resolution images without having to manually split them in Photoshop and edit each section individually.
@elezetamusic · 4 months ago
@fromloveandlifestory There is a tiled sampler for that!
@fromloveandlifestory · 4 months ago
Could you please share a similar workflow with me? Thank you very much!
@davoodice · 4 months ago
Thank you
@NeonSparks · 4 months ago
This is exactly what I have been looking for! Amazing, can't wait to try this. Thanks
@elezetamusic · 4 months ago
@NeonSparks Thank you!! Enjoy!
@skycladsquirrel · 4 months ago
Perfect. Thank you. Subscribed!
@831digital · 4 months ago
Instead of manually painting the mask in, do you have an example of this working with a detector that generates masks? That would make it more useful for animation.
@elezetamusic · 4 months ago
No, but you can easily put it together :) Give it a go!
@831digital · 4 months ago
@@elezetamusic I tried doing it with SEGS, but SEGS resizes the video and throws an error when trying to feed the mask back to the original. If it's super easy, please share an example.
@natlrazfx · 4 months ago
brilliant, thank you so much
@elezetamusic · 4 months ago
Thank you!
@mahilkr · 4 months ago
Hi @Elezeta, excellent work! How can I generate multiple variations of a stitch? Currently, it only works with the repeater set to a value of 1.
@elezetamusic · 4 months ago
Hey, please provide more details on what you want to do. If you want to generate multiple images, you could enqueue the job multiple times or, in ComfyUI's advanced options, set a higher batch count. If you want to use a repeater node, you'd have to repeat both the image and the masks (I'm not sure the repeater node can handle masks). Not sure if I answered your question; if not, please let me know what you mean by "it only works...". Does it give an error?
@arkelss4 · 5 months ago
If I wanted to use this with LoRAs, is that possible? But the real question is whether there's a way to test multiple LoRAs within a prompt or combination. Would I need a LoRA stack that includes all the LoRAs that may be used and triggered, or can I just include the tags in the prompt and maybe that will trigger them automatically? What do you believe is available? Overall, I want to use this and include different LoRAs for different prompts. Thank you in advance.
@elezetamusic · 5 months ago
This node seems to do exactly what you want: github.com/badjeff/comfyui_lora_tag_loader. Use it with prompt combinator! Please check and validate that the node is legit; I haven't written it nor used it.
@arkelss4 · 5 months ago
@elezetamusic I appreciate it; I'll give it a try and report back.
@arkelss4 · 5 months ago
@elezetamusic I confirm it does work: "Load LoRA Tag" incorporated into your workflow works, and several LoRAs can be used at once in many different combinations. I appreciate you creating this node. I plan to further incorporate an InstantID consistent-character workflow with this, for consistency across all prompts, and possibly run several of your nodes in one workflow: for consistency, for ControlNet with an image cycler, and maybe one more for a feature to be determined.
@elezetamusic · 4 months ago
@arkelss4 Awesome!!!
@ARAI96969 · 5 months ago
Thank you so much for this node! Before this, I never found a way to inpaint one area without sending the whole image to the encoder and spoiling the overall quality. This is a godsend!! But I do have one query: which settings should I use if I wish to inpaint a large area (for example, 1/4 of the whole image, masking a whole character) and replace that character entirely with another LoRA character, thus creating two unique characters interacting? If I mask a large area, the output is usually very bad, lacking detail and distorted. Do I upscale the area, downscale it, or increase the size of the mask, perhaps? Thanks for any tips; I wish to keep my workflow simple and avoid using segmentation and auto-detection to mask and repaint characters to my LoRA. I prefer to choose and mask them myself for more control.
@ARAI96969 · 5 months ago
Hi, do you have any tips for inpainting larger areas? I used your settings but it generates distortions; any advice on good settings would be greatly appreciated.
@elezetamusic · 4 months ago
@ARAI96969 Well, for larger areas I'd suggest inpainting the whole area first, then detailing the key areas in it with several passes. You could also consider a tiled sampler. There's no magic solution for sampling at higher resolutions with high detail in a single fast pass.
@Daralima. · 5 months ago
These nodes make inpainting in Comfy super convenient and easy to adjust to your needs. Thank you!
@elezetamusic · 4 months ago
Thank you!
@Kikoking-y9b · 5 months ago
Hi, very nice node, thank you a lot. I have a question: how can I do upscaling before sampling? What does that mean, and how do I do it? I'm not sure if you mean what I think. I'm thinking of cutting an area around a face, upscaling only the face, and stitching the upscaled face back. Can you help me?
@elezetamusic · 5 months ago
If you set the mode to ranged size or forced size, the cropped image is automatically upscaled (or downscaled) to fit that resolution. Then you sample on it, and during stitching it is returned to the original size. So you don't have to worry; the node takes care of it for you! You can check this by previewing the cropped image and checking its size.
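The round trip described above can be sketched in a few lines. This is a simplified illustration under assumed behavior, not the node's real code: `crop_box` and `round_trip_sizes` are hypothetical helpers showing how a small face region gets cropped, scaled up for sampling, and returned to its original size for stitching.

```python
# Minimal sketch of the crop -> upscale -> sample -> downscale -> stitch
# round trip (assumed behavior, not the node's actual code). A small face
# region is cropped, scaled to a sampling resolution so the sampler sees
# more pixels, then scaled back and pasted in place.

def crop_box(cx, cy, size, img_w, img_h):
    """Clamp a size x size box centered at (cx, cy) to the image bounds."""
    x0 = max(0, min(cx - size // 2, img_w - size))
    y0 = max(0, min(cy - size // 2, img_h - size))
    return (x0, y0, x0 + size, y0 + size)

def round_trip_sizes(box, target):
    """Image sizes at each stage: cropped, sampled, stitched back."""
    w = box[2] - box[0]
    h = box[3] - box[1]
    return {"cropped": (w, h), "sampled": (target, target), "stitched": (w, h)}

# A 256px face region inside a 1920x1080 frame, sampled at 1024x1024:
box = crop_box(cx=600, cy=300, size=256, img_w=1920, img_h=1080)
print(box)                         # the crop stays inside the frame
print(round_trip_sizes(box, 1024)) # 256 -> 1024 for sampling -> 256 back
```

The key point from the reply holds here: the sampler only ever sees the upscaled crop, and the stitch step restores the original geometry, so the rest of the workflow needs no changes.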
@dfhdgsdcrthfjghktygerte · 5 months ago
I want to erase something from skin and flood-fill it with one color that matches the surroundings. Is this possible? When I try to use a "skin" or "color" prompt, it inserts faces or random stuff into the masked area.
@elezetamusic · 5 months ago
Hi! Extend the context area enough so you can see where the skin is (e.g. an arm, a leg, whatever) and then type "an arm", "a leg", or even extend the context area further to show there's a person and type a prompt like "a person". That will give the sampler enough context to fill in the gap seamlessly.
@elezetamusic · 5 months ago
Also, use an inpainting model; they work much better and don't add random stuff in the masked area.