The Advanced Inpaint workflow has been my go-to for that task, so with the Flux Fill model I wanted to see if I could integrate it. You might already have done this, but in case not: I swapped the Basic Guider for a CFG Guider to get a negative input, connected the positive conditioning to an InpaintModelConditioning node with an empty CLIP Text Encode as the negative, and hooked up the rest of the inputs. The positive and negative outputs from InpaintModelConditioning feed into the CFG Guider, the latent goes to the SamplerCustomAdvanced, and the Set Latent Noise Mask and VAE Encode nodes are removed. It's working great so far :)
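If it helps anyone follow along, here's a rough sketch of just that rewired section in ComfyUI's API ("prompt") format, written as a Python dict. The node IDs and the loader/noise/sampler/sigma references are placeholders rather than anything from the video's workflow, and exact input names can vary a little between ComfyUI versions, so treat it as an illustration of the connections, not a drop-in graph:

```python
# Sketch of the described rewiring in ComfyUI API-prompt style.
# All node IDs and the *_loader / noise / sampler / sigmas references are
# placeholders; only the connections matter here.
rewired_section = {
    "pos_text": {
        "class_type": "CLIPTextEncode",
        "inputs": {"text": "your inpaint prompt", "clip": ["clip_loader", 0]},
    },
    "neg_text": {  # empty negative prompt
        "class_type": "CLIPTextEncode",
        "inputs": {"text": "", "clip": ["clip_loader", 0]},
    },
    "inpaint_cond": {  # replaces Set Latent Noise Mask + VAE Encode
        "class_type": "InpaintModelConditioning",
        "inputs": {
            "positive": ["pos_text", 0],
            "negative": ["neg_text", 0],
            "vae": ["vae_loader", 0],
            "pixels": ["load_image", 0],  # source image
            "mask": ["load_image", 1],    # painted mask
        },
    },
    "guider": {  # CFG Guider instead of Basic Guider, so the negative is used
        "class_type": "CFGGuider",
        "inputs": {
            "model": ["model_loader", 0],
            "positive": ["inpaint_cond", 0],
            "negative": ["inpaint_cond", 1],
            "cfg": 2.0,  # placeholder value
        },
    },
    "sample": {
        "class_type": "SamplerCustomAdvanced",
        "inputs": {
            "noise": ["noise_node", 0],
            "guider": ["guider", 0],
            "sampler": ["sampler_node", 0],
            "sigmas": ["sigmas_node", 0],
            "latent_image": ["inpaint_cond", 2],
        },
    },
}
```

The only real point of the CFG Guider here is that it exposes a negative conditioning input, which the Basic Guider doesn't.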
@GrocksterRox · 2 days ago
Yup, just started playing with it a couple days ago and I agree it's pretty game-changing. I'm going to be introducing a new video on it pretty soon.
@steveg9380 · 2 months ago
Wow, your tutorials are the best I've seen to date. Your explanation of how things function is well done, my dude. Hats off!
@GrocksterRox · a month ago
Thank you so much, the more we can help each other, the better the art and media we can create! Feel free to share the knowledge wealth (and videos) 😁
@Uday_अK · 2 months ago
Really appreciate your efforts!
@GrocksterRox · 2 months ago
Thank you so much!
@ShubzGhuman · 2 months ago
great guide bro
@GrocksterRox · 2 months ago
Thank you so much!
@yu-gg · 2 months ago
I learned a lot watching this, thanks :)
@GrocksterRox · 2 months ago
That makes me so happy, my goal is to help as many people in the community as possible!
@sven1858 · 2 months ago
nice video, learnt about a few new nodes with this.
@GrocksterRox · 2 months ago
Awesome, so glad it was helpful (every tip and trick helps)
@runebinder · 2 months ago
I was talking to someone last night about wanting to find a way to crop a masked area, upscale it for inpainting, and then shrink it back down to put it back in place. I'd installed some nodes to try this out, and then you saved me the effort with this video :) Awesome stuff, downloading it to try now, thank you.
@GrocksterRox · 2 months ago
Great minds think alike! 😂 Good luck!
@runebinder · 2 months ago
@GrocksterRox I take it this could be retrofitted to other model types like SDXL, Kolors, etc. by replacing the loaders and custom sampler? The reason I was thinking along these lines is that, as far as I'm aware, the crop/upscale/inpaint/shrink/put-back approach is what Fooocus does for its Improve Details option in inpaint, and since that's been the best inpainting tool, it seemed like the way to try to replicate those kinds of results in Comfy.
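For anyone curious what that crop → upscale → inpaint → shrink → put-back loop looks like outside of any particular UI, here's a rough Pillow sketch. The run_inpaint function is just a stub standing in for whatever backend you use (Fooocus, ComfyUI, diffusers, ...), so this illustrates the compositing idea rather than a working inpaint pipeline:

```python
# Rough sketch of crop -> upscale -> inpaint -> shrink -> paste back, using Pillow.
# The mask is assumed to be a single-channel ("L" mode) image.
from PIL import Image


def run_inpaint(image: Image.Image, mask: Image.Image) -> Image.Image:
    # Placeholder stub: swap in your actual inpainting backend here.
    return image


def inpaint_masked_region(image: Image.Image, mask: Image.Image,
                          bbox: tuple[int, int, int, int],
                          work_size: int = 1024) -> Image.Image:
    """Crop the masked area, upscale it for inpainting, then shrink and paste it back."""
    left, top, _, _ = bbox
    crop = image.crop(bbox)
    crop_mask = mask.crop(bbox)

    # Upscale the crop so the model works at a more comfortable resolution.
    scale = work_size / max(crop.size)
    big = crop.resize((round(crop.width * scale), round(crop.height * scale)), Image.LANCZOS)
    big_mask = crop_mask.resize(big.size, Image.NEAREST)

    # Inpaint the enlarged crop (stubbed out above).
    inpainted = run_inpaint(big, big_mask)

    # Shrink back to the original crop size and paste only the masked pixels back in.
    small = inpainted.resize(crop.size, Image.LANCZOS)
    result = image.copy()
    result.paste(small, (left, top), crop_mask)
    return result
```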
@runebinder · 2 months ago
@GrocksterRox I was so excited about the inpaint that I didn't notice the bonus at the end. I've got the beta toolbar enabled at the top and I still have Crystools showing in it, so you don't necessarily have to sacrifice it. It just appeared automatically for me when I enabled it, so I'm not sure why it doesn't show for you.
@GrocksterRox · 2 months ago
@runebinder Oh, that's awesome, I'm going to have to tinker with it then. A resource monitor is a must-have.
@runebinder · 2 months ago
@GrocksterRox Definitely. The node had an "import failed" error for a couple of days once after I installed some other nodes, and not having it drove me up the wall. I just used your inpaint workflow to mask a mangled hand holding a sword on a D&D character portrait I generated earlier, and it's one of the best inpaint results I've seen. It's exactly what I've been after :)
@MilesBellas · 2 months ago
Interesting. Thanks ! 😊👍
@GrocksterRox · 2 months ago
Absolutely! Glad it intrigues and thanks so much for sharing with others to help their learning.
@b3arwithm3 · a month ago
Thanks so much for this excellent tutorial. Which method would you recommend for my use case? I want to generate several pictures that happen in the same place. Let's say I have a picture of a hotel lobby. I want two men talking in the lobby, and another image with a woman waiting for the elevator. I just need the setting to roughly match. Any tips?
@GrocksterRox · a month ago
Thanks so much. Depending on whether you want the woman and the two men in the same picture, you would probably want to do some compositing that lines everything up, followed by a final img2img pass to bake in the image, but I'd have to learn more about the situation to help you. Feel free to jump on the Discord and we can chat (link in the video description).
@faycaltech5922 · 25 days ago
Thanks for this tutorial. When I download the workflow (Inpainting Advanced), it's not complete like the one shown here.
@GrocksterRox · 25 days ago
Hmm... It's the same process (but many of the nodes are collapsed for cleaner organization). You can expand the nodes to have more control over the process. Good luck!
@zRegicideTVz · 2 months ago
I learned a lot after watching this, thank you so much. Do you happen to have a workflow that can change a piece of clothing or an object to a different color?
@GrocksterRox · 2 months ago
So glad! Yes, I've definitely done clothes changing as well as batch changes of clothes - kzbin.info/www/bejne/infJfHhpf96sY9k
@zRegicideTVz · 2 months ago
@GrocksterRox thank you so muchhhh
@GrocksterRox · 2 months ago
@zRegicideTVz anytime
@thibaudherbert3144 · 2 months ago
Hey, thanks for the tutorial. What is the tool you're using for the face animation in your videos called?
@GrocksterRox · 2 months ago
I vary between Live Portrait and Hedra, and I've also been looking at Fuze lately.
@divye.ruhela · 2 months ago
Does the Q8_0 GGUF model work with 12GB cards? Is it better than the FP8 model?
@GrocksterRox · 2 months ago
It may struggle a little, but I just spoke with someone earlier today who's used it successfully on a 12GB card, so you may squeak by. Of course, there are also online services (like RunPod) where you can rent hardware at a very reasonable rate.
@tuurblaffe · 14 days ago
GGUF slows down my generation speed. I use it for CLIP/T5, but the models are very slow when I use GGUF.
@GrocksterRox · 14 days ago
Thanks again, I've noticed exactly the same thing in terms of rendering speed for these models.
@AberrantArt · a month ago
Is Flux required for this method? Flux always crashes on me, so I have to use other model checkpoints.
@GrocksterRox · a month ago
No, you can definitely use this layering method with SDXL as well, but the LoRA is specialized for Flux.
@831digital · 2 months ago
Instead of drawing the mask, can you do this with something like SAM2 and video?
@GrocksterRox · 2 months ago
Very possible, it's easy to adapt many of these methods to video generation, though the video models are still in their infancy (at least the local ones)
@TentationAI · 2 months ago
Nice, thanks a lot. For the last workflow at the end (the firework one), can we replace the "Load Diffusion Model" node with "Unet Loader (GGUF)" to load Flux Dev Q8 as the model?
@GrocksterRox · 2 months ago
Definitely, you can use any model you want for inpainting (there are no Flux-specific inpainting models, at least right now) and it works great!
@Novalis2009 · 2 months ago
Sorry, where do I find the GGUF model? Can't find the link...
@GrocksterRox · 2 months ago
No worries. The Flux LoRA Inventory is here (the models are on the 3rd tab): docs.google.com/spreadsheets/d/1543rZ6hqXxtPwa2PufNVMhQzSxvMY55DMhQTH81P8iM/edit?gid=1074472502#gid=1074472502 and the WoW GGUF model is here: civitai.com/models/147933?modelVersionId=756568
@Pauluz_The_Web_Gnome · 2 months ago
Thanks bro! Nice vid! Greetz...Pauluz RTX 😂
@GrocksterRox · 2 months ago
Always appreciated!
@PeterLunk · 2 months ago
Niceone :)
@GrocksterRox · 2 months ago
Thanks so much!
@ImAlecPonce · 2 months ago
OMG!!!! With the new selection now... I may be able to quit Photoshop!!!
@GrocksterRox · 2 months ago
It's an amazing extension for sure! Really amazing what you can do with it especially when you loop in the AI gen capabilities.
@bushwentto711 · 2 months ago
Bruh what is that AI video in the corner 💀
@GrocksterRox · 2 months ago
Pretty wild, huh? I vary between Live Portrait and Hedra, and I'm looking into Fuze as well.
@bushwentto711 · 2 months ago
@GrocksterRox Give it another year and it won't be distinguishable from real life, but for now it's a bit of a gag hahahaha