Join the conversation on Discord: discord.gg/gggpkVgBf3. You can now support the channel and unlock exclusive perks by becoming a member of pixaroma: kzbin.info/door/mMbwA-s3GZDKVzGZ-kPwaQjoin. Check my other channels: www.youtube.com/@altflux and www.youtube.com/@AI2Play
@pablodara87204 күн бұрын
This video is top-tier quality, I love you
@Uday_अK3 ай бұрын
Thank you. I have been waiting for this inpainting and outpainting video for a long time. There are many tutorials available on YouTube, but understanding the method and implementing it is very important. You have explained it in a very simple way.
@pixaroma3 ай бұрын
thanks uday 🙂
@Patricia_Liu3 ай бұрын
Thanks for all your hard work! I really appreciate the effort you put into your tutorials.
@pixaroma3 ай бұрын
Thank you so much for your support 🙂
@sergeysaulit3 ай бұрын
Simple, accessible, and understandable! You are the best at delivering complex information! Thank you and don't stop!
@pixaroma3 ай бұрын
Thank you 😊
@Kozitaju11 күн бұрын
Perfectly explained! Thanks. I can at last move from Forge to ComfyUI for inpainting thanks to your tutorial and this stitch and crop tool 🙂
@hyperdeloutz2444Ай бұрын
Perfect tutorial. Like the way you explained the workflow.
@pollapatjampeeklang74933 ай бұрын
It was worth the wait; you never truly disappoint.
@pixaroma3 ай бұрын
Thank you
@SumoBundle3 ай бұрын
Thank you for another fantastic episode!
@59Marcel3 ай бұрын
Great tutorial! You made everything so clear and easy to follow - thanks for breaking Inpainting down so well!
@pixaroma3 ай бұрын
Thank you 😊
@emergentcomics49263 ай бұрын
I've been waiting on this one. Thanks so much! I can't wait to give it a try
@baheth3elmy163 ай бұрын
Great episode! Thank you very much!!!!
@SebAnt3 ай бұрын
Wonderful tutorial once again. I like that you explained denoise and also demonstrated Photoshop techniques at the end! I will try it out once I get some free time…
@pixaroma3 ай бұрын
thank you 🙂
@cives3 ай бұрын
On the other hand, I'd prefer no Photoshop here. Or, if there is any, not using its AI features, so that we can use GIMP as an alternative. The magic of this channel is its focus on ComfyUI and the freedom it gives us to do what we want, under our control. I moved away from Leonardo and the like looking for that. Besides, Photoshop is the exact opposite of free (in both senses). If this channel shifts toward more Photoshop, it will lose me. There are other fantastic channels out there, and this one would lose its edge by losing its focus.
@staicpaic492816 күн бұрын
Inpainting is an awesome feature, this video was very helpful, thank you very much.
@pixaroma16 күн бұрын
Glad it was helpful :)
@ob3ythee.t.1282 ай бұрын
Very well done tutorial series, very easy to follow. I also recommend Stability Matrix for anyone who doesn't want to set up ComfyUI this way.
@Jinjinyajin3 ай бұрын
Love the tutorial details and it's easy to understand
@RealNazrax3 ай бұрын
I was hoping you'd cover inpainting! Now, please do IP Adapter :)
@aminshobeiri8901Ай бұрын
Life Saver, thanks for sharing.
@Filokalee9993 ай бұрын
Thanks! Very clear and valuable inpainting tutorial. While trying a similar workflow, using the Differential Diffusion node in combination with InpaintModelConditioning seems to give better integration results than InpaintModelConditioning alone. Additionally, if this node refuses to inpaint the shirt, try using another inpainting model (such as BrushNet).
@pixaroma3 ай бұрын
thank you
@JefHarrisnation3 ай бұрын
Love your tutorials.
@UmarandSaqib3 ай бұрын
awesome!
@alexandrapadureanu41923 ай бұрын
New interesting stuff ❤
@maxmad62tube3 ай бұрын
Very impressive and informative, thank you very much.
@arep74243 ай бұрын
For the color problem and fine-tuning denoise, just add a ControlNet (depth works well) with the cropped image as reference. This allows a higher denoise (even 1) on the sampler without deforming or transfiguring the subject.
@pixaroma3 ай бұрын
thank you, I will give it a try :)
@ling670112 күн бұрын
Thanks. And now we have a DIY version of Photoshop's Generative Fill in our arsenal, like another Christmas :)
@hannibal9110073 ай бұрын
Hi. Thanks for this valuable series on ComfyUI. I would suggest an episode on how to train a LoRA in ComfyUI: which workflow to use depending on the checkpoint, and the best checkpoints you'd suggest we try... Keep going and thanks.
@pixaroma3 ай бұрын
Thanks, yes, people keep asking me. I just didn't find an easy way to do it locally, only online, and that is not free. The methods I tried are harder to install and give errors that I'm not able to explain to the community how to fix, since I am a designer, not a coder. So I have been waiting for an easy solution; FluxGym can work sometimes, but it still gives errors now and then.
@K-A_Z_A-K_S_URALA15 күн бұрын
You explain everything very clearly, well done! Question: where can I get the monitoring display like yours, next to the "Manager" button? I updated, but I don't have it... And another question: will you record a video on LoRA inpainting with the new accelerators?
@pixaroma15 күн бұрын
It's the Crystools node that shows those; you can move the top bar to the bottom and back to the top if it doesn't appear right away. As for the new accelerators, I first need to do some research to see what they're about.
@K-A_Z_A-K_S_URALA15 күн бұрын
@@pixaroma Thank you very much, have a nice day! It's nighttime for us here in Russia :)
@K-A_Z_A-K_S_URALA15 күн бұрын
You are a very cool and smart person!
@damnned3 ай бұрын
Fooocus inpaint is the king I think
@pixaroma3 ай бұрын
I think I saw someone using a Fooocus model in ComfyUI, but I'm not sure.
@radedr0073 ай бұрын
great video
@pixaroma3 ай бұрын
thank you 🙂
@kattamaran3 ай бұрын
Add a depth ControlNet to make the generation more similar, especially in Flux.
3 ай бұрын
Exciting as always! Could you share the inpainting and outpainting workflow so that I don't have to rebuild it from the video?
@pixaroma3 ай бұрын
It's on Discord, in the pixaroma-workflows channel; the link to Discord is in the channel header or in the video description.
3 ай бұрын
@@pixaroma Oh, I found it, thank you!
@pixaroma3 ай бұрын
This is the direct link; mention pixaroma there if you still can't find it: discord.com/channels/1245221993746399232/1270589667359592470/1300813045127188591
@MrOlivazАй бұрын
Thanks a lot, man! You're amazing. What is the recommended image size for inpainting in Flux using Q8? I mean, to get the best results. Thanks in advance!
@pixaromaАй бұрын
The node only makes a selection of what you paint in the mask editor, so that is the actual size it can inpaint. I'd say if that inpaint area is around 1024px it's OK; with Flux you can go a little higher, but with SDXL you can't go too high.
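If you want to check how large your masked area actually is before inpainting, here is a minimal sketch (not from the video) using Pillow; it assumes you have saved the mask out of the mask editor as a grayscale image, with the hypothetical file name mask.png:

```python
# Minimal sketch: measure the bounding box of a saved mask to see how large
# the region the inpaint crop step will work on is. "mask.png" is a
# hypothetical exported mask, not a file from the video's workflow.
from PIL import Image

mask = Image.open("mask.png").convert("L")   # grayscale: painted pixels are non-zero
bbox = mask.getbbox()                        # (left, top, right, bottom) of the painted region
if bbox is None:
    print("Mask is empty")
else:
    left, top, right, bottom = bbox
    width, height = right - left, bottom - top
    print(f"Masked area: {width}x{height} px")
    # Rough rule of thumb from the reply above: ~1024px is comfortable,
    # Flux tolerates a bit more, SDXL less.
```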
@silfrasjoeld26 күн бұрын
About the shortcut to increase brush size: rotating the mouse wheel zooms instead. What is the current shortcut to increase the brush size?
@pixaroma26 күн бұрын
It should be scroll, but try it with a key pressed (I don't remember which: Shift, Alt, or Ctrl), or maybe use the bracket [ ] keys.
@djwhispers3157Ай бұрын
Is there a workflow that helps clear up and add detail to low-res, low-light images, from low quality to 4K UHD? I noticed in this workflow that refining with Flux can add more detail.
@pixaromaАй бұрын
I only use an upscaler, like in episode 12.
@TimesNewRomanAI3 ай бұрын
Thank you very much for all the information and how easy it is to follow. What PC configuration do you have, and what generation times do you get?
@pixaroma3 ай бұрын
RTX 4090 with 24 GB of VRAM, 128 GB of RAM. It depends on the model, between 3 and 15 seconds; SDXL is fast, and Flux takes about 14-15 seconds to generate an image.
@TimesNewRomanAI3 ай бұрын
@@pixaroma Thanks for the info. 15 seconds is really fast
@ArielTavori11 күн бұрын
FWIW, unless I missed some critical changes to architecture or tooling, inpainting models should ONLY be used with a full denoise strength of 1.0. Standard inpainting workflows feed pure noise into the masked area, and inpainting models generally do very poorly at img2img. If you want slight modifications, you will have much better luck and much more control using standard models with img2img workflows and a low denoising strength, anywhere from 0.3 to 0.9 depending on the model and situation.
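To make the denoise point concrete, here is a hedged sketch of where that value sits in a ComfyUI API-format prompt. This is not the workflow from the video; the node IDs and upstream connections ("4", "5", "6", "7") are placeholders:

```python
# Minimal sketch, not a complete workflow: the denoise value on a KSampler
# node in a ComfyUI API-format prompt dict. Upstream node IDs are assumptions.
ksampler_node = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "model": ["4", 0],          # e.g. an inpainting checkpoint
            "positive": ["6", 0],
            "negative": ["7", 0],
            "latent_image": ["5", 0],
            "seed": 42,
            "steps": 20,
            "cfg": 7.0,
            "sampler_name": "euler",
            "scheduler": "normal",
            # Per the comment above: 1.0 for dedicated inpainting models,
            # roughly 0.3-0.9 for img2img-style edits with a standard model.
            "denoise": 1.0,
        },
    }
}
```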
@bratan0073 ай бұрын
Thank you for the great tutorial! How do you get workflow tabs in ComfyUI?
@pixaroma3 ай бұрын
Go to Settings (the gear wheel) and search for "workflow", look for "Opened workflows position", and choose Topbar instead of Sidebar.
@GabbakiАй бұрын
Can you tell me how I can change the color of a garment with such a small workflow? But without changing the garment! Only the color.
@pixaromaАй бұрын
It's hard to keep things intact; I would probably just change the color in Photoshop and then use image-to-image with a low denoise to better blend the colors.
@ian25933 ай бұрын
After upgrading to the new interface, I can't tell where my workflows are being saved to. Is there some way to tell?
@pixaroma3 ай бұрын
You can export it to any folder you want: go to the top-left corner, open Workflow, choose Export, and pick a folder; with the workflow open you can open it from there later. If you use Save or Save As, it will go into the workflows folder; the path is something like ComfyUI_windows_portable\ComfyUI\user\default\workflows
@bentontramell3 ай бұрын
Flux chin sighted. 😅😊
@pixaroma3 ай бұрын
😁 can be fixed with sdxl Inpaint 😂
@CharlesPrithviRaj2 ай бұрын
How do we include the new Flux Canny and Depth LoRAs in the inpainting workflow? And how do we combine the instruct pix2pix and inpainting conditioning nodes?
@pixaroma2 ай бұрын
You can connect the LoRA just like I did at the end of episode 24, but I remember I tried something and it didn't work quite as expected. Maybe because the LoRA needs a Flux guidance of 10 and inpainting doesn't need that much, or maybe I didn't use the right settings.
@CharlesPrithviRaj2 ай бұрын
@@pixaroma Yes, I was confused about which latent should go to the KSampler: the one from the inpaint conditioning or the one from pix2pix. See if you can figure it out.
@pixaroma2 ай бұрын
@@CharlesPrithviRaj Ah, I see what you mean; that's probably why I didn't get the result I expected, I think I skipped one of the inpaint nodes :)) I didn't test it, but you can try one of these nodes: LatentAdd or LatentBlend. The latents from both inpaint nodes go into that latent node, and from that node you connect to the KSampler, so you combine both latents. In theory it should work, but as I said, I didn't test it; let me know if it works.
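For readers wondering what combining two latents like that actually does, here is a conceptual PyTorch sketch under stated assumptions; it is not the nodes' actual implementation, just the arithmetic a LatentBlend- or LatentAdd-style node roughly performs before the result reaches the KSampler:

```python
# Conceptual sketch only (assumes PyTorch; shapes are illustrative, not taken
# from any real workflow): blending two latents before they reach the sampler.
import torch

latent_a = torch.randn(1, 4, 128, 128)   # e.g. latent from the inpaint conditioning path
latent_b = torch.randn(1, 4, 128, 128)   # e.g. latent from the pix2pix path
blend_factor = 0.5                        # 1.0 = only latent_a, 0.0 = only latent_b

blended = latent_a * blend_factor + latent_b * (1.0 - blend_factor)  # LatentBlend-style mix
added = latent_a + latent_b                                          # LatentAdd-style sum
print(blended.shape, added.shape)
```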
@emrekiratli2614Ай бұрын
Is there any way to save only the inpainted part? For instance, I inpaint a hat and I get two images, stitched and unstitched, but what I need is only the generated hat.
@pixaromaАй бұрын
When it inpaints, it doesn't just add a hat with a transparent background on top. It uses the cropped image of the area you masked, and based on that it generates a new image that is then placed on top of the original. So if it's a bunny head and you ask for a hat, the output will be a bunny head with the hat, which is merged back into the original full-body bunny image. You can see what that image looks like if you connect a preview or save image node to the VAE Decode, right before it goes to the stitch node at the end.
@r3vdev3 ай бұрын
When are you going to do an IMG2VIDEO tutorial???? :)
@pixaroma3 ай бұрын
When there is a good video model. So far, the video models I've seen are not really usable compared with Kling AI, for example, which creates decent video.
@yasinolgun75483 ай бұрын
In the introduction part of the video, the female picture appears in motion. How can I do that, what should I research?
@pixaroma3 ай бұрын
I used image-to-video on the platform klingai.com/. So far I haven't found a free local option that does it that well, which is why I used an online platform.
@icedzinnia2 ай бұрын
When I did this tutorial, it didn't work (5:52). After much fiddling around with different loaded images and different masks, I realized it won't work if the mask is not continuous. I couldn't make it work when I had a mask on the far right edge and a separate one on the far left edge. WEIRD.
@pixaroma2 ай бұрын
I usually do one selection. On the Inpaint Crop node there is an option called fill_mask_holes; maybe try turning it off to see if that helps. If not, it might be a bug.
@topy7063 ай бұрын
is there a way to increase the batch size?
@pixaroma3 ай бұрын
Next to Queue you have a batch size of 1; you increase that number.
@topy7063 ай бұрын
@@pixaroma But in the queue that's batch count, not batch size. I tried adding a "Repeat Latent Batch" node, but the stitch node does not like it.
@pixaroma3 ай бұрын
I don't know of any; I usually just use that batch option and check back in a few minutes, or use the increment queue. So only if you find a node that can do that.
@topy7063 ай бұрын
@@pixaroma okay thank you anyways. 👍
@eros6398Ай бұрын
Friend, I have an annoying problem: I just can't find the Florence2 image prompt node.
@pixaromaАй бұрын
There are two; I use ComfyUI-Florence2 by kijai, you can find it in the Manager. This is their page: github.com/kijai/ComfyUI-Florence2. You can see how I use it in episode 11: kzbin.info/www/bejne/r6bXiohvbKedbacsi=EZ2yHl6-7tsbGapX
@eros6398Ай бұрын
@@pixaroma thank you, man
@audiogus26513 ай бұрын
Is there a Comfy node that lets you paint on the image like in Forge?
@pixaroma3 ай бұрын
I don't know of one, but there are so many nodes that I'm sure some let you do that.
@sollmasterdoodle3 ай бұрын
Thx! But what version of ComfyUI are you using?
@pixaroma3 ай бұрын
If I go to Manager and scroll down on the right, I see this: ComfyUI: 2797[770ab2](2024-10-29), Manager: V2.51.8. As for the release, it's v0.2.5.
@sollmasterdoodle3 ай бұрын
@@pixaroma Thx, but is it the .exe, the ComfyUI Desktop V1? That's what I'm waiting for.
@pixaroma3 ай бұрын
@@sollmasterdoodle No, that is still in beta from what I know, and if you are not on the list you don't have access to it yet.
@sollmasterdoodle3 ай бұрын
@@pixaroma Thx a lot to you! Perfect video!
@bubuububu3 ай бұрын
Excuse my amateur question, but how can I batch more images?
@pixaroma3 ай бұрын
It depends: you can use a load image node from iTools, for example, that lets you take images from a folder, or you can use the batch option next to Queue to run the workflow multiple times, depending on what you need to do. Check episode 15 if you want to load a folder of images.
@bubuububu3 ай бұрын
@@pixaroma I don't think I'm understanding. So, for example, if I want to get 10 results with different glasses on the same face, where do I put the batch size?
@pixaroma3 ай бұрын
@@bubuububu It's hard to explain in text; it would have been much easier to show a screenshot on Discord. If you have the new interface with the floating QUEUE button, it has a number next to it, like 1; you can increase that number to 10 and it will run the workflow 10 times. If your seed is not fixed (usually it's set to randomize), it will run 10 times, each time with a different seed, so you get different results, and then it will stop. If you have the old interface with the QUEUE PROMPT button, under it there is an Extra Options checkbox, and there you have a batch count that defaults to 1, so you can put 10 there and run it.
@bubuububu3 ай бұрын
@@pixaroma thanks a lot. i knew i was missing something basic 😅
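Following up on the batch-count question in the thread above, here is a minimal script-side sketch for people who prefer to queue runs from code instead of raising the number in the UI. It assumes a local ComfyUI server on the default address and a workflow exported in API format; the file name workflow_api.json and the seed node id "3" are assumptions, not something from the video:

```python
# Minimal sketch: queue the same exported workflow ten times through ComfyUI's
# local API, randomizing the seed so each run gives a different result
# (e.g. different glasses on the same face). File name and node id are assumptions.
import json
import random
import requests

with open("workflow_api.json") as f:
    prompt = json.load(f)

for _ in range(10):
    prompt["3"]["inputs"]["seed"] = random.randint(0, 2**32 - 1)  # assumed seed node id
    requests.post("http://127.0.0.1:8188/prompt", json={"prompt": prompt})
```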
@AInfectados3 ай бұрын
And what about the *Controlnet Inpaint BETA* for FLUX?
@pixaroma3 ай бұрын
I didn't try it yet; since it's Flux + ControlNet it will take extra time to generate because it loads an extra model, but I will play with it to see if I can get better results.
@jessedbrown19803 ай бұрын
I am trying to do a single-transform inpainting with Flux 1.1 dev trained LoRAs and the same model for inpainting. The model is trained on materials, and when I run it through with a mask, the whole picture changes instead of just the masked area. I might be missing a node, but I just want the masked area to be changed, and it always changes the whole picture and puts artifacts in it. I already have LoRA strength at 1 and I need to play with it a lot, but I have not been able to control the masked area or the picture, so weird! The setup you have does not have a LoRA in it, which would be effective. Anyone have a setup with a LoRA?
@pixaroma3 ай бұрын
I didn't try it with lora yet
@jessedbrown19803 ай бұрын
@@pixaroma Would you be interested in collaborating? I need to get it done! Good experience too; we have like 700 LoRAs trained, waiting to be used!!
@pixaroma3 ай бұрын
@@jessedbrown1980 I am checking the message on Discord now.
@OaklandSignalCollapse2 ай бұрын
Thanks for not loading this up with a bunch of BS
@raz0rstr28 күн бұрын
Can I replace an entire face with my face based on my trained LoRA?
@pixaroma28 күн бұрын
Check episode 24; at the end I did something like that with LoRA inpainting.
@mleitejunior9 күн бұрын
Does somebody have an invite to join the official ComfyUI Matrix channel? My friend and I have been trying to join for days...