Flux Tools for Low VRAM GPUs | Introduction to Inpainting & Outpainting

4,605 views

Monzon Media

Comments: 81
@MonzonMedia
By the way, there is Flux Tools support for SwarmUI and SDNext. Fingers crossed that Forge adds this once the current round of updates is done!
@havemoney
This is what JackDainzh answered (I don't know who he is): It's not a ControlNet model; it's an entirely separate model designed to generate on its own, not to guide another model. The issue is that, with the current img2img implementation of Flux, there is no way to guide the model, say, with pix2pix instruct's Image CFG Scale slider, because it doesn't affect anything at the moment (I forced it visible in the UI when using Flux). Because Flux's conditioning is different from that of regular SD models, it gets skipped. Implementing guidance from the img2img tab would mean rewriting the whole backend engine, and I have no idea how long that would take, maybe months, maybe a day.
@SouthbayJay_com
These are so cool! Can't wait to dive into these! Thanks for sharing the info! 🙌🙌
@MonzonMedia
@@SouthbayJay_com appreciate it bro! Have fun! 🙌🏼
@onurerdagli
Thank you! I just tried outpainting and inpainting. Truly amazing quality.
@MonzonMedia
Indeed! I haven't seen any major issues yet, but I'm still testing. Impressive so far, especially with outpainting. 👍
@AimanFatani
Been waiting for something like this... thanks for sharing ❤❤
@MonzonMedia
You’re welcome 😊
@vVinchi
This will be a good series of videos
@MonzonMedia
Indeed! Already working on the next one. Good to hear from ya bud!
@havemoney
We are waiting for lllyasviel to add it to Forge.
@MonzonMedia
🙏😊 I do see some activity on the GitHub page and no further "delays" posted. Fingers crossed my friend!
@mik3lang3lo
❤ we are all waiting ❤
@Elwaves2925
I finally got to do the inpainting I needed with Flux. On 12 GB VRAM with the full 'fill' model it was a lot quicker than I expected. That's the only one I've tried so far, but with how well it worked I'm looking forward to the others.
@MonzonMedia
@Elwaves2925 Good to know it can run on 12 GB VRAM. Have you tried the FP8 version, and if you do, is it any faster?
@Elwaves2925
@MonzonMedia I haven't had a chance to try the FP8 version as I didn't know it existed until your video. I will be trying it later.
@FranktheSquirell
You did it again, great job as usual 😊😊 The only trouble is I've been using Fooocus for in/outpainting and now you've made me want to try it in ComfyUI, grrrr lol. I gave up on Comfy because an update broke the LLM generator I was using in it. Come to mention it, I can't even remember the name of the generator now... damn 🤣🤣🤣🤣
@MonzonMedia
😊 It does take some time to get used to; I have a love/hate relationship with ComfyUI hehehe. But it is worth knowing, especially since it gets all the latest features quickly. At the very least, learn how to use drag-and-drop workflows and install any missing nodes. That's pretty much all most people need to know.
@contrarian8870
Thanks. Do the Redux next, then Depth, then Canny
@MonzonMedia
Welcome! Redux is pretty cool! I will likely do it next, then combine the two ControlNets in another video.
@skrotov
Great, thanks. By the way, you can hide the noodles by pressing the eye icon at the bottom right of the screen.
@MonzonMedia
Indeed! I do like to use the straight ones, though I switch to spline when I need to see where everything is connected. 😊 👍
@LucaSerafiniLukeZerfini
Can't wait. I found Flux less effective for design than SDXL.
@TheColonelJJ
As always, your videos are a welcome view. Favor to ask: since things come to Comfy so fast, could you add a sound bite when something isn't quite ready for Forge yet?
@MonzonMedia
Yeah, I normally do but forgot this time, although I did post a pinned comment that there is support for other platforms like SDNext and SwarmUI.
@cekuhnen
Redux will be fun for MJ to deal with.
@MonzonMedia
Hey my friend! Nice to see you here! I haven't used MJ in a while, but there is a lot more you can do locally compared to MJ, plus way more models to choose from. Hope all is well with you. 👍
@cekuhnen
@MonzonMedia My MJ sub will end this year and I won't go back. Vizcom has become so powerful, and Rubbrband is also shaping up really well.
@bause6182
Thanks for the guide. Is it possible to run Redux with low VRAM?
@MonzonMedia
How low? The Redux model itself is very small, only 129 MB, so if you have a low-VRAM GPU just use the GGUF Flux models and you should be good to go! It runs great on my 3060 Ti (8 GB VRAM) with the Q8 GGUF model.
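For reference, the GGUF route mentioned here only swaps out the model loader; the rest of the workflow stays the same. Below is a minimal sketch in ComfyUI's API format (a Python dict), assuming the community ComfyUI-GGUF custom node pack is installed. The node id and file name are placeholders.

```python
# Sketch of a ComfyUI API-format fragment, assuming the ComfyUI-GGUF
# custom node pack is installed. Only the model loader changes; the
# rest of the Flux workflow can stay as-is.
workflow_fragment = {
    # GGUF-quantized Flux UNet instead of the full-precision checkpoint.
    # "flux1-dev-Q8_0.gguf" is a placeholder file name; pick whatever
    # quantization fits your VRAM (Q8_0, Q6_K, Q4_K_S, ...).
    "1": {
        "class_type": "UnetLoaderGGUF",
        "inputs": {"unet_name": "flux1-dev-Q8_0.gguf"},
    },
    # Downstream nodes reference this loader by id exactly as they would
    # reference the regular UNETLoader, e.g. ["1", 0] for the MODEL output.
}
```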
@bause6182
@MonzonMedia Thank you! Do we need another workflow for GGUF models?
A month ago
Thanks for the vid! And do you know if the flux.1-fill-dev (23 GB) version is an extended version of the original flux.1-dev, or a whole new thing, meaning you have to install both?
@MonzonMedia
You're welcome! Typically, in/outpainting models are just trained differently, but they should be based on the original model.
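For context, here is a hedged sketch (ComfyUI API format; node ids and file names are placeholders) of how the fill checkpoint typically slots into an inpaint graph in community workflows: it is loaded like any other Flux diffusion model, and an inpaint-conditioning node supplies the source pixels and mask. Exact node inputs can differ slightly between ComfyUI versions (newer builds add a noise-mask toggle, for example).

```python
# Sketch of the key nodes in a typical Flux fill inpaint graph
# (ComfyUI API format; ids and file names are placeholders).
fill_fragment = {
    # flux1-fill-dev replaces flux1-dev as the diffusion model;
    # it is a full standalone checkpoint, loaded the same way.
    "10": {
        "class_type": "UNETLoader",
        "inputs": {
            "unet_name": "flux1-fill-dev.safetensors",
            "weight_dtype": "fp8_e4m3fn",  # or "default" for the full 23 GB weights
        },
    },
    # InpaintModelConditioning feeds the source image and mask to the
    # inpaint-aware model alongside the text conditioning.
    "20": {
        "class_type": "InpaintModelConditioning",
        "inputs": {
            "positive": ["30", 0],  # CLIPTextEncode -> conditioning
            "negative": ["31", 0],
            "vae": ["11", 0],       # VAELoader
            "pixels": ["40", 0],    # LoadImage image output
            "mask": ["40", 1],      # LoadImage mask output
        },
    },
}
```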
A month ago
@@MonzonMedia Got it! thanks!
@RiftWarth
Could you please do a video on crop and stitch with Flux tool inpainting?
@MonzonMedia
Yes, of course! I'll be covering it in my next inpainting video 👍
@RiftWarth
@MonzonMedia Thank you so much. Your tutorials are really good and easy to follow.
@hotlineoperator
Some people keep several functions or "workflows" on one desktop, turning them on and off as needed. Others keep separate workflows on completely different desktops or use them one by one. Is there a convenient function in ComfyUI that allows you to switch between different workflows, as if you had several desktops open and could choose the one that suits what you are doing?
@MonzonMedia
The new ComfyUI has a workflow panel on the left that lets you select your saved or recently used workflows. Alternatively, there is a fairly new tool I've been trying out called Flow that comes with several pre-designed workflows. The downside is that you can't save custom workflows in it yet, but I hear that option will come soon; I'll be doing a video on it. Other than that, it really is a personal thing and whatever works best for you.
@Maylin-ze6qx
❤❤❤❤
@MonzonMedia
Thank you! 😊
@LucaSerafiniLukeZerfini
Great to follow. I updated Comfy but it returns this: RuntimeError: Error(s) in loading state_dict for Flux: size mismatch for img_in.weight: copying a param with shape torch.Size([3072, 384]) from checkpoint, the shape in current model is torch.Size([3072, 64]). By the way, Depth and Canny would be the best to see.
@MonzonMedia
Context? What were you doing? What are your system specs?
@LucaSerafiniLukeZerfini
I managed to pull a ComfyUI update and now it works. I'm still seeing visible outlines on the outpainting. Thanks for the reply. I'm on Windows with an RTX 4090.
@MonzonMedia
Cool! Yeah, always update when new features come out. If you're seeing seams when outpainting, try increasing the feathering or doing one or two sides at a time. Results can vary.
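Both of those suggestions live in the padding node. As a minimal sketch (ComfyUI API format; node ids and values are illustrative), this is what extending only the left/right sides with a higher feathering value looks like using the built-in "Pad Image for Outpainting" node:

```python
# ComfyUI API-format fragment for the built-in "Pad Image for Outpainting"
# node (class_type ImagePadForOutpaint). Values are illustrative: extend
# only the left/right sides in this pass and raise feathering to soften
# the seam between the original pixels and the new area.
outpaint_pad = {
    "50": {
        "class_type": "ImagePadForOutpaint",
        "inputs": {
            "image": ["40", 0],  # source image from a LoadImage node
            "left": 256,         # pixels to add on the left
            "top": 0,            # leave top/bottom alone for this pass
            "right": 256,
            "bottom": 0,
            "feathering": 64,    # larger values blend the seam more
        },
    },
    # The padded IMAGE and generated MASK (outputs 0 and 1) then feed the
    # inpaint conditioning, the same as a regular inpaint pass.
}
```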
@LucaSerafiniLukeZerfini
Yes, maybe doing the sides separately works better. Another thing: I'm trying to manage background swapping for a car, but the results are still awful with Flux.
@MonzonMedia
@LucaSerafiniLukeZerfini The crop-and-stitch inpaint node might be better for that, but the Redux model can also do it. I'll be posting a video on Redux soon.
@skrotov
What I don't like about this new fill model is that it seems to work on the actual pixels without enlarging the painted area, as we did in Automatic1111. As a result, we get low detail and poor quality if the masked object is not very big.
@MonzonMedia
That has more to do with the platform you are using. For example, Fooocus and Invoke AI have methods where the inpainted area is generated at its native resolution. I can't recall whether ComfyUI has a node that does that, but I'm pretty sure there is one. Might make a good video topic. 👍
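The "regenerate the masked area at native resolution" idea described above can also be done by hand: crop around the mask, upscale, inpaint, then shrink and stitch back. Below is a minimal Pillow sketch of that idea; run_inpaint is a hypothetical stand-in for whatever actually performs the inpainting (a queued ComfyUI workflow, a diffusers pipeline, etc.), and in ComfyUI the community crop-and-stitch inpaint nodes do essentially the same thing.

```python
from PIL import Image

def inpaint_at_native_res(image: Image.Image, mask: Image.Image,
                          run_inpaint, target: int = 1024, pad: int = 32):
    """Crop the masked region, inpaint it near `target` resolution, paste it back.

    `mask` is a single-channel ("L") image where white marks the area to inpaint.
    `run_inpaint(img, msk) -> Image` is a hypothetical callable standing in for
    the actual inpainting backend.
    """
    # Bounding box of the masked (non-zero) area, padded a little for context.
    left, top, right, bottom = mask.getbbox()
    left, top = max(left - pad, 0), max(top - pad, 0)
    right, bottom = min(right + pad, image.width), min(bottom + pad, image.height)

    crop = image.crop((left, top, right, bottom))
    crop_mask = mask.crop((left, top, right, bottom))

    # Scale the crop so the model works near its native resolution.
    scale = target / max(crop.size)
    big = crop.resize((round(crop.width * scale), round(crop.height * scale)),
                      Image.LANCZOS)
    big_mask = crop_mask.resize(big.size, Image.NEAREST)

    result = run_inpaint(big, big_mask)

    # Downscale the result and stitch it back into the original image.
    small = result.resize(crop.size, Image.LANCZOS)
    stitched = image.copy()
    stitched.paste(small, (left, top), crop_mask)  # mask limits the pasted pixels
    return stitched
```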
@Scn64
When painting the mask, what effect do the different colors (black, white, negative) and opacity have on the outcome? Does the resulting inpaint change at all depending on which color/opacity you choose?
@MonzonMedia
It's just a visual preference; it has no effect on the outcome.
@RikkTheGaijin
SwarmUI tutorial please
@MonzonMedia
Working on it! 😊
@eledah9098
Is there a way to include LoRA models for inpainting?
@MonzonMedia
Not sure what you mean? Do you want to use a LoRA to inpaint? It doesn't work that way.
@generalawareness101
How do I get Flux to inpaint text? I have tried everything; all I want is to take an image and have Flux add the text it generates over it.
@MonzonMedia
The same way you would prompt for it: just state in your prompt something like "text saying _________" and inpaint the area where you want it to show up.
@generalawareness101
@MonzonMedia I tried doing that for a few days and it just never worked. I could ask for a lake, or an army, or whatever, and that it would do, but never the text. Stumped.
@FranktheSquirell
Me again lol 😊 Have you tried the "DMD2" SDXL models yet? Not that many about, but wow are they impressive. Prompt adherence is about the same as Flux Schnell, but the image quality is really good. They say 4-8 steps, but a 12-step DMD2 image gives better results IMO. Then again, I'm getting old now and my eyes aren't as good as they used to be... that's my excuse 🤣🤣
@MonzonMedia
Not yet but I remember reading about it on Reddit. Thanks for the reminder!
@baheth3elmy16
With inpainting it disturbs the composition of the image and the results are not that good. The same goes for outpainting: the final images are distorted at the edges and lose detail. I'm using the FP8 model.
@MonzonMedia
I've had a good experience so far with both inpainting and outpainting. Make sure you are increasing the Flux guidance. There are other methods of inpainting that should help preserve the original composition, which I will cover soon.
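In a ComfyUI Flux graph, that guidance value lives in the FluxGuidance node, which sits between the text encoder and the sampler. A minimal sketch (API format; node ids are placeholders) using the 20-30 range mentioned later in this thread:

```python
# ComfyUI API-format fragment: the FluxGuidance node is where the
# "flux guidance" value is raised for fill/inpaint work. Node ids are
# placeholders.
guidance_fragment = {
    "60": {
        "class_type": "FluxGuidance",
        "inputs": {
            "conditioning": ["30", 0],  # positive CLIPTextEncode output
            "guidance": 30.0,           # fill inpaint/outpaint tends to want 20-30,
                                        # far above the ~3.5 used for plain generation
        },
    },
}
```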
@ProvenFlawless
Huh. What is the difference between the XLabs and Shakker-Labs Canny/Depth ControlNets? Why is this one special? We already have two of them. Someone please explain.
@MonzonMedia
From what I recall they have two: the Union Pro ControlNet (6 GB), which is an all-in-one with multiple control types. It's pretty decent but still needs more training. They also have a separate depth model that is 3 GB; this one is only 1.2 GB. I've yet to do side-by-side comparisons, though. It was the same with SDXL: we'll keep getting ControlNets from the community until one is trained better. Keep in mind ControlNet for Flux is still very new.
@Xenon0000000
When I try the outpainting workflow, the pictures come out all pixelated, especially the added part. What am I doing wrong? I'm using the same parameters, denoise is already at 1. Thank you for your videos by the way, you should have way more subs!
@MonzonMedia
@Xenon0000000 I appreciate the support and kind words. Are you using a high Flux guidance? 20-30 works for me.
@Xenon0000000
@@MonzonMedia I left it at 30, I'll try changing that parameter too, thank you.
@MonzonMedia
I have a much better workflow that gives better results, and I'll be sharing it with you all soon. I hope to post it some time tomorrow (Wednesday).
@havemoney
I'll go play Project Zomboid, I recommend it
@MonzonMedia
Ooohhh, I will check it out! I finally played Final Fantasy VII Remake! 😬😊 Loved it!
@rogersnelson7483
I tried both the big model and the FP8. Nothing but really BAD results, and I don't know why. I'm on 8 GB of VRAM. All I get is random noise around the outpainted areas, and the original image mostly turns to noise as well. Also, should it take 6 to 10 minutes for one image?
@MonzonMedia
I'm going to do a follow-up video on inpainting. What is shown here is very basic and sometimes doesn't give the best results. There are a couple of other nodes that will help. Stay tuned!
@rogersnelson7483
@MonzonMedia Thanks for your reply. I'll be watching. Keep up the good work as usual. Man, I started watching you back at Easy Diffusion.
@MonzonMedia
Whoa! That's awesome! 😁 I appreciate the support, both then and now.
@sokphea-h5q
Error(s) in loading state_dict for Flux: size mismatch for img_in.weight: copying a param with shape torch.Size([3072, 384]) from checkpoint, the shape in current model is torch.Size([3072, 64]). What can I do?
@benkamphuis5614
Same here!
@MonzonMedia
Did you do an update?
@_O_o_
I had the same problem. My DualCLIPLoader type was set to "sdxl" instead of "flux"... maybe that helps haha
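For anyone checking that setting, here is a hedged sketch (ComfyUI API format; node id and file names are the commonly distributed ones and may differ on your machine) of a DualCLIPLoader configured for Flux, per the comment above:

```python
# ComfyUI API-format fragment for DualCLIPLoader configured for Flux.
# Setting "type" to "flux" (rather than "sdxl") makes the loader produce
# the text-encoder conditioning Flux models expect. File names are the
# commonly used ones and are placeholders here.
clip_fragment = {
    "70": {
        "class_type": "DualCLIPLoader",
        "inputs": {
            "clip_name1": "t5xxl_fp8_e4m3fn.safetensors",  # T5-XXL text encoder
            "clip_name2": "clip_l.safetensors",            # CLIP-L text encoder
            "type": "flux",
        },
    },
}
```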