By the way, there is Flux Tools support for SwarmUI and SDNext. Fingers crossed that Forge adds this once its updates are done soon!
@havemoney (1 month ago)
This is what JackDainzh answered (I don't know who he is): It's not a ControlNet model, it's an entirely separate model that is designed to generate, not to guide. The issue is that, with the current img2img implementation of Flux, there is no way to guide the model the way pix2pix-instruct's Image CFG Scale slider does, because that slider doesn't affect anything at the moment (I forced it visible in the UI when using Flux); because Flux's conditioning is different from that of regular SD models, it gets skipped. Implementing guidance from the img2img tab means rewriting the whole backend engine, and I have no idea how long that will take; maybe months, maybe one day.
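Roughly, the difference looks like this (an illustrative sketch with made-up function names, not Forge's actual code):

```python
# InstructPix2Pix-style guidance blends several conditioned passes; the
# "Image CFG Scale" slider is the s_img weight in this mix:
def ip2p_denoise(model, z, img_cond, txt_cond, s_img, s_txt):
    e_uncond = model(z, image=None, text=None)          # unconditional pass
    e_img    = model(z, image=img_cond, text=None)      # image-only pass
    e_full   = model(z, image=img_cond, text=txt_cond)  # image + text pass
    return e_uncond + s_img * (e_img - e_uncond) + s_txt * (e_full - e_img)

# Flux-dev is guidance-distilled: the strength is fed in as an embedding on a
# single forward pass, so there is no extra pass for an image-CFG slider to mix.
def flux_denoise(model, z, txt_cond, guidance):
    return model(z, text=txt_cond, guidance_embed=guidance)
```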
@SouthbayJay_com (1 month ago)
These are so cool! Can't wait to dive into these! Thanks for sharing the info! 🙌🙌
@MonzonMedia (1 month ago)
@@SouthbayJay_com appreciate it bro! Have fun! 🙌🏼
@onurerdagli (1 month ago)
Thank you, I just tried outpainting and inpainting; truly amazing quality.
@MonzonMedia (1 month ago)
Indeed! I haven't seen any major issues yet, but I'm still testing. Impressive so far, especially with outpainting. 👍
@AimanFatani (1 month ago)
Been waiting for something like this... thanks for sharing ❤❤
@MonzonMedia (1 month ago)
You’re welcome 😊
@vVinchi (1 month ago)
This will be a good series of videos
@MonzonMedia (1 month ago)
Indeed! Already working on the next one. Good to hear from ya bud!
@havemoney (1 month ago)
We are waiting for lllyasviel to add it to Forge.
@MonzonMedia (1 month ago)
🙏😊 I do see some action on the GitHub page and no other "delays" posted. Fingers crossed my friend!
@mik3lang3lo (1 month ago)
❤ we are all waiting ❤
@Elwaves2925 (1 month ago)
I finally got to do the inpainting I needed from Flux. On 12GB VRAM with the full 'fill' model it was a lot quicker than I expected. That's the only one I've tried so far, but with how well it worked I'm looking forward to the others.
@MonzonMedia (1 month ago)
@@Elwaves2925 Good to know it can run on 12GB VRAM. Have you tried the FP8, and if so, is it any faster?
@Elwaves2925 (1 month ago)
@@MonzonMedia Haven't had a chance to try the fp8 version, as I didn't know it existed until your video. I will be trying it later.
@FranktheSquirell (1 month ago)
Ya did it again, great job as usual 😊😊 The only trouble is I've been using Fooocus for in/outpainting and now you've made me want to try it in ComfyUI, grrrrr lol. I gave up on Comfy because an update broke the LLM generator I was using in it. Come to mention it, I can't even remember the name of the generator now... damn🤣🤣🤣🤣
@MonzonMedia (1 month ago)
😊 It does take some time to get used to; I have a love/hate relationship with ComfyUI hehehe. But it is worth knowing, especially since it gets all the latest features quickly. At the very least, learn how to use drag-and-drop workflows and install any missing nodes. That's pretty much all most people need to know.
@contrarian8870 (1 month ago)
Thanks. Do Redux next, then Depth, then Canny.
@MonzonMedia (1 month ago)
Welcome! Redux is pretty cool! Will likely do it next, then combine the 2 controlnets in another video.
@skrotov (1 month ago)
Great, thanks. By the way, you can hide the noodles just by pressing the eye icon at the bottom right of your screen.
@MonzonMedia (1 month ago)
Indeed! I do like to use the straight ones, though I switch to spline when I need to see where everything is connected. 😊 👍
@LucaSerafiniLukeZerfini (1 month ago)
Can't wait. I found Flux less effective for design than SDXL.
@TheColonelJJ (1 month ago)
As always, your videos are a welcome view. Favor to ask: since things come to Comfy so fast, could you add a sound bite when things aren't quite ready for Forge?
@MonzonMedia (1 month ago)
Yeah, I normally do but forgot this time, although I did post a pinned comment that there is support for other platforms like SDNext and SwarmUI.
@cekuhnen (1 month ago)
Redux will be fun for MJ to deal with.
@MonzonMedia (1 month ago)
Hey my friend! Nice to see you here! I haven't used MJ in a while but there is a lot you can do locally compared to MJ's features, plus way more models to choose from. Hope all is well with you. 👍
@cekuhnen (1 month ago)
@@MonzonMedia My MJ sub will end this year and I won't go back. Vizcom became so powerful, and Rubbrband is also shaping up really well.
@bause6182 (1 month ago)
Thanks for the guide. Is it possible to run Redux with low VRAM?
@MonzonMedia (1 month ago)
How low? The Redux model itself is very small, only 129MB, so if you have a low-VRAM GPU, just use the GGUF Flux models and you should be good to go! Runs great on my 3060 Ti with 8GB VRAM and the Q8 GGUF model.
@bause6182 (1 month ago)
@@MonzonMedia Thank you, do we need another workflow for GGUF models?
(1 month ago)
Thanks for the vid! And do you know if the flux.1-fill-dev (23GB) version is an extended version of the original flux.1-dev, or a whole new thing where you have to install both?
@MonzonMedia (1 month ago)
Welcome! Typically in/outpaint models are just trained differently but are based on the original model.
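For what it's worth, the fill model is published as its own complete checkpoint, so a fill workflow loads it on its own rather than on top of the base model. A quick sketch of grabbing it (assuming the official Hugging Face repo layout):

```python
# Minimal sketch: download the fill checkpoint as a standalone model file.
# Repo and filename are from the official release; adjust if yours differ.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="black-forest-labs/FLUX.1-Fill-dev",
    filename="flux1-fill-dev.safetensors",
)
print(path)  # then place it in ComfyUI/models/diffusion_models
```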
(1 month ago)
@@MonzonMedia Got it! thanks!
@RiftWarth (1 month ago)
Could you please do a video on crop and stitch with Flux tool inpainting?
@MonzonMedia (1 month ago)
Yes of course! Will be doing it on my next inpainting video 👍
@RiftWarth (1 month ago)
@MonzonMedia Thank you so much. Your tutorials are really good and easy to follow.
@hotlineoperator (1 month ago)
Some people keep several functions or "workflows" on one desktop, which they turn on and off as needed. Others keep separate workflows on completely different desktops or use them one by one. Is there a convenient function in ComfyUI that lets you switch between different workflows, as if you had several desktops open and could choose the one that suits what you are doing?
@MonzonMedia (1 month ago)
The new ComfyUI has a workflow panel on the left that lets you select your saved or recently used workflows. Alternatively, there is a fairly new tool I've been trying out called Flow that has several pre-designed workflows. The downside is that you can't save custom workflows yet, but I hear that option will come soon. I'll be doing a video on it soon. Other than that, it really is a personal thing as to what works best for you.
@Maylin-ze6qx (1 month ago)
❤❤❤❤
@MonzonMedia (1 month ago)
Thank you! 😊
@LucaSerafiniLukeZerfini (1 month ago)
Great to follow. I updated Comfy but it's returning this: RuntimeError: Error(s) in loading state_dict for Flux: size mismatch for img_in.weight: copying a param with shape torch.Size([3072, 384]) from checkpoint, the shape in current model is torch.Size([3072, 64]). By the way, depth and canny would be the best to see.
@MonzonMedia (1 month ago)
Context? What were you doing? What are your system specs?
@LucaSerafiniLukeZerfini (1 month ago)
I managed to pull a ComfyUI update and now it works. Still seeing visible outlines on the outpainting. Thanks for the reply. I'm on Windows with an RTX 4090.
@MonzonMedia (1 month ago)
Cool! Yeah, always update when new features come out. If you're seeing seams when outpainting, try increasing the feathering or do one or two sides at a time. Results can vary.
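If it helps to picture it, here's a tiny standalone sketch (PIL, made-up sizes, not the exact nodes from the video) of what feathering does to the outpaint mask:

```python
# Pad an image for outpainting with a feathered mask: instead of a hard
# keep/regenerate boundary, the mask ramps across roughly `feather` pixels,
# so the sampler blends new content into the original edge (fewer seams).
from PIL import Image, ImageFilter

def pad_for_outpaint(img: Image.Image, right: int = 256, feather: int = 40):
    w, h = img.size
    canvas = Image.new("RGB", (w + right, h), "gray")
    canvas.paste(img, (0, 0))
    mask = Image.new("L", (w + right, h), 0)   # black = keep original pixels
    mask.paste(255, (w, 0, w + right, h))      # white = area to regenerate
    return canvas, mask.filter(ImageFilter.GaussianBlur(feather / 2))
```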
@LucaSerafiniLukeZerfini (1 month ago)
Yes, maybe doing the sides separately works better. On another point, I'm trying to manage background switching for a car, but the results are still awful with Flux.
@MonzonMedia (1 month ago)
@@LucaSerafiniLukeZerfini The crop-and-stitch inpaint node might be better for that, but the Redux model can also do it. I'll be posting a video on Redux soon.
@skrotov (1 month ago)
What I don't like about this new fill model is that it seems to work on the actual pixels without enlarging the painted area, as we did in Automatic1111. As a result, we get low detail and poor quality if the masked object wasn't very big.
@MonzonMedia (1 month ago)
That has more to do with the platform you are using. For example, Fooocus and Invoke AI have methods where inpainted areas are generated at the model's native resolution. I can't recall if there is a node in ComfyUI that does that, but I'm pretty sure there is. Might make a good video topic. 👍
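The idea behind those nodes is simple enough to sketch in a few lines (PIL, square working size for simplicity, with the actual diffusion call left as a stand-in):

```python
# Crop-and-stitch: crop around the masked object, upscale the crop so the
# model works at its native resolution, inpaint, then shrink and paste back.
from PIL import Image

def inpaint_crop_and_stitch(img, box, inpaint_fn, work=1024):
    # box = (left, top, right, bottom) around the mask, with some margin
    crop = img.crop(box)
    big = crop.resize((work, work), Image.LANCZOS)  # more pixels for the model
    big = inpaint_fn(big)                           # stand-in for your sampler
    out = img.copy()
    out.paste(big.resize(crop.size, Image.LANCZOS), box[:2])
    return out
```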
@Scn64 (1 month ago)
When painting the mask, what effect do the different colors (black, white, negative) and opacity have on the outcome? Does the resulting inpaint change at all depending on which color/opacity you choose?
@MonzonMedia (1 month ago)
It's just for visual preference; it has no effect on the outcome.
@RikkTheGaijin (1 month ago)
SwarmUI tutorial please
@MonzonMedia (1 month ago)
Working on it! 😊
@eledah9098 (1 month ago)
Is there a way to include LoRA models for inpainting?
@MonzonMedia (1 month ago)
Not sure what you mean. Do you want to use a LoRA to inpaint? It doesn't work that way.
@generalawareness101 (1 month ago)
How do I get Flux to inpaint text? I have tried everything, when all I want is to take an image and have Flux add the text it generates over it.
@MonzonMedia (1 month ago)
Same way you would prompt for it: just state in your prompt something like "text saying _________" and inpaint the area where you want it to show up.
@generalawareness101 (1 month ago)
@@MonzonMedia Tried doing that for a few days; it just never worked. I could say a lake, or an army, or whatever, and that it would do, but never the text. Stumped.
@FranktheSquirell (1 month ago)
Me again lol 😊 Have you tried the "DMD2" SDXL models yet? Not that many around, but wow, are they impressive. Prompt adherence is about the same as Flux Schnell, but the image quality is really good. They say 4-8 steps, but a 12-step DMD2 image gives better results IMO. Then again, I'm getting old now and my eyes aren't as good as they used to be... that's my excuse 🤣🤣
@MonzonMedia (1 month ago)
Not yet but I remember reading about it on Reddit. Thanks for the reminder!
@baheth3elmy16 (1 month ago)
With inpainting it disturbs the composition of the image and the results are not that good; the same goes for outpainting, where the final images are distorted at the edges and lose detail. I'm using the fp8 model.
@MonzonMedia (1 month ago)
I've had a good experience so far with both inpainting and outpainting. Make sure you are increasing the Flux guidance. There are other inpainting methods that should help preserve the original composition, which I will cover soon.
@ProvenFlawless (1 month ago)
Huh. What is the difference between the XLabs and Shakker-Labs canny/depth ControlNets? Why is this one special? We already have two of them. Someone please explain.
@MonzonMedia (1 month ago)
From what I recall they have two: the Union Pro ControlNet (6GB), an all-in-one with multiple control types that is pretty decent but still needs more training, and a separate depth model that is 3GB. This one is only 1.2GB. I've yet to do side-by-side comparisons, though. It was the same with SDXL; we will keep getting ControlNets from the community until one is trained better. Keep in mind that ControlNet for Flux is still very new.
@Xenon0000000 (1 month ago)
When I try the outpainting workflow, the pictures come out all pixelated, especially the added part. What am I doing wrong? I'm using the same parameters, and denoise is already at 1. Thank you for your videos, by the way; you should have way more subs!
@MonzonMedia (1 month ago)
@@Xenon0000000 Appreciate the support and kind words. Are you using a high Flux guidance? 20-30 works for me.
@Xenon0000000 (1 month ago)
@@MonzonMedia I left it at 30, I'll try changing that parameter too, thank you.
@MonzonMedia (1 month ago)
I have a much better workflow that I'll be sharing with you all soon that gives better results. Hope to post it some time tomorrow (Wed).
@havemoney (1 month ago)
I'll go play Project Zomboid, I recommend it
@MonzonMedia (1 month ago)
Ooohhh, will check it out! I finally played Final Fantasy VII Remake! 😬😊 Loved it!
@rogersnelson7483 (1 month ago)
I tried both the big model and the FP8. Nothing but really BAD results, and I don't know why. I'm using 8GB of VRAM. All I get is random noise around the outpainted areas, and the original image is mostly turned to noise. Also, should it take 6 to 10 minutes for one image?
@MonzonMedia (1 month ago)
I'm going to do a follow-up video on inpainting. What is shown here is very basic and sometimes doesn't give the best results. There are a couple of other nodes that will help you get better results. Stay tuned!
@rogersnelson7483 (1 month ago)
@@MonzonMedia Thanks for your reply. I'll be watching. Keep up the good work as usual. Man, I started watching you back at Easy Diffusion.
@MonzonMedia (1 month ago)
Whoa! That's awesome! 😁 I appreciate the support, then and now.
@sokphea-h5q (1 month ago)
Error(s) in loading state_dict for Flux: size mismatch for img_in.weight: copying a param with shape torch.Size([3072, 384]) from checkpoint, the shape in current model is torch.Size([3072, 64]). How can I fix this?
@benkamphuis5614 (1 month ago)
Same here!
@MonzonMedia (1 month ago)
Did you do an update?
@_O_o_ (1 month ago)
I had the same problem. My DualCLIPLoader type was set to "sdxl", not "flux"... maybe that helps haha