How To Run Flux On A 12GB VRAM GPU Or Less In ComfyUI

  21,019 views

Monzon Media

Days ago

Comments: 171
@MonzonMedia · a month ago
I'd love to hear your generation times if you have a 12GB VRAM GPU or less! Let me know in the comments!
@tripleheadedmonkey6613 · a month ago
You missed out on a load of optimization options that could have improved results for you. At the very least they will let you do more with the PC without it lagging. First of all, set the launch arguments: --bf16-vae --fp16-text-enc --fp8_e4m3fn-unet. This one may also be helpful (not sure, but I use it at the moment with Flux): --use-pytorch-cross-attention. Furthermore, you should replace the VAE Decode nodes with VAE Decode (Tiled) nodes.
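For reference, a minimal sketch of how those flags might be added on the portable Windows build - assuming the default run_nvidia_gpu.bat launcher and its --windows-standalone-build flag; adjust the path and flags to your own install:

    .\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --bf16-vae --fp16-text-enc --fp8_e4m3fn-unet --use-pytorch-cross-attention
    pause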
@MonzonMedia · a month ago
Oh, great tips! Will give it a shot! I haven't needed them in the past with SDXL models, but these bigger ones are killing my system! Hahaha! Time to update my GPU. Appreciate the heads up!
@tripleheadedmonkey6613 · a month ago
@MonzonMedia Updating your GPU isn't particularly helpful either, not unless you can find a 40GB+ card :D
@Dro2k6 · a month ago
RTX 3080 12GB VRAM: 1 minute
@MonzonMedia · a month ago
Bruh… I have an 8GB card, even 12GB would be a massive upgrade for me 😆😊
@RamonGuthrie · a month ago
What amazes me most about this model is the detail on the first pass. Normally you only get that level of detail on a second pass or an upscale.
@MonzonMedia · a month ago
Yes, exactly, and I find I'm running fewer gens to get what I want. Once controlnets, LoRAs, and fine-tuned models come out, I can see it taking off!
@droidJV · a month ago
Thanks for the video. Tried it on a GTX 1060 6GB (16GB RAM / i7 5700G) and it took 4:27 to generate a 600x600 image with 5 steps and 7:26 for a 600x600 image with 10 steps. Can't imagine how much longer it would take for a bigger image, but it works.
@MonzonMedia · a month ago
I think your 16GB of system RAM is your bottleneck there, as I've read of others using 6GB of VRAM successfully, although still kind of slow. 1060s also tend to be slower than recent-gen GPUs. These bigger models need more system RAM as well; nowadays 32GB is standard. Let's hope they optimize it even more.
@droidJV · a month ago
@MonzonMedia Thanks for the info.
@Archalternative · a month ago
@MonzonMedia Same configuration, but with 32GB of RAM: 800x800 images in 4:30.
@DaniDani-zb4wd · a month ago
@Archalternative Thank you for the info. Looks like there is hope for my RTX 2060 👍 Once Forge gets updated (which is even better with VRAM optimization) I can't wait to try it.
@DaniDani-zb4wd · a month ago
@MonzonMedia How much slower is the dev version compared to the schnell version? Isn't most of the time spent on offloading the model into RAM anyway? The dev model is much better at realism. How long does it take to regenerate if you change the prompt?
@pokerandphilosophy8328 · a month ago
I have an RTX 2060 Super with 8GB VRAM (and 64GB RAM). I used to generate one 1024x1024 image in four steps (Flux Schnell) in three minutes. With your workflow, it's down to 75 seconds! (Have you tried the DemonFlux model posted on CivitAI? It's a pruned model that merges Schnell with Dev, plus the two clip models and the VAE file, in one single 16GB checkpoint. It achieves close to Dev quality with just 3 steps. I was wondering if their approach could be combined with yours.)
@pokerandphilosophy8328 · a month ago
Another test: one 1535x1280 image, 6 steps with Schnell, 80 seconds. Batch of two images: 192 seconds.
@MonzonMedia · a month ago
Hey, thanks for sharing your info. I'm always curious how other people's setups yield results. Funny you mention that merged model, as I was trying one out earlier today. Not sure if it's the same one though; the one I used is still just under 24GB and was getting 50 seconds for a 1024x1024 image. Going to check CivitAI for the one you mentioned. Appreciate it!
@matten_zero · a month ago
You can always face swap for celebrities. The magic of ComfyUI is the ability to modularize and mix and match different models and workflows.
@MonzonMedia · a month ago
Very true, just a minor annoyance when it could just as well be trained with celebs, but I'm starting to see a trend where these newer models are moving away from that.
@matten_zero · a month ago
@MonzonMedia It's future-proofing so they don't have headaches later on. Good thing faceswap exists!
(a month ago)
Thank you for this precise, accurate, step-by-step tutorial, exactly what I needed (again).
@MonzonMedia · a month ago
You're very welcome! Appreciate the support!
@lucasrodriguez8957 · a month ago
Hi. I almost have it working, but I'm having a problem with the DualCLIPLoader node, more specifically with clip_name2. If I leave it as shown in the video I get an error, and if I put anything there I get a blank image as output. What can I do to fix this? Edit: I'm really dumb. Just minutes after posting this I realized you can download the file called "clip_l.safetensors" from the same place where you download the other two files. Instead of deleting this comment I'm going to leave it in case this happens to someone else, you never know.
@MonzonMedia · a month ago
Aaahhhh, no worries! I didn't really mention it in the video either, but glad you figured it out. 👍
@lucasrodriguez8957 · a month ago
@MonzonMedia Thanks for the video!
@danwe6297 · a month ago
LOOOOL! I managed to get it running on an RTX 3050 with 8 gigs of VRAM and another 32 gigs of CPU RAM.
@alifrahman9447 · a month ago
I just deleted it after generating an image in 8 minutes on a 2060 12GB, and now you come up with this 🙂🙂
@MonzonMedia · a month ago
Wow, 8 minutes? Yeah, try this method, you should be getting better generation times with this workflow. These new models are not getting smaller, which sucks for me with a lower-spec GPU. Keep me posted!
@worldofgames2000 · a month ago
Thank you for the info! I have a 4070 12GB and generate in around 40 seconds with the usual workflow...
@MonzonMedia · a month ago
Nice! Nowadays 12GB is standard... I guess I need to update my GPU soon! Hahaha!
@korvine7 · a month ago
Absolutely the same configuration and time for most of the main schedulers. Some of them take a bit longer. 1152x768, 25 steps: 40 sec.
@thrWasTaken · a month ago
Was that on the dev model, and what resolution were you using? I have an RTX 4070 too and it takes me 110 sec with the dev model to make a 1024x1024 image.
@korvine7 · 24 days ago
Great job, I have zero skill in generation and everything works perfectly! Faster than Forge UI somehow (~2x the speed at the same parameters).
@gameswithoutfrontears416 · a month ago
Looking good, will be watching Flux closely.
@MonzonMedia · a month ago
Pretty impressed so far, and there is already a LoRA and controlnet out; once they're supported the rest will follow.
@hleet · a month ago
Do you see a real difference in image quality (or prompt behaviour) between the two (schnell vs dev)? By the way, I don't use Split Sigmas to try Flux. You can do it with SamplerCustom; the sigmas are connected to a BasicScheduler (simple). But yeah, load the flux-fp8 diffusion model. I found out that if you set weight_dtype to something other than default, the graphics card will go back and forth; just leaving it at default is OK. I really like Flux, it is very coherent to the prompt. I hope the IPAdapter dev will port his custom node to this one :D
@MonzonMedia · a month ago
Yes, definitely a big difference in the final output. I haven't tested it in depth though. Text is definitely worse using Schnell and overall quality takes a hit; I mean, it is a distilled model, so that's expected. All the other workflows I've tried still took too long for me considering my limited VRAM specs, but I'll try the SamplerCustom node. 👍
@SouthbayCreations · a month ago
Great video, thanks for sharing!
@MonzonMedia · a month ago
Appreciate it, bud!
@alienandroid943 · a month ago
Thanks, just what I needed to know for 8GB cards.
@user-st2tz7eu9j · a month ago
Hi. Thanks for the guide. Could you please tell me how you made the lines connecting the nodes flat? I'm very frustrated that there are a lot of them and they are like a bunch of wires :) Thanks.
@MonzonMedia · a month ago
Just go to Settings > Link Render Mode > Straight. I show it in this video kzbin.info/www/bejne/p2LIfH17qLWWl7Msi=I43g-7OYViDaT6ZT&t=596 - hope it helps!
@MPCDesenvolvimentoWeb · a month ago
I've been looking for this add-on for ComfyUI for a few days now to make the workflow lines straight. What is this plugin called?
@MonzonMedia · a month ago
It's not an add-on, just go to Settings > Link Render Mode > Straight. I show it in this video kzbin.info/www/bejne/p2LIfH17qLWWl7Msi=I43g-7OYViDaT6ZT&t=596 - hope it helps!
@DaveCS103 · a month ago
Thank you for this amazing video!
@MonzonMedia · a month ago
You're welcome!
@siliconbrush · a month ago
I know they worked very hard on the text, but I bet that's why the model is so large. At the end of the day the text design (the overall layout, fonts, etc.) is basic. The images are phenomenal, but I wonder if they could separate the text part of the model from the image part. Would that make it smaller? I bet it would, quite a bit. Frankly, I'm fine without any text; anything I would design would look far better in pure vector graphics.
@MonzonMedia · a month ago
The thing with text is it has to be trained just like anything else. I think there will be a day when you can prompt for the font you want, but it's still early for text development.
@LewGiDi · 29 days ago
Thank you very much 🙏 I'm able to run Flux on a laptop with 6GB VRAM; a 1344x768 pic takes 2 minutes to generate. As you said, when Comfy is loading everything it takes more time (4 minutes), then the time goes down. Are you planning to make an update? The NF4 model from the creator of Forge has been released; it's faster than the fp8 and schnell models.
@MonzonMedia · 28 days ago
Yes, I covered it here, but using Forge: kzbin.info/www/bejne/b2HGq2ump7CZg7ssi=ktAIpytoWn-az7hq But I'm also planning an updated video for Comfy, although it's pretty much the same process. The NF4 model loads as a regular checkpoint, so even the basic workflow works with it.
@L3X369 · 8 days ago
What extension or option are you using for the straight connections? They look awesome!
@MonzonMedia · 8 days ago
It's just a setting you can change; I cover it here. Just select "straight" under Link Render Mode - you can even hide them too! kzbin.info/www/bejne/p2LIfH17qLWWl7Msi=hpzFqcS1wZu4N4yN&t=596
@vladch3485 · a month ago
Has anyone else had the problem where you press Queue and after a couple of seconds the "Reconnecting" window pops up? Using a 3080 12GB...
@MonzonMedia · a month ago
You need to leave the command window open.
@spaceandstuff · a month ago
Thanks for the video. This is great for us poors.
@MonzonMedia · a month ago
😊 You're welcome!
@AI_Creatives_Toolbox · a month ago
I didn't really understand how to use the Split Sigmas node. What gets connected to it and what do I connect from it? Thanks!
@MonzonMedia · a month ago
I provided a workflow in the description, just drag and drop. On the SplitSigmas node, the low_sigmas output connects to the SamplerCustomAdvanced node, and the sigmas input on the left side connects to the BasicScheduler. Hope that helps.
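For anyone wiring it manually instead of dropping in the provided workflow, a rough sketch of the connections described above (node and socket names follow the stock ComfyUI nodes; exact labels may vary between versions):

    BasicScheduler (SIGMAS output)   -->  SplitSigmas (sigmas input)
    SplitSigmas (low_sigmas output)  -->  SamplerCustomAdvanced (sigmas input)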
@AI_Creatives_Toolbox · a month ago
@MonzonMedia It definitely does, thank you!
@liquidmind · a month ago
What a great video!
@MonzonMedia · a month ago
Thank you! Appreciate that!
@liquidmind · a month ago
@MonzonMedia Brother, you have an amazing narrator voice. Thanks for taking the time to make these videos.
@liquidmind · a month ago
@MonzonMedia By the way, someone here said it could run on an RTX 2060 6GB VRAM and 16GB RAM, but it took like 4 minutes!!! I have the same card, 6GB VRAM, but 32GB RAM instead... do you think it's worth a try?
@MonzonMedia · a month ago
You should be able to run it, but it will obviously take a bit longer. Go for it and let me know how it goes! 👍🏼
@liquidmind · a month ago
@MonzonMedia I will!!
@MrPool-fk9ll · a month ago
Please help me with this error 😭😭 It says: "Error occurred when executing DualCLIPLoader: CLIP.__init__() got an unexpected keyword argument 'state_dicts'". I followed every step shown in the video. My specs are: i7 14th gen, RTX 4060 Ti 16GB, 32GB RAM 6000MHz.
@SunnyEscapades · a month ago
That helped a lot. Thank you.
@motopaediatheview9284 · a month ago
I run full Flux Dev on a 6GB GTX 1060 - it takes time, but it works...
@MonzonMedia · a month ago
How much time? But yeah, a 1060 with 6GB is probably pushing it. Hope they come out with a more optimized version.
@motopaediatheview9284 · a month ago
@MonzonMedia Up to 15-20 minutes per 1280x1024. I don't know the accuracy of the meter, but it rarely goes over 65% GPU use, VRAM of course at 100%, and temps of 65-70 Celsius.
@relexelumna5360 · a month ago
Will it be faster on an AMD RX 7800 XT than an RTX 4070? RX cards are heavily focused on gaming and quite unheard of in AI stuff. I'm curious, and no one is doing AI reviews on them.
@MonzonMedia · a month ago
Not sure how it will run on AMD, unfortunately. Typically AMD doesn't run too well with AI stuff; AMD + Linux is another story though. Currently everything that's out now is built around Nvidia's CUDA cores. I'm sure that will change in time.
@relexelumna5360 · a month ago
@MonzonMedia Thank you. I hope Flux will fix the unoptimized, GPU-hungry issue. I've observed that it's only really good at text and hands, while the rest still looks like an AI-generated image, which is disappointing for a 22GB checkpoint.
@MonzonMedia · a month ago
Actually it can do very photorealistic images if you prompt correctly for it. I have a few examples in the video, but those were simple prompts and they still have a bit of a hyper-realistic look. It can be done though, I've done it myself. Also bear in mind this is a base model. There is already a realism LoRA out, it's just not compatible with ComfyUI at the moment. Fine-tuned models will likely be trained too, although licensing prevents commercial use; the schnell model, however, is open source.
@relexelumna5360 · a month ago
@MonzonMedia Oh OK. I would love to try both Dev and Schnell on my RTX 4070 and see how much time they take. I think we can convert Flux Dev to TensorRT to make it faster in Auto1111, but sadly it might not be supported since it only works in ComfyUI. Open source is the way to go for long-term creative work and I love it more than closed source. Thank you for the brief reply.
@westingtyler1 · a month ago
3:20 But where do we download those clip models?
@MonzonMedia · a month ago
Link in the description, my friend: huggingface.co/comfyanonymous/flux_text_encoders/tree/main
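As a rough guide for where the downloads go - folder names assume the stock ComfyUI layout, and the file names are the ones from that text-encoder repo; your setup may differ:

    ComfyUI/models/clip/clip_l.safetensors
    ComfyUI/models/clip/t5xxl_fp8_e4m3fn.safetensors   (or t5xxl_fp16.safetensors if you have the RAM)
    ComfyUI/models/vae/ae.safetensors
    ComfyUI/models/unet/flux1-dev.safetensors          (loaded with the UNET/diffusion model loader)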
@rorymorrissey4970 · a month ago
Is it still mostly restricted to Nvidia GPUs, or have AMD cards gained the ability to run image-gen AI stuff like Flux now? I'm a bit out of the loop on non-Nvidia cards.
@MonzonMedia · 28 days ago
Yeah, Nvidia is still the way to go. AMD is getting there, but usually on Linux, and it's still a pain to deal with. I'm seeing more AMD videos though.
@Ryographix · 22 days ago
I'm using an RTX 3060 with 16GB RAM - will it work with the actual download from GitHub? Thanks!
@dawelimey9819 · a month ago
How do you get the system status bar (GPU, VRAM) next to the Queue Prompt?
@MonzonMedia · 29 days ago
Just go into the ComfyUI Manager and search for the Crystools extension. Install it and you should be good to go!
@farslght · a month ago
I've tried both the 16GB version and the multi-model version and I got the same generation time on my system, which is 16GB RAM (yes, I know) and a 3060 with 12GB VRAM.
@fixelheimer3726 · a month ago
Hey, I don't see guidance/CFG values in your workflow? Why's that?
@MonzonMedia · a month ago
This model really doesn't need CFG; the default of 1 is recommended. If you use the fp8 version from the ComfyUI link with the "checkpoint example workflow", you can use it in any workflow, which gives you access to CFG. I wouldn't go higher than 3.5 though. Hope that helps!
@rennynolaya2457 · a month ago
Hi, I tried your notebook with those models but it runs much slower than the normal fp8 version. I have a 3060 with 32GB of RAM; I think that notebook needs to be optimized.
@Varibam · a month ago
I don't want to attack or offend, but I thought the Flux.1 Dev model you are showcasing is for non-commercial use only, and your video is monetized and all...
@MonzonMedia · a month ago
Appreciate your concern, but the licence states we have the right to use the output commercially. The terms just prevent anyone from using their model for a service, or using a fine-tuned model commercially. If that were not the case, all the content creators would be liable. Nothing to worry about, my friend. 👍🏼
@ghilesbardi · a month ago
Much appreciated, sir =) !
@MonzonMedia · a month ago
You're welcome!
@A.I.Ther_Technology · a month ago
Update: it took up to 45 minutes to build it... Original message: I am trying it with a 4060 with 8GB of VRAM and 16GB of RAM, but it doesn't move past 8%: it shows an upper bar with "(1) 8% - UNETLoader" and doesn't change.
@tobycortes · 29 days ago
I really can't see any difference between the models if you go 30+ steps. I don't know why everybody keeps using only 4 steps and calls it losing quality; go 30+ steps with Schnell.
@MonzonMedia · 29 days ago
Schnell is a distilled model which requires only 4-8 steps. It's a matter of speed mostly; 30 steps is overkill.
@JimGardner · a month ago
I'm just getting blank images. RTX 3060 12GB VRAM. I have the --lowvram flag on the startup script; with or without it makes no difference. Would really appreciate help with this. Thanks.
@JimGardner · a month ago
In case anyone from the future is reading this and screaming "why is nobody else experiencing this": I fixed it by reinstalling the Nvidia device drivers.
@MonzonMedia · a month ago
Glad you figured it out 👍🏼
@shivasavant898 · a month ago
Hi sir, that's a great video! I've got an Intel Core i7-9700F CPU @ 3.00GHz and 16GB of RAM - do you think this setup would be good for running SDXL workflows? Looking forward to your thoughts!
@MonzonMedia · a month ago
What GPU do you have?
@shivasavant898 · a month ago
@MonzonMedia Zotac GTX 1050 Ti, 4GB video memory.
@Arthur-jg4ji · a month ago
@shivasavant898 I don't think it is possible to run SDXL with 4GB VRAM.
@AncientShinrinYoku · a month ago
@Arthur-jg4ji It is possible on 3GB with ComfyUI and reasonable resolutions.
@Arthur-jg4ji · a month ago
@AncientShinrinYoku Oh? I didn't know. But won't the speed and quality be horrible?
@4thObserver · a month ago
I'm just curious, can it run without ComfyUI?
@MonzonMedia · a month ago
As far as I know, not yet. It can work in SwarmUI since it has a ComfyUI backend, though I haven't run it in SwarmUI just yet. github.com/mcmonkeyprojects/SwarmUI/blob/master/docs/Model%20Support.md#black-forest-labs-flux1-models
@brianmolele7264 · a month ago
I'll wait for the optimized version. I downloaded it twice, over slow internet 😭. It crashes on my RTX 4060, Xeon E5 2680 v4 CPU and 16GB RAM. If this video had come out earlier I wouldn't have deleted the model.
@MonzonMedia · a month ago
Not sure when that will happen; it seems these models are getting bigger and bigger. I think your bottleneck is also your system RAM. For anything to do with text-to-image, 32GB of system RAM is recommended.
@0A01amir · a month ago
Great video. I wish for a 5B or 8B version of the model so we can use it with ease.
@MonzonMedia · a month ago
Yes, me too! It seems there's more and more need for VRAM.
@KlausMingo · 25 days ago
The waitress's leg looks weird at 10:42.
@fixelheimer3726 · a month ago
What's wrong with two front teeth? In SD the model often couldn't decide between teeth and lips in that spot.
@MonzonMedia · a month ago
It makes the person look like they have buck teeth. It's very common in SD1.5, not as much in SDXL.
@fixelheimer3726 · a month ago
@MonzonMedia As long as it's not over-pronounced I don't mind; on the contrary, it can look sexy imo :D
@MonzonMedia · a month ago
Hahaha, yeah, I hear ya 👍
@oaahmed7515 · a month ago
And 8GB VRAM?
@MonzonMedia · a month ago
Yes sir... in all my videos I use a 3060 Ti with 8GB VRAM.
@kallamamran · a month ago
I have a 3090 with 24GB VRAM. Using low sigmas still uses 24GB of VRAM. Either it doesn't work or it's flexible somehow 🤔 Maybe my 32GB of system RAM is the limiter?
@MonzonMedia · a month ago
I also have 32GB of system RAM, so that should be fine. What do you mean it doesn't work? Is it not generating an image? Can't help without context, my friend.
@shareeftaylor3680 · a month ago
Can you use both the CLIP-L and CLIP-G from SD3 on Flux.1? It would help those who have less RAM. I'm trying to run this on CPU with 10GB VRAM and 10GB RAM, hope it works.
@MonzonMedia · a month ago
Yes, I believe it's the same, but you have to use the T5 clip encoder as well, unless you download the one that has the clip baked in.
@DezorianGuy · a month ago
I get anime images with Flux. How do I get a realistic style?
@MonzonMedia · a month ago
Just prompt for things like "cinematic film still", "photo of a...", etc. It's all in the prompt.
@DezorianGuy · a month ago
@MonzonMedia I don't get realistic images, just 3D or anime, only sometimes real ones. This Flux isn't really fleshed out, it seems. We need to wait a few months.
@MonzonMedia · a month ago
@DezorianGuy It must be your prompts; as you saw in my video, there were many photorealistic images and the prompts were very simple. Give us an example of your prompt.
@shareeftaylor3680 · a month ago
I was able to use this on my PC with a 4GB VRAM GTX 1650 Super and 23GB of DDR4 RAM.
@MonzonMedia · a month ago
Nice! How long were your generations?
@shareeftaylor3680 · a month ago
@MonzonMedia 512x1024, 4-step Schnell, took 4 minutes.
@shareeftaylor3680 · a month ago
@MonzonMedia You just need the lowvram command.
@fredpourlesintimes · a month ago
Tested; not efficient at all, even with Schnell (8GB).
@Pawel_Mrozek · 16 days ago
A million tutorials on how to use Flux in ComfyUI, very few on how to set everything up in a normal UI for ordinary people, and none on how to use ControlNet with it without ComfyUI involved.
@MonzonMedia · 16 days ago
You're right, that's why I made these videos, and I'm editing another one on other Flux models. Install Forge: kzbin.info/www/bejne/fHzdp3t8qchrhJIsi=LLpZYf8g0aqrzGuz Using Flux Dev in Forge: kzbin.info/www/bejne/b2HGq2ump7CZg7ssi=Kg1-f2iSWYoZegQC As for controlnets, they are out now for ComfyUI, which I also have a video coming on, but they're still not available for other platforms.
@danielc121 · a month ago
I am getting kind of blurry, low-resolution results somehow.
@MonzonMedia · a month ago
What are your specs and settings?
@danielc121 · a month ago
@MonzonMedia Welp, I actually use it on Tensor.Art: Euler normal, 25 steps, CFG 3 or 3.5. It's better now; it seems it was a problem with the samplers.
@GoAnim · a month ago
Not working for an i5, 4GB VRAM and 8GB RAM.
@MonzonMedia · a month ago
Yeah, it won't run on those specs; you need at least 6GB VRAM on an Nvidia card and 32GB of system RAM - maybe 16GB, but it would take longer.
@pibyte · 23 days ago
YES! HAHAHA STEALING WAS NEVER EASIER! LOVE IT!
@regularguy23 · a month ago
Do you actually make money from Flux?
@MonzonMedia · a month ago
Not sure what you mean? From the developers? If that's your question, no.
@user-yi2mo9km2s · a month ago
Censored, important "data" removed.
@MonzonMedia · a month ago
Not completely censored. I'm sure there will be some fine-tunes eventually.
@WayOfTheZombie · a month ago
:/ But will it make boobs?! Thank you so much, great info here!
@MonzonMedia · a month ago
Hahaha, it's somewhat uncensored, but yes, you can with the right prompts.
@marshallodom1388 · a month ago
Looks like it can do pretty nice spaghetti alien women wearing colored gauze dresses with human-like hands.
@MonzonMedia · a month ago
Hahaha! Can't say I've tried that, but now I'm curious! Speaks volumes for its prompt coherency! 🙌🏼
@NotThatOlivia · a month ago
It would be better if you crafted your own workflow, since it could be slightly more optimized for 8-12GB of VRAM than the one you are showing...
@MonzonMedia · a month ago
Oh yes, definitely! I just wanted to show the basic workflow for people with lower-end GPUs. The problem with these newer models is the size of the files - not a good sign for people like me. I've also created workflows for upscaling, touch-ups using SDXL as a refiner, img2img, etc.
@DezorianGuy · a month ago
@MonzonMedia I am new to ComfyUI and workflow creation. Which would you recommend for my 12GB card? The one you showed at the beginning of your video?
@MonzonMedia · a month ago
Yup, it's in the description 👍 Download it, then drag it into the workspace; make sure you've updated ComfyUI.
@dirtydevotee · a month ago
I'm going on record as saying Flux.1 is total garbage.
@MonzonMedia · a month ago
You're entitled to your opinion, but I'm curious why you think so?
@dirtydevotee · a month ago
@MonzonMedia My pleasure. First of all, let's call it what it is: it's "Stable Diffusion 3.1". They changed the name because SD3 was so bad that it tarnished the brand, and the old company is being sued into extinction by Getty. Second, it uses more electricity than Midjourney and the other SD models, so they're making the world a worse place. And why? Because they want to generate at a higher resolution so it's "one-stop". But that's stupid: generate at the lowest resolution in seconds (instead of minutes) and then upscale the good stuff to get what you need. Finally, everyone's saying it's "uncensored". That's a lie. I have personally used it and it is censored to the hilt. They want Wall Street VC money, and Wall Street is worried about bad news stories about porn. To get the money, they crippled the thing by removing large quantities of "sexy" data. You may also want to ask why they refused to tell Ars Technica what training data they used.
@rogergoldwyn3851 · a month ago
Hey, I have followed every step, but when I start ComfyUI (I'm using the CPU one because I don't have Nvidia), nothing in the model list shows up. I haven't put anything into Checkpoints yet, so maybe I have missed something. Mind helping me out with what I should put into the checkpoints folder, please? :)
@QuickBeat · a month ago
With your exact configuration I'm getting the following error: "got prompt model weight dtype torch.bfloat16, manual cast: torch.float16 model_type FLUX Killed". I have 16GB VRAM.
@Giorgio_Venturini · a month ago
Hi, what is the difference between the flux1-dev-fp8.safetensors models from Comfy-Org (17.2 GB) and from Kijai (11.9 GB)? Thanks.
@MonzonMedia · a month ago
The 17GB one you can use as a normal checkpoint in any workflow; it also has the clip encoders baked in. However, due to its size, generation times will increase. If your GPU has more VRAM, like 16GB+, it should be fine to use. But if you have 12GB or less, the 12GB file is best to use with the Split Sigmas node shown in the video.
@Giorgio_Venturini · a month ago
@MonzonMedia Thanks, and see you in your next videos.