I'd love to hear your generation times if you have a GPU with 12GB of VRAM or less! Let me know in the comments!
@tripleheadedmonkey6613 (a month ago)
You missed out on a load of optimization options that could have improved results for you. At the very least they'll let you do more on the PC without it lagging. First of all, set these launch arguments: --bf16-vae --fp16-text-enc --fp8_e4m3fn-unet. This one may also be helpful, not sure, but I use it at the moment with Flux: --use-pytorch-cross-attention. Furthermore, you should replace the VAE Decode nodes with VAE Decode (Tiled) nodes.
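For readers wondering what swapping in the VAE Decode (Tiled) node actually buys you, here is a rough, illustrative Python sketch. This is not ComfyUI's real implementation (the real node also overlaps and blends tile edges); it just shows the idea that decoding tile by tile caps peak memory at the tile size rather than the full image size.

```python
import numpy as np

def decode_tiled(latent, decode_fn, tile=64):
    """Decode a (H, W, C) latent in square tiles to cap peak memory.
    Illustrative only; the real node also blends overlapping tile edges."""
    h, w, c = latent.shape
    out = np.zeros((h * 8, w * 8, 3), dtype=np.float32)  # VAE upscales 8x
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = latent[y:y + tile, x:x + tile]
            out[y * 8:(y + patch.shape[0]) * 8,
                x * 8:(x + patch.shape[1]) * 8] = decode_fn(patch)
    return out

# Toy "decoder": keep 3 channels and nearest-neighbour upscale 8x.
toy = lambda p: np.repeat(np.repeat(p[..., :3], 8, 0), 8, 1)
img = decode_tiled(np.ones((96, 96, 4), dtype=np.float32), toy, tile=64)
```

Since only one tile is decoded at a time, the largest intermediate tensor is a single decoded tile, which is why the tiled node survives on cards where the plain VAE Decode runs out of VRAM.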
@MonzonMedia (a month ago)
Oh great tips! Will give it a shot! I haven’t needed it in the past with SDXL models but these bigger ones are killin my system! Hahaha! Time to update my gpu. Appreciate the heads up!
@tripleheadedmonkey6613 (a month ago)
@@MonzonMedia Updating your GPU isn't particularly helpful either. Not unless you can find a 40GB+ card :D
@Dro2k6 (a month ago)
RTX 3080 12GB VRAM: 1 minute
@MonzonMedia (a month ago)
Bruh… I have an 8GB card, even 12GB would be a massive upgrade for me 😆😊
@RamonGuthrie (a month ago)
What amazes me most about this model are the details on the first pass. Normally, you will get these details in a second pass or on an upscale.
@MonzonMedia (a month ago)
Yes exactly, and I find I'm running fewer gens to get what I want. Once ControlNets, LoRAs, and fine-tuned models come out, I can see it taking off!
@droidJV (a month ago)
Thanks for the video. Tried it on a GTX 1060 6GB (16GB RAM / i7 5700G) and it took 4:27 to generate a 600x600 image with 5 steps, and 7:26 for 600x600 with 10 steps. Can't imagine how much longer it would take for a bigger image, but it works.
@MonzonMedia (a month ago)
I think your 16GB of system RAM is your bottleneck there, as I've read of others using 6GB of VRAM successfully, although still kind of slow. 1060s also tend to be slower than recent-gen GPUs. These bigger models do need more system RAM as well; nowadays 32GB is standard. Let's hope they optimize it even more.
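To see why 16GB of system RAM becomes the bottleneck, a back-of-the-envelope estimate helps. The parameter counts below are rough public figures treated here as assumptions (Flux UNet ≈12B, T5-XXL ≈4.7B, CLIP-L ≈0.12B):

```python
# Back-of-the-envelope weight sizes for the Flux stack.
# Parameter counts are approximations, not official figures.
PARAMS_B = {"flux_unet": 12.0, "t5xxl": 4.7, "clip_l": 0.12}  # billions
BYTES_PER_PARAM = {"fp16": 2, "fp8": 1}

def weights_gb(dtype):
    # 1e9 params * N bytes/param ~= N GB per billion parameters
    return sum(PARAMS_B.values()) * BYTES_PER_PARAM[dtype]

fp16_total = weights_gb("fp16")  # roughly 34 GB of weights alone
fp8_total = weights_gb("fp8")    # roughly 17 GB
```

At fp16 the weights alone roughly double a 16GB machine's RAM, so the OS has to swap while offloading, which is where the big slowdowns come from; fp8 halves that footprint.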
@droidJV (a month ago)
@@MonzonMedia Thanks for the info.
@Archalternative (a month ago)
@@MonzonMedia same configuration, but with 32GB of RAM, images at 800x800 in 4:30
@DaniDani-zb4wd (a month ago)
@Archalternative Thank you for the info. Looks like there is hope for my RTX 2060 👍 Once Forge gets updated (which is even better with VRAM optimization) I can't wait to try it.
@DaniDani-zb4wd (a month ago)
@@MonzonMedia How much slower is the Dev version compared to the Schnell version? Isn't most of the time spent offloading the model into RAM anyway? The Dev model is much better at realism. How long does it take to regenerate if you change the prompt?
@pokerandphilosophy8328 (a month ago)
I have an RTX 2060 Super with 8GB VRAM (and 64GB RAM). I used to generate one 1024x1024 image in four steps (Flux Schnell) in three minutes. With your workflow, it's down to 75 seconds! (Have you tried the DemonFlux model posted on Civitai? It's a pruned model that merges Schnell with Dev, plus the two clip models and the VAE file, in one single 16GB checkpoint. It achieves close to Dev quality with just 3 steps. I was wondering if their approach could be combined with yours.)
@pokerandphilosophy8328 (a month ago)
Another test: One 1535x1280 image, 6 steps with Schnell, 80 seconds. Batch of two images: 192 seconds.
@MonzonMedia (a month ago)
Hey, thanks for sharing your info. I'm always curious how other people's setups yield results. Funny you mention that merged model, as I was trying one out earlier today. Not sure if it's the same one though; the one I used is still just under 24GB and was getting 50 seconds for a 1024x1024 image. Going to check Civitai for the one you mentioned. Appreciate it!
@matten_zero (a month ago)
You can always face-swap for celebrities. The magic of ComfyUI is the ability to modularize and mix and match different models and workflows.
@MonzonMedia (a month ago)
Very true, just a minor annoyance when it could just as well be trained with celebs, but I'm starting to see a trend where these newer models are moving away from that.
@matten_zero (a month ago)
@@MonzonMedia it's future proofing so they don't have the headaches later on. Good thing faceswap exists!
(a month ago)
Thank you for this precise, accurate, step-by-step tutorial, exactly what I needed (again).
@MonzonMedia (a month ago)
You're very welcome! Appreciate the support!
@lucasrodriguez8957 (a month ago)
Hi. I almost have it working, but I'm having a problem with the DualCLIPLoader node, specifically with clip_name2. If I leave it as shown in the video I get an error, and if I put anything else there I get a blank image as output. What can I do to fix this? Edit: I'm really dumb. Just minutes after posting this I realized you can download the file called "clip_l.safetensors" from the same place you download the other two files. Instead of deleting this comment I'm going to leave it up in case this happens to someone else, you never know.
@MonzonMedia (a month ago)
aaahhhh no worries! I didn't really mention it in the video either but glad you figured it out. 👍
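For anyone hitting the same DualCLIPLoader error, a tiny pre-flight check can confirm both text-encoder files are actually in your clip folder before launching. The filenames here are assumptions based on the download page mentioned in the video; adjust them to whatever variants you downloaded.

```python
import tempfile
from pathlib import Path

# Filenames assumed from the video's download page; adjust as needed.
REQUIRED = ["clip_l.safetensors", "t5xxl_fp8_e4m3fn.safetensors"]

def missing_encoders(clip_dir):
    """Return the required text-encoder files not present in clip_dir."""
    d = Path(clip_dir)
    return [name for name in REQUIRED if not (d / name).is_file()]

# Demo: a folder holding only clip_l reports the T5 encoder as missing.
demo = tempfile.mkdtemp()
Path(demo, "clip_l.safetensors").touch()
print(missing_encoders(demo))
```

In a real install you'd point it at ComfyUI's `models/clip` folder; an empty list means both encoders are in place.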
@lucasrodriguez8957 (a month ago)
@@MonzonMedia thanks for the video!
@danwe6297 (a month ago)
LOOOOL! I managed to get it running on an RTX 3050 with 8 gigs of VRAM and another 32 gigs of system RAM.
@alifrahman9447 (a month ago)
Just deleted it after generating an image in 8 minutes on my 2060 12GB, and now you come up with this 🙂🙂
@MonzonMedia (a month ago)
Wow, 8 min? Yeah, try this method, you should get better generation times with this workflow. These new models are not getting smaller, which sucks for those of us with lower-spec GPUs. Keep me posted!
@worldofgames2000 (a month ago)
Thanks for the info! I have a 4070 12GB and generate in around 40 sec with the usual workflow...
@MonzonMedia (a month ago)
Nice! Nowadays 12GB is standard... I guess I need to update my GPU soon! Hahaha!
@korvine7 (a month ago)
Exactly the same configuration and time for most of the main schedulers; some of them take a bit longer. 1152x768, 25 steps: 40 sec.
@thrWasTaken (a month ago)
Was that on the Dev model, and what resolution were you using? I have an RTX 4070 too and it takes me 110 sec with the Dev model to make a 1024x1024 image.
@korvine7 (24 days ago)
Great job. I've got zero skill in generation and everything works perfectly! Faster than Forge UI somehow (~2x the speed, same parameters).
@gameswithoutfrontears416 (a month ago)
Looking good, will be watching Flux closely.
@MonzonMedia (a month ago)
Pretty impressed so far, and good to see there's already a LoRA and ControlNet out; once they're supported, the rest will follow.
@hleet (a month ago)
Do you see a real difference in image quality (or prompt behaviour) between the two (Schnell vs Dev)? By the way, I don't use Split Sigmas to try Flux. You can do it with SamplerCustom, with the sigmas connected to a BasicScheduler (simple). But yeah, load the flux-fp8 diffusion model. I found that if you set weight_dtype to anything other than default, the graphics card goes back and forth; just leaving it at default is fine. I really like Flux, it's very coherent to the prompt. I hope the IPAdapter dev ports his custom node to this one :D
@MonzonMedia (a month ago)
Yes, definitely a big difference in the final output, though I haven't tested it in depth. Text is definitely worse using Schnell and overall quality takes a hit; I mean, it is a distilled model so that's expected. All the other workflows I've tried still took too long for me, considering my limited VRAM, but I'll try the SamplerCustom node. 👍
@SouthbayCreations (a month ago)
Great video, thanks for sharing!
@MonzonMedia (a month ago)
Appreciate it bud!
@alienandroid943 (a month ago)
Thanks, just what I needed to know for 8GB cards.
@user-st2tz7eu9j (a month ago)
Hi, thanks for the guide. Could you please tell me how you made the lines connecting the nodes straight? I'm very frustrated that there are a lot of them and they look like a bunch of wires :) Thanks.
@MonzonMedia (a month ago)
Just go to Settings > Link Render Mode > Straight. I show it in this video kzbin.info/www/bejne/p2LIfH17qLWWl7Msi=I43g-7OYViDaT6ZT&t=596 hope it helps!
@MPCDesenvolvimentoWeb (a month ago)
I've been looking for this ComfyUI add-on for a few days now, to make the workflow lines straight. What is this plugin called?
@MonzonMedia (a month ago)
It's not an add-on, just go to Settings > Link Render Mode > Straight. I show it in this video kzbin.info/www/bejne/p2LIfH17qLWWl7Msi=I43g-7OYViDaT6ZT&t=596 hope it helps!
@DaveCS103 (a month ago)
Thank you for this amazing video!
@MonzonMedia (a month ago)
You're welcome!
@siliconbrush (a month ago)
I know they worked very hard on the text, but I'd bet that's why the model is so large. At the end of the day the text design (the overall layout, fonts, etc.) is basic. The images are phenomenal, but I wonder if they could separate the text part of the model from the image part? Would that make it smaller? I bet it would, quite a bit. Frankly, I'm fine without any text; anything I'd design would be far better in pure vector graphics.
@MonzonMedia (a month ago)
The thing with text is it has to be trained just like anything else. I think there will be a day where you can prompt for the font you want, but it's still early for text development.
@LewGiDi (29 days ago)
Thank you very much 🙏 I'm able to run Flux on a laptop with 6GB VRAM; a 1344×768 pic takes 2 minutes to generate. As you said, when Comfy is loading everything it takes more time (4 minutes), then the time drops. Are you planning an update? The NF4 model from the creator of Forge has been released; it's faster than the fp8 and Schnell models.
@MonzonMedia (28 days ago)
Yes, I covered it here, but using Forge: kzbin.info/www/bejne/b2HGq2ump7CZg7ssi=ktAIpytoWn-az7hq I'm also planning an updated video for Comfy, although it's pretty much the same process. The NF4 model loads as a regular checkpoint, so even the basic workflow works with it.
@L3X369 (8 days ago)
What extension or option are you using for the straight connections? They look awesome!
@MonzonMedia (8 days ago)
It's just a setting you can change, I cover it here. Just select Straight under "Link Render Mode"; you can even hide them too! kzbin.info/www/bejne/p2LIfH17qLWWl7Msi=hpzFqcS1wZu4N4yN&t=596
@vladch3485 (a month ago)
Is anyone else having a problem where you press Queue and after a couple of seconds a Reconnecting window pops up? Using a 3080 12GB...
@MonzonMedia (a month ago)
You need to leave the command window open.
@spaceandstuff (a month ago)
Thanks for the video. This is great for us the poors.
@MonzonMedia (a month ago)
😊 you’re welcome!
@AI_Creatives_Toolbox (a month ago)
Didn't really understand how to use the Split Sigmas node. What gets connected to it and what do I connect from it? Thanks!
@MonzonMedia (a month ago)
I provided a workflow in the description; just drag and drop. On the SplitSigmas node, the low sigmas output connects to the SamplerCustomAdvanced, and the sigmas input on the left side connects to the BasicScheduler. Hope that helps.
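Conceptually, the SplitSigmas node just cuts the scheduler's sigma list at a chosen step, producing a high-sigma half and a low-sigma half that share a boundary value. A toy sketch (not ComfyUI's actual code; the linear schedule is a stand-in for the real "simple" scheduler):

```python
import numpy as np

def simple_schedule(steps, sigma_max=1.0, sigma_min=0.006):
    # Stand-in for a noise schedule: `steps` descending sigmas plus a final 0.
    return np.append(np.linspace(sigma_max, sigma_min, steps), 0.0)

def split_sigmas(sigmas, step):
    # Cut the schedule at `step`; the two halves share the boundary sigma,
    # so a second sampler can pick up exactly where the first left off.
    return sigmas[:step + 1], sigmas[step:]

sigmas = simple_schedule(20)
high, low = split_sigmas(sigmas, 10)
```

In the workflow above only one half is fed to the sampler, but the same split is how two-stage (base + refiner style) sampling hands off mid-schedule.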
@AI_Creatives_Toolbox (a month ago)
@@MonzonMedia It definitely does, thank you!
@liquidmind (a month ago)
what a great video!
@MonzonMedia (a month ago)
Thank you! Appreciate that!
@liquidmind (a month ago)
@@MonzonMedia Brother, you have an amazing narrator voice. Thanks for taking the time to make these videos.
@liquidmind (a month ago)
@@MonzonMedia BTW, someone here said it could run on an RTX 2060 with 6GB VRAM and 16GB RAM, but it took like 4 minutes!!! I have the same card, 6GB VRAM, but 32GB RAM instead... do you think it's worth a try?
@MonzonMedia (a month ago)
You should be able to run it but obviously will take a bit longer. Go for it and let me know how it goes! 👍🏼
@liquidmind (a month ago)
@@MonzonMedia i will!!
@MrPool-fk9ll (a month ago)
Please help me with this error 😭😭 It says: "Error occurred when executing DualCLIPLoader: CLIP.__init__() got an unexpected keyword argument 'state_dicts'". I followed every step shown in the video. My specs are: i7 14th gen, RTX 4060 Ti 16GB, 32GB RAM 6000MHz.
@SunnyEscapades (a month ago)
That helped a lot. Thank you.
@motopaediatheview9284 (a month ago)
I run full Flux Dev on a 6GB GTX 1060; it takes time, but it works...
@MonzonMedia (a month ago)
How much time? But yeah, a 1060 with 6GB is probably pushing it. Hope they come out with a more optimized version.
@motopaediatheview9284 (a month ago)
@@MonzonMedia Up to 15~20 minutes per 1280x1024. I don't know how accurate the meter is, but it rarely goes over 65% GPU use; VRAM is of course at 100%, and temps 65~70°C.
@relexelumna5360 (a month ago)
Will it be faster on an AMD RX 7800 XT than an RTX 4070? RX cards are heavily gaming-focused and quite unheard of in AI stuff. I'm curious, and no one's doing AI reviews of them.
@MonzonMedia (a month ago)
Not sure how it will run on AMD, unfortunately. Typically AMD doesn't run too well with AI stuff; AMD + Linux is another story though. Currently everything out now is built around Nvidia's CUDA cores. I'm sure that will change in time.
@relexelumna5360 (a month ago)
@@MonzonMedia Thank you. I hope Flux will fix the unoptimized, GPU-hungry issue. I've observed that it's only good at text and hands, while the rest looks AI-generated, which is disappointing for a 22GB checkpoint.
@MonzonMedia (a month ago)
Actually, it can do very photorealistic images if you prompt correctly for it. I have a few examples in the video, but those were simple prompts and still have a bit of a hyperrealistic look. It can be done though, I've done it myself. Also bear in mind this is a base model. There's already a realistic LoRA out, it's just not compatible with ComfyUI at the moment. Fine-tuned models will likely be trained as well, although licensing prevents commercial use; the Schnell model is open source though.
@relexelumna5360 (a month ago)
@@MonzonMedia Oh OK. I would love to try both Dev and Schnell on my RTX 4070 and see how much time they take. I think we can convert Flux Dev to TensorRT to make it faster in Auto1111, but sadly it might not be supported, as it only works in ComfyUI. Open source is the way to go for long-term creative work, and I love it more than closed source. Thank you for the reply.
@westingtyler1 (a month ago)
3:20 but where do we download those clip models?
@MonzonMedia (a month ago)
Link in the description my friend. huggingface.co/comfyanonymous/flux_text_encoders/tree/main
@rorymorrissey4970 (a month ago)
Is it still mostly restricted to Nvidia GPUs, or have AMD cards gained the ability to use image-gen AI stuff like Flux now? I'm a bit out of the loop on non-Nvidia cards.
@MonzonMedia (28 days ago)
Yeah, Nvidia is still the way to go. AMD is getting there, but usually on Linux, and it's still a pain to deal with. I'm seeing more AMD videos though.
@Ryographix (22 days ago)
I'm using an RTX 3060 with 16GB RAM. Will it work with the actual download from GitHub, idol? Thank you!
@dawelimey9819 (a month ago)
How did you get the system status bar (GPU, VRAM) next to the Queue Prompt?
@MonzonMedia (29 days ago)
Just go into the ComfyUI Manager and search for the Crystools extension. Install it and you should be good to go!
@farslght (a month ago)
I've tried both the 16GB version and the multi-model version and got the same generation time on my system, which is 16GB RAM (yes, I know) and a 3060 with 12GB VRAM.
@fixelheimer3726 (a month ago)
Hey, I don't see guidance/CFG values in your workflow? Why's that?
@MonzonMedia (a month ago)
This model really doesn't need CFG; the default is 1, as recommended. If you use the fp8 version from the ComfyUI link with the "checkpoint example workflow", you can use it like any other checkpoint, which gives you access to CFG. I wouldn't go higher than 3.5 though. Hope that helps!
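The reason a guidance-distilled model can run at CFG 1 is that classifier-free guidance at scale 1 reduces to the conditional prediction alone, so the second (unconditional) model call can be skipped, roughly halving the work per step. A toy illustration with scalar stand-ins for the model outputs:

```python
def cfg_combine(uncond, cond, scale):
    # Classifier-free guidance: push the unconditional prediction
    # toward the conditional one by `scale`.
    return uncond + scale * (cond - uncond)

# At scale 1.0 the result is exactly the conditional prediction,
# so the unconditional forward pass never needs to run at all.
assert cfg_combine(uncond=2.0, cond=8.0, scale=1.0) == 8.0
```

At higher scales (e.g. 3.5 with the fp8 checkpoint workflow) the unconditional pass is needed again, which is part of why raising CFG slows generation down.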
@rennynolaya2457 (a month ago)
Hi, I tried your notebook with those models but it runs much slower than the normal fp8 version. I have a 3060 with 32GB of RAM; I think that notebook needs to be optimized.
@Varibam (a month ago)
I don't want to attack or offend, but I thought the Flux.1 Dev model you are showcasing is for non-commercial use only, and your video is monetized and all...
@MonzonMedia (a month ago)
Appreciate your concern, but the licence states we have the right to use the output commercially. The terms just prevent anyone from using their model for a service, or from using a fine-tuned model commercially. If that were not the case, all the content creators would be liable. Nothing to worry about, my friend. 👍🏼
@ghilesbardi (a month ago)
much appreciated sir =) !
@MonzonMedia (a month ago)
You're welcome!
@A.I.Ther_Technology (a month ago)
Update: it took up to 45 min to build. Original message: I'm trying it with a 4060 with 8GB of VRAM and 16GB of RAM, but it doesn't move past 8%: it shows an upper bar with "(1) 8% - UNETLoader" and doesn't change.
@tobycortes (29 days ago)
I really can't see any difference between the models if you go 30+ steps. Don't know why everybody keeps using only 4 steps and calling it a quality loss; go 30+ steps with Schnell.
@MonzonMedia (29 days ago)
Schnell is a distilled model which only requires 4-8 steps. It's mostly a matter of speed; 30 steps is overkill.
@JimGardner (a month ago)
I'm just getting blank images. RTX 3060, 12GB VRAM. I have the --lowvram flag in the startup script; with or without it makes no difference. Would really appreciate help with this. Thanks.
@JimGardner (a month ago)
In case anyone from the future is reading this and screaming "why is nobody else experiencing this" I fixed it by reinstalling the NVidia device drivers.
@MonzonMedia (a month ago)
Glad you figured it out 👍🏼
@shivasavant898 (a month ago)
Hi sir, that's a great video! I've got an Intel Core i7-9700F CPU @ 3.00GHz and 16GB of RAM. Do you think this setup would be good for running SDXL workflows? Looking forward to your thoughts!
@MonzonMedia (a month ago)
What GPU do you have?
@shivasavant898 (a month ago)
@@MonzonMedia Zotac GTX 1050 Ti, 4GB video memory
@Arthur-jg4ji (a month ago)
@@shivasavant898 I don't think it's possible to run SDXL with 4GB of VRAM.
@AncientShinrinYoku (a month ago)
@@Arthur-jg4ji It is possible on 3GB with ComfyUI and reasonable resolutions.
@Arthur-jg4ji (a month ago)
@@AncientShinrinYoku Oh? I didn't know. But won't the speed and quality be horrible?
@4thObserver (a month ago)
I'm just curious, can it run without ComfyUI?
@MonzonMedia (a month ago)
As far as I know, not yet. It can work in SwarmUI since it has a ComfyUI backend; I haven't run it in SwarmUI just yet though. github.com/mcmonkeyprojects/SwarmUI/blob/master/docs/Model%20Support.md#black-forest-labs-flux1-models
@brianmolele7264 (a month ago)
I'll wait for the optimized version. I downloaded it twice over slow internet 😭. It crashes on my RTX 4060, Xeon E5-2680 v4 CPU and 16GB RAM. If this video had come out earlier I wouldn't have deleted the model.
@MonzonMedia (a month ago)
Not sure when that will happen; it seems these models are getting bigger and bigger. I also think your bottleneck is your system RAM. For anything text-to-image, 32GB of system RAM is recommended.
@0A01amir (a month ago)
Great video. I wish for a 5B or 8B version of the model so we can use it with ease.
@MonzonMedia (a month ago)
Yes, me too! It seems like there's more and more need for VRAM.
@KlausMingo (25 days ago)
The waitress's leg looks weird at 10:42.
@fixelheimer3726 (a month ago)
What's wrong with the two front teeth? In SD the model often couldn't decide between teeth and lips in that spot.
@MonzonMedia (a month ago)
It makes the person look like they have buck teeth. It's very common in SD 1.5, not as much in SDXL.
@fixelheimer3726 (a month ago)
@@MonzonMedia As long as it's not overpronounced I don't mind; on the contrary, it can look sexy IMO :D
@MonzonMedia (a month ago)
hahaha yeah I hear ya 👍
@oaahmed7515 (a month ago)
And 8GB VRAM?
@MonzonMedia (a month ago)
Yes sir, in all my videos I use a 3060 Ti with 8GB VRAM.
@kallamamran (a month ago)
I have a 3090 with 24GB VRAM. Using low sigmas still uses 24GB VRAM. Either it doesn't work or it's flexible somehow 🤔 Maybe my 32GB of system RAM is the limiter?
@MonzonMedia (a month ago)
I have 32GB of system RAM too, so that should be fine. What do you mean it doesn't work? Is it not generating an image? I can't help without context, my friend.
@shareeftaylor3680 (a month ago)
Can you use both CLIP-L and CLIP-G from SD3 on Flux.1? It would help those who have less RAM. I'm trying to run this on CPU with 10GB VRAM and 10GB RAM, hope it works.
@MonzonMedia (a month ago)
Yes, I believe it's the same, but you have to use one of the T5 clip encoders as well, unless you download the version that has the clips baked in.
@DezorianGuy (a month ago)
I get anime images with Flux. How do I get a realistic style?
@MonzonMedia (a month ago)
Just prompt for things like "cinematic film still", "photo of a...", etc. It's all in the prompt.
@DezorianGuy (a month ago)
@@MonzonMedia I don't get realistic images, just 3D or anime, only sometimes real ones. Flux isn't really fleshed out, it seems. We need to wait a few months.
@MonzonMedia (a month ago)
@@DezorianGuy Must be your prompts; as you saw in my video, I got many photorealistic images from very simple prompts. Give us an example of your prompt.
@shareeftaylor3680 (a month ago)
I was able to use this on my PC with a 4GB VRAM GTX 1650 Super and 23GB of DDR4 RAM.
@MonzonMedia (a month ago)
Nice! How long were your generations?
@shareeftaylor3680 (a month ago)
@@MonzonMedia 512x1024, 4-step Schnell, took 4 minutes.
@shareeftaylor3680 (a month ago)
@@MonzonMedia You just need the --lowvram command.
@fredpourlesintimes (a month ago)
Tested; not efficient at all, even with Schnell (8GB).
@Pawel_Mrozek (16 days ago)
A million tutorials on how to use Flux in ComfyUI, very few on how to set everything up in a normal UI for ordinary people, and none on how to use ControlNet with it without ComfyUI involved.
@MonzonMedia (16 days ago)
You're right, that's why I made these videos, and I'm editing another one on other Flux models. Install Forge: kzbin.info/www/bejne/fHzdp3t8qchrhJIsi=LLpZYf8g0aqrzGuz Using Flux Dev in Forge: kzbin.info/www/bejne/b2HGq2ump7CZg7ssi=Kg1-f2iSWYoZegQC As for ControlNets, they are out now for ComfyUI (I also have a video coming on that), but they're still not available for other platforms.
@danielc121 (a month ago)
I am getting kind of blurry, low-resolution results somehow.
@MonzonMedia (a month ago)
What are your specs and settings?
@danielc121 (a month ago)
@@MonzonMedia Welp, actually I use it on Tensor.Art: Euler normal, 25 steps, CFG 3 or 3.5. It's better now; it seems it was a problem with the samplers.
@GoAnim (a month ago)
Not working on an i5 with 4GB VRAM and 8GB RAM.
@MonzonMedia (a month ago)
Yeah, it won't run on those specs. You need at least 6GB VRAM on an Nvidia card and 32GB system RAM; maybe 16GB, but it would take longer.
@pibyte (23 days ago)
YES! HAHAHA STEALING WAS NEVER EASIER! LOVE IT!
@regularguy23 (a month ago)
do you actually make money from flux?
@MonzonMedia (a month ago)
Not sure what you mean? From the developers? If that's your question, no.
@user-yi2mo9km2s (a month ago)
Censored; important "data" removed.
@MonzonMedia (a month ago)
It's not completely censored. I'm sure there will be some fine-tunes eventually.
@WayOfTheZombie (a month ago)
:/ but will it make boobs?! Thank you so much, great info here!
@MonzonMedia (a month ago)
hahaha it's somewhat uncensored but yes you can with the right prompts.
@marshallodom1388 (a month ago)
Looks like it can do pretty nice spaghetti alien women wearing colored gauze dresses with human like hands.
@MonzonMedia (a month ago)
Hahaha! Can’t say I’ve tried that but now I’m curious! Speaks volumes for its prompt coherency! 🙌🏼
@NotThatOlivia (a month ago)
It would be better if you crafted your own workflow, since it could be slightly more optimized for 8-12GB VRAM than the one you're showing...
@MonzonMedia (a month ago)
Oh yes, definitely! I just wanted to show the basic workflow for people with lower-end GPUs. The problem with these newer models is the size of the files; not a good sign for people like me. I've also created workflows for upscaling, touch-ups using SDXL as a refiner, img2img, etc.
@DezorianGuy (a month ago)
@@MonzonMedia I'm new to ComfyUI and workflow creation. Which would you recommend for my 12GB card? The one you showed at the beginning of your video?
@MonzonMedia (a month ago)
Yup, it's in the description 👍 Download it, then drag it into the workspace; make sure you've updated ComfyUI.
@dirtydevotee (a month ago)
I'm going on record as saying Flux.1 is total garbage.
@MonzonMedia (a month ago)
You're entitled to your opinion, but I'm curious why you think so?
@dirtydevotee (a month ago)
@@MonzonMedia My pleasure. First of all, let's call it what it is: it's "Stable Diffusion 3.1". They changed the name because SD3 was so bad that it tarnished the brand, and the old company is being sued into extinction by Getty. Second, it uses more electricity than Midjourney and the other SD models, so they're making the world a worse place. And why? Because they want to gen at a higher resolution so it's "one-stop". But that's stupid: gen at the lowest resolution in seconds (instead of minutes) and then upscale the good stuff to get what you need. Finally, everyone's saying it's "uncensored". That's a lie. I have personally used it and it is censored to the hilt. They want Wall Street VC money, and Wall Street worries about bad news stories about porn, so to get the money they crippled the thing by removing large quantities of "sexy" data. You may also want to ask why they refused to tell Ars Technica what training data they used.
@rogergoldwyn3851 (a month ago)
Hey, I have followed every step, but when I start ComfyUI (I'm using the CPU one since I don't have Nvidia), nothing shows up in the model list. I haven't put anything into Checkpoints yet, so maybe I've missed something. Mind helping me out with what I should put into the checkpoints folder, please? :)
@QuickBeat (a month ago)
With your exact configuration I'm getting the following error: "got prompt model weight dtype torch.bfloat16, manual cast: torch.float16 model_type FLUX Killed". I have 16GB VRAM.
@Giorgio_Venturini (a month ago)
Hi, what is the difference between the flux1-dev-fp8.safetensors model from Comfy-Org (17.2 GB) and the one from Kijai (11.9 GB)? Thanks
@MonzonMedia (a month ago)
The 17GB one you can use as a normal checkpoint in any workflow; it also has the clip encoders baked in. However, due to its size, generation times will increase. If your GPU has more VRAM (16GB+) it should be fine to use, but if you have 12GB or less, the 12GB file is best, used with the Split Sigmas node shown in the video.
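Rough arithmetic suggests the gap between the two downloads is mostly the bundled text encoders. The component sizes below are assumptions based on common download sizes, not official figures:

```python
# Rough component sizes in GB; these are assumptions, not official figures.
unet_fp8 = 11.9             # standalone fp8 UNet file (Kijai-style)
t5_fp8, clip_l = 4.9, 0.25  # approximate text-encoder file sizes
bundled = unet_fp8 + t5_fp8 + clip_l
# `bundled` lands near 17 GB, close to the 17.2 GB all-in-one checkpoint,
# consistent with the difference being the baked-in encoders (plus the VAE).
```

So with the smaller file you aren't losing UNet quality, you're just loading the encoders from separate files instead.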
@Giorgio_Venturini (a month ago)
@@MonzonMedia Thanks, and see you in your next videos.