2:41 WHAAAAAT That's how that works! Oh my goodness! I heard someone say the picture is the workflow, but didn't get it. Now I do :) Thank you!
@JohnVanderbeck • 5 months ago
It's one of the magical bits about ComfyUI :)
@memoryhero • 6 months ago
The constant quick side commenting was magical in this vid - you kept it brief enough that veterans won't feel bogged down by old, redundant info, but newbies will still benefit hugely from it. World-class tutorial protocol.
@Copperpot5 • 6 months ago
Excellent job on this workflow! Playing w/ it now after making a few of my own / using some common ones from civ/discord, but your incorporation of the LLM Party node + autosizing etc. is simply brilliant. Hope all is well!
@Cadmeus • 6 months ago
A node that really helps to manage latent/image sizing btw is an underappreciated little extension from Ser-Hilary, called SDXL_sizing. Automatically spits out the right size for any base resolution (e.g. 512, 1024, 2048), at any given aspect ratio. I use wildcards from Impact Pack as an input to the aspect ratio setting, which works *really* well with Flux.
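The core idea behind sizing helpers like that can be sketched in a few lines of Python (a hand-rolled approximation, not Ser-Hilary's actual code): pick a width and height whose ratio matches the requested aspect and whose product stays close to base², with both snapped to a multiple the model is happy with (Flux likes multiples of 16, SDXL traditionally 64).

```python
import math

def size_for(base: int, aspect: float, multiple: int = 64) -> tuple[int, int]:
    """Return (width, height) with width/height ~= aspect and
    width*height ~= base*base, both snapped to `multiple`."""
    width = base * math.sqrt(aspect)
    height = base / math.sqrt(aspect)

    def snap(x: float) -> int:
        # Round to the nearest multiple, never below one multiple.
        return max(multiple, round(x / multiple) * multiple)

    return snap(width), snap(height)

print(size_for(1024, 16 / 9))  # landscape at a 1024 base
print(size_for(1024, 1.0))     # square at a 1024 base
```

Feeding the aspect ratio from a wildcard, as described above, then just means varying the `aspect` argument per generation.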
@esuvari • 6 months ago
Canny for Flux was just released today
@NerdyRodent • 6 months ago
I thought someone might notice that sneaky screenshot I put in 😉
@swannschilling474 • 6 months ago
OMG it's really going well!!! 🎉🎉🎉
@PhotoBomber • 6 months ago
What's canny?
@JustFeral • 6 months ago
@@PhotoBomber A ControlNet model type. Lines and such; google it.
@256chiru • 6 months ago
What is canny?
@akratlapidus2390 • 6 months ago
Nerdy, you always deliver!!! 👌🏻👏🏻👏🏻👏🏻👏🏻
@BirkB1 • 6 months ago
Thank you. LLM Party sounds awesome 😄
@paulotarso4483 • 4 months ago
love this narration haha thank you!
@wakegary • 5 months ago
Bill didn't seem to mind the multiples of 16! Stay nerdy!
@purposefully.verbose • 6 months ago
"nice beaver" ok, thanks for that.
@joeduffy52 • 6 months ago
I've just had it stuffed.
@synthoelectro • 6 months ago
And for those who are stuck with 4GB VRAM: just use a large virtual memory and about 768 x 768. It takes up to 8 mins depending, but hey, we did it before on 1.5 and SDXL. We can keep going; you can do this.
@riflebird4842 • 5 months ago
@@synthoelectro What do you mean by "use a large virtual memory"? Can you explain?
@synthoelectro • 5 months ago
@@riflebird4842 A swap file.
@massibob2004 • 5 months ago
My god! You're good :)
@MilesBellas • 1 month ago
"Pandora's Box of Oddities!" 😅🤣👍
@deadlymarmoset2074 • 6 months ago
OOOOOooh NErdy rODEnt...
@DeconvertedMan • 6 months ago
:) cute things AI makes are cute.
@icchansan • 5 months ago
Amazing! Tutorials on making a custom LoRA soon?
@jonmichaelgalindo • 6 months ago
Even if outputs were non-commercial, education and reporting are protected fair use. (Education and reporting are inherently commercial. Teachers and journos have to get paid.) Fair use is the doctrine all AI model training is built on. 😊
@equilibrium964 • 6 months ago
A face detailer workflow for Flux would be really useful.
@erikjohnson9112 • 6 months ago
Almost 50K subs. I'll do my part. (just subbed)
@scarletsword45 • 6 months ago
I really like Flux, but I'm disappointed that I keep having to buy a new GPU to keep up with the demands of new AI art models. 😁
@SouthbayCreations • 6 months ago
Here's my theory: buy the biggest (consumer) GPU available, a 4090, and be good for a few years. People will spend the minimum and expect to be good forever. In the AI world, models are only going to keep getting bigger and more reliant on VRAM. It's unfortunate, but if we want better AI then it comes with a cost.
@blakecasimir • 6 months ago
I hope Fooocus adds support for Flux
@yngeneer • 6 months ago
Let's Party 🎉🥳
@eveekiviblog7361 • 5 months ago
It says to install flash attention. Where do I get it, and where do I put it?
@raducodreanu2309 • 6 months ago
Hi, thanks for the awesome tutorial! How did you get the labels updated for the switches input (e.g. "1_llm_enhanced" instead of "text1")?
@joeduffy52 • 5 months ago
Right-click on the small dot next to the label and you should see "Rename Slot".
@dagkjetsa8486 • 6 months ago
Hello. Great tutorial! :) Though... I am trying to follow your tutorial and get llama 3.1 up and running in ComfyUI, but it is not working. I am not sure what to put into the base_url field of the API Large Language Model Loader. Any help? Also, why are you using this node and not the "Local Large Language Model"? Thanks
@shareeftaylor3680 • 5 months ago
Can you please compare the different Flux GGUF versions?
@TobinatorXXL • 6 months ago
Hello, my LLM Party problem: Error code: 500 - {'error': {'message': 'llama runner process no longer running: -1 ', 'type': 'api_error', 'param': None, 'code': None}}
@JohnVanderbeck • 5 months ago
Can someone help me understand why the fp16 model has to be used as a unet while the fp8 can just be used as a normal checkpoint? What even is the difference between unets and checkpoints, and why does it matter for what is essentially the same model?
@NerdyRodent • 5 months ago
Think of the checkpoint as being like a zip file, where lots of things (the unet, text encoders and VAE) are collected together - whereas the unet is just one of those files.
@squallseeker-i2i • 6 months ago
Not mentioned, but I presume that to use the standard checkpoint loader I must move the Flux models from unet to where they will be visible to the std loader. I ran the update-all before getting started tonight, and Flux is basically broken on my 3080 now after using it all week... so I don't have a choice other than trying to modify the workflow.
@simonmunk4326 • 6 months ago
@@squallseeker-i2i The checkpoint version of the model is not the same file as the unet version. Move the unet version back and download the checkpoint version. The checkpoint version is around 18 GB.
@NerdyRodent • 6 months ago
@squallseeker. No - use the linked files above each workflow, as they're appropriate to that workflow. For example, to use the one shown in this video you'll need to download the file as shown, directly to the location shown.
@joeb2920 • 6 months ago
Do you have a copy of the final workflow that we can download?
@NerdyRodent • 6 months ago
Of course! www.patreon.com/posts/ai-enhanced-flux-109665789
@danowarkills4093 • 2 months ago
Can we get the img2img workflow?
@EmmaFitzgerald-dp4re • 6 months ago
Thanks for the vid, really appreciate it! But the resources needed to run Flux... it's still just too demanding for me.
@CapaUno1322 • 6 months ago
Hi buddy, what GPU are you using? I have an RX 6800, which I am really happy with, and it's 16GB for half the price of Nvidia as of last week. Hopefully I will be able to get it to work with a few tricks here and there... just wondering... thanks! ;D
@NerdyRodent • 6 months ago
I've got an old 3090, as VRAM is where it's at. AMD cards should work for the most part on Linux, though on MS Windows it's likely to be a slightly more bumpy journey and may not work at all for many things!
@CapaUno1322 • 6 months ago
@@NerdyRodent Thanks for your reply. I've had a few 'bumps' with Stable D, but I did overcome them. I've discovered ZLUDA and Anaconda, and people have things up and running, but it's a lottery as to how much ball ache you may have to experience. One guy is getting really fast renders with ZLUDA on AMD, so eh, fingers crossed. Thanks for your helpful videos, and I'll let you know. Just to add that in gaming the RX 6800 is only 20% lower on average FPS than the RTX 3090. I know there's more to AI, but that's really good, and a used 3090 is still $6-700 depending on which one... so hopefully I'll get some reasonable performance as well... have a fun day!
@NerdyRodent • 5 months ago
@@CapaUno1322 nice!
@michaelbayes802 • 6 months ago
The ability to load the Flux models via "Load Checkpoint" throws up an error for me... "Could not detect model type of..." Did you have this problem?
@NerdyRodent • 6 months ago
Are you using the special fp8 checkpoint downloaded via the link above the workflow, like in the video?
@tosvus • 5 months ago
Does your Patreon have workflows that work with the largest Flux dev model? I have the regular one working fine, but it looks quite different from what is here (and uses the 24GB file), and I would like to get all the extra stuff on your Patreon if it works with that as a basis (or basically works with the 24GB-related files). BTW: I use an RTX 4090. Thanks!
@NerdyRodent • 5 months ago
Yup, there's a whole boatload of workflows 😉
@tosvus • 5 months ago
@@NerdyRodent Thanks, signing up!
@blakemann2794 • 6 months ago
I don't have Comfy installed yet... but I've been wanting to try Flux img2img with my DAZ3D renders... It'd be interesting to see how subtle I can make the changes... I just want to make the renders look either painted/illustrated or more realistic, while keeping as much detail as possible. I got some good results in A1111 by simply upscaling renders with certain models and prompts in effect... I'm wondering if I can do the same with Flux.
@TR-707 • 6 months ago
install it already
@NotThatOlivia • 6 months ago
This is not the best approach - why load LLMs and the corresponding nodes into RAM/VRAM, if you can create prompts with them first and run Comfy afterwards to generate?
@joefawcett2191 • 5 months ago
definitely a better idea
@juanjesusligero391 • 6 months ago
Oh, Nerdy Rodent, 🐭 he really makes my day; 😎 showing us AI, 🤖 in a really British way. 🫖 🎵🎶
@NerdyRodent • 6 months ago
😄
@jimdelsol1941 • 6 months ago
Long version when??!!
@CheoWalker • 5 months ago
As of today, CLIPTextEncodeFlux doesn't have the t5xxl input
@Kvision25th • 5 months ago
I'm starting with Flux; can you share that workflow?
@NerdyRodent • 5 months ago
Sure! The pre-made version you can grab from www.patreon.com/posts/ai-enhanced-flux-109665789
@MilesBellas • 4 months ago
Llama 3.2 for ComfyUI Flux? A prompt enhancement node?
@NerdyRodent • 4 months ago
Yup, works great with Llama 3.2… or indeed much larger models
@EllaFinch0812 • 6 months ago
Hi, do you accept sponsorship for your videos?
@carlodemichelis • 5 months ago
How do you add/remove pin(s) to the slots of a node? For example, the pin for the t5xxl slot (which becomes hidden) in the CLIPTextEncodeFlux node, or the user_prompt pin in the API Large Language Model? (Sorry, complete noob in Comfy.) Thanks.
@NerdyRodent • 5 months ago
You can right-click on any node for a variety of options.
@carlodemichelis • 5 months ago
@@NerdyRodent Yup, of course, but I don't find any options to add/remove pins to slots :-/
@magimyster • 6 months ago
goodbye mj🤭👍
@엠케이-p3p • 5 months ago
Who are the people that can run this workflow? I am on a 12GB 3060 GPU; is it possible to run Flux and Ollama at the same time? Does anyone know about this?
@robbana9909 • 6 months ago
Generating images with the checkpoint models takes forever on my 8GB card compared to the unet models, for some reason.
@luislozano2896 • 6 months ago
This new checkpoint is huge! The regular Flux one is 22GB; the FP8 version is 11GB. We got so used to having SD1.5 at 2-4GB, and XL and Pony at 6GB!
@DrMacabre • 5 months ago
Is there a way to reduce the memory usage of Llama? It's peaking heavily on my 3090 and makes everything painfully slow.
@NerdyRodent • 5 months ago
Using fp8 and a smaller LLM should reduce the load. With 64GB RAM and a 3090 you should see images generate in around 20 seconds.
@DrMacabre • 5 months ago
@@NerdyRodent Damn, I'm already on fp8 with the smallest Llama 3.1 model; with Florence + Llama it goes just far enough over the VRAM to slow everything down.
@NerdyRodent • 5 months ago
You can change the amount of time all models are loaded into memory by setting the `OLLAMA_KEEP_ALIVE` environment variable when starting the Ollama server. You'll need to wait for it to load again each time, but if you've got other things eating that VRAM it may help.
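As a quick sketch of that tip (the values here are just examples to tune to taste): Ollama reads `OLLAMA_KEEP_ALIVE` from the server's environment, where `0` unloads a model as soon as a request finishes and durations such as `5m` keep it resident for that long.

```python
import os

# Example value; "0" unloads each model immediately after a request,
# freeing VRAM for ComfyUI/Flux at the cost of a reload next time.
# Durations such as "5m" or "1h" keep the model resident instead.
env = dict(os.environ, OLLAMA_KEEP_ALIVE="0")

# With Ollama installed, you would then start the server in that environment:
# import subprocess
# subprocess.run(["ollama", "serve"], env=env)
print(env["OLLAMA_KEEP_ALIVE"])
```

Setting the variable in the shell before `ollama serve` achieves the same thing; the point is simply that the server process must see it at startup.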
@DrMacabre • 5 months ago
@@NerdyRodent Done, no more long queue, thank you :) The results with Florence and Llama are simply amazing.
@NerdyRodent • 5 months ago
@@DrMacabre Yeah, it totally changes how you can prompt!
@Tapiolla • 6 months ago
Excuse me, but I didn't understand: where can I find the workflow you have shown in the video?
@SouthbayCreations • 6 months ago
On his Patreon page
@Tapiolla • 6 months ago
@@SouthbayCreations Thank you! Wasn't clear :)
@MediAndLemon • 6 months ago
Do LoRAs work yet (and am I just too stupid for it), or are those also still on the waiting list?
@NerdyRodent • 6 months ago
Yup! LoRAs and ControlNets are available, though it's still early days!
@Utoko • 6 months ago
Yes, but of course they need additional VRAM; ControlNet does too.
@godfuzza2778 • 5 months ago
Has anyone shared this workflow already, or do I have to build it on my own?
@NerdyRodent • 5 months ago
You can get the workflows here - www.patreon.com/posts/ai-enhanced-flux-109665789
@jjog3185 • 6 months ago
Is it possible to run it locally with an RTX 4060 (8GB VRAM) and 16GB RAM?
@BlackParade01 • 6 months ago
@mexihcahcoatl4105 that's with Schnell, right?
@jjog3185 • 6 months ago
@mexihcahcoatl4105 I'll try, and I'll share my result and opinion. Thanks!
@kkryptokayden4653 • 6 months ago
@@BlackParade01 I would use the dev version; it's way better. Slower, but worth it.
@BlackParade01 • 6 months ago
@@kkryptokayden4653 I've used both and tested both intensively. The Dev version is really good for photorealism, but the Schnell model seems great for illustrated outputs. And of course, it's much faster. Both models have their uses. I prefer the Dev, but I use the Schnell for img2img.
@luislozano2896 • 6 months ago
Even with 32GB of RAM, it will fill up, max out swap memory and slow down! Close all other apps and tabs! I just got some extra RAM yesterday, up to 48GB. I also just found the FP8 version and got a bit of a speedup.
@themachine8229 • 6 months ago
Does it work with flux1-dev.sft? Because the model I use doesn't have the .safetensors extension.
@NerdyRodent • 6 months ago
Yup! Remember to use the workflows at the top when using the individual files, though.
@themachine8229 • 6 months ago
@@NerdyRodent thanks bro
@riflebird4842 • 5 months ago
What can 4GB VRAM people do?? 😢
@NerdyRodent • 5 months ago
SD1.5 is a great choice for extra-low-VRAM cards!
@TR-707 • 6 months ago
Hmm, what's this AYS+ scheduler? Also, is Flash Attention 2 required for maximum nerdyness, or not really?
@NerdyRodent • 6 months ago
There are multiple options for Florence, but I find flash attention works just fine! Align Your Steps is just an option that works quite well - kzbin.info/www/bejne/gJi8q3Z7r613qMU
@TR-707 • 6 months ago
@@NerdyRodent Wow, awesome video! I don't see AYS+ in my KSampler though. I do see Beta and Resample...
@Rentoa • 6 months ago
I have the same problem... no AYS+ scheduler?
@NerdyRodent • 6 months ago
@@TR-707 Align Your Steps is just an option that works quite well - kzbin.info/www/bejne/gJi8q3Z7r613qMU
@NerdyRodent • 6 months ago
@@Rentoa Align Your Steps is just an option that works quite well - kzbin.info/www/bejne/gJi8q3Z7r613qMU
@sinayagubi8805 • 6 months ago
Can you somehow simulate negative prompts??
@tc8557 • 6 months ago
@@sinayagubi8805 Since it's an LLM you're prompting, just say what you don't want. Try something like 'with no eyeglasses or makeup on'.
@Elwaves2925 • 6 months ago
From what I've read, Flux doesn't use negative prompts. If you add one it will often ignore it. Try what @tc8557 says and describe what you don't want.
@FusionDeveloper • 6 months ago
My versions are: 2489 (ComfyUI), 2.48.5 (Manager).
@over-dose • 4 days ago
PC requirements??
@NerdyRodent • 4 days ago
Just the same as for Flux :)
@choppergirl • 6 months ago
I found ComfyUI impossible to figure out.
@shApYT • 6 months ago
Then you'll have trouble using any other art software. Nodes aren't hard. Just do it. Blender, Houdini, Substance Designer, DaVinci Resolve, etc.
@choppergirl • 6 months ago
@@shApYT Funny. I tried to get the demo sample project to do anything at all, with no luck. I use other AI programs all the time. ComfyUI just looked like a logic-flowchart mess of object-oriented containers gone off the rails.
@shApYT • 6 months ago
@@choppergirl Millions of artists use Blender, Houdini, Grasshopper, DaVinci Resolve, Substance and countless others. It is the standard interface.
@goodie2shoes • 6 months ago
Did you try the portable/standalone version? And what are your GPU specs?
@Elwaves2925 • 6 months ago
@@shApYT That's complete rubbish about other art software.
@fast_harmonic_psychedelic • 6 months ago
If it uses CLIP, why don't you just use the CLIP vision encoder to input an image and trick Flux by telling it it's a text embedding lol
@elgodric • 6 months ago
Of all of the 8 billion photos out there, you chose Bill fuckin Clinton 👌
@Silberschweifer • 5 months ago
the moment you don't check he mean schnell > Schn-e-ll
@SlyNine • 6 months ago
How do I get the manager menu to come up? I'm new to ComfyUI; sorry if it's a dumb question.
@NerdyRodent • 6 months ago
ComfyUI Manager is the first thing to install right after Comfy itself - github.com/ltdrdata/ComfyUI-Manager
@vitalis • 6 months ago
I see rodent, I click. Simple 👍
@_.o..o._ • 6 months ago
I'm usually very sceptical about "these" things, but why do most of the major YouTube channels with AI tutorials always use a political figure to explain image and video generation with AI? I mean, come on 🤐
@Elwaves2925 • 6 months ago
I'd say it's because it's someone recognisable: a public figure of their own choosing, who doesn't fall into the category of actors etc. who don't get that choice. Or it's something else, like them being easy targets.
@_.o..o._ • 6 months ago
@@Elwaves2925 I would rather see actors' faces. Lately, whatever I try to learn, I see political figures shoved down my throat everywhere 🤮
@Elwaves2925 • 6 months ago
@@_.o..o._ I don't think it's being done because it's political, not on this channel anyway, but for me using actors should be okay as long as nothing outrageous is done with them. 🙂
@hadbildiren123 • 6 months ago
Not a nice model. Generations look like animation or drawings, and the skin is plastic-like or shiny! Carrying on with SDXL!
@daylight3d • 6 months ago
Don't use the Schnell version. It's plastic. The Dev version is much better.
@kkryptokayden4653 • 6 months ago
@@hadbildiren123 I agree, the dev version and the schnell version are very different; it's night and day.
@ArchangelAries • 6 months ago
I hate ComfyUI with a visceral passion
@generichuman_ • 6 months ago
Oh, so you don't know how to use it and you've never even really tried... thanks for sharing!
@qus123 • 6 months ago
I tried loading Flux with the same checkpoint loader as you. It fails with: Error occurred when executing CheckpointLoaderSimple: ERROR: Could not detect model type of: /mnt/e/Projekty/_AI/ComfyUI/models/checkpoints/Flux/flux1-dev-fp8.safetensors (this model does load the "old" way, though)
@NerdyRodent • 6 months ago
Could be the checkpoint file or your version of Comfy? Make sure it's the 17GB one and check the sha256.