LOL There is NOTHING Simple about ComfyUI!! I'll wait for Forge support.
@bWWd0 · 4 hours ago
When you add something after Apply Style Model, your inference gets chopped and takes twice as long to generate, so they have to fix stylisation strength internally, without extra nodes.
@jugibur2117 · 5 hours ago
Thanks for the introduction! The AI wheel of news is rotating so fast, I can't keep up with internalizing all this...
@user-pt1kj5uw3b · 7 hours ago
Huge upgrade to flux
@user-hi3ke6qh7q · 9 hours ago
Anyone else notice the first sign in the first example is trying to say "Her Body". Predicting someone in the circle making the news for SA in the next 6 months.
@ovideotube · 9 hours ago
Is it compatible with GGUF Flux models?
@ovideotube · 9 hours ago
I found the answer myself by trying it: yes, the GGUF models also work.
@DaveTheAIMad · 9 hours ago
dear god these files are MASSIVE lol
@ChrissyAiven · 10 hours ago
Great, I’m going to try it out! :) But why isn’t anyone building a workflow that combines outpainting with controls? For example, one area could remain unchanged, while the outpainted area is controlled by something like Canny or OpenPose. This would be amazing for products that need to maintain a consistent appearance. I’m not sure if this is even possible, but maybe I’ll give it a try once I gain more experience. :)
@undoriel · 12 hours ago
Hopefully someone can make it work on 12GB GPUs *fingers crossed*
@lennoyl · 13 hours ago
Remember to update ComfyUI before using all of this (you probably said it in the video but I was in too much of a hurry to try ^^), or you will get some errors (or the LoRAs will do nothing).
@FlyingCowFX · 13 hours ago
Awesome tutorial! However, when I use the Flux Fill model, I am getting very noisy images... especially noticeable when zoomed in. Any tips?
@antonivanov5782 · 14 hours ago
Thanks for the Crop And Stitch node! You helped me a lot!
@adamholter1884 · 14 hours ago
Nerdy rodent really makes my day, showing us AI in a really British way.
@TRoJMelencio · 16 hours ago
Nothing for Forge UI yet?
@blakecasimir · 17 hours ago
Hopefully this means we're closer to a Fooocus Flux Edition?
@BetatestBanned · 18 hours ago
Been following your channel for almost 3 years. Thank you for always delivering, Rodent!
@ArtificialDILOmcfly · 18 hours ago
Huge thanks for bringing this content so quickly! God bless you!
@christianholl7924 · 18 hours ago
Hi! Awesome introduction to the new Flux Tools! Instead of two ConditioningSetTimestepRange nodes you can also combine two ConditioningSetAreaStrength nodes. The strengths of the two ConditioningSetAreaStrength nodes should always sum to 1. This setup gave me much more control over the results.
@NerdyRodent · 15 hours ago
Nice!
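The complementary-strength idea from the thread above can be sketched numerically. This is a hypothetical illustration (not the actual ComfyUI node code, and `blend_conditionings` is an invented name): when the two strengths sum to 1, the result is a convex combination, so the overall conditioning magnitude stays constant while the balance shifts.

```python
def blend_conditionings(cond_a, cond_b, strength_a):
    """Blend two conditioning vectors with complementary weights.

    strength_a and strength_b always sum to 1, mirroring the rule of
    thumb for pairing two ConditioningSetAreaStrength nodes.
    """
    if not 0.0 <= strength_a <= 1.0:
        raise ValueError("strength_a must be between 0 and 1")
    strength_b = 1.0 - strength_a  # the complementary strength
    return [a * strength_a + b * strength_b for a, b in zip(cond_a, cond_b)]
```

Sliding `strength_a` from 0 to 1 then fades smoothly from one conditioning to the other instead of stacking both at full strength.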
@loubakalouba · 18 hours ago
I see new flux goodies on reddit, I run to nerdy rodent's channel. Thank you.
@M14wAI-mj6so · 19 hours ago
Is there a way to do InstructPix2Pix with Flux? Where is the model?
@LeadingEdgeStory · 19 hours ago
Great. How do I get the Force/Set CLIP Device node?
@genAIration · 17 hours ago
If you have one GPU, you don't need that. It can break some nodes.
@NerdyRodent · 15 hours ago
It’s from extra models. Really handy for low VRAM cards!
@gnsdgabriel · 20 hours ago
Use Image Composite to composite the output with the original for the areas that are not supposed to change.
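A minimal sketch of that compositing step, assuming Pillow is installed and `keep_mask` is a white-on-black mask marking the pixels to take from the original (the function name is illustrative, not from the video):

```python
from PIL import Image

def restore_unchanged(original, generated, keep_mask):
    """Paste 'original' pixels back over 'generated' wherever keep_mask
    is white (255); black (0) keeps the generated pixels."""
    return Image.composite(original, generated, keep_mask)
```

In a ComfyUI graph the equivalent is a masked image-composite node fed with the sampler output, the source image, and the inverted inpaint mask.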
@Paulo-ut1li · 20 hours ago
Great! Using ControlNets with Redux makes it way better and more versatile.
@raphaild279 · 19 hours ago
Please explain: I get errors even when I do everything as in the video. There's an error in "Load Style Model" for Apply Style Model.
@Paulo-ut1li · 18 hours ago
@@raphaild279 Update ComfyUI.
@draken5379 · 21 hours ago
The Fill model, tbh, is a bit of a dud. You get very similar results just using differential diffusion and the base Flux model; I assume that is why they don't compare against that. It's kinda silly not to compare to your own base model doing inpainting. Clearly they're not 'happy' that their Fill model is most likely only a tiny bit better than the base model.
@alex.nolasco · 22 hours ago
Where did you get sigclip_vision_patch14_384.safetensors from?
@msclrhd · 17 hours ago
There's a link to it on the ComfyUI_examples page in the "Redux" section. It's on the Comfy-Org Hugging Face page, in the sigclip_vision_384 repository.
@icedzinnia · 22 hours ago
It sounded like you said "Jeweled Clip Loader" and that's what I'm going to call it from now on. 1:44
@caleboleary182 · 23 hours ago
first
@achidavitadze2181 · 1 day ago
10:34 Install part
@p_p · 3 days ago
5:21 Unfortunately the trick doesn't work well with other LoRAs; probably only this design LoRA does the job well. No idea.
@p_p · 3 days ago
I installed Ollama from the website with the exe installer; not really sure if this is running locally or not 🙄🙄
@NerdyRodent · 3 days ago
Yup, as it’s installed & running on your pc it’s running locally! 👍🏼
@TheSORCERER-p9l · 3 days ago
Which one did you use for the woman with the red hair? That looks like the answer to PuLID plastic skin; I'm just not sure what you did to achieve that.
@p_p · 3 days ago
I love this YouTube channel. Every time, I can't wait for new videos to come out. I literally hang on this guy's every word to try all the new stuff.
@karvakorvatjupulit5716 · 4 days ago
Initially Inpaint was working, but then I started getting this error: Inpaint Anything - ERROR - Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install 'accelerate>=0.26.0'`
@larryvw1 · 4 days ago
Where are the models or safetensors stored in OmniGen? It appears that when using ComfyUI it downloads them each time. Please advise.
@electronicmusicartcollective · 5 days ago
My lovely Rodent, thx 💫
@isabellatang7753 · 5 days ago
this is Flux???? no thanks 3:15
@myta6op402 · 5 days ago
Why do they stick gay flags everywhere?
@photogeneze · 5 days ago
Could you please share your workflow file here? The one with outpainting.
@zzzzzzz8473 · 5 days ago
Yeah, these are very interesting for their use with inpainting. I can imagine this as more controlled underlying steps for things like OmniGen manipulation, and more modularized: we can train specific synthetic transformations as separate LoRAs, then apply them to any input image. It's much more than a style transfer, as it is not limited by the underlying starting pixels of img2img.
@ChanhDucTuong · 5 days ago
Nice tools and ideas. Thank you very much.
@USBEN. · 6 days ago
Getting closer to consistency workflows for comics and animations.
@marp32 · 5 days ago
ControlNet will get you there.
@noonesbiznass5389 · 6 days ago
This works pretty dang good actually, thanks for exposing me to this.
@kariannecrysler640 · 6 days ago
💗 🐁
@Elwaves2925 · 6 days ago
I'll have to try these in the morning but I like the look of them.
@juanjesusligero391 · 6 days ago
Oh, Nerdy Rodent! 🐭🎵 He really makes my day! ☀ Showing us AI, 🤖 in a really British way! ☕🎶
@NerdyRodent · 5 days ago
😀
@fixelheimer3726 · 6 days ago
This would be nice as an img2img node, without the need for workarounds.
@Paulo-ut1li · 6 days ago
Couldn't find the proof-of-concept outpainting workflow on Patreon.
@NerdyRodent · 5 days ago
It’s the second JSON file (with IMGOP in the name), as detailed under the “workflows” heading with more info
@aestheticPhantasm · 6 days ago
"Day"mon
@tdfilmstudio · 6 days ago
I am so glad you are covering this. In-Lora is very helpful