This is by far the best explanation of setting up Flux + ControlNet I have seen so far, since you actually explain everything rather than just "here's my over-complicated workflow!". The node layout is so nice and clean. You did more than enough to earn a sub and a like from me. Keep it up!
@goshniiAIАй бұрын
I am glad to hear that the step-by-step approach was clear and helpful for you. Your support is encouraging, and I appreciate your sub and the like. Thank you so much for your time and the amazing feedback.
@bigironinteractive57477 күн бұрын
Thanks for not just showing a final workflow but explaining each node. This is what makes your videos so great.
@goshniiAI7 күн бұрын
You’re welcome, glad you found the breakdown helpful!
@noNumber2Sherlock11 күн бұрын
I've been through a few of your tutes so far, and I am just floored by your expertise, your delivery, and, to top it off, your workflows actually work! Not like some others where it's all just smoke and mirrors, and when you use their workflow you soon find out it was made just for the show. Not you! I am thrilled to have found your channel. Thank you!
@goshniiAI10 күн бұрын
Your excitement is motivating, and I am glad that you’ve not only found value in the videos but also had success with the workflows.
@nikolaprokic23052 күн бұрын
I usually never comment, but this is a really helpful video, man. You explain everything so perfectly. God bless.
@goshniiAI2 күн бұрын
I appreciate you taking the time. I'm glad you found it helpful.
@kajukaipira3 ай бұрын
Amazing, concise, understandable. Congrats man, keep up the good work.
@goshniiAI3 ай бұрын
Thank you so much! I appreciate it.
@jamessenade318127 күн бұрын
Thanks bro... I love the way you detail the whole process... you are a rock star. Merci!
@goshniiAI27 күн бұрын
You are very welcome, and thank you for your compliment.
@240dbprisms52 ай бұрын
OMG bro, just what I need 🔥🔥 THANK YOU. Clear rhythm, working method.
@goshniiAI2 ай бұрын
You are most welcome. I am glad to read your feedback. 💜
@pizza_later3 ай бұрын
So helpful. Thank you for starting fresh and walking us through each step. Definitely earned a sub.
@goshniiAI3 ай бұрын
Thank you so much! I’m honoured to have earned your subscription and glad you found this helpful.
@zoewilliams2010Ай бұрын
Much love from South Africa! Thank you for this video!!! I'm busy making a short horror movie for fun using Flux Dev and KLING to do image-to-video, and this is EXACTLY what I need! Because I need to make consistent characters but I only have 1 input image of the character as reference. Man I didn't know they had a character pose system for flux yet THANK YOU!!! :D this needs to be ranked higher in google!
@goshniiAIАй бұрын
You are very welcome! I am glad it was helpful for your short horror film project, and I appreciate your feedback. It is always great to connect with local creators, especially since I am currently in South Africa. Happy creating!
2 ай бұрын
Just wanted to say, you are amazing!!
@goshniiAI2 ай бұрын
Hearing that means so much. Thank you for your support.
@sergeysaulit3 ай бұрын
Thank you! It’s good that you just tell and show what to do and how to do it. Otherwise you could spend your whole life learning ComfyUI. Learning in the process, through practice, is much easier.
@goshniiAI3 ай бұрын
I'm really glad to hear that the straightforward approach is helping you! Just diving in and practicing as you go makes it a lot easier. Thanks again for the feedback!
@devnull_2 ай бұрын
Thanks and it is nice to see a cleaner node layout, instead of a jumble of nodes and connections, which too many Comfy tutorial makers seem to love.
@goshniiAI2 ай бұрын
I am glad it was helpful! Thank you for the observation and feedback. It means a lot.
@ielohim2423Ай бұрын
This is amazing! Thank you so much. Subscribed!
@Gimmesomemore2012Ай бұрын
Thank you very much for this tutorial... at the right speed and with detailed explanations.
@goshniiAIАй бұрын
Thank you so much for the kind words!
@sudabadri70513 ай бұрын
Superb work mate
@goshniiAI3 ай бұрын
Thank you so much, Suda! Love
@ainaopeyemi3393 ай бұрын
I love this, already subscribed
@goshniiAI3 ай бұрын
Thank you for being here. I appreciate your support.
@cleverfox44133 ай бұрын
Really good Explanation, Keep up the good work :)
@goshniiAI3 ай бұрын
Thank you for the motivation! I'm glad I could help.
@JoeBurnett3 ай бұрын
Great video as always! Thanks!
@goshniiAI3 ай бұрын
Thank you for your encouragement.
@yangli1437Ай бұрын
Thanks so much for your hard work, very useful videos.
@goshniiAIАй бұрын
You are very welcome! I appreciate your encouraging feedback. Thank you!
@BunnyMuffins9 күн бұрын
Hey, if I want to make a character sheet for an animal like a bunny, do I need a new reference sheet with different dimensions? How would I go about creating that? When I copy-paste someone else's character sheets, the results look too humanoid instead of being a bunny, for example.
@goshniiAI7 күн бұрын
You are right. A humanoid character sheet won't quite cut it, because the proportions and features are so different. Create or find a reference sheet specifically designed for animal anatomy. For a bunny, this would include front, side, and back views, focusing on its unique features, like ears, body shape, and tail placement. You can even sketch a simple one yourself or use basic AI tools to generate outlines.
@willmobarАй бұрын
Thank you, you are excellent!
@goshniiAIАй бұрын
That's very kind of you!
@wrillywonka1320Ай бұрын
Also, for anyone experiencing an issue downloading the YOLO model: go into the ComfyUI folder (ComfyUI > custom_nodes > ComfyUI-Manager) and you will find a config file. Open it in a text editor, and where it says bypass_ssl = False, change False to True and save. Restart ComfyUI and you will be able to download the YOLO model, no problem.
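For reference, after the change the relevant line in that config file (typically ComfyUI/custom_nodes/ComfyUI-Manager/config.ini, though the exact path may vary by install) should read something like:

bypass_ssl = True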
@calvinnguyen14512 ай бұрын
Dope stuff. You rock!
@goshniiAI2 ай бұрын
I appreciate that! Thank you!
@cosymedia2257Ай бұрын
Thank you!
@goshniiAIАй бұрын
You are more than welcome.
@devon9374Ай бұрын
Great video!
@goshniiAIАй бұрын
I'm glad you enjoyed it!
@Usermx01013 ай бұрын
Great video. I wonder what system specs you use to run this on. I ran out of VRAM with a 20GB card using GGUF flux-dev-Q5, so I guess I might be doing something wrong.
@goshniiAI3 ай бұрын
I've got an NVIDIA RTX 3060 card with 12GB. It's happened to me a few times. Just make sure to close all the apps that might be using your GPU. You could also try using an upscale factor of 2 instead of 4. And sometimes, saving the workflow and then restarting ComfyUI helps things run smoother.
@E.T.S.3 ай бұрын
Very helpful, thank you.
@goshniiAI2 ай бұрын
I appreciate your feedback.
@Shaolinfool_animation27 күн бұрын
You always make great content! I have a question. I have an image of a character in a front-view T-pose, and I want to get different views of the character from that one image. Is it possible to load that image and get different views of the character using the OpenPose character sheet? Thanks for all of your hard work!
@goshniiAI25 күн бұрын
That is possible, but the process will likely involve a lot of trial and error. I recommend using the OpenPose character sheet as a guide to create the character views, then using those to train a LoRA for the character. This approach will give you more control. Thank you for your encouraging feedback.
@petttertube2 ай бұрын
Thank you very much for this priceless video. You say the parameter cfg is set to 1 because we are not using the negative prompt. As far as I know Flux doesn't use negative prompts, so I am a bit confused; could we just remove the negative prompt node from the workflow?
@goshniiAI2 ай бұрын
You are welcome, and entirely correct. However, the KSampler will still require a negative conditioning input, so the negative prompt node is linked for that.
@wrillywonka1320Ай бұрын
I can't lie, this was the best consistent character video for sure! Is this able to work with SD3.5?
@goshniiAIАй бұрын
Thank you for coming here, and I appreciate your feedback. Yes, it is possible! Just keep in mind that SD3.5 might need the right controlnet models and slight adjustments to the ControlNet parameters to achieve the same consistency since it has a few differences in model handling. If you can tweak those and add the right nodes, you should be able to get great, consistent characters!
@wrillywonka1320Ай бұрын
@goshniiAI Well, since I'm super new to ComfyUI I guess I'll just wait for someone to make a video about it. By the way, great video! I would use Flux, but my issue is that I heard Flux has very strict commercial use rules.
@kagawakisho43823 ай бұрын
Thanks for the video. This is awesome. Do you use this to create LoRAs? Or what do you use the character sheets for?
@goshniiAI3 ай бұрын
I haven't specifically used this workflow to create LoRAs, BUT character sheets can definitely be a foundation for that. They help you capture a character in different poses and perspectives, making it easier to feed consistent images into training processes for LoRAs. They are also super useful for game development, animation, or just keeping a consistent look across different art projects.
@pixelist999Ай бұрын
Great tuts! Helped me install Flux1 seamlessly. However, I don't seem to have DWPreprocessor or ControlNet Apply in my drop-down lists? I get this message in the Manager: 【ComfyUI's ControlNet Auxiliary Preprocessors】 Conflicted Nodes (3): AnimalPosePreprocessor [ComfyUI-tbox], DWPreprocessor [ComfyUI-tbox], DensePosePreprocessor [ComfyUI-tbox]. So I uninstalled ComfyUI-tbox and still no joy? Do you have any suggestions?
@OzstudiosioАй бұрын
Perfect, but what if I want to use an image as input instead of a prompt?
@diaitigai98563 ай бұрын
Great content in your video! I really enjoyed it. One suggestion I have is to improve the echo in your voice using a tool called Audacity. It can help enhance the audio quality significantly. Feel free to contact me if you need any help with that. Keep up the good work!
@goshniiAI3 ай бұрын
Thanks a lot for the awesome suggestion and kind words! I am considering using Audacity; I've heard it's great, so I'll definitely give it a try. If I run into any issues, I might take you up on your offer to help! Thanks again for watching and giving me some really helpful input.
@LaMagra-w4cАй бұрын
Love your videos. I purchased the pack including the one in this video, but I'm having issues. I keep getting the following error: 'CheckpointLoaderSimple ERROR: Could not detect model type of: flux1-dev-fp8.safetensors'. Where would I download the correct model for this to work?
@goshniiAIАй бұрын
Thank you for supporting the channel. Make sure you're grabbing the specific FP8 version of the model and placing it in the models/checkpoints folder within your ComfyUI directory. Double-check that the file name hasn’t changed (e.g., flux1-dev-fp8.safetensors) and that it's saved in the right format. If you need further guidance, feel free to view this step-by-step video kzbin.info/www/bejne/ioi2d5iglLiSmLssi=hWosspilbjYj3QWl
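For reference, the expected location (assuming a default ComfyUI install) is roughly: ComfyUI/models/checkpoints/flux1-dev-fp8.safetensors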
@LaMagra-w4cАй бұрын
@@goshniiAI Thank you! It worked, but is it normally very slow when it hits the first KSampler? It takes forever to get through this point.
@goshniiAIАй бұрын
@@LaMagra-w4c Yes, FLUX Dev can be a bit sluggish when it hits the first KSampler; it's not just you! A few tips to speed things up: use quantized models, lower the sampling steps, and make sure your GPU and VRAM aren't getting held back by other stuff running in the background.
@V3ryH1ghАй бұрын
When doing the first queue prompt for the AIO Aux Preprocessor, I just get a blank black image.
@goshniiAIАй бұрын
Double-check that your image resolution matches the AIO preprocessor's setup; mismatches can sometimes be the cause. Also, tweaking the strength values for ControlNet can help the Aux preprocessor interpret the image better. It took me a bit of experimenting with these settings too! I hope this helps.
@RetrocausАй бұрын
@@goshniiAI I still get a blank image. Also, the strength comes after the preprocessor's save image, so I don't think it affects it?
@pumbchik57883 ай бұрын
For the pose reference, can we add our own pics posing however we like? Will it work?
@goshniiAI3 ай бұрын
Yep!!! You can use any picture, and then you'll need ControlNet to extract your pose.
@ImHewg3 ай бұрын
How do you get the super cartoony prompts, like that cool robot? I keep generating 3D characters. Sweet workflow! Subbed!
@goshniiAI3 ай бұрын
Welcome on board! Here is the prompt for that: "A Cyberpunk Mecha Kid, concept art, character sheet, in different poses and angles, including front view, side view, and back view, turnaround sheet, minimalist background, detailed face, portrait."
@TheBearmoth3 ай бұрын
Great video, very helpful! What kind of spec do you need for this flow? I'm able to run some Flux1D stuff, but ComfyUi keeps getting killed for taking too much memory with this workflow :(
@goshniiAI3 ай бұрын
Thank you! I'm glad you found the video helpful. If you're already running Flux1D, ideally you'd want at least 12GB of VRAM for smoother runs. You can try lowering the resolution of the inputs or using quantized models to reduce memory usage.
@TheBearmoth3 ай бұрын
@@goshniiAI any system RAM requirements? That's given me grief in the past, before I upgraded it.
@ttthr4582Ай бұрын
How do I know which other models are trained for use with ControlNet? I basically want to create a 2D cartoon character turnaround sheet using your workflow.
@goshniiAIАй бұрын
Hello, and thank you for watching and engaging. ControlNet only conditions your prompt to take the specific pose you want. So, to find models that work smoothly with ControlNet, you can explore Civitai. Sometimes the models include detailed tags indicating ControlNet compatibility; that said, the majority of models are trained to work with ControlNet. For that 2D cartoon character turnaround, try searching for models tagged with styles like “cartoon” or “illustration”. I hope this helps.
@muggyate2 ай бұрын
I find that if you add another generation step beforehand to tell the AI to generate a design sheet for a mannequin, you can skip the part where you have to load an image into the ControlNet preprocessor.
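For example (just an illustrative prompt, not from the video), a first pass along the lines of "character turnaround sheet of a plain grey mannequin, front view, side view, back view, neutral pose, white background" can produce the pose sheet that then feeds the preprocessor.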
@goshniiAI2 ай бұрын
Thank you for sharing that approach with everyone! Awesome tip!
@m3dia_offline3 ай бұрын
Are you going to follow up on this video with how to use the character sheet to put the character in different scenes/videos?
@goshniiAI3 ай бұрын
Thanks for the suggestion! I'll check it out since you mentioned it.
@phenix56092 ай бұрын
Any idea why I can't get it to work? Strangely, I get your workflow correctly from the link you provide and generate my image with the 3 views like you (before applying the ControlNet). Then I run the workflow again to apply the ControlNet pose (which shows up like in your video with the provided reference image, and I see the pose extracted correctly). But when I run the workflow to apply the ControlNet, instead of the 3-view picture, I don't get the panel view applying the previously generated character to the ControlNet pose, just a single centered character. I'm really not sure what went wrong lol, so if you have any idea, thanks.
@goshniiAI2 ай бұрын
Thank you for diving into the workflow! Here are a few tips that might help: - Before you run the workflow again, make sure the reference images for ControlNet are lined up right. Take a look at your positive prompt and think about adding multiple views if you haven’t already. - It’s a good idea to double-check the ControlNet settings, especially the resolution and how the preprocessor reads the pose data. Sometimes tweaking those can keep you from getting just a single centred result. I hope this helps.
@RoN43wwq3 ай бұрын
great thanks
@goshniiAI3 ай бұрын
You are welcome!
@personaje272 ай бұрын
Hi bro, thanks for the video. Which PC do you recommend for all of this? I am trying to get a laptop, but I don't want to make mistakes, as I want it for traditional video editing and AI video/image generation.
@goshniiAI2 ай бұрын
Aim for at least an NVIDIA RTX 3060 or higher with 6GB or more VRAM. This will help with both rendering in video editing software and running AI generation workflows efficiently. Also, 32GB of RAM is ideal for smooth performance, especially when multitasking or running resource-heavy AI models.
@lordmo34162 ай бұрын
Would you be so kind as to give the workflow for using an existing image or character? Thanks
@goshniiAI2 ай бұрын
Yes, hopefully the tutorial that follows will cover and provide that.
@lordmo34162 ай бұрын
@@goshniiAI can't wait
@Fret-Reps2 ай бұрын
IDK if you can help me, but I've had problems with this AIO Preprocessor: AIO_Preprocessor 'NoneType' object has no attribute 'get_provider'. Please help.
@goshniiAIАй бұрын
A missing or outdated dependency can cause this, so make sure to update ComfyUI. Otherwise, you can continue to use individual preprocessors for each ControlNet model; that will still work fine.
@edmartincombemorel3 ай бұрын
Great stuff, but there is definitely a missed opportunity to crop each pose and redo a pass of KSampler on it; you could even crop your ControlNet image to fit the same pose.
@goshniiAI3 ай бұрын
You're absolutely right: cropping each pose and running it through KSampler again could really refine the details and give even more control over the final result. I’ll definitely keep that in mind for future tutorials! I appreciate the insight.
@greenlanternA1233 ай бұрын
Your UI is very nice. I still have the old look; how do I update to get your UI?
@goshniiAI3 ай бұрын
Please see my video here; towards the end, I explain the settings: kzbin.info/www/bejne/hoGzgmSJdrOGma8si=uMK8VUuxhCxyIerW
@AnthonyTori2 ай бұрын
It would be nice if we could upload a 3D file like a glb so the software has every angle of the model. It would make consistent characters a lot easier.
@goshniiAI2 ай бұрын
A .glb input would advance the creation of consistent characters. That might just be a possibility in the future!
@demiurgen34073 ай бұрын
This might be a dumb question but what do you do with a character sheet? You have a character in different poses, then what? Do you animate it? Do you use it for something else?
@goshniiAI3 ай бұрын
Not a dumb question at all! Character sheets are often used in animation, game development, and concept art to showcase a character in various poses or expressions, making it easier for artists or animators to reference and maintain consistency. It’s mostly a reference tool to visualize how the character moves and looks from different angles. If you’re looking to bring these poses to life, you can definitely use them as a foundation for animation or even export them into 3D modeling software.
@demiurgen34073 ай бұрын
@@goshniiAI Cool! Maybe you could do a video on that? How to move from a character sheet to a 3D model :)
@bananacomputer93513 ай бұрын
Wow nice
@goshniiAI3 ай бұрын
saying thank you!
@lefourbe55963 ай бұрын
I have been versed in character sheet making for over a year. However... I have yet to succeed at making the single-picture LoRA character that would produce the reference sheet of the original concept decently in one go. Your take is basically the Mick Mumpitz workflow with Flux. It's good as it is.
@goshniiAI3 ай бұрын
I'm really glad you found this workflow helpful and shared your experience! Flux really kicks it up a notch, and when you combine it with a refined approach like Mick Mumpitz’s, it really gives it that extra edge.
@AIRawFootagesАй бұрын
It shows "(IMPORT FAILED) ComfyUI's ControlNet Auxiliary Preprocessors" when I try to install ControlNet Auxiliary Preprocessors... anyone, please help.
@goshniiAIАй бұрын
Make sure you're running the latest version of ComfyUI. Sometimes, older versions don’t play well with newer add-ons.
@秦奕-f9k3 ай бұрын
great ai master
@goshniiAI3 ай бұрын
Thank you, Sensei!
@clapandesign73285 күн бұрын
Hi. Can you please explain to everyone how to create the node called "controlnet apply sd3 and hunyuandit"? Thanks.
@goshniiAI5 күн бұрын
Hello there, the ControlNet Apply SD3 and HunyuanDiT node is no longer available; the node has been renamed to "Apply ControlNet with VAE" in the latest updates. It is a core node available in ComfyUI, so once you update to the latest version, it needs no installation. I hope this helps.
@ZergRadio3 ай бұрын
Wow, I really enjoyed this vid. I am an absolute beginner. I am confused. In the video you have your character in many poses and improved the details. How would you take just one of those poses from the character (say Octopus chef) and put it in a new environment? Do you have a video on that?
@goshniiAI3 ай бұрын
I'm really glad you enjoyed the video! It's awesome that even as a beginner, you're already asking great questions. If you want to take one of those poses, like our "Octopus Chef," and put it into a new environment, you can easily combine FLUX and ControlNet to lock in the pose while changing the background. I haven't made a specific video on that yet, but it's a good idea for a future tutorial, and I'll definitely create a detailed walkthrough soon.
@dmitryboldyrev73642 ай бұрын
How do you create multiple consistent cartoon characters interacting with each other in different scenes?
@goshniiAI2 ай бұрын
Hopefully soon, in the next post
@poptasticanimation55Ай бұрын
My AIO Aux Preprocessor is not working; it says it's not in the folder. What should I be looking for in that folder, and if it's not there, where can I get the preprocessor?
@goshniiAIАй бұрын
First, double-check that the ControlNet Auxiliary Preprocessors folder is present in your ComfyUI directory (under custom_nodes). If it’s missing, you can download the necessary files using the Manager. Then make sure you update ComfyUI to the latest version.
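If the Manager route keeps failing, a manual install is also an option. Roughly (assuming a default install; the folder name may differ on your setup):

cd ComfyUI/custom_nodes
git clone https://github.com/Fannovel16/comfyui_controlnet_aux
cd comfyui_controlnet_aux
pip install -r requirements.txt

Then restart ComfyUI so the nodes are picked up.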
@hmmrm3 ай бұрын
THANKS
@goshniiAI3 ай бұрын
You're welcome!
@Larimuss2 ай бұрын
Nice thanks. But what about when we want to use the character in a generation?
@goshniiAI2 ай бұрын
Yes, you can, here is a follow-up video that explains the process. kzbin.info/www/bejne/hXnPan2VhcyUY6c
@pushingpandas64793 ай бұрын
thank you!!!!
@goshniiAI3 ай бұрын
You're welcome!
@stevenls9781Ай бұрын
Is there a way with this workflow to use an image of a person that would be part of the output character sheet?
@goshniiAIАй бұрын
Hello Steven, the answer is sadly no for this workflow. I have explained in the next tutorial how to achieve this with the IP Adapter, but it uses SDXL rather than FLUX because the IP Adapter is more consistent with SDXL. To get an accurate character from an input image, I recommend creating a character sheet for your character concept and then training a LoRA using your images.
@stevenls9781Ай бұрын
@@goshniiAI Oh ok, that works also. Doooo you happen to have a link to a LoRA training video :D
@goshniiAIАй бұрын
@@stevenls9781 Not just yet. For now, I do not have a video on LoRA training with FLUX, but I am considering making one to share the process. You can check out this reference video that might assist you kzbin.info/www/bejne/i53WkJ2Orp6Fq7csi=EJoLucxVyOFFQKjB
@cray989Ай бұрын
I'm getting an error when I try to use the DWPreprocessor (and several others). The message says: ComfyUI Error Report - Error Details - Node Type: AIO_Preprocessor - Exception Type: huggingface_hub.utils._errors.LocalEntryNotFoundError - Exception Message: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on. My internet connection is fine. Any advice?
@goshniiAIАй бұрын
Sorry to hear that; I would recommend updating any of your nodes as well as running an update for ComfyUI.
@Larimuss2 ай бұрын
But how do we make different poses and profile photos for LoRAs etc.? Part 2 would be awesome 😂 This is a great workflow and video, thanks!
@goshniiAI2 ай бұрын
I'm glad you enjoyed the workflow and video! I appreciate your suggestion to create various poses and profile photos for LoRAs, and I will take it into consideration. True enough, Part 2 seems like a really good idea! :)
@skybluexox2 ай бұрын
I can’t use the AIO Aux Preprocessor; how do I fix this? 😢
@goshniiAI2 ай бұрын
No need to worry. You can use separate preprocessors for each model, and everything will still work.
@pottersquill3 ай бұрын
So would you incorporate a photo of myself or another real person into this workflow to get realistic images?
@goshniiAI3 ай бұрын
Yes, you could do that by including the IP Adapter node, but for now FLUX is inconsistent with the models available. Hopefully soon!
@adult85a12 ай бұрын
Sir! Which GPU are you using? And please suggest a cloud GPU service site!
@goshniiAI2 ай бұрын
I'm using an NVIDIA RTX 3060 for my workflow. For cloud GPU services, I recommend trying out RunPod or Vast.ai; both offer flexible pricing and options for FLUX and ControlNet if your local hardware isn't enough.
@k.jatuphat97852 ай бұрын
How do I add LoRA to this workflow, please? I need a LoRA for my character's face and ControlNet for my character's pose.
@goshniiAI2 ай бұрын
To achieve the LoRA results, place the LoRA node between the Load Checkpoint and the prompt nodes. You can also follow this tutorial on how to use Flux with a LoRA: kzbin.info/www/bejne/fqanhmd6ob-cmposi=-l4wISSzrH0i1wmp
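As a rough sketch of the wiring (assuming the standard Load LoRA node): Load Checkpoint -> Load LoRA -> CLIP Text Encode (positive and negative), with the KSampler taking the MODEL output from the LoRA node and the prompt nodes taking its CLIP output.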
@aaagaming20233 ай бұрын
Is there an automated way in Comfy to split the character sheet into individual images to train LoRAs on the character?
@goshniiAI2 ай бұрын
Yes, you can get individual images by using the image crop node.
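If you'd rather do it outside ComfyUI, here is a rough Pillow sketch (my own example, assuming the sheet is laid out as a simple grid; adjust rows and cols to match your layout):

from PIL import Image

def split_sheet(path, rows=1, cols=3, out_prefix="pose"):
    # Split a character-sheet image into equal tiles and save each pose separately.
    sheet = Image.open(path)
    w, h = sheet.size
    tile_w, tile_h = w // cols, h // rows
    for r in range(rows):
        for c in range(cols):
            box = (c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h)
            sheet.crop(box).save(f"{out_prefix}_{r}_{c}.png")

split_sheet("character_sheet.png", rows=1, cols=3)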
@AIChandu773 ай бұрын
thanks
@goshniiAI3 ай бұрын
You're welcome!
@nickfai930129 күн бұрын
How to use the image reference in animation?
@goshniiAI25 күн бұрын
I am hoping to share a video process on that in future videos.
@ainaopeyemi3393 ай бұрын
So I have a question: rather than prompting everything in a single box, can we have a different workflow for each pose? For example, the sitting pose, the standing pose, and the jumping pose each get their own workflow and are generated individually rather than all in one box. Also, is there a way to make sure the character you are prompting remains the same over time? For example, the octopus man you prompted: say I want to use him for a children's story book and I don't want to prompt everything at once. I can prompt him sitting today, tomorrow he is standing, next week I want him eating, and the character remains the same throughout, at different times? Thank you
@Muz8893 ай бұрын
What he showed in the video is called a character sheet. You can then use this character sheet as a reference image to tell Flux what a character looks like and prompt any pose or action you want for this character specifically. What you should now research is how to use character sheets with Flux.
@goshniiAI3 ай бұрын
Thanks for explaining and providing the extra information
@wrillywonka1320Ай бұрын
Update on the ControlNetApplySD3 node: supposedly it has been renamed to "Apply ControlNet with VAE".
@goshniiAIАй бұрын
Thank you for making us aware. We appreciate you watching out for that.
@TealyAndFriends5 күн бұрын
can this do image to video?
@goshniiAI5 күн бұрын
Yes, you can, once you have your character. The video here can guide you: kzbin.info/www/bejne/j6eYd6iujaZ2kJYsi=Slm5LUUHilgD0oIR
@ScaleniumPersonaleAIАй бұрын
Bro, this video is great, but some nodes are missing... how should we fix this?
@goshniiAIАй бұрын
If you see missing nodes in your workflow, it means you have not yet installed the custom nodes. To install the missing nodes, go to Manager > Install Missing Nodes and then install the ones that appear. That will help to find the missing nodes and fix them.
@bushwentto7112 ай бұрын
Cool but now how can we use that to create a consistent character in a scene with flux?
@goshniiAI2 ай бұрын
I am looking into it, and hopefully we will have a video guide on it soon.
@bushwentto7112 ай бұрын
@@goshniiAI Cheers mate keep up the great content
@josemasisvalverde8646Ай бұрын
Can I use this for SDXL?
@goshniiAIАй бұрын
Yes, you can; just make sure to use the correct SDXL models for ControlNet, the Checkpoint Loader, and other SDXL-compatible nodes.
@AIandTech-dq4iy3 ай бұрын
I can't find the ControlNetApply SD3 and HunyuanDIT nodes. Where can I install them?
@goshniiAI3 ай бұрын
ControlNetApplySD3 is one of the core nodes in ComfyUI. Make sure ComfyUI is updated so that it becomes available.
@goldkat943 ай бұрын
@@goshniiAI I can't find it either. Auxiliary Preprocessors is installed and "ComfyUI is already up to date with the latest version."
@bluemodize77183 ай бұрын
@@goshniiAI I already have comfy and packages up to date and still can't find it
@Simjedi3 ай бұрын
@@bluemodize7718 It has changed. It's been renamed to "Apply Controlnet with VAE"
@fedesalmaso2 ай бұрын
@@bluemodize7718 same here
@sanbait2 ай бұрын
What ComfyUI panel are you using in the browser?
@goshniiAI2 ай бұрын
Hello there, I have explained that towards the end of this video: kzbin.info/www/bejne/hoGzgmSJdrOGma8si=_KhvMhp30g_h2rxx I hope this helps.
@jamqdlaty5 күн бұрын
The face detailer on your example doesn't seem to understand these are all the same character poses and adds more variety to the faces, which is obviously not wanted.
@Huguillon3 ай бұрын
How do you get that new interface? I updated everything and I still have the old interface.
@Huguillon3 ай бұрын
Nevermind, I found it
@goshniiAI3 ай бұрын
Awesome! I'm glad you found it.
@Huguillon3 ай бұрын
@@goshniiAI By the way, Amazing video, Thank you
@goshniiAI3 ай бұрын
@@Huguillon i appreciate it, You are welcome
@AIRawFootages25 күн бұрын
But can I use images generated from Flux Dev commercially?
@ralphmccloudvideo14 күн бұрын
yes
@goshniiAI12 күн бұрын
Thank you for your support. @ralphmccloudvideo
@RagonTheHitman3 ай бұрын
I can't use "DWPose" as a preprocessor. I get some strange errors. It could have something to do with the onnxruntime-gpu / CUDA version, whatever. Someone wrote: "The error message mentioned above usually means DWPose, a deep learning model, and more specifically, a ControlNet preprocessor for OpenPose within ComfyUI's ControlNet Auxiliary Preprocessors, doesn't support the CUDA version installed on your machine." I tried for 4 hours to fix it; ChatGPT couldn't help, and neither could anyone on the Internet..... :(
@JustinCiriello3 ай бұрын
I can't either. Try using OpenposePreprocessor instead.
@RagonTheHitman3 ай бұрын
@@JustinCiriello Yes, this is working :)
@goshniiAI3 ай бұрын
Thank you for providing the additional information.
@brandoncurrypitcher19452 ай бұрын
@@JustinCiriello Thanks, I had the same issue
@JustinCiriello3 ай бұрын
It all works except the Face Detailer. It just gets stuck in a loop when it gets to that step. Endless loop with no error. Refreshing and Restarting did not help. Everything is fully updated.
@goshniiAI3 ай бұрын
Yes, that's correct. The FaceDetailer continuously refines the face details until they are complete. Keep it running until it generates the final image. You got it right!
@RxAIWithDrJen2 ай бұрын
I have no idea what I'm missing to get ControlNetApply SD3 and HunyuanDiT. It does not update and does not show in the Manager... so can anyone shed light? New to SD and Comfy. Thanks
@goshniiAI2 ай бұрын
The "Apply SD3" node has been renamed to "Apply Controlnet With VAE" in the latest updates. The process to find it remains the same, but the node has been renamed.
@RxAIWithDrJen2 ай бұрын
@@goshniiAI Thanks! And thank you for an excellent video
@goshniiAI2 ай бұрын
@@RxAIWithDrJen You are most welcome. Thank you for being here
@CsokaErno2 ай бұрын
This "ControlNetApply SD3 and HunyuanDiT" node is nowhere to be found :/ I updated everything.
@goshniiAI2 ай бұрын
The "Apply SD3" node has been renamed to "Apply Controlnet With VAE" in the latest updates. The process to find it remains the same, but the node has been renamed.
@tmlander2 ай бұрын
Why not share the JSON for Comfy? I went to Gumroad and downloaded your files but was surprised there is no JSON, just an image of your setup!?
@devnull_2 ай бұрын
You sure the image didn't have the comfy workflow stored into it? Did you try dropping it into Comfy UI?
@goshniiAI2 ай бұрын
Yes, you are right; the PNG image still works the same as a JSON file. You only have to import it or drag and drop it into ComfyUI.
@tmlander2 ай бұрын
@@goshniiAI I saw that later... sorry I thought comfy only accepted json... thanks for your work!
@goshniiAI2 ай бұрын
@@tmlander you are most welcome, thank you for sharing an update.
@fungus983 ай бұрын
So it appears that the Apply SD3 node has been renamed to Apply with VAE?
@goshniiAI3 ай бұрын
It is still SD3, as I checked.
@fungus982 ай бұрын
@@goshniiAI still can't get it to come up on mine, but "apply" and "apply with vae" are the exact same nodes it looks like. At least, I can't see a difference
@goshniiAI2 ай бұрын
Thank you for pointing that out, it looks like the "Apply SD3" node has been renamed to "Apply Controlnet With VAE" in the latest updates
@goshniiAI2 ай бұрын
@@fungus98 Yeah, you are right, and thank you for sharing your observation
@stevenls9781Ай бұрын
Can we download that workflow? Maybe I missed that in the vid.
@goshniiAIАй бұрын
Yes, you can use the link in the description.
@stevenls9781Ай бұрын
@@goshniiAI oh man... if only I used my eyes. thanks for the reply.
@stevenls9781Ай бұрын
ah I was looking for a JSON file or something, it's a PNG to use as a ref and copy into Comfy
@goshniiAIАй бұрын
@@stevenls9781 True! A PNG or JSON file can be used in the same way. The benefit of using a PNG workflow is that you can see a preview of the node structure or layout. You only need to drag the PNG file into ComfyUI to load the workflow.
@stevenls9781Ай бұрын
@@goshniiAI Ah, gotcha. I was just looking at it as an image preview and thought, cool, I can recreate it based on that. Now, after doing it manually, I have dragged the PNG into Comfy and it loaded... hahaha, well, good practice following the image :D
@mr.entezaee3 ай бұрын
Does anyone know how to fix this problem? "Failed to restore node: Ultimate SD Upscale. Please remove and re-add it."
@goshniiAI3 ай бұрын
It seems there might be a mismatch in the workflow. Try deleting the node and adding it back from scratch. If that doesn’t work, just make sure you have the latest version of the node installed.
@mr.entezaee3 ай бұрын
@@goshniiAI Yes, that's it, but I don't know which node to delete.. How do I know which node to delete?
@felipecesarlourenco89552 ай бұрын
How do I add a simple LoRA?
@goshniiAI2 ай бұрын
Hello there, you can view my guide on adding a LoRA in my previous video for FLUX: kzbin.info/www/bejne/fqanhmd6ob-cmposi=FzSSqoe6OV_56l55
@sanbait2 ай бұрын
but what about non-human characters? Animals?
@goshniiAI2 ай бұрын
For animals, you'll need the ControlNet animal pose model, but I'm not sure it is currently available for Flux.
@sanbait2 ай бұрын
@@goshniiAI How can I use a custom skeleton? I have a game character like a Pokémon.
@victorestomo7293 ай бұрын
Can I add a Load LoRA node?
@goshniiAI3 ай бұрын
Yeah, that can be done. I explained how to do it in this link here. kzbin.info/www/bejne/fqanhmd6ob-cmposi=gC-go2q4ylLSm6Or
@bhuvanaib.9731Ай бұрын
Hi, it's stuck on the Load Upscale Model node. I believe I don't have the "4x-Ultrasharp.pth" file. How do I get that, please?
@goshniiAIАй бұрын
The upscale models can be downloaded through the Manager, or you can watch the video linked here for guidance: kzbin.info/www/bejne/hoGzgmSJdrOGma8si=M-fMMvE6-kEzr5u8
@wrillywonka13203 ай бұрын
Can this be done in forge ui?
@goshniiAI3 ай бұрын
Yeah, hopefully I'll make a tutorial video for that.
@wrillywonka13203 ай бұрын
@@goshniiAI That'd be awesome! I need that badly.
@botlifegamer70263 ай бұрын
There is no option for the ControlNetApply SD3 node.
@goshniiAI3 ай бұрын
The ControlNetApplySD3 is a core node in ComfyUI. Ensure ComfyUI is updated so that it becomes available.
@goshniiAI3 ай бұрын
please do the same by updating comfyui.
@botlifegamer70263 ай бұрын
@@goshniiAI it's not there even after updates
@goshniiAI3 ай бұрын
@HelloMeMeMeow Yeah the workflow is now available.
@botlifegamer70263 ай бұрын
@@goshniiAI Your workflow uses a ControlNetApply VAE node, not the SD3 one you have in yours; or did you rename it?
@amirhossein1108Ай бұрын
Is this free?
@goshniiAIАй бұрын
Yes, you are welcome to use the link in the description.
@Thefishos3 ай бұрын
Very nice work! Thanks a lot, man. I know it takes a lot of time to make videos like this, but is there any chance you could make a video with a workflow like this one but with Flux, of course: kzbin.info/www/bejne/bmWcqXWhnNV5aacsi=GZwbPr4nuI8dvvyn That would be amazing!!! 🙏
@goshniiAIАй бұрын
Hi there, I appreciate your suggestion and the reference link. i will consider that.
@hasstv93933 ай бұрын
Can anyone tell me the use case for these character images?
@goshniiAI3 ай бұрын
Awesome question! Just picture game development, animation, or storyboarding. When you have consistent images from different angles, it makes sure your character looks the same from any perspective. This makes it easier to animate, storyboard, or even print in 3D. It's also super helpful for storybooks or visualizing characters in dynamic scenes. I hope that gives some inspiration!
@hasstv93933 ай бұрын
@@goshniiAI Is it possible to make 3D models with AI from these images?
@goshniiAI3 ай бұрын
@@hasstv9393 Absolutely! There are good AI tools for converting 2D concepts to 3D. If you're looking for AI-powered choices, you can use 3D A.I. Studio, Meshy, Rodin, Tripo 3D, or Genie by Luma Labs to produce 3D models directly from images, while platforms like Ready Player Me allow you to build 3D avatars using an image input.