Hand FIXING Controlnet - MeshGraphormer

51,028 views

Olivio Sarikas

1 day ago

MeshGraphormer is hand fixing for ControlNet. I built a ComfyUI workflow for you and will explain step by step how it works.
👐 Elevate your digital artwork with Graphormer's advanced depth map generation, providing lifelike and realistic hand anatomy. 🎨 This tool opens up a world of possibilities for correct hands and expressive hand gestures.
#### Links from the Video ####
my Workflow: openart.ai/wor...
ControlNet Aux: github.com/Fan...
Hand Inpaint Model: huggingface.co...
#### Join and Support me ####
Buy me a Coffee: www.buymeacoff...
Join my Facebook Group: / theairevolution
Join my Discord Group: / discord
AI Newsletter: oliviotutorial...
Support me on Patreon: / sarikas

Comments: 240
@Douchebagus 11 months ago
Hey, Olivio, been watching your videos for a while now. I just wanted to say what an absolute help your guides are and I am thankful that your channel exists. I tried ComfyUI a few months ago but I gave up because of the large learning curve; with your help, I'm not only cruising through it but learning faster than I thought possible.
@goldholder8131 11 months ago
The way you articulate how the nodes relate to each other is just fantastic. And your workflows are a fantastic place to learn about the flow of processing all these complex things. Messing with variables here and there makes it a fun scientific project. Thanks again!
@diegopons9808 1 year ago
Hey! Available on A1111 as well?
@vincentmilane 1 year ago
ERROR: When loading the graph, the following node types were not found: AV_ControlNetPreprocessor. Nodes that have failed to load will show as red on the graph. I tried many things; it always pops up.
@caffeinezombies 1 year ago
I still have this issue as well, after following many suggestions on installing other items.
@MatthewJohnson-z7r 11 months ago
Same here
@MatthewJohnson-z7r 10 months ago
I solved this issue by opening the manager and then clicking "Install Missing Custom Nodes"
@ok3x9 9 months ago
@@MatthewJohnson-z7r That works! Thank you!
@joellim7521 6 months ago
did you ever fix this? I am facing this now
@김기선-j5t 1 year ago
Can I use this in A1111????
@Ulayo 1 year ago
A little late comment, but you don't need to do a VAE decode -> encode. There's a node called "Remove latent noise mask" that removes the mask so you can keep working on the same latent. (Every time you go between latent and pixel space you lose a little quality, as the decode/encode process is not lossless.) Also, you would probably get slightly less sausage-like hands if you lowered the denoise a bit, to somewhere in the 0.7-0.9 area.
@zoybean 1 year ago
But then it doesn't show an image output so how would I do that for the midas preprocessor step?
@Ulayo 1 year ago
@@zoybean You still decode the latent to get an image for the midas step. Just connect that same latent to a remove noise mask and pass that to the upscale latent node.
@beatemero6718 1 year ago
I don't quite understand. You need the decode to pass the image to the MeshGraphormer. The remove noise mask node has only a latent input and output, so how would you not need the decode?
@Ulayo 1 year ago
@@beatemero6718 I may have worded my reply a bit wrong. You still need to decode the latent to get an image that you pass to the preprocessor. But you shouldn't encode that image again. Just add a remove latent noise mask to the same latent and send it to the sampler.
@beatemero6718 1 year ago
@@Ulayo I got you.
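A minimal sketch of the rewiring described in this thread, in ComfyUI's API prompt format: decode once so the depth preprocessor gets an image, but keep sampling from the same latent instead of re-encoding. Node IDs and the upstream nodes ("3" = the hand-fix KSampler, "4" = the checkpoint loader) are placeholders, and the RemoveNoiseMask class name is an assumption (it ships with a custom node pack, not core ComfyUI):

```python
import json

# Fragment of an API-format prompt; it would be merged into the full workflow
# JSON and queued via ComfyUI's /prompt endpoint.
prompt_fragment = {
    # decode only so the MiDaS/depth preprocessor has an image to look at
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["3", 0], "vae": ["4", 2]}},
    # instead of VAEEncode-ing that image again, strip the inpaint mask
    # from the *same* latent and keep working in latent space
    "9": {"class_type": "RemoveNoiseMask",   # assumed class name (custom node)
          "inputs": {"samples": ["3", 0]}},
    # pass the cleaned latent straight to the upscale step
    "10": {"class_type": "LatentUpscale",
           "inputs": {"samples": ["9", 0], "upscale_method": "nearest-exact",
                      "width": 1024, "height": 1536, "crop": "disabled"}},
}
print(json.dumps(prompt_fragment, indent=2))
```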
@abellos 1 year ago
Fantastic, can be used also in automatic1111?
@mirek190 1 year ago
lol
@sharezhade 1 year ago
Need a video about that. Comfy-ui seems so complicated
@AirwolfPL 1 year ago
@@sharezhade it's not complicated and offers great control of the process but it's horribly time consuming. A1111 offers much more streamlined experience for me.
@BrunoBissig 1 year ago
Hi Olivio, thats simply great, thank you very much for the workflow!
@seraphin01 1 year ago
awesome video, it was still a struggle even with controlnet to fix those pesky hands.. gonna give it a try with this setup, you're amazing, happy new year!
@EyeOfOdin-r1n 1 year ago
would be nice if you had an sdxl version
@knightride9635 1 year ago
Thanks ! A lot of work went into this. Happy new year
@daviddiehn5176 11 months ago
Hey Olivio, I integrated the mask captioning etc. into my workflow, but now the same error is occurring every time. I tried a bit around, but I am still clueless. Error occurred when executing KSampler: mat1 and mat2 shapes cannot be multiplied (308x2048 and 768x320) ... ( 100 lines complex code )
@MarxHumberto 6 months ago
This error happens because you are using the wrong ControlNet model; the checkpoint and the ControlNet model must be the same version, SDXL with SDXL or 1.5 with 1.5.
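The shape mismatch in the error message tells the same story: 2048 is the width of SDXL's text conditioning, while an SD 1.5 ControlNet/UNet cross-attention layer expects 768. A tiny PyTorch snippet reproduces it (the 308-token count is taken from the error above; the dimensions are illustrative):

```python
import torch

# SDXL conditioning is 2048-dim (CLIP-L 768 + OpenCLIP-G 1280 concatenated);
# SD 1.5 cross-attention projections expect 768-dim conditioning.
sdxl_cond = torch.randn(308, 2048)   # 308 tokens, SDXL embedding width
sd15_proj = torch.randn(768, 320)    # an SD 1.5 projection weight (768 -> 320)

sdxl_cond @ sd15_proj
# RuntimeError: mat1 and mat2 shapes cannot be multiplied (308x2048 and 768x320)
```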
@fingerprint8479 1 year ago
Hi, works with A1111?
@clifforderskineart 2 months ago
Did you ever find out?
@fingerprint8479 2 months ago
@clifforderskineart Not using anymore
@UltraStyle-AI 1 year ago
Can't find any info about it yet. Need to install on A1111.
@BackyardTattoo 1 year ago
Hi Olivio, thanks for the video. How can I apply the workflow to an imported image? Is it possible?
@maxfxgr 1 year ago
Hello and have an awesome 2024
@Rasukix 1 year ago
I presume this is usable with a1111 also?
@GS195 1 year ago
Oh I hope so
@ImmacHn 1 year ago
Should really try going the Comfy route, it might seem overwhelming at first, but it's amazing once you get the hang of it.
@Rasukix 1 year ago
@@ImmacHn I just find nodes hard to handle, my brain just doesn't work well with it
@ryutaro765 10 months ago
Can we also use this refined method for img2img?
@duck-tube6786 1 year ago
Olivio, by all means continue with ComfyUI vids but please also include A1111 as well.
@happyme7055 1 year ago
Yes, please Olivio! For me as a hobby AI creator, A1111 is the better solution because it's not nearly as complicated to install/operate...
@cchance 1 year ago
A1111 plugins come slower these days, and stuff like this in A1111 just isn't as easy; he's doing 3 KSamplers, masking and other stuff in a specific order. That's just not how A1111 works, at least not easily.
@sierramultimedia2024 1 year ago
Yes please!
@joeterzio7175 1 year ago
I see ComfyUI and I stop watching. It's obsolete already and that workflow looks like a complex wiring diagram. The future of AI image generation is going to be text based, not that mess of spaghetti string.
@fritt_wastaken 1 year ago
@@joeterzio7175 text based is the past of AI image generation. And it won't come back until something like chatgpt can understand you perfectly and use that "spaghetti string" for you. And even then you probably would have to intervene if you're not just goofing around and actually creating something. There is absolutely no way to describe everything required for an image using just text
@amkire65 1 year ago
Great video. I find that the depth map looks a lot better than the hand in the finished image, I'm not too sure why it changes quite so much. It's cool that we're getting closer, though... what I'm really after is a way to get consistent clothing in multiple images so I don't have a character that changes clothes in every panel of a story.
@RamonGuthrie 1 year ago
This might be your most liked video ever ...Hand FIXING the Holygrail of AI
@rileyxxxx 1 year ago
xD
@fabiotgarcia2 11 months ago
Hi Olivio! How can we apply this workflow to an imported image? Is it possible?
@97BuckeyeGuy 1 year ago
I wish you would do more work with SDXL models. I want to see some of the workarounds that may be out there for the lack of a Tiled ControlNet. And I'd like to see more about Kohya Shrink with SDXL.
@OlivioSarikas 1 year ago
Yes, I really need to do more sdxl. But personally I never use it for my Ai images, because it takes much longer and I don't need the added benefits
@EH21UTB 1 year ago
@@OlivioSarikas Also interested in SDXL. Isn't there a way to use this new hands tool to generate the depth mask and then apply with SDXL models?
@Steamrick 1 year ago
@@EH21UTB Of course. There's SDXL depth controlnets available, though they're not specifically trained for hands. You'd have to experiment which of the available ones works best.
@bluemurloc5896 1 year ago
great video, would you please consider making a tutorial for automatic 1111?
@BabylonBaller 1 year ago
Yea, feels like all he posts about is Comfy and forgetting about the 90% of the industry that uses Automatic1111.
@Not4Talent_AI 1 year ago
Pretty cool! Does it work well with hands in more complex positions? Like someone flicking a marble (random example).
@Rasukix 1 year ago
hello there
@Steamrick 1 year ago
Try it out and let us know
@Not4Talent_AI 1 year ago
sup!1 hahhaa@@Rasukix
@Not4Talent_AI 1 year ago
@@Steamrick don't have Comfy installed atm
@jcvijr 1 year ago
Thank you! This model could be included in adetailer node, to simplify the process..
@skycladsquirrel 1 year ago
Great job Olivio! Let's give you a five finger hand of applause!
@jenda3322 1 year ago
As always, your videos are fantastic. 👍👏
@TheColonelJJ 9 months ago
Can we add this to Forge?
@colaaluk 11 days ago
Thanks so much for the video.
@nikgrid 11 months ago
Thanks Olivio..excellent tutorial
@genome692002 17 days ago
How do you do this with just Load Image instead of generating the image with text-to-image? I tried to connect the MeshGraphormer node directly to the output of Load Image and it does nothing. Then I tried a VAE Encode after the VAE Decode first; that works until it comes to the KSampler part and generates an error...
@BenjaminKellner 1 year ago
Instead of VAE decode and encode before your latent upscale, you could use a 'get latent size' node, create an empty mask injecting width/height as input, and apply that blank mask as the new latent mask. Especially with larger images it will save you time versus going through the VAE pipeline, but also, since VAE encoding/decoding is a lossy process, you actually lose quality between samples (not that an upscaled latent looks any better unless done iteratively) -- I prefer to upscale in pixel space, then denoise starting at step 42, ending at step 52, then another sample after that from step 52 to carry me to 64 steps. I find three samples before post is my optimal workflow.
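A minimal sketch of the blank-mask idea from the comment above, again in ComfyUI's API prompt format: overwrite the hand mask with a full-frame mask so the next sampler denoises the whole latent, skipping the VAEDecode -> VAEEncode round trip. Node IDs and the hard-coded 768x1152 size are placeholders; in practice a "get latent size" node would feed the width/height:

```python
import json

prompt_fragment = {
    # full-frame mask, value 1.0 = "denoise everywhere"
    "20": {"class_type": "SolidMask",
           "inputs": {"value": 1.0, "width": 768, "height": 1152}},
    # re-mask the same latent ("12" = the output of the hand-fix KSampler)
    "21": {"class_type": "SetLatentNoiseMask",
           "inputs": {"samples": ["12", 0], "mask": ["20", 0]}},
}
print(json.dumps(prompt_fragment, indent=2))
```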
@DavidEmour 1 year ago
Excellent. Where can I find your optimal workflow to learn from you? Thank you
@tetsuooshima832 9 months ago
@@DavidEmour hahahaha
@hurricanepirate 1 year ago
Why is AV_ControlNetPreprocessor node red? Egadz!
@vincentmilane 1 year ago
same for me
@NamikMamedov 1 year ago
How can we fix hands in automatic 1111?
@megadarthvader 1 year ago
Isn't there a simplified version for web ui? 😅 With that concept map style system everything looks so complicated 🥶
@omegablast2002 1 year ago
only for comfy?
@kleber1983 1 year ago
Is the ControlNet really necessary? I've achieved the same result by passing the MeshGraphormer mask through a VAE Encode (for Inpainting) and it worked. I think it's simpler, but I wonder if it compromises the quality... thx.
@HolidayAtHome 1 year ago
That's great! Would love to see some examples of more complicated hand positions or hands that are partly covered by some objects. Does it still work then or is it unusable in those scenarios ?
@D3coify 11 months ago
I'm trying to do this with "load Image" node
@smert_rashistskiy_pederacii 8 months ago
I got a strange error: SyntaxError: Unexpected non-whitespace character after JSON at position 4 (row 1 column 5) .Even "Install Missing Custom Nodes" does not help.
@TheHmmka 11 months ago
How to fix next error? When loading the graph, the following node types were not found: AV_ControlNetPreprocessor Nodes that have failed to load will show as red on the graph.
@MatthewJohnson-z7r 10 months ago
I solved this issue by opening the manager and then clicking "Install Missing Custom Nodes"
@alucard604 1 year ago
Any idea why my "comfyui-art-venture" custom nodes have an "import failed" issue? Its is required by this workflow for the "ControlNet Preprocessor". I already made sure that all conflicting custom nodes are uninstalled.
@2PeteShakur 1 year ago
same issue, u updated comfyui?
@caffeinezombies 1 year ago
Same issue
@ImmacHn 1 year ago
1:30 You can update the custom nodes instead of uninstalling then reinstalling. In the manager, press "Fetch updates"; once the updates are fetched, Comfy will prompt you to open "Install Custom Nodes", at which point the custom nodes that have updates will show an "Update" button. After that, restart Comfy and refresh the page.
@OlivioSarikas 1 year ago
I know. But when I updated it, it didn't give me the new preprocessor
@ImmacHn 1 year ago
@@OlivioSarikas I see, did you refresh the page after? The nodes are basically client sided so you would need to reload after the reset to see the new node
@ImmacHn 1 year ago
@@OlivioSarikas Also thanks for the videos, they're very helpful!
@Kryptonic83 1 year ago
yeah, i hit update all in comfyui manager then fully restarted comfyui and refreshed the page, worked for me without reinstalling the extension.
@Gabriecielo 1 year ago
Thanks for the tutorial, and the result is amazing; it saves a lot of Photoshop time. I found there are several limitations too. It focuses on fixing fingers, but if there are two right hands on the same person, this model doesn't seem to fix it; maybe I didn't find the right way to tune it? And it's SD 1.5 only, it could not work with SDXL checkpoints for now; hope it gets updated later.
@A42yearoldARAB 11 months ago
Is there an automatic 1111 version of this?
@adamcarskaddan 11 months ago
I don't have the ControlNet preprocessor. How do I fix this?
@MatthewJohnson-z7r 10 months ago
Try opening the manager and then clicking "Install Missing Custom Nodes" and reboot
@SetMeFree 9 months ago
when i do img2img it changes my original image into a cartoon but fixes the hands. Any advice?
@androidgamerxc 1 year ago
I'm Automatic1111 squad, please tell us how to add it there.
@graphilia7 1 year ago
Thanks! I have a problem when I launch the Workflow, this warning appears: "the following node types were not found: AV_ControlNetPreprocessor" I downloaded and placed the "ControlNet-HandRefiner-pruned" file in this folder: ComfyUI_windows_portable\ComfyUI\models\controlnet. Can you please tell me how to fix this?
@sirdrak 1 year ago
Same here... I tried uninstalling and reinstalling the custom nodes as said in the video but the error persists. Edit: Solved by installing the Art Venture custom nodes, but now I have the problem of a 'mediapipe' error with the MeshGraphormer-DepthMapPreprocessor node...
@birdfingers354 1 year ago
Me three
@caffeinezombies 1 year ago
​@@sirdrakI looked for art venture custom nodes and couldn't find anything.
@notanemoprog 11 months ago
I replaced the workflow "ControlNet Preprocessor" used in the video (from that "venture" package I don't have) with "AIO Aux Preprocessor" selecting "MiDas DepthMap" and got at least the first image produced (bad hands) before further problems happened
@Zbig-xw6yp 1 year ago
Great video. Please note that the preprocessor requires the "Segment Anything" node for some reason, and without it it cannot be loaded! Thank you for sharing!
@henrischomacker6097 1 year ago
Excellent video! Congratulations.
@AnimeDiff_ 1 year ago
segs preprocessor?
@4thObserver 1 year ago
I really hope they streamline this process in future iterations. MeshGraphormer seems very promising but I lost track of what each step and process does 6 minutes into the video.
@meadow-maker 1 year ago
Yeah, I couldn't even load the MeshGraphormer node at first; it took me several breaks, coffees and redos until I found it. Really shoddy training video.
@hippotizer 1 year ago
Super valuable video, thanks a lot!
@mosske 1 year ago
Thank you so much Olivio. Love your videos! 😊
@OlivioSarikas 1 year ago
My pleasure!
@GrocksterRox 1 year ago
Very creative as always Olivio!!!
@ooiirraa 1 year ago
Thank you for the new ideas! I think it can be improved a little bit. Every encode goes with a loss of quality, so it might be a better decision to first create the full rectangular mask with the dimensions of the image and then apply the new mask to the latent without reencoding. ❤ thank you for your work!
@cchance 1 year ago
Ya was gonna say don’t decode and recode just overwrite the mask
@Foolsjoker 1 year ago
@@cchance How would you just overwrite the mask without decoding to 'flatten' the image?
@Madwand99 1 year ago
@@cchance Do you have another workflow to show what you mean by this?
@HeinleinShinobu 1 year ago
cannot install controlnet preprocessor, has this error Conflicted Nodes: ColorCorrect [ComfyUI-post-processing-nodes], ColorBlend [stability-ComfyUI-nodes], SDXLPromptStyler [ComfyUI-Eagle-PNGInfo], SDXLPromptStyler [sdxl_prompt_styler]
@frederickfortier7984 6 months ago
I'm trying this but MeshGraphormer doesn't seem to find the hands and creates a solid black mask.
@V_2077 8 months ago
Anybody know an sdxl controlnet refiner for this?
@hatuey6326 1 year ago
great tuto as always ! i would like to see how it works on img to img and with sdxl !
@weebtraveller 1 year ago
thank you very much, great as always. Can you do Ultimate SD Upscale instead?
@News_n_Dine 1 year ago
Unfortunately I don't have the device requirement to set up comfyui. Please do you have any advice for me?
@News_n_Dine 1 year ago
Btw, I already tried google colab, didn't work
@ysy69 1 year ago
happy new year olivio!
@substandard649 1 year ago
Interesting.... does it work with SDXL too?
@rodrimora 1 year ago
would like to know too
@Steamrick 1 year ago
The controlnet is clearly made for SD1.5. That said, there's no reason you could not combine the depth map output with a SDXL depth controlnet, though it may not work quite as well as a net specifically trained for hands.
@TheP3NGU1N 1 year ago
Sd1.5 always comes first.. SDXL will probably be next as they usually require a little extra to get worked out.
@substandard649 1 year ago
I thought sd15 was officially deprecated, if so then you would expect sdxl to be the first target for new releases. That being said i get way better results from the older model, XL is so inflexible by comparison...rant over 😀
@TheP3NGU1N 1 year ago
@@substandard649 Depends on what you are going for. In the realm of realism, SD 1.5 is still king to most people, though XL is quickly catching up. Programming-wise, SD 1.5 is easier, and most of the time, if you get it to work for SD 1.5, getting it to work for XL is going to be much easier; the reverse isn't quite the same.
@76abbath 1 year ago
Thanks a lot for the video Olivio!
@MiraPloy 1 year ago
Couldn't dwpose or openpose do the same thing?
@listahul2944 1 year ago
Great! Thanks for the video. How about an img2img fix-hands workflow?
@OlivioSarikas 1 year ago
It's inpainting, so that should work too
@TheDocPixel 1 year ago
Technically... this is img2img. Just delete the front parts that generate the picture, and start by adding your own picture with a Load Image node.
@lauracamellini7999 1 year ago
Thanks so much olivio!
@hmmrm 1 year ago
Hello, I have tried to reach you on Discord but I couldn't. I wanted to ask you a very important question: once we upload our workflows on OpenArt, we can't delete any of the workflows? Why?
@hwj8640 1 year ago
Thanks for sharing!
@OlivioSarikas 1 year ago
My pleasure
@marcdevinci893 5 months ago
Would anyone know how to upload an image instead of creating one in this workflow? Still new to ComfyUI
@KINGLIFERISM 1 year ago
In Darth Vader's voice, " the circle is com-plete." I am now wondering if SEGS could be used instead of a huge box. It can mess up a face if the hand is close to it. Any ideas guys?
@michail_777 1 year ago
That's great! Now let's do the tests:)))
@truth_and_raids3404 1 year ago
I can't get this to work ,every time I get an error Error occurred when executing MeshGraphormer-DepthMapPreprocessor: [Errno 2] No such file or directory: 'C:\\Users\\AShea\\Downloads\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\hr16/ControlNet-HandRefiner-pruned\\cache\\models--hr16--ControlNet-HandRefiner-pruned\\blobs\\41ed675bcd1f4f4b62a49bad64901f08f8b67ed744b715da87738f926dae685c.incomplete'
@micbab-vg2mu 1 year ago
Very useful - thank you.
@wzs920 1 year ago
does it work for a1111?
@OlivioSarikas 1 year ago
I will check. But new tech almost always comes to comfyui first
@randomVimes 1 year ago
One suggestion for vids like this: a section at the end which shows 3 example prompts and results. Prompt can be on screen, dont have to read it out
@vbtaro-englishchannel 9 months ago
It’s awesome but I can’t use meshgraphormer node. I don’t know why. I guess it’s because I’m using Mac.
@SoyUnMonor 1 year ago
ComfyUI doesn't look comfy AT ALL. 😢
@kkryptokayden4653 1 year ago
Once you created a couple of workflows it is very comfy, the beginning is not comfy at all because you don't have any workflows relevant to yourself. But it is very easy once you have several personal workflows
@Konrad162 9 months ago
Isn't OpenPose better?
@feelkori 10 months ago
The hand is corrected to some extent, but the face is different in the end result. Can't you keep the same face?
@1ststepmedia105 1 year ago
I keep getting an error message; the workflow stops at the MeshGraphormer-DepthMapPreprocessor window. I followed the directions you gave and have downloaded the hand inpaint model and placed it in the folder, no luck.
@jamiesonsidoti 1 year ago
Same... it hits the MeshGraphormer node and coughs up the error: "A Message class can only inherit from Message". Getting the same error when attempting to use the Load InsightFace node for ComfyUI_IPAdapter_Plus. Tried on a separate new install of Comfy and the error persists.
@josesimoes1516 1 year ago
If anyone else has an error that 'mediapipe' module can't be found and can't install package due to OSError or something like that just uninstall the auxiliary processor nodes, reboot comfy, install again, reboot again and it works. Everything was fully updated when I was getting that error so reinstalling is probably the best choice just to avoid annoyances.
@2PeteShakur 1 year ago
getting conflicts with comfyui-art-venture, disabled the conflicted nodes, still the issue...
@VladimirBelous 1 year ago
I made a workflow for improving the face using a depth map. I would like to link into this process the improvement of hands using a depth map, as well as a process of upscaling with detail without losing quality. For me it turns out either blurry or pixelated around the edges.
@j2k784 4 months ago
You really need to mention if this will only work with 1.5 SD or will also work for SDXL. Because many noobs, such as myself, will be confused why it's not working when trying.
@sb6934 1 year ago
Thanks!
@BVLVI 1 year ago
What keeps me from using ComfyUI is the models folder. I want to keep it in A1111 but I can't seem to figure out how to make ComfyUI point to that folder.
@OlivioSarikas 1 year ago
There is a YAML file in the Comfy folder called extra_model_paths. Most likely your version ends in ".example"; remove that to make it a .yaml file and add the A1111 folder.
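For reference, the relevant section of that file looks roughly like this (an abbreviated sketch; the exact keys are listed in the extra_model_paths.yaml.example that ships with ComfyUI, so check yours and only adjust base_path):

```yaml
a111:
    base_path: C:/path/to/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    controlnet: models/ControlNet
    embeddings: embeddings
```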
@pixelhusten 1 year ago
What Olivio has written or symlinks. That's how I did it, because I put all the loras and checkpoints on an external SSD, they are connected with symlinks. I do the same with the output folders, they run together on one folder using symlinks.
@vincentmilane 1 year ago
ERROR: (IMPORT FAILED) comfyui-art-venture. How to fix?
@notanemoprog 11 months ago
If you don't have that "venture" package I guess it is possible to replace the workflow "ControlNet Preprocessor" with "AIO Aux Preprocessor" selecting "MiDas DepthMap"
@Okratron-rr8we 1 year ago
i tried replacing the first ksampler with a load image node so that i could process an already generated image through, but it just skipped the mesh graphormer node entirely. any tips? i also plugged the load image into a vae encoder for the second ksampler.
@Okratron-rr8we 1 year ago
nvm, the mesh graphormer simply isnt able to detect the hands in the image i'm using. maybe soon there will be a way to increase its detectability. other than that, this works great!
@listahul2944 1 year ago
@@Okratron-rr8we I'm just starting with Comfy so forgive me if there is some mistake. What I did: created a "Load Image" node and connected it to the "MeshGraphormer Hand Refiner"; created a "VAE Encode" node and connected the same "Load Image" to it... that VAE Encode I connected to the "Set Latent Noise Mask". Also, used like this, it sometimes isn't able to detect the hands in the image I'm using.
@Okratron-rr8we 1 year ago
@@listahul2944 yep, thats exactly what i did also. im sure there is a way to identify the hands for the ai but im new to this also. thanks for trying though
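A minimal sketch of the img2img variant discussed in this thread, in ComfyUI's API prompt format: a Load Image node replaces the first text-to-image pass, the loaded picture feeds both the MeshGraphormer preprocessor and a VAE Encode, and the resulting mask is attached to the encoded latent. Node IDs, the filename and the padding input name are placeholders/assumptions; as noted above, MeshGraphormer returns an empty mask when it can't detect hands in the source photo:

```python
import json

prompt_fragment = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "my_photo.png"}},          # placeholder filename
    # hand depth map (output 0) + inpaint mask (output 1) from the loaded image
    "2": {"class_type": "MeshGraphormer-DepthMapPreprocessor",
          "inputs": {"image": ["1", 0], "mask_bbox_padding": 30}},
    # encode the same image into latent space ("4" = checkpoint loader providing the VAE)
    "3": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["1", 0], "vae": ["4", 2]}},
    # attach the hand mask so the next KSampler only regenerates the hands
    "5": {"class_type": "SetLatentNoiseMask",
          "inputs": {"samples": ["3", 0], "mask": ["2", 1]}},
}
print(json.dumps(prompt_fragment, indent=2))
```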
@PeoresnadaStudio 1 year ago
i would like to see more result examples :)
@OlivioSarikas 1 year ago
You can create as many as you want with my workflow. But I know what you mean 🙂
@PeoresnadaStudio 1 year ago
@@OlivioSarikas I mean, it's nice to see more samples in general... thanks for your videos, they are great!
@ImAlecPonce 1 year ago
Thanks :) I'm going to try sticking an img to img to it right away XD
@DarkPhantomchannel 5 months ago
For anyone that gets an ERROR: i found out that the node can't get the model if it's in the "normal" controlnet folder (at least not the AUTOMATIC1111 one). The precise folder is: ComfyUI_windows_portable / ComfyUI / custom_nodes / comfyui_controlnet_aux / ckpts / hr16 / ControlNet-HandRefiner-pruned
@haoshiangyu6906 1 year ago
Add a Krita + Comfy workflow, please! I see a lot of videos that combine the two and would like to see how you use it.
@DashtonPeccia 1 year ago
I'm sure this is a novice mistake, but I am getting AV_ControlNetPreprocessor node type missing even after completely uninstalling and re-installing the Controlnet Aux Preprocessor. Anyone else getting this?
@kasoleg 1 year ago
I have the same case, help
@KonoShunkan 1 year ago
That is a different set of custom nodes to the aux controlnet nodes. It's called comfyui-art-venture (AV = Art Venture) and can be installed via Comfyui Manager. You may also need control_depth-fp16 safetensors model from Hugging Face.
@2PeteShakur 1 year ago
@@KonoShunkan getting conflicts with comfyui-art-venture, disabled the conflicted nodes, still the issue...
@Madwand99 1 year ago
I'm getting this error too, I haven't figured it out yet.
@notanemoprog 11 months ago
Because the one featured in the video and workflow is _not_ in "comfyui_controlnet_aux-main", which most people have, but in another "venture" package. So, if I understood the point of that node, I suppose the same result can be produced by replacing the "ControlNet Preprocessor" used in the video (from that "venture" package I don't have) with "AIO Aux Preprocessor" set to "MiDas DepthMap"; I got at least the first image produced (bad hands) before further problems happened.
@toonleap 1 year ago
No love for AUTOMATIC1111?
@Ultimum 1 year ago
Is there something similar for Stable diffusion?
@beatemero6718 1 year ago
What do you mean? Bro, this IS stable diffusion.
@Ultimum 1 year ago
@@beatemero6718 Nope thats ComfyUI
@sharezhade 1 year ago
@@beatemero6718 I think they mean Automatic1111, cos ComfyUI is so complicated for some users
@notanemoprog 11 months ago
Probably means one of the text user interfaces like A1111 or similar@@beatemero6718
@ttul 1 year ago
Hmmm. The mask still being in the latent batch output is something that should be fixed.