For error: "cannot import name 'packaging' from 'pkg_resources'" The solution: Ensure that Python 3.12 or lower is installed with a comfy UI portable. Then go inside ComfyUI_windows_portable\python_embeded folder and run this command. python.exe -m pip install setuptools==65.5.1 A comfy update installs setuptools-70.0.0, you need to downgrade for it to work.
@hleet6 ай бұрын
Thank you very much! Now it works.
@SteAtkins6 ай бұрын
@@hleet We're at the tip of the spear now, thanks. Never give up and we'll get there.
@chea998620 күн бұрын
I installed ComfyUI-vextra-Nodes and it shows "Import Failed". What should I do? Should I follow your pinned comment at the top?
@controlaltai20 күн бұрын
@ Hi, Comfy has changed a lot. You have to contact the custom node's developer for support, as they have to update it to make it compatible with the latest version of Comfy. My pinned note was about the YOLO nodes from zhozhozho, not the custom node you mentioned.
@chea998620 күн бұрын
@@controlaltai Which version of Comfy did you use? I downloaded an old version and used that too. Thank you.
@moonson81014 ай бұрын
I've seen your video before, but I thought it was a bit too long back then. This week, when I really ran into inpainting problems, I checked your video and found it really, really helpful: detailed enough to help me improve my inpainting, and it works well. Thank you! Many subtle elements really need to be learned from you step by step. Keep going, the time will be worth it.
@AI3EFX10 ай бұрын
Crazy. Time for me to put my ComfyUI cape back on. Great video!!!
@goodie2shoes7 ай бұрын
Amazing tutorial. I'll need a couple of viewings to take it all in because there is so much useful information!
@brgresearch10 ай бұрын
Brilliant explanations. Thanks for making this video, it is so useful, and you have a great mastery of the subject.
@brgresearch10 ай бұрын
My only gripe, as I'm replicating these workflows, is that the seed numbers you use could be simpler to replicate, or perhaps pasted in the description. That way we could easily get the exact same generations that you did. Right now, not only is the seed long and complicated, but it's not always clear. In the case of the bear on the street, seed 770669668085503 (the easiest frame I could find was at 22:16) was really hard to make out even on a 2K monitor, due to the 6's looking like 8's. Still replicable, but for ease of following along, an easier seed would be helpful. Thank you again for making this; I'm halfway through replicating the workflows and I'm beginning to understand!
@controlaltai10 ай бұрын
@brgresearch The seed number I used is random. Don't use the same seed: it's not CPU-generated, so it will give different results if you are not using the same GPU as mine. Use any random seed and keep randomizing it. You are supposed to replicate the workflow's intent rather than the precise output. Meaning, the workflow is supposed to do x with y output; at your end it should still do x, with z output. I hope that makes sense.
@controlaltai10 ай бұрын
Also, if you need the seed for any video, just send an email or comment on the video and I will post it for you. I prefer not to post it in the description, as someone without a 4090 will get a different output.
@brgresearch9 ай бұрын
@@controlaltai Thank you for the clarification. I did not know that the hardware would also affect the generation. My thought was to follow along as exactly as possible, so that I would get the same results and be able to make the changes you made in a similar manner, especially with the seam correction example, because I did not want to get a different bear! I completely understand that it's okay to get a z output even if yours is y, as long as the workflow process arrives at the same type of result. I'm practicing with the workflow today, and it's really amazing what can be accomplished with it. Thank you so much again, and I really appreciate the work and education you are doing.
@runebinder10 ай бұрын
Excellent video. I want to use ComfyUI as much as I can, but inpainting and outpainting have been better for me in other UIs; hopefully this will help. I've also only just realised you can zoom in and out of the canvas in the Mask Editor, from watching you do it when you were fixing the edge of the mask after adding the polar bear lol.
@agreda15935 ай бұрын
Hi, I cannot see the inpaint folder
@controlaltai5 ай бұрын
Create one: 'ComfyUI\models\inpaint'
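On the portable build that is just the following, assuming the default folder layout:

mkdir ComfyUI_windows_portable\ComfyUI\models\inpaint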
@jeremystere_FRАй бұрын
The best Comfy tutorial!
@oliviertorres800110 ай бұрын
Results are amazing. But the learning curve to understand (and not only copy/paste) all these workflows looks like a very long journey... Nevertheless, I subscribed immediately 😵💫
@brgresearch10 ай бұрын
I had the same reaction. I can't imagine how these workflows were first created, but I'm grateful that eventually, because of these examples, I might understand it.
@blacksage819 ай бұрын
If you put the time in, you will understand. Also, I suggest finding a specific use case. In other words "Why am I in this space, what do I want the AI to help me create?" For me, it was consistent characters, so learning masking and inpainting is great for me so I can ensure likeness and improve my training dataset.
@oliviertorres80019 ай бұрын
@@blacksage81 For me, it's to build my own workflow so I can offer virtual home staging upfront to my clients; I'm a real estate photographer. Of course, it's worth struggling a little bit to nail inpainting at a high skill level 🧗
@brgresearch9 ай бұрын
@@blacksage81 this is really good advice. For me, I'm trying to create a filter tool for photos with controlnet and to be able to do minor photo repairs using masking and inpainting. ComfyUI is such a flexible tool in that regard, but at the same time, it's amazing to see how some of the workflows are created.
@LuckRenewal10 ай бұрын
I really love your videos; you explain everything very well!
@rsunghun10 ай бұрын
This is so high level 😮
@АлексейКузнецов-ж4з7 ай бұрын
Huge thanks for the video! At last I have good inpaint and outpaint workflows.
@Artishtic7 ай бұрын
Thank you for the object removal section
@jeffg468610 ай бұрын
Thanks for the comment the other day. I had already deleted my post before I saw it, so unfortunately the tip wasn't left for others. The tip (leaving it here for others) was to delete the specific custom node folder if you have problems loading an addon, in certain cases anyway. I had an idea for NN model decoders. The idea is simple: pass in a portion of the image that is pre-rendered and that you want unchanged in the final image. The decoder would basically do its magic right after the noise is generated: right on top of the noise, the decoder overlays the image you want included (transparent in most cases). It could have some functionality in the NN decoder for shading your image clips, both lighting applied to them as well as shadows. This might even need a new adapter "type", but I just haven't gotten deep enough into it yet (sorry if you're reading this as I keep correcting it, it's like 4:48 am... it's pretty bad writing...). If you have direct contacts at Stability AI, you might reach out and suggest something about including pre-renders directly into the noise at the beginning of the denoise process.
@swipesomething7 ай бұрын
3:37 After I installed the node, I had the error "cannot import name 'packaging' from 'pkg_resources'". I updated the inference and inference-gpu packages and it worked, so if anybody has the same error, try updating inference and inference-gpu.
@controlaltai7 ай бұрын
The issue is this won't work with the latest version of ComfyUI. Python 3.12 is incompatible; you have to use an older version of Python.
@controlaltai7 ай бұрын
Okay, found the solution. First, ensure that Python 3.11 or lower is installed with ComfyUI portable. Then go inside the python_embeded folder within the ComfyUI portable folder and run this command: python.exe -m pip install setuptools==65.5.1. A ComfyUI update installs setuptools 70.0.0; you need to downgrade for it to work.
@Mr.WangLittleHand2 ай бұрын
cool video, super helpful! Please make more like this! 🥰🥰
@Wibur1110 ай бұрын
The good news is that the ComfyUI Yolo World plugin is great. The bad news is that the author of this plugin has made many plugins and never maintains them.
@haljordan15759 ай бұрын
That's my least favorite thing about self-run image workflows.
@laetokang22632 ай бұрын
Wow! This is an incredible tutorial! I'm curious: would it be possible to use the inpainting technique to create 'invisible mannequin' effects, where only the clothing remains visible?
@controlaltai2 ай бұрын
Thank you! Yes, you probably can. The logic would be: get the clothing masked, then overlay the clothing on your mannequin image, provided the model is aligned the same way as the mannequin.
@dexter00104 ай бұрын
Hi, really useful and interesting video! I have one small problem and maybe you can help me. At the 33:25 minute mark your KSampler has a seed input circle; mine doesn't. I tried the KSampler (WAS) node, which has a seed input circle, but my Seed node from rgthree doesn't want to connect to the WAS KSampler node... any help please?
@controlaltai4 ай бұрын
Hi, you have to right-click on the KSampler and convert the seed widget to an input.
@vivektekale2 ай бұрын
In the final output, it changes the original image outside the masked area. Any solution for this?
@controlaltai2 ай бұрын
That should not happen. It may change minor details; that is normal. Whenever you put an image through diffusion, pixel to latent and back to pixel, some change will be there depending on the checkpoint. A solution would be to invert the mask and paste the original image back over everything except the newly edited area.
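If you want to do that paste-back step outside of ComfyUI, here is a minimal Pillow sketch of the idea (the file names are placeholders):

from PIL import Image

original = Image.open("original.png").convert("RGB")
edited = Image.open("inpainted.png").convert("RGB")
mask = Image.open("mask.png").convert("L")  # white = the edited region

# Image.composite takes pixels from the first image where the mask is white
# and from the second image where it is black, so everything outside the
# mask comes back from the untouched original.
restored = Image.composite(edited, original, mask)
restored.save("restored.png")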
@KANGZHANG-q5r10 ай бұрын
Great. Very detailed.
@sanchitwadehra10 ай бұрын
Dhanyavad (thank you)
@root65727 ай бұрын
why are the model files for inpainting not in safetensors format?
@Kikoking-y9b9 ай бұрын
First of all, thank you. My question is: which inpaint method is better, using VAE Encode and then refining with Preview Bridge, or working directly with VAE Encode & Inpaint Conditioning without any refinement? I want to know how to get the best results :) Appreciated.
@controlaltai9 ай бұрын
Hi, so basically I recommend both. The VAE Encode method is for replacement, and it replaces much better than VAE Encode & Inpaint Conditioning. However, during extensive testing I found that in some cases the latter can also replace much better, though in many cases I had to keep regenerating with random seeds. I would go with the first method, then try the second, because the second is not always for replacing an object, and its success depends on a variety of factors like the background, the object mask, etc. For minor edits go with the second; for major edits like complete replacement, try the first method, then the second.
@barrenwardo10 ай бұрын
Awesome
@freke809 ай бұрын
Very well explained! ❤
@controlaltai9 ай бұрын
Thank you!!
@geraldwiesinger63010 ай бұрын
Wow, awesome results. What resolution are the images? Would this work on 4K images as well, or would it be necessary to downscale or crop the inpaint region first?
@controlaltai10 ай бұрын
It's advisable to downscale to near SDXL resolution, then upscale using ComfyUI or Topaz.
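As an illustration of that downscale step, a rough Pillow sketch that brings the long side near the SDXL-native 1024 px and snaps the dimensions to multiples of 8 (the values are just examples):

from PIL import Image

img = Image.open("photo_4k.png")
target = 1024  # rough SDXL-native long side
scale = target / max(img.size)
if scale < 1.0:  # only downscale, never upscale here
    new_size = (round(img.width * scale / 8) * 8,
                round(img.height * scale / 8) * 8)
    img = img.resize(new_size, Image.LANCZOS)
img.save("photo_sdxl.png")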
@Bikini_Beats10 ай бұрын
Hi, thanks for the video. Does this work with SD 1.5?
@controlaltai10 ай бұрын
The Fooocus inpaint patch is only for SDXL. YOLO World and color grading don't require any checkpoint.
@ericruffy212410 ай бұрын
amazing as always, thanks for all details Mali~
@Kikoking-y9b8 ай бұрын
Hello 👋 Can I use the replace method with refinement to inpaint a face and give a woman shorter hair? I tried it and it looked bad, with blurred hair and a visible masking line.
@controlaltai8 ай бұрын
Yeah, do one at a time. Don't mask the entire face, just above the eyebrows and down to the chin, and keep the mask within the facial borders. The masking line has to be refined. Why it is blurred I have no idea; that depends on your KSampler settings and checkpoint. I suggest you look at the face detailer for hair; it has an automated workflow specifically for hair. kzbin.info/www/bejne/labEgGqMhNtmfKMsi=b_6LSljm0SjLYXvq
@Kikoking-y9b8 ай бұрын
I appreciate your very fast answer, thank you a lot. I will take your advice. ❤
@op12studio5 ай бұрын
Holy crap this was good... I love being able to just boop people out of pictures now lol
@ucyuzaltms93248 ай бұрын
Thanks for this incredible tutorial. I have a problem: I want to use the images that come from YoloWorld ESAM, but there are box and text overlays. How can I remove them?
@controlaltai8 ай бұрын
YOLO World doesn't do anything but segment. You process further by removing objects, and use those images. Why would you want to use the images straight from YoloWorld ESAM?
@saberkz9 ай бұрын
How can I contact you for some workflow help?
@nkofr9 ай бұрын
Hi! I'm trying to understand the point of preprocessing with LaMa if the samplers then use a denoise of 1.0.
@controlaltai9 ай бұрын
Hi, the checkpoints used by the samplers are not trained to remove objects that well. LaMa is very well trained; however, it's not perfect. The point here is to use LaMa to accurately remove the subject or object and then use Fooocus inpainting to guide and fix the image to perfection.
@nkofr9 ай бұрын
@@controlaltai Yes, but my understanding was that setting denoise to 1.0 is like starting from scratch (not using anything from the denoised area), so if denoise is set to 1, my understanding is that what LaMa has done is completely ignored. No?
@controlaltai9 ай бұрын
@nkofr Not really. We are using the Fooocus inpaint models with the inpaint conditioning method, not the VAE Encode method. This method is basically for fine-tuning, whereas VAE Encode is for subject replacement. A denoise of 1 here is not the same as a denoise of 1 in general sampling; the comparison is apples to oranges. The denoise value also is not a hard rule and depends on the distortion caused by the LaMa model. So no, a denoise of 1 will not undo the LaMa work; you can actually see in the workflow that it uses the base left by LaMa and reconstructs from that. The thing is, MAT and LaMa work on complicated images and their reconstruction is beautiful, but for such complexity we just need to fine-tune the result. Hence we use the fine-tune method.
@nkofr9 ай бұрын
@@controlaltai Ok, thanks, that makes sense! (What you call "fine tune" is the pass with Fooocus inpaint.) Have you heard about LaMa with a refiner? Any idea how to activate the refiner for LaMa in ComfyUI? Where do you get all that knowledge from? :)
@controlaltai9 ай бұрын
No idea how to activate the refiner for LaMa in ComfyUI at the moment.
@THEJATOXD5 ай бұрын
Is there any update on this? I'm still getting the "cannot import name 'packaging' from 'pkg_resources'" error.
@controlaltai5 ай бұрын
Yeah, I posted it in the pinned post. Reposting for you again... For the error "cannot import name 'packaging' from 'pkg_resources'", the solution: ensure that Python 3.11 or lower is installed with the ComfyUI portable build. Then go inside the ComfyUI_windows_portable\python_embeded folder and run this command: python.exe -m pip install setuptools==65.5.1. A ComfyUI update installs setuptools 70.0.0; you need to downgrade for it to work.
@THEJATOXD5 ай бұрын
@@controlaltai Hey, I tried doing that and was not successful, still had the error. Any suggestions?
@controlaltai5 ай бұрын
Something else is wrong then; that has to do with some dependency version. If you are on the portable build, the dependency command has to be run in the Comfy Python folder. Double-check the setuptools version, and also check your portable Python version: it should be lower than 3.12. I have double-checked it and it works, unless Python is at 3.12.
@moviecartoonworld445910 ай бұрын
Thank you for the always-great lectures. I am leaving a message because I have a question. If you uncheck both mask_combined and mask_extracted in Yoloworld ESAM and run it, I get the error "Error occurred when executing PreviewImage: Cannot handle this data type: (1, 1, 15), |u1". Is there a solution? You can check and run them separately, but if you run it with both turned off, the error appears.
@controlaltai10 ай бұрын
Thank you! So basically, if you pass the mask on to another node, that node cannot handle multiple masks. If YOLO, for example, detects more than 1 mask, you will get this error when passing it on. For that, you should select an extracted mask value or combine the masks; only a single mask image should be passed on. If you are getting the error without passing it on, then let me know, because something else is wrong; I double-checked just now and I don't get that error.
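To make the shape problem concrete, a small PyTorch sketch of the two fixes, assuming the detections come out as an [N, H, W] mask tensor:

import torch

# e.g. 15 detected masks, shape [N, H, W]; random data stands in for real output
masks = (torch.rand(15, 512, 512) > 0.5).float()

# Option 1: combine all detections into one mask (what mask_combined does)
combined = masks.max(dim=0, keepdim=True).values   # [1, H, W]

# Option 2: extract a single detection by index (what mask_extracted does)
extracted = masks[3].unsqueeze(0)                  # [1, H, W]

# Downstream nodes expect exactly one mask, not 15
print(combined.shape, extracted.shape)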
@moviecartoonworld445910 ай бұрын
Thank you for the answer!! @@controlaltai
@Kikoking-y9b9 ай бұрын
I need help please: I don't see the models in the Load Fooocus Inpaint node. I downloaded all 4 and placed them in models\inpaint models.
@controlaltai9 ай бұрын
The location is ComfyUI_windows_portable\ComfyUI\models\inpaint, not ComfyUI_windows_portable\ComfyUI\models\inpaint\models. After putting the models there, close everything including the browser and restart.
@Kikoking-y9b9 ай бұрын
@@controlaltai Thank you. The problem was solved after I renamed the folder to 'inpaint' instead of 'inpaint models'. I appreciate your fast answer ;) Keep going, I like your work.
@emizemani69585 ай бұрын
Amazing work! Just one question: in the combined workflow, how do you add a mask manually? Because the moment I queue the prompt, the mask disappears from the Preview Bridge. Am I doing something wrong?
@controlaltai5 ай бұрын
Thank you. That is normal. The Preview Bridge is a live stop-over node: if the source before it regenerates, the Preview Bridge will refresh. If you have masked once and want to reuse that same mask, replace the Preview Bridge, or make a side connection with a Load Image As Mask node and load the mask there.
@emizemani69585 ай бұрын
@@controlaltai Thank you for your reply! I tried what you suggested and it worked. Just another question: is there any way to have all of those workflows in one Comfy tab?
@controlaltai5 ай бұрын
@@emizemani6958 You're welcome, and yes. Update Comfy to the latest version and go to Settings; there should be a beta option at the top. Enable it by setting the menu bar position to top or bottom. You also get workflow management in this beta, and you can load multiple workflows in one tab.
@neoz841310 ай бұрын
Import failed, and the log file says it can't find Supervision. How do I fix this, please?
@controlaltai10 ай бұрын
Go to the ComfyUI embedded Python folder, open a terminal, and try: python -m pip install supervision. If that does not work, then try this: python -m pip install inference==0.9.13
@edba74108 ай бұрын
May I get the JSON files for this lesson?
@controlaltai8 ай бұрын
Everything is explained in the video. There are 6 to 7 workflows; you can build them yourself.
@edba74108 ай бұрын
@@controlaltai I tried, but I don't get the same results as you. Maybe I can't catch some points and I'm connecting the nodes incorrectly.
@runebinder10 ай бұрын
I'm trying out the YOLO World mask workflow, but I'm getting this error when I get to the first Mask to Image node: "Error occurred when executing MaskToImage: cannot reshape tensor of 0 elements into shape [-1, 1, 1, 0] because the unspecified dimension size -1 can be any value and is ambiguous". I haven't changed any of the settings, I'm using a decent image without too much in it (res 1792 x 2304), and the prompt "shirt", which is showing in WD14. Not sure which settings I need to change; I have tried altering the confidence, but that hasn't helped, and I've tried both the YOLO L and M models. Any ideas?
@controlaltai10 ай бұрын
That error appears when it cannot detect any segment. Try with confidence 0.01 and IoU 0.50. If it still cannot detect anything, you need to check your inference version. When you launch Comfy, do you get a message in the command prompt that your inference is on a lower version (the latest is 0.9.16)? If you get that warning, all dependencies are good; if you don't, you are on the latest inference, which this node does not work with. The WD14 output is not what YOLO sees; that's just there to help you, the two are unrelated. I put it there because I was testing some low-resolution images where I could not see the objects but the AI could. Let me know if 0.01 / IoU 0.50 works or not.
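A quick way to check whether inference itself is healthy is to run YOLO World outside of ComfyUI. A sketch following Roboflow's published example for inference 0.9.x (the image path and class names are placeholders):

import cv2
import supervision as sv
from inference.models.yolo_world import YOLOWorld

image = cv2.imread("portrait.jpg")           # placeholder image path
model = YOLOWorld(model_id="yolo_world/l")   # fetches the model on first run
model.set_classes(["face", "shirt"])

# a very low confidence mirrors the 0.01 suggestion above
results = model.infer(image, confidence=0.01)
detections = sv.Detections.from_inference(results)
print(len(detections), "objects detected")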
@runebinder10 ай бұрын
Thanks. I’ll check in a bit and let you know how I get on.
@runebinder10 ай бұрын
@@controlaltai I copied everything out of the Command Prompt window into a Word doc so I could use Ctrl+F to search for "inference", and I get the warning that I'm on 0.9.13 and it asks me to update, so it looks good on that front. I tried the same image but used "face" as the prompt this time, as it's a portrait shot and I figured that would be easy for it to find, and it worked. Thanks for your help :)
@iangregory95699 ай бұрын
Do you have any basic masking or compositing videos?
@controlaltai9 ай бұрын
Not yet; however, a basic series of 10 to 15 episodes, maybe more, will be coming to the channel, slowly covering part by part and explaining every aspect of Comfy and Stable Diffusion. We just don't have an ETA on the first episode…
@iangregory95699 ай бұрын
@@controlaltai Sorry, I guess what I mean is: what "mask node" would I use to layer two images together, like in Photoshop, Fusion, or AE? So a 3D-rendered apple with a separate alpha channel, comped onto a background of a table. There are so many mask nodes that I don't know which is the most straightforward to use for such a simple job. Thanks.
@controlaltai9 ай бұрын
It works differently here. Say you have an apple and a bg: you use a mask to select the apple, cut the apple out, and then paste it on another bg. This can be done via the Masquerade nodes' Cut by Mask and Paste by Mask functions. For selecting the apple, manual masking is messy; you can use Grounding DINO, CLIPSeg, or YOLO World. All three would suffice. In between you can add a Grow Mask node, Feather Mask, etc. to refine the mask and selection.
@iangregory95699 ай бұрын
thank you!@@controlaltai
@croxyg69 ай бұрын
If you have a missing .jit file error, go to SkalskiP's huggingface and find the jit files there. Place in your custom nodes > efficient sam yolo world folder.
@alvarocardenas888810 ай бұрын
Is there a way to create the mask for the mammoth automatically, like putting a mask where the woman was before, with some padding?
@controlaltai10 ай бұрын
Yes, you can. Try the Create Rectangular Mask node from the Masquerade nodes. Use some math nodes to feed the size directly from the image source into the width and height inputs, then just define the x and y coordinates and the mask size.
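Outside of ComfyUI, the same idea is a few lines of Pillow: a white rectangle at the subject's old position, grown by some padding (the coordinates are made up for illustration):

from PIL import Image, ImageDraw

w, h = 1344, 768                          # taken from the source image
x, y, box_w, box_h = 420, 180, 300, 520   # where the subject used to be
pad = 32                                  # extra padding around the region

mask = Image.new("L", (w, h), 0)          # black = keep
draw = ImageDraw.Draw(mask)
draw.rectangle((x - pad, y - pad, x + box_w + pad, y + box_h + pad), fill=255)
mask.save("rect_mask.png")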
@baseerfarooqui58979 ай бұрын
Hi, very informative video. I am getting this error while running: "AttributeError: type object 'Detections' has no attribute 'from_inference'"
@controlaltai9 ай бұрын
Thank you! Is it detecting anything? Try a lower threshold.
@baseerfarooqui58979 ай бұрын
@@controlaltai already tried but nothing happened
@controlaltai9 ай бұрын
Check the inference version.
@baseerfarooqui58979 ай бұрын
Can you please elaborate? Thanks @@controlaltai
@petrino7 ай бұрын
Error occurred when executing Yoloworld_ESAM_Zho: cannot import name 'packaging' from 'pkg_resources' (C:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu 1\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pkg_resources\__init__.py)
@petrino7 ай бұрын
This is also after I followed this step, the command to install inference: python -m pip install inference==0.9.13 and python -m pip install inference-gpu==0.9.13
@controlaltai7 ай бұрын
I am looking at this. I don't think it's an inference problem; it's something to do with the Python version or the latest Comfy update. Will get back to you if I find a solution.
@icebergov9 ай бұрын
I checked the link in the description for Yolo World Efficient Sam S CPU/GPU Jit, and the model there is marked as unsafe by HuggingFace... where can I download it from?
@controlaltai9 ай бұрын
Please recheck: the .jit files are safe. The other file is the one marked unsafe, yolow-v8_l_clipv2_frozen_t2iv2_bn_o365_goldg_pretrain.pth. You can download it from another source here: huggingface.co/spaces/yunyangx/EfficientSAM/tree/main
@icebergov9 ай бұрын
@@controlaltai thanks
@stepahin9 ай бұрын
It gets stuck on Blur Masked Area. I see issues on GitHub but can't find a clear solution, something about the PyTorch version :(
@controlaltai9 ай бұрын
Yeah, I bypassed it by having the code run on the GPU... I had to make modifications to the node.
@mariusvandenberg42509 ай бұрын
Fantastic node thank you. I am getting this error: Error occurred when executing Yoloworld_ESAM_Zho: 'WorldModel' object has no attribute 'clip_model'
@eric-rorich9 ай бұрын
Me too... there is already a Ticket open, should be fixed soon
@controlaltai9 ай бұрын
Are you on inference 0.9.13 or the latest 0.9.17?
@eric-rorich9 ай бұрын
@@controlaltai inference package version 0.9.13
@controlaltai9 ай бұрын
Are the .jit models downloaded? When does this error happen, always or occasionally?
@mariusvandenberg42509 ай бұрын
@@controlaltai Yes I am. I reran python -m pip uninstall inference and then python -m pip install inference==0.9.13
@haljordan15759 ай бұрын
What if you wanted to replace an object with an existing one, or inpaint over it?
@controlaltai9 ай бұрын
The tutorial covers that extensively. Please check the video.
@ankethajare91768 ай бұрын
hey, what if my image is more than 4096 pixels for outpainting?
@controlaltai8 ай бұрын
Hey, you may run into out-of-memory issues on consumer-grade hardware; SDXL cannot handle that resolution. You can outpaint in smaller steps and do more runs rather than going beyond a 1024 outpaint resolution.
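To illustrate the multiple-runs idea, a Pillow sketch that extends the canvas by 256 px per run; each padded result would then go through an outpaint pass before being padded again:

from PIL import Image

img = Image.open("wide_scene.png")
step = 256  # pad in small slices instead of one huge border

# extend the canvas to the right; the new gray strip is what the
# outpaint pass fills in on this run
canvas = Image.new("RGB", (img.width + step, img.height), "gray")
canvas.paste(img, (0, 0))
canvas.save("outpaint_step_1.png")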
@가릉빈가-i5m7 ай бұрын
I tried to install inference==0.9.13 but I got an error. Should I downgrade my Python version to 3.11?
@controlaltai7 ай бұрын
I suggest you back up your environment, then downgrade. It won't work unless you're on 3.11.
@가릉빈가-i5m7 ай бұрын
@@controlaltai Thank you, I solved the problem on 3.11.
@mikrodizels9 ай бұрын
Quick question: if I need to add LoRAs to the workflow, should they come before the Self-Attention Guidance + Differential Diffusion nodes or after? Does it make a difference?
@controlaltai9 ай бұрын
I add the LoRA after Self-Attention Guidance and Differential Diffusion. To be honest, I have not tested it in any other order.
@mikrodizels10 ай бұрын
Does this only work with SDXL models? I have only tried outpainting for now. I want to outpaint my epicrealism_naturalSinRC1VAE-generated images. Everything seems to work in the previews, but in the final image, after going through the sampler, the outpainted area is just noise. I included the same LoRA and custom VAE I used to generate my images previously in this workflow as well.
@controlaltai10 ай бұрын
The fooocus patch only works with SDXL checkpoints.
@mikrodizels10 ай бұрын
@@controlaltai Oh ok, got it. Is there an outpaint workflow that would work like this for SD 1.5?
@controlaltai10 ай бұрын
In Comfy, all you have to do is remove the Fooocus patch. However, you have seen the difference when applying Fooocus. I suggest you switch to any SDXL checkpoint; even Turbo or Lightning will give good results.
@mikrodizels10 ай бұрын
@@controlaltai Got it to work without Fooocus for 1.5, seamless outpaint, but the loss of quality with each queue (the image becomes more grainy and red) is unfortunately inescapable no matter what. You are correct, time to try the Lightning Juggernaut. Cheers.
@kikoking50099 ай бұрын
Great video, very helpful. I just have an issue with removing an object. I have a picture of 4 men and I wanted to remove 1. In the step at the end, after the KSampler, I have the issue that the face details of the other persons change a bit when I look at it in the Image Comparer (rgthree). Can I remove 1 person without changing other details?
@controlaltai9 ай бұрын
Thanks! This is a bit complicated, so I would have to try it. Are you finding this issue after the first or second KSampler? Also, the approach would depend on the interaction in the image. If you can send a sample image, I can try it and let you know if I'm successful.
@kikoking50099 ай бұрын
@@controlaltai I find the issue in both KSamplers. I don't know how to send a sample image; here on YouTube I can only write.
@controlaltai9 ай бұрын
@@kikoking5009 send an email to mail @ controlaltai . com (without spaces)
@controlaltai9 ай бұрын
I cannot reply to you from whatever email you sent from: "Remote server returned '550 5.4.300 Message expired -> 451 Requested action aborted; Reject due to policy restrictions'". I need the photo sample of the 4 persons along with your workflow. Send an email from an account where I can reply back to you.
@kikoking50099 ай бұрын
@@controlaltai I tried and sent it from another email; if it didn't work, I really don't know. By the way, I'm thankful that you answer my questions and try to help. Best of luck.
@yklandares9 ай бұрын
As you wrote, I downloaded the models, but in the node where we select yolo_world/l, aren't they supposed to load themselves? But no, I have this error when loading Yoloworld_ModelLoader_Zho: it is impossible to get the world model.
@controlaltai9 ай бұрын
Yes, the dev has designed it so that the "yolo_world/l" model loads automatically. However, you have to download the .jit files into the custom node's root folder; otherwise you get the model error and it does not load automatically.
@yklandares9 ай бұрын
Error occurred when executing Yoloworld_ModelLoader_Zho: Can't get attribute 'WorldModel'
File "G:\NEUROset\ComfyUIPort\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "G:\NEUROset\ComfyUIPort\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "G:\NEUROset\ComfyUIPort\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "G:\NEUROset\ComfyUIPort\ComfyUI\custom_nodes\ComfyUI-YoloWorld-EfficientSAM\YOLO_WORLD_EfficientSAM.py", line 70, in load_yolo_world_model
    YOLO_WORLD_MODEL = YOLOWorld(model_id=yolo_world_model)
File "G:\NEUROset\ComfyUIPort\python_embeded\Lib\site-packages\inference\models\yolo_world\yolo_world.py", line 36, in __init__
    self.model = YOLO(self.cache_file("yolo-world.pt"))
File "G:\NEUROset\ComfyUIPort\python_embeded\Lib\site-packages\ultralytics\engine\model.py", line 95, in __init__
    self._load(model, task)
File "G:\NEUROset\ComfyUIPort\python_embeded\Lib\site-packages\ultralytics\engine\model.py", line 161, in _load
    self.model, self.ckpt = attempt_load_one_weight(weights)
File "G:\NEUROset\ComfyUIPort\python_embeded\Lib\site-packages\ultralytics\nn\tasks.py", line 700, in attempt_load_one_weight
    ckpt, weight = torch_safe_load(weight)  # load ckpt
File "G:\NEUROset\ComfyUIPort\python_embeded\Lib\site-packages\ultralytics\nn\tasks.py", line 634, in torch_safe_load
    return torch.load(file, map_location="cpu"), file  # load
File "G:\NEUROset\ComfyUIPort\python_embeded\Lib\site-packages\torch\serialization.py", line 1026, in load
    return _load(opened_zipfile,
File "G:\NEUROset\ComfyUIPort\python_embeded\Lib\site-packages\torch\serialization.py", line 1438, in _load
    result = unpickler.load()
File "G:\NEUROset\ComfyUIPort\python_embeded\Lib\site-packages\torch\serialization.py", line 1431, in find_class
    return super().find_class(mod_name, name)
@@controlaltai
@controlaltai9 ай бұрын
Make sure something is masked. Also ensure that when multiple objects are masked, only one is passed through to the next node; that can be done via mask combine or mask extracted (selection).
@yklandares9 ай бұрын
I didn't sleep for two days and agonized over the process; eventually I placed the two .jit models not just in a folder, but in one named yolo_world @@controlaltai
@francaleu777710 ай бұрын
thank you, great
@NeonSparks6 ай бұрын
I really love ComfyUI, but I'm getting tired of custom nodes that end up breaking everything... WD tagger is absolutely terrible... I had to reinstall Comfy along with all my nodes.
@controlaltai6 ай бұрын
That node is optional. Also, you can avoid a reinstall: if you are using Comfy portable, back up the embedded Python folder. Then, when you install a node that breaks things, do this: go to the custom_nodes folder and just delete the broken custom node, then revert the embedded Python folder from the backup. This way you can avoid a full reinstall. Typically, to back up, just duplicate the folder and name it _2; to revert, delete the current one and rename the backup. I hope this helps.
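A sketch of that backup-and-restore on Windows, assuming the standard portable folder names, run from inside ComfyUI_windows_portable:

rem back up the embedded Python before installing risky custom nodes
robocopy python_embeded python_embeded_2 /E

rem to revert after a broken install: delete the live copy, rename the backup
rmdir /S /Q python_embeded
ren python_embeded_2 python_embeded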
@调调-l9n7 ай бұрын
Does this method work with videos?
@controlaltai7 ай бұрын
It does indeed, in my testing, but the workflow is way, way different. I took a plane takeoff video and removed the plane completely, then reconstructed the video. I did not include it in the tutorial as it was becoming too long.
@vivektekale2 ай бұрын
@@controlaltai Will you be able to make a separate video on removing objects from Video?
@vivektekale2 ай бұрын
Thanks in advance
@controlaltai2 ай бұрын
No, I won't be able to do that. For something so specific, it would be easier and quicker to explain personally; a lot of time and resources are needed to make videos for such things, and it's not practical.
@vivektekale2 ай бұрын
@ I can understand, thanks for the reply.
@yklandares9 ай бұрын
Don't ask me what kind of "yolo world" it is. Obviously the files need to go somewhere, and as you wrote, I downloaded them, but with yolo_world/l do they need to go somewhere specific? In general they are supposed to download themselves, but no, it writes this: I got an error when loading Yoloworld_ModelLoader_Zho: it is not possible to get the world model, in File "G:\NEUROset\ComfyUIPort\ComfyUI\execution.py", line 151, in recursive_execute: output_data, output_ui = get_output_data(obj, input_data_all)
@controlaltai9 ай бұрын
I cannot understand your question. Rephrase.
@yklandares9 ай бұрын
As you wrote, I downloaded the models, but in the node where we select yolo_world/l, aren't they supposed to load themselves? But no, I have this error when loading Yoloworld_ModelLoader_Zho: it is impossible to get the world model. @@controlaltai
@controlaltai9 ай бұрын
Yes models load themselves.
@petpo-ev1yd10 ай бұрын
Where should I put the YOLO models?
@controlaltai10 ай бұрын
What YOLO models are you talking about? Check here: 4:34
@petpo-ev1yd10 ай бұрын
@@controlaltai I mean yolo_world/l or yolo_world/m
@petpo-ev1yd10 ай бұрын
@@controlaltai Error occurred when executing Yoloworld_ModelLoader_Zho: Could not connect to Roboflow API. Here is the error.
@controlaltai10 ай бұрын
Did you download the .jit files?
@petpo-ev1yd10 ай бұрын
@@controlaltai Yes, I did everything you said.
@juxxcreative5 ай бұрын
Any chance of getting a downloadable workflow?
@controlaltai5 ай бұрын
Everything is shown in the tutorial and can be replicated, in the sense that the technique and know-how are not hidden behind a paywall. However, paid channel members have access to the JSON workflow along with the assets used.
@juxxcreative5 ай бұрын
@@controlaltai ok thnx
@andrejlopuchov797210 ай бұрын
Can that work with animatediff?
@controlaltai10 ай бұрын
Yup. I had planned to showcase it; however, I could not fit it in as the video went too long and I had to cut so many concepts, so I thought it would be a separate video altogether. YOLO World works for real-time video detection. Basically, I was able to take a plane lifting-off video, get the plane mask, make the plane disappear, and have the whole video without the plane, with only the camera and other elements moving. I still have to iron it out. Other ideas include using these workflow techniques to, say, color grade a video in Comfy: you have a short video of a person dancing, and you use a similar technique to isolate and change the colors of, say, the clothing and re-stitch everything. Everything shown in the video can be applied to AnimateDiff; just the workflows would be slightly different.
@Spinaster10 ай бұрын
Thank you for your precious tutorial. I followed every step but I still get the following error: "Error occurred when executing Yoloworld_ESAM_Zho: type object 'Detections' has no attribute 'from_inference'
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-YoloWorld-EfficientSAM\YOLO_WORLD_EfficientSAM.py", line 141, in yoloworld_esam_image
    detections = sv.Detections.from_inference(results)"
Any suggestions? 🙏
@controlaltai9 ай бұрын
Okay, multiple things can be wrong: 1. Check that inference is 0.9.13. 2. Check that the .jit models are downloaded correctly. 3. The object may not be detected; select some other keyword or reduce the threshold. 4. Multiple objects are selected and are being passed on to the mask node; only a single mask can be passed, so use mask_combined or select a single value from mask_extracted.
@Yojimbo-h8r7 ай бұрын
Anyone else struggling with the command python -m pip install inference==0.9.13, try using py -m pip install inference==0.9.13 instead.
@sukhpalsukh351110 ай бұрын
Please share workflow 🥺
@controlaltai10 ай бұрын
It's already shared with members. Also nothing is hidden in the video, you can create it from scratch if you do not wish to be a member.
@sukhpalsukh351110 ай бұрын
@@controlaltai Thank you, really advanced but simple tutorial, appreciate your work,
@beveresmoor10 ай бұрын
The workflow is not too hard to learn, but it takes up too much of my machine's resources if I put every process I want into one workflow. I need to separate them to finish the image. 😢
@controlaltai10 ай бұрын
Remove the tagger node: it needs to run on the CPU rather than the GPU, and there is a trick to do that. On the GPU it takes minutes; on the CPU, seconds. For the rest, it is what it is; splitting it is a good idea.
@manolomaru9 ай бұрын
✨👌😎😯😯😯😎👍✨
@controlaltai9 ай бұрын
Thank you!!
@manolomaru8 ай бұрын
@@controlaltai Hello Malihe 👋🙂 ...Yep, I installed Pinokio to avoid dealing with the other way of installation. But unfortunately I'll have to do it that way. Thank you so much for your time, and ultrasuperfast response 👍
@अघोर10 ай бұрын
This....this rivals Adobe.
@Pauluz_The_Web_Gnome10 ай бұрын
Luckily it's not this cumbersome! Nice
@Darquesse-y7k5 ай бұрын
bro you are fucking god to me
@silverspark817510 ай бұрын
Avoid using YOLO World: it has outdated dependencies and you will most probably have issues with other nodes. Also, Segm_Detector from the Impact Pack detects objects much more accurately.
@35wangfeng9 ай бұрын
agree with you
@controlaltai9 ай бұрын
I know, the dev doesn't respond. I am trying to find a way to update the dependencies myself and will post if I am successful. The techniques I show in the video are not possible via DINO or CLIPSeg. As of now, the best solution is to have another portable Comfy install with the YOLO and inpainting nodes inside. I am finding this is becoming very common: for example, Comfy 3D is a mess and requires a completely different environment, same with plenty of other stuff. With Miniconda I can manage different environments instead of separate installs, but I should get around to making a tutorial for that; this way we can still use new stuff without compromising the main go-to workflow.
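For the Miniconda route mentioned here, a minimal sketch of keeping the YOLO World dependencies in their own environment (the environment name is a placeholder):

conda create -n comfy-yolo python=3.11
conda activate comfy-yolo
pip install inference==0.9.13 inference-gpu==0.9.13 supervision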
@yklandares9 ай бұрын
Please reply to the subscriber :)
@stephaneramauge45507 ай бұрын
Thanks for the video, but slow down, by half at least!!! It's really painful to pause and rewind to see where you're clicking. It's a tutorial, not a Formula One race!
@controlaltai7 ай бұрын
Thank you for the feedback. Will take it into consideration.
@viniciuslacerda45779 ай бұрын
Error: efficient_sam_s_gpu.jit does not exist
@controlaltai9 ай бұрын
Check the requirements section of the video. You need to download the two .jit files into the custom node's YOLO folder.
Thanks! Please link the JSON file on Google Drive or something... I will check it out for you.
@hottrend49777 ай бұрын
Please help me, I get the error "cannot import name 'packaging' from 'pkg_resources'"
@controlaltai7 ай бұрын
Okay, found the solution. First, ensure that Python 3.11 or lower is installed with ComfyUI portable. Then go inside the python_embeded folder within the ComfyUI portable folder and run this command: python.exe -m pip install setuptools==65.5.1. A ComfyUI update installs setuptools 70.0.0; you need to downgrade for it to work.
@hotmarcnet9 ай бұрын
When the workflow passes the ESAM Model Loader, there is an error: """ Error occurred when executing ESAM_ModelLoader_Zho: PytorchStreamReader failed reading zip archive: failed finding central directory """
@controlaltai9 ай бұрын
I have no idea what this error is. Is this on Comfy portable? Windows OS? Or a different environment?