ComfyUI: Area Composition, Multi Prompt Workflow Tutorial

38,552 views

ControlAltAI · 1 day ago

Comments: 207
@controlaltai · 1 month ago
Please note: The node is no longer functional in the latest version of ComfyUI (checked on 20th August, 2024). Either I will make a new tutorial or a new node itself, but that will take time. As of now, you can still use area conditioning within ComfyUI; however, there is no visual aid for the coordinates and placement.
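For anyone who needs it in the meantime, here is a rough sketch of what the built-in area conditioning does to a conditioning list. This is an illustration of the mechanism only: the option keys ("area", "strength") and the divide-by-8 latent scaling are from memory of recent ComfyUI versions and may differ, and clip_encode is a hypothetical stand-in for a CLIP Text Encode output.
```python
# Illustration of ComfyUI's built-in "Conditioning (Set Area)" behaviour.
# Key names and scaling are from memory and may vary between versions.

def set_area(conditioning, x, y, width, height, strength=1.0):
    """Attach a pixel-space region to every conditioning entry.

    ComfyUI stores areas in latent units, i.e. pixel values divided by 8.
    """
    out = []
    for cond_tensor, opts in conditioning:
        opts = opts.copy()
        opts["area"] = (height // 8, width // 8, y // 8, x // 8)
        opts["strength"] = strength
        out.append([cond_tensor, opts])
    return out

# Example: confine a subject prompt to a 512x768 box at (64, 128), then
# concatenate with the background conditioning and feed it to the KSampler.
# subject = set_area(clip_encode("red riding hood"), 64, 128, 512, 768, 0.9)
# final_conditioning = background + subject
```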
@MarvelSanya · 1 month ago
This tool is exactly what I was looking for, but it doesn't seem to work... Can you recommend an alternative so that prompts can be set by image region?
@controlaltai · 1 month ago
No alternative as of now, unfortunately. I might try to fork and fix it, but the script is in JavaScript and requires some understanding. ComfyUI has this built in, but you cannot see it visually.
@LewGiDi · 12 days ago
@@controlaltai did you find a solution for this incredible node?
@controlaltai · 12 days ago
@LewGiDi not yet, I have to custom code it and make it compatible.
@fuwuffy · 11 days ago
I have added a node to ComfyUI Manager that works pretty similarly to the one made by DaveMane42. It is still WIP, but both the node and the visualizations work properly. It is called "ComfyUI Visual Area Nodes" in ComfyUI Manager.
@gabrielmoro3d · 9 months ago
Omggggg!!! This is a masterclass. Mind blowing. Joining the members area right now, your content is absolute gold.
@controlaltai · 9 months ago
Thank you!!
@hakandurgut · 10 months ago
Great tutorial, appreciate your time... I learned a lot. The only thing is, the node placement could follow the process order to make more sense, instead of packing them into a compact area.
@controlaltai · 10 months ago
Ohh thank you! I will keep that in mind and try to be more organized in the next video. I am still learning the ropes with Comfy; I should probably make a habit of creating colored groups and separating them as per the flow for easy understanding. Good feedback, appreciate it.
@enriqueicm7341 · 9 months ago
OMG! This was the best tutorial! Thanks a lot!
@controlaltai · 9 months ago
Welcome. Thank you for the support!!
@carlosmeza4478 · 1 month ago
Great tutorial, easy to follow along, and well prepared (don't think people didn't notice all the work looking for the correct seeds for the examples), and it actually covers more than the main topic. Great job guys (n_n)b
@controlaltai · 22 days ago
Thanks!!
@M4Pxls · 9 months ago
Really love your workflow! Subscribed ;) Would love to use the Track Anything model to mask out characters, use ControlNet to modify the background, then resample the whole image/sequence.
@controlaltai · 9 months ago
Hi, thank you!! Track Anything would work with a video workflow, correct? AnimateDiff?
@Unstable_Stories · 5 months ago
Does the visual area conditioning custom node no longer exist? I can't find it while searching in my manager...
@controlaltai · 5 months ago
I can still find it. Try searching the author name, Dave.
@Unstable_Stories · 5 months ago
@@controlaltai sooo weird. I can literally find any other author and custom node besides this one. Either something is weirdly wrong with mine, or maybe, since you already have it downloaded, you still have access to it even if it was removed for some reason...
@controlaltai · 5 months ago
@@Unstable_Stories no, it's still there; I always double check before answering any queries. You can check the GitHub page and manually install it. Also, you don't need this node to use multi-area composition. I used it because it's easier to explain and easier for end users. ComfyUI itself has area conditioning and multi-latent conditioning, but there is no visual representation. Here is the GitHub link: github.com/Davemane42/ComfyUI_Dave_CustomNode
@Unstable_Stories · 5 months ago
@@controlaltai Thanks so much! Not sure what the issue is on my end then. I will just manually install it and maybe install a clean version of the manager too. Your tutorials are really good. Thanks so much :)
@YuusufSallahuddinYSCreations · 4 months ago
I also could not find it in the Manager; however, a manual install from the git link above worked fine.
@Mypstips · 2 months ago
Amazing tutorial! Thanks a lot!
@ai_gene · 3 months ago
Thank you for the tutorial! I would like to know how to do the same thing as in the fourth workflow but using IPAdapter FaceID to be able to place a specific person in the frame. I tried, but the problem is that the inputs to MultiAreaConditioning are Conditioning, while the outputs from IPAdapter FaceID are Model. How can I solve this problem? I would appreciate any help.
@controlaltai · 3 months ago
Okay, but the area conditioning in this tutorial is not designed to work with IPAdapter; that's a very different workflow. We have not covered placing a specific person in the frame in a tutorial, but it involves masking and adding the person, then using IC-Light and a bunch of other stuff to adjust the lighting as per the scene, processing it through sampling, and then changing the face again.
@ai_gene · 3 months ago
@@controlaltai Thank you for your response! 😊 It would be great if you could create a tutorial on this topic. I'm trying to develop a workflow for generating thumbnails for videos. The main issue is that SD places the person's face in the center, but I would like to see the face on the side to leave space for other information on the thumbnail. Your tutorial was very helpful for composition, but now I need to figure out how to integrate a specific face. 😅
@controlaltai · 3 months ago
Unfortunately, due to an agreement with the company that owns the InsightFace tech copyright, I cannot publicly create any face swapping tutorial for YouTube. Just search for ReActor and you should find plenty on YouTube. I am only restricted for public education, not paid consultations or private workflows. (For this specific topic.)
@controlaltai · 3 months ago
@@ai_gene Hi, okay, so having the face on the left is very, very easy. You can do this using two ControlNets: DWPose and depth. Make sure the reference image resolution is the same as the generated image, and ensure the person is on the left in the ControlNet image.
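To make the wiring concrete, here is a minimal sketch of that two-ControlNet chain. The function names (load_controlnet, dwpose, midas_depth, apply_controlnet) are stand-ins for the corresponding ComfyUI nodes, and the model file names are just examples:
```python
# Sketch of chaining DWPose + depth ControlNets on the positive conditioning.
# All function names are stand-ins for the corresponding ComfyUI nodes.

positive = clip_text_encode(clip, "portrait of a woman, face on the left")

pose_cn  = load_controlnet("control_v11p_sd15_openpose.pth")   # example file
depth_cn = load_controlnet("control_v11f1p_sd15_depth.pth")    # example file

# The reference image should match the generation resolution, with the
# subject already positioned on the left.
positive = apply_controlnet(positive, pose_cn,  dwpose(ref_image),      strength=1.0)
positive = apply_controlnet(positive, depth_cn, midas_depth(ref_image), strength=0.7)

# 'positive' now carries both control hints and goes to the KSampler as usual.
```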
@Douchebagus · 7 months ago
Does this not work with SDXL? It popped off for 1.5, but it doesn't seem to work for the newer models of SD. Edit: I figured it out, the SDXL model I work with is trained on clip skip -2, and without setting the clip skip to that, the entire node breaks.
@LaughterOnWater · 7 months ago
Wow. This was holding me back. I had it at -1 and it looked like a dog's breakfast. I put it to -2 and suddenly the stars aligned. Thanks for posting this!
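For anyone hitting the same wall: in ComfyUI, clip skip is set with the "CLIP Set Last Layer" node placed between the checkpoint loader and the text encodes. A minimal sketch, with function names standing in for the nodes and an example checkpoint name:
```python
# Sketch of the "clip skip -2" fix in ComfyUI terms; function names are
# stand-ins for the Load Checkpoint, CLIP Set Last Layer, and CLIP Text
# Encode nodes. The checkpoint file name is just an example.

model, clip, vae = load_checkpoint("sdxl_anime_example.safetensors")

clip = clip_set_last_layer(clip, stop_at_clip_layer=-2)  # "clip skip -2"

positive = clip_text_encode(clip, "1girl, red hood, forest")
negative = clip_text_encode(clip, "lowres, bad anatomy")
```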
@tigerfox68 · 9 months ago
Just amazing. Thank you so much for this!
@TouchSomeGrassOnce · 9 months ago
Such great content 👏❤.. this was very helpful.. Thank you so much for creating this tutorial 😊... Looking forward to more such videos
@controlaltai · 9 months ago
Thank you!
@interfactorama · 7 months ago
Unbelievably great tutorial! Blown Away!
@RompinDonkey-bv8qe · 6 months ago
Great workflow and great video. Although, has this process stopped working now? When I add the MultiAreaConditioning node, it doesn't have the grid, and I can't seem to add extra inputs. I saw that it's been abandoned. Anyone else having this issue?
@controlaltai · 6 months ago
Thank you! I just checked the workflow and everything is working as intended. Right-click the node and insert inputs above 1, 2, 3, etc. You then have to connect them, select the appropriate index, and define the width and height for you to see the multi-colored grid. If you are not getting this, then something else is probably wrong; maybe it did not install correctly, or there is some other Comfy conflict.
@RompinDonkey-bv8qe · 6 months ago
@@controlaltai Thank you so much for taking the time to reply. I'm still not sure what the issue was; I was running ComfyUI on Google Colab at the time. It had the manager installed and the Davemane nodes were available; it just didn't look the same (no grid) and I wasn't able to add more inputs. I have now tried a local desktop install and it works fine. Thought I'd let you know in case anyone else asks. Thanks again ❤
@freshlesh3019754 · 4 months ago
This is great. I would love to see this with Stable Cascade and IPAdapter. Being able to have regional control, global style based on an image, and then minute control over a specific area with IPAdapter as well would be about everything I would need in a workflow (maybe with the addition of an upscaler). That would be powerful.
@controlaltai · 4 months ago
Hi, this is CLIP text conditioning and cannot be combined with IPAdapter. You can use this with Cascade, however.
@wagnerfreitas3261 · 1 month ago
brilliant, thanks
@agftun8088 · 6 months ago
Love the voice AI, how did you set it up? I want to use it to hear poetry.
@AlyValley · 2 months ago
Well described and explained, but can this be mixed with InstantID to insert a consistent character into the image, like the portrait workflow, but using InstantID to have the same face and such?
@controlaltai · 2 months ago
This is not meant for that; for consistent characters the workflow is very different. This is basically only used to define the composition: what element goes where. Consistency is very different and comes after the composition. Same face is also different and requires tools like face swap, IPAdapter, etc., or even InstantID as you mentioned. You can create your composition with this, then have the character replaced, and use IC-Light to re-light the whole scene. So technically yes, but it requires multiple workflows.
@AlyValley · 2 months ago
@@controlaltai thank you so much for the brain brightening
@monkeypanda-ib5cz · 8 months ago
This was super helpful. Thanks 🙏
@stijnfastenaekels4035 · 3 months ago
Awesome tutorial, thanks! But I'm unable to find the visual area composition custom node when I try to install it. Was it removed?
@controlaltai · 3 months ago
Thanks, and no, you can find it here: github.com/Davemane42/ComfyUI_Dave_CustomNode
@aybo836 · 8 months ago
Great tutorial, but it's a slightly challenging technique, as you are using an SD 1.5 model to generate a 1024x1024 image. As a result, with every pass we can see that new artifacts are being added to the input image. If you want to increase the detail while remaining loyal to the input image, a better way of doing this is either using a model trained to produce 1024x1024 images or doing a tile upscale. Informative video overall though, thanks!
@controlaltai · 8 months ago
Thank you!! And you are absolutely spot on. The only reason for using the SD 1.5 model in the video is that the elements in the composition are of lower resolution. The SDXL model will give artifacts (checkpoint dependent) if, for example, you choose a very low-resolution box for the sun. Furthermore, you can pass it through an SDXL model for upscaling. If the elements are near SDXL resolution, you can use SDXL for the area composition.
@aybo836 · 8 months ago
Oh sorry my bad, I was specifically talking about the upscaling technique you used, not the area composition😊
@bungei91 · 7 days ago
I did a git clone of Dave42's repo for this project, and I get everything else listed but the conditioning... Is there a bug or something? I looked for bug fixes and branches, but nothing came up to fix this issue.
@bungei91 · 7 days ago
So in my Add Node -> Davemane42 menu, only MultiLatentComposite, ConditioningUpscale, and ConditioningStretch show up; what I actually need, MultiAreaConditioning, does not. All the scripts are in the setup folder, and MultiAreaConditioning is not showing up or doing anything, and I am not getting any errors... I am at a loss.
@controlaltai · 7 days ago
@bungei91 yeah, please check the pinned post. After the latest Comfy update the node does not work and is broken. Looking for an alternative visual solution similar to this.
@icedzinnia · 5 months ago
I am coming back to this and the nodes aren't available anymore, i.e. "Davemane". Perhaps it's because of the massive IPAdapter change; otherwise, it's just not there anymore. :( sad face.
@controlaltai · 5 months ago
Give me some time. Will fix it and post an update.
@controlaltai · 5 months ago
Hi, just checked, and I am confused now. Which workflow are you referring to? The Multi Area Composition does not have an IPAdapter in it.
@qkrxodls3377 · 5 months ago
Assuming there is some issue with the manager: just download the repo from the link provided above in the description into the custom_nodes folder, then restart Comfy. For me, the git clone did not work either, so I just downloaded and pasted it manually. Hope it helps!
@ImmacHn · 9 months ago
It's funny how this method is basically what you do when doing this by hand.
@controlaltai · 9 months ago
Lol, exactly 💯. When I used to draw, I did the same. I was looking for a way to do this with AI, so basically I created my own workflow technique.
@eucharistenjoyer · 6 months ago
Your videos are great, really in-depth and clear. One question though: is this MultiAreaConditioning similar to GLIGEN?
@controlaltai · 6 months ago
Thanks! Yes, it's similar. In some cases GLIGEN may be superior, as it understands the entirety of the image. Although I couldn't find SDXL compatibility, hence multi-area composition.
@eucharistenjoyer · 6 months ago
@@controlaltai Thank you for the answer. I still use 1.5 most of the time (4GB VRAM), but I wish there was a ComfyUI node with a GUI for GLIGEN similar to MultiAreaConditioning. Doing everything by numbers is really cumbersome.
@controlaltai · 6 months ago
It is pre-built into Comfy: comfyanonymous.github.io/ComfyUI_examples/gligen/
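For the "by numbers" route, a rough sketch of that built-in GLIGEN flow; the function names stand in for the GLIGEN Loader and GLIGEN Textbox Apply nodes, the model file name is just an example, and the argument names are illustrative:
```python
# Rough sketch of ComfyUI's built-in GLIGEN flow, positioned by numbers.
# Function names are stand-ins for the GLIGEN nodes; exact signatures may
# differ between versions.

gligen = load_gligen("gligen_sd14_textbox_pruned.safetensors")  # example file

cond = clip_text_encode(clip, "a forest clearing at sunset")

# Pin "red riding hood" into a 256x512 pixel box with its top-left corner
# at (64, 128).
cond = gligen_textbox_apply(cond, clip, gligen, "red riding hood",
                            width=256, height=512, x=64, y=128)
```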
@othoapproto9603 · 9 months ago
Thanks, that was wonderful. Q: can you replace the props with images as a source?
@controlaltai · 9 months ago
Thanks. Probably, but the workflow would be different, I guess. I have to work on it to see how it can be integrated.
@othoapproto9603 · 9 months ago
@@controlaltai Cool, I really like the pace and explanations throughout your videos. So much to learn; thanks again, subscribing is a no-brainer.
@ImAlecPonce · 9 months ago
Thanks!!! I'm going to try it out now.
@controlaltai · 9 months ago
Great!! If at any time you need help let me know. Happy to help.
@8561 · 8 months ago
Great video! Random question: at 32:19, how did you queue the prompt such that the seed changed quickly and the prompt instantly stopped queuing? Also, how would you go about re-applying a FaceID or ReActor face after it changes in the upscales? Do you face swap at the upscaled pixels, or is that not advisable?
@controlaltai · 8 months ago
Thank You!! I change the "control after generate" to randomize. When you do that and hit Queue Prompt, it randomizes the seed instantly. Then it will stop; you have to press Queue Prompt again for it to use the random seed. Basically it's like this: seed 1234 (fixed); change "control after generate" to random; queue the prompt; it changes the seed to "4567" (random); then change random back to fixed so that the seed does not change at queue prompt. Now when I queue the prompt again, it will regenerate with 4567 and keep that seed, as "control after generate" is fixed.

For applying the face: first make all the necessary changes, like adding details etc., without upscaling. This should be done at 1024 or SD 1.5 resolution. Apply the face swap, then just upscale image-to-image without denoising too much. Face swap will work up to a certain resolution, after which it won't be that clear. It's advisable to do the face swap, then upscale only (no further adding of details). You can do that upscale via Ultimate SD Upscale; change the mode from linear to none. Upscale 1.5x to 2x at a time only. This will upscale without adding any details.
@8561 · 8 months ago
Thanks for the clear response! Looking forward to more tuts@@controlaltai
@krio_gen · 4 months ago
Thank you very much!
@jasonkaehler4582 · 9 months ago
Thanks for this! But when I add my second image, it renders garbage where the character is supposed to be (background 0 renders fine). Any idea how to fix this?
@controlaltai · 9 months ago
Hi, no problem. I need some details for troubleshooting. There are multiple workflows in the video; which one are you trying out exactly? What is the checkpoint? Confirm the latent resolution and the LoRA or ControlNet being used. Or are you just doing one BG and one character, meaning two prompts?
@jasonkaehler4582 · 9 months ago
hi, thanks for the reply! I did figure out my problem.
@lenny_Videos · 10 months ago
Thanks for the great tutorial. Where can I find the json files for the workflow?
@controlaltai · 10 months ago
Hi, they are posted for YouTube channel memberships.
@lenny_Videos · 10 months ago
@@controlaltai I see no option for YouTube membership on your channel...
@controlaltai · 10 months ago
Hi, Here: www.youtube.com/@controlaltai/join
@harishsuresh8707 · 8 months ago
Thanks for making such a detailed workflow with a clear explanation. I've been trying to use this workflow with SDXL models, but it doesn't seem to work. Do the models/workflows have any architecture-specific limitations? For example: SD 1.5 only, SD 2.0+ only, SDXL only, turbo models only?
@controlaltai · 8 months ago
Thank you! No, the workflows don't have any architecture limitations. The concept should work with any, as long as the elements used within the architecture are compatible with each other. For example, when using SDXL, use an SDXL ControlNet.
@controlaltai · 8 months ago
Send me your workflow via email and mention the checkpoint you are using. I will have a look at what is wrong. When you say it does not work, what exactly is not working? Please elaborate.
@CrankAlexx · 1 month ago
hey, I downloaded the node in 2024 by copying the GitHub link into the manager, but when I add the MultiAreaConditioning node it looks different from yours. Is there any way to get it to work properly again?
@controlaltai · 1 month ago
Hi, there are two nodes in the custom node pack: one is for latent composition, one is for area conditioning. Select the area one and double check. Secondly, the node will look different because it changes when you connect the clip inputs to it.
@CrankAlexx · 1 month ago
@@controlaltai hey, thank you a lot for your reply
@ambtehrani · 2 months ago
This node has been abandoned and doesn't work anymore. Would you please suggest a suitable replacement?
@controlaltai · 2 months ago
There is no replacement. You can still do the area conditioning in ComfyUI without using that node; only you won't have the visual view of the x and y coordinates. And the node works; you should know how to manually install from GitHub. You can download the zip or git clone and do a manual install.
@RonnieMirands · 7 months ago
Just a question: can I use this area composition for img2img? I've been searching for this and can't find it. By the way, thanks a lot for sharing this wonderful tutorial!
@controlaltai · 7 months ago
Not possible with image-to-image. However, you can use latent composite and blend a character (text2image) into an existing image.
@ignacionaveran154 · 4 months ago
Thank you very much for your video.
@NgocNguyen-ze5yj · 4 months ago
Hi there, do you have a video for a workflow to change backgrounds and combine subjects and backgrounds? Thanks.
@controlaltai · 4 months ago
Not exactly, but a custom workflow can be designed to do just that. To do it accurately, there are multiple ways, depending on how perfect you want the blend. If you want the lighting on the subject to match the BG, etc., the workflow becomes complicated. Maybe this helps, as some elements from here could be used to select and remove subjects: ComfyUI: Yolo World, Inpainting, Outpainting (Workflow Tutorial) kzbin.info/www/bejne/rXbHYqqGoah1l7M
@NgocNguyen-ze5yj · 4 months ago
@@controlaltai could you please make a series about that? I need it for work: remove background, combine subject with another background, lighting on subjects and BG to make it realistic... Many thanks.
@controlaltai · 4 months ago
@NgocNguyen-ze5yj can't promise, but will try. Some things are already in the pipeline; will add this to it.
@NgocNguyen-ze5yj · 4 months ago
@@controlaltai yes, thank you in advance, love your tutorials (detailed and lovely)
@asmedeus448 · 1 month ago
I tried to use MultiAreaConditioning with a LoRA on each prompt. The output image generates independent crops, which is not really great.
@controlaltai · 1 month ago
Can you share your workflow and LoRA via email? I can have a look. Email me at mail @ controlaltai. com (without spaces).
@asmedeus448 · 1 month ago
@@controlaltai hi, thanks for the fast reply. I'll send you the workflow with the prompt tomorrow.
@calumyuill · 10 months ago
Thanks for the tutorial. Can you tell me which custom node enables the 'Anime Lineart' node @22:40?
@controlaltai · 10 months ago
Hi, welcome and thank you. In Comfy Manager, search for controlnet and install ComfyUI's ControlNet Auxiliary Preprocessors by Fannovel16. Manual install link: github.com/Fannovel16/comfyui_controlnet_aux
@calumyuill · 10 months ago
@@controlaltai thank you for your reply. I did already install this using the manager but I still don't get the option to create the 'Anime Lineart' node, any ideas?
@controlaltai · 10 months ago
@calumyuill have you restarted Comfy from the command prompt (close and reopen)? Search for 'preprocessor' and you should see all the nodes for it.
@calumyuill · 10 months ago
@@controlaltai thanks again, yes I have restarted from command prompt but when I search for preprocessor I get zero results 😞
@controlaltai · 10 months ago
@calumyuill do one thing: click on Update ComfyUI and check if there is any Comfy update. Close and restart, then try Update All and check if the latest version of the ControlNet custom node is installed; close and restart again.
@hamid2688 · 9 months ago
Proud of you, you really nailed it. I can say this is one of the best quality videos on YouTube on how to achieve a quality picture with AI!
@ReddyLee1 · 1 month ago
I can only find MultiLatentComposite; where is the conditioning node?
@controlaltai · 1 month ago
Okay, just checked it: it's gone with the latest version of Comfy. Doesn't work. I will probably have to make a new custom node, or a new tutorial avoiding this completely and using native ComfyUI multi-area conditioning. Will make a pinned post in the comments.
@ReddyLee1 · 1 month ago
@@controlaltai MultiLatentComposite still works, but it seems to only modify the object area, not the background.
@controlaltai · 1 month ago
Yeah, the node is broken after the Comfy update. I have to make a new tutorial on how to use area composition without the node. It's built into Comfy, but without visualization.
@YaboosatG · 7 months ago
Great, Malihe. Do you do custom workflows? How can we reach out?
@controlaltai · 7 months ago
Thank You! The team does take up custom project work. You can reach out to my colleague "Gaurav Seth" via Discord ("g.seth") or mail ("mail @ controlaltai . com", without spaces). He handles all custom workflow projects.
@jjhon8089 · 9 months ago
great tutorial
@headscout · 5 months ago
Can I use the same method to generate 3 different human x animal characters? Say, one for a fox girl, a second for a cat girl, and the last for a demon girl in the same frame?
@controlaltai · 5 months ago
Since the prompts are isolated, yes. However, the subjects would be non-overlapping and non-interactive. To have them overlap or interact, a multi-latent composite has to be used, which is totally different.
@rei6477 · 5 months ago
@@controlaltai Hi! (*^▽^*) so, if I understand correctly, this workflow wouldn't allow me to create a scene where, for example, 『Character 1 is in the foreground with their back turned, while Character 2 is in the background, partially overlapped by Character 1, so that only Character 2's face is visible and their body is hidden. 』Is that right? I tried to research more about multi latent composite, but I couldn't find a clear explanation. Could you possibly point me to any articles or videos that explain multi latent composite technique in more detail?
@controlaltai · 5 months ago
Hi, yes, you understood correctly. Latent composite is the same as this, but we do it via K-sampling. Unfortunately, there are no articles for it. If you go to the ComfyUI GitHub page for workflow examples, you will get a workflow sample from Comfy itself for latent composition. I have not done a tutorial for this, but I have used multi-latent composition in both of the Stable Video Diffusion videos. The concept is similar; that's for video, and here you have to do it for images.
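At its core, latent compositing is just pasting one latent tensor over another at an offset before further sampling. A rough sketch of the idea (an illustration, not the node's exact code):
```python
import torch

# Rough sketch of what a latent composite does: paste one latent over another
# at a pixel offset, converted to latent units (/8). Illustration only.

def latent_composite(samples_to: torch.Tensor, samples_from: torch.Tensor,
                     x: int, y: int) -> torch.Tensor:
    out = samples_to.clone()
    x, y = x // 8, y // 8
    h = min(samples_from.shape[2], out.shape[2] - y)
    w = min(samples_from.shape[3], out.shape[3] - x)
    out[:, :, y:y + h, x:x + w] = samples_from[:, :, :h, :w]
    return out

# Typical overlap flow: composite the foreground character's latent over the
# background latent, then run the result through another KSampler at a
# moderate denoise (e.g. 0.5) so the seam blends.
```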
@rei6477 · 5 months ago
@@controlaltai Thank you so much for your reply! I think I should be able to put together a simple workflow using latent composition and K-samplers on my own. I'm currently super busy with creating Loras and stuff, but I'm definitely planning to give it a try once things settle down a bit. And of course, I'll also experiment with the workflow from your video. Thanks again!
@Gabriecielo · 9 months ago
Thanks for the tutorial first. But after I installed Dave's nodes, I still could not find them in the "Add nodes" menu, nor by search. Any possible issues?
@controlaltai · 9 months ago
Right-click - Add Nodes - Davemane42. If it's not there, close Comfy and restart (browser and command prompt both). If it's still not there, update everything: Comfy and the custom nodes. Close and restart again.
@Gabriecielo · 9 months ago
@@controlaltai Thanks, tried but still not available. Then I went to GitHub and found someone who reported the same issue: you need to download and overwrite the folder instead of using git clone. It worked now.
@screenrec · 10 months ago
Thank you. ❤
@godpunisher · 5 months ago
Thank you so much for such a nice tutorial 🤩
@LolyphotoWedding · 1 month ago
Updated: ComfyUI 2445 [369f45] (2024-08-01). The MultiAreaConditioning node cannot be used. Do you have any way to fix this problem?
@controlaltai · 1 month ago
I had no issues with it. Will recheck with the last update. What error are you getting?
@controlaltai · 1 month ago
Hi, just tried it with everything updated. No issues here. You have to tell me what exactly the issue is.
@LolyphotoWedding · 1 month ago
@@controlaltai After updating to the new version, the node has a red flag, and I can't fix the error. I can't find the MultiAreaConditioning node even though I've deleted and reinstalled it. I hope you can suggest troubleshooting instructions. Thank you!
@controlaltai · 1 month ago
You have to tell me the cmd error. When you delete and reinstall, you have to close cmd and restart Comfy fully. If you are still facing issues, email me the error at mail @ controlaltai . com (without spaces).
@carlosmeza4478 · 1 month ago
@@controlaltai @LolyphotoWedding OK, I can confirm the issue, but there is a twist. First, the custom node has not been updated for a year, and it seems it doesn't show up in the latest version of ComfyUI portable (if you look for it, it won't show up in the custom nodes manager to install). People on Civitai also comment that it is not working for them, and the creator has gone silent. If you clone the repository from GitHub (the one that is 14 MB), it also doesn't show in the installed custom nodes in the Comfy Manager... Now the twist: even if it doesn't show in the manager, the node can be used without any issues. So, in a nutshell, it works as intended, but the Comfy Manager doesn't acknowledge its existence. Hope it helps XD
@CAPSLOCK_USER · 4 months ago
Great tutorial!
@controlaltai · 4 months ago
Thank you!!
@aloto1240 · 9 months ago
Can you do this but loading images instead of just prompts?
@controlaltai · 9 months ago
Nice idea, I think so. But the whole workflow will be different. Based on what you want exactly, I have to explore it further.
@aloto1240 · 9 months ago
@@controlaltai thanks for the reply. It would be interesting if it can work. I'm new to ComfyUI and Stable Diffusion, so I've been watching lots of content looking for the perfect workflow; this looks great! Using images, or combining images with prompts in this manner, would be awesome. Thanks for all the great content.
@controlaltai · 9 months ago
Will put it on the todo list.
@TheRMartz12 · 9 months ago
I couldn't find the 4x_foolhardy_remacri upscaler. It seems it might be an old or discontinued upscaler, and I couldn't find it in safetensors format. Do you know where to find it, or a safer alternative?
@controlaltai · 9 months ago
Hi, it's there on Hugging Face. Here is the Link: huggingface.co/uwg/upscaler/tree/main/ESRGAN
@TheRMartz12 · 9 months ago
Thank you so much. I was so focused on trying to find the upscaler, because it was preventing me from moving forward in the guide, that I forgot to mention that you created an amazing tutorial and I'm very grateful for this content! I took a look and noticed that all the upscalers come in .pth format, both on Hugging Face and OpenModelDB. I'm new to the world of generative AI, but I do worry that using pickles might leave me at risk, since I don't know how to detect if any of them are infected. Do you have any thoughts on this? Is Hugging Face generally a site to be trusted, even for files that are not safetensors? @@controlaltai I would really appreciate your insight on this. Thank you again! 🙏
@controlaltai · 9 months ago
Welcome, and thank you! Basically, when I started, I had the same doubts. Normally, safetensors and pth files are safe; .bin files can be risky. For example, when I was doing the A1111 tutorial for IPAdapter, I verified from 2 to 3 places and GitHub, as all IPAdapter files at that time were .bin; now there are safetensors. Hugging Face is a big site, but safe. Don't download any .bin unless some trusted source recommends it. Pth and safetensors should be fine. Also, it is not necessary to use the same upscaler. Try this: open Comfy Manager, go to Install Models, search for upscaler, install any one, and use that. If you are happy with the result, use that. Whatever you find within Comfy Manager is safe and should not cause any issues.
@TheRMartz12 · 9 months ago
Ahhh, sadly workflow number two didn't work for me. No matter how I tried, the image traced from ControlNet wouldn't be adapted to the generated image :( Maybe I was asking for something too complicated and abstract on my first try, but for the second attempt I just tried adding a steampunk tower to my composition, and still it would blend into the background almost completely in the first generation, and in the upscales it was completely gone :( The only thing I had different from the tutorial is that my anime lineart preprocessor is in .pth format and the ControlNet model came in safetensors directly from the Comfy Manager, but I did check and they are named the same as yours. 😢
@TheRMartz12 · 9 months ago
Also, at 24:03, if I make those changes to KSampler 2 and the Ultimate SD Upscale, both of the last two images look soooo bad. Why do yours look good? In the first workflow, I remember your images looking bad as well, and you had to change them back to what you had before making those tweaks at 24:03; that's what made the images look good for me.
@controlaltai · 9 months ago
Hi, the second workflow can be tricky because of the complexity. This might help you:
1. The Multi Area Conditioning strength (of the selected control area subject only). If the current number doesn't work, try increasing it a further 0.5-1.0 at a time. Keep ControlNet at 1 only.
2. In Ultimate SD Upscale (workflow 2), the mode type is set to none, which means image-to-image upscale only, without adding anything. This mode ignores the denoise value set here. What "none" does is take the image output in pixel space and double it; no denoising.
3. If the first KSampler image is giving you proper results after increasing the Multi Area strength, and the result becomes worse in the second preview, do not pass it through the second KSampler. Delete/disable the Upscale Latent node, the second KSampler, and the second VAE Decode Tiled node.
4. Try replacing the VAE Decode Tiled node with the simple VAE Decode node.
5. Make sure clip skip is set as per the checkpoint specifications. If the trained checkpoint has no mention of a recommended clip skip, then delete the node and connect everything directly to the model loader. Normally they have clip skip -1/-2 or none.
6. Make sure the ControlNet model and the preprocessor match.
Lastly, this is highly checkpoint/prompt dependent. Save the seed and try random seeds to see if different results pop up. Some checkpoints are trained with a specific sampler, scheduler, and negative prompts for optimal results. Let me know if this helps.
@LucasMiranda2711 · 1 month ago
Doesn't seem to work on new versions of ComfyUI, unfortunately.
@controlaltai · 1 month ago
I have just tested this after your comment, like 2 minutes back, on the latest version of Comfy; as a matter of fact, my Comfy uses the nightly PyTorch version. Everything works as intended. I just ran all the workflows to double check.
@LucasMiranda2711 · 1 month ago
@@controlaltai the project from davemane is archived on his GitHub and the node doesn't even appear in the ComfyUI filter. Trying to install it from GitHub, the screen freezes with exceptions when creating the node.
@controlaltai · 1 month ago
The one that freezes is MultiLatentComposite, not area conditioning. Check which node you are adding; he has two nodes, and one always had problems. The workflow doesn't use latent composition.
@NWO_ILLUMINATUS · 7 months ago
Even following your exact process (model, sampler, scheduler, etc.), my pic ALWAYS has too much noise once I add the conditioner. I have to lower the strength in each index, with a different strength for each index. Any thoughts?
@controlaltai · 7 months ago
There must be some setting within the nodes that was overlooked; just guessing, though. Can you email me the workflow you made? I can have a look at it and figure out what's going on: mail @ controlaltai. com (without spaces)
@NWO_ILLUMINATUS · 7 months ago
I definitely will, as soon as I have a chance (single daddy, work full time, blah blah blah...). Though I may have figured out my issue... 4 GB VRAM, lol. You mentioned that we CAN use a tiled decode, but that it may leave artifacts. I usually get the message "switching to tiled because only 4 GB VRAM or less" (paraphrase). Safe to assume it's my old @55 GTX 1050 Ti? @@controlaltai
@controlaltai · 7 months ago
@1111Scorpitarius for 4 GB, yes; 6 to 8 GB VRAM is recommended. Try a lower resolution. Also, first try without the upscale. That should help.
@NWO_ILLUMINATUS · 7 months ago
Thank you for the prompt replies, by the way. Earned you a sub, and I'll watch through all your Comfy vids.
@RompinDonkey-bv8qe · 6 months ago
@@NWO_ILLUMINATUS just weighing in here, sorry if you are already aware, but there's a software called Fooocus (yup, 3 o's) that's really good for those with lower specs. I ran it perfectly on a 1060 Ti (I think 4 GB VRAM, lol). It's not quite as much fun as Comfy, IMO (I just like the node-based format), but I still had a lot of fun with it once I got my head around the settings. There are a good few more videos out there for it now too.
@tailongjin-yx3ki · 5 months ago
May I add multiple LoRAs to this workflow?
@controlaltai · 5 months ago
Yeah there are no issues with that.
@tailongjin-yx3ki · 5 months ago
@@controlaltai I mean parallel LoRAs, not series LoRAs, because I've trained different roles with LoRAs; I wonder whether I can fuse them in one photo.
@controlaltai · 5 months ago
@@tailongjin-yx3ki well, that's a different workflow and has nothing to do with area composition. To answer your question: yes, you can blend in multiple LoRAs. Use the LoRA node and daisy-chain them, or use a custom node from RG Three, Lora Stacker, where in one node you can add multiple LoRAs with different weights.
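A minimal sketch of the daisy chain, assuming load_lora stands in for ComfyUI's Load LoRA node and the file names are just examples:
```python
# Sketch of daisy-chaining LoRAs: each loader takes the previous model/clip
# pair and returns patched ones. Function and file names are stand-ins.

model, clip, vae = load_checkpoint("sd15_base_example.safetensors")

model, clip = load_lora(model, clip, "fox_girl.safetensors",
                        strength_model=0.8, strength_clip=0.8)
model, clip = load_lora(model, clip, "cat_girl.safetensors",
                        strength_model=0.7, strength_clip=0.7)
model, clip = load_lora(model, clip, "demon_girl.safetensors",
                        strength_model=0.7, strength_clip=0.7)

# All three LoRAs end up active on the same model in parallel; keeping each
# character in its own region still comes from the area conditioning itself.
```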
@cXrisp · 2 months ago
Thanks for the info, but that music clip looping over and over and over and over for the entire video gets too annoying.
@manolomaru · 5 months ago
+1 Super video! ✨👌😎🙂😎👍✨
@controlaltai · 5 months ago
Thank you!
@TailspinMedia · 6 months ago
Does this work with SDXL models, or just 1.5?
@controlaltai · 6 months ago
Yes it works with SDXL
@yifanjourney · 1 month ago
Omgggg it's good
@spraygospel5539 · 7 months ago
Where can I download the ControlNet for the anime lineart?
@spraygospel5539 · 7 months ago
It'd be great if someone could give me the ControlNet library.
@controlaltai · 7 months ago
Go to Comfy Manager - Custom Nodes - and install the ControlNet Auxiliary Preprocessors node. Whenever you use any ControlNet preprocessor, it will download the preprocessor model. The models can be downloaded from here:
ControlNet Control LoRA models: huggingface.co/stabilityai/control-lora/tree/main
ControlNet models SD 1.5: huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
ControlNet models SDXL: huggingface.co/lllyasviel/sd_control_collection/tree/main
@Pyugles · 2 months ago
Great tutorial, but it looks like the creator of the visual area conditioning/latent composition node hasn't updated it, and it's completely unusable now.
@controlaltai · 2 months ago
It still works, you have to do a manual install. No issues with the latest version of comfy portable.
@Pyugles · 2 months ago
@@controlaltai Oh! I just did the manual install and it works, tyvm!
@YujingWang-v1d · 9 months ago
Followed the tutorial, but the result is not quite the same. The red riding hood and the house are not sharp, even when the strength is increased...
@controlaltai · 9 months ago
Hi, this is highly checkpoint/seed dependent. If everything is the same as per the tutorial, try random seeds. Alternatively, you can send me your workflow, and I can have a look at why you are not getting the desired results.
@user-jx7bh1lx4q · 2 months ago
Need an alternative for MultiAreaConditioning.
@controlaltai · 2 months ago
There is none; do a manual install. It still works.
@user-jx7bh1lx4q · 2 months ago
@@controlaltai I haven't seen a manual installation option in the OpenArt cloud.
@controlaltai · 2 months ago
Ermm... I don't know what OpenArt cloud is. But if you have a local install, you can git clone the repository. If you are using a cloud service, then it would be best to ask them how to get a GitHub repository installed in the Comfy folder.
@B4zing4 · 6 months ago
Where are the workflows?
@controlaltai · 6 months ago
Mentioned it in the description.
@B4zing4 · 6 months ago
@@controlaltai Thanks, will these workflows also work with other models?
@controlaltai · 6 months ago
If you are referring to the area composition, then you can use any checkpoint model.
@yurigrishov3333 · 8 months ago
It just freezes the system when I make the index more than 1.
@controlaltai · 8 months ago
Make sure you are using the correct node. The freeze happens on the MultiLatentComposite node. The correct node would be the MultiAreaConditioning node; that works fine.
@yurigrishov3333 · 8 months ago
@@controlaltai Yes, they are two different nodes. But the index can only be 0 or 1.
@controlaltai · 8 months ago
@yurigrishov3333 If you have more than two connected, the index should go higher than 0-1.
@yurigrishov3333 · 8 months ago
@@controlaltai Oh, I had skipped the "insert inputs" step. Now it's fine.
@britonx · 7 months ago
Thanks Sister !!
@CORExSAM · 1 month ago
you sound like some character from Game of Thrones fr
@controlaltai · 1 month ago
It's an AI voice. And no, it's no one from that show; it was just randomly picked, and I stayed with it. It works better for presentation-style video tutorials.
@CORExSAM · 1 month ago
@@controlaltai oh sorry, never mind, I commented on the wrong video LOL
@claylgruber7994 · 5 months ago
Sister, your name is Turkish. Are you Turkish?
@controlaltai · 5 months ago
I translated your message. Answer is no. Here is the SDXL + Refiner Workflow: drive.google.com/file/d/18zQNC-ejeJ021xcTGJcF3YB-lC7DbO6f/view?usp=sharing
@claylgruber7994 · 5 months ago
@@controlaltai thanks
@Mehdi0montahw · 10 months ago
We require a professional episode on converting images to line art while completely removing the black and gray parts.
@controlaltai · 10 months ago
I will try to use an image-to-line-art example in one of the future tutorials I make. I have to learn how to do that effectively myself first. If I include it in a video, I will notify you by replying to this comment. Give me some time, please. Thanks.
@Mehdi0montahw · 10 months ago
This is the strongest video I have found so far that explains what I want, if you want to use it and complete the part about removing the gray and black areas with more precise lines. Link: Perfect Line Art For Comic Books: Stable Diffusion Tutorial kzbin.info/www/bejne/i4PSfYeOeayGn8k&ab_channel=SebastianTorres
@controlaltai · 10 months ago
I think I have something better, but not for A1111. Will let you know shortly. Am working on it.
@Mehdi0montahw · 10 months ago
@@controlaltai Anything that achieves the goal, but I hope it is free.
@controlaltai · 10 months ago
Hi, please check this. Let me know if it is fine or not. The workflow is complicated, but all free and rock solid. You just have to adjust 3-4 settings values to get the desired image: darker lines, fewer details, more details, etc. The video should be online by tomorrow. drive.google.com/file/d/1AMo9rUWRraTZgsYiCnmURQXMfglO1DbJ/view?usp=sharing
@jerryTang · 9 months ago
Looks like the IPAdapter attention mask has more control than this.
@controlaltai · 9 months ago
Hi, the IPAdapter mask was a recent update, not in the original release. However, that control is image-to-image; I need to make a video on the IPAdapter mask for Comfy. Area composition, however, is text-to-image. What would be interesting is combining both: get a controlled text-to-image composition, then take a reference image and add it to this using IPAdapter masking. Let me know if you are interested in something like that and I will give it a shot.
@onewiththefreaks3664 · 9 months ago
@@controlaltai I found your very helpful and interesting video because I am trying to build this exact workflow you're mentioning here. I did not know about the visual area conditioning node, so thank you very much for all your time and effort!