The wizard says this isn't magic but creates pure magic anyway.
@latentvision8 ай бұрын
Any sufficiently advanced technology is indistinguishable from magic
@NanaSun9348 ай бұрын
I am so thankful for your channel. I have watched countless YouTube videos about ComfyUI, but yours are definitely among the clearest, with a deep understanding of the subject. I hardly EVER leave comments, but I felt the need to write this one. I was watching and rewatching your videos and following along. It's so much fun. Thank you so much!
@tsentenari43533 ай бұрын
This! There are Instagram follower counts, and then there is "the number of people who feel deeply thankful toward you".
@jayd89358 ай бұрын
I think it was a blessing that I found your channel. These workflows spark my creativity so much.
@DataMysterium8 ай бұрын
Awesome as always. Thank you for sharing those amazing nodes with us.
@xpecto79517 ай бұрын
Please continue doing more informative videos like you always do; everyone else just shows prepared workflows, but you actually show how to build them. Can't thank you enough.
@GggggQqqqqq12348 ай бұрын
Thank you.
@jcboisvert14468 ай бұрын
Thanks
@DarkGrayFantasy8 ай бұрын
Amazing stuff as always Matt3o! Can't wait for the next IPAv2 stuff you got going on!
@alessandrorusso5838 ай бұрын
Great video as always. So many interesting things. Thank you, as always, for the time you give to the community.
@petnebАй бұрын
Those are some service-minded nodes you have created for us. Thank you so much.
@kf_calisthenics8 ай бұрын
Would love a video of you going in depth on the development and programming side of things!
@latentvision8 ай бұрын
maaaaybe 😄
@flisbonwlove8 ай бұрын
Mr. Spinelli always delivering magic!! Thanks and keep the superb work 👏👏🙌🙌
@d4n878 ай бұрын
Great matt3o, your nodes are spectacular! 😁👍 This workflow in particular looks really interesting and adaptable to the quirks of the various generations.
@mattm73198 ай бұрын
the logic you've used in making these nodes makes it so much easier! thank you!
@contrarian88708 ай бұрын
Great stuff, as always! One thing: the two girls were supposed to be "shopping" and the cat/tiger were supposed to be "playing". The subjects transferred properly (clean separation) but there's no trace of either "shopping" or "playing" in the result.
@latentvision8 ай бұрын
the first word in all prompts is "closeup", which basically overrides anything else in the prompt
@yql-dn1ob8 ай бұрын
Amazing work! It really improved the usability of the IPAdapter!
@johndebattista-q3e8 ай бұрын
Thanks Matteo, you're doing a great job.
@Foolsjoker8 ай бұрын
This is going to be powerful. Good work Mat3o!
@marcos13vinicius118 ай бұрын
it's going to help a million times with my personal project!! thank you
@aivideos3228 ай бұрын
You should be proud of your work. Thanks for all you do. I was working on my video workflow with masked IPAdapters for multiple people... this will SOOOOOO make things easier.
@11011Owl8 ай бұрын
The most useful videos about ComfyUI. Thank you SO MUCH, I'm seriously excited about how cool this is.
@jccluaviz8 ай бұрын
Thank you, thank you, thank you. Great work, my friend. Another masterpiece. Really appreciated.
@latentvision8 ай бұрын
glad to help
@davidb80578 ай бұрын
Brilliant stuff, thanks again, Matteo. Can't wait for the FaceID nodes to be brought to this workflow.
@Skyn3tD1dN0th1ngWr0ng27 күн бұрын
It seems the nondescript anime girl holding the coffee mug isn't a recurring character anymore :'c I just realized with this video that most of Dr.Lt.Data's videos (which are great resources, I'm not comparing) are months old and fully "outdated"... Less than a year, and all those guides and workflows will only confuse new users (at least IPAdapter users). Thanks for keeping us up to date, Matteo ☕ I finally understood regional prompting from IPAdapter's perspective; it has been a long month.
@ttul8 ай бұрын
Wow, this is so insanely cool. I can’t wait to play with it, Matteo.
@Kentel_AI8 ай бұрын
Thanks again for the great work.
@allhailthealgorithm8 ай бұрын
Amazing, thanks again for all your hard work!
@skycladsquirrel8 ай бұрын
Great video! Thank you for all your hard work!
@Ritesh-PatelАй бұрын
I hardly ever comment on YouTube. But dang, you are next level. Thank you for this and the other videos.
@WhySoBroke8 ай бұрын
An instamazing day when Maestro Latente spills his magical brilliance!!
@volli19798 ай бұрын
6:05 "oh shit, this is so cool!" - nothing to add.
@musicandhappinessbyjo7958 ай бұрын
The result looks pretty amazing. Could you maybe do a tutorial combining this with ControlNet (not sure if that's possible), just so we can also control the position of the characters?
@aliyilmaz8528 ай бұрын
Thanks again for the great effort and explanation, Matteo. You are amazing! Quick question: is it possible to use ControlNets with IPAdapter Regional Conditioning?
@latentvision8 ай бұрын
yes! absolutely!
@Ulayo8 ай бұрын
Nice! More nodes to play with!
@context_eidolon_music7 ай бұрын
Thanks for all your hard work and genius!
@latentvision7 ай бұрын
just doing my part
@Showdonttell-hq1dk8 ай бұрын
Once again, it's simply wonderful! During a few tests, I noticed that the "Mask From RGB" node needs very bright colors to work. A slightly darker green and it no longer has any effect. Everything else produced cool results on the first try. Thanks for all the work! And I'm just about to follow your ComfyUI app tutorial video to make one myself.
@latentvision8 ай бұрын
you can set thresholds for each color, so you can technically grab any shade
@Showdonttell-hq1dk8 ай бұрын
@@latentvision Of course I tried that. But it worked wonderfully with bright colors. It's no big deal. As I said, thanks for the great work! :)
@latentvision8 ай бұрын
@@Showdonttell-hq1dk using black or white and the threshold you can technically get any color. But you are probably better off using the Mask From Segmentation node
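Matteo's point about per-color thresholds can be sketched in plain numpy. This is an illustrative re-implementation, not the actual code of the Mask From RGB/CMY/BW node: a per-channel tolerance lets a darker shade of green still match the target color.

```python
import numpy as np

def mask_from_color(img, color, threshold):
    """Boolean mask of pixels within `threshold` of `color`, per channel.
    img: (H, W, 3) uint8 array; color: (r, g, b) tuple; threshold: 0-255."""
    diff = np.abs(img.astype(np.int16) - np.asarray(color, dtype=np.int16))
    return (diff <= threshold).all(axis=-1)

img = np.zeros((2, 2, 3), dtype=np.uint8)  # two black pixels plus:
img[0, 0] = (0, 255, 0)                    # pure green
img[0, 1] = (0, 180, 0)                    # darker green

# A tight threshold only catches pure green; a looser one catches both shades.
assert mask_from_color(img, (0, 255, 0), 10)[0, 0]
assert not mask_from_color(img, (0, 255, 0), 10)[0, 1]
assert mask_from_color(img, (0, 255, 0), 80)[0, 1]
```

The same idea explains the comment above: a darker green differs from pure green by 75 in the G channel, so it only matches once the threshold is raised.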
@mycelianotyours19808 ай бұрын
Thank you for everything!
@rawkeh8 ай бұрын
8:01 "This is not magic," says the wizard
@latentvision8 ай бұрын
I swear it is not :P
@35wangfeng8 ай бұрын
You rock!!!!! Thanks for the amazing job!!!!
@premium26818 ай бұрын
Angel Mateo came down from latent space again to teach the world his magic
@heranzhou69768 ай бұрын
Wonderful. May I ask how I can insert FaceID into this workflow? Right now I get this error: Error occurred when executing IPAdapterFromParams: InsightFace: No face detected.
@jacekfr32528 ай бұрын
"oh shit, this is so cool"
@renegat5528 ай бұрын
great work. thanks a lot!
@autonomousreviews25218 ай бұрын
Excellent! Thank you for your work and for sharing :)
@ojciecvaader92798 ай бұрын
I really love your work
@leolis785 ай бұрын
Hi Matteo, thanks for your contributions to the community. I am trying to use attention masking to composite product photos. The idea is to define which zone of the image each element occupies. For example, in a photo of a wine, define the location of the bottle and the location of the props, such as a wine glass, a bunch of grapes, a corkscrew, etc. But the attention masking technique is not giving me good results in SDXL. Is it only for SD1.5? Do you think it is a good technique for this kind of product-photography composition, or is there a better one? Thanks in advance for your help! 😃😃😃
@latentvision5 ай бұрын
this is too complex to answer in a YT comment. It depends on the size of the props. You probably need to upscale the image and work with either inpainting or regional prompting. Try asking on my Discord server
@Mika433448 ай бұрын
Great work as always🎉
@kaiserscharrman8 ай бұрын
really really cool addition. thanks
@胡文瀚-o6y5 ай бұрын
Why does it say "ClipVision model not found" when I use it?
@digidope8 ай бұрын
Just wow! Thanks a lot again!
@pfbeast8 ай бұрын
❤❤❤ as always best tutorial
@GggggQqqqqq12348 ай бұрын
Thank you!
@AnotherPlace8 ай бұрын
Continue creating magic senpai!! ❤️
@nrpacb8 ай бұрын
I learned something new, happy! I want to ask: when can we get a tutorial on replacing furniture indoors, or something like that?
@latentvision8 ай бұрын
yeah that would be very interesting... I'll think about it
@Shingo_AI_Art8 ай бұрын
Awesome stuff, as always
@PurzBeats8 ай бұрын
"the cat got tigerized"
@Andy_XR8 ай бұрын
Genius. Fact. Again.
@deastman28 ай бұрын
This is so helpful! I'm using closeup selfies of three people to create composite band photos for promotion, and this simplifies the workflow immensely. Question: do you have any tips to go from three headshots to a composite image showing three people full length, head to toe? Adding that to the prompts hasn't worked very well so far, and I'm not sure if adding separate OpenPose figures for each person would be the way to go? Any advice would be most appreciated!
@latentvision8 ай бұрын
that has to be done in multiple passes. There are many ways you can approach it... it's hard to give advice on such a complex matter in a YT comment
@deastman28 ай бұрын
@@latentvision I understand. But "multiple passes" gives me an idea anyway. So probably I should generate bodies for each person first, and only then combine the three.
@Cadmeus8 ай бұрын
What a cool update! This looks useful for controlling character clothing, hairstyles and that kind of thing using reference images. Also, if you compose a 3D scene in Unreal Engine, it can output a segmented object map as colors, which could make this very powerful. You could link prompts and reference images to objects in the scene and then diffuse multiple camera angles from your scene without any further setup.
@ceegeevibes13358 ай бұрын
love.... thank you !!!
@getmmg16 күн бұрын
Hey, thanks a lot for these nodes and the tutorial. Is there a way this could be connected with FaceID, so that there is control over where each character appears in the scene?
@fukong7 ай бұрын
Great job! I'm wondering if there's any workflow using the FaceID series of IPAdapters with regional prompting...
@latentvision7 ай бұрын
it totally works, there's nothing special to do, just use the FaceID models
@fukong7 ай бұрын
@@latentvision Thanks so much for the reply!! I know I can replace the IPAdapter Unified Loader with the FaceID unified loader in this workflow, but I don't know how to receive images and adjust the v2 weight or choose a weight type while using regional conditioning for FaceID. In other words, I don't know how to build an equivalent "IPAdapter FaceID Regional Conditioning" node from existing nodes.
@hashshashin0008 ай бұрын
is there a way to use FaceID v2 with this?
@latentvision8 ай бұрын
I will add the faceid nodes next
@hashshashin0008 ай бұрын
@@latentvision ♥
@lilien_rig5 ай бұрын
Ahh, nice tutorial, I like it a lot, thanks
@francaleu77778 ай бұрын
👏👏👏
@matteoGHgherardi7 ай бұрын
You're the best!
@JoeAndolina8 ай бұрын
This workflow is amazing, thank you for sharing! I have been trying to get it to work with two characters generated from two LoRAs. The LoRAs were trained on XL, so they expect to make 1024x1024 images. I made the whole image larger so that the mask areas are 1024x1024, but everything still comes out kind of wonky. Has anyone explored a solution for generating two characters from separate LoRAs in a single image?
@WiremuTeKani8 ай бұрын
6:04 Yes, yes it is.
@latentvision8 ай бұрын
:)
@Freezasama8 ай бұрын
what a legend
@guilvalente8 ай бұрын
Would this work with Animatediff? Perhaps for segmenting different clothing styles in a fashion film.
@latentvision8 ай бұрын
attention masking absolutely works with animatediff
@elifmiami6 ай бұрын
This is an amazing workflow! I wish we could animate it.
@nicolasmarnic3998 ай бұрын
Hello Mateo! Excellent workflow :) Question: to solve the proportion issues, so that the cat is the size of a cat and the tiger is the size of a tiger, would the best solution be to edit the size of the masks? Thanks
@latentvision8 ай бұрын
no, if you need precise sizing you probably need a ControlNet. To install the Essentials, use the Manager, or download the zip and unzip it into the custom_nodes directory
@freshlesh30197548 ай бұрын
That was awesome
@tengdongmei8 ай бұрын
This video is great, but I'm following along and the portrait doesn't look like the original picture. Why?
@tengdongmei8 ай бұрын
Which file does the "ipadpt" in the embedded group read, and how do I edit it?
@walidflux8 ай бұрын
when are you going to do videos with an ip-adapter workflow?
@latentvision8 ай бұрын
not sure I understand
@walidflux8 ай бұрын
@@latentvision sorry, I meant animation with IP-Adapter. There are many workflows out there, most famously AnimateDiff plus IP-Adapter. I just thought yours would definitely be better.
@latentvision8 ай бұрын
@@walidflux I'll try to do more animatediff tutorials, but I need to add a new node that will help with that
@DashengSun-ki9qe8 ай бұрын
Great workflow. Can you add edge control and depth to the process? I tried it but failed. Can you help me? I'm not sure how the nodes are supposed to be connected; it doesn't seem to work.
@latentvision8 ай бұрын
yes it is possible, I will post a workflow in my discord
@knabbi2 ай бұрын
Having some issues getting it to work. Using 2 regions (with images of two people) and combining them as demonstrated in the video, it always gives me a fusion of both people. They are never drawn separately. I played around with the weights, models, etc. Don't get why that is happening. Any tips?
@Vincent-ce7bp8 ай бұрын
If I have a strong color distribution in my reference style image, the result seems to put the colors in the same areas of the resulting image. Is there a way around this? (IPAdapter Plus with the "strong" setting and style transfer)
@latentvision8 ай бұрын
I'd need to see it. What do you mean by strong color distribution?
@Vincent-ce7bp8 ай бұрын
@@latentvision I was talking about the macro color placement of the style reference image. If, for example, the upper part of the reference image has an orange leather texture, then the resulting image is also more likely to have an orange background or orange "parts" in its upper area.
@latentvision8 ай бұрын
@@Vincent-ce7bp in that case your best bet is probably to play with "start_at" (like 0.2) and the weight.
@Vincent-ce7bp8 ай бұрын
@@latentvision Thank you for the reply. I don't know if it's possible, but perhaps you could code a weight_type option for style transfer, like the one the IPAdapter Advanced node has. You could select style transfer as the first weight_type and then a subcategory (weight_type2) for how the style transfer is applied: linear, ease in, ease out... But this is just a rough guess.
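The "start_at" tip above can be pictured as mapping a fraction of the sampling schedule to the step range where the adapter is active. A minimal sketch of that mapping, assuming a simple linear step schedule rather than ComfyUI's actual internals:

```python
def adapter_active_steps(total_steps, start_at=0.0, end_at=1.0):
    """Return the sampler step indices during which an image adapter
    would apply, given fractional start/end points (illustrative only)."""
    first = round(total_steps * start_at)
    last = round(total_steps * end_at)
    return list(range(first, last))

# With 20 steps and start_at=0.2 the adapter skips the first 4 steps,
# letting the prompt settle the composition before the style kicks in.
print(adapter_active_steps(20, start_at=0.2))  # [4, 5, ..., 19]
```

Skipping the earliest steps is why a later start reduces the reference image's influence on macro layout: the overall color placement is largely decided in those first steps.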
@FotoAntonioCanada8 ай бұрын
Incredible
@Zetanimo8 ай бұрын
how would you go about adding some overlap, like the girl and dragon example at the beginning of the video where they are touching? Or does this process have enough leeway to let them interact?
@latentvision8 ай бұрын
The masks can overlap, and if the description is good enough the characters can interact. SD is not very good at "interactions", but standard stuff works (hugging, boxing, cheek-to-cheek, etc...). On top of that you can use ControlNets
@Zetanimo8 ай бұрын
Thanks a lot! Looking forward to more content!@@latentvision
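Matteo's note above that the masks can overlap can be pictured numerically. A toy sketch in plain numpy (purely illustrative, not ComfyUI's internal masking): the band where both masks are true is where both reference images influence the generation, which is what lets the subjects touch.

```python
import numpy as np

H, W = 64, 96
mask_a = np.zeros((H, W), dtype=bool)   # left subject's attention mask
mask_b = np.zeros((H, W), dtype=bool)   # right subject's attention mask
mask_a[:, :56] = True                   # extends past the midline
mask_b[:, 40:] = True                   # extends past the midline

overlap = mask_a & mask_b               # columns 40-55: both adapters attend here
coverage = mask_a | mask_b              # no dead zone between the subjects

print(overlap.sum(), coverage.all())    # 1024 shared pixels, True
```

Leaving a shared band like this, instead of hard-splitting the canvas, gives the model room to draw hands, hugs, and other points of contact.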
@pyyhm8 ай бұрын
Hey matt3o, great stuff! I'm trying to replicate this with SDXL models but getting a blank output. Any ideas?
@thomasmiller76787 ай бұрын
Hi, great stuff! Is there any way to do this kind of attention masking with LoRAs, so I can apply separate LoRAs to separate masks? There are a few things kicking around, but nothing seems to work all that well.
@latentvision7 ай бұрын
not really (it would be technically feasible probably but not easy)
@thomasmiller76787 ай бұрын
@@latentvision hmm, this is why I have been struggling. There are some nodes for it, but from the stuff I've found I haven't had much luck yet. Might you be able to help me out or do a little digging? Maybe you can pull off some more magic! 😄
@divye.ruhela6 ай бұрын
@@thomasmiller7678 But can't you just use the relevant LoRAs in a separate workflow to generate the images you like, then bring them here, apply conditioning, and combine?
@thomasmiller76786 ай бұрын
Yes, that is possible, but it's still not a true influence like the LoRA would have if it could be implemented.
@michail_7778 ай бұрын
Hi. Thanks for your work. I was wondering: is there any IPAdapter node that can be linked to AnimateDiff so that it works only within a certain frame range? That is, if I connect 2 input images, from frame 0 to 100 one image affects the generation, and from frame 101 the second input image affects the generation. It would also be quite nice if the images blended from frame 90 to 110.
@latentvision8 ай бұрын
yes I'm working on that
@michail_7778 ай бұрын
@@latentvision Thank you. I've added AnimateDiff and 2CN to your workflow. And it's working well.
@a.zanardi8 ай бұрын
Matteo, FlashFace got released, will you bring it too?
@latentvision8 ай бұрын
I had a look at it, it's a weird cookie. A 10GB model that only works with SD1.5... I don't know...
@a.zanardi8 ай бұрын
@@latentvision 🤣🤣🤣🤣 "Weird cookie" was really fun! Thank you so much for answering!
@ai_gene7 ай бұрын
Why doesn't it work so well with the SDXL model? In my case, the result is one girl with different styles on the two sides of her head.
@latentvision7 ай бұрын
try to use bigger masks, try different checkpoints, use controlnets
@helloRick6188 ай бұрын
really cool
@jerrycurly8 ай бұрын
Is there a way to use ControlNets in each region? I was having issues with that.
@latentvision8 ай бұрын
yes of course! just try it
@cafe.caloriesАй бұрын
Thank you for the amazing videos! Like others here, I hardly ever comment or like, but your videos are amazingly explained, in depth, besides all your other work. I was trying to use this same workflow for an image of me and a friend, but the results have very low similarity to us. I used the same setup (the model is realisticVisionV60B1_v51HyperVAE, and I tried others too) and played with the image weight between 0.7-0.8, even 1 or more, and with other parameters, but in all attempts I can't get both faces as close to us as when I use LoRA fine-tuning. I know this is not a replacement for a LoRA, but your advice would be much appreciated. Thanks again for the amazing videos
@burdenedbyhope8 ай бұрын
is it possible to use IPAdapter and attention masks for character and item interactions? Like a man handing over an apple or carrying a bag
@latentvision8 ай бұрын
yes of course! why not?!
@burdenedbyhope8 ай бұрын
@@latentvision maybe my weights/start/end are not right; I always have trouble making a known character interact with another known character or a known item. "Known" in this case means using IPAdapter. Most of the examples I saw have 2 characters/subjects standing beside each other, not interacting, so I wondered.
@burdenedbyhope8 ай бұрын
@@latentvision I tested it in many cases, and the interaction works pretty well: a girl holding an apple, a girl holding a teddy bear... all work well. With 2 girls holding hands, bleeding happens from time to time, and negative prompts are not always applicable; can the regional conditioning accept a negative image?
@makristudio73588 ай бұрын
Hi, which one is better: IPAdapter FaceID or InstantID?
@latentvision8 ай бұрын
they are different 😄 it depends on the application
@crazyrobinhood8 ай бұрын
Very good... very good )
@fmfly28 ай бұрын
My ComfyUI doesn't have 🔧 Mask From RGB/CMY/BW, only Mask From Color. Where do I find it?
@latentvision8 ай бұрын
you just need to upgrade the extension
@jiexu-j9w8 ай бұрын
thanks. For styling someone, is there a benefit to combining IPAdapter v2 with InstantID, or is IPAdapter v2 FaceID enough? If combining IPAdapter v2 with InstantID gives better results, is there any tutorial for that? Also, can a casual photo of a person taken with a normal camera get a fantasy result using the above method?
@latentvision8 ай бұрын
yes you can combine them to get better results, but don't expect huge improvements, just a tiny bit better :)
@jiexu-j9w8 ай бұрын
@@latentvision thanks. For point 2: can a normal photo of a person from a phone camera be transformed into a stylized masterpiece using ComfyUI? I can't find a YouTube video that talks about that.
@latentvision8 ай бұрын
@@jiexu-j9w depends what you are trying to do. too vague as a question, sorry
@jiexu-j9w8 ай бұрын
@@latentvision below are my case : i want to take my child born photo into a t shirt. but this photo is taken from very long time ago , and the quality is bad , especially the face got a bit vague , anyway its my memory. can i using comfy ui transfer this vague photo into a picture that reserve the pose and face of my child , improved the quality and with T-Shirt art style which suitable for print to t shirt,and it should reserve my child's face and body pose that i can regonize. how can i do with that using comfyui ?
@latentvision8 ай бұрын
@@jiexu-j9w it is possible using a combination of techniques, but it's impossible to give you a walk-through in a YouTube comment... it highly depends on the condition of the original picture
@Ai-dl2ut8 ай бұрын
Awesome sir :)
@Fernando-cj2el8 ай бұрын
Mateo, I updated everything and the nodes are still red. Am I the only one? 😭
@MiraPloy8 ай бұрын
Sparkle?
@fulldivemedia5 ай бұрын
thanks, and I think you should put the word "pill" in the title :)
@BuildwithAI8 ай бұрын
could you combine this with a LoRA?
@latentvision8 ай бұрын
one LoRA per mask? No, you can't: the model pipeline is only one