Ok, so we have YouTubers who teach AI generation, and then we have Matteo, who really understands and has deep knowledge of what he's using. You play in a different championship. Thank you
@Av-uv6xu · 7 months ago
No, we have YouTubers exploring and reviewing AI.
@Kelticfury · 9 months ago
This guy's skills are approaching godlike.
@latentvision · 9 months ago
I'm not worthy 😅
@ooiirraa · 9 months ago
@@latentvision ❤
@tailongjin-yx3ki · 5 months ago
So that's why he's the developer.
@sk1jung · 9 months ago
I watch your videos many times and your teachings are a great pleasure to me. I also feel my skills improving. Thank you so much for your teachings.
@Homopolitan_ai · 9 months ago
Ah! Matteo, I don't know if it's your accent, your knowledge, the way you transmit it, your tranquility... but I'm falling in love, baby! ❤❤
@ooiirraa · 9 months ago
Me tooooooo ❤
@ericren5390 · 6 months ago
Thank you so much, Matteo, you really taught me to get to the core of SD gradually. I just watched this video for the first time and there is still a lot to digest; I will have to watch it a few more times to really get a handle on the workflow.
@davidb8057 · 9 months ago
Brilliant job, Matteo, and as always, beautifully explained. I love your work. Thanks for sharing it with us.
@Mehdi0montahw · 9 months ago
Thank you. The method worked for me after several days of trying, by following your explanation and building the workflow instead of downloading it, so I could understand each part accurately.
@bwheldale · 9 months ago
Your knowledge, your approach, and how you think are most inspirational to me; I strive to be more like you, if only in a small way. These tutorials are like good food!
@JKG-777 · 9 months ago
Fantastic! Thank you for sharing the process and workflow in such great detail.
@TheFutureThinker · 9 months ago
Thank you @latentvision for the inspiration. Yes, I totally agree about the lineart. I will try it out to mask the background like you mentioned. 👍
@DOntTouCHmYPaNDa · 9 months ago
You are amazing!!!! I'm in complete awe of your skills and knowledge.
@tengdongmei · 7 months ago
It's beautiful. I hope there will be more animated works, such as animated picture stories.
@AtenRIP · 7 months ago
Your videos have so much valuable information. You're a master at what you do and you deserve way more views. Thank you for your work!
@zoranspirkovski9721 · 9 months ago
Awesome. You are sensational with ComfyUI.
@rewired1974 · 9 months ago
Matteo, thank you for this very impressive and inspiring tutorial! Keep up your extraordinary work.
My first impression of ComfyUI was horrible. But something in me said this tool would be powerful and feature-rich. You've proven this 🎉
@kirbulich · 9 months ago
This was in March of 2023, and now at the end of the year 😮
@ooiirraa · 9 months ago
Thank you, Matteo!!!! You are so amazing. Every time I see a new video on your channel I feel excited in advance. What do you think: might it be useful to create a node that accepts an image of any proportions and prepares it for CLIP Vision internally, splitting it into squares, plus one IPAdapter node that could accept its output?
@latentvision · 9 months ago
Hey, thanks! Yes, I'm thinking of adding an "auto-tile" node, but it could be computationally expensive if you add a lot of tiles. I have to think about it, but it's doable.
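The "auto-tile" idea discussed here can be sketched in a few lines: split an image of arbitrary size into square tiles matching CLIP Vision's 224x224 input, shifting the last row and column back so every tile stays inside the frame (overlapping rather than padding). This helper is a hypothetical illustration, not the actual extension code.

```python
# Hypothetical "auto-tile" sketch: cover an image with 224x224 tiles
# for CLIP Vision. Overlap strategy is an assumption, not the node's code.

CLIP_VISION_SIZE = 224

def tile_boxes(width, height, tile=CLIP_VISION_SIZE):
    """Return (left, top, right, bottom) boxes covering the image.

    The last row/column is shifted back so every tile stays fully
    inside the image; tiles may overlap instead of being padded.
    """
    if width < tile or height < tile:
        raise ValueError("image smaller than the tile size")
    xs = list(range(0, width - tile, tile)) + [width - tile]
    ys = list(range(0, height - tile, tile)) + [height - tile]
    return [(x, y, x + tile, y + tile) for y in ys for x in xs]

boxes = tile_boxes(448, 672)   # the 2x3 portrait case from the video
print(len(boxes))              # 6 tiles
```

As Matteo notes, the cost grows with the tile count: each tile is a separate CLIP Vision encode.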
@sirmeon1231 · 4 months ago
If you want to do more videos about animation in ComfyUI, I would be happy to watch! Always such a lot of knowledge in your videos, I love it! You come here looking for one thing and learn three others on the way!
@lenny_Videos · 9 months ago
You are such a great value to the community 😊 Many thanks 🙏
@siobhanoconnor652 · 9 months ago
Very inspiring. Thank you.
@banzai316 · 9 months ago
Your technique is a flawless victory. Well done!
@comfyuiadrian · 9 months ago
Many thanks for your sharing and teaching; you really understand all the nodes in ComfyUI. Bravo Matteo!
@KooroshGhotb · 9 months ago
Absolutely amazing tutorial. Thanks for sharing.
@JimmyGhelani777 · 8 months ago
Honestly, you are amazing! Incredibly smart! Thank you for your videos!
@latentvision · 8 months ago
nnnaaah I'm not!
@JimmyGhelani777 · 8 months ago
Haha, you are, and honestly thank you for sharing your knowledge. I'm new to this. I'm a developer by trade, but delving into this world seemed overwhelming until I came across your videos. So thank you again :)
@zake-gh4rb · 9 months ago
It's incredibly great 👍
@ultimategolfarchives4746 · 7 months ago
I just landed on your video, and I'm not sure why I watched other videos before haha 😂 Crazy good video, sir 👌👌👌
@fseang · 9 months ago
A great mentor.
@fseang · 9 months ago
You are my source of motivation, and I will work hard to learn.
@lucvaligny5410 · 8 months ago
Here is the mastermind of ComfyUI. Everything seems so evident to you; for my part, I just need to review your video 3 or 4 times, stop every 15 seconds, take a note as a reminder, and spend at least 2 hours to really assimilate all the knowledge given here. It is such a gift you are sharing with us. Thanks again for your generosity and for sharing your knowledge.
@dnvman · 8 months ago
That's so good, thanks bro 🙌
@GianPieroAnselmi · 9 months ago
Great job, Matteo! Thank you for your work, your passion, and for sharing.
@norvsta · 9 months ago
Great tutorial, Matteo!
@AnotherPlace · 6 months ago
I am overwhelmed with so much information; I wish I could borrow your brain... I can't follow. I'll have to watch it multiple times...
@Xtremevibes-nd7gm · 9 months ago
I love this workflow. Hopefully you can discuss improving the face detail.
@latentvision · 9 months ago
Yes, increasing details and sharpness is the next thing we need to cover... so many things to do 😄
@ryanontheinside · 8 months ago
Thank you so much for all of your work!
@dissolutevoid · 9 months ago
Wow, you're the best AI guy for ComfyUI.
@sairampv1 · 9 months ago
I think we can use XMem or Cutie to create masks easily instead of the COCO segmenter, etc. (mentioned at 11:37).
@Xtremevibes-nd7gm · 9 months ago
Do you have plans to create a workflow for old photo restoration?
@latentvision · 9 months ago
Yes, that is a very interesting topic!
@Michael-gf1jn · 6 months ago
Amazing. You are a very intelligent human. :)
@heranzhou6976 · 8 months ago
This is wonderful! Thank you for showing your techniques. May I ask how to use ControlNet tile on a specific region? I used the segmented-mask technique you showed, but since the empty space is black, ControlNet tile makes that region black too. How do I tile-control a specific region without making the rest of the image black? I'd really appreciate any tips.
@latentvision · 8 months ago
Thanks! ACN (Advanced ControlNet) has a "mask" option.
@promptmuse · 9 months ago
Outstanding, Matteo 🔥
@ai_and_gaming · 9 months ago
+1 subscriber, thank you
@claudiamichen-gruber2012 · 9 months ago
That was really beautiful and exceptionally informative. Many, many thanks 👍😍
@VFXMinds · 8 months ago
you are awesome :)
@latentvision · 8 months ago
nah, you are awesome!
@TheAxillar · 9 months ago
Thank you!
@ltcshow6175 · 9 months ago
THANK YOU THANK YOU THANK YOU. I haven't tested the part where you split the video and process it in 16-frame sections, but I will be re-watching that after I make breakfast; I was wondering why things changed when I changed the number of frames. You make for an amazing teacher. I watched "Image stability and repeatability (ComfyUI + IPAdapter)" and "Animations with IPAdapter and ComfyUI"; I didn't learn much from those because I wasn't using ComfyUI at the time, and I fell asleep on the FaceID video because I was tired, not because of the content, so I don't even remember watching it. Now that I can do some interesting animated things in Comfy, I watched this one and it solved my biggest issue (still crossing my fingers, but I think so because of how you explained it), and it also goes slow enough to help teach some node work. I'd like to see more and more content as things evolve, which they are doing rapidly. I hope you can keep everyone up to date, because I think you can do it better than most.
@ltcshow6175 · 9 months ago
Okay, so using the Uniform Context Options and mixing things up gets me interesting results. My PC is a beast, so it isn't too bad, but damn, I can't wait till things evolve. I'm going to be able to make a movie soon using this technology; I can't believe I can actually do this stuff. Anyway, I'm stuck and want certain results. I'll be back here later, maybe tomorrow or tonight, with some questions, crossing my fingers that you have some free time during the holiday weekend. Happy New Year! I hope I can ask you for help. I also subscribed, not that that is a good trade-off for some extra help; maybe I could buy you a beer or something. I could also get you some drone footage in the summer/winter. I don't know, maybe you will have an idea.
@lizhang-b1x · 9 months ago
I like your videos the most. ❤
@SjonSjine · 6 months ago
When I have a nice setup (images) and implement AnimateDiff (text2video), it always changes and gets very blurry. How could I use IPAdapter to sharpen my text2video again?
@cwhiticar1 · 9 months ago
Wow, this is incredible.
@atlasv2562 · 8 months ago
Error occurred when executing OneFormer-COCO-SemSegPreprocessor: No module named 'controlnet_aux.oneformer' — how can I fix that? :(
@roman_vfx · 4 months ago
magic! :)
@melihdalar6610 · 9 months ago
Nice work
@shshsh-zy5qq · 5 months ago
12:19 Hey Matteo, what if I want to keep only the blue part instead of the red in Mask From Color? How can I set that up? I changed the numbers around and kept getting the entire piece black. Thank you so much for the amazing tutorial!
@latentvision · 5 months ago
Load the mask into a paint program and select the color you want with the eyedropper. Check the RGB values and you are done. Increase the threshold by one or two just to be safe.
@shshsh-zy5qq · 5 months ago
@@latentvision thank you!!!
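The eyedropper-plus-threshold advice above boils down to a per-channel comparison: a pixel belongs to the mask when each of its RGB channels is within the threshold of the target color. A minimal sketch follows; the per-channel rule is an assumption about how a "Mask From Color" style node works internally, not the node's actual code.

```python
# Sketch of a mask-from-color selection: keep pixels whose RGB values
# are within `threshold` of a target color, per channel.

def mask_from_color(pixels, target, threshold=2):
    """pixels: list of (r, g, b) tuples; returns 1.0 where the pixel matches."""
    return [
        1.0 if all(abs(c - t) <= threshold for c, t in zip(px, target)) else 0.0
        for px in pixels
    ]

pixels = [(255, 0, 0), (254, 1, 0), (0, 0, 255)]
mask = mask_from_color(pixels, target=(255, 0, 0), threshold=2)
print(mask)  # [1.0, 1.0, 0.0] -- the blue pixel is excluded
```

So to keep the blue region instead of the red one, you would pass the blue region's RGB values as `target`, exactly as the eyedropper tip suggests.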
@gamalfarag · 9 months ago
I was trying to follow your tutorial step by step instead of copying the workflow, for learning purposes, but at some point I got this and I can't move any further: Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
@latentvision · 9 months ago
Check the IPAdapter repository for help: github.com/cubiq/ComfyUI_IPAdapter_plus
@jamesong5296 · 9 months ago
Hi, I have some issues with the AnimateDiff Loader. It gives me the error below: Error occurred when executing ADE_AnimateDiffLoaderWithContext: module 'comfy.ops' has no attribute 'Linear'. I've used my own workflow and yours, but both have the same issue. Could you let me know if there is a workaround?
@latentvision · 9 months ago
I'm sorry, it's hard to say. I'd suggest upgrading ComfyUI and the extensions.
@sdgtr4 · 8 months ago
I have been trying ComfyUI for days now. Maybe I'm not skilled enough, but the thing is not there yet; most of the nodes and models needed can't be found anywhere. I hope this will one day work better for semi-pro users.
@JorgeLuisAR · 9 months ago
great! as always
@javhus · 8 months ago
Hey, how can I, say, combine this with more adapters? If I have two things I want to transition between, I use a transition mask, but how do I also split the image and give each part more continuity? It would be like 4 IPAdapters in total. I'm also using two FaceID masks that apply to specific faces. Any ideas on managing all these attention masks?
@latentvision · 8 months ago
I'm sorry, this is not something I can explain in a YT comment. Check my IPAdapter animations video for a simple transition workflow. Then yes, you can add more areas with multiple masks... but maybe other techniques would be better (like tile ControlNet).
@javhus · 8 months ago
@@latentvision Haha, thanks for the reply anyway. I figured it out. It ended up involving a lot of mask manipulation. I used two tile ControlNets to keep the start and end consistent, and multiple IPAdapters and masking to interpolate between concepts. The masks get more complex as I add more IPAdapters, and it does take a lot of fine tuning.
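The transition idea in this exchange — fading from one IPAdapter reference to another over the course of an animation — can be sketched as a per-frame weight ramp for the second reference. The linear ramp below is a hypothetical illustration of the concept; the actual workflows use mask batches rather than scalar weights, and the ramp shape is an assumption.

```python
# Hypothetical per-frame transition weights for blending two IPAdapter
# references: 0 before `start`, 1 after `end`, a linear ramp in between.

def transition_weights(num_frames, start, end):
    """Weight of the *second* reference for each frame index."""
    weights = []
    for f in range(num_frames):
        if f <= start:
            w = 0.0
        elif f >= end:
            w = 1.0
        else:
            w = (f - start) / (end - start)
        weights.append(round(w, 3))
    return weights

w = transition_weights(9, 2, 6)
print(w)  # [0.0, 0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0]
```

The first reference would get the complementary weights (`1 - w`), which is what a black-to-white transition mask encodes spatially.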
@ScraggyDogg · 9 months ago
Many thanks
@AnimeDiff_ · 9 months ago
Amazing. Thank you
@HooIsit · 3 months ago
You are the best! Thank you very much. I'm getting an error, maybe you can help, please? RuntimeError: mat1 and mat2 shapes cannot be multiplied (1232x768 and 1024x320)
@latentvision · 3 months ago
Mmh, you are probably using the wrong combination of models (like the wrong CLIP Vision or the wrong checkpoint).
@filippoc8974 · 9 months ago
Thanks 😊
@yngeneer · 8 months ago
@latentvision: I want to replicate your workflow as a starting point, but the ballerina video is unfortunately not available at the moment. Would it be too presumptuous to ask you to upload that file somewhere?
@latentvision · 8 months ago
Seems to be working: www.pexels.com/video/person-woman-girl-steps-4990427/
@yngeneer · 8 months ago
@latentvision OK, for me it still isn't, no matter which browser I use; after clicking that download button it's still just "Video temporarily unavailable"... thanks for your interest.
@yngeneer · 8 months ago
Yep, and one day later the download is working... for anyone still interested...
@vicentealmanza4431 · 9 months ago
Is this possible with a 16:9 video, or will I run into problems with CLIP Vision?
@latentvision · 9 months ago
This video was portrait mode, but it works the same for landscape (16:9 included).
@bwheldale · 9 months ago
I did a 6-piece landscape at a 3:2 ratio (640x480 input with six 224x224 pieces to cover 672x448). I'm not sure why I felt the need to post this; I guess it's the excitement of learning ComfyUI.
@decambra89 · 9 months ago
Bro, you nuked it, gg
@pfbeast · 9 months ago
I tried to use your workflow after watching your jellyfish-ballerina AnimateDiff video, but the "ComfyUI's ControlNet Auxiliary Preprocessors" and "ComfyUI-VideoHelperSuite" nodes fail to load (import failed). I installed via ComfyUI Manager and also tried installing manually, but the problem wasn't solved. I am using Amazon SageMaker Studio Lab. Please help me fix this issue.
@DivinityIsPurity · 9 months ago
Why can't you loop it?
@latentvision · 9 months ago
of course you can! :)
@didiernaimdefli · 5 months ago
I quit
@samon29 · 8 months ago
Thanks, great job. There was only one problem with loading the video at around 768x1664.
@kpr2 · 9 months ago
Just out of curiosity: you cropped the 488 image into two pieces at 224 rather than 244, which would have been half the original. Was there a reason in particular, or just a "close enough" sort of thing? Still learning here, but loving it. :) Thanks!
@latentvision · 9 months ago
No, it's not an error. I downscaled it to 488 but I'm taking only 224, so the ballerina is actually a pinch bigger (i.e., I'm cutting out the sides of the animation and concentrating only on the main character). Depending on the video, you can totally downscale to 448, which will give you the whole frame. Glad you noticed it though.
@kpr2 · 9 months ago
@@latentvision Thanks for the explanation :)
@aliyilmaz852 · 6 months ago
@@latentvision Another useful technique, even when explained in one sentence. Thanks again, Matteo. You are developing and teaching non-stop; you cannot be human!
@DealingWithAB · 8 months ago
I can't seem to find the DW preprocessor like the one you have in this video, just the basic version that only has hand, body, and face.
@paoloricaldone6273 · 8 months ago
Very interesting, thanks. Is there a way in ComfyUI to integrate a 3D object into a video, matching both the video's lighting and its style? No one seems to have managed it so far.
@tailongjin-yx3ki · 5 months ago
I'm wondering how you know the parameters so deeply. Can you create a video on tuning the parameters to get the desired results?
@miketoriant · 9 months ago
I've never enjoyed watching tutorials as much as I have yours — for anything. You are a great teacher.
@EnricoSeifert · 9 months ago
Hey Matteo, thanks for the great video. I was able to learn a lot again, especially in terms of optimization. The max resolution of CLIP Vision was also new to me. 👍👍👍
@SheRoMan · 9 months ago
I WISH YOUR VIDEOS NEVER ENDED
@yvann.mp4 · 9 months ago
Incredible work!! Thanks so much
@DanielPartzsch · 5 months ago
Great. In the new IPAdapter Advanced there is no "unfold batch" option anymore. Is this obsolete with V2, or do you need to use the batch version of the IPAdapter instead? Thank you.
@latentvision · 5 months ago
the batch nodes are for animations, yes
@atlasv2562 · 8 months ago
Hi! Amazing workflow — so much better than anything else out there! Do you mind telling us where to download the IPAdapter image encoder for SD1.5?
@latentvision · 8 months ago
please check the extension repository on github
@blender_wiki · 9 months ago
nice work 👍👍👍
@lucagenovese7207 · 3 months ago
SPECTACULAR
@EdgardMello · 9 months ago
Can I use this workflow on macOS (MacBook Pro M1 14")? I'm getting this error: Error occurred when executing KSampler: The operator 'aten::upsample_bicubic2d.out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature... On my Windows PC I only have a 1050 Ti GPU, and it's not enough to handle this workflow.
@latentvision · 9 months ago
I need the full error message, please. I believe I know what the problem is, but I need to be sure.
@EdgardMello · 9 months ago
By the way... I would like to endorse most of the positive comments people make about your channel. Of all the channels I've seen, yours is by far the most innovative and easiest to follow. Maybe it's because it seems that you really know the tech and are more comfortable passing your knowledge on to us.
@АбдуллаШихгереев · 9 months ago
Hi, I get this error: "When loading the graph, the following node types were not found: ImageCrop+ ImageCASharpening+ MaskFromColor+ MaskBlur+ ImageResize+. Nodes that have failed to load will show as red on the graph." Although I installed them via Install Missing Custom Nodes and restarted, there is no result.
@latentvision · 9 months ago
you probably need to update comfyui
@АбдуллаШихгереев · 9 months ago
@@latentvision I'll try to update. But it seems in my Colab, ComfyUI is updated automatically after each launch.
@Chad-xd3vr · 7 months ago
Brilliant again, matt3o, thank you. Query: at 7:50 you put size 488x488 — did you mean 448, as in 2x224?
@latentvision · 7 months ago
I resize the image slightly bigger than I need and then crop out the 224x224 tile. That way I get slightly more detail on the parts I'm interested in and also crop out some of the background on the sides. I'm sorry I didn't explain that in the video. But yes, generally you want to crop it at 448.
@Chad-xd3vr · 7 months ago
@@latentvision thank you for the explanation
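The resize-then-crop trick discussed in this exchange — downscale so the short side is a bit larger than needed (488 instead of 448), then take a centered 448 crop, trimming the sides — reduces to a centered-crop box computation. The helper below is illustrative; only the 488/448/224 sizes come from the video, and the frame height used here is an assumption.

```python
# Sketch of the centered crop behind the 488-vs-448 trick: resizing to
# 488 and cropping 448 trims 20px from each side, making the subject
# slightly larger in the two 224x224 CLIP Vision tiles.

def center_crop_box(width, height, crop_w, crop_h):
    """Return the (left, top, right, bottom) box for a centered crop."""
    left = (width - crop_w) // 2
    top = (height - crop_h) // 2
    return (left, top, left + crop_w, top + crop_h)

# 488-wide frame (height hypothetical), cropped to a 448x448 region
box = center_crop_box(488, 856, 448, 448)
print(box)  # (20, 204, 468, 652) -- 20px trimmed from each side
```

Resizing directly to 448 instead would keep the whole frame but make the main subject a touch smaller, which is the trade-off Matteo describes.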
@pfbeast · 9 months ago
👍👍👍👌👌👌👌❤❤
@bobdelul · 9 months ago
This channel is so good. I got a totally different perspective on how to use ComfyUI. Well done!
@slightsloan · 9 months ago
Any reason why you use the 1.5 v2 motion model for AnimateDiff over v3?
@latentvision · 9 months ago
Well, v3 wasn't out when I started making this video 😄 You are free to try it anyway; it should work.
@slightsloan · 9 months ago
@@latentvision I'm really happy with the results. Thanks for your hard work compiling this information :)
@gamalfarag · 9 months ago
Where can I download 3 ?
@ac3d657 · 9 months ago
Maybe it's time to try animation out — the greatest of all time made a tutorial ❤❤
@kingtut_AI · 7 months ago
This is just amazing, Matteo! 🤯
@drmuradkhan · 9 months ago
Man, you are out of this world. I really don't have words to describe what you have unlocked for this world. Thank you. Can you please share your Discord or any chat group where I can join your community?
@latentvision · 9 months ago
thanks! my discord: latent.vision/discord
@Gabriecielo · 9 months ago
Amazing result and super clear explanation in this tutorial, thank you Matteo! One question: I didn't understand the "Uniform Context Options" node very well. It looks like a parameter on the AnimateDiff Loader — what is it for?
@latentvision · 9 months ago
Models are usually trained at 16 frames (now we have longer models, though). The "context options" renders longer videos by computing 16 frames at a time.
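The sliding-window idea behind "context options" can be sketched as follows: the motion model only ever sees 16 frames at once, so a longer animation is computed as overlapping 16-frame windows whose results are blended. The window and overlap values below are illustrative; the real AnimateDiff-Evolved scheduling (fuse methods, closed loops, etc.) is more involved.

```python
# Hypothetical sketch of context windowing: cover `num_frames` with
# overlapping windows of at most `context` frames each.

def context_windows(num_frames, context=16, overlap=4):
    """Return (start, end) frame ranges, each at most `context` long."""
    stride = context - overlap
    windows = []
    start = 0
    while start + context < num_frames:
        windows.append((start, start + context))
        start += stride
    windows.append((max(num_frames - context, 0), num_frames))  # final window
    return windows

print(context_windows(32))  # [(0, 16), (12, 28), (16, 32)]
```

The overlapping frames are what keep motion coherent across window boundaries — which is also why changing the frame count or context settings can visibly change the result, as noted earlier in this thread.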