All new Attention Masking nodes

  31,703 views

Latent Vision

A day ago

Comments: 172
@AthrunWilshire 8 months ago
The wizard says this isn't magic but creates pure magic anyway.
@latentvision 8 months ago
Any sufficiently advanced technology is indistinguishable from magic
@NanaSun934 8 months ago
I am so thankful for your channel. I have watched countless YouTube videos about ComfyUI, but yours are definitely among the clearest, with a deep understanding of the subject. I hardly EVER leave comments, but I felt the need to write this one. I've been watching and rewatching your videos and following along. It's so much fun. Thank you so much!
@tsentenari4353 3 months ago
This! There are Instagram follower counts, and then there is "the number of people who feel deeply thankful towards one".
@jayd8935 8 months ago
I think it was a blessing that I found your channel. These workflows spark my creativity so much.
@DataMysterium 8 months ago
Awesome as always. Thank you for sharing those amazing nodes with us.
@xpecto7951 7 months ago
Please continue doing more informative videos like you always do. Everyone else just shows prepared workflows, but you actually show how to build them. Can't thank you enough.
@GggggQqqqqq1234 8 months ago
Thank you.
@jcboisvert1446 8 months ago
Thanks
@DarkGrayFantasy 8 months ago
Amazing stuff as always, Matt3o! Can't wait for the next IPAv2 stuff you've got going on!
@alessandrorusso583 8 months ago
Great video as always. A large number of interesting things. Thank you, as always, for the time you give to the community.
@petneb A month ago
Those are some service-minded nodes you've created for us. Thank you so much.
@kf_calisthenics 8 months ago
Would love a video of you going in depth on the development and programming side of things!
@latentvision 8 months ago
maaaaybe 😄
@flisbonwlove 8 months ago
Mr. Spinelli always delivering magic!! Thanks and keep up the superb work 👏👏🙌🙌
@d4n87 8 months ago
Great job matt3o, your nodes are spectacular! 😁👍 This workflow especially looks absolutely interesting and adaptable to the issues of the various generations
@mattm7319 8 months ago
The logic you've used in making these nodes makes it so much easier! Thank you!
@contrarian8870 8 months ago
Great stuff, as always! One thing: the two girls were supposed to be "shopping" and the cat/tiger were supposed to be "playing". The subjects transferred properly (clean separation) but there's no trace of either "shopping" or "playing" in the result.
@latentvision 8 months ago
the first word in all prompts is "closeup", which basically overcomes anything else in the prompt
@yql-dn1ob 8 months ago
Amazing work! It improved the usability of the IPAdapter!
@johndebattista-q3e 8 months ago
Thanks Matteo, you're doing a good job
@Foolsjoker 8 months ago
This is going to be powerful. Good work, Matt3o!
@marcos13vinicius11 8 months ago
It's going to help a million times on my personal project!! Thank you
@aivideos322 8 months ago
You should be proud of your work. Thanks for all you do. I was working on my video workflow with masking IPAdapters for multiple people... this will SOOOOOO make things easier.
@11011Owl 8 months ago
The most useful videos about ComfyUI, thank you SO MUCH, I'm excited af about how cool it is
@jccluaviz 8 months ago
Thank you, thank you, thank you. Great work, my friend. Another masterpiece. Really appreciated.
@latentvision 8 months ago
glad to help
@davidb8057 8 months ago
Brilliant stuff, thanks again, Matteo. Can't wait for the FaceID nodes to be brought to this workflow.
@Skyn3tD1dN0th1ngWr0ng 27 days ago
It seems the nondescript anime 'grill' holding the coffee mug is not a recurring character anymore :'c I just realized with this video that most of 'Dr.Lt.Data's videos (which are great resources, I'm not comparing) are months old and fully "outdated"... Less than a year, and all those guides and workflows will only confuse new users (at least IPAdapter users). Thanks for keeping us up to date, Matteo ☕ I finally understood regional prompting from IPAdapter's perspective; it has been a long month.
@ttul 8 months ago
Wow, this is so insanely cool. I can't wait to play with it, Matteo.
@Kentel_AI 8 months ago
Thanks again for the great work.
@allhailthealgorithm 8 months ago
Amazing, thanks again for all your hard work!
@skycladsquirrel 8 months ago
Great video! Thank you for all your hard work!
@Ritesh-Patel A month ago
I hardly comment on YouTube. But dang, you are next level. Thank you for this and the other videos
@WhySoBroke 8 months ago
An instamazing day when Maestro Latente spills his magical brilliance!!
@volli1979 8 months ago
6:05 "oh shit, this is so cool!" - nothing to add.
@musicandhappinessbyjo795 8 months ago
The results look pretty amazing. Could you maybe do a tutorial combining this with ControlNet (not sure if that's possible), just so we can also control the position of the characters?
@aliyilmaz852 8 months ago
Thanks again for the great effort and explanation, Matteo. You are amazing! Quick question: is it possible to use controlnets with IPAdapter Regional Conditioning?
@latentvision 8 months ago
yes! absolutely!
@Ulayo 8 months ago
Nice! More nodes to play with!
@context_eidolon_music 7 months ago
Thanks for all your hard work and genius!
@latentvision 7 months ago
just doing my part
@Showdonttell-hq1dk 8 months ago
Once again, it's simply wonderful! During a few tests, I noticed that the "Mask From RGB" node needs very bright colors to work. A slightly darker green and it no longer has any effect. Everything else produced cool results on the first try. Thanks for all the work! And I'm just about to follow your ComfyUI app tutorial video to make one myself.
@latentvision 8 months ago
you can set thresholds for each color, so you can technically grab any shade
@Showdonttell-hq1dk 8 months ago
@@latentvision Of course I tried that. But it worked wonderfully with bright colors. It's no big deal. As I said, thanks for the great work! :)
@latentvision 8 months ago
@@Showdonttell-hq1dk using black or white and the threshold you can technically get any color. But you're probably better off using the Mask From Segmentation node
@mycelianotyours1980 8 months ago
Thank you for everything!
@rawkeh 8 months ago
8:01 "This is not magic," says the wizard
@latentvision 8 months ago
I swear it is not :P
@35wangfeng 8 months ago
You rock!!!!! Thanks for the amazing job!!!!
@premium2681 8 months ago
Angel Mateo came down from latent space again to teach the world his magic
@heranzhou6976 8 months ago
Wonderful. May I ask how I can insert FaceID into this workflow? Right now I get this error: Error occurred when executing IPAdapterFromParams: InsightFace: No face detected.
@jacekfr3252 8 months ago
"oh shit, this is so cool"
@renegat552 8 months ago
Great work. Thanks a lot!
@autonomousreviews2521 8 months ago
Excellent! Thank you for your work and for sharing :)
@ojciecvaader9279 8 months ago
I really love your work
@leolis78 5 months ago
Hi Matteo, thanks for your contributions to the community. I am trying to use attention masking in the process of compositing product photos. The idea is to be able to define in which zone of the image each element is located. For example, in a photo of a wine, define the location of the bottle and the location of the props, such as a wine glass, a bunch of grapes, a corkscrew, etc. But I tried the attention masking technique and it is not giving me good results in SDXL. Is it only for SD1.5? Do you think it is a good technique for this kind of composition for product photography, or is there a better one? Thanks in advance for your help! 😃😃😃
@latentvision 5 months ago
this is complex to answer in a YT comment. It depends on the size of the props. You probably need to upscale the image and work with either inpainting or regional prompting. Try asking on my Discord server
@Mika43344 8 months ago
Great work as always 🎉
@kaiserscharrman 8 months ago
really really cool addition. thanks
@胡文瀚-o6y 5 months ago
Why does it say "ClipVision model not found" when I use it?
@digidope 8 months ago
Just wow! Thanks a lot again!
@pfbeast 8 months ago
❤❤❤ as always, the best tutorial
@GggggQqqqqq1234 8 months ago
Thank you!
@AnotherPlace 8 months ago
Continue creating magic, senpai!! ❤️
@nrpacb 8 months ago
I learned something new, happy. I want to ask: when can we get a tutorial on replacing furniture indoors or something like that?
@latentvision 8 months ago
yeah that would be very interesting... I'll think about it
@Shingo_AI_Art 8 months ago
Awesome stuff, as always
@PurzBeats 8 months ago
"the cat got tigerized"
@Andy_XR 8 months ago
Genius. Fact. Again.
@deastman2 8 months ago
This is so helpful! I'm using closeup selfies of three people to create composite band photos for promotion, and this simplifies the workflow immensely. Question: do you have any tips to go from three headshots to a composite image which shows the three people full length, head to toe? Adding that to the prompts hasn't worked very well so far, and I'm not sure if adding separate OpenPose figures for each person would be the way to go? Any advice would be most appreciated!
@latentvision 8 months ago
that has to be done in multiple passes. there are many ways you can approach that... it's hard to give advice on such a complex matter in a YT comment
@deastman2 8 months ago
@@latentvision I understand. But "multiple passes" gives me an idea anyway. So probably I should generate bodies for each person first, and only then combine the three.
@Cadmeus 8 months ago
What a cool update! This looks useful for controlling character clothing, hairstyle and that kind of thing, using reference images. Also, if you compose a 3D scene in Unreal Engine, it can output a segmented object map as colors, which could make this very powerful. You could link prompts and reference images to objects in the scene and then diffuse multiple camera angles from your scene, without any further setup.
@ceegeevibes1335 8 months ago
love.... thank you !!!
@getmmg 16 days ago
Hey, thanks a lot for these nodes and the tutorial. Is there a way this could be connected with FaceID, so that there would be control over where each character is in the scene?
@fukong 7 months ago
Great job! I'm wondering if there's any workflow using the FaceID series of IPAdapters with regional prompting...
@latentvision 7 months ago
it totally works, there's nothing special to do, just use the FaceID models
@fukong 7 months ago
@@latentvision Thanks so much for the reply!! I know I can replace the IPAdapter Unified Loader with the FaceID unified loader in this workflow, but I don't know how to receive images and adjust the v2 weight or choose a weight type while using regional conditioning for FaceID; in other words, I don't know how to create an equivalent "IPAdapter FaceID Regional Conditioning" node with existing nodes.
@hashshashin000 8 months ago
is there a way to use FaceID v2 with this?
@latentvision 8 months ago
I will add the FaceID nodes next
@hashshashin000 8 months ago
@@latentvision ♥
@lilien_rig 5 months ago
ahh nice tutorial, I like it, many thanks
@francaleu7777 8 months ago
👏👏👏
@matteoGHgherardi 7 months ago
You're great!
@JoeAndolina 8 months ago
This workflow is amazing, thank you for sharing! I have been trying to get it to work with two characters generated from two LoRAs. The LoRAs have been trained on XL, so they are expecting to make 1024x1024 images. I have made my whole image larger so that the mask areas are 1024x1024, but still everything is coming out kind of wonky. Have any of you explored a solution for generating two characters from separate LoRAs in a single image?
@WiremuTeKani 8 months ago
6:04 Yes, yes it is.
@latentvision 8 months ago
:)
@Freezasama 8 months ago
what a legend
@guilvalente 8 months ago
Would this work with AnimateDiff? Perhaps for segmenting different clothing styles in a fashion film.
@latentvision 8 months ago
attention masking absolutely works with AnimateDiff
@elifmiami 6 months ago
This is an amazing workflow! I wish we could animate it.
@nicolasmarnic399 8 months ago
Hello Matteo! Excellent workflow :) Question: to solve the proportion issues, so that the cat is the size of a cat and the tiger is the size of a tiger, would the best solution be to edit the size of the masks? Thanks
@latentvision 8 months ago
no, if you need precise sizing you probably need a controlnet. To install the essentials use the Manager, or download the zip and unzip it into the custom_nodes directory
@freshlesh3019754 8 months ago
That was awesome
@tengdongmei 8 months ago
This video is great, but I followed along, so why does the portrait not look like the original picture?
@tengdongmei 8 months ago
Which file does the ipadapter in the embedded group read, and how do I edit it?
@walidflux 8 months ago
when are you going to do videos with an IPAdapter workflow?
@latentvision 8 months ago
not sure I understand
@walidflux 8 months ago
@@latentvision sorry, I meant animation with IPAdapter. There are many workflows out there, most famously AnimateDiff plus IPAdapter; I just thought yours would definitely be better
@latentvision 8 months ago
@@walidflux I'll try to do more AnimateDiff tutorials, but I need to add a new node that will help with that
@DashengSun-ki9qe 8 months ago
Great workflow. Can you add edge control and depth to the process? I tried it but failed. Can you help me? I'm not sure how the nodes are supposed to be connected; it doesn't seem to work.
@latentvision 8 months ago
yes it is possible, I will post a workflow on my Discord
@knabbi 2 months ago
Having some issues getting it to work. Using 2 regions (with images of two people) and combining them like demonstrated in the video, it always gives me a fusion of both people. They are never drawn separately. Played around with the weights, models, etc. Don't get why that is happening. Any tips?
@Vincent-ce7bp 8 months ago
If I have a strong color distribution in my reference style image, the result seems to put the colors in the same areas of the resulting image. Is there a way around this? (IPAdapter Plus with the strong setting and style transfer)
@latentvision 8 months ago
I'd need to see it. What do you mean by strong color distribution?
@Vincent-ce7bp 8 months ago
@@latentvision I was talking about the macro color placement in certain parts of the style reference image. If, for example, the upper part of the reference image has an orange leather texture, then the resulting image is also more likely to have an orange background or orange "parts" in its upper area.
@latentvision 8 months ago
@@Vincent-ce7bp in that case probably your best bet is to play with "start_at" (like 0.2) and weight.
@Vincent-ce7bp 8 months ago
@@latentvision Thank you for the reply. I don't know if it's possible, but perhaps you could code a weight_type option for the style transfer, like the weight_type option on the IPAdapter Advanced node. You could select style transfer as the first weight_type, and then there would be a subcategory (weight_type2) for how the style transfer is applied: linear, ease in, ease out... But this is just a rough guess.
@FotoAntonioCanada 8 months ago
Incredible
@Zetanimo 8 months ago
how would you go about adding some overlap, like the girl and dragon example from the beginning of the video where they are touching? Or does this process have enough leeway to let them interact?
@latentvision 8 months ago
The masks can overlap; if the description is good enough the characters can interact. SD is not very good at "interactions" but standard stuff works (hugging, boxing, cheek-to-cheek, etc...). On top you can use controlnets
@Zetanimo 8 months ago
@@latentvision Thanks a lot! Looking forward to more content!
@pyyhm 8 months ago
Hey matt3o, great stuff! I'm trying to replicate this with SDXL models but getting a blank output. Any ideas?
@thomasmiller7678 7 months ago
Hi, great stuff. Is there any way to do this kind of attention masking with LoRAs, so I can apply separate LoRAs to separate masks? There are a few things kicking around, but nothing seems to work all that well.
@latentvision 7 months ago
not really (it would probably be technically feasible, but not easy)
@thomasmiller7678 7 months ago
@@latentvision hmm, this is why I have been struggling. There are some nodes for it, but from the stuff I've found I haven't had much luck yet. Might you be able to help me out or do a little digging? Maybe you can pull off some more magic! 😄
@divye.ruhela 6 months ago
@@thomasmiller7678 But can't you just use the LoRAs in question in a separate workflow to generate the images you like, then bring them here, apply conditioning and combine?
@thomasmiller7678 6 months ago
Yes, that is possible, but it's still not a true influence like the LoRA would have if it could be implemented
@michail_777 8 months ago
Hi. Thanks for your work. I was wondering: is there any IPAdapter node that can be linked to AnimateDiff so that it works only on certain frames? That is, if I connect 2 input images, from frame 0 to 100 one image affects the generation, and from frame 101 the second input image affects the generation. But it would be quite nice if from frame 90 to 110 the images were blended.
@latentvision 8 months ago
yes I'm working on that
@michail_777 8 months ago
@@latentvision Thank you. I've added AnimateDiff and 2 CNs to your workflow. And it's working well.
@a.zanardi 8 months ago
Matteo, FlashFace got released, will you bring it in too?
@latentvision 8 months ago
I had a look at it, it's a weird cookie. A 10GB model that only works with SD1.5... I don't know...
@a.zanardi 8 months ago
@@latentvision 🤣🤣🤣🤣 "Weird cookie" was really fun! Thank you so much for answering!
@ai_gene 7 months ago
Why doesn't it work so well with the SDXL model? In my case, the result is one girl with different styles on the two sides of her head.
@latentvision 7 months ago
try to use bigger masks, try different checkpoints, use controlnets
@helloRick618 8 months ago
really cool
@jerrycurly 8 months ago
Is there a way to use controlnets in each region? I was having issues with that.
@latentvision 8 months ago
yes of course! just try it
@cafe.calories A month ago
Thank you for the amazing videos. Like others, I can say I hardly comment or like, but yours are amazingly explained, in depth, besides your work itself. I was trying to use the same workflow for an image of me and a friend, but the images have very low similarity to us. I used the same setup (the model is realisticVisionV60B1_v51HyperVAE) and tried others too, and played with the image weight between 0.7-0.8, even 1 or more, as well as other parameters, but with all attempts I can't get a close likeness of both of us, the way I can with LoRA fine-tuning. I know this is not a replacement for LoRA, but your advice would be much appreciated. Thanks again for the amazing videos
@burdenedbyhope 8 months ago
Is it possible to use IPAdapter and attention masks for character and item interaction? Like a man handing over an apple or carrying a bag
@latentvision 8 months ago
yes of course! why not?!
@burdenedbyhope 8 months ago
@@latentvision maybe my weights/start/end are not right; I always have trouble making a known character interact with another known character or a known item. "Known" in this case means using IPAdapter. Most of the examples I saw are 2 characters/subjects standing beside each other, not interacting, so I wondered.
@burdenedbyhope 8 months ago
@@latentvision I tested it in many cases, and the interaction works pretty well; a girl holding an apple, a girl holding a teddy bear... all work well. With 2 girls holding hands, bleeding happens from time to time, and negative prompts are not always applicable; can the regional conditioning accept a negative image?
@makristudio7358 8 months ago
Hi, which one is better, IPAdapter FaceID or InstantID?
@latentvision 8 months ago
they are different 😄 depends on the application
@crazyrobinhood 8 months ago
Very good... very good )
@fmfly2 8 months ago
My ComfyUI doesn't have 🔧 Mask From RGB/CMY/BW, only Mask From Color. Where do I find it?
@latentvision 8 months ago
you just need to upgrade the extension
@jiexu-j9w 8 months ago
Thanks. For styling someone, is there a benefit to combining IPAdapter v2 with InstantID, or is IPAdapter v2 FaceID enough? If combining them gets better results, is there any tutorial for that? Also, can a casual photo of a person taken with a normal camera get a fantasy result using the above method?
@latentvision 8 months ago
yes you can combine them to get better results, but don't expect huge improvements, just a tiny bit better :)
@jiexu-j9w 8 months ago
@@latentvision thanks. For point 2, can a normal photo of a person from a phone camera be transformed into a stylized masterpiece using ComfyUI? I can't find a YouTube video that talks about that
@latentvision 8 months ago
@@jiexu-j9w depends what you are trying to do. too vague as a question, sorry
@jiexu-j9w 8 months ago
@@latentvision here is my case: I want to put my child's newborn photo on a t-shirt. But this photo was taken a very long time ago and the quality is bad; the face especially got a bit blurry, but anyway, it's my memory. Can I use ComfyUI to transform this blurry photo into a picture that preserves the pose and face of my child, improves the quality, and has a t-shirt art style suitable for printing, while keeping my child's face and body pose recognizable? How can I do that with ComfyUI?
@latentvision 8 months ago
@@jiexu-j9w it is possible using a combination of techniques, but it's impossible to give you a walk-through in a YouTube comment... it highly depends on the condition of the original picture
@Ai-dl2ut 8 months ago
Awesome, sir :)
@Fernando-cj2el 8 months ago
Matteo, I updated everything and the nodes are still red, am I the only one? 😭
@MiraPloy 8 months ago
Sparkle?
@fulldivemedia 5 months ago
thanks, and I think you should put the word "pill" in the title :)
@BuildwithAI 8 months ago
could you combine this with a LoRA?
@latentvision 8 months ago
one LoRA per mask? no, you can't, there is only one model pipeline
@eduger 8 months ago
amazing