dude unsampler is sick! I love that you're showing how some of these other nodes work and not just ipadapter, thanks!
@knabbi 1 year ago
"...and now she is pissed". Never had a better introduction for another useful comfyui node😂 Appreciate your work and your entertaining videos. I like your effective and pragmatic way of explanation. Thanks.
@BoolitMagnet 1 year ago
Wow. Another great video, so much info and all clearly explained. Your mastery of ComfyUI is impressive.
@Deadgray 1 year ago
Hahaha, "and now she's pissed". I would never miss a lesson with such a teacher 🙂 Every time I watch something from you I have new ideas, thank you.
@WallyMahar 2 months ago
A WORKFLOW I DOWNLOADED THAT ACTUALLY WORKS! OMFG! You don't understand how rare that is. As a pro artist but a Python noob, you don't understand how long my list of workflows that just don't work is (at least a screen long), and I don't understand why. I have spent hours and hours and usually give up. Thank you.
@pedxing 1 year ago
absolutely love watching these work sessions. ❤🔥💡💪
@Paulo-ut1li 10 months ago
Saying this channel is the best ComfyUI resource on YT is an understatement. Thank you Matteo, please keep up the amazing work!
@ronnykhalil 1 year ago
The Unsampler is an insane option; I can only begin to imagine its potential. Thanks for shining a light on all these unsung heroes. The channel remains my favorite by a long shot.
@latent-broadcasting 11 months ago
The unsampler blew my mind! It's amazing all the possibilities available with ComfyUI. Thanks for the tutorial!
@ooiirraa 1 year ago
Dear Matteo, I became your absolute fan 🎉 Your videos and projects (ip-adapter) are generous and abundant. Everything you make is valuable yet understandable at the same time. Thank you very much, please keep creating ❤
@chadhamlet 11 months ago
Wow! Using the noise in this fashion really makes it so much nicer than image to image. I've done some really great enhancements of some old 1.5 generations that kept the look of the old but dramatically increased the details with the newer SDXL models. I've never had an upscale do something this nice and not change the image. Can't wait to see what you've got planned next. Your videos are amazing! I'd love to see you tackle a workflow that is geared towards reusing a character, face, clothes and in multiple poses!
@TheDocPixel 1 year ago
You have read my mind! I've been searching for more information, usage videos, and tutorials for all of these nodes that are bundled in packages that other YTers suggest installing but use only one or two of. Please continue with these easy, to-the-point videos for advanced users. WE NEED THEM!
@1E9L9I7J1A6 1 month ago
Not much to say other than thank you very much. Great videos, I'm about to explore your whole channel, you definitely just won a new regular viewer.
@johnmcaleer6917 1 year ago
'and now she's pissed' cracked me up.....Your vids continue to impress and your knowledge of such a new subject is amazing....Love your explanations and subject choices..Wonderful stuff again..
@antiquechrono 1 year ago
Short, to the point, and absolutely jam packed with information. Great video.
@nawrasryhan 11 months ago
The best comfyUI tutorials hands down, the amount of info, small tips, real experience that you show in these videos is unmatched and highly appreciated. Keep it up and of course Thanks for sharing!
@ProzacgodAI 11 months ago
I was playing with the unsampler, and went (total 20 steps) - unsampler(5 steps) -> advanced sampler (5 steps -> 10 steps) -> advanced sampler +add noise (10->20) and it produces really good variations. I can even supply it with a new prompt at the last step and it's really really good at integrating it and keeping consistency
@latentvision 11 months ago
I guess this is a situation like: give a man a fish and you feed him for a day. Teach him how to fish and you feed him for a lifetime 😄
@ProzacgodAI 11 months ago
@@latentvision Give the man the seed for the fish image and he'll have variations for a lifetime...
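The step-split ProzacgodAI describes above can be sketched roughly like this. The function names here are stand-ins, not ComfyUI's actual API; only the step bookkeeping (which pass covers which slice of the 20-step schedule, and where fresh noise is injected) matters:

```python
TOTAL_STEPS = 20

def unsample(latent, end_step):
    """Stand-in for ComfyUI_Noise's Unsampler: walks the finished image
    back toward noise for the first `end_step` steps of the schedule."""
    return {"latent": latent, "noised_to": end_step}

def ksampler_advanced(state, start_step, end_step, add_noise, prompt):
    """Stand-in for KSampler (Advanced): denoises over [start_step, end_step)."""
    return {"latent": state["latent"], "denoised_to": end_step,
            "noise_added": add_noise, "prompt": prompt}

image = "original image latent"
inverted = unsample(image, end_step=5)                        # image -> partial noise
mid = ksampler_advanced(inverted, 5, 10, add_noise=False,
                        prompt="original prompt")             # first refinement pass
out = ksampler_advanced(mid, 10, 20, add_noise=True,
                        prompt="new prompt for final pass")   # fresh noise + new prompt
```

Because the unsampling stops early (step 5 of 20), most of the composition survives, while the added noise and the new prompt in the last pass drive the variation.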
@michaelbayes802 1 year ago
Wow! You could have made 10 videos with this content. Respect
@64kernel 1 year ago
Applying this in my workflow immediately. Very useful. Thanks!
@dck7048 1 year ago
These videos are so consistently useful, thanks for taking the time! Even on subjects that you'd think are "solved" like image variations, the fine control can be a real asset when you're looking to generate something specific.
@eagleeyedjoe0075 1 year ago
These videos are fantastic, I'm learning many new techniques and you've introduced me to loads of new nodes. Can't wait to see the new IPAdapter you mentioned.
@Renzsu 1 year ago
Love your videos man, they're a joy to watch. And I like how you keep your examples relatively simple and straight to the point, no unnecessary fluff :)
@crow-mag4827 11 months ago
Found you after the release of ipadapter, your skills in comfy are amazing. Watching all your videos.
@uk3dcom 1 year ago
So many useful nuggets of information. Taking control of the generative image is fascinating. Thank you. ❤
@BuckleyandAugustin 2 months ago
I agree with everyone here: your content is so valuable. Thank you for all you do, Matteo!
@TheJAM_Sr 1 year ago
Wow, great demonstration! I have been playing around with combining noises for a bit now and I still learned a lot! I'm going to take what I've learned here and play around with all the different types of noise.
@world4ai 1 year ago
I have to say that so far I found all of your videos really useful. I would like some AnimateDiff tutorials.
@Enricii 11 months ago
CRAZY! My favourite one was the unsampler method. I think I need to play with it very soon! Thanks again for everything you do!
@terrorcuda1832 11 months ago
That was a fantastic video. I want to leave work and go home and experiment.
@svenhinrichs4072 11 months ago
Thanks a lot. Your tutorials are great! Perfectly explained, going into details that are really hard to find out without the technical insights. Keep up the great work!
@vizsumit 11 months ago
You are making me fall in love with ComfyUI
@latentvision 11 months ago
that was the intent ^___^
@moviecartoonworld4459 11 months ago
I am always grateful to hear amazing and moving lectures.
@MikevomMars 6 months ago
Just adding a number to the prompt to get a variation is true ZEN - simple but effective 😊
@WhySoBroke 1 year ago
You have my full attention Maestro Latente!!! Please create a discord community!! ❤️🇲🇽❤️
@morphidevtalk 9 months ago
Mind-blowing! Thanks for the workflow! I'll try it for myself.
@tiporight 10 months ago
Excellent. Thank you for sharing this type of tutorial.
@Bartskol 1 year ago
This video is gold.
@LucasSavelli-e3w 1 year ago
Matteo, YOU are the god! Thank you so much for sharing all your knowledge with us!
@pedxing 1 year ago
REALLY looking forward to seeing your process for the logo animation as well!
@hakandurgut 1 year ago
In the last 16 mins, I have learned more than I had in the last months... great video, great knowledge... ate you an AI scientist?
@latentvision 1 year ago
LOL yeah and it was delicious 😆🍰
@hakandurgut 1 year ago
hahaha *are
@tonikunec 11 months ago
That's pretty amazing! I am kinda new to all this AI thing and still learning a lot, but this video really opened my eyes on how to get started and make even more amazing stuff. Keep those videos coming as it seems you really know your stuff! Subscribed!
@Ulayo 1 year ago
This video is amazing! I learned so much today! 👍
@HisWorkman 1 year ago
As always this was a fantastic tutorial. Thank you!
@abdelkaioumbouaicha 11 months ago
📝 Summary of Key Points:
- The speaker discusses various techniques for creating small variations on an image in an SDXL workflow.
- Adding low-weight tokens or random numbers to the prompt slightly changes the image.
- The concept of "horror negatives" is introduced: negative prompts with words like "horror" or "zombie" are used to achieve a clean result.
- Conditioning concat is explained as a way to change the style or details of an image while keeping the same composition; conditioning combine is discussed for achieving more mutation in the image.
- The IPAdapter is explored as a way to guide the composition of the image, using different reference images to achieve different styles.
- The Unsampler node from the ComfyUI_Noise extension is shown as a technique to modify an existing image by removing noise until it reaches the original noise at the first step of generation.
- Creating a batch of images with small differences is demonstrated using fixed base noise and the SLerp latent node. The strength of the noise can be adjusted, and a new batch of similar images can be generated by changing the seed in the noise generator.
💡 Additional Insights and Observations:
- 💬 "There is no one-size-fits-all solution": different techniques may work better for different images and prompts.
- 📊 No specific data or statistics were mentioned in the video.
- 🌐 The video provides practical examples and demonstrations to support the techniques discussed.
📣 Concluding Remarks:
The video provides a comprehensive overview of techniques for creating image variations in an SDXL workflow. From simple tricks like adding tokens or random numbers to more advanced techniques like conditioning concat and the IPAdapter, the speaker demonstrates practical examples and offers valuable insights for achieving the desired image variations.
Generated using Talkbud (Browser Extension)
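The "slurp latent" mentioned in the summary is presumably the SLerp Latents node from ComfyUI_Noise. Spherical interpolation between a fixed base noise and a per-image variation noise can be sketched like this (an illustrative NumPy version, not the node's actual code):

```python
import numpy as np

def slerp(a, b, t):
    """Spherical interpolation between two noise tensors, treated as flat vectors."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_n = a_flat / np.linalg.norm(a_flat)
    b_n = b_flat / np.linalg.norm(b_flat)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))  # angle between them
    if np.isclose(omega, 0.0):
        return a  # vectors nearly parallel: nothing to interpolate
    out = (np.sin((1 - t) * omega) * a_flat + np.sin(t * omega) * b_flat) / np.sin(omega)
    return out.reshape(a.shape)

rng = np.random.default_rng(0)
base = rng.standard_normal((4, 64, 64))       # fixed base noise (latent-sized)
variation = rng.standard_normal((4, 64, 64))  # variation noise from a new seed
subtle = slerp(base, variation, 0.15)         # small t -> small change to the image
```

Keeping `base` fixed and re-rolling only `variation` (or nudging `t`) is what produces a batch of near-identical images, because the denoiser sees almost the same starting noise each time.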
@MicheleBrugiolo 1 year ago
Thank you, thank you, thank you!
@koalanation 1 year ago
This is a great essentials video! Thanks Matteo. Not sure if everyone thinks inpainting is lame, though 😂😂😂
@MannyGonzalez 8 months ago
Absolute master class. Thanks for these tutorials.
@ysy69 1 year ago
Incredible. Thank you.
@ChandreshJoshi 1 year ago
Your approach is very creative and very easy to understand. Thanks for the video!
@human-error 8 months ago
Amazing as usual, Matteo. Thank you!
@roktecha 1 year ago
These videos are excellent! Thank you
@steveyy3567 6 months ago
mind blowing, great job!
@JoeSim8s 7 months ago
Pure gold! Thank you!
@Homopolitan_ai 9 months ago
Total ❤
@christianblinde 1 year ago
Great examples with good explanations
1 year ago
You are amazing.. This is the best video I've ever seen...
@bgtubber 5 months ago
Fascinating! Is this something like the Noise Inversion feature in A1111?
@paulofalca0 1 year ago
Great stuff! Thanks!
@tomolson6169 10 months ago
I noticed you never re-adjusted the width/height values on the CLIPTextEncode nodes after you switched to the Unsampler demo, even though you started working with a different latent size. Was that just an oversight? It didn't seem to make a difference, your images still looked GREAT! I was just curious. I ended up using a node template for SDXL with primitives set up to quickly adjust the values to 4x the latent size as you suggested. Thank you so much for all your teachings! You've helped me GREATLY!
@latentvision 10 months ago
yeah I noticed after I posted the video. the size conditioning doesn't make much difference, it's more of a refinement, so it's not crucial, but yeah in this case it's an oversight
@HideousSlots 1 year ago
Awesome!
@fgmanfredini 1 year ago
Very nice, really! Very useful, thank you. If I can give you a suggestion, it would be for a video about dynamic composition using automatic masks. Example: generate a subject, cut it out with automatic masking (SAM?), paste it over a generated background, then do a second pass to fix the composition, and then generate variations of the background for the same subject, or vice versa.
@TimVerweij 1 year ago
So much useful information! Thanks!
@j_shelby_damnwird 11 months ago
This and Scott's are the coolest AI art channels. Kudos! Are these workflows available somewhere for reverse engineering? I tried to follow along but it's hard to keep track of everything that's going on.
@latentvision 11 months ago
check the video description, I usually put a few in there
@j_shelby_damnwird 11 months ago
@@latentvision thanks man
@bwheldale 1 year ago
I'm slowly absorbing these valuable insights, my favourite Comfy channel. At the beginning of 'light conditioning' I wasn't getting subtle changes, they were drastic, until I tried other seeds. Some worked for subtle changes while some did not. Unless I'm mistaken, this light conditioning may be seed dependent. Just wondering if some of the seeds you tried weren't "subtle friendly"?
@latentvision 1 year ago
sometimes it's hard to see them but there's always a difference. Try to use the "enhance difference" node from the Comfy_Essentials extension. Yes, some seeds will show more difference than others, but it's completely random.
@bwheldale 1 year ago
My apologies, I was just about to edit my post to say my wiring to each text box was not from both "text_g and text_l". It's now all working fine and looks exactly like yours, with the subtle results achieved. I'll also play with the extension as suggested, thank you for the tips.
@sincdraws 10 months ago
great stuff as always
@thelookerful 6 months ago
These tutorials are great!!
@dflfd 9 months ago
thank you, this is really great!
@P4TCH5S 1 year ago
so cool! thank you
@ai-roman-ai 11 months ago
I love your videos, they are the best! I want to generate keyframes and then interpolate them to create a realistic video, without any time constraints. Can you advise me on how I can apply your approaches to create consistent frames like the ones you show in this and other videos? For example: a dog plays with a ball in the garden. The dog must run and be in a different position in each frame, and the camera does not move. How do I specify the position of the dog and the ball in each keyframe?
@latentvision 11 months ago
what you're asking is pretty complicated, it can't really be explained in a YT comment
@aliyilmaz852 8 months ago
@@latentvision It would be good if you could teach us in another video. BTW you are amazing Matteo!
@opposegravity 11 months ago
Can you go over all the Comfy nodes? I've learned more watching your videos than from any other resource! Thanks
@latentvision 11 months ago
I started doing that, but it's a bit boring...
@opposegravity 11 months ago
@@latentvision Maybe boring to make, but not to watch. I'm enjoying the content!
@___x__x_r___xa__x_____f______ 1 year ago
Matteo, I wish you would explore latent upscaling and show us some useful possibilities for getting high-frequency details most effectively through step-iterative upscaling and through other more esoteric modes such as block weights, etc. And how to best leverage specialised upscale models such as SkinDiff, etc.
@latentvision 1 year ago
yeah working with noise to increase details is in the pipeline :)
@___x__x_r___xa__x_____f______ 1 year ago
@@latentvision right, what you just showed us! That is a great idea. I will try it now. Love this community!
@PradeepKumar6 9 months ago
Great video. I have a question: what are text_g and text_l in CLIP Text Encode? Thanks
@impactframes 1 year ago
Another excellent tutorial. ❤
@chornsokun 4 months ago
Thank you Matteo for the great content. Could you advise which node/extension is used in the clip to convert noise into an input?
@latentvision 4 months ago
you mean the unsampler? it's comfyui_noise
@chornsokun 4 months ago
@@latentvision the step at kzbin.info/www/bejne/e6eXZauhl9OVm7Msi=cWiy-uDpeQelusMM&t=58 and at 1:00, the noise_seed node, I can't find it in base Comfy
@latentvision 4 months ago
@@chornsokun that's just a primitive. convert the seed to an input and you can connect a primitive to it
@blisterfingers8169 11 months ago
Would conditioning concat be the same as something like Automatic1111's blend function or is it something different? Love these videos, thanks! Also: "a hint of Klimt" had me chuckling.
@latentvision 11 months ago
no, blend is another option. The node is called conditioning average.
@danielmatejka1976 1 year ago
thank you ❤
@kakochka1 11 months ago
@latentvision Could you explain how you created the start_at_step primitive (to control both the Unsampler and KSampler inputs) with just one click and the correct naming? Is this some custom node magic? And as an idea for future videos: could you share how you debug the contents of different nodes (MaskPreview and PreviewImage aside) with int/bool/etc. values in them?
@latentvision 11 months ago
double click on the input little dot 😄
@petruschka222 1 year ago
Thank You. Great Job.
@pk.9436 1 year ago
great work 👏
@salvatorecancilla1605 4 months ago
you're the best
@kdesign1579 9 months ago
awesome!
@alexgilseg 10 months ago
This is really cool, however I have a question. In the video you set "end at step" to 0 and it keeps the structure of the loaded image. When I set it to 0 it uses nothing of my loaded image and just goes by the prompt. And that's what I thought the whole point was: to go backwards in an image and then load from there, so to say. By setting it to zero, don't you tell the workflow to ignore the loaded image?
@iozsoo 10 months ago
Why doesn't my SDXL node have green pins on it? Also, my positive and negative prompt inputs are conditioning, not string :(
@GForcenuwan 11 months ago
wow💡
@generalawareness101 11 months ago
For whatever reason, if I put the int to 0 I get nothing, and the closer I get to the sample steps (30 in this example), the more the image comes in.
@AntonioRomero-x1e 9 months ago
I've watched this video many times trying to use one of these methods to fake an "unstable" animation. AnimateDiff evolved so quickly that it seems impossible now to make each frame in a different style... Can you make a video on how to make a video with AnimateDiff where IPAdapter keeps the identity of the main subject but the rest of the composition changes style in each frame? Keep in mind that scheduled prompts are not a solution here; it would be very difficult to write a prompt for each frame.
@cyril1111 1 year ago
Thanks for the explanations! Super helpful! Now, I'm a bit confused about the width and height of your TextEncodeSDXL, it is huge! How come it goes so fast in your workflow, when for me it takes more than 5 min with a 4090?
@81sw0le 1 year ago
I have a unique way of creating characters in Midjourney (very detailed, grotesque cartoon style). I'd like to use one as an IPAdapter reference and pose it, but I never get any good results. The goal is to be able to create a character sheet so I can animate it. Have you seen a way to do something like this?
@latentvision 1 year ago
I'd need to see the pictures. Technically it's possible, you probably need a checkpoint or a lora with a close style, and it depends on the kind of result and fidelity you are after.
@81sw0le 1 year ago
@@latentvision Do you have a discord so I can send you the images?
@bobgalka 4 months ago
I just have to laugh... I wanted to use some of the ideas from this workflow, started a new flow, and almost immediately got stuck on the pos and neg nodes... It took me a while to figure out that the nodes are called PrimitiveNode... so I added that, but it looked nothing like yours. I tried different things, then I thought to just copy-paste the node to my new one... nope, no text area to type in... How did you create those PrimitiveNode nodes to have a string output and a multiline text area? BTW I am totally enjoying myself watching and learning from your videos. ;O)
@luiswebdev8292 11 months ago
Can you explain in more detail why you're using CLIPTextEncodeSDXL and not just CLIPTextEncode? Is that important to this workflow?
@latentvision 11 months ago
no, it's not essential. As I mentioned at the very beginning CLIPTextEncodeSDXL generally gives slightly sharper details
@luiswebdev8292 11 months ago
@@latentvision that only works with SDXL models, right? Is there an alternative for other models (e.g. Dreamshaper), or for those would you simply use CLIPTextEncode?
@dan323609 1 year ago
What is sigma in Comfy (or SD)? What does it mean, what does it do?
@latentvision 1 year ago
roughly it is the current progress in the generation. you can compare it to a sigma start/end to know where you are in the image generation
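For anyone who wants a more concrete picture: a sigma schedule is just a decreasing list of noise levels, and the current sigma tells you where you are in it. A sketch of the Karras schedule (the formula from the Karras et al. paper with common SD-style defaults; ComfyUI's own implementation may differ in details):

```python
import numpy as np

def karras_sigmas(n, sigma_min=0.0292, sigma_max=14.6146, rho=7.0):
    """Noise levels for n sampling steps, from most noisy to nearly clean."""
    ramp = np.linspace(0, 1, n)
    inv_rho_max = sigma_max ** (1 / rho)
    inv_rho_min = sigma_min ** (1 / rho)
    sigmas = (inv_rho_max + ramp * (inv_rho_min - inv_rho_max)) ** rho
    return np.append(sigmas, 0.0)  # final sigma of 0 = fully denoised

s = karras_sigmas(20)
# s[0] is sigma_max (pure noise), s[-1] is 0; comparing the current sigma
# against this schedule tells you how far along the generation is,
# which is exactly what sigma start/end checks do.
```

This is why "end at step 5 of 20" and "start at sigma X" describe the same position in two different units.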
@dan323609 1 year ago
@@latentvision oh i get it, thx
@kikoking5009 7 months ago
The Unsampler node is not working; it shows "import failed" after downloading.
@latentvision 7 months ago
comfy made a breaking upgrade, the nodes need to be updated. I believe the unsampler should be fine now
@EMSSpammer 12 days ago
Hello, can anyone help me with this error? "Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 16, 150, 150] to have 4 channels, but got 16 channels instead". This happens in the Unsampler part of the workflow (09:30).
@HalZemmai 3 months ago
you are god. I started worshipping you.
@latentvision 3 months ago
I'm a demi-god at best
@swannschilling474 10 months ago
zombie is a very good negative to remove unwanted artifacts in the face...
@gamersgabangest3179 9 months ago
Hi Matteo, what GPU do you use? Thanks
@bgtubber 4 months ago
I tried this with a few images. I'm getting back similar images, but not the same as the originals. What am I doing wrong? Mostly the background is different, while the subject stays more or less the same (some little differences in attire).
@latentvision 4 months ago
hard to say, it was an "old" workflow so it might be just a matter of updated checkpoints or different version of some library
@bgtubber 4 months ago
@@latentvision Ah, I see. No worries. I'll keep trying. Hopefully I'll figure it out. :)
@Kikoking-y9b 7 months ago
Hello, I have 2 issues. Repeat Latent Batch gives exactly 2 identical images. And working with Get Sigma shows this error: "Error occurred when executing BNK_GetSigma: 'SDXL' object has no attribute 'get_model_object'"
@latentvision 7 months ago
you probably just need to upgrade comfy
@Kikoking-y9b 7 months ago
@@latentvision unfortunately no, the error is still there. Also with the KSampler Variations with noise injection. I tried with the Juggernaut SDXL checkpoint and the sd_xl_base 1.0 checkpoint. Same issue with 'get_model_object'.
@xieporter 7 months ago
I have the same problem
@Kikoking-y9b 7 months ago
@@latentvision would it help to delete Comfy entirely and install it again, so that maybe the error goes away? Because a lot of updates didn't help at all. It's crazy.
@whatwherethere 1 year ago
How are you getting consistently good images? The moment I change anything in my prompts the image goes crazy. This is nowhere close to my experiences.