I was never the artist type, but I was always a nerd. I love creating, but I don't like drawing, and this new form of art is incredible.
@OlivioSarikas11 ай бұрын
#### Links from the Video ####
JOIN the Contest: contest.openart.ai/
Download the WORKFLOWS: drive.google.com/file/d/1EhEOpQmxStEChqzg3Qfp_phyLyQK43Bx/view?usp=sharing
Matt3o Channel: www.youtube.com/@latentvision
Deliberate Models: huggingface.co/XpucT/Deliberate/tree/main
IP Adapter and Encoder: github.com/cubiq/ComfyUI_IPAdapter_plus
MM_SD Models: github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved
control_v11f1e_sd15_tile.pth: huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11f1e_sd15_tile.pth
@WhySoBroke11 ай бұрын
Matteo is the real deal... every video tutorial is pure gold!! Not like other idea-stealing YT channels.
@abdelhakkhalil768411 ай бұрын
Olivio, give Automatic1111 some love. Many of us are not ready to switch to Comfy, as A1111 is easier to use. I use the nodes workflow to create 3D textures, and I know that it's a powerful tool, but sometimes I just want to load a model and work on it without having to fiddle around with hundreds of nodes and parameters.
@ScottLahteine11 ай бұрын
One of Olivio's videos shows how to add a node to ComfyUI that makes it interoperate with A1111 installed on the same machine. It helps to bridge the gap. InvokeAI also has a nice node interface, but as far as I know it still doesn't connect up with ComfyUI or A1111 just yet. While nodes are fun for building a workflow like the one described in this video, we'll keep getting better apps and user interfaces that make the process more fluid, and that's what I look forward to most.
@garrulousskeptic661611 ай бұрын
It amuses me that some like to parse this as some kind of AI art arms race. It is ever evolving, but no one knows to what.😊
@Marian8711 ай бұрын
@@LTE18 Nodes have been a thing for a while in various apps, but I have never thought that being something akin to a glorified telephone-exchange operator was the pinnacle of art creation. While AI is amazing, I'm sure most people won't favor nodes as the input.
@lennoyl11 ай бұрын
I agree with you, but the problem is not the love, it's the evolution. A1111 evolves too slowly: ComfyUI is almost always the first to make new things work, despite its horrible interface. (It's not complicated to understand, but it's annoying to use: you have to prepare a workflow before creating, and that's not how I work. I need to improvise, and I can't with ComfyUI.) So it's normal to make videos about ComfyUI when your videos are about news in AI.
@abdelhakkhalil768411 ай бұрын
@@lennoyl Well, Comfy is pushed hard because it's now owned by StabilityAI. A1111 is still developed by the community. StabilityAI can also help the most popular open-source platform for its models.
@sazarod11 ай бұрын
Keep up the great work, Olivio.
@OlivioSarikas11 ай бұрын
Thank you for your support. Really appreciate it :)
@eucharistenjoyer11 ай бұрын
I thought he was the guy behind the nodes, not the technology. His videos are amazing and well explained, and now I feel even more respect for the guy.
@musicandhappinessbyjo79511 ай бұрын
Really love your ComfyUI videos. Please do more of them; ComfyUI seems to have a lot of haters in the community, and they don't realize how much potential this thing has.
@joeterzio717511 ай бұрын
I don't hate ComfyUI, but I'm never going to use it. It's like trying to read a wiring diagram, and I have no desire to do that. I see a ComfyUI video and I just don't watch.
@kkryptokayden465311 ай бұрын
I didn't like it before but I got past that and started getting used to it. Now I have 3 workflows and use it constantly.
@risinghigherthen11 ай бұрын
these are perfect thank you for the in depth analysis! bravo Olivio!
@vincedodge32111 ай бұрын
Just a few months ago, all of these were still impossible to do. The updates are really fast and exciting.
@miguelgargallo11 ай бұрын
Keep doing your job, you are the best. This is the least I can contribute to your excellent education for now.
@brandonopolis11 ай бұрын
That image blending is awesome! I need to play around with this... I want to blend a wintery scene into my cousin's air conditioning company logo!
@jtjames7911 ай бұрын
I watched the original video, and I'm definitely going to need an AI to do that for me. But that should be around sooner rather than later.
@Zerod-rn3ye10 ай бұрын
If you don't mind, how did you get the image at 9:30, at the top left, with the really cool Final Fantasy-styled concept art of a female? I see you loaded it, but if you created it and can provide the prompt/model to recreate it (and similar neat concepts in that style), or a related resource, that would be appreciated.
@Inner-Reflections-AI11 ай бұрын
Nice Summary! Such an amazing node to use with animations.
@Shingo_AI_Art11 ай бұрын
Iterative Latent Upscaler gives the best results from my tests
@tristanwalling138811 ай бұрын
Really great video, very helpful tips and workflows, thank you!
@bastienfrancois918011 ай бұрын
This is getting really interesting, a bit like VST plugins or synths for audio, or filters or plugins in photoshop or premiere, only much more powerful!
@AlterMax24-YouTube11 ай бұрын
I don't even know how you manage to stay calm. These technologies drive me crazy! Every day there's something new, something that doesn't work and that we have to fix. And you're always at peace! I'd like to pay homage to your patience. Thank you for that! 😅
@wuetsby544810 ай бұрын
Awesome! You got me with the logo animation, that was really great stuff.
@zoemorn9 ай бұрын
The workflow for putting two figures into a central image is a lot of fun. Sometimes, though, I have found that one of the input images gets ignored entirely (so only one of the two figures is used), and I can't figure out what causes that. Is it just seed randomness? I did check the IP Adapter to ensure I didn't accidentally disable it there. I figure it might have to do with the CLIP Vision crop setting, but I haven't figured it out. Interestingly, if I switch sides of the RGB mask (so the missing figure is on the right side if it was missing from the left), that seems to work. The input image figure was centered in the image, though, so I would assume clip vision crop = center is correct?
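The per-channel mask behaviour can be reproduced outside ComfyUI for debugging. A minimal sketch in plain Pillow/NumPy (my own illustration, not ComfyUI's actual code; the threshold value is an assumption) that splits a rough RGB mask into three region masks and flags empty channels, one likely cause of a figure being ignored:

```python
from PIL import Image
import numpy as np

def split_rgb_mask(path_or_img, threshold=128):
    """Split a rough RGB mask into three binary masks, one per channel.

    Any pixel whose red value exceeds the threshold lands in the red mask,
    and likewise for green and blue. A figure silently drops out when its
    channel never crosses the threshold, which mimics the "one input image
    gets ignored" symptom.
    """
    img = Image.open(path_or_img) if isinstance(path_or_img, str) else path_or_img
    arr = np.asarray(img.convert("RGB"))
    masks = {c: (arr[..., i] > threshold) for i, c in enumerate("RGB")}
    # Collect channels with no pixels above the threshold.
    empty = [c for c, m in masks.items() if not m.any()]
    return masks, empty

# Tiny self-test: left half pure red, right half pure blue, no green at all.
demo = np.zeros((4, 8, 3), dtype=np.uint8)
demo[:, :4, 0] = 255   # red region
demo[:, 4:, 2] = 255   # blue region
masks, empty = split_rgb_mask(Image.fromarray(demo))
print(empty)  # the green channel is empty, so its figure would be ignored
```

If the channel for the missing figure comes back empty at this stage, the problem is the mask colors rather than the seed.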
@draken537911 ай бұрын
He isn't the creator of IP-Adapter; he created the custom node for ComfyUI that uses IP-Adapter.
@AB-wf8ek11 ай бұрын
Correct, he's the developer of the IPAdapter Plus custom node, still a total MVP though!
@Bikini_Beats11 ай бұрын
Another great video. Thanks
@clumsymoe11 ай бұрын
Hey Olivio, I gotta say, your channel is always my go-to for everything about SD. Really appreciate you keeping everyone in the loop with all the new stuff in AI generative art. Thanks a bunch and keep it up, friend!
@alreadythunkit11 ай бұрын
Nice one Olivio.
@wisdombox411 ай бұрын
Hi Olivio, can you do a video about how to add a prompt styler to any workflow? I looked on YouTube and no one has made a proper tutorial. Thank you!
@aindmix11 ай бұрын
What will be interesting is text2video outputs from something like Pika 1.0 put through a ComfyUI workflow to overlay styles and upscale.
@kargulo11 ай бұрын
Hi, where can I get the IP Adapter encoder 1-5.safetensors for Load CLIP Vision? I can not find it.
@NeoIntelGore11 ай бұрын
Trying to get started with ComfyUI. I can't get the blinking example to work. I tried it with my own two pictures, but I have a feeling it just ignores my reference pictures. I tried it with the example pictures, and it just skips half the nodes. After that, when I refresh and queue the prompt again, it just runs the last node, ignoring the rest. What am I doing wrong?
@KingQuantShi11 ай бұрын
Probably rename the video to ComfyUI.
@Bikini_Beats11 ай бұрын
Is the future
@jevinlownardo878411 ай бұрын
@@Bikini_Beats what a joke
@Senti_Q10 ай бұрын
@@jevinlownardo8784 any suggestions for a competitive alternative?
@J3R3MI69 ай бұрын
@@jevinlownardo8784 comfy gang gang
@dkamhaji11 ай бұрын
Yes, I saw his video, it's incredible stuff. Question for you: can you use the canvas node you introduced me to to make these rudimentary RGB masks? I'm trying it now. You can use its mask, but I don't see how to separate the RGB channels the way the Load Image node does.
@dkamhaji11 ай бұрын
@Olivio, upscale question: in the first workflow from Matteo, the upscale comes from the first KSampler into the upscale path and makes its way into the second KSampler's latent input. The second KSampler's model input comes from the path of the original model and IP Adapters, and its conditioning from the original prompts. My question: as the image is upscaled in this scenario, is it taking any information from the first KSampler's output? What exactly is being sent from the first KSampler to the second via the latent? Is it image information or just the dimensions of the image? I hope this makes sense. I wish someone would go deep into the path of the data and image pixels as they go from the first-gen KSampler up the upscale path.
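On what travels over the latent link: an SD 1.5 latent is a full tensor of shape [batch, 4, H/8, W/8], i.e. compressed image content, not just dimensions. A rough NumPy sketch (nearest-neighbour only; ComfyUI's Latent Upscale node offers several interpolation modes, and this is an illustration, not its actual code):

```python
import numpy as np

# A Stable Diffusion 1.5 latent is a real tensor of shape [batch, 4, H/8, W/8]:
# it carries the (compressed) image content itself, not just its dimensions.
# A latent-upscale step therefore hands the second KSampler a resized copy of
# the first sampler's output, which the second pass then refines.
def upscale_latent_nearest(latent, factor=2):
    """Nearest-neighbour upscale of a [B, C, H, W] latent (illustrative only)."""
    return latent.repeat(factor, axis=2).repeat(factor, axis=3)

first_sampler_output = np.random.default_rng(0).normal(size=(1, 4, 64, 64))
upscaled = upscale_latent_nearest(first_sampler_output)

print(upscaled.shape)  # (1, 4, 128, 128), i.e. a 512px latent grown toward 1024px
# The content survives the resize: every 2x2 block equals its source value,
# so the second KSampler really does receive image information, not just a size.
```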
@tristanwalling138811 ай бұрын
Thanks!
@kedixia11 ай бұрын
Thanks for the video. Would you mind making a tutorial for an SDXL AnimateDiff workflow? I just couldn't get it to work. My output is all black unless I adjust the size to 256*256.
@veritas701011 ай бұрын
Props! I also recently subbed to them, they're a wizard.
@keylanoslokj180610 ай бұрын
If you use Colab notebooks, how do you achieve a similar level of control to having an elaborate GUI?
@NotThatOlivia11 ай бұрын
now you are frying my brain - but I love it!
@johndebattista-q3e11 ай бұрын
Yes, you can download them now in the extension. I found them today, but you need to do an update.
@Steve.Jobless11 ай бұрын
Olivio, is it possible to create img2img workflows using SDXL Turbo in ComfyUI?
@ufukzayim668911 ай бұрын
Hi Olivio, I have a problem with loading the IP Adapter. The Load IPAdapter Model node shows "ipadapter_file: null". I've made a folder in the models folder called ipadapter, changed the model.bin file name to ip-adapter-plus_sd15.safetensors, and updated Comfy. But it says: Prompt outputs failed validation. IPAdapterLoader: Value not in list: ipadapter_file: 'None' not in []. Could you please guide me?
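For reference, that "Value not in list ... not in []" validation error means the dropdown was populated from an empty folder scan. A hedged sketch of what such a loader-side scan might look like (the exact folder names and accepted extensions vary by ComfyUI_IPAdapter_plus version, so treat them as assumptions):

```python
from pathlib import Path
import tempfile

# The "'None' not in []" validation error means the dropdown was built from an
# empty list: the loader found zero files in the folder it scans. Common causes:
# the folder name or location is wrong, or the file extension isn't accepted.
# (Exact paths and extensions here are assumptions for illustration.)
def find_ipadapter_models(comfy_root):
    model_dir = Path(comfy_root) / "models" / "ipadapter"
    if not model_dir.is_dir():
        return []  # wrong folder -> empty dropdown -> validation failure
    exts = {".safetensors", ".bin", ".ckpt"}
    return sorted(p.name for p in model_dir.iterdir() if p.suffix in exts)

# Demo against a throwaway directory tree.
root = Path(tempfile.mkdtemp())
(root / "models" / "ipadapter").mkdir(parents=True)
(root / "models" / "ipadapter" / "ip-adapter-plus_sd15.safetensors").touch()
(root / "models" / "ipadapter" / "notes.txt").touch()  # ignored: wrong suffix
print(find_ipadapter_models(root))  # ['ip-adapter-plus_sd15.safetensors']
```

Checking that the folder actually contains a file with an accepted extension, at the exact path the node scans, is usually the fix.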
@cyril111111 ай бұрын
This is the IP-Adapter node creator :) (IP-Adapter has been created by lllyasviel, the creator of ControlNet & Fooocus.)
@alan_yong11 ай бұрын
🎯 Key Takeaways for quick navigation:

00:00 🎨 *Introduction to IP Adapter Workflows*
- Overview of workflows by Mato using the IP adapter.
- Open art contest details with a prize pool of over $13,000.
- Invitation to explore and enter multiple workflows for free.

01:09 🖼️ *IP Adapter in Multi-Style Image Composition*
- IP adapter usage for combining three different art styles in an image.
- Importance of using a rough mask with specific colors.
- Explanation of IP adapter model inputs and locations for model files.

03:50 💻 *Setting up IP Adapter Models and Files*
- Detailed guide on downloading and organizing IP adapter models.
- Different versions (normal, plus, plus face, full face) and their use cases.
- Instructions for saving models in the appropriate folders.

06:14 🔄 *Multi-Image Composition Workflow*
- Demonstration of combining multiple images using the IP adapter iteratively.
- Importance of using the correct mask channel for each image.
- Upscaling process for achieving high-resolution and detailed results.

07:48 🎭 *Conditioning Masks for Image Manipulation*
- Utilizing Conditioning Set Mask nodes to apply prompts to specific image regions.
- Example of changing hair color using conditioning on different mask parts.
- Highlighting the flexibility of conditioning for various image modifications.

10:19 🎞️ *Creating a Blinking Animation*
- Generating a blinking animation using a clever image rendering technique.
- Importance of using specific checkpoint models and version 1.5 of the AnimateDiff loader.
- Tips for updating extensions and ensuring smooth workflow execution.

13:28 🌐 *Blending Between Two Images*
- Creating an animation blending between two images using masks.
- Distinction between 16-frame and 32-frame workflows, considering CPU and GPU usage.
- Special attention to ControlNet models and their versions for different workflows.

Made with HARPA AI
@Clupea10111 ай бұрын
Great Guide
@LuiNogueira10 ай бұрын
I couldn't manage to make the conditioning through prompts work in the second example with SDXL. Is this possible?
@NicolasLeroy-g8h11 ай бұрын
My question is probably silly, but I will ask it anyway :) Where can I get the RGB.png picture? I can't find it inside the workflow.
@art311211 ай бұрын
Great video and workflow ideas, thanks. As an A1111 user, I am just starting to explore ComfyUI. I think in the end it could have some sort of macro interface above this piping (like a lot of software, e.g. some synths in the audio world). Then casual users could create more easily using just the macro controls, while still allowing others to do a deep dive and customize it in detail to their needs.
@yoyo1poe11 ай бұрын
Workflows are the macros: you can save a finished picture in the workflows folder, and it will import the workflow it was created with when you use "Load".
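That save-and-reload trick works because ComfyUI writes the graph as JSON into the PNG's text metadata (under a "workflow" key, to my knowledge; treat the key name as an assumption). A small Pillow sketch of the round trip:

```python
import json
import os
import tempfile
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# ComfyUI saves the node graph as JSON inside the PNG's text chunks, which is
# why dropping a finished picture onto the canvas restores the graph that made
# it. The "workflow" key name below is an assumption for illustration.
def embed_workflow(workflow):
    meta = PngInfo()
    meta.add_text("workflow", json.dumps(workflow))
    return meta

def read_workflow(path):
    with Image.open(path) as im:
        raw = im.info.get("workflow")
    return json.loads(raw) if raw else None

wf = {"nodes": [{"type": "KSampler"}], "links": []}
path = os.path.join(tempfile.mkdtemp(), "out.png")
Image.new("RGB", (8, 8)).save(path, pnginfo=embed_workflow(wf))
print(read_workflow(path))  # the embedded workflow dict round-trips intact
```

Note that re-encoding a PNG (screenshots, image editors, messaging apps) usually strips these text chunks, which is why only the original file re-imports its workflow.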
@SeanieinLombok11 ай бұрын
Love your content. I was reviewing your product placement video; however, I really want to try to place products in the hands of AI models/people. Any workflow for this?
@Concepts_Space11 ай бұрын
What's the browser theme that you're using? The tabs look a little more rounded than usual. Thanks for the video, as always.
@shtorm726711 ай бұрын
Ok now it's mostly ComfyUI channel.
@carlingo319111 ай бұрын
Yea, they all sold out.
@OlivioSarikas11 ай бұрын
LOL, yes, totally selling out on a FREE tool - You got me bro! Cancel Culture Rage to the Max please
@MisterWealth9 ай бұрын
Is it possible to use multiple IP Adapters on one video clip? If a person turns around, how does it know to keep the stylization of the person?
@pawozakwa7 ай бұрын
How do you get this "blueprint" view in Stable Diffusion?
@nelvero11 ай бұрын
wow! I'm going deeper underground
@Marian8711 ай бұрын
Nodes are the death of passion.....
@op12studio11 ай бұрын
So you just do ComfyUI now? I am on an AMD GPU, so I can't use it. If that's all you use, then I'll know whether I should watch the videos or not.
@DJVARAO11 ай бұрын
Wow, Olivio has more than 200k subscribers!
@FusionDraw952711 ай бұрын
great workflow
@luciusblackheart11 ай бұрын
Does this work flow only work for ComfyUI? Can it work for the standard Stable Diff Web UI?
@JJ-vp3bd11 ай бұрын
did you figure this out
@johnmenezes203111 ай бұрын
can this be accomplished in A1111 with segment anything? danke
@sidheart890511 ай бұрын
Getting better and better woohoooo last few videos are just...... umaaaah
@FunwithBlender11 ай бұрын
When I copy your workflow it doesn't work; it says I am missing all the nodes, and then when I tell the Manager to install them, it can't find them. Something is weird about the new ComfyUI.
@OlivioSarikas11 ай бұрын
You need to update to the latest version of ComfyUI.
@forfreeiran87497 ай бұрын
Is this possible In forge ?
@blender_wiki11 ай бұрын
This is actually a basic setup; you can do much, much more with bbox + auto masking + segmentation and IP Adapter.
@OlivioSarikas11 ай бұрын
Yes, of course. This is only to show an idea of how to use it. :)
@ultimategolfarchives474611 ай бұрын
Hello sir! I'm wondering, is there something like a tile upscaler in ComfyUI? Or something similar that would add detail while upscaling?
@mirek19011 ай бұрын
yes
@ultimategolfarchives474611 ай бұрын
@@mirek190 I didn't know. Any tips on this type of workflow, sir?
@mirek19011 ай бұрын
@@ultimategolfarchives4746 Are you serious? Find some workflow for it...
@fimbulInvierno11 ай бұрын
Can something like this be run in a RTX 4070 Ti?
@mirek19011 ай бұрын
yes ... easily
@fimbulInvierno11 ай бұрын
@@mirek190 even for video generation?
@mirek19011 ай бұрын
@@fimbulInvierno Yes. For video generation you need 12 GB of VRAM.
@bigdaddy530311 ай бұрын
That is the same brunette that features in pretty much every image generation I make
@CrystalBreakfast11 ай бұрын
Olivio, please, the "end screen cat" has got to go. The cat is a stock character from an automatic clip making site, so it's not unique to you and other people could use it freely. Plus, you now know so many great ways to make custom animation with all the workflows from the last few months. We need a new outro that's uniquely Olivio! ... also the cat wasn't even pointing at video links at the end of this one, it's kinda awkward. >_>
@toonleap11 ай бұрын
How you create the masks?
@TimFordMedia11 ай бұрын
7:21
@zappazack9 ай бұрын
Where can I get these rgb.png masks?
@OlivioSarikas9 ай бұрын
You paint them yourself in any paint program or online paint app.
@zappazack9 ай бұрын
Wow, extremely quick response, thanks a lot. Hope you don't mind that I created them from a screenshot, but I was too impatient while testing the workflow... the result is amazing @@OlivioSarikas
@SPT111 ай бұрын
This seems like an overly complicated way of avoiding the use of Photoshop. You could end up with the same result by making 3 individual pictures (background, girl 1, girl 2), also using IP-Adapter, but in Auto1111. Then just open Photoshop and make 3 layers: either blend everything manually if you know how, or make a rough version and improve it with img2img and/or inpainting in Auto1111. I'm sure ComfyUI has its unique qualities; I just think it's always better to combine several programs to produce art. None can do everything perfectly.
@yoyo1poe11 ай бұрын
Photoshop couldn't make the lighting coherent on all the characters, or have them interact like stable diffusion can do
@SPT111 ай бұрын
@@yoyo1poe Sure it could; it's not one click, and you have to know what you're doing, I'll give you that. But you could totally do it, I assure you.
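For what it's worth, the mechanical part of the "three layers" approach being debated here is easy to approximate numerically. A toy NumPy sketch (hypothetical inputs) that blends a background and two figure layers with an RGB region mask; what it cannot reproduce is the coherent lighting and interaction that the diffusion pass provides:

```python
import numpy as np

def composite_with_rgb_mask(layers, rgb_mask):
    """Blend three image layers using an RGB mask: the red channel weights
    layer 0, green weights layer 1, blue weights layer 2, with per-pixel
    weights normalised to sum to 1. A crude stand-in for masked layering."""
    w = rgb_mask.astype(np.float64)
    w /= np.clip(w.sum(axis=-1, keepdims=True), 1e-8, None)
    out = np.zeros(layers[0].shape, dtype=np.float64)
    for i, layer in enumerate(layers):
        out += layer * w[..., i:i + 1]
    return out.astype(np.uint8)

# Flat-colour stand-ins for background, girl 1, and girl 2.
bg = np.full((4, 4, 3), 10, dtype=np.uint8)
girl1 = np.full((4, 4, 3), 100, dtype=np.uint8)
girl2 = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4, 3), dtype=np.uint8)
mask[:2, :, 0] = 255   # top half        -> background layer
mask[2:, :2, 1] = 255  # bottom-left     -> figure 1
mask[2:, 2:, 2] = 255  # bottom-right    -> figure 2
result = composite_with_rgb_mask([bg, girl1, girl2], mask)
print(result[0, 0, 0], result[3, 0, 0], result[3, 3, 0])  # 10 100 200
```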
@YouCountSheep10 ай бұрын
A1111 can only dream of such functionality. Some things like ControlNet or posing also work in A1111, but in separate tabs, and it's very clunky to change things, IMO. I'm just doing a bit of AI images, but after I saw Comfy I never went back to A1111, and now Comfy has all the functionality, and even more than what A1111 used to have as an advantage back when Comfy was still pretty new. And the Manager helps a lot as well. I saw the IP Adapter in another video; that is very powerful stuff, like an automatic inpaint to a degree, but smart and for whole pictures. Kinda like a smart ControlNet, really.
@aceathor11 ай бұрын
Does anyone know if I can have 2 different graphics cards working together? I have a 3090 Ti and a 2070 Super. If I could, I'd have 32 GB for image AI...
@2PeteShakur11 ай бұрын
nope, sorry
@mirek19011 ай бұрын
Your RTX 3090 has 24 GB of VRAM; that is more than enough to work with literally everything...
@effehell759311 ай бұрын
@@mirek190Not by a long shot. These AI programs really need a lot of GPU power. The more the better.
@mirek19011 ай бұрын
@@effehell7593 At the minute, a cheap RTX 3090 (after the mining hype) is the best solution. I bought my RTX 3090 with 24 GB of VRAM for 700 euro. For AI work the RTX 3090 is as fast as an RTX 4080 but has more VRAM (24 GB vs 16 GB on the RTX 4080), so it has a bit longer future than 16 GB cards. Right now, to generate pictures with the most advanced SDXL versions you need 8-12 GB of VRAM; to generate video you need 12 GB. For LLMs, you can fully fit a model on your RTX 3090 up to 34B size (all 65 layers of the q4k_m GGML version) and get around 40 tokens/s. With bigger GGML models like 70B, you can put half on the GPU and the rest in RAM; a 70B model will then get around 3 tokens/s.
@giochi411 ай бұрын
Wonderful. Personally, I find ComfyUI a real pain to use, though I understand the versatility.
@CaptainKokomoGaming11 ай бұрын
Has A1111 fallen so far behind that this stuff can't be used with it?
@carlingo319111 ай бұрын
All the geniuses prefer this type of interface, that's all. If you're a genius and try to talk about anything but Comfy, you get blackballed.
@yoyo1poe11 ай бұрын
Unlike your usual videos, there's lots of interesting information here. Thumbs up. For the Automatic1111 users: I think you can achieve the same results as in the first example with img2img; it will just take three renders instead of one, but it will still probably be simpler than doing the triple masking + spaghetti wiring in Comfy. I wouldn't know how to do the second example in Automatic, and AnimateDiff doesn't work in Automatic for me for some reason. And tbh, I think the blinking-girl example could be done more easily in animation software, because IP Adapter plus AnimateDiff will just make the character fixed if the strength is too high, or the style will drift too much when you lower the strength. So in this case it's just alternating between two almost-still images.
@boogieman723311 ай бұрын
"Unlike your usual videos, " too true
@JohnVanderbeck11 ай бұрын
I still don't really understand just WHAT IPAdapter actually is/does.
@yoyo1poe11 ай бұрын
I don't think anybody knows 😂 It replicates a style in the picture, mostly used for replicating faces. And the results can vary wildly depending on the whims of Stable Diffusion.
@bigbeng9511Ай бұрын
This seems to work as a prompt in the form of an image: basically transforming the image into a text-like description. Very useful, since you can describe more with a picture and perform image processing such as masking, adjustment, etc., which is hard to do and describe with words... Well, I'm still learning too 😅
@JohnVanderbeck29 күн бұрын
@@bigbeng9511 I don't think that's it. That's how Midjourney's image reference works, or at least how it used to work, not sure if it still is. But IPAdapter is far too precise for that to be the case.
@jevinlownardo878411 ай бұрын
Really hate comfyui
@nermal9311 ай бұрын
I see ChaosUI, I leave, no upvote.
@miguelgargallo11 ай бұрын
like 33
@Mohammed-oo5cj11 ай бұрын
a11111 >>>>??????
@mirek19011 ай бұрын
ok boomer
@carlingo319111 ай бұрын
@@mirek190 You're not cool cause you use Comfy.
@Mohammed-oo5cj11 ай бұрын
Whether boomer or not, we're still enjoying life! 😊😊 @@mirek190
@ardapasa211811 ай бұрын
Only 1% of people use that UI, so why do you keep showing things from it?
@mirek19011 ай бұрын
lol ... keep dreaming. Most people have moved to ComfyUI nowadays.
@therookiesplaybook11 ай бұрын
That's the most unintuitive thing I can imagine.
@Jaysunn11 ай бұрын
booo
@AlexsForestAdventureChannel11 ай бұрын
Stable Diffusion? And ComfyUI? Why say Stable Diffusion when it's ComfyUI? 😅