ComfyUI - SUPER FAST Images in 4 steps or 0.7 seconds! On ANY stable diffusion model or LoRA

31,139 views

Scott Detweiler

6 months ago

Today we explore how to use the latent consistency LoRA in your workflow. This fantastic method can shorten your preliminary model inference to as little as 0.7 seconds and only 4 steps using ComfyUI and SDXL. This will also make it a lot easier to run these models on older hardware, and it is just mind-blowingly fast! Now, it isn't perfect, but it sure helps you find some base images quickly.
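The workflow in the video is a ComfyUI graph, but for a concrete picture of what the LCM LoRA changes, here is a minimal sketch of the same idea using the Hugging Face diffusers library instead. The model and LoRA repo ids follow the LCM-LoRA release; the step count and CFG value are just the low settings discussed in the video, not hard requirements.

    import torch
    from diffusers import DiffusionPipeline, LCMScheduler

    # Load SDXL, switch to the LCM scheduler, and attach the LCM LoRA.
    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,
        variant="fp16",
    ).to("cuda")
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
    pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

    # Very few steps and a CFG at or near 1.0 is where the LCM LoRA shines.
    image = pipe(
        "award-winning photograph of a red fox in the snow",
        num_inference_steps=4,
        guidance_scale=1.0,
    ).images[0]
    image.save("lcm_lora_preview.png")

In ComfyUI terms the rough equivalent is a Load LoRA node between the checkpoint and the sampler, with the KSampler set to the lcm sampler and a low step count, which is roughly what the video walks through.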
#comfy #stablediffusion #aiart #ipadapter
You can download the LCM LoRA models from Hugging Face here:
huggingface.co/latent-consist...
Interested in the finished graph and in supporting the channel as a sponsor? I will post this workflow (along with all of the previous graphs) over in the community area of YouTube. Come on over to the dark side! :-)
/ @sedetweiler

Comments: 156
@JustFeral
@JustFeral 6 ай бұрын
You just made me redo my whole workflow, since this alone allows me to iterate on ideas so much faster. This stuff moves so damn fast. I get off YouTube for a week and so much changes in AI.
@sedetweiler
@sedetweiler 6 ай бұрын
Yeah, it is a bit insane for sure.
@Satscape
@Satscape 6 ай бұрын
4GB VRAM normally takes 2 to 5 minutes; this takes 20 seconds. Great for use as a starting point!
@skycladsquirrel
@skycladsquirrel 6 ай бұрын
Running on a 4090 and it's incredible with Animate Diff. What a dream. Thanks for the incredible video.
@sedetweiler
@sedetweiler 6 ай бұрын
Glad you enjoyed it!
@sparkilla
@sparkilla Ай бұрын
Thanks for the information. I want extremely detailed, hyper-realistic images, and this helps out a lot by adding a sampler before my main sampler. Doing face swapping and SUPIR upscaling in the same workflow, the results are terrific and also about 10-15 seconds faster per pic now as well.
@mo5909
@mo5909 6 ай бұрын
I have ADHD, so I love your short but very informative videos. Please don't stop!
@Cocaine_Cowboy
@Cocaine_Cowboy 6 ай бұрын
Amazing. The AnimateDiff speed is just wow! Thank you very much!
@sedetweiler
@sedetweiler 6 ай бұрын
You're welcome!
@JimPenceArt
@JimPenceArt 6 ай бұрын
Thanks! I've got a pretty slow system and it greatly improved the speed. 45 sec/img down from 3+ min/img👍👍
@sedetweiler
@sedetweiler 6 ай бұрын
Great to hear!
@minimalfun
@minimalfun 6 ай бұрын
Incredibly useful, thank you very much, really awesome!
@sedetweiler
@sedetweiler 6 ай бұрын
You're very welcome!
@user-ui2hw5of9l
@user-ui2hw5of9l 6 ай бұрын
Very clear! Thanks!
@rinpsantos
@rinpsantos 6 ай бұрын
Works like a charm for me. Thank you!!!
@sedetweiler
@sedetweiler 6 ай бұрын
Great to hear!
@EpochEmerge
@EpochEmerge 6 ай бұрын
Could you please explain what you meant at 4:40: do I need to use the ModelSamplingDiscrete (lcm) node AFTER the LCM LoRA if I want to stack LoRAs?
@Darkfredor
@Darkfredor 4 ай бұрын
Impressive, thanks for the tip!
@loubakalouba
@loubakalouba 6 ай бұрын
Thank you for a great tutorial.
@sedetweiler
@sedetweiler 6 ай бұрын
You are welcome!
@hleet
@hleet 6 ай бұрын
Thank you for showing all these new features! AI still has so much to show to the world!
@sedetweiler
@sedetweiler 6 ай бұрын
It sure is coming along fast!
@MarkDrMindsetChavez
@MarkDrMindsetChavez 6 ай бұрын
keep 'em coming bro!
@sedetweiler
@sedetweiler 6 ай бұрын
oh yes! you know it!
@feisimo5479
@feisimo5479 6 ай бұрын
Was lost on how to get this installed... figured it out and already love it. Thanks for all your great demos and tutorials.
@sedetweiler
@sedetweiler 6 ай бұрын
Great to hear!
@cheese6870
@cheese6870 6 ай бұрын
How do I get it installed? @sedetweiler
@aiqinggirl
@aiqinggirl 6 ай бұрын
Just put the LoRA into the lora folder, like a normal LoRA!
6 ай бұрын
Cool video! Thanks
@sedetweiler
@sedetweiler 6 ай бұрын
Glad you liked it!
@Disco_Tek
@Disco_Tek 6 ай бұрын
Yeah, this is an instant tool now to get the prompts and weights close before I really get to work.
@gordonbrinkmann
@gordonbrinkmann 6 ай бұрын
I have a technical question. In the video you say cfg values of 1 or below make the sampler ignore the negative prompt. I have no idea if this is true, because I usually have no negative prompt or just a short one, so I could not see a difference in the images. But what I did see brings me to the question: at a cfg value of exactly 1.0 (neither above nor below), the steps took only about 50% to 60% of the usual time on my GPU. Could it be that something happens at exactly 1.0 that is different from the other values, like ignoring the negative prompt only at 1 but not elsewhere? And if so, is there a way for people who usually don't use negative prompts to speed up rendering by making the KSampler ignore the negative prompt? The speed increase only appears at cfg 1; simply leaving the prompt empty at higher values does not work, and unplugging the negative prompt doesn't work either because it throws an error for an unconnected socket and won't start rendering. Maybe this is not even happening on other machines... but if it is, I would really like a way to deliberately ignore the negative prompt even at higher cfg values (because my images at cfg 1 are usually not detailed enough). Or maybe this has nothing to do with ignoring the negative prompt and is just happening because of that specific cfg value?
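A plausible explanation, offered here as an assumption rather than anything stated in the video: with classifier-free guidance the sampler normally runs the model twice per step, once on the positive and once on the negative conditioning, and at cfg = 1.0 the guided result collapses to the positive prediction alone, so an implementation can skip the negative pass entirely. That would explain a speedup appearing only at exactly 1.0. A rough sketch of that logic:

    def cfg_denoise(model, latent, timestep, positive, negative, cfg_scale):
        # Illustrative helper only; `model` stands in for any callable that predicts
        # noise from a latent, a timestep, and a conditioning.
        cond = model(latent, timestep, positive)
        if cfg_scale == 1.0:
            # The guided result equals the positive prediction, so the negative
            # prompt is never evaluated: one model call per step instead of two.
            return cond
        uncond = model(latent, timestep, negative)
        return uncond + cfg_scale * (cond - uncond)

Under that assumption there is no free speedup at higher cfg values, because the negative pass is genuinely needed to compute the guidance term.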
@Make_a_Splash
@Make_a_Splash 6 ай бұрын
Very cool,, Thanks
@sedetweiler
@sedetweiler 6 ай бұрын
You bet
@FouadMuhieddine
@FouadMuhieddine 6 ай бұрын
Amazing thank you
@sedetweiler
@sedetweiler 6 ай бұрын
No problem 😊
@maestromikz
@maestromikz 5 ай бұрын
Is this the same on a Mac M1? Because I have the LCM LoRA in my ComfyUI, but the loading time is still around 300 sec.
@oranguerillatan
@oranguerillatan 6 ай бұрын
Hi Scott, great video, thank you. When doing "vid2vid" with Comfy and AnimateDiff/ControlNet, do you pass the video frames straight into the KSampler, or do you push empty latents into it? I'm getting subpar results with the former, and have not tried the latter yet.
@sedetweiler
@sedetweiler 6 ай бұрын
I will look into it. I don't do much in the way of video at this time.
@oranguerillatan
@oranguerillatan 6 ай бұрын
@sedetweiler Results have gotten better thanks to some help from the wonderful Coffee Vectors and Purz and others; SD1.5 vid2vid is working quite well with LCM and AnimateDiff now, with results on my Twitter from last night. It definitely still needs some tweaks. SDXL vid2vid with LCM and AnimateDiff is still proving a little more elusive, but I am doing a lot of tests to find the right combo of weights and ControlNets. Results coming soon. Thank you for your amazing walkthroughs, you've really helped get me going with Comfy in recent weeks.
@Deadgray
@Deadgray 6 ай бұрын
The idea here is that you just pair the LCM LoRA with any model, and also any sampler and scheduler; you don't have to use lcm as the sampler, you can use slow ones like heun and get great results but so much faster. Also, with the Comfy Efficient Nodes it feels like a 5x5 XY plot is made as fast as 1 image was before. Try that, see the difference 🙂
@sedetweiler
@sedetweiler 6 ай бұрын
That's a great idea!
@gordonbrinkmann
@gordonbrinkmann 6 ай бұрын
I tried it with other samplers instead of lcm, and the results were actually terrible, no matter if I did a few or my usual number of steps and no matter what cfg value 😆 However, when using lcm set up to get good results, I found that no matter which scheduler I used, all images were very similar, so I ended up using karras because it was (marginally) the fastest. Even ddim_uniform, which Scott mentions not seeming to work with lcm, gave great results, just very different from all the other similar-looking images.
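For anyone who wants to test the sampler claim from a couple of comments up themselves, here is a small, untested sketch using the diffusers library rather than ComfyUI: the LCM LoRA stays loaded, but a non-LCM sampler (Heun in this case) drives the low-step generation. Repo ids follow the LCM-LoRA release; results clearly vary, given the mixed reports in this thread.

    import torch
    from diffusers import DiffusionPipeline, HeunDiscreteScheduler

    # Keep the LCM LoRA attached but swap in a conventional sampler at a low step count.
    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")
    pipe.scheduler = HeunDiscreteScheduler.from_config(pipe.scheduler.config)

    image = pipe("a cinematic photo of a lighthouse at dusk",
                 num_inference_steps=8, guidance_scale=1.0).images[0]
    image.save("lcm_lora_heun_test.png")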
@zoemorn
@zoemorn 5 ай бұрын
My KSampler doesn't show an image like Scott's does, can someone advise why? Is it a special KSampler?
@moon47usaco
@moon47usaco 6 ай бұрын
Great Scott (pun intended)... This is amazing. SDXL may be back on the table again. A 1024 SDXL generation went from less than 20 seconds for one image to less than 5... =0
@sedetweiler
@sedetweiler 6 ай бұрын
It's pretty wicked for sure!
@moon47usaco
@moon47usaco 6 ай бұрын
@sedetweiler Unfortunately, it seems to degrade quickly with ControlNet added into the flow. =\
@francescobriganti6029
@francescobriganti6029 6 ай бұрын
Hi! I'm trying it with AnimateDiff but I keep getting this error: "'VanillaTemporalModule' object has no attribute 'cons". Have you also gotten it / any solutions? Thanks!!
@sedetweiler
@sedetweiler 6 ай бұрын
I will have to mess around with it.
@MrPlasmo
@MrPlasmo 6 ай бұрын
wow - do the LCM Loras only work with ComfyUI or does it also work in A1111?
@sedetweiler
@sedetweiler 6 ай бұрын
I have no idea on A1111.
@___x__x_r___xa__x_____f______
@___x__x_r___xa__x_____f______ 6 ай бұрын
Would love some video time on Swarm. I really want to run generations on it, but it's still a little awkward.
@Kavsanv
@Kavsanv 6 ай бұрын
Thank you! Do you have tutorials on changing the pose without changing the details of a character as much as possible?
@sedetweiler
@sedetweiler 6 ай бұрын
Not yet. That will be a bit of a challenge, but we can probably do it using a few techniques.
@Kavsanv
@Kavsanv 6 ай бұрын
Thank you for answering. Yeah, to reach that I mostly use inpainting, and some combos of IP-Adapter, Posex, and ControlNet help a bit. I found a very cool trick using the right LoRA on top, but it seems the character looks too similar to the previous one. @sedetweiler
@sedetweiler
@sedetweiler 6 ай бұрын
@Kavsanv I was leaning on the IPAdapter for a lot of that for sure, but I also think a bit of Roop in combination with that would also help.
@victorvaltchev42
@victorvaltchev42 6 ай бұрын
Great content. What if you make it 30-50 steps? Is the quality better than without this LoRA, or is it just a speed boost at low steps?
@sedetweiler
@sedetweiler 6 ай бұрын
Nope, it often seems to get worse and it changes a ton as it advances.
@Utoko
@Utoko 6 ай бұрын
Not with SGM_Uniform, since it adds constant noise and just keeps changing, but I got higher quality on SD 1.5 with 20 steps using the exponential scheduler. For SDXL I also had the best results with exponential, around 8 steps so far.
@jonmichaelgalindo
@jonmichaelgalindo 6 ай бұрын
It's fantastic for quick prompt experimenting!
@sedetweiler
@sedetweiler 6 ай бұрын
It really is!
@efastcruelx7880
@efastcruelx7880 6 ай бұрын
I tried it; it works fine with still image generation, but when working with AnimateDiff, why does the image quality drop significantly?
@raven1439
@raven1439 6 ай бұрын
Could you make a video on how to properly connect this LoRA with the SDXL base and refiner from the earlier video?
@sedetweiler
@sedetweiler 6 ай бұрын
We do that in live streams as well. It's very similar to what we did here.
@jonmichaelgalindo
@jonmichaelgalindo 6 ай бұрын
If you're on A1111 and don't have LCM sampler, Euler A works well enough to test this. (It's not perfect, but it's usable.)
@Smashachu
@Smashachu 6 ай бұрын
Does this work with TensorRT?
@Al_KR_t
@Al_KR_t 6 ай бұрын
Is it possible to combine it with animatediff? I ran into a lot of errors when I tried, and a lot of models don't seem to be compatible
@sedetweiler
@sedetweiler 6 ай бұрын
I have seen people doing so, but I don't tend to do a lot of animation.
@MrPlasmo
@MrPlasmo 6 ай бұрын
How do you get the Model Sampling Discrete node to show up? I don't have it
@sedetweiler
@sedetweiler 6 ай бұрын
Update comfy.
@efastcruelx7880
@efastcruelx7880 6 ай бұрын
Have you tried it with an SD 1.5 checkpoint?
@twistedcraftproductions1697
@twistedcraftproductions1697 6 ай бұрын
For some reason this LoRA unloads the checkpoint from memory after every generation, so you have to wait a whole minute for it to load back into memory before the KSampler even starts to do anything. I saw the processes in the background status window, so that's how I know. Using SDXL without the LoRA there is no 60 sec wait for the model to load before the KSampler starts making an image.
@dkamhaji
@dkamhaji 6 ай бұрын
Where can I find the LCM sampler for my KSampler? It's not in my list and I just updated my Comfy.
@francoisneko
@francoisneko 6 ай бұрын
I have the same issue
@dkamhaji
@dkamhaji 6 ай бұрын
Just update ComfyUI through the Manager and restart.
@sedetweiler
@sedetweiler 6 ай бұрын
Yes, always be sure you are updated. 99.9% of the time that will be the cause of most issues.
@dylanfrercks
@dylanfrercks 3 ай бұрын
Even with this LoRA (1.5) it's taking me 2 minutes to generate a single image. Is my MacBook Pro (8 GB RAM) actually that bad, or is there something else I could be missing?
@Tofu3435
@Tofu3435 3 ай бұрын
Maybe you're running it in CPU mode.
@erperejildo
@erperejildo Ай бұрын
Don't you have to connect the Load LoRA node to the prompt? Does it really matter?
@paul606smith
@paul606smith 6 ай бұрын
If you swap the SDXL model for the cut-down Segmind SSD model, this workflow works even faster, and it will work on a low-end laptop with a 4 GB 1650 and 8 GB of RAM.
@AI.ImaGen
@AI.ImaGen 6 ай бұрын
😛It's... AWESOME!!! Especially for making videos.
@sedetweiler
@sedetweiler 6 ай бұрын
Yeah it is!
@paul606smith
@paul606smith 6 ай бұрын
It does work with a GTX 1060 6 GB and 16 GB RAM. It makes an excellent speed improvement.
@sedetweiler
@sedetweiler 6 ай бұрын
rock on!
@spiralofhope
@spiralofhope 6 ай бұрын
[edit: an update fixed it] Nope. There is no sampler_name "lcm". I had to try euler_ancestral, but that looks mostly shit.
@bwheldale
@bwheldale 6 ай бұрын
I had the same until I updated ComfyUI via Manager -> Update ComfyUI (the update .bat file didn't do it, but through the Manager it did).
@AlIguana
@AlIguana 6 ай бұрын
Yeah, the LCM k-sampler just came out today; you need to update Comfy.
@sedetweiler
@sedetweiler 6 ай бұрын
Update comfy. This is less than 24 hours old.
@spiralofhope
@spiralofhope 6 ай бұрын
@@sedetweiler I updated and see it, thanks! I now have a problem with LoRAs not being used, but I'll look around for answers.
@CY_max
@CY_max 6 ай бұрын
I did exactly what you did but it's taking way longer. Don't know why. I'm using an RTX 4060 (laptop).
@Skettalee
@Skettalee 6 ай бұрын
Wait, I'm confused. Does anyone else have the LCM sampler? I don't know how to get that put in my life, but it's not there.
@sedetweiler
@sedetweiler 6 ай бұрын
Make sure you always update before attempting new workflows. It is not even a day old.
@Queenbeez786
@Queenbeez786 6 ай бұрын
To install, put the file in the LoRA models folder.
@thinkright5611
@thinkright5611 4 ай бұрын
Lol... I was hoping you'd show how the LoRA model connects to the CLIP O_O. Guess not. lol
@stephantual
@stephantual 5 ай бұрын
Thank you! This is useful for quickly making video frames. I use Comfyroll, and it works well with it (but it's not easy; maybe you could make a tutorial? See what I did there ;). Great vid as usual.
@sedetweiler
@sedetweiler 5 ай бұрын
Great suggestion!
@stephantual
@stephantual 5 ай бұрын
Thanks for replying @sedetweiler. Comfyroll, rgthree and Trung's 0246 + Anything Everywhere are my go-to nodes right now.
@tartwinkler1711
@tartwinkler1711 6 ай бұрын
I don't see the graph under the member section🤔
@sedetweiler
@sedetweiler 6 ай бұрын
Are you sponsor level or higher? You should see it there with all of the other graphs.
@aerofrost1
@aerofrost1 6 ай бұрын
My KSampler doesn't have the LCM sampler?
@sedetweiler
@sedetweiler 6 ай бұрын
Make sure everything is updated. This sampler is less than 24 hours old.
@aerofrost1
@aerofrost1 6 ай бұрын
@sedetweiler Updated everything and it's there now, thank you! I was running around 16 steps with DDIM, but LCM at 4 steps is even better quality than DDIM was at 16 steps. Do you know if quality keeps increasing with more steps with LCM, or does it max out at 4?
@JLITZ88
@JLITZ88 6 ай бұрын
workflow posted?
@vilainm99
@vilainm99 6 ай бұрын
Like many others, no LCM sampler. Tried many updates through the Manager, with a restart every time, but no luck...
@sedetweiler
@sedetweiler 6 ай бұрын
And you are pulling the latest from all of the extensions? These releases are less than a day old, so everything needs to be updated to keep up. Sorry you are unable to find it, but it isn't hidden, something just isn't updating.
@sedetweiler
@sedetweiler 6 ай бұрын
You can actually see it here in the comfy code, added 3 days ago. You need to be sure to do a "git pull" on comfy. github.com/comfyanonymous/ComfyUI/commit/002aefa382585d171aef13c7bd21f64b8664fe28
@vilainm99
@vilainm99 6 ай бұрын
Nuked Pinokio, reinstalled, and tadaaa!! Working great!
@jasoa
@jasoa 6 ай бұрын
Sometimes it's hard to find the model you're using on Hugging Face. Did you rename the downloaded LoRA? Edit: I think I found it. I downloaded pytorch_lora_weights.safetensors from lcm-lora-sdxl and renamed it to lcm_sdxl_lora_weights.safetensors to match yours.
@sedetweiler
@sedetweiler 6 ай бұрын
Yes, sorry. They are all named the same thing. You will always need to rename them. I should have mentioned that, but this is a very consistent thing with the names being generic.
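For readers following along, here is a small sketch of that download-and-rename step. It assumes the lcm-lora-sdxl repository and filename mentioned above and a default ComfyUI folder layout; adjust the destination path to match your install.

    import shutil
    from huggingface_hub import hf_hub_download

    # Fetch the SDXL LCM LoRA (the file ships under the generic name
    # pytorch_lora_weights.safetensors) and copy it into ComfyUI's LoRA folder
    # under a more descriptive name, matching the rename discussed above.
    src = hf_hub_download(
        repo_id="latent-consistency/lcm-lora-sdxl",
        filename="pytorch_lora_weights.safetensors",
    )
    shutil.copy(src, "ComfyUI/models/loras/lcm_sdxl_lora_weights.safetensors")

The SD 1.5 and SSD-1B variants follow the same pattern from their own repositories.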
@jasoa
@jasoa 6 ай бұрын
Thanks for the tutorial. I wonder how much amazing stuff is hiding in comfyui and the stable diffusion world that we'd never know about without your videos.
@sedetweiler
@sedetweiler 6 ай бұрын
There is a ton! There are also things in AUTO1111 that no one has covered yet that I will probably make videos on as well. So much, and it is constantly evolving!
@bwheldale
@bwheldale 6 ай бұрын
I'm at a download-site crossroads feeling lost: Latent Consistency Models LoRAs vs Latent Consistency Models Weights. I downloaded one of each and am no wiser; it's not easy being a noob. PS: pytorch_lora_weights.safetensors = 380MB, the other = 4.5GB. My guess is it's the smaller one.
@marhensa
@marhensa 6 ай бұрын
@bwheldale Latent Consistency Models LoRAs vs Latent Consistency Models Weights: the first one is the LCM LoRA (SDXL LCM LoRA, SD1.5 LCM LoRA, SSD-1B LCM LoRA), the second one is the full LCM models (SDXL, SSD-1B, Dreamshaper7). This video uses the smaller one, the LoRA.
@--signald
@--signald 6 ай бұрын
Does it work with A1111?
@sedetweiler
@sedetweiler 6 ай бұрын
Probably in a few weeks.
@telemole9427
@telemole9427 6 ай бұрын
I have no sampler called 'lcm' - have I missed a step? :(
@sedetweiler
@sedetweiler 6 ай бұрын
Yup, this is a day old, so if you are not up-to-date you will not have it.
@telemole9427
@telemole9427 6 ай бұрын
@sedetweiler I discovered it! Thanks so much for this one; this is SO fast. Reworking tons of workflows now ;)
@JackTorcello
@JackTorcello 6 ай бұрын
Where do I find the LCM sampler?
@sedetweiler
@sedetweiler 6 ай бұрын
Make sure you are on the latest of all nodes and comfy. It is in the comfy core, so a git pull should get you all you need.
@23rix
@23rix 6 ай бұрын
Is this just for sdxl models?
@sedetweiler
@sedetweiler 6 ай бұрын
You can use it with any model as long as you use the proper LoRA.
@wagmi614
@wagmi614 6 ай бұрын
Can you use AnimateDiff with LCM?
@christianblinde
@christianblinde 6 ай бұрын
Just tried it, works better than I thought!
@sedetweiler
@sedetweiler 6 ай бұрын
Yup, should work just fine.
@lastlight05
@lastlight05 22 күн бұрын
LOL how do you install this LCM?
@paul606smith
@paul606smith 6 ай бұрын
Doesn't work for me. I get out-of-memory errors with 8 GB RAM and a 4 GB 1650 for SDXL. ComfyUI normally runs SDXL on this system, but really, really slowly; at least it works.
@extraframe6376
@extraframe6376 6 ай бұрын
Damn! Just a month away and the world of AI has changed upside down.
@phish64209
@phish64209 6 ай бұрын
Tell us about how to fabricate things with AI like your Catan pieces!
@sedetweiler
@sedetweiler 6 ай бұрын
I use AI for ideas, by using the prompts to give me thoughts on things I would normally not have considered.
@0A01amir
@0A01amir 6 ай бұрын
Sadly it's slower than a normal 30-step generation on a low-end machine (caused mainly by the LoRA itself; ComfyUI is slow at loading the LoRA before the KSampler). 512x512, 4 steps, 1 image = 115 seconds, ~7 s/it (30 steps normally takes ~20 seconds, less than 1 s/it). 512x768, 5 steps, 1 image = 130 seconds, ~4 s/it (30 steps normally takes ~35 seconds, less than 2 s/it). 644x1000, 5 steps, 1 image = 161 seconds, ~6 s/it (30 steps normally takes ~50 seconds, less than 3 s/it).
@bronkula
@bronkula 6 ай бұрын
It really feels like you skipped 5 steps here, considering that it is not at all clear how to get lcm into the sampler_name or that you renamed the lora. Can you sticky a comment that explains a couple extra steps to get to your initial position?
@KINGLIFERISM
@KINGLIFERISM 6 ай бұрын
Yeah, he wants a cash grab... does not care about people watching his YouTube, or why would he do that. Here: Latent Consistency Models LoRAs vs Latent Consistency Models Weights. The first one is the LCM LoRA (SDXL LCM LoRA, SD1.5 LCM LoRA, SSD-1B LCM LoRA), the second one is the full LCM models (SDXL, SSD-1B, Dreamshaper7). This video uses the smaller one, the LoRA. Just download pytorch_lora_weights.safetensors from lcm-lora-sdxl and rename it to lcm_sdxl_lora_weights.safetensors to match his.
@gordonbrinkmann
@gordonbrinkmann 6 ай бұрын
If you update ComfyUI with the manager, the lcm sampler will be there. When watching tutorials on new features make sure your software is updated...
@whatwherethere
@whatwherethere 6 ай бұрын
@gordonbrinkmann So I went through ComfyUI_Manager and selected Update ComfyUI. I still don't have the lcm sampler. It's a relatively new install altogether, maybe Friday. Any idea what I am doing wrong?
@gordonbrinkmann
@gordonbrinkmann 6 ай бұрын
@whatwherethere I updated it after watching the video and not seeing the lcm sampler, then I restarted ComfyUI - did you restart it? The changes will only be implemented after closing ComfyUI and starting it again. That was all I did, then there was the lcm sampler.
@rinpsantos
@rinpsantos 6 ай бұрын
I didn't need to rename the LoRA files. It is optional.
@fanyuworld
@fanyuworld 6 ай бұрын
Decline in quality or
@aiqinggirl
@aiqinggirl 6 ай бұрын
Hi, thanks. Does anybody know if the LCM LoRA leads to lower quality images? Because it is so fast, I am totally confused and can't stop wondering about its quality. Anyone?
@sedetweiler
@sedetweiler 6 ай бұрын
Yes, it does take a bit of a hit, but it is just different, perhaps not lower quality.
@andresz1606
@andresz1606 6 ай бұрын
Don't forget to install the WAS Node Suite and the LCM Sampler with the Manager before trying to build this workflow. Also, your ModelSamplingDiscrete seems mostly useless; I have 3 LoRAs chained and adding your MSD node makes no remarkable difference or improvement whatsoever.
@sedetweiler
@sedetweiler 6 ай бұрын
Yes, those nodes are critical, and I actually feel they should be part of the base product, they are so good.
@LouisGedo
@LouisGedo 6 ай бұрын
👋
@sedetweiler
@sedetweiler 6 ай бұрын
🍻
@WhySoBroke
@WhySoBroke 6 ай бұрын
Not instructive at all, was very lost
@othoapproto9603
@othoapproto9603 6 ай бұрын
Don't attempt this if you don't understand Hugging Face. Love how he assumes you know how to download and install.
@itscaptainterry
@itscaptainterry 27 күн бұрын
If downloading from Hugging Face is where you get stuck, you should probably take a couple of steps back before you delve into running all of this locally.
@ga1205
@ga1205 2 ай бұрын
Does the model still exist? I went to the link but can't find the same lora.