How to AI Animate. AnimateDiff in ComfyUI Tutorial.

147,265 views

Sebastian Kamph

1 day ago

Text2Video and Video2Video AI Animations in this AnimateDiff Tutorial for ComfyUI.
Install Local ComfyUI • How to install and use...
Use Cloud ComfyUI www.thinkdiffusion.com
Workflows civitai.com/articles/2379/gui...
AnimateDiff Models huggingface.co/guoyww/animate...
ControlNet Models huggingface.co/lllyasviel/Con...
FFmpeg Guide www.wikihow.com/Install-FFmpe...
FFmpeg Download www.gyan.dev/ffmpeg/builds/
Prompt styles for Stable diffusion a1111 & Vlad/SD.Next: / sebs-hilis-79649068
ComfyUI workflow for 1.5 models: / comfyui-1-5-86145057
ComfyUI Workflow for SDXL: / comfyui-workflow-86104919
Get early access to videos and help me, support me on Patreon / sebastiankamph
Chat with me in our community discord: / discord
My Weekly AI Art Challenges • Let's AI Paint - Weekl...
My Stable diffusion workflow to Perfect Images • Revealing my Workflow ...
ControlNet tutorial and install guide • NEW ControlNet for Sta...
Famous Scenes Remade by ControlNet AI • Famous Scenes Remade b...
CHAPTERS
0:00 AI Animate with AnimateDiff
0:42 Download workflows
1:18 How to use AnimateDiff - Text2Video, Settings Guide & Easy Cloud Solution
11:05 Video2Video + Local Install
22:44 Prompt travel/scheduling
25:48 How to install ffmpeg
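For the ffmpeg chapter at 25:48, a quick way to confirm the install worked is to check that the binary is actually on your PATH. This is a minimal sketch (the function name is illustrative, not from the video):

```python
import shutil
import subprocess

def check_ffmpeg() -> str:
    """Return the installed ffmpeg version line, or a hint if it's missing."""
    # Look up ffmpeg on the PATH, the same way video-export tools will.
    path = shutil.which("ffmpeg")
    if path is None:
        return "ffmpeg not found - add its bin/ folder to PATH and restart ComfyUI"
    # First line of `ffmpeg -version` identifies the build.
    out = subprocess.run([path, "-version"], capture_output=True, text=True)
    return out.stdout.splitlines()[0]

print(check_ffmpeg())
```

If this prints the "not found" hint even after installing, the bin/ folder from the FFmpeg download likely was not added to PATH, which the linked wikiHow guide covers.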

Comments: 184
@sharpenednoodles 6 months ago
Finally, an animation tutorial from the king
@chipko 5 months ago
This is fantastic: well produced, well explained, and so informative. Thank you.
@sebastiankamph 5 months ago
Glad you enjoyed it!
@Cu-gp4fy 6 months ago
Dude, your examples are 🔥🔥🔥
@sebastiankamph 6 months ago
Boom, thank you! 😊🌟
@PushedToInsanity 6 months ago
Great tutorial as usual. Appreciate all the examples you've given; by far my favorite YouTuber when it comes to AI art. Do you offer personalized lessons?
@sebastiankamph 6 months ago
Thank you, how very kind! Yes, I do. You can write to me on Discord. See link in channel or video description. 🥰
@ElHongoVerde 6 months ago
Friday and a new video from Seb. Nothing can go wrong today!
@sebastiankamph 6 months ago
And it's even an extra-long one to celebrate Friday! 😊🌟
@IAmTheAntiArt 6 months ago
My first T2V SDXL AnimateDiff video with HotShotXL is rendering right now! This video got me started with AnimateDiff in Comfy 6 days ago. Thanks!
@cosmiccarp5030 6 months ago
Sweet, I literally just sat down to learn Comfy and AnimateDiff. Your timing is impeccable, sir! ComfyUI is worth it, everyone! I'm really liking it, especially the screenshot workflows!
@sebastiankamph 6 months ago
It really is very convenient. For workflows like these, you don't even have to learn Comfy! 😊
@cosmiccarp5030 6 months ago
@sebastiankamph Yes, but with relaxing tutorials like yours, it is a pleasure to learn Comfy! Cheers!
@Paperclown 3 months ago
@sebastiankamph But you said you would cover txt2video locally later in the video, and the chapters don't show that. I'm not watching a feature-length movie just to skip past solutions for people who don't even own a computer! You don't cover the process of setting up the txt2video nodes locally.
@defy_norms 29 days ago
My gosh, dude, you got a subscriber!! You are amazing, keep it up. This helps small YouTube channels like ours immensely.
@sebastiankamph 24 days ago
Thanks for the sub! Happy to help :)
@stonewhite3576 6 months ago
You're an awesome dude! Congratulations 🎉👏
@dustinjohnsenmedia1889 6 months ago
Great video! Now I just need to figure out how to tack an upscaler onto the end.
@Mowgi 6 months ago
Mate, you have great timing. I've just recently installed ComfyUI and started getting into AnimateDiff, but I really haven't understood consistent animations, just the morphing stuff so far.
@sebastiankamph 6 months ago
Great to hear! Hope the video helped you, let me know how it goes 😊🌟
@joonienyc 1 month ago
Bro, the best way to experiment with your basic txt2vid and vid2vid! Thank you.
@sebastiankamph 1 month ago
Glad you liked it!
@dejuak 3 months ago
Hey, really nice video, I have watched it like 10 times in the last month. I have a question: is there a way to animate only the character while keeping the background static? That would be really awesome.
@ShoreAllan 6 months ago
Hello Sebastian, many greetings from Berlin and thank you for your work! What do I have to do to extend your workflow with a face swap from a photo, i.e. video-and-picture-to-video?
@vaderragex90 6 months ago
Very nice tutorials, these are so great! I love it!
@hleet 5 months ago
It's very well explained! Thank you.
@sebastiankamph 5 months ago
Glad it was helpful! ☺️
@ronnykhalil 6 months ago
As always, love your videos! Thanks so much.
@sebastiankamph 6 months ago
You are so welcome! Glad to have had you aboard for such a long time, Ronny! 😊🌟
@YazzCasillas 6 months ago
Very nice tutorial, thanks! Is it already possible to start from an image?
@Kontaktfilms 3 months ago
Sebastian, I'm a fan of your work and tutorials. I followed along with the Video2Video workflow; all my nodes and models are there, and I see my original frames and lineart renders in the preview... but when I render the queue, the image under Video Combine is just black. I had the same problem with another workflow. Any idea what could be causing this? I'm on an Apple M1 Ultra. Thank you.
@hesamarmaghani791 6 months ago
Thanks man, this was very helpful.
@KDawg5000 6 months ago
Very nice tutorial! Just a heads up to those who want to use SDXL models: you can use them, but you have to use the Hot Shot motion module and set the resolution smaller.
@kiksu1 6 months ago
The SDXL beta model for AnimateDiff was also released just this morning 👍🏻
@KDawg5000 6 months ago
@kiksu1 Wow, nice. I'll have to check it out now. EDIT: I just tried it in A1111, but the results weren't good. Every frame is a completely separate image, unrelated to the previous frame. Hmmm. Well, at least it tried instead of just crashing.
@kiksu1 6 months ago
@KDawg5000 Oh yeah, sadly it doesn't do great yet. Tested that too, but I'm not convinced yet. Hotshot worked much better.
@core-f 6 months ago
Correct me if I am wrong, but I think the frames you use for prompt traveling are not the starting frames for the given prompt, except frame "0". Each one is the frame where the new prompt is completed. That's why you get a rather long winter in the default example: it transitions into winter from frame 50 to 75 and then has an extra 25 frames for the winter prompt. For an even distribution you'd need frames 0, 33, 66, and 100.
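The even spacing core-f describes can be sketched with a tiny helper. This is illustrative only (the function name is hypothetical); the actual prompt-travel node takes a keyframe-to-prompt mapping in this spirit:

```python
def even_keyframes(total_frames, prompts):
    """Spread prompt keyframes evenly so each prompt gets an equal share."""
    n = len(prompts)
    # Map each prompt to its evenly spaced keyframe, first at 0, last at total_frames.
    return {int(i * total_frames / (n - 1)): p for i, p in enumerate(prompts)}

# Four seasons over a 100-frame animation:
schedule = even_keyframes(100, ["spring", "summer", "autumn", "winter"])
print(schedule)  # keys land on 0, 33, 66, 100 rather than 0, 25, 75, 100
```

With keyframes at 0, 33, 66, and 100, each season occupies roughly a third of the clip instead of winter getting the long tail described above.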
@erdbeerbus 6 months ago
Excellent, thank you. Where can I get your basic vid2vid workflow, please? Which Patreon tier is needed, if any? Thanks in advance :)
@choboruin 2 months ago
First guide that actually let me generate something. I wasted 8 hours today using this, and after another 30 minutes I finally got a video, and it's just a puddle LOL. Appreciate the tutorial though! Great content.
@AloofandAbsurd 2 months ago
I'm glad you had some luck.
@masoud.art.videos 5 days ago
Great guide, thank you! First-timer using ComfyUI here, and one thing seems strange to me: processing 10 frames takes 40 seconds, but 100 frames takes almost an hour! I have a decent GPU (64 GB), which makes me think something is not set up correctly. Any ideas?
@weijiayao9118 3 months ago
I like your online version demo, but can I use local checkpoint models? There seem to be only limited selections.
@journeysinthedeep 3 months ago
Awesome tutorial, bro!! The only issue I'm running into is that my Video Combine (VHS) output format only lets me select webp or gif. I can't choose H.264. Any ideas, my guy?
@triangulummapping4516 2 months ago
I'm trying video2video; the background of the result is completely static, and only the character moves. Is there a way to also give some movement and evolution to the background?
@Tedrer 4 months ago
Hi. There are some changes: AnimateDiff Combine is deprecated. Is it the same to use the Video Combine node?
@brandonlane 3 months ago
Thank you! 😊
@clovernacknime6984 6 months ago
7:00 I think you mean convergence. Divergence means that a sampler (function) never settles down to (approaches) a particular image (or other output), while convergence means it will.
@sebastiankamph 6 months ago
Right! I wonder what made my brain mix that up 😅
@alpineuniverse 5 months ago
Just commented the same thing and realized someone must have caught it too :)
@bdwedgeofanimotion4106 1 month ago
That was nice; I need time to absorb it.
@Clupea101 6 months ago
Great stuff, easy learning.
@sebastiankamph 6 months ago
Glad you think so! 😊
@dougiejones628 5 months ago
Very cool. Is there a way to use an image as a reference instead of a prompt? I have a CG-animated image sequence as the input video, I've enhanced one frame from that sequence, and I would love to apply the style of that frame to the entire video. Is that possible? Thanks!
@sebastiankamph 5 months ago
IPAdapter or pix2pix.
@kietzi 2 months ago
Wanted to try the v2v in ThinkDiffusion, but I just get errors there and don't know how to load uploaded image sequences... so I'll stay with rendering overnight to see some results on my own machine :P
@myghail 1 month ago
I don't know how many times I've heard you say "I prefer 2M Karras" in the last couple of days. :D Not that I'm complaining! Thanks for the videos, they are incredible. First time I'm actually getting all this, and I've tried many times.
@sebastiankamph 1 month ago
People want to know what I use :D Some just want the answers and not the whys or hows. Glad you're enjoying the content.
@myghail 1 month ago
@sebastiankamph I understand completely. In all honesty, I would have forgotten by now, which isn't good, considering 2M Karras definitely does work the best.
@nicocro00 4 months ago
Very nice! What's your setup for the local install? What machine/GPUs are you using?
@sebastiankamph 4 months ago
RTX 4090 currently. Previously RTX 3080.
@gorgep1242 5 months ago
Hey Sebastian. Nice video. Have you planned to make an AnimateDiff video for A1111?
@KINGLIFERISM 6 months ago
What you did that many do not: you made errors and corrected them. That is the sign of a person who understands user experience, and of a person with empathy. You, sir, are a nice person.
@sebastiankamph 6 months ago
That's very kind of you, thank you very much! You're the real MVP 😊💫
@LaloHao 4 months ago
What GPU were you using for local generation?
@leegregory5617 6 months ago
Can you do this starting with a still image, rather than a video?
@ShawnTWhitney 5 months ago
Great tutorial, but I'm having an install problem with the ComfyUI Manager in Google Colab because of a conflict between two component versions. Can anyone provide some insight into how to fix this? "Detected that PyTorch and torchvision were compiled with different CUDA major versions. PyTorch has CUDA Version=12.1 and torchvision has CUDA Version=11.8. Please reinstall the torchvision that matches your PyTorch install."
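The error above means the two packages were built against different CUDA major versions. A small sketch of the check, with the usual fix pattern printed at the end (the exact index URL depends on your PyTorch build; `cu121` matches the CUDA 12.1 reported in the message):

```python
def cuda_major(version: str) -> int:
    """Extract the major CUDA version from a string like '12.1'."""
    return int(version.split(".")[0])

# The two versions reported in the error message:
torch_cuda, torchvision_cuda = "12.1", "11.8"

if cuda_major(torch_cuda) != cuda_major(torchvision_cuda):
    # Reinstall torchvision from the wheel index that matches PyTorch's CUDA build:
    print("pip install --force-reinstall torchvision"
          " --index-url https://download.pytorch.org/whl/cu121")
```

Running that pip command in the Colab cell (prefixed with `!`) should pull a torchvision wheel compiled against the same CUDA major version as the installed PyTorch.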
@francsharma7276 4 months ago
Great video. I did it on a 3070 Ti 8 GB laptop at 480p resolution; I hope I can resize it. Love you, bro.
@marcus_ohreallyus 6 months ago
Thanks for all of your great vids. Can you help with a technical problem that happens when I use AnimateDiff in A1111? I always run out of memory with a CUDA error halfway through every AnimateDiff generation, but I have a 4090 with 24 GB VRAM. I've tried almost every .bat file argument suggested on Reddit, but the error still happens.
@sebastiankamph 6 months ago
Hey, glad you're liking the videos! What settings are you using? Mainly number of frames and resolution. Are you using xformers or SDP?
@marcus_ohreallyus 6 months ago
@sebastiankamph I have xformers on. Just the default frames, I think it's 16. And I don't do any upscale; resolution is 512 by 768. I don't think it truly is a memory error, because I know that people with much less VRAM do this with no problems. When I look at VRAM usage on my GPU monitor, there's plenty free during the generation. I thought maybe there was a setting you know about to make the error stop.
@sebastiankamph 6 months ago
@marcus_ohreallyus That's weird. I would try it in Comfy and see if you get similar results there.
@user-md7or2tt1t 3 months ago
Hi, my ComfyUI runs out of MPS backend memory when generating images. Any help?
@Inner-Reflections-AI 6 months ago
Nice! Thanks for highlighting my workflows! I have learnt a lot from your videos!
@sebastiankamph 6 months ago
I'm happy to hear that! Comfy is probably where I've played the least so far, but for some tasks it feels like a necessity :) Keep up the good work. Maybe an img2video workflow next?
@EmilyNilsen 6 months ago
I think I'm hooked on your papa jokes.
@sebastiankamph 6 months ago
Lovely, aren't they? Everyone needs more dad jokes in their lives 😊💫
@zzx2879 5 months ago
Hi, I have a question. When I apply the ControlNet, I get the following error message; would you recommend a fix? Thanks. "Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)"
@zzx2879 5 months ago
I'm applying the same ControlNet as you, just using a different checkpoint (Photon v1).
@digidope 6 months ago
Context window: that is the fps that the model is trained for, and you should NOT change it with AnimateDiff. If you use a model other than AnimateDiff, like HotShot, then HS is trained at 12 fps and you need to change that value.
@EarthWalkerOne 6 months ago
Not fps, but the number of frames that AnimateDiff chunks together for temporal consistency. It is motion-model dependent though, so increasing the value higher than what the motion model was trained for will give you worse results.
@alpineuniverse 5 months ago
Great tutorial! Thank you for doing this. Quick correction: I think you mean "convergent" instead of "divergent". We want our sampler to converge on an image, not diverge from it. Cheers!
@sebastiankamph 5 months ago
You're absolutely right, thank you!
@ufukzayim6689 6 months ago
Do you have the same or a similar tutorial for A1111, please?
@Eightysixstudios 5 months ago
Getting this error message when trying to follow: "Error occurred when executing KSampler: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)"
@Caret-ws1wo 4 months ago
Any plans to make a video like this for A1111?
@havelicricket 3 months ago
(Please help me) Error occurred when executing KSampler: 'NoneType' object has no attribute 'shape'
@LianParma 5 months ago
I've tried to run this and it just won't work. I even tried a fresh install of ComfyUI, and even more errors pop up when it gets to the KSampler. Is it broken for anyone else, or is it just me? :D
@diklashain 1 month ago
Hey, I keep getting the same error message: "Error occurred when executing KSampler: module 'comfy.sample' has no attribute 'prepare_mask'". It happens with all the workflows, and I can't find solutions on Google... Could you please help me?
@user-cf4on1qy8n 4 months ago
Does anyone know why my Mac M1 (Sonoma) generates black images when using AnimateDiff through ComfyUI?
@SLAMINGKICKS 1 month ago
What does this mean? "Motion module 'mm_sd_v15_v2.ckpt' is intended for SD1.5 models, but the provided model is type SDXL."
@sebastiankamph 1 month ago
There are two types of models (well, more, but simplified): 1.5 and SDXL. Most stuff works best with 1.5 models. So you need to swap your SDXL model for a 1.5 one (the main model you have selected).
@CrudelyMade 1 month ago
Do you have a document or video that summarizes the differences between the samplers, like Euler, Karras, etc.? You explained some of the differences between the models in this video, and it was a great explanation. Would love to see a breakdown of the rest. :-)
@sebastiankamph 1 month ago
Yes, check my beginner's guides on Stable Diffusion, where I show A1111. I go through samplers there.
@Maneef-uf3kh 26 days ago
Will this work on an iMac?
@user-pc7ef5sb6x 6 months ago
I noticed a difference in quality between Auto1111 and Comfy AnimateDiff. Animations are less jittery in Auto1111. Not sure if it's something in the way Auto1111 processes the images.
@sebastiankamph 6 months ago
Interesting! I haven't done a comparison between the two. Let me know if you find out more about that.
@art3112 6 months ago
@sebastiankamph I would be interested in a quality comparison between Comfy and A1111 too.
@samsilva7209 1 month ago
When the AnimateDiff output is completely black, what went wrong? I'm on an M1 Max 64 GB.
@tambe2182 5 months ago
"[Errno 2] No such file or directory" — I'm getting this error, what should I do?
@dragongaiden1992 3 days ago
Friend, it is functional, but my result is nothing like my input video; it is completely different. I should mention that I am using an SDXL checkpoint and the SDXL AnimateDiff model. Tonight I hope to continue testing. Thanks for the video; it takes me about 15-20 minutes to render 50 frames locally.
@mr-s23 4 months ago
Can I change the setting to more than 4 seconds?
@Shingo_AI_Art 6 months ago
That bit about divergence with samplers was interesting; it would be nice to have a video reviewing all of them.
@uncleben2019 3 months ago
Where do I get woman.mp4 (or women in general?)
@ohheyvoid 3 months ago
That joke got me! :D
@icchansan 6 months ago
Looks like an amazing tutorial, but I'm getting lots of errors from missing nodes. The Manager seems to get them from the repo, but I'm still getting errors with VHS VideoCombine.
@sebastiankamph 6 months ago
All nodes in those workflows should be installable with the Manager. Let me know if they aren't. If VHS VideoCombine gives you errors, try the regular VideoCombine. Just double-click and type "combine" to see the ones you have available.
@MrRomanrin 5 months ago
I am searching for an Img2Img + animation workflow.
@khanhtrong7761 6 months ago
Between ComfyUI and the A1111 WebUI, which one gives better results?
@sebastiankamph 6 months ago
For AnimateDiff, I find that there are more options with ComfyUI.
@EonsAway 6 months ago
Hiya, how do you get the preview image in the KSampler?
@sebastiankamph 6 months ago
If you click Manager and check the top left, there's a preview dropdown there.
@SebAnt 6 months ago
Thanks! I had no desire to try animations but got sucked into the rabbit hole and tried all 3 examples you taught. Your teaching is crystal clear 🙏🏼 I did stumble into some error messages that my path was too long, and I had to modify the registry to enable long pathnames. Last week I learned how to use ReActor to do face swaps in A1111. If it's possible to use that within ComfyUI for videos too, it would be awesome if you could do a tutorial on that.
@sebastiankamph 6 months ago
I'm happy you keep enjoying the videos, Sebastian! And I very much appreciate your support; it is a great help towards me creating these videos. There are face swappers for video, yes. I will for sure get a tutorial done on one of them soon. That's a great idea 😊🌟
@SebAnt 6 months ago
@sebastiankamph I just saw a video where someone did a face swap in SD using ReActor. Can this be done in ComfyUI? kzbin.info/www/bejne/bJW8kGSKm9-Nitksi=XusTReirIFc0nE8u
@KINGLIFERISM 6 months ago
@sebastiankamph Second on that. But consider this idea (I am not good at ComfyUI but have a grasp of a concept I can't implement myself): since Roop has a resolution limitation due to the model it uses, why not do IP-Adapter + face in the workflow, then merge/blend the face in Roop? That way you get a higher-resolution face, and that image then goes into the video generation process... Let me know what you think.
@choboruin 2 months ago
It took me 40 minutes with a 3080 and a Ryzen 5900 to generate a puddle of mud. What kind of PC are you using? You seem to render fast LOL.
@AloofandAbsurd 2 months ago
I'm glad I'm not the only one who feels like they wasted their time. Followed everything to a T and I'm STILL missing the input box. Going through all this for a 5-second looping video? Uh, no thanks. I've watched way too many videos on this topic. Comfy, my arse. I don't see a point in asking questions; I'll get no answers! Anyway, good luck to you.
@JohnLeeKim 4 months ago
Woooh, it's working!
@PlayerGamesOtaku 4 months ago
Hi, why didn't you show how to install many of the nodes? I can't find "number of frames" and many other nodes.
@sebastiankamph 4 months ago
Use the Manager; see the Comfy install guide.
@Disco_Tek 6 months ago
I'm using SDXL and Hotshot. I'm not generating from empty latents though, and I'm running multiple ControlNets and TemporalNet... but the results are extremely good.
@RenoRivsan 2 months ago
Where do I add the ControlNet?
@koningsbruggen 6 months ago
I love these tutorials
@sebastiankamph 6 months ago
I love seeing you in the comments again!
@juschu85 6 months ago
7:09 I think converge is the word you're looking for.
@A_G420 5 months ago
I may give ThinkDiffusion a shot. Running a 6800 on Arch Linux, the basic txt2vid from your tutorial took 22 minutes to generate. No ControlNet for Comfy on ThinkDiffusion kind of sucks.
@sebastiankamph 5 months ago
Works well for me. What are you missing?
@A_G420 5 months ago
@sebastiankamph From looking at the site it appears that way. I'll do a free trial today and mess around with it. Thanks.
@user-xj3fg3bd3d 13 days ago
What should I do if there's a message that some node types were not found?
@sebastiankamph 7 days ago
Did you go into the Manager and try installing missing custom nodes?
@mikerotchburns1622 2 months ago
The lineart preprocessor is bugged or something; it's getting hung up on that. Updating the nodes does nothing, it's not shown as a missing node, and changing to OpenPose does nothing.
@Suketh 1 month ago
Try setting "Enable win32 long paths" to 1 in your Registry Editor.
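The long-path fix above corresponds to the documented `LongPathsEnabled` registry value. A guarded sketch of applying it (Windows-only, needs an elevated prompt; on other systems it just prints the command):

```python
import platform
import subprocess

# Windows stores the long-path switch under this well-documented registry value.
CMD = ["reg", "add", r"HKLM\SYSTEM\CurrentControlSet\Control\FileSystem",
       "/v", "LongPathsEnabled", "/t", "REG_DWORD", "/d", "1", "/f"]

if platform.system() == "Windows":
    subprocess.run(CMD, check=True)  # run from an elevated (admin) prompt
else:
    print("Windows-only tweak; the command would be:", " ".join(CMD))
```

A reboot (or at least restarting the affected application) may be needed before the setting takes effect.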
@kevinbrennan4066 6 months ago
Thanks for this great video!
@sebastiankamph 6 months ago
Glad you liked it! 🌟😊
@AshChoudhary 5 months ago
How can we add a LoRA to this?
@stereotyp9991 6 months ago
Hi! Where are you from?
@MrRomanrin 5 months ago
OK, WHERE ARE THE WORKFLOW PICTURES?
@dkontey6421 6 months ago
Where is this video-to-video workflow?
@kkryptokayden4653 6 months ago
Is it possible to also swap faces in vid2vid, maybe by adding a ReActor or Roop node?
@Full_Zeb 5 months ago
My Comfy won't let me install any nodes.
@abdullahxie 6 months ago
I have been getting a VHS_VideoCombine error for a week, can you help me?
@sebastiankamph 6 months ago
Make sure to update all.
@abdullahxie 6 months ago
@sebastiankamph Still not working. Maybe it's because of the ffmpeg version?
@techviking23 6 months ago
WOWWWW
@sebastiankamph 6 months ago
Now go make some fantastic animations!
@MatichekYoutube 6 months ago
And for the finale: insane LCM speeds plus AnimateDiff? Let's go?
@sebastiankamph 6 months ago
Yasss! 😁
@MatichekYoutube 6 months ago
@sebastiankamph I can't imagine you'd be able to do a 5-minute video in 10 minutes of rendering time :)
@MilesBellas 6 months ago
A new AnimateDiff playlist next?
@sebastiankamph 6 months ago
That sounds like a fantastic idea!
@rooqueen6259 14 days ago
Has anyone run into the "loading 2 new models" step stopping at 0%? I also had a case where "loading 3 new models" reached 9% and went no further. What is the problem? :c
@felipealmeida5880 6 months ago
It's cool, but the animations are not consistent, they are short, and if you don't have a very good PC it takes a long time to generate just a few seconds.
@skybeast2738 2 months ago
True, although with ControlNet animations can be quite consistent.
@dhanang 2 months ago
AI is still at a very early stage. Considering what these programs could do just a year ago, this is a leap.
@ThatGuyNamedBender 1 month ago
I guarantee it takes less time than rendering a 2-second clip in Blender. Script kiddies are too demanding nowadays. Bro, you're already generating art with text, not time or skill; deal with the limitations 🤣🤣🤣
@XobyThePoet 3 months ago
I wish I could get this to work. I am envious of those who can. I have wasted money trying to get this to work and I am done with it. Who do I have to pay to just set this up on my computer for me?
@AraShiNoMiwaKo 4 months ago
How long until an AI does all of this for me? Absolutely not user-friendly.
@ArdiUtamaIDWGD 5 months ago
1:49 how to load
@BenKDesigns 6 months ago
It's weird how you're always plugging ThinkDiffusion's subpar services. What about RunDiffusion?
@techviking23 6 months ago
Tried RunDiffusion; couldn't figure out how to download a model, and when I wanted to install an extension I couldn't. It seems broken or just too complicated. ThinkDiffusion is way better for me personally.
@sebastiankamph 6 months ago
I like a good underdog story and I think the service is fantastic. It's a small independent team of passionate developers pouring their hearts into creating the best possible product with open-source means. I'm sure RunDiffusion does fantastically already, most likely having more users, revenue, and money at their disposal as the bigger company.