Easy Image to Video with AnimateDiff (in ComfyUI)

38,687 views

Koala Nation

Comments: 115
@koalanation · 5 months ago
If you prefer it without the AI voice, check out the version with the original voice (in Spanish): kzbin.info/www/bejne/faSyhaF9mr12eck
@tech3653 · 3 months ago
Any tutorial for easy offline voice translation using AI?
@Kratos30000 · 3 months ago
Which GPU do you need for these kinds of animations?
@DanielThiele · 2 months ago
Honestly, it's really good for an AI voice. No hablo español, señor. Muchas gracias. :D
@koalanation · 2 months ago
@DanielThiele 🤣🤣🤣
@chrisfletcher2646 · 1 month ago
I've been frustrated with AI voiceovers on a lot of the stuff I watch, but now that I know you did this as a non-native speaker, it makes sense and I'm grateful as well. ¡Muchas, muchas gracias!
@BeratLjumani · 1 month ago
Mostly OK tutorial, but the main issue I have is that this needs a part 1 showing all the files you downloaded for the checkpoint loader, LoRA loader, etc., because if you don't have that stuff, you're just left scrambling on Civitai trying to find the same files you used, and that's annoying and confusing.
@koalanation · 1 month ago
Got it. My assumption is that the viewer already knows this, as I have shown how it is done (for other models) in other videos. But I see that is not true for everyone. Showing it every time may become repetitive, though... I may make a short video showing how it is done and refer to it from other videos.
@levi_melon369 · 10 days ago
Your voice makes everything much clearer. May I have the workflow for this? It would help a lot!
@koalanation · 7 days ago
ko-fi.com/s/3dbeef74fd
@sebastiancasanova8292 · 3 days ago
It's an AI voice, lol.
@boo3321 · 5 months ago
Very easy tutorial; it only took me HOURS to do. I'm curious how to make people walk or move with ComfyUI.
@koalanation · 5 months ago
Well, I cut quite a bit to show only the main steps; otherwise the video is mostly rendering... For moving people, ControlNet with a reference video of someone walking is probably the way. It should also be possible with MotionDirector, I believe, but I need to find the time to try it and see if the results are 👍
@user-cb4jx8og2k · 4 months ago
Great video; you skipped some steps, but it is still detailed. Question: do we not need to change the text prompt for each randomized pic? Also, why did you use the Load Video (Path) node for an image?
@koalanation · 4 months ago
Hi! In principle you do not need to change it, but you can, of course. Take into account that the 'tile' ControlNet is rather strong, so you cannot do big transformations. The Load Video node allows you to use HTTP addresses, but the Load Image node does not (at least it did not work for me). That is why I use it for the randomized image.
@joshuadelorimier1619 · 11 days ago
I appreciate your tutorials; glad you don't just show a workflow.
@user-Cyrine · 4 months ago
Love your videos so much! Can you make a tutorial video on FlexClip’s AI tools? Really looking forward to that!
@koalanation · 4 months ago
Thanks for the idea!
@ristom1 · 3 months ago
Kickass video, man! I'm trying to learn cool AI like this for music visuals; this is 10/10 cool. I'm also going to do Blender renders as bases and use AI to make them trippy. Do you have any tutorials for video-to-video?
@koalanation · 2 months ago
Check out the morphing and audioreactive videos. Using masks is more complicated but gives you more power to play with.
@ristom1 · 2 months ago
@koalanation Thank you!
@SemorezX · 2 months ago
Awesome work, thank you so much!
@VanessaSmith-Vain88 · 5 months ago
Can you set up the whole thing for us to use?
@lildrill · 2 months ago
🤣😂
@tianhayamizu8815 · 1 month ago
Hello, I often watch your videos to learn. Could you explain how to create a long animation, like one over 10 seconds, using AnimateDiff? Thank you!
@koalanation · 1 month ago
With AnimateDiff with context options, it is possible to make animations as long as your machine can handle. At 8 fps, for example, a 10-second animation needs 80 frames. The number of frames is set by the batch size of the Empty Latent (or equivalent) connected to the KSampler.
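A quick sketch of that arithmetic (plain Python, just to illustrate the reply above; the helper name is mine, not a ComfyUI API):

```python
# Total frames = duration x fps; this total is what goes into the
# batch size of the Empty Latent feeding the KSampler.

def frames_needed(duration_s: float, fps: int) -> int:
    """Latent batch size required for a clip of the given length."""
    return round(duration_s * fps)

print(frames_needed(10, 8))   # 80 frames for a 10 s clip at 8 fps
print(frames_needed(10, 24))  # 240 frames at 24 fps
```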
@tianhayamizu8815 · 1 month ago
@koalanation I'm a beginner and really don't understand how to go about it, especially since I have no experience in animation. If you have the time, could you possibly create a tutorial video on how to make a short film using AnimateDiff? I would really appreciate it and am very eager to learn more. Thank you so much!
@tianhayamizu8815 · 1 month ago
@koalanation Could I ask: in case my computer isn't powerful enough, would it be possible to generate a few seconds of video at a time and then use other software to stitch the clips together into a short film a few minutes long?
@koalanation · 1 month ago
@tianhayamizu8815 You can always make short clips and then stitch them together with a regular video editor.
@koalanation · 1 month ago
This may be too detailed for you, but give it a try: kzbin.info/www/bejne/o4LbmGqpp7Cdrqssi=X5XLYro5PQ1pfOYE
@estebanmoraga3126 · 4 months ago
Thanks for the tutorial! Question: is it possible to feed Comfy a reference video so it animates the image using that video as a reference? Say I have an image of a character and I give Comfy a video of someone skateboarding: is there a method to get Comfy to animate the character skateboarding based on the video? Cheers and thanks in advance!
@koalanation · 4 months ago
Yes! You can use a reference video with ControlNets such as OpenPose, depth, lineart, etc., to guide the composition of each frame. There are many videos and tutorials about it.
@estebanmoraga3126 · 4 months ago
@koalanation Thanks for replying! The most I've been able to find are tutorials on animating a reference image using prompts, or generating a video from another video also using prompts. I have yet to find one where they animate a reference image based on a reference video; guess I just have to look harder, though!
@koalanation · 4 months ago
Check out: kzbin.info/www/bejne/joCYloGAZr1lqKs. Take into account that it is rather complex, with all the samplers and so on. Here: kzbin.info/www/bejne/gZKXdoGaa5iJeNE, I think it is clearer, but note that the IP Adapter node no longer works like in the video.
@sohamkokate5794 · 1 month ago
Hi! I get "Given groups=1, weight of size [320, 5, 3, 3], expected input[16, 4, 96, 64] to have 5 channels, but got 4 channels instead". Can you help?
@koalanation · 1 month ago
It seems to be related to one of the ControlNet or AnimateDiff models. Try to change or bypass the ControlNet and AnimateDiff nodes one by one and see if the workflow runs. When you have found where the issue is, check that the model is correct.
@LuigiEspositoGraphic · 2 months ago
It works well, but the result has far less detail than the original image. How can I fix that?
@koalanation · 1 month ago
You may want to increase the tile settings, but yes, the method does change things. Also try using a checkpoint that matches the style of the original image (realistic, cartoon, anime...) to get better adherence. Those are some ideas... obviously, image upscaling or a second AnimateDiff pass may also help.
@bordignonjunior · 5 months ago
Geez, this takes long to run. Which GPU do you have? Amazing tutorial!
@koalanation · 5 months ago
Hi! Thanks! I am using an RTX 4090/3090 or A5000 via RunPod, which generates the video rather fast. You can try to decrease the number of frames and the resolution of the images. Also try interpolation with a factor of 3 instead of 2.
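For reference, a rough sketch of why a higher interpolation factor saves render time (assumed numbers, not benchmarks): only the base frames cost a sampling pass, and the interpolator (e.g. RIFE) fills in the rest.

```python
import math

def base_frames(target_frames: int, factor: int) -> int:
    """Frames you must actually sample to reach target_frames after interpolation."""
    return math.ceil(target_frames / factor)

print(base_frames(48, 2))  # 24 sampled frames, interpolated x2
print(base_frames(48, 3))  # 16 sampled frames, interpolated x3
```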
@kargulo · 2 months ago
I have a 4060 with 16 GB, and 50% took 15 min; that is the first generation. I hope the next one will be faster :)
@Rachelcenter1 · 2 months ago
4:53 Those blur effects you put over your video make it hard to see what you're doing.
@koalanation · 2 months ago
Yep, I got too enthusiastic with the effects when editing the video... I promise not to overdo it next time.
@lechu89 · 1 month ago
Hi! I get "IPAdapterUnifiedLoader - ClipVision model not found." Can you help?
@koalanation · 1 month ago
Hi! This node was supposed to simplify combining the IP Adapter and CLIPVision models, but on some systems it seems to cause more problems than it solves. My advice is to use the IP Adapter model loader and the CLIPVision model loader separately, and connect them (and the model) independently to the IP Adapter node.
@sebastiancasanova8292 · 3 days ago
@koalanation As someone who merely followed your steps and hit the same problem, I have no idea what any of this means or does, so I don't understand your solution. Could you explain what to do like I'm 5? Please and thanks a lot.
@koalanation · 3 days ago
@sebastiancasanova8292 Don't use the Unified Loader. Use the CLIPVision loader and the IP Adapter model loader instead.
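A hedged sketch of that wiring in ComfyUI's API (JSON) format. The class names are taken from the ComfyUI_IPAdapter_plus repo ("IPAdapterModelLoader", "IPAdapterAdvanced") plus the built-in "CLIPVisionLoader"; the file names and node IDs are placeholders, so use whatever sits in your models/ipadapter and models/clip_vision folders, and expect names to vary between versions.

```python
# Each entry is node_id -> {class_type, inputs}; a [node_id, output_index]
# pair wires one node's output into another node's input.
workflow = {
    "10": {"class_type": "IPAdapterModelLoader",
           "inputs": {"ipadapter_file": "ip-adapter-plus_sd15.safetensors"}},
    "11": {"class_type": "CLIPVisionLoader",
           "inputs": {"clip_name": "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"}},
    "12": {"class_type": "IPAdapterAdvanced",    # the apply node
           "inputs": {"model":       ["1", 0],   # your checkpoint/AnimateDiff model
                      "ipadapter":   ["10", 0],  # from the IPAdapter model loader
                      "clip_vision": ["11", 0],  # from the CLIPVision loader
                      "image":       ["2", 0],   # reference image
                      "weight": 1.0}},
}
```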
@SeanOdyssey · 2 months ago
Thank you!
@Rachelcenter1 · 2 months ago
4:54 I got to this part of the tutorial; my workflow was at 88% on the KSampler, and then the word "reconnecting" came over the screen. Terminal: [AnimateDiffEvo] - INFO - Using motion module AnimateLCM_sd15_t2v.ckpt:v2. Unloading models for lowram load. UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' (I have a 128 GB RAM computer.)
@koalanation · 2 months ago
Hi! It looks a bit odd to have 128 GB and stop at 88%... sometimes ComfyUI crashes when the CPU is overloaded. Try testing with fewer or smaller frames to see if that is the case.
@Rachelcenter1 · 2 months ago
@koalanation When a video loader node is present, you can set select_every_nth: if you put 1, it uses every frame of the video; if you choose 2, every other frame... Since you don't have that node, what is the equivalent in your workflow?
@Rachelcenter1 · 2 months ago
@koalanation I tried 16 frames and all it gave me was an all-black box.
@YING180 · 4 months ago
Thank you for your video; it's very helpful.
@dschony · 3 months ago
It was a little problematic to install all these modules and nodes. The WebUI crashed and I had to update it, recover the venv, and reinstall ComfyUI's dependencies... it took hours. Not for newbies.
@dschony · 3 months ago
OK, I found that compared to the time generation takes, the little time needed to fix the environment is nothing. But I like the tutorial ;)
@misterV123 · 2 months ago
@dschony Hey! Which GPU do you use, and how much time does it take to generate a 1-2 second video?
@dschony · 2 months ago
@misterV123 GPU: NVIDIA GeForce RTX 3060 / 8 GB VRAM. It takes about 1 hour for 2 seconds at a frame rate of 30, or about 1 min per picture. It depends on the models and nodes used, the settings (steps), and more.
@dschony · 2 months ago
Well, I found that it's better not to use the Stable Diffusion WebUI with the ComfyUI extension, but a separate standalone installation of ComfyUI with its own environment.
@koalanation · 2 months ago
Good that you could find a workaround. Custom nodes and models can be tricky in ComfyUI with all the updates and constant changes. Thanks for providing such good advice to others!
@CuddleUTube · 26 days ago
Is there a way to reduce VRAM load? I don't mind waiting longer, but at the moment I legitimately can't even start this.
@koalanation · 26 days ago
@CuddleUTube Use smaller and fewer frames. Also try to reduce the context size.
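To make "context size" concrete, here is a sketch of the settings involved (assumed typical values for AnimateDiff-Evolved's context options, not exact node defaults):

```python
# The sliding-window settings that drive VRAM use: fewer frames per window
# (context_length) means fewer frames sampled at once, so lower peak VRAM,
# at the cost of processing more windows.
context_options = {
    "context_length": 16,   # frames per window; try 8-12 if VRAM is tight
    "context_overlap": 4,   # frames shared between consecutive windows
}
```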
@SiverStrkeO1 · 4 months ago
Great video! I'm new to all this and I'm wondering if there is a way to keep the details. I'm trying to use a city skyline for image-to-video, and there, for example, a lot of windows get removed.
@koalanation · 4 months ago
That seems difficult with this method if the windows are small. Reducing the scale factor may work. Otherwise, some trick with masks and ControlNets might work, but I have not really tried it with SparseCtrl.
@dsnake10 · 5 days ago
"The size of tensor a (257) must match the size of tensor b (577) at non-singleton dimension 1". Any help?
@koalanation · 2 days ago
Which node is giving you the error? Have you tried updating the node packs and ComfyUI?
@dsnake10 · 2 days ago
@koalanation Thanks, fixed! I was mixing SD1.5 and SDXL on different adapters. Thanks for answering!
@koalanation · 2 days ago
@dsnake10 Great that you figured it out! 👍
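For anyone else hitting 257 vs. 577: a back-of-envelope check of where those numbers come from (standard ViT patch math; mapping them to specific model files is an assumption):

```python
# A CLIP-Vision encoder outputs (image_size / patch_size)^2 + 1 tokens
# (+1 for the class token), so 257 and 577 come from two different encoders:
# a 224 px model vs. a 336 px one. Mixing SD1.5 and SDXL IP-Adapter parts
# pairs an adapter with the wrong encoder.

def vit_tokens(image_size: int, patch_size: int = 14) -> int:
    return (image_size // patch_size) ** 2 + 1

print(vit_tokens(224))  # 257
print(vit_tokens(336))  # 577
```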
@VanessaSmith-Vain88 · 5 months ago
Yeah, that was really easy, a piece of cake 🤣
@koalanation · 5 months ago
Yes, it was 🤣
@jaydenvincent2007 · 4 months ago
When I click Queue Prompt, it says "TypeError: this is undefined" and nothing happens. I have all the required nodes/models, and ComfyUI is updated and restarted. Can you please help?
@koalanation · 4 months ago
Hi! I have never encountered this error... Googling it points to an issue with MixLab nodes; not sure if that is your case. Maybe try disabling or uninstalling custom nodes to see if one of them is affecting ComfyUI.
@policani · 3 months ago
Sparse Control Scribble is also difficult to search for. I get no results for all three words together, and three results for "Control Scribble".
@koalanation · 3 months ago
The models are here: huggingface.co/guoyww/animatediff/tree/main
@HOT4C1DR41N · 4 months ago
I couldn't make it work :( I get this error every time: "Error occurred when executing ADE_ApplyAnimateDiffModel: 'MotionModelPatcher' object has no attribute 'model_keys'"
@koalanation · 4 months ago
Seems odd... are you using AnimateLCM_t2v? Maybe try another model to see if it runs, or use the Gen 1 AnimateDiff Loader.
@katonbunshin5935 · 4 months ago
I have the same issue.
@koalanation · 4 months ago
Use the model at civitai.com/models/452153/animatelcm and make sure the nodes and ComfyUI are up to date.
@katonbunshin5935 · 4 months ago
@koalanation Oh... I wrote the solution here, but I don't know why it was not added... In my situation, there was a problem when I was updating AnimateDiff from the Manager. To fix it, remove AnimateDiff from custom_nodes and get AnimateDiff from the repo, then place it in custom_nodes. That works for me.
@koalanation · 4 months ago
OK! I have not seen it either... anyway, sometimes these things happen during updates.
@frankliamanass9948 · 5 months ago
It all works and animates the image, but every time it comes out very bright and faded. Any suggestions on how to fix it?
@frankliamanass9948 · 5 months ago
It appears the results in the tutorial are also faded and over-brightened, but at the end when you show examples they look fine. Did you find a fix, or was it done in your post-processing?
@koalanation · 5 months ago
Depending on the source image, settings, etc., the result might be too dark or too bright, as you say. There are nodes that can correct that; I like Image Filter Adjustments. But I think it is better to use a regular video editor; it is faster and easier.
@hamster_poodle · 5 months ago
Hello! Does SparseCtrl work properly with AnimateDiff LCM, and not only V3?
@koalanation · 5 months ago
Hi! It works with the V3 LoRA adapter. I am not sure if that is the way it was intended, but it does something. I have tried to use the RGB sparse model, but I have not managed to get it to work nicely... you can also switch to version 3 and fine-tune the results, but obviously generations will take longer.
@marcdevinci893 · 3 months ago
I followed carefully and really want to get this going, but I'm getting a KSampler error: "Given groups=1, weight of size [320, 5, 3, 3], expected input[32, 4, 96, 64] to have 5 channels, but got 4 channels instead"
@koalanation · 3 months ago
Try to change or bypass the ControlNet and AnimateDiff nodes one by one and see if the workflow runs.
@doctor_risk · 2 months ago
Is it possible to input two pictures and have the AI make a video transitioning from one to the other?
@koalanation · 1 month ago
Yes, you can do transitions using masks. Check out my morphing and audioreactive videos to get an idea.
@Shahriar.H · 2 months ago
"ModelPatcherAndInjector.patch_model() got an unexpected keyword argument 'lowvram_model_memory'". I'm getting the above error on the KSampler before the VAE Encode node. How do I fix this? Edit: I'm using Stability Matrix to run ComfyUI, if that is relevant information.
@koalanation · 1 month ago
Hi! There was an issue raised on the AnimateDiff GitHub; they said it should have been fixed. Try updating both the AnimateDiff Evolved nodes and ComfyUI. I do not know how that is done in Stability Matrix... in ComfyUI, I normally do it via the Manager.
@vl7823 · 4 months ago
Hey, I'm getting this error: "Could not allocate tensor with 828375040 bytes. There is not enough GPU video memory available!" I have an AMD RX 6800 XT with 16 GB VRAM. Any workaround or fix? Thanks!
@koalanation · 4 months ago
Hey! I'm not sure what the messages mean on AMD, but you can first try reducing the size of the latents and/or the batch size. It looks like a VRAM limitation.
@misterV123 · 2 months ago
Hey! Were you able to fix it and get it running?
@elifmiami · 3 months ago
I was wondering, how did you get the node numbers to appear on the boxes?
@koalanation · 3 months ago
If you go to the Manager, in the left column you will see the 'Badge' option. There you can set the node number to appear over each node.
@elifmiami · 3 months ago
@koalanation Thank you!
@MarcusBankz68 · 4 months ago
I'm getting an error with IPAdapterUnifiedLoader; it says the ClipVision model was not found. I've downloaded a few versions and put them in my clip_vision folder, but I'm still getting the error. Is there a specific one for this node?
@koalanation · 4 months ago
IP Adapter can be confusing sometimes... try to load the IP Adapter model and CLIPVision separately (without the Unified Loader), following the instructions in the IP Adapter repo. I like Plus and ViT-G. github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file
@ForChiddlers · 1 month ago
@koalanation I got this IP Adapter ClipVision error as well. What, concretely, can we do? It seems that an IP Adapter has to be fed into the IPAdapter Unified Loader's left input. But where does it come from? And why does it work without it on your machine?
@koalanation · 1 month ago
@ForChiddlers It only needs the model as input. The preset should load the IP Adapter and CLIPVision models, but the node sometimes messes up. In case of issues, it is better to use the CLIPVision loader and the IPAdapter loader individually, and connect them directly to the IPAdapter apply node (without the Unified Loader).
@joonienyc · 5 months ago
Hey buddy, how did you copy the second KSampler with all its connections duplicated, at timeline 4:40?
@koalanation · 5 months ago
Copy normally with Ctrl+C, then paste with Ctrl+Shift+V to keep the input connections.
@joonienyc · 5 months ago
@koalanation Thank you, my man!
@kizentheslayer · 3 months ago
Where do I save the AnimateLCM model to?
@koalanation · 3 months ago
models/animatediff_models
@generalawareness101 · 3 months ago
Yeah, no to anything SD1.5.
@AB-wf8ek · 3 months ago
Did SD1.5 hurt your feelings?
@ManuelViedo · 4 months ago
"easy"
@koalanation · 4 months ago
🤣🤣🤣