If you prefer it without the AI voice, check out the version with the original voice (in Spanish): kzbin.info/www/bejne/faSyhaF9mr12eck
@tech3653 · 3 months ago
Any tutorial for easy offline voice translation using AI?
@Kratos30000 · 3 months ago
Which GPU do you need for this kind of animation?
@DanielThiele · 2 months ago
Honestly, it's really good for an AI voice. No hablo español, señor. Muchas gracias. :D
@koalanation · 2 months ago
@@DanielThiele 🤣🤣🤣
@chrisfletcher2646 · a month ago
I've been frustrated with AI voiceovers for a lot of stuff I watch, but now that I know you did this as a non-native language it makes sense and I'm grateful as well. Muchas muchas gracias!!!!!!!
@BeratLjumani · a month ago
Mostly OK tutorial, but the main issue I have is that this maybe needs a part 1 showing all the files you downloaded for the Load Checkpoint, LoRA Loader model, etc., because if you don't have that stuff you're just left scrambling on Civitai to try and find the same files you use, and that's annoying and confusing.
@koalanation · a month ago
Got it. My assumption is that the viewer already knows it, as I have shown how it is done (for other models) in other videos. But I see that is not true for everyone. However, if I always show it, it may become repetitive... I may make a short video showing how it is done and refer to it in other videos.
@levi_melon369 · 10 days ago
Your voice makes everything much clearer. May I have the workflow for this? That would help a lot with convenience!
@koalanation · 7 days ago
ko-fi.com/s/3dbeef74fd
@sebastiancasanova8292 · 3 days ago
It's an AI voice lol.
@boo3321 · 5 months ago
Very easy tutorial, it only took me HOURS to do. I'm curious how to make people walk or move with ComfyUI.
@koalanation · 5 months ago
Well, I cut quite a bit to show only the main steps, otherwise the video is mostly rendering... For moving people, ControlNet with a reference video of someone walking is probably the way. It should also be possible with Motion Director, I believe, but I need to find the time to try it and see if the results are 👍
@user-cb4jx8og2k · 4 months ago
Great video, you skipped some steps but it's still detailed. Question: do we not need to change the text prompt for each randomized pic? Also, why did you use the Load Video Path node for an image?
@koalanation · 4 months ago
Hi! In principle you do not need to change it, but you can, of course. Take into account that the 'tile' ControlNet is rather strong and you cannot do big transformations. The Load Video node allows you to use HTTP addresses, but the Load Image node does not (at least it did not work for me). That is why I use it for the randomized image.
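If you would rather keep the regular Load Image node, one workaround is to download the remote image into ComfyUI's input folder first. A minimal Python sketch; the URL and the input-folder path are placeholder assumptions, adjust them to your setup:

```python
# Minimal sketch: fetch a remote image so the regular Load Image node
# (which reads local files from ComfyUI's input folder) can use it.
# The URL and the input-folder path are placeholder assumptions.
import urllib.request
from pathlib import Path

url = "https://example.com/random_image.png"   # hypothetical image source
input_dir = Path("ComfyUI/input")              # adjust to your installation
input_dir.mkdir(parents=True, exist_ok=True)

target = input_dir / "random_image.png"
urllib.request.urlretrieve(url, target)        # download the file
print(f"Saved {target}; select it in the Load Image node.")
```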
@joshuadelorimier1619 · 11 days ago
Appreciate your tutorials, glad you don't just show a workflow.
@user-Cyrine · 4 months ago
Love your videos so much! Can you make a tutorial video on FlexClip’s AI tools? Really looking forward to that!
@koalanation · 4 months ago
Thanks for the idea!
@ristom1 · 3 months ago
Kickass video man!!! I'm trying to learn cool AI like this for music visuals, this is 10/10 cool. Gonna also do Blender renders as bases and use AI to make them trippy. Have any tutorials for video to video?
@koalanation · 2 months ago
Check out the morphing and audioreactive videos. Using masks is more complicated but gives you more power to play with
@ristom1 · 2 months ago
@@koalanation thank you!!!
@SemorezX · 2 months ago
Awesome work, thank you so much
@VanessaSmith-Vain88 · 5 months ago
Can you set up the whole thing for us to use it?
@lildrill · 2 months ago
🤣😂
@tianhayamizu8815 · a month ago
Hello, I often watch your videos to learn. Could you explain how to create a long animation, like one over 10 seconds, using AnimateDiff? Thank you!
@koalanation · a month ago
With AnimateDiff with context options it is possible to make animations as long as your machine can handle. If you do, for example, 8 fps, you need 80 frames for 10 seconds. The number of frames is defined by the batch number in an Empty Latent (or equivalent) connected to the KSampler.
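A quick sketch of that arithmetic (total frames = fps × seconds, set as the batch size of the Empty Latent Image or equivalent); the context-length figure in the comments below is an illustrative assumption, not a value taken from the video:

```python
# Minimal sketch of the frame-count arithmetic described above.
fps = 8        # output frame rate before any interpolation
seconds = 10   # target clip length
frames = fps * seconds
print(f"Set batch_size = {frames} on the Empty Latent Image node")  # 80

# With AnimateDiff context options, the sampler works through those 80 latents
# in sliding windows (a context length of 16 is a common default), so clip
# length is limited mainly by RAM/VRAM and render time, not the window size.
```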
@tianhayamizu8815 · a month ago
@@koalanation I'm a beginner and really don't understand how to go about it, especially since I have no experience in animation. If you have the time, could you possibly create a tutorial video on how to make a short film using Animatediff? I would really appreciate it and am very eager to learn more. Thank you so much!
@tianhayamizu8815 · a month ago
@@koalanation Could I ask, in case my computer isn't powerful enough, would it be possible to generate a few seconds of video at a time and then use other software to stitch them together into a short film that's a few minutes long?
@koalanation · a month ago
@@tianhayamizu8815 You can always make short clips and then stitch them together with a regular video editor.
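If you prefer scripting the stitching instead of a video editor, one option (not the method shown in the video) is ffmpeg's concat demuxer. A minimal sketch, assuming ffmpeg is installed and the clips share codec, resolution, and frame rate; the file names are placeholders:

```python
# Minimal sketch: stitch short clips losslessly with ffmpeg's concat demuxer.
# Assumes ffmpeg is on PATH and all clips share codec, resolution and fps;
# the file names are placeholders.
import subprocess
from pathlib import Path

clips = ["clip_01.mp4", "clip_02.mp4", "clip_03.mp4"]  # hypothetical outputs
list_file = Path("clips.txt")
list_file.write_text("".join(f"file '{c}'\n" for c in clips))

subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", str(list_file),
     "-c", "copy", "long_clip.mp4"],
    check=True,  # raise if ffmpeg reports an error
)
```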
@koalanation · a month ago
This is maybe too detailed for you, but give it a try: kzbin.info/www/bejne/o4LbmGqpp7Cdrqssi=X5XLYro5PQ1pfOYE
@estebanmoraga3126 · 4 months ago
Thanks for the tutorial! Question: Is it possible to feed Comfy with a reference video for it to animate the image using said video as reference? Like, say I have an image of a character, and I give Comfy a video of someone skateboarding, is there a method with which I could get Comfy to animate the character skateboarding based on the video? Cheers and thanks in advance!
@koalanation · 4 months ago
Yes! You can use a reference video and use controlnets such as openpose, depth, lineart, etc, to guide the composition of each frame. There are many videos and tutorials about it.
@estebanmoraga3126 · 4 months ago
@@koalanation Thanks for replying! The most I've been able to find are tutorials on animating a referenced image using prompts, or generating a video using another video as reference, also using prompts. I have yet to find one where they animate a reference image based on a reference video; guess I just have to look harder tho!
@koalanation · 4 months ago
Check out: kzbin.info/www/bejne/joCYloGAZr1lqKs. Take into account that this is rather complex with all the samplers and so on. Here: kzbin.info/www/bejne/gZKXdoGaa5iJeNE, I think it is clearer, but take into account that the IP Adapter node does not work like in the video anymore.
@sohamkokate5794 · a month ago
Hi! "Given groups=1, weight of size [320, 5, 3, 3], expected input[16, 4, 96, 64] to have 5 channels, but got 4 channels instead" can you help?
@koalanation · a month ago
Seems to be related to one of the ControlNet or AnimateDiff models. Try to change or bypass, one by one, the ControlNet and AnimateDiff nodes and see if the workflow runs. When you have found where the issue is, check that the model is correct.
@LuigiEspositoGraphic · 2 months ago
It works well, but the details are much lower than in the original image. How can I fix it?
@koalanation · a month ago
You may want to increase the tile settings, but yeah, the method does change things. Try also to use a checkpoint corresponding to the style of the original image (realistic, cartoon, anime...) to get better adherence. Those are some ideas... obviously, image upscaling or a second AnimateDiff pass may also help.
@bordignonjunior · 5 months ago
Geeez this takes long to run. Which GPU do you have? Amazing tutorial!!!
@koalanation · 5 months ago
Hi! Thanks! I am using a RTX4090/3090 or A5000 via Runpod, which generates the video rather fast. You can try to decrease the number of frames and also the resolution of the images. Try to do interpolation with 3 frames instead of 2.
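Reading the interpolation advice as "use a 3x multiplier instead of 2x" in a frame-interpolation node (for example RIFE), a rough sketch of why that cuts render time; the numbers are illustrative assumptions:

```python
# Rough sketch: a higher interpolation multiplier means fewer rendered frames
# for the same output length, so less KSampler work. Numbers are illustrative.
target_fps = 24
seconds = 5
multiplier = 3                                  # interpolate 3x instead of 2x
rendered_frames = int(target_fps / multiplier * seconds)   # 40 instead of 60
output_frames = rendered_frames * multiplier               # 120 frames total
print(f"Render {rendered_frames} frames, interpolate x{multiplier} "
      f"-> {output_frames} frames at {target_fps} fps")
# The trade-off: interpolated motion can smear fast movement and fine detail.
```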
@kargulo · 2 months ago
I have a 4060 with 16 GB and it's at 50% after 15 min; that is the first creation. I hope the next one will be faster :)
@Rachelcenter1 · 2 months ago
4:53 those blur effects you put over your video make it hard to see what you're doing.
@koalanation · 2 months ago
Yep, I got too enthusiastic with the video effects when editing the video... I promise not to overdo it next time.
@lechu89 · a month ago
Hi! "IPAdapterUnifiedLoader - ClipVision model not found." can you help?
@koalanation · a month ago
Hi! This node was supposed to simplify combining the IP adapter and clipvision models, but for some systems it seems it gives more problems than a solution. My advice would be to use the IP adapter model loader and Clipvision model loader separately, and connect them (and the model) independently to the IP Adapter node
@sebastiancasanova8292 · 3 days ago
@@koalanation as someone who merely followed your steps and found the same problem, I have no idea what any of this means or does, so I don't understand your solution. Could you explain what to do like I'm 5? Please and thanks a lot.
@koalanation · 3 days ago
@@sebastiancasanova8292 don't use the unified loader. Use the Clipvision loader and the IP Adapter model loader.
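For what it's worth, a rough sketch of that wiring in ComfyUI's API (prompt) format, written as a Python dict. The node class names (CLIPVisionLoader, IPAdapterModelLoader, IPAdapterAdvanced), input names, and model file names are assumptions based on ComfyUI and the ComfyUI_IPAdapter_plus repo and may differ in your install, so double-check them against your own node list:

```python
# Assumption-heavy sketch: wiring the loaders separately (instead of the
# Unified Loader) in ComfyUI API/prompt format, written as a Python dict.
# Node class names, input names and model file names may differ by version;
# node ids "4" (checkpoint/model) and "7" (reference image) are placeholders,
# and some optional inputs of IPAdapterAdvanced are omitted here.
workflow_fragment = {
    "10": {
        "class_type": "CLIPVisionLoader",
        "inputs": {"clip_name": "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"},
    },
    "11": {
        "class_type": "IPAdapterModelLoader",
        "inputs": {"ipadapter_file": "ip-adapter-plus_sd15.safetensors"},
    },
    "12": {
        "class_type": "IPAdapterAdvanced",
        "inputs": {
            "model": ["4", 0],        # your (AnimateDiff-patched) model
            "ipadapter": ["11", 0],   # from IPAdapterModelLoader
            "clip_vision": ["10", 0], # from CLIPVisionLoader
            "image": ["7", 0],        # your reference image
            "weight": 0.8,
            "start_at": 0.0,
            "end_at": 1.0,
        },
    },
}
```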
@SeanOdyssey · 2 months ago
Thank you
@Rachelcenter1 · 2 months ago
4:54 I got to this part of the tutorial. My workflow was at 88% in the KSampler and then the word "reconnecting" came over the screen. Terminal: [AnimateDiffEvo] - INFO - Using motion module AnimateLCM_sd15_t2v.ckpt:v2. Unloading models for lowram load. UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' (I have a 128 GB RAM computer)
@koalanation · 2 months ago
Hi! That looks a bit odd, having 128 GB and stopping at 88%... sometimes ComfyUI crashes when the CPU is overloaded... try to test with fewer or smaller frames to see if that is the case.
@Rachelcenter1 · 2 months ago
@@koalanation When a video loader box is present you can go to select_every_nth: if you put 1 it's going to generate every frame of the video; if you choose 2 it's going to generate every other frame of the video... Since you don't have that box, what is the equivalent in your workflow?
@Rachelcenter1 · 2 months ago
@@koalanation I tried 16 frames and all it gave me was an all-black box.
@YING180 · 4 months ago
thank you for your video, that's very helpful
@dschony · 3 months ago
It was a little problematic to install all these modules and nodes. The WebUI crashed and I had to update it, recover the venv, and also reinstall ComfyUI's dependencies... it took hours. Nothing for newbies.
@dschony · 3 months ago
OK. Found out that, compared to the time it takes for generation, this little time to fix the environment is nothing. But I like the tutorial ;)
@misterV123 · 2 months ago
@@dschony Hey :) Which GPU do you use and how much time does it take to generate a 1-2 sec video?
@dschony · 2 months ago
@@misterV123 GPU: NVIDIA GeForce RTX 3060 / 8GB VRAM. It takes about 1 hour for 2 sec with a frame rate of 30, or 1 min per picture. It depends on the models and nodes used, on the settings (steps), and more.
@dschony · 2 months ago
Well, I found that it's better not to use the Stable Diffusion WebUI with the ComfyUI extension, but to use a separate standalone installation of ComfyUI with its own environment.
@koalanation · 2 months ago
Good you could find a workaround. Sometimes all the custom nodes and models can be tricky in ComfyUI, with all the updates and constant changes. Thanks for providing such good advice to others!
@CuddleUTube · 26 days ago
Is there a way to reduce VRAM load? I don't mind waiting longer, but atm I legit can't even start this.
@koalanation · 26 days ago
@@CuddleUTube Somehow use smaller and fewer frames. Try also to reduce the context size.
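As a rough illustration of why smaller and fewer frames help (activation memory scales roughly with latent width × height × frame count; the numbers below are assumptions, not measurements):

```python
# Rough illustration (not a measurement): activation memory scales roughly
# with latent width x latent height x number of frames, so shrinking any of
# them cuts VRAM use. SD1.5 latents are 1/8 of the pixel resolution.
def relative_cost(width: int, height: int, frames: int) -> int:
    return (width // 8) * (height // 8) * frames

baseline = relative_cost(768, 512, 96)   # hypothetical original settings
reduced = relative_cost(512, 384, 48)    # smaller frames, half the length
print(f"Reduced run needs ~{reduced / baseline:.0%} of the baseline activations")
```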
@SiverStrkeO1 · 4 months ago
Great video! I'm new to all that and I'm wondering if there is a way to keep the details. I'm trying to use a city skyline as image-to-video, and there, for example, a lot of windows are getting removed.
@koalanation · 4 months ago
That seems difficult with this method if the windows are small. Reducing the scale factor may work. Otherwise some trick with masks and ControlNets may work, but I have not really tried it with SparseCtrl.
@dsnake10 · 5 days ago
The size of tensor a (257) must match the size of tensor b (577) at non-singleton dimension 1- Any help?
@koalanation · 2 days ago
Which node is giving you the error? Try to keep the node packs and ComfyUI updated.
@dsnake10 · 2 days ago
@koalanation Thanks, fixed: I was mixing SD1.5 and SDXL on different adapters. Thanks for answering!
@koalanation · 2 days ago
@@dsnake10 great you figured it out! 👍
@VanessaSmith-Vain88 · 5 months ago
Yeah, that was really easy, piece of cake 🤣
@koalanation · 5 months ago
Yes, it was 🤣
@jaydenvincent2007 · 4 months ago
When I click Queue Prompt it says "TypeError: this is undefined" and nothing happens. I have all the required nodes/models, and ComfyUI is updated/restarted. Can you please help?
@koalanation · 4 months ago
Hi! I have never encountered this error...googling it refers to an issue with MixLab nodes...not sure if that would be your case. Maybe try to disable or uninstall custom nodes to see if there is one affecting ComfyUI.
@policani · 3 months ago
Sparse Control Scribble is also difficult to search for. I have no results for all three words, and three results for Control Scribble.
@koalanation · 3 months ago
The models are here: huggingface.co/guoyww/animatediff/tree/main
@HOT4C1DR41N · 4 months ago
I couldn't make it work :( I get this error every time: Error occurred when executing ADE_ApplyAnimateDiffModel: 'MotionModelPatcher' object has no attribute 'model_keys'
@koalanation · 4 months ago
Seems odd... are you using AnimateLCM_t2v? Maybe try with another model to see if it runs, or use the gen 1 AnimateDiff Loader.
@katonbunshin5935 · 4 months ago
I have the same
@koalanation · 4 months ago
Use the model at: civitai.com/models/452153/animatelcm and make sure the nodes and ComfyUI are up to date.
@katonbunshin5935 · 4 months ago
@@koalanation Oh... I wrote the solution here but I don't know why it was not added... So, in my situation, there was a problem when I was updating AnimateDiff from the Manager. To fix it, remove AnimateDiff from custom nodes and get AnimateDiff from the repo, then place it in custom nodes - works for me.
@koalanation · 4 months ago
OK! I have not seen it either... anyway, sometimes these things happen during updates.
@frankliamanass9948 · 5 months ago
It all worked and animates the image but every time it comes out very bright and faded. Any suggestion on how to fix it?
@frankliamanass9948 · 5 months ago
It appears the results in the tutorial are also faded and over brightened but at the end when you show examples they look fine. Did you find a fix or was it in your post processing?
@koalanation · 5 months ago
Depending on the source of the image, settings, etc., the image might be too dark or too bright, as you say. There are nodes that can adjust that; I like Image Filter Adjustments. But I think it is better to use a regular video editor, it is faster and easier to use.
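If you want to script the correction instead, a minimal Pillow sketch for batch-adjusting exported frames; the folder names and adjustment factors are placeholder assumptions:

```python
# Minimal sketch: batch-adjust over-bright, washed-out frames with Pillow
# before reassembling the video. Folder names and factors are placeholders;
# the Image Filter Adjustments node or a video editor does the same job.
from pathlib import Path
from PIL import Image, ImageEnhance

src = Path("output/frames")          # hypothetical exported frames
dst = Path("output/frames_fixed")
dst.mkdir(parents=True, exist_ok=True)

for frame in sorted(src.glob("*.png")):
    img = Image.open(frame)
    img = ImageEnhance.Brightness(img).enhance(0.9)  # <1.0 darkens slightly
    img = ImageEnhance.Contrast(img).enhance(1.15)   # >1.0 restores contrast
    img.save(dst / frame.name)
```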
@hamster_poodle · 5 months ago
hello! Does SparseControl work with AnimateDiff LCM properly? not V3?
@koalanation · 5 months ago
Hi! With the V3 LoRA adapter it works. I am not sure if that is the way it was intended, but it does something. I have tried to use the RGB sparse but I do not manage to get it to work nicely... you can also switch to version 3 and fine-tune the results, but obviously generations will take longer.
@marcdevinci893 · 3 months ago
I carefully followed along and really want to get this going, but I'm getting a KSampler error: 'Given groups=1, weight of size [320, 5, 3, 3], expected input[32, 4, 96, 64] to have 5 channels, but got 4 channels instead'
@koalanation · 3 months ago
Try to change or bypass, one by one, the controlnet and animatediff nodes and see if the workflow runs.
@doctor_risk · 2 months ago
Is it possible to input 2 pictures and have AI make a video transitioning from one to the other?
@koalanation · a month ago
Yes, you can transition using masks. Check out my morphing and audioreactive videos to get an idea
@Shahriar.H · 2 months ago
"ModelPatcherAndInjector.patch_model() got an unexpected keyword argument 'lowvram_model_memory'" - I'm getting the above error on the KSampler before the VAE Encode node. How do I fix this? Edit: I'm using Stability Matrix to run ComfyUI, if that's relevant information.
@koalanation · a month ago
Hi! There was an issue raised on the AnimateDiff GitHub; they said it should have been fixed. Try to update both the AnimateDiff Evolved nodes and ComfyUI. I do not know how that is done in Stability Matrix... in ComfyUI, normally I do it via the Manager.
@vl7823 · 4 months ago
Hey, I'm getting this error: "Could not allocate tensor with 828375040 bytes. There is not enough GPU video memory available!" I have an AMD RX 6800 XT with 16 GB VRAM, any workaround or fix? Thanks
@koalanation · 4 months ago
Hey! Not sure what the messages are with AMD, but maybe you can try first reducing the size of the latents and/or reducing the batch size. Looks like some limitation with the VRAM.
@misterV123 · 2 months ago
hey! Were you able to fix and launch it?
@elifmiami · 3 months ago
I was wondering how you got the node numbers to show on the boxes?
@koalanation · 3 months ago
If you go to the Manager, on the left column you will see the option 'Badge'. There you can set the number of the node to appear over the node.
@elifmiami · 3 months ago
@@koalanation thank you !
@MarcusBankz68 · 4 months ago
I'm getting an error with IPAdapterUnifiedLoader, says clipvision model not found. I've downloaded a few versions and put them in my clip_vision folder but still getting the error. Is there a specific one for this node?
@koalanation · 4 months ago
Sometimes the IP Adapter setup is confusing... try to use the IP Adapter model and CLIP Vision loaders separately (without using the Unified Loader), following the instructions in the IP Adapter repo. I like plus and ViT-G. github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file
@ForChiddlers · a month ago
@@koalanation I got this IP Adapter ClipVision error as well. What exactly can we do there? It seems that an IP Adapter has to be fed into the IPAdapter Unified Loader's left input param. But where does it come from? And why is it working without that on your machine?
@koalanation · a month ago
@@ForChiddlers It only needs the model as input. The preset should load the IP Adapter and CLIP Vision models, but the node sometimes messes up. In case of issues, it is better to use the CLIP Vision loader and the IP Adapter loader individually, and connect them directly to the IPAdapter Apply node (without the Unified Loader).
@joonienyc · 5 months ago
Hey buddy, how did you copy the second KSampler with all the connections duplicated? At timeline 4:40.
@koalanation · 5 months ago
Copy normally with ctrl+c, then paste with ctrl+shift+v.