ComfyUI: Master Morphing Videos with Plug-and-Play AnimateDiff Workflow (Tutorial)

9,685 views

Abe aTech

A day ago

Push your creative boundaries with ComfyUI using a free plug-and-play workflow! Generate captivating loops, eye-catching intros, and more! This powerful tool is perfect for creators of all levels.
Chapters:
00:00 Sample Morphing Videos
01:15 Downloads
02:09 Folder locations
02:14 Workflow Overview
04:10 Generating first Morph
04:40 Running the Workflow
04:47 Quick bonus tips
06:35 Supercharge the Workflow
08:58 Getting more variation in batches
10:31 Scaling up
10:59 Scaling up with model
11:35 This is pretty cool
I'll show you how to make morphing videos and use images to create stunning animations.
You'll also learn how to use text prompts to morph between anything you can imagine!
Plus, there are some valuable tips and tricks to streamline the ComfyUI morphing video workflow and save time while creating your own mind-bending visuals.
#########
Links:
########
Workflow: Morpheus Modified workflow for text to image to video
openart.ai/workflows/abeatech...
Tutorial for Batch Generating Text to Image using external text file:
• ComfyUI: Batch Generat...
Workflow: ipiv's Morph - img2vid AnimateDiff LCM:
civitai.com/models/372584?mod...
Note: See 02:09 of the video for model folder locations (a typical folder layout is also sketched after the links below)
AnimateDiff:
huggingface.co/wangfuyun/Anim...
VAE:
huggingface.co/stabilityai/sd...
AnimateLCM LORA:
huggingface.co/wangfuyun/Anim...
Clip Vision Model ViT-H:
Download and rename to CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors:
huggingface.co/h94/IP-Adapter...
Clip Vision Model ViT-G:
Download and rename to CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors:
huggingface.co/h94/IP-Adapter...
IPADAPTER MODEL:
huggingface.co/h94/IP-Adapter...
Control Net (QRCode):
huggingface.co/monster-labs/c...
Motion animations for AnimateDiff: civitai.com/posts/2011230
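For reference, here is a typical ComfyUI folder layout for the files above (an assumption based on default install paths and the usual AnimateDiff-Evolved / IPAdapter Plus conventions; if anything differs, defer to the folder locations shown at 02:09 of the video):
    ComfyUI/models/
        checkpoints/          <- your SD1.5 checkpoint(s)
        animatediff_models/   <- AnimateLCM motion model
        loras/                <- AnimateLCM LoRA
        vae/                  <- VAE
        clip_vision/          <- CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors and CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors (renamed as noted above)
        ipadapter/            <- IP-Adapter model
        controlnet/           <- QRCode ControlNet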
################
Music: Bensound.com/royalty-free-music
License code: LU8J6ZAOXHXNOAI4

Comments: 78
@ted328 27 days ago
Literally the answer to my prayers, have been looking for exactly this for MONTHS
@alessandrogiusti1949 19 days ago
After following many tutorials, you are the only one getting me to the results in a very clear way. Thank you so much!
@SylvainSangla 19 days ago
Thanks a lot for sharing this, a very precise and complete guide! 🥰 Cheers from France!
@amunlevy2721 16 days ago
Getting errors that nodes are missing even after installing IP Adapter Plus... missing nodes: IPAdapterBatch and IPAdapterUnifiedLoader
@AlvaroFCelis 17 days ago
Thank you so much! Very clear and organized. Subbed.
@TechWithHabbz 28 days ago
You about to blow up bro. Keep it going. Btw, I was subscriber #48 😁
@abeatech 28 days ago
Thanks for the sub!
@user-yo8pw8wd3z 11 hours ago
Good video. Where can I find the link to the additional video masks? I don't see it in the description.
@MSigh 23 days ago
Excellent! 👍👍👍
@SF8008 23 days ago
Amazing! Thanks a lot for this!!! Btw - which nodes do I need to disable in order to get back to the original flow? (the one that is based only on input images and not on prompts)
@MariusBLid 27 days ago
Great stuff man! Thank you 😀 What are your specs, btw? I only have 8GB of VRAM.
@mcqx4 29 days ago
Nice tutorial, thanks!
@abeatech 28 days ago
Glad it was helpful!
@popo-fd3fr 15 days ago
Thanks man. I just subscribed
@velvetjones8634 1 month ago
Very helpful, thanks!
@abeatech 1 month ago
Glad it was helpful!
@zarone9270 26 days ago
thx Abe!
@Injaznito1 1 day ago
NICE! I tried it and it works great. Thanks for the tut! Question though: I tried changing the 96 to a larger number so the changes between pictures take a bit longer, but I don't see any difference. Is there something I'm missing? Thanks!
@gorkemtekdal 1 month ago
Great video! I want to ask: can we use an init image for this workflow like we do in Deforum? I need the video to start with a specific image on the first frame and then change through the prompts. Do you know how that's possible in ComfyUI / AnimateDiff? Thank you!
@abeatech 29 days ago
I haven't personally used Deforum, but it sounds like it's the same concept. This workflow uses 4 init images at different points during the 96 frames to guide the animation. The IPAdapter and ControlNet nodes do most of the heavy lifting, so prompts aren't really needed, but I've used them to fine-tune outputs. I'd encourage you to try it out and see if it gives you the results you're looking for.
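To picture what those 4 init images do across the 96 frames, here is a minimal Python sketch (an illustration only, not the workflow's actual node code; the linear cross-fade and segment sizes are assumptions) of how per-frame weights can be spread so each reference image dominates its own segment and fades into its neighbours:

    import numpy as np

    TOTAL_FRAMES = 96  # animation length used by the workflow
    NUM_IMAGES = 4     # number of init/reference images
    SEG = TOTAL_FRAMES // NUM_IMAGES  # 24 frames "owned" by each image

    def keyframe_weights():
        """Per-frame weight of each reference image, with linear cross-fades."""
        weights = np.zeros((NUM_IMAGES, TOTAL_FRAMES))
        centers = [i * SEG + SEG // 2 for i in range(NUM_IMAGES)]  # 12, 36, 60, 84
        for i, c in enumerate(centers):
            for f in range(TOTAL_FRAMES):
                # full weight at the image's segment centre, fading to zero one
                # segment away, so neighbouring images cross-fade into each other
                weights[i, f] = max(0.0, 1.0 - abs(f - c) / SEG)
        return weights / weights.sum(axis=0, keepdims=True)  # normalise per frame

    w = keyframe_weights()
    print(np.round(w[:, 0], 2))   # frame 0:  first image only
    print(np.round(w[:, 48], 2))  # frame 48: halfway between the 2nd and 3rd images

In the actual workflow, the equivalent of these weights comes from the fade masks feeding the IPAdapter batch, which is why 4 images are enough to guide a 96-frame morph.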
@Halfgawd_Halfdevil 20 days ago
Managed to get this running. It does okay, but I am not seeing much influence from the ControlNet motion video input. Any way to make that more apparent? Also, I have noticed a Shutterstock overlay near the bottom of the clip. It is translucent but noticeable, and kind of ruins everything. Any way to eliminate that artifact?
@TheNexusRealm 14 days ago
Cool, how long did it take you?
@frankiematassa1689 1 day ago
Error occurred when executing IPAdapterBatch: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]). I followed this video exactly and am only using SD1.5 checkpoints. I cannot find anywhere how to fix this.
@Ai_Gen_mayyit 6 days ago
Error occurred when executing VHS_LoadVideoPath: module 'cv2' has no attribute 'VideoCapture'
@rowanwhile 26 days ago
Brilliant video. Thanks so much for sharing your knowledge.
@Ai_Gen_mayyit 5 days ago
Error occurred when executing VHS_LoadVideoPath: module 'cv2' has no attribute 'VideoCapture'. Your video timestamp: 04:20
@aslgg8114 29 days ago
What should I do to make the reference image persistent?
@BrianDressel 19 days ago
Excellent walkthrough of this, thanks.
@wagmi614 16 days ago
Could one add some kind of IPAdapter to add your own face to the transformation?
@paluruba 24 days ago
Thank you for this video! Any idea what to do when the videos are blurry?
@jesseybijl2104 9 days ago
Same here, any answer?
@MichaelL-mq4uw 27 days ago
Why do you need ControlNet at all? Can it be skipped to morph without any mask?
@cabb_ 26 days ago
ipiv did an incredible job with this workflow! Thanks for the tutorial.
@tetianaf5172 4 days ago
Hi! I have this error all the time: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm). Though I use a 1.5 checkpoint. Please help!
@ImTheMan725 9 days ago
Why can't you morph 20/50 pictures?
@ComfyCott 14 days ago
Dude, I loved this video! You explain things very well, and I love how you explain in detail as you build out strings of nodes! Subbed!
@kwondiddy 7 days ago
I'm getting errors when trying to run... a few items that say "value not in list: ckpt_name:", "value not in list: lora_name" and "value not in list: vae_name:". I'm certain I put all the downloads in the correct folders and named everything appropriately... Any thoughts?
@pro_rock1910 25 days ago
❤‍🔥❤‍🔥❤‍🔥
@saundersnp 24 days ago
I've encountered this error: Error occurred when executing RIFE VFI: Tensor type unknown to einops
@brockpenner1 23 days ago
ComfyUI threw an error in the VRAM Debug node of Frame Interpolation: Error occurred when executing VRAM_Debug: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'. Any help would be appreciated!
@efastcruelx7880 6 days ago
Why is my generated animation very different from the reference images?
@CoqueTornado 1 month ago
Great tutorial. I am wondering... how much VRAM does this setup need?
@abeatech 29 days ago
I've heard of people running this successfully on as little as 8GB of VRAM, but you'll probably need to turn off the frame interpolation. You can also try running this on the cloud at OpenArt (but your checkpoint options might be limited): openart.ai/workflows/abeatech/tutorial-morpheus---morphing-videos-using-text-or-images-txt2img2vid/fOrrmsUtKEcBfopPrMXi
@CoqueTornado 26 days ago
@abeatech Thank you!! Will try the two suggestions! Congrats on the channel!
@TinyLLMDemos 5 days ago
Where do I get your input images?
@user-vm1ul3ck6f 29 days ago
Help! I encountered this error while running it: Error occurred when executing IPAdapterUnifiedLoader: module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'
@abeatech 29 days ago
Sounds like it could be a couple of things: a) you might be trying to use an SDXL checkpoint, in which case try using an SD1.5 one (the AnimateDiff model in the workflow only works with SD1.5), or b) an issue with your IPAdapter node. You can try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the Manager or GitHub).
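For anyone debugging the same folder issue, here is a minimal sanity check (a sketch assuming a default ComfyUI install; adjust comfy_root to your own location) that simply lists what is actually sitting in the folders the loaders read from:

    from pathlib import Path

    comfy_root = Path("ComfyUI")  # adjust to where your ComfyUI is installed
    for sub in ("models/ipadapter", "models/clip_vision", "custom_nodes"):
        folder = comfy_root / sub
        print(f"{folder}:")
        if not folder.exists():
            print("  (folder missing)")
            continue
        for item in sorted(folder.iterdir()):
            print("  ", item.name)

If models/ipadapter is empty or missing, that is usually the thing to fix first.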
@cohlsendk 25 days ago
Is there a way to increase the frames/batch size for the FadeMask? Everything over 96 is messing up the FadeMask -.-
@cohlsendk 25 days ago
Got it :D
@yakiryyy 1 month ago
Hey! I've managed to get this working, but I was under the impression this workflow would animate between the given reference images. The results I get are pretty different from the reference images. Am I wrong in my assumption?
@abeatech 1 month ago
You're right - it uses the reference images (4 frames vs 96 total frames) as a starting point and generates additional frames, but the results should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation.
@efastcruelx7880 6 days ago
@abeatech Is there any way to make the result more like the reference images?
@devoiddesign 25 days ago
Hi! Any suggestion for a missing IPAdapter? I am confused because I didn't get an error to install or update and I have all of the IPAdapter nodes installed... the process stopped on the "IPAdapter Unified Loader" node.
!!! Exception during processing!!! IPAdapter model not found.
Traceback (most recent call last):
  File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/workspace/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 453, in load_models
    raise Exception("IPAdapter model not found.")
Exception: IPAdapter model not found.
@tilkitilkitam 20 days ago
Same problem.
@tilkitilkitam 20 days ago
ip-adapter_sd15_vit-G.safetensors - install this from the Manager.
@devoiddesign 20 days ago
@tilkitilkitam Thank you for responding. I already had the model installed, but it was not seeing it. I ended up restarting ComfyUI completely after I updated everything from the Manager instead of only doing a hard refresh, and that fixed it.
@TinyLLMDemos 5 days ago
How do I kick it off?
@Adrianvideoedits 5 days ago
You didn't explain the most important part, which is how to run the same batch with and without upscaling. It generates new batches every time you queue the prompt, so the preview batch is a waste of time. I like the idea, though.
@WalkerW2O 6 days ago
Hi Abe aTech, very informative, and I like your work very much.
@rooqueen6259 14 days ago
Has anyone else run into the loading of 2 new models stopping at 0%? I also had a case where the loading of 3 new models reached 9% and no longer continued. What is the problem? :c
@axxslr8862 25 days ago
In my ComfyUI there is no Manager option... help please.
@ESLCSDivyasagar 2 days ago
Search on YouTube for how to install it.
@creed4788 24 days ago
VRAM required?
@Adrianvideoedits 5 days ago
16GB for upscaled.
@creed4788 5 days ago
@Adrianvideoedits Could you make the videos first and then close and load the upscaler to improve the quality, or does it have to be all together and can't be done in 2 different workflows?
@ErysonRodriguez 28 days ago
Noob question: why are my results so different from my input images?
@ErysonRodriguez 28 days ago
I mean, the images I loaded produce a different output instead of transitioning.
@abeatech 28 days ago
The results will not be exactly the same, but they should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation. Also worth double-checking that you have the VAE and LCM LoRA selected in the settings module.
@3djramiclone 16 days ago
This is not for beginners; put that in the description, mate.
@kaikaikikit 9 days ago
What are you crying about... go find a beginner class if it's too hard to understand...
@user-vm1ul3ck6f 29 days ago
Help! I encountered this error while running it:
@user-vm1ul3ck6f 29 days ago
Error occurred when executing IPAdapterUnifiedLoader: module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'
@abeatech 28 days ago
Sounds like it could be a couple of things: a) you might be trying to use an SDXL checkpoint, in which case try using an SD1.5 one (the AnimateDiff model in the workflow only works with SD1.5), or b) an issue with your IPAdapter node. You can try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the Manager or GitHub).
@Halfgawd_Halfdevil 21 days ago
@abeatech It says in the note to install it in the clip_vision folder, but that is not it, as none of the preloaded models are there and the new one installed there does not appear in the dropdown selector. So if it is not that folder, then where are you supposed to install it? And if the node is bad, why is it used in the workflow in the first place? Shouldn't it just have the IPAdapter Plus node?