The future of video seems WILD 🚀 ThinkDiffusion: bit.ly/3PoCkoN - If you can't see the Manager after installing, make sure you have the latest version of Git and re-install. If the problem persists, retry the installation methods here: civitai.com/models/71980/comfyui-manager
@asialsky 5 months ago
If you want longer videos, just save the final frame and run it again as the new source image. It may take a few attempts to get camera motion that doesn't make you sick, but it'll get the job done.
@MDMZ 5 months ago
Great idea, but there might be a visible cut. Did you try it?
@VinylSolutions 3 months ago
@@MDMZ Depending on the output, it may or may not work, judging from the outputs we've seen on various image-to-video/GIF creations.
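The last-frame chaining trick suggested above can be sketched in Python. This is a hypothetical outline, not something shown in the video: `generate_clip` is a stand-in for whatever img2vid backend you use (an SVD run in ComfyUI, an API call, etc.), and dropping the first frame of each follow-up clip is one way to soften the "visible cut" concern, since the seam frame would otherwise appear twice.

```python
def generate_clip(source_frame, num_frames=25):
    """Stand-in for an img2vid run: returns num_frames frames
    derived from a single source image (here just labeled strings)."""
    return [f"{source_frame}->frame{i}" for i in range(num_frames)]


def chain_clips(first_image, num_clips=3, frames_per_clip=25):
    """Extend a video by re-running img2vid on the last frame of
    the previous clip, as suggested in the comment above."""
    frames = []
    source = first_image
    for _ in range(num_clips):
        clip = generate_clip(source, frames_per_clip)
        # Skip the first frame of follow-up clips: it is (roughly)
        # the same image as the previous clip's last frame.
        frames.extend(clip if not frames else clip[1:])
        source = clip[-1]  # last frame becomes the next source image
    return frames


video = chain_clips("input.png")
print(len(video))  # 25 + 24 + 24 = 73 frames
```

In practice each chained clip drifts further from the original image, so quality tends to degrade after a few hops, which matches the "may take a few attempts" caveat above.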
@drimscape 17 days ago
I'm tired of these tutorials where no one tells you how to control camera movement. You can use the online version, which has camera control buttons (zoom in, wiggle, etc.), but no one explains how to do it locally.
@ZakariaNada 9 months ago
Your tutorials are much easier to follow than most YouTubers'. Keep it up! But if you have time, you still need to cover ControlNet with AnimateDiff, or any video generation method using ControlNet.
@MDMZ 9 months ago
Noted! And thanks a lot.
@apdirectedit 8 months ago
BIG BIG BIG FACTS!!! FULL SUPPORT INSTANT SUB'D!
@DJTide 6 months ago
Thanks for the workflow! It saves me a lot of the hassle of having to use other websites just to animate my images!
@MedAmineTN 9 months ago
The best person at explaining AI editing methods.
@MDMZ 9 months ago
Thank you bro
@timoi7777 8 months ago
Bro, you're the best tutorial creator I've found. Thanks!
@MDMZ 8 months ago
Glad I could help!
@romeoottaviani5325 8 months ago
Hello, great guide, congratulations. But I have a question: if the maximum length for now is 5 seconds, how do some people animate the models you see on Instagram? Just faceswap?
@MDMZ 8 months ago
I think you're referring to video2video; I just posted a video on that. This is img2video.
@romeoottaviani5325 8 months ago
@@MDMZ Thanks
@Thomas_Leo 8 months ago
Very concise tutorials. All your tutorials always help me. 👍👍
@MDMZ 7 months ago
Glad to hear!
@holly11111111 5 months ago
THE BEST TUTORIAL!!! THANKS THANKS THAAAAANKS 😍😍😍
@MDMZ 5 months ago
Glad it was helpful!
@Morri91 9 months ago
Whatever the others are doing, you're always next-level and thorough in your explanations. Thank you brother ❤
@MDMZ 9 months ago
this means a LOT 🙏
@shireensingh2834 1 month ago
A question: do we really have to use this complex method (ComfyUI), like you've done for other things as well, or is there a less complex way to do it? THANKS
@MDMZ 1 month ago
other things such as...?
@deepinworld1 9 months ago
Hey, I don't have a PC. Can you do this on mobile, especially in Moonvalley AI?
@MDMZ 9 months ago
I've actually covered this topic in the video
@matt_fpv 9 months ago
Great tutorial, thank you! Do you know a way to upscale the video directly in ComfyUI? Not everyone has Topaz Video AI; it's quite expensive...
@MDMZ 9 months ago
There are ways to do it with custom nodes; I covered one example here: kzbin.info/www/bejne/oZ69ZoZojZponLcsi=ehay3olnLfJckLUN&t=547
@A1Ch3353 24 days ago
Hey, good tutorial. Your custom workflow is causing an error, and the other one runs fine. How do I fix this?
@MDMZ 21 days ago
What kind of errors? Can you share on Discord?
@Saidgarciac 3 months ago
Could you help me with the installation process of img2vid for ComfyUI, to run it from Stability Matrix?
@JefHarrisnation 8 months ago
Great and easy tutorial, and thanks for the workflow template. Though I'm not able to find the Video Combine node.
@MDMZ 7 months ago
Did you try installing it from the ComfyUI Manager?
@jvtosiartist 9 months ago
Thanks again!!! Great tutorial
@MDMZ 9 months ago
Glad you liked it!
@jvtosiartist 9 months ago
@@MDMZ Thanks to your help, I just tried ThinkDiffusion; it's very good, very fast, and easy to use. This is what I did: kzbin.infoGy-HeW3Stj8?si=sc1i8Z6qiuACuC9V
@hosseinahmadi1855 7 months ago
Is it possible to control the result of the video by giving some prompts? And another question: is there a model to produce transitions between two images/frames? (I have frame images that I want to turn into a video, but the frames aren't very close to each other, so they can't be used in conventional video editing software.)
@MDMZ 7 months ago
I guess you can add prompt nodes for more control, but I've yet to try it myself; you can try using RunwayML instead. I don't have an answer for the second question, sorry :/
@ruggio.o 9 months ago
I'm currently using Stable Video Diffusion on a Mac M2 Pro and I have the same problem with the KSampler, in particular this one: "Conv3D is not supported on MPS". Does somebody know how to deal with it and fix the problem? It would be very nice if someone could help me 🙏🏻
@MDMZ 9 months ago
Running on Mac can be tricky sometimes; it looks like a PyTorch issue. You might be able to get some help here: github.com/cocktailpeanut/mac-svd-install - Please let me know if that worked. If it persists, I would definitely try an online solution instead.
@spacemultimedia2979 7 months ago
Can you do this in regular Stable Diffusion, with the normal UI? I hate this type of UI with tons of modules everywhere. If I installed ComfyUI locally, would it be a separate UI from the regular Stable Diffusion install? It wouldn't launch the same way, for sure. Would it mess up my current local SD install?
@MDMZ 7 months ago
This won't interfere with your local A1111 installation. I highly recommend you give it a shot; it looks complicated, but it's much more flexible and gets easier with time.
@reviewflicks 2 months ago
I'm looking for the Video Combine node but can't seem to find it anywhere. I need it to change the output file type. Any help? Cheers!
@MDMZ 2 months ago
I believe it's been replaced by another node. You can use my custom workflows from the description; those still have it.
@AbdulKhamis 7 months ago
very useful! Thank you!
@illuminat_empire 4 months ago
Really good content 👍
@MDMZ 4 months ago
Thank you 🙌
@ertezsssz 8 months ago
Is there a way to add a CLIP Text Encode node between SVD_img2vid_Conditioning and the KSampler, to add positive and negative prompts for more control over the camera movement?
@MDMZ 8 months ago
Interesting, not sure about that, but I'll look into it.
@ertezsssz 8 months ago
@@MDMZ Thank you very much. I think it could be an even greater tool if you could influence the animation with prompts; I'm not sure whether that's possible, or even whether the current models can do that.
@MDMZ 8 months ago
@@ertezsssz I remember seeing motion models that allow that, but you can also try including a movement description in the prompts; it usually helps.
@ertezsssz 8 months ago
I searched on Civitai without finding one.
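For anyone curious what the wiring discussed above would even look like, here is a hypothetical sketch of a ComfyUI API-format workflow as a plain Python dict, with a CLIPTextEncode output routed into the KSampler's positive input. Node IDs, the prompt text, and most parameter values are made up for illustration, and note the thread never confirms this works: the SVD checkpoint is loaded with ImageOnlyCheckpointLoader, which exposes no CLIP text encoder output, so the `["1", 3]` connection below is a placeholder that would likely fail validation in practice.

```python
# Hypothetical API-format wiring (illustration only, not a working graph).
workflow = {
    "1": {"class_type": "ImageOnlyCheckpointLoader",
          "inputs": {"ckpt_name": "svd.safetensors"}},
    "2": {"class_type": "SVD_img2vid_Conditioning",
          "inputs": {"clip_vision": ["1", 1], "init_image": ["5", 0],
                     "vae": ["1", 2], "width": 1024, "height": 576,
                     "video_frames": 25, "motion_bucket_id": 127,
                     "fps": 6, "augmentation_level": 0.0}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "slow camera pan to the right",
                     "clip": ["1", 3]}},  # placeholder: SVD loader has no CLIP text output
    "4": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0],
                     "positive": ["3", 0],  # text conditioning instead of node 2's positive
                     "negative": ["2", 1],
                     "latent_image": ["2", 2],
                     "seed": 0, "steps": 20, "cfg": 2.5,
                     "sampler_name": "euler", "scheduler": "karras",
                     "denoise": 1.0}},
}
print(sorted(node["class_type"] for node in workflow.values()))
```

This mirrors MDMZ's suggestion in the reply above: since base SVD may not accept text conditioning, describing motion via a motion-aware model or the motion_bucket_id/fps parameters is the more reliable lever.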
@SefaKaratekin 8 months ago
I get this error. Does anyone have any idea how to fix it?
Prompt outputs failed validation:
VHS_VideoCombine:
- Exception when validating node: VideoCombine.VALIDATE_INPUTS() got an unexpected keyword argument 'frame_rate'
@luqaszoq 8 months ago
I had the same problem and fixed it with Update All in the Manager.
@SefaKaratekin 8 months ago
@@luqaszoq It worked for me, thanks a lot for your help, mate!
@tendest1 5 months ago
Hi dude, is there any method to increase the length of the generated video?
@MDMZ 5 months ago
not that I know of :/
@simonhick9124 7 months ago
The Video Combine node does not output the video; it has an output that says "filenames". I don't know which node I should use to see the video.
@MDMZ 7 months ago
You mean the video is not being saved to your output folder?
@nachoquiroga307 8 months ago
Can this be done on a laptop? I have a 10th-gen i5 and an RTX 3060.
@parscrypto 1 month ago
Error occurred when executing SVD_img2vid_Conditioning: 'NoneType' object has no attribute 'encode_image'. Hello, how do I fix this?
@MDMZ 1 month ago
Hi, you can head over to Discord for help.
@lightndreamsmachine 8 months ago
love this video
@prabuddhagupta519 9 months ago
I installed the ComfyUI Manager, but the option isn't visible there. I updated ComfyUI, but the Manager option is still not showing. Can you tell me what to do?
@MDMZ 9 months ago
Make sure you have the latest version of Git and re-install. If it persists, retry the installation methods here: civitai.com/models/71980/comfyui-manager
@sameeramin822 8 months ago
Awesome bro 😎 big fan ❤
@michelingesoft 1 month ago
What requirements are needed to run this locally?
@MDMZ 1 month ago
You might be able to run this on low settings with 8 GB of VRAM.
@TheBeastiaryPetshop 8 months ago
Followed all the steps, but I get:
Prompt outputs failed validation
ImageOnlyCheckpointLoader:
- Value not in list: ckpt_name: 'svd.safetensors' not in (list of length 21)
Any help?
@MDMZ 8 months ago
Have you downloaded the SVD model? Make sure it's in the right folder.
@Thomas_Leo 8 months ago
You have to actually select the downloaded checkpoint from the list. The default name doesn't match the one you downloaded.
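A quick way to sanity-check the "not in list" error above is to verify the model file actually sits where ComfyUI's checkpoint loader scans. This sketch assumes the default ComfyUI folder layout; adjust `comfy_root` to your own install (e.g. the portable build's `ComfyUI` folder):

```python
from pathlib import Path


def check_svd_model(comfy_root="ComfyUI"):
    """Report whether svd.safetensors sits in the checkpoints folder
    that the ImageOnlyCheckpointLoader node scans (default layout)."""
    ckpt = Path(comfy_root) / "models" / "checkpoints" / "svd.safetensors"
    if ckpt.is_file():
        return f"found: {ckpt}"
    return f"missing: download svd.safetensors into {ckpt.parent}"


print(check_svd_model())
```

Even when the file is present, remember the point in the comment above: you still have to select it in the node's ckpt_name dropdown, since the workflow's saved default may not match your filename.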
@AymanSaeed-c5m 7 months ago
Can I make looping animated pictures, or what's called a cinemagraph, with AI?
@MDMZ 7 months ago
I don't think there are ways to make it look so seamless using these tools, unless there's a method I don't know of.
@AymanSaeed-c5m 7 months ago
@@MDMZ Thank you.
@Samuirobotics 6 months ago
I have a problem like this:
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last): ...and here goes a long list, with the cherry on top:
RuntimeError: MPS backend out of memory (MPS allocated: 15.61 GB, other allocations: 2.90 GB, max allowed: 18.13 GB). Tried to allocate 562.50 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
Prompt executed in 3701.64 seconds
Appreciate any help. MacBook Pro M1, 16 GB.
@MDMZ 6 months ago
Please check the pinned comment.
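For the MPS out-of-memory error above, the error message itself suggests the PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 workaround. The variable has to be set before PyTorch initializes, so either export it in the shell that launches ComfyUI or set it at the very top of the launcher script. A minimal sketch of the script approach (the "may cause system failure" caveat from the error message still applies):

```python
import os

# Must happen BEFORE torch is imported anywhere in the process.
# "0.0" disables the MPS allocator's upper memory limit; the error
# message warns this can destabilize the system, so treat it as a
# last resort after lowering resolution/frame count.
os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"

# ...only now import torch / launch ComfyUI's main.py...
print(os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"])  # -> 0.0
```

On a 16 GB M1, reducing the output resolution and the number of video frames in the conditioning node is usually the safer first step.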
@andresfc435 4 months ago
Incredible, really helpful ❤❤
@MDMZ 4 months ago
Glad it helped!
@sinc356 4 months ago
When I queue the prompt, my problem is "CUDA out of memory". How do I fix that?
@MDMZ 4 months ago
It means your VRAM is a little low for it. Did you try setting a lower resolution, and maybe a smaller number of steps?
@sinc356 4 months ago
@@MDMZ How do I set a lower resolution?
@kakashi99908 8 months ago
Can you do this with Stable Diffusion too?
@MDMZ 8 months ago
This IS Stable Diffusion.
@muhammadsyafiq1004 8 months ago
Is it possible to run this on a laptop? The load seems heavy.
@MDMZ 8 months ago
It will depend on the specs, you can definitely give it a shot
@socialsculptmedia-vs8jm 7 months ago
Doesn't work: 'NoneType' object has no attribute 'encode_image'. How do I solve this?
@MDMZ 7 months ago
Which node is showing in red when you execute?
@Draqee4 2 months ago
Am facing this issue on my local device:

Error occurred when executing ImageOnlyCheckpointLoader:
Error while deserializing header: HeaderTooLarge

File "K:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "K:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "K:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "K:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_video_model.py", line 21, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=False, output_clipvision=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
File "K:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 498, in load_checkpoint_guess_config
    sd = comfy.utils.load_torch_file(ckpt_path)
File "K:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 15, in load_torch_file
    sd = safetensors.torch.load_file(ckpt, device=device.type)
File "K:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\safetensors\torch.py", line 311, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
@MDMZ 2 months ago
Can't say for sure, but it's possible that you are downloading the wrong files. Make sure you get the correct models and place them in the right folders.
@Draqee4 2 months ago
@@MDMZ Yeah, I was doing it wrong; now it's working.
@kaleabspica8437 6 months ago
Can't find the Manager option.
@MDMZ 6 months ago
Check the pinned comment.
@lilillllii246 6 months ago
Is it possible to use a text prompt?
@MDMZ 6 months ago
Yes, I've covered it in the video.
@leylauk 4 months ago
Is Gen-2 easier?
@MDMZ 4 months ago
It definitely is, but to me SVD gives better results.
@ThaMoonwalkerFSDBetaChannel 7 months ago
Is this only for PC?
@MDMZ 7 months ago
You might be able to make it run on Mac; some people did.
@decanecaudeoscar6961 19 days ago
Am I the only one deleting all his souvenirs to get enough space for all these models?
@MDMZ 15 days ago
the struggle is real!
@jamiemunsey 4 months ago
Is anyone else getting "500 - Internal Server Error" when trying to upload images? Or know how to fix that?
@MDMZ 3 months ago
Hi Jamie, I might need more context. Does this happen when you try to generate, or literally when you browse for an image and try to upload it? Are you using ThinkDiffusion or running locally?
@jamiemunsey 3 months ago
@@MDMZ I get that error when trying to upload an image! Running it off Google Cloud (think that's the only way, since I have an Intel Mac?)
@MDMZ 3 months ago
@@jamiemunsey I see. I'm not sure how hosting and image loading work over there. Can you try loading different images from different locations and see if any of them work? Otherwise you can check other services such as ThinkDiffusion; if it's cheaper than Google Cloud, it might make more sense to use that instead.
@cucciolo182 2 months ago
Wow, any updates?
@MDMZ 2 months ago
Luma Dream Machine is now better than this, at least in my experience.
@ruthskiba-otway-peelvirtua9678 7 months ago
Can this be done in Automatic1111?
@MDMZ 7 months ago
I believe so
@rohithkumarsp 5 months ago
How do I do this locally?
@MDMZ 5 months ago
It's covered in the video.
@alberto64674yt 9 months ago
I hate ComfyUI. Is there any method to use this in the Automatic1111 UI?
@MDMZ 8 months ago
why!?
@alberto64674yt 8 months ago
@@MDMZ It's harder for me to understand that module mess; Automatic1111 is better, it's just tabs and options.
@alberto64674yt 8 months ago
node*
@DIAMIN305 7 months ago
RuntimeError: input must be 4-dimensional
@MDMZ 7 months ago
Do you see any nodes turning red when you execute? I suspect you're loading a non-video file, or a corrupt video file.
@druqsdude 5 months ago
Because you have an AMD card and are running it with --directml; that's what I found online.
@raymondvaldes_ 8 months ago
not a TIFF file (header b'n' not valid)
@raymondvaldes_ 8 months ago
Please help, man, I don't get this.
@raymondvaldes_ 8 months ago
Happens with JPEGs; PNGs work fine.
@RodieOsc 7 months ago
Time to cook my 4090
@MDMZ 7 months ago
welcome to the club
@suzanazzz 6 months ago
Awesome videos, thank YOU! Question: is it possible to upload my custom models to ThinkDiffusion (ComfyUI)?
@MDMZ 6 months ago
Yes you can!
@시니어티브 8 months ago
Video generator
@AllSeasonsVideo 9 months ago
ThinkDiffusion is a rip-off of a much larger and much better service. Everyone uses that one. Way more apps, way better service. ThinkDiffusion is nothing special.
@maresionut-laurentiu7128 9 months ago
Except it's totally free
@Justafakestory 9 months ago
Lol, what service?
@nelsonduffle496 9 months ago
What service are you referring to?
@tooluuke7638 5 months ago
@@maresionut-laurentiu7128 ThinkDiffusion is only free for 30 minutes.