I can't keep up with so many amazing tools for creating great content... this is AWESOME
@idoshor4470 a month ago
You are a king, thanks for sharing, man! One main issue I noticed is that the stitch between the steps is noticeable, mainly in the action happening in the frame. I guess the solution might be in the prompt, something like adding "Slowly starting to..." or similar.
@doin-doitnow a month ago
Amazing, thank you 🎉
@TheMunteanuAlex a month ago
Yesterday Kijai updated the CogVideoX wrapper with CogVideoX-5b 1.5 and new nodes.
@TheFutureThinker a month ago
I played with the server edition of the 1.5 model before. Looks good.
@rageshantony2182 a month ago
@@TheFutureThinker Server edition of the 1.5 model? What does that mean?
@TheFutureThinker a month ago
@rageshantony2182 The original Hugging Face model, without GGUF or the compressed all-in-one safetensors file. Try this: huggingface.co/THUDM/CogVideoX1.5-5B-SAT
@TUSHARGOPALKA-nj7jx 15 hours ago
Is there a way to make better-quality videos or upscale them?
@phosphorescence 10 days ago
'CogVideoXTransformer3DModel' object has no attribute 'context_embedder' — I can't get past this error. I checked my comfyui/models/cogvideo/cogvideox-5bi2v folder and it was missing a few files, so I went to Hugging Face and downloaded all the missing files and folders. Still no dice. I tried t5xxl_fp8 and t5xxl_fp16, same error. Also, my (Down)load CogVideo Model node is missing certain options you have: "fp8_transformer" and "compile". I still like the general principle, though. I'm a complete newb and just copy what others are doing, but I sure hope to replicate this method of grabbing the last frame with a custom node and continuing the animation from it.
@TomHimanen a month ago
Wow, just wow. You are doing God's work, bro!
@TheFutureThinker a month ago
Glad it helps.
@insurancecasino5790 a month ago
Just 120 frames. If we can generate those frames first, then we can go back and make every image perfect before making the video. Is there a way to see every frame?
@TheFutureThinker a month ago
Save as images instead of saving in the Video Combine node.
@TheFutureThinker a month ago
Then you can modify each image.
@insurancecasino5790 a month ago
@@TheFutureThinker Alright, thanks.
@insurancecasino5790 a month ago
@@TheFutureThinker 🔥
@TheFutureThinker a month ago
Yes, that's like going back to basics, like the old A1111 img2img batch generation for animation. But honestly, that's how we can fix individual frames.
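A small illustration of the save-as-images approach (a hypothetical helper, not an actual ComfyUI node): zero-padded filenames keep the exported frames in the right order, so edited frames reassemble into the video in the correct sequence.

```python
def frame_filenames(n_frames, prefix="frame", ext="png"):
    """Zero-padded names so frames sort correctly as strings
    (frame_0001.png, ..., frame_0120.png)."""
    return [f"{prefix}_{i:04d}.{ext}" for i in range(1, n_frames + 1)]

names = frame_filenames(120)
print(names[0], names[-1])  # frame_0001.png frame_0120.png
```

Without the zero padding, `frame_10.png` would sort before `frame_2.png` and the rebuilt video would shuffle frames.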
@kalakala4803 a month ago
Thanks! I will try it tomorrow at the office :)
@TheFutureThinker a month ago
Have fun tomorrow, and we'll try the 5B 1.5 ComfyUI edition 😉, as we tried the server side last time.
@TheRoninteam a month ago
Really amazing tutorial!
@TheFutureThinker a month ago
Have fun 👍
@showyougaming5299 a month ago
Could this work on 1080 or 1070 8 GB cards?
@sonic55193 a month ago
Any way to create loop videos?
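One common trick for a seamless loop (a general post-processing technique, not something this workflow does automatically) is a ping-pong: play the frames forward, then in reverse, dropping the two duplicated endpoint frames so playback wraps without a stutter.

```python
def pingpong(frames):
    """Forward pass plus reversed pass, omitting the duplicated
    first and last frames so the loop plays back smoothly."""
    return frames + frames[-2:0:-1]

clip = pingpong([1, 2, 3, 4])
print(clip)  # [1, 2, 3, 4, 3, 2]
```

Repeating `clip` end to end gives 1, 2, 3, 4, 3, 2, 1, 2, ... with no doubled frame at the turnaround points.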
@crazyleafdesignweb a month ago
Nice! Things I can use at work.
@TheFutureThinker a month ago
Absolutely!
@giuseppedaizzole7025 a month ago
This looks great. One question: would I be able to use it on an RTX card with 12 GB VRAM and 64 GB of system RAM? And if yes, how long would it take? I've tried the CogVideo text-to-video flow and it never finishes the process. Thanks for sharing your knowledge and investigation, really appreciate it.
@RDUBTutorial a month ago
Will it run on a Mac Studio M2 with 128 GB of RAM?
@Aaron_Jason a month ago
Just a heads up: interpolating frames does not make the animation faster, it just adds more frames. It's probably best to speed up the animation to make it look real-time, about a 4x speed-up IIRC. Interpolation just makes the movement smoother, not faster.
@TheFutureThinker a month ago
Yes, correct, thank you. It's smoother.
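The distinction in numbers (illustrative arithmetic only, the clip lengths are made up): 2x interpolation doubles the frame count, so at the same fps the clip plays longer and smoother, not faster; changing speed means changing the playback rate instead.

```python
def duration_s(n_frames, fps):
    """Playback duration of a clip in seconds."""
    return n_frames / fps

base = duration_s(120, 24)         # original clip: 5.0 s
interp = duration_s(240, 24)       # 2x interpolated, same fps: 10.0 s (slow motion)
interp_fast = duration_s(240, 48)  # interpolated frames at doubled fps: 5.0 s, smoother
sped_up = duration_s(120, 96)      # original frames at 4x fps: 1.25 s, faster
```

So interpolation plus a matching fps increase keeps the original timing while smoothing motion; only raising the effective fps of the original frames actually speeds the action up.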
@synesthesiaharmonics a month ago
Always comprehensive, delivering pure functionality, and with speed as well!
@TheFutureThinker a month ago
Glad it helps.
@SandCastleMania a month ago
Nice, Benji. Thank you for your detailed, hard work. DeeCeeHawk
@lindesfahlgaming5608 a month ago
Hi there, my CogVideo nodes look different, so my flow looks different too. I don't have any pipe. Is mine a newer version than yours?
@TheFutureThinker a month ago
Yes, Cog has a new version update. I will do another video on the updated nodes soon. Thanks!
@velRic a month ago
Great result. Does CogVideo handle the interpolation between a start frame and an end frame? That tactic gives more control over how to build the scene.
@TheFutureThinker a month ago
The newly updated nodes, yes; they can do start and end frames.
@jonathanerich a month ago
This is great!
@TharindaMarasingha a month ago
Where can I download the model from?
@golddiggerprankz a month ago
Please add a description of the GPU specs you are using. I always follow your videos, but all I have is a laptop with low VRAM.
@TheFutureThinker a month ago
Um... then you'd need to rent a cloud GPU.
@hugoalvarez923 a month ago
Maybe if you show all the frames of the video, you can choose which frame to extend from, not only the last one. It could help to use the video from before it starts morphing or something. This idea could also be used with Pyramid Flow, which I prefer because it's faster and lets me use my computer while it's working.
@TheFutureThinker a month ago
Choose the frame on the Math Expression node that I connected to each video-extend group. Do the math on which frame you want to start with, and it will be okay.
@TheFutureThinker a month ago
By the way, I will add an input in the next version update, so you can pick a number after previewing the frames.
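A sketch of the bookkeeping that a frame-choosing expression does (the segment length of 49 is an assumption for illustration; CogVideoX commonly generates 49-frame clips): each extension reuses one frame of the previous segment as its start, so the index to continue from can be computed rather than hard-coded per group.

```python
def start_frame_index(segment, frames_per_segment=49):
    """0-based index of the frame that extension `segment` starts from.
    Segment 0 starts at frame 0; each later segment starts at the last
    frame of the previous one, hence the one-frame overlap (-1)."""
    return segment * (frames_per_segment - 1)

print([start_frame_index(s) for s in range(4)])  # [0, 48, 96, 144]
```

To start an extension from an earlier frame instead (e.g. before a morph begins), you would substitute that chosen index for the computed one.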
@ammarzammam2255 a month ago
I couldn't make it work on my Nvidia card with 15 GB of VRAM. It crashes every time I queue it because of the high VRAM usage. Do you have a solution for those of us who don't have an expensive GPU?
@DigiBhem a month ago
❤️❤️❤️
@SeanieinLombok a month ago
First
@TheFutureThinker a month ago
Sean 👋👋👋
@amarnamarpan a month ago
What GPU do you use?
@TheFutureThinker a month ago
4090
@froilen13 a month ago
Looks cool, but I don't think I could use it to tell a compelling story; just cool images without context. When do you think this will be good enough to make an animated cartoon?
@TheFutureThinker a month ago
Use Kling, Runway, or MiniMax then.
@MellayCAN 2 hours ago
I get an error like this: LoadImageFromPath [Errno 13] Permission denied: 'D:\\Yeni klasör'
@2424media a month ago
Why are Kling and the other private models significantly better?
@TheFutureThinker a month ago
If you have a server GPU or rig and run the full version of Mochi, not the trimmed-down ComfyUI version, you can get a better result. I've seen it.
@leepuznowski a month ago
@@TheFutureThinker Have you tried this? I'd be interested, as we have a GPU server but I don't know how we could set this up.
@TheFutureThinker a month ago
@@leepuznowski Yes, I tried Mochi and Cog 1.5 last week on my company's server GPU. It's not just that it can generate at all; a higher-VRAM GPU also produces better quality, even with the same AI model.
@leepuznowski a month ago
@@TheFutureThinker Very interesting. Will you possibly be doing a video on this, i.e. how to set it up locally? What GPUs do you have on your company server? We have two A6000s with 48 GB each. The Genmo website says it needs about 60 GB to process, but it's possible to split between GPUs.
@damarcta a month ago
Thanks a lot. Unfortunately, all my videos have lots of noise.