Mochi 1 Text2Video Workflow - Able To Generate Multiple Seconds (Freebie) www.patreon.com/posts/115533849?
@Relinked-Media · 13 days ago
Great video! Can you do an all-in-one installer?
@TheFutureThinker · 13 days ago
@Relinked-Media Will try it and see
@AgustinCaniglia1992 · 17 days ago
It's amazing how ComfyUI is always being updated and includes so many AI tools.
@motionau · 18 days ago
Awesome work from Kijai on this one, optimising Mochi so we can run it on consumer GPUs.
@TheFutureThinker · 18 days ago
Looking forward to the img2vid weights release.
@kait3n10 · 18 days ago
I read that you can use tiled VAE to overcome the OOM crash. People got it working on an RTX 3060 12GB! Edit: Never mind, I saw your final section. BTW, thanks for the tutorial!
@TheFutureThinker · 18 days ago
Yup, started from the raw workflow and worked up to the tiling optimization 👍
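The tiled-VAE trick discussed above amounts to decoding the latent one spatial tile at a time, so peak memory is bounded by a tile rather than the whole frame. A minimal plain-Python sketch of the idea, where `decode` is a stand-in for the real VAE decoder (the actual ComfyUI/Kijai nodes work on tensors; every name here is illustrative):

```python
def decode(tile):
    # Stand-in for the real VAE decoder: nearest-neighbour 8x upscale of a
    # 2D latent grid (list of rows) into a "pixel" grid.
    return [[v for v in row for _ in range(8)] for row in tile for _ in range(8)]

def tiled_decode(latent, tile=32):
    # Decode in tile x tile latent patches and paste results into the output,
    # so only one patch is ever decoded at a time.
    h, w = len(latent), len(latent[0])
    out = [[0.0] * (w * 8) for _ in range(h * 8)]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = [row[x:x + tile] for row in latent[y:y + tile]]
            pixels = decode(patch)
            for dy, row in enumerate(pixels):
                out[y * 8 + dy][x * 8:x * 8 + len(row)] = row
    return out

latent = [[0.0] * 106 for _ in range(60)]   # roughly one 480x848 frame's latent
frame = tiled_decode(latent)
print(len(frame), len(frame[0]))            # 480 848
```

Real tiled decoders also overlap and blend tile borders to hide seams; this sketch only shows the memory-bounding loop.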
@darksushi9000 · 17 days ago
3090 here, rendering 7 seconds of video in 35 minutes all day long, zero crashes.
@darksushi9000 · 17 days ago
Some more info: Ryzen 7950X, 32GB RAM (17GB in use), 15.6GB VRAM in use. The diffusion model is the preview fp8 scaled and the VAE is the preview bf16.
@Donzo89 · 17 days ago
No way, I was disappointed when he mentioned the 4090. Hopefully my 4070 Ti Super 16GB can handle it.
@technobabble77 · 17 days ago
RunPod would get you there for a few bucks if you want to play with it.
@darksushi9000 · 16 days ago
@Donzo89 Just about. Pretty sure I've seen a way to get it down to 12GB VRAM.
@salsa_danza · 3 days ago
Is img2video possible with Mochi?
@gjohgj · 14 days ago
Thanks for this! Very curious about the video2video flow :)
@TheFutureThinker · 13 days ago
Mochi Edit? This one: kzbin.info/www/bejne/inyXk2qrna2aoc0si=7dO7alISVpEgITUt
@theskyspire · 17 days ago
Maybe we didn't see a dragon because you wrote "dargon" in the prompt.
@TheFutureThinker · 17 days ago
Yeah, the first one. Update: here's the correct one: x.com/AIfutureBenji/status/1854781439365333412
@MilesBellas · 13 days ago
Purz did a live review before Halloween where he rented an H100 for a few dollars and got some interesting results quickly.
@TheFutureThinker · 13 days ago
After trimming down, the model does lose some quality in ComfyUI.
@FusionDeveloper · 15 days ago
I want a KSampler that shows a correct, accurate frame-by-frame preview as it generates. You can go down to 320x320 and a length as low as 13. You can choose 480x480 instead of 848x480 to save some time.
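As a rough sanity check on those resolution choices: sampling work scales, to first order, with pixels per frame times frame count, so the smaller presets buy real time. A hedged back-of-envelope sketch (the linear pixel-count proxy is an assumption; attention layers actually scale worse than linearly):

```python
def relative_cost(width: int, height: int, frames: int) -> int:
    # First-order proxy: work grows with total pixel count across all frames.
    return width * height * frames

base = relative_cost(848, 480, 49)        # the full-size preset at 49 frames
for w, h in [(848, 480), (480, 480), (320, 320)]:
    ratio = relative_cost(w, h, 49) / base
    print(f"{w}x{h}: {ratio:.2f}x the work of 848x480")
```

By this estimate 480x480 is roughly half the work of 848x480 and 320x320 roughly a quarter, which matches the "save some time" advice above.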
@insurancecasino5790 · 18 days ago
Once again, amazing video. You don't have an image-to-vid interpreter? Just an extension, right?
@TheFutureThinker · 18 days ago
Yes, just the extension.
@GreenAppelPie · 17 days ago
Excellent!
@Nibot2023 · 17 days ago
Can't wait for the img2vid portion.
@TheFutureThinker · 17 days ago
Once it releases, I will make an automated flow to create documentary or narrative-based videos. Hehe.
@AndikaKamal · 17 days ago
@TheFutureThinker 👍
@maratgazizulin · 4 days ago
@TheFutureThinker img2vid works though
@TheFutureThinker · 4 days ago
@maratgazizulin It can generate, but is the current version's model weight built for img2vid? See the tech spec: huggingface.co/genmo/mochi-1-preview
@Secantmonroe · 17 days ago
I tried Mochi 1 last month with an RTX 3090 24GB. It took 1 hour to produce a 6 s video, which isn't usable, and you still need about 3 retries to get the desired result.
@eliassuzumura · 18 days ago
You're a GOD
@TheFutureThinker · 18 days ago
No, I am not. And I only have one God; He is watching us. 😉
@ian2593 · 6 days ago
Can you drop the frame rate but use a RIFE node to smooth it out?
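What this question is getting at: generate fewer frames, then let an interpolator such as RIFE synthesize the in-betweens. A toy sketch of the frame-count arithmetic, with a naive linear blend standing in for RIFE's learned optical-flow model (every name here is illustrative, not a real node API):

```python
def interpolate_frames(frames, factor=2):
    # Insert (factor - 1) synthesized frames between each adjacent pair,
    # the way a 2x/4x RIFE pass multiplies the frame count.
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for i in range(1, factor):
            t = i / factor
            out.append([(1 - t) * x + t * y for x, y in zip(a, b)])
    out.append(frames[-1])
    return out

clip = [[0.0], [1.0], [2.0]]        # 3 frames, one "pixel" each
smooth = interpolate_frames(clip, 2)
print(len(smooth))                   # 5 frames: e.g. 12 fps -> ~24 fps
```

A real RIFE model warps pixels along estimated motion instead of blending them, which is what avoids the ghosting a plain cross-fade like this would produce.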
@mdbk2770 · 8 days ago
13:05 I am able to make a 5-second video. Trying 10 seconds, but that does not work. My PC: 5950X, 64GB, 4090.
@aprismaaprisma4090 · 14 days ago
In all workflows, even the all-in-one, I get this message: "KSampler: meshgrid expects all tensors to have the same dtype". Any idea?
@TheFutureThinker · 14 days ago
Your ComfyUI needs an update.
@dracothecreative · 17 days ago
Hey all, so how do I get the Mochi VAELoader and Mochi Decode nodes?
@giuseppedaizzole7025 · 18 days ago
Wow, this looks amazing. Can this do img2video? Thanks.
@TheFutureThinker · 18 days ago
There's video2video for Mochi, but no img2vid yet.
@giuseppedaizzole7025 · 18 days ago
@TheFutureThinker OK, thanks.
@eduardomoscatelli2775 · 17 days ago
How do you do image-to-video?
@SUP3RMASSIVE · 18 days ago
I've got a 4090 and can only generate 1-second videos. Like you, if I go higher it freezes on VAE decode.
@TheFutureThinker · 18 days ago
Yes, me too. Watch till the end and you will find the answer.
@ryansenger937 · 17 days ago
Can this do image to video? Or video to video?
@FusionDeveloper · 15 days ago
Yes, there are workflows for it now. I think it is called "Mochi Edit".
@WiseOwlLearning · 18 days ago
Which model do you recommend for an RTX 4090?
@TheFutureThinker · 18 days ago
FP16
@Dinesh-x6l · 17 days ago
Does image to video work?
@TheFutureThinker · 17 days ago
Img2vid model weights are coming soon. The current version isn't i2v, but it has v2v nodes already. It is weird, but it is what it is.
@getmonie393 · 18 days ago
Can I use the models from last week? I have the two ~20GB models, the GGUFs, and the VAE. They're huge, so I was hoping so, but I think it has to be the new one?
@getmonie393 · 18 days ago
Also, I'm on an M2 Max 32GB, so will that be an issue as well?
@TheFutureThinker · 18 days ago
I tried last week's models (I guess you mean the ones from the KJ Mochi Wrapper). In the last part of the video I mixed in the VAE from there while using the native nodes for the sampler and model loader.
@TheFutureThinker · 18 days ago
For Mac, I haven't tried it. I stopped using Apple products 4 years ago, after my iMac and MacBook.
@rageshantony2182 · 17 days ago
I tried a 10-second video with an RTX 6000 Ada 48GB. After 20 minutes of sampling it entered the VAE stage, and the VAE crashed even with Kijai's tiled decoding, ending the generation in vain.
@TheFutureThinker · 17 days ago
10 seconds, nice try, but the model supports 5-second video clips.
@rageshantony2182 · 17 days ago
@TheFutureThinker I set the length to 241, but I was unable to get the result due to the VAE crash.
@TheFutureThinker · 17 days ago
The max length I managed was 129, using the original model weights, not the ComfyUI version.
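For context on those length numbers: the `length` input counts output frames, so duration is just frames over playback fps. A quick check, treating both 24 fps (which these workflows commonly export at, and which matches the "length 241 ≈ 10 seconds" above) and the 30 fps quoted in Mochi's spec as candidate rates; the exact fps is an assumption here:

```python
def video_seconds(length_frames: int, fps: float = 24.0) -> float:
    # Clip duration for a given frame count and playback rate.
    return length_frames / fps

for frames in (129, 163, 241):
    print(f"{frames} frames: {video_seconds(frames):.1f}s at 24 fps, "
          f"{video_seconds(frames, 30):.1f}s at 30 fps")
```

So a length of 241 is roughly double the ~5 seconds the model was trained for, which is consistent with the quality and VAE trouble reported in this thread.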
@KananaEstate · 18 days ago
Is the 4090 the only card that can run Mochi locally? Is any lower spec possible?
@TheFutureThinker · 18 days ago
The recommended card is the RTX 4090.
@J-ld9cl · 18 days ago
Does it need more VRAM or RAM?
@TheFutureThinker · 18 days ago
It depends: do you have 100GB of VRAM and RAM, or 1GB of each?
@JohnVanderbeck · 17 days ago
Gah, my 4090 is in my sim rig; the AI rig only has a 3090 Ti :(
@TheFutureThinker · 17 days ago
A 4090 is good enough to run this, no worries.
@JohnVanderbeck · 17 days ago
@TheFutureThinker Yeah, but my point was the 4090 isn't in my AI machine, only a 3090 Ti, which has been plenty until now. The 4090 is in my gaming/sim rig :D
@zGenMedia · 18 days ago
If you do not have a powerful GPU, just watch... do not touch.
@MilesBellas · 18 days ago
Wow.....
@TheFutureThinker · 18 days ago
Local + video
@DaveTheAIMad · 17 days ago
So you need a 4090? A 3090 won't do it?
@TheFutureThinker · 17 days ago
You can try. The 4090 was the card ComfyUI.org tested with.
@FedorBP · 15 days ago
The VRAM is the same. It should work, but slower.
@DaveTheAIMad · 15 days ago
@FedorBP I did give it a try earlier: 6 minutes on the default settings for a little over a second of video... Still awesome we can run it on local machines. Just need a way to use our own starting images now :)
@amigoface · 18 days ago
Can it run on a 4070 12GB?
@TheFutureThinker · 18 days ago
Yes, but be patient with the loading.
@amigoface · 18 days ago
@TheFutureThinker Cool. After the loading, is the generation relatively quick in your opinion?
@TheFutureThinker · 17 days ago
@amigoface It feels like the speed I usually got in AnimateDiff for a 15-second video.
@amigoface · 16 days ago
@TheFutureThinker OK, thanks.
@MilesBellas · 18 days ago
Dual RTX cards with NVLink need support.
@TheFutureThinker · 18 days ago
How about an A6000?
@Guus · 18 days ago
@TheFutureThinker I run it on an A6000; a 4-second video takes 5 minutes. Do you know if I can run 2 GPUs at the same time? And how?
@MilesBellas · 17 days ago
@TheFutureThinker I have dual RTX A6000s in the main machine...