Mochi 1 Text2Video Workflow - Able To Generate Multiple Seconds (Freebie) www.patreon.com/posts/115533849?
@Relinked-Media • 10 hours ago
Great video! Can you do an all-in-one installer?
@TheFutureThinker • 7 hours ago
@@Relinked-Media Will try it and see.
@AgustinCaniglia1992 • 5 days ago
It's amazing how ComfyUI keeps getting updated all the time and includes so many AI tools.
@motionau • 5 days ago
Awesome work from Kijai on this one, optimising Mochi so we can run it on consumer GPUs.
@TheFutureThinker • 5 days ago
Looking forward to the img2vid weights release.
@gjohgj • 1 day ago
Thanks for this! Very curious about the video2video flow :)
@TheFutureThinker • 1 day ago
Mochi Edit? This one: kzbin.info/www/bejne/inyXk2qrna2aoc0si=7dO7alISVpEgITUt
@darksushi9000 • 4 days ago
3090 here, rendering 7 seconds of video in 35 minutes all day long, zero crashes.
@darksushi9000 • 4 days ago
Some more info: Ryzen 7950X, 32GB RAM (17GB in use), 15.6GB VRAM in use. The diffusion model is the preview fp8 scaled one and the VAE is the preview bf16.
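Those numbers line up with rough parameter-count arithmetic. A minimal sketch, assuming the Mochi diffusion transformer is on the order of 10B parameters; activations, the T5 text encoder, and the VAE add more on top, so treat it as a lower bound rather than a real measurement:

```python
# Back-of-the-envelope weight memory for a ~10B-parameter model
# (assumption; excludes activations, text encoder, and VAE).
params = 10e9

def weights_gib(bytes_per_param):
    return params * bytes_per_param / 1024**3

print(f"fp16/bf16 weights: ~{weights_gib(2):.1f} GiB")  # ~18.6 GiB
print(f"fp8 weights:       ~{weights_gib(1):.1f} GiB")  # ~9.3 GiB
```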
@Donzo89 • 4 days ago
No way, I was disappointed when he mentioned the 4090. Hopefully my 4070 Ti Super 16GB can handle it.
@technobabble77 • 4 days ago
Runpod would get you there for a few bucks if you want to play with it.
@darksushi9000 • 3 days ago
@@Donzo89 Just about. Pretty sure I've seen a way to get it down to 12GB VRAM.
@kait3n10 • 5 days ago
I read that you can use a tiled VAE to overcome the OOM crash. People got it working on an RTX 3060 12GB! Edit: never mind, I saw your final section. Btw, thanks for the tutorial!
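For readers new to the trick: tiled decoding splits the latent into spatial tiles and decodes them one at a time, so peak VRAM scales with the tile size instead of the full frame. A generic PyTorch sketch of the idea, not the actual ComfyUI node code; `vae.decode`, the 8x spatial scale, and the tile sizes are assumptions, and real implementations also blend the overlapping seams:

```python
import torch

def tiled_decode(vae, latent, tile=32, overlap=8):
    """Decode a video latent [B, C, T, H, W] in spatial tiles to cap peak VRAM.

    Sketch only: assumes vae.decode maps a latent tile to pixels with an 8x
    spatial upscale, and simply overwrites the overlap instead of blending it.
    """
    B, C, T, H, W = latent.shape
    scale = 8                      # latent -> pixel spatial factor (assumption)
    step = tile - overlap
    out = None
    for y in range(0, H, step):
        for x in range(0, W, step):
            tile_pix = vae.decode(latent[..., y:y + tile, x:x + tile])
            if out is None:
                out = torch.zeros(B, tile_pix.shape[1], tile_pix.shape[2],
                                  H * scale, W * scale)
            h, w = tile_pix.shape[-2:]
            out[..., y * scale:y * scale + h, x * scale:x * scale + w] = tile_pix.cpu()
    return out
```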
@TheFutureThinker • 5 days ago
Yup, started from the raw workflow and then optimized with tiling 👍
@Nibot2023 • 4 days ago
Can't wait for the img2vid portion.
@TheFutureThinker • 4 days ago
Once it releases, I will make an automated flow to create documentary or narrative-based videos. Hehe
@AndikaKamal • 4 days ago
@@TheFutureThinker 👍
@FusionDeveloper • 2 days ago
I want a KSampler that shows a correct, accurate frame-by-frame preview as it generates. You can go down to 320x320 and to a length as low as 13. You can choose 480x480 instead of 848x480 to save some time.
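The time saving from dropping the resolution tracks the pixel (and therefore latent) count fairly closely. Quick arithmetic for the resolutions mentioned above, ignoring fixed overheads and the extra cost of attention at larger sizes:

```python
# Relative per-frame cost of the suggested resolutions, assuming runtime
# scales roughly with pixel count.
for w, h in [(320, 320), (480, 480), (848, 480)]:
    print(f"{w}x{h}: {w * h:>7} px, ~{(w * h) / (848 * 480):.0%} of the 848x480 cost")
```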
@theskyspire • 4 days ago
Maybe we didn't see a dragon because you said "dargon" in the prompt.
@TheFutureThinker • 4 days ago
Yeah, the first one. Updated: here's the correct one: x.com/AIfutureBenji/status/1854781439365333412
@MilesBellas • 7 hours ago
Purz did a live review before Halloween, where he rented an H100 for a few dollars and got some interesting results quickly.
@TheFutureThinker • 7 hours ago
After the trim-down, the model does lose some quality in ComfyUI.
@insurancecasino5790 • 5 days ago
Once again, amazing video. You don't have an image-to-vid interpreter? Just an extension, right?
@TheFutureThinker • 5 days ago
Yes, just the extension.
@GreenAppelPie • 5 days ago
Excellent!
@aprismaaprisma4090 • 1 day ago
In all workflows, even the all-in-one, I get this message: "KSampler: meshgrid expects all tensors to have the same dtype". Any idea?
@TheFutureThinker • 1 day ago
Your ComfyUI needs an update.
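For context, that message comes straight from PyTorch: `torch.meshgrid` refuses mixed dtypes, which can happen in a sampler when one input ends up fp16 and another fp32. Updating ComfyUI picks up the fix; the snippet below only reproduces the error, it is not the ComfyUI code path:

```python
import torch

xs = torch.arange(4, dtype=torch.float32)
ys = torch.arange(4, dtype=torch.float16)

try:
    torch.meshgrid(xs, ys, indexing="ij")
except RuntimeError as err:
    print(err)  # meshgrid expects all tensors to have the same dtype

# Casting the inputs to a common dtype avoids the error.
grid_y, grid_x = torch.meshgrid(xs, ys.to(xs.dtype), indexing="ij")
```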
@Secantmonroe • 4 days ago
I tried Mochi 1 last month with an RTX 3090 24GB. It took 1 hr to produce a 6 s video, not usable, and you still need about 3 retries to get the desired result.
@dracothecreative • 5 days ago
Hey all, so how do I get the Mochi VAE loader and Mochi decode nodes?
@WiseOwlLearning • 5 days ago
Which model do you recommend for an RTX 4090?
@TheFutureThinker • 5 days ago
fp16
@eduardomoscatelli2775 • 4 days ago
How do you do image-to-video?
@eliassuzumura • 5 days ago
You're a GOD
@TheFutureThinker • 5 days ago
No, I am not. And I only have one God; he is watching us. 😉
@ryansenger937 • 4 days ago
Can this do image to video? Or video to video?
@FusionDeveloper • 2 days ago
Yes, there are workflows for it now; I think it is called "Mochi Edit".
@Dinesh-x6l • 4 days ago
Does image to video work?
@TheFutureThinker • 4 days ago
Img2vid model weights are coming soon. The current version has no i2v, but it already has v2v nodes. It's weird, but it is what it is.
@SUP3RMASSIVE • 5 days ago
I got a 4090 and can only generate 1-sec vids. Like you, if I go higher it freezes on VAE decode.
@TheFutureThinker • 5 days ago
Yes, me too. Watch till the end and you will find the answer.
@getmonie393 • 5 days ago
Can I use the models from last week? I have the two ~20 GB models, the GGUFs, and the VAE. They're huge, so I was hoping, but I think it has to be the new one?
@getmonie393 • 5 days ago
Also, I'm on an M2 Max 32 GB, so will that be an issue as well?
@TheFutureThinker • 5 days ago
I tried last week's models (I guess you mean the ones from the KJ Mochi Wrapper). In the last part of the video I mixed the VAE from there with the native nodes for the sampler and model loader.
@TheFutureThinker • 5 days ago
For Mac, I haven't tried it. I stopped using Apple products about 4 years ago, after my iMac and MacBook.
@KananaEstate • 5 days ago
Is the 4090 the only card that can run Mochi locally? Is any lower spec possible?
@TheFutureThinker • 5 days ago
Recommended: RTX 4090.
@giuseppedaizzole7025 • 5 days ago
Wow... this looks amazing. Can this do img2video? Thanks.
@TheFutureThinker • 5 days ago
There's video2video for Mochi, but no img2vid yet.
@giuseppedaizzole7025 • 5 days ago
@@TheFutureThinker OK... thanks.
@J-ld9cl • 5 days ago
Does it need more VRAM or RAM?
@TheFutureThinker • 5 days ago
Hard to say without knowing what you have: do you have 100GB of VRAM and RAM, or 1GB of each?
@DaveTheAIMad • 4 days ago
So you need a 4090? A 3090 won't do it?
@TheFutureThinker • 4 days ago
You can try. The 4090 was the card ComfyUI.org tested with.
@FedorBP • 3 days ago
The VRAM is the same. It should work, just slower.
@DaveTheAIMad • 2 days ago
@@FedorBP I did give it a try earlier: 6 minutes on the default settings for a little over a second of video... Still awesome that we can run it on local machines. Just need a way to use our own starting images now :)
@JohnVanderbeck • 5 days ago
Gah, my 4090 is in my sim rig; the AI rig only has a 3090 Ti :(
@TheFutureThinker • 4 days ago
One 4090 is good enough to run this. No worries.
@JohnVanderbeck • 4 days ago
@@TheFutureThinker Yeah, but my point was the 4090 isn't in my AI machine, only a 3090 Ti, which has been plenty until now. The 4090 is in my gaming/sim rig :D
@rageshantony2182 • 4 days ago
I tried a 10-second video with an RTX 6000 Ada 48 GB. After 20 minutes of sampling it entered the VAE stage, and the VAE blew up even with Kijai's tiled decoding, ending the generation in vain.
@TheFutureThinker • 4 days ago
10 seconds, nice try. But the model supports roughly 5-second video clips.
@rageshantony2182 • 4 days ago
@@TheFutureThinker I set the length to 241, but I was unable to get a result due to the VAE crash.
@TheFutureThinker • 4 days ago
The max length I did was 129, and that was using the original model weights, not the ComfyUI version.
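For a sense of scale, the frame counts in this exchange map to clip length through the frame rate. A small sketch, assuming duration is simply frames divided by fps and using the two rates commonly seen with Mochi outputs (around 30 fps in the model's description, 24 fps in many save-video setups); both rates are assumptions here:

```python
# Frame count vs. approximate clip length for the numbers in this thread.
for frames in (13, 129, 241):
    print(f"{frames:>3} frames -> {frames / 30:5.2f} s @ 30 fps, "
          f"{frames / 24:5.2f} s @ 24 fps")
```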
@zGenMedia • 5 days ago
If you do not have a powerful GPU, just watch... do not touch.
@MilesBellas • 5 days ago
Wow.....
@TheFutureThinker • 5 days ago
Local + video
@amigoface • 5 days ago
Can it run on a 4070 12 GB?
@TheFutureThinker • 5 days ago
Yes, but be patient with the loading.
@amigoface • 5 days ago
@@TheFutureThinker Cool. After the loading, is the generation relatively quick in your opinion?
@TheFutureThinker • 4 days ago
@amigoface It feels like the speed I usually got in AnimateDiff for a 15-second video.
@amigoface • 3 days ago
@@TheFutureThinker OK, thanks.
@MilesBellas • 5 days ago
Dual RTX cards with NVLink need support.
@TheFutureThinker • 5 days ago
How about an A6000?
@Guus • 5 days ago
@@TheFutureThinker I run it on an A6000; a 4-second video takes 5 minutes. Do you know if I can run 2 GPUs at the same time? And how?
@MilesBellas • 5 days ago
@@TheFutureThinker I have dual RTX A6000s in the main machine.....