So CogVideoX or Mochi 1, which one do you think is better in terms of running locally?
@davidblenkinsopp9597 · 17 days ago
Thanks for this, this is amazing. The second image looks great, but the contrast seems turned up to 11 - is there any way of toning this down?
@TheFutureThinker · 17 days ago
Hi, yes we can adjust it. The Unsampling and Resampling nodes were using default values for most of the settings. We can lower the steps or CFG in Resampling, which lowers the contrast. Another way is to let it run as-is and add another node after VAE Decode on the image output to do auto contrast. I remember one called the Auto Layer custom node, which covers most of the color tuning and layering features.
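If it helps, here's a minimal sketch of what an auto-contrast pass on the decoded frames boils down to, using Pillow's ImageOps.autocontrast (the file paths and cutoff value are just placeholders, not the actual custom node's settings):

```python
# Minimal sketch: auto-contrast a decoded frame after VAE Decode.
# Not the Auto Layer node itself - just the same idea with Pillow.
from PIL import Image, ImageOps

def auto_contrast_frame(path_in: str, path_out: str, cutoff: float = 1.0) -> None:
    """Clip the darkest/brightest `cutoff` percent and restretch the histogram."""
    img = Image.open(path_in).convert("RGB")
    fixed = ImageOps.autocontrast(img, cutoff=cutoff)
    fixed.save(path_out)

# Hypothetical frame filenames for illustration:
auto_contrast_frame("frame_0001.png", "frame_0001_fixed.png")
```

In a workflow this would just sit as a post-processing step on each output frame; raising `cutoff` tames the contrast more aggressively.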
@crazyleafdesignweb · 17 days ago
So Mochi Edit, as mentioned on the GitHub page, uses a similar strategy to RF-Inversion?
@TheFutureThinker · 17 days ago
Yes, unsampling with RF-Inversion. There are ComfyUI custom nodes for this as well.
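For anyone curious, here's a rough conceptual sketch of the unsampling half of that idea, assuming a rectified-flow convention where t=0 is the clean latent and t=1 is noise; `velocity_model` is a hypothetical stand-in for the actual Mochi transformer, not the node's real code:

```python
# Conceptual sketch of "unsampling" with a rectified-flow model:
# integrate the learned velocity field from the clean latent toward noise,
# then resample forward with edited conditioning to get the edited video.
import torch

@torch.no_grad()
def unsample(x0: torch.Tensor, velocity_model, steps: int = 50) -> torch.Tensor:
    """Euler-integrate from t=0 (clean latent) to t=1 (noise)."""
    dt = 1.0 / steps
    x = x0.clone()
    for i in range(steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        v = velocity_model(x, t)  # predicted velocity dx/dt at this timestep
        x = x + v * dt            # step toward the noise endpoint
    return x  # approximately the noise that regenerates x0 when sampled forward
```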
@Guus · 14 days ago
Yoo. Do you think proper ControlNets will be on the way soon?
@TheFutureThinker · 14 days ago
I think there's a roadmap for an AI video model to become a solid model: Stage 1 - txt2vid, 2 - img2vid, 3 - start/end frames, 4 - then ControlNet for pose or vid2vid. Runway Gen-3 is a good example.
@kalakala4803 · 17 days ago
Awesome, thanks! Will try it on my PC. My 4070 can generate 6.8 seconds of txt2vid 😉
@TheFutureThinker · 17 days ago
😎👍
@skyebrows · 17 days ago
Now if you can figure out how to un-deep-fry the image. Maybe if someone makes a LoRA that understands contrast.