As usual, thank you for the explanation and all the small details, like the file being the same as the Flux models. I genuinely think you're a great teacher; taking the time to explain what does what is great!
@CodeCraftersCorner 1 month ago
Thank you very much! Glad you like them!
@juanjesusligero391 21 days ago
Thank you for your videos! ^^
@CodeCraftersCorner 21 days ago
Glad you like them!
@jysn101 25 days ago
Awesome! def following.
@CodeCraftersCorner 25 days ago
Awesome! Thank you!
@jorgemiranda2613 1 month ago
Thanks for sharing this!
@CodeCraftersCorner 1 month ago
Thanks for watching!
@Ollegruss_Music 1 month ago
Thanks for the video and for the links to resources.
@CodeCraftersCorner 1 month ago
Thank you!
@RubenTainoAI 1 month ago
Thank you!
@CodeCraftersCorner 1 month ago
Thank you for watching!
@Redtash1 1 month ago
Thanks for your videos. There is a video-to-video workflow in the Hunyuan custom nodes' examples folder.
@CodeCraftersCorner 1 month ago
Yes, it requires the ComfyUI-HunyuanVideoWrapper by Kijai.
@SeanScarbrough 26 days ago
I'm new to Comfy, came over from A1111. Where did you get your workflow?
@CodeCraftersCorner 26 days ago
Welcome to the community! You can find free workflows on the channel. In every video description, there is a section called "[RESOURCES]" where I share my workflows. Some are built by me, others are from the developers of the model I am showcasing.
@xcom9648 1 month ago
There is a video-to-video workflow in the unofficial Comfy version that was released a while back.
@CodeCraftersCorner 1 month ago
Yes, this is the repo: ComfyUI-HunyuanVideoWrapper by Kijai
@juanjesusligero391 21 days ago
@@CodeCraftersCorner Are the official and unofficial Hunyuan versions compatible? What I mean is, can I install and use both without breaking ComfyUI?
@xcom9648 21 days ago
@@juanjesusligero391 I have not tried both, but I don't see why it would be a problem. Don't worry about it; if you have an issue, you can just delete one and it will not affect your core Comfy installation.
@juanjesusligero391 21 days ago
@@xcom9648 I think that if the installation of the unofficial one changes the Python environment, it could break the official one. It probably won't, but I'd prefer not to risk my ComfyUI install if it can be helped :)
@Elektrashock1 1 month ago
The Hunyuan Latent Video node is missing. I updated Comfy, but it's still not available. Did you also install the Hunyuan video wrapper?
@CodeCraftersCorner 1 month ago
Hello, no need for custom nodes for this one. They are all native (built-in) nodes. Are you sure your ComfyUI updated correctly? Try updating manually if you used the Manager.
@johnedwards7655 1 month ago
Had the same problem. Manually updating with the Comfy update folder helped.
@CodeCraftersCorner 1 month ago
@@johnedwards7655 Glad the manual method worked.
@henkhbit5748 1 month ago
Thanks, tried it and it was OK, but it does not follow the prompt completely. I think the 5-second limitation impacts how closely it follows the prompt.
@CodeCraftersCorner 1 month ago
Thanks for sharing! I think so too. It is better for generating b-roll clips like atmospheric backgrounds.
@CryptoIndia9 1 month ago
I am trying it on my 4090 card with 24 GB VRAM for a 5-second video, and it's taking 30-40 minutes. Is it safe for my card, since the requirement says 40 GB or 80 GB VRAM? Also, I sometimes get an out-of-memory error on VAE Decode (Tiled).
@CodeCraftersCorner 1 month ago
Hello, this does not feel right. On 12 GB VRAM, it takes me 14 minutes to generate. Maybe you have the resolution too high, or you are missing some dependencies. For the VAE Decode one, I have mine set to tile size = 128 and overlap = 32.
@CryptoIndia9 1 month ago
@@CodeCraftersCorner I was using the VAE setting 160/64 with 154 frames to be generated, but now, after changing the VAE to 128/32, it's taking the same 30-40 minutes on my 4090. I'm using weight_dtype fp8_e4m3fn_fast; all other settings are the same as in the provided workflow.
@CryptoIndia9 1 month ago
No, in fact it took 1 hr 20 min. I actually locked my PC and it was working in the background, which may be the reason, but anyway it was nowhere near your time. Now I'm trying the same prompt with LTXV v0.9.1 to check the generation time.
@CryptoIndia9 1 month ago
Using LTXV v0.9.1, it took just 24 seconds on my 4090 to generate 153 frames of video... amazingly fast!
@CodeCraftersCorner 1 month ago
@@CryptoIndia9 Do you have anything running in the background? While I was recording, the generation was stuck at 1/20 for more than 20 minutes. When I stopped and closed my recording app, the generation took 14 minutes with the settings shown in the video. Yes, 100% system utilization for me as well.
@armauploads1034 1 month ago
Is img2video also possible, and can you please show a workflow for it? 🙂
@CodeCraftersCorner 1 month ago
Not with this model! If you can run the HunyuanVideo Wrapper, then there is a workflow for it.
@Darkwing8707 1 month ago
@@CodeCraftersCorner That method just uses llava to create a description of an image. It's not really I2V.
@nadora0 1 month ago
There is an fp8 version of the Hunyuan video model. Can I use it with this workflow?
@CGFUN829 1 month ago
There is a GGUF of the model, along with the Llama model used with it.
@nadora0 1 month ago
@@CGFUN829 Can you give me a link for that, please?
@CodeCraftersCorner 1 month ago
Hello, this is the native ComfyUI implementation. You can get the GGUF version from the GitHub page.
@giuseppedaizzole7025 1 month ago
Having low VRAM, why haven't you made a video using the GGUF models?
@CodeCraftersCorner 1 month ago
I was testing out to see if it can run on my system and I shared my results.
@giuseppedaizzole7025 1 month ago
@@CodeCraftersCorner Next one GGUF... :) Thanks
@CodeCraftersCorner 1 month ago
@@giuseppedaizzole7025 Okay, I will check whether I can run it and what the quality is like. If it's good, I will make a video.
@giuseppedaizzole7025 1 month ago
@@CodeCraftersCorner Great, I really appreciate that you answer. Thanks.
@Elektrashock1 1 month ago
Updated Comfy but no Latent Video node?
@CodeCraftersCorner 1 month ago
Okay, try this. In the ComfyUI folder, open a CMD / terminal. Type git log and check whether you have the commit 52c1d93. It was pushed yesterday (December 20th). It's possible your ComfyUI is not updating correctly.
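For anyone who would rather script the check above, here is a minimal sketch that scans git log output for the commit. The function names are illustrative, not part of ComfyUI or git; only the commit hash 52c1d93 comes from the reply.

```python
import subprocess

def log_contains_commit(log_text: str, short_hash: str) -> bool:
    # True if any line of `git log --oneline` output starts with the short hash.
    return any(line.split()[0].startswith(short_hash)
               for line in log_text.splitlines() if line.strip())

def repo_has_commit(repo_dir: str, short_hash: str) -> bool:
    # Run `git log --oneline` inside the ComfyUI folder and scan for the commit.
    out = subprocess.run(["git", "log", "--oneline"], cwd=repo_dir,
                         capture_output=True, text=True, check=True).stdout
    return log_contains_commit(out, short_hash)

# Example (run from inside your ComfyUI folder):
# print(repo_has_commit(".", "52c1d93"))
```

If this prints False, the update did not actually reach your local copy, which matches the "not updating correctly" diagnosis above.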
@nadora0 1 month ago
And LTX 0.9.1 here gives me an error because of the VAE. I tested a new VAE and got the same issue.
@PyruxNetworks 1 month ago
Update ComfyUI.
@CodeCraftersCorner 1 month ago
Hello, please update your ComfyUI to the latest version. You can also download the latest copy from their GitHub if you do not want to update your current version.
@silentage6310 1 month ago
Unfortunately, this does not use the full GPU in the PC. With 24 GB, it allows rendering 480x272 px (for a 4x upscale to FHD) and 241 frames (10 sec).
@CodeCraftersCorner 1 month ago
Thanks for sharing!
@saulg195 1 month ago
Can you use your own image?
@CodeCraftersCorner 1 month ago
Hello, not with this version of the model.
@andresz1606 1 month ago
Certainly not with less than 24GB VRAM. The VAE Decode will fail if your card can't handle the sampled video, but only after wasting a great deal of time with all the previous nodes, making it twice as useless. Don't even bother with less than 24 or 40GB VRAM.
@CodeCraftersCorner 1 month ago
As I showed in the video, you can run it with less than 24 GB, although it takes longer (14 minutes per video). Make sure to decrease the values in the VAE Decode (Tiled) node. As a tip, with a fixed seed you can find the best values for your card: generate with the default values, and once you get the "Out of memory" error, decrease the values and queue the prompt again. Since ComfyUI will not re-run nodes that have already executed successfully (if the seed is fixed), it will jump directly to the VAE Decode node. Keep changing the values until you have something that works for you.
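The manual "decrease and queue again" loop described above can be sketched in code. This is only an illustration of the idea: the decode function, the starting tile values, and the use of MemoryError are hypothetical stand-ins for the VAE Decode (Tiled) node, not ComfyUI internals.

```python
def tile_candidates(tile=256, overlap=64, min_tile=64):
    # Yield progressively smaller (tile_size, overlap) pairs, halving each time,
    # mirroring the manual "decrease the values and queue again" process.
    while tile >= min_tile:
        yield tile, overlap
        tile //= 2
        overlap = max(overlap // 2, 8)

def decode_with_fallback(decode_fn, latents):
    # decode_fn(latents, tile_size, overlap) is a hypothetical stand-in for the
    # VAE Decode (Tiled) node; on out-of-memory, retry with smaller tiles.
    for tile, overlap in tile_candidates():
        try:
            return decode_fn(latents, tile, overlap)
        except MemoryError:
            continue  # shrink the tiles and try again
    raise RuntimeError("Ran out of tile sizes to try")
```

The fixed seed plays the role of the cache here: only the failing decode step is repeated, everything upstream is reused.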
@SonidosEnArmonia_1992 1 month ago
This model does NOT do img2vid.
@CodeCraftersCorner 1 month ago
Yes, not for now. It's in their plans to release an image-to-video model.
@pwknai 1 month ago
There is one nice, simple solution to this whole complicated VRAM problem: let's all buy an Nvidia H100! (...when our yearly income reaches a million dollars T_T)
@CodeCraftersCorner 1 month ago
I'm afraid that's not a realistic solution for most of us!