Dude, I'm going crazy. I have made several LoRAs; the background moves but the character remains static. I have tried different workflows and nodes, and when I do manage to move the character, it loses the LoRA characteristics. What could be my mistake? Are there any training parameters I should change, or should I use special nodes or a particular workflow?
@TheArt-OfficialTrainerКүн бұрын
Are you only using images? Unfortunately, you need to use 33-frame videos to get a lot of movement from the character. The requirement for that is 24GB of VRAM.
@yiyo4375Күн бұрын
@@TheArt-OfficialTrainer I only use images. But I have downloaded LoRAs from Civitai made with only 20 images that have more character mobility and fluid movements. I think I'm doing something wrong...
@吴威尔森-x6s4 күн бұрын
The audio is a bit quiet
@TheArt-OfficialTrainer4 күн бұрын
Thank you for the feedback! I am brand new to YouTube, so my video quality will only get better from here on out. Your feedback helps me improve.
@zuzannakalac90945 күн бұрын
How do I get rid of this flickering effect in the video?
@TheArt-OfficialTrainer5 күн бұрын
You can try to put words like “flickering” or “flashing” in the negative prompt!
@JefHarrisnation5 күн бұрын
Great and simple tutorial. Many Thanks!
@JefHarrisnation5 күн бұрын
After running this and getting it to run great, I restarted the pod, but port 3000 is no longer active. Running "python main.py --listen --port 3000" doesn't help. Any suggestions?
@TheArt-OfficialTrainer5 күн бұрын
Did you restart a new pod? Make sure that in the pod configuration you're exposing port 3000. The only other reason port 3000 wouldn't work is if there was another process running on it (e.g. the previous ComfyUI is still running on port 3000).
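If you want to check, something along these lines should show what is holding the port (just a sketch; lsof may not be preinstalled in every pod image):
    lsof -i :3000                           # list the process currently bound to port 3000
    kill <PID>                              # stop the old ComfyUI process (replace <PID> with the number lsof printed)
    python main.py --listen --port 3000     # relaunch ComfyUI on port 3000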
@mr2ti415 күн бұрын
Remember I subbed before your 100k :) good job man. I think you’ll go far - keep it up with the good info and content.
@TheArt-OfficialTrainer5 күн бұрын
That’s so kind of you to say! I appreciate the support, I’ll keep the content coming
@gjohgj6 күн бұрын
Very nice for img2vid and vid2vid! Wonder if controlnet also works with this model
@TheArt-OfficialTrainer6 күн бұрын
No controlnets yet and no way to train or finetune Cosmos yet from what I have read. I'm sure there are people researching it, so hopefully we see more come out soon!
@PhantasyAI06 күн бұрын
32:20 you used the wrong model by the way. You want the Cosmos-1.0-Autoregressive-13B-Video2World or Cosmos-1.0-Autoregressive-5B-Video2World I think. Edit: I think Autoreg is not yet supported in comfyui. I might be wrong.
@TheArt-OfficialTrainer6 күн бұрын
Yeah, only the diffusion models are supported in ComfyUI currently. I followed Comfy’s documentation for which models to use. You can read a little more about the diffusion models here: developer.nvidia.com/blog/advancing-physical-ai-with-nvidia-cosmos-world-foundation-model-platform To be honest, I don’t know much about the autoregressive models; I need to read up some more.
@PhantasyAI06 күн бұрын
Turns out you can also finetune/train the model using something they made called Nvidia Nemo. Can you make a video on that? Would love to do a full finetune or maybe create loras or something for this model! I've been getting insane results with image to video!
@TheArt-OfficialTrainer6 күн бұрын
Based on this link, NeMo is for finetuning LLMs. www.nvidia.com/en-us/ai-data-science/products/nemo/?ncid=no-ncid Got a link for evidence of using it to train the Cosmos models?
@TheArt-OfficialTrainer6 күн бұрын
Following up, I signed up for the Nvidia Developer Portal and confirmed that NeMo is just for finetuning LLMs. Hopefully there are people out there researching how to finetune Cosmos like we can with Hunyuan, CogX, LTX, etc.
@TheArt-OfficialTrainer6 күн бұрын
@@PhantasyAI0 interesting, I stand corrected! Looks like NeMo can be used for this model, I’m going to give it a shot. If it works well, I will upload a video next week.
@dailystory66787 күн бұрын
Great video, thanks a lot! I just want to know, do we need to stop the training first to test the LoRAs, or is it possible to test them simultaneously?
@TheArt-OfficialTrainer7 күн бұрын
It’s a question of how much VRAM you have available. If you use a GPU like an H100 or maybe an A6000 Ada, you can test while training is running. Training takes almost exactly 24GB of VRAM when using 33 frame videos, and Hunyuan takes roughly 8-12GB minimum to run, so you need at least 32GB to attempt that. If you’re training with only images, you may be able to do low resolution testing on 24GB of VRAM.
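If you want to check the headroom first, something as simple as this works (nvidia-smi ships with the driver, so it should be available in most pods):
    watch -n 1 nvidia-smi    # watch GPU memory usage while training runs; only launch ComfyUI if enough VRAM is free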
@joacosolbes92838 күн бұрын
Amazing video bro, what are the minimum specs to try to run this locally? How many GB on the graphics card?
@TheArt-OfficialTrainer8 күн бұрын
You can probably do it with under 12GB! Make sure to use 20 double blocks and 40 single blocks on the Hunyuan block swapping node. Latent Sync is very lightweight; Hunyuan video is the heavy part of the workflow. I’m going to make a video in the future where I progressively add more space-saving optimizations to see how little VRAM we can use for Hunyuan.
@joacosolbes92838 күн бұрын
@TheArt-OfficialTrainer thks mate
@羽瀨川小鳶8 күн бұрын
Can you show me the effect of generating videos from anime images?
@TheArt-OfficialTrainer8 күн бұрын
Yes, here you go! www.patreon.com/posts/cosmos-anime-120080465 Unfortunately, Cosmos is better suited for real-world examples because its purpose is to generate real-world datasets to train machines. It doesn't do animated video very well.
@yiyo437510 күн бұрын
Friend, I would like to know if there is a command to pause the training. For now I force-close it, and I don't know if that will harm the samples. I would really appreciate it! 😢
@TheArt-OfficialTrainer10 күн бұрын
Don’t worry, killing the command process won’t harm the samples! To resume from the last checkpoint, use the --resume_from_checkpoint flag in the command line command
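As a rough example, the resume command might look something like this (assuming a deepspeed launch like diffusion-pipe's README shows; the config path is just a placeholder, use whatever command and config you originally trained with):
    # same command you trained with, plus the resume flag
    deepspeed --num_gpus=1 train.py --deepspeed --config /workspace/my_hunyuan_lora.toml --resume_from_checkpoint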
@yiyo43759 күн бұрын
@@TheArt-OfficialTrainer I understand. Friend, I have been training, but my LoRAs have deformed hands. Do you know what parameter may be affecting that?
@TheArt-OfficialTrainer9 күн бұрын
Unfortunately, that's typically a problem with AI. Unless you are training a hand LoRA in particular, the hands can warp and move somewhat unrealistically, since AI tends to struggle with the concept of hands. Some seeds will be better than others.
@yiyo43758 күн бұрын
Friend, do you know if it is better to train without a background, i.e. with a simple or white background, to help it learn the physics of the character? Or is that worse, so it then no longer follows other instructions? With my setup I can't run many tests because training takes me a long time. Any advice helps me!
@TheArt-OfficialTrainer8 күн бұрын
@ I would suggest using diverse backgrounds, not just simple or white! If you just use a simple white background, the Hunyuan model will tend to generate white backgrounds no matter what your prompt is.
@WhySoBroke11 күн бұрын
Fantastic video!! Can you please share a good config settings file to get started?
@TheArt-OfficialTrainer11 күн бұрын
There is already one in the diffusion-pipe repo! Take a look at the part of this video where I set up the configs; you'll see where I pull the example config file from.
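If you just want a starting point, you can copy the example and edit it (assuming the repo still keeps the example under examples/; the exact file name may differ by version):
    cd /workspace/diffusion-pipe
    cp examples/hunyuan_video.toml /workspace/my_lora_config.toml    # then edit the dataset, model, and output paths inside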
@ModRebelMockups14 күн бұрын
What would a line in the install.sh file look like if I added it to this script to download a ComfyUI node, like comfyui-essentials, for example? Where would I put it?
@TheArt-OfficialTrainer13 күн бұрын
Very similar to installing the ComfyUI Manager! Find the git repo and clone it into the custom_nodes folder, then install its requirements with "pip install -r requirements.txt" if the git repo has a requirements.txt file.
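For comfyui-essentials specifically, the lines would look roughly like this (the repo URL here is from memory, double-check it before adding it to install.sh):
    cd /workspace/ComfyUI/custom_nodes
    git clone https://github.com/cubiq/ComfyUI_essentials.git
    pip install -r ComfyUI_essentials/requirements.txt    # only needed if the repo ships a requirements.txt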
@ModRebelMockups9 күн бұрын
@@TheArt-OfficialTrainer thanks!
@ModRebelMockups14 күн бұрын
So, OK, I made the script. I ran it the first time I created the pod. I exited the pod (not deleting it, though) and am reopening it. Now what do I do to run Comfy again? The ComfyUI folder is already there, so I don't need to run the script again, but Comfy isn't automatically opening on port 3000.
@ModRebelMockups14 күн бұрын
Never mind, figured it out: activate the Python environment and run python main.py again. Leaving this here in case it helps someone else.
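Concretely, the steps are (assuming the venv lives inside the ComfyUI folder like in the video):
    cd /workspace/ComfyUI
    source venv/bin/activate
    python main.py --listen --port 3000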
@ModRebelMockups15 күн бұрын
Thanks so much for helping me create my own custom startup script! It previously took 15 minutes to start using the ComfyUI template; now it takes me only 4 minutes! I highly recommend everyone create their own startup script using this video as a base to work from.

I was running the ComfyUI template, and every few times I ended and restarted the pod and ran ComfyUI, it'd just be a blank white page. It's so annoying to have to delete the pod, make a new one, and wait another 15 minutes for everything to load. Now I can load exactly what I want and need, not something random while hoping it works. Thanks so much!

I hope you keep posting videos to help us newbies create our own processes, workflows, etc. Most people just give you some template or some workflow, but that's not really what we need. We need someone to teach us how to teach ourselves to make things custom. Isn't that the power of ComfyUI? I don't want to copy-paste anymore; I just need some help teaching myself some base knowledge, like this video. Thank you so much!

My next request would be to go through a practical example. Let's say you want to create a workflow that does a specific task, like creating a realistic person and changing the color of their clothes but not the clothes themselves, and also controlling their pose from an idea in your head (not copying another image's pose, because you want something unique). Then show how you go from that idea to a finished workflow, explaining your thought process along the way: the why and how of deciding which nodes to use, instead of just "here is a workflow, use it!" Teach us the process. Thank you!
@TheArt-OfficialTrainer11 күн бұрын
So glad I could help! I will take those suggestions into consideration! There are a lot of folks out there creating workflows, but few people showing the backend of how things work under the hood which is why I have focused on that so far. In the meantime I would check civitai or similar sites for workflows that people in the community have created!
@icepickgma15 күн бұрын
Good job, thanks for the valuable tutorial!
@louiewashere315 күн бұрын
I just can't keep up; my ComfyUI directory is a mess. Every day there's something new to add and learn, lol.
@TheArt-OfficialTrainer15 күн бұрын
It really does move so fast! That’s what keeps it interesting :)
@ModRebelMockups17 күн бұрын
Could you please show how to add a Hugging Face token to the aria2c line, to download something that requires an HF token?
@TheArt-OfficialTrainer16 күн бұрын
Sure! I can do a video on huggingface in the future. Add this to the aria2c command:
    --header "Authorization: Bearer mytoken123"
So it should look like:
    aria2c -c -x 16 -s 16 --header "Authorization: Bearer mytoken123" url.com -d /directory -o filename.safetensors
@ModRebelMockups15 күн бұрын
@@TheArt-OfficialTrainer Thanks! I am trying to follow your vid. I put "pip install -r requirements.txt" into the terminal in RunPod and it did some stuff, then returned this error:

Downloading torch-2.5.1-cp311-cp311-manylinux1_x86_64.whl (906.5 MB)
10.4/906.5 MB 192.7 MB/s eta 0:00:05
ERROR: Exception:
Traceback (most recent call last):
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 438, in _error_catcher
    yield
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 561, in read
    data = self._fp_read(amt) if not fp_closed else b""
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 527, in _fp_read
    return self._fp.read(amt) if amt is not None else self._fp.read()
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/filewrapper.py", line 98, in read
    data: bytes = self.__fp.read(amt)
  File "/usr/lib/python3.11/http/client.py", line 473, in read
    s = self.fp.read(amt)
  File "/usr/lib/python3.11/socket.py", line 718, in readinto
    return self._sock.recv_into(b)
  File "/usr/lib/python3.11/ssl.py", line 1314, in recv_into
    return self.read(nbytes, buffer)
  File "/usr/lib/python3.11/ssl.py", line 1166, in read
    return self._sslobj.read(len, buffer)
TimeoutError: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_internal/cli/base_command.py", line 180, in exc_logging_wrapper
    status = run_func(*args)
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_internal/cli/req_command.py", line 245, in wrapper
    return func(self, options, args)
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_internal/commands/install.py", line 377, in run
    requirement_set = resolver.resolve(
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 179, in resolve
    self.factory.preparer.prepare_linked_requirements_more(reqs)
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_internal/operations/prepare.py", line 552, in prepare_linked_requirements_more
    self._complete_partial_requirements(
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_internal/operations/prepare.py", line 467, in _complete_partial_requirements
    for link, (filepath, _) in batch_download:
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_internal/network/download.py", line 183, in __call__
    for chunk in chunks:
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_internal/cli/progress_bars.py", line 53, in _rich_progress_bar
    for chunk in iterable:
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_internal/network/utils.py", line 63, in response_chunks
    for chunk in response.raw.stream(
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 622, in stream
    data = self.read(amt=amt, decode_content=decode_content)
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 560, in read
    with self._error_catcher():
  File "/usr/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 443, in _error_catcher
    raise ReadTimeoutError(self._pool, None, "Read timed out.")
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.

What happened?
@ModRebelMockups15 күн бұрын
Fixed it. Needed to first update pip with "pip install --upgrade pip" before running "pip install -r requirements.txt".
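For anyone else hitting the same timeout:
    pip install --upgrade pip
    pip install -r requirements.txt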
@TheArt-OfficialTrainer13 күн бұрын
Glad you were able to figure out the fix!
@ModRebelMockups9 күн бұрын
@@TheArt-OfficialTrainer What would this line look like for a Civitai model with their auth token?
@thenextension916017 күн бұрын
thanks for showing this end to end, very helpful.
@TheArt-OfficialTrainer16 күн бұрын
Glad it was helpful!
@Chaz-x1i19 күн бұрын
I have a better idea, you make the loras and then make them available for download, because who has $1600 for a 24 GB rig?
@TheArt-OfficialTrainer19 күн бұрын
Training a LoRA this way should only cost $3 max! There are too many LoRA possibilities for me to create all the ones that people want.
@Chaz-x1i19 күн бұрын
@@TheArt-OfficialTrainer How, by renting a GPU online? What would happen if I tried to do it with an RTX 3060, would it just take a long time or is it not even possible?
@TheArt-OfficialTrainer18 күн бұрын
Yeah, this tutorial uses RunPod. On a 3060, there’s a chance you could train using 512x512 images. And if that doesn’t work, you could even try 256x256. Just make sure to adjust the bucket size correctly.
@Chaz-x1i18 күн бұрын
@@TheArt-OfficialTrainer Thanks. I found there actually are a bunch of Hunyuan LoRAs available on CivitAI anyway, though if I ever do want to make one your video will come in handy.
@mohammedAli-h9p7d21 күн бұрын
Do you have a Discord or any type of contact/socials so I can ask you questions if I struggle with things or need help?
@TheArt-OfficialTrainer20 күн бұрын
Message me on my Patreon!
@TheArt-OfficialTrainer20 күн бұрын
I’m thinking about creating a Discord in the future, but haven’t done it yet
@davidhudson26721 күн бұрын
Thanks for the video. For the image-to-video part, additional prompts to shape the video should go into that text_b part, right?
@TheArt-OfficialTrainer20 күн бұрын
I would put your prompt in the "Flux Prompt Enhance" node. The only time I would change what's in "text_a" or "text_b" would be if you have very specific words you want in your prompt, for example a LoRA trigger word, which needs to be spelled exactly a certain way.
@TheArt-OfficialTrainer20 күн бұрын
But yes, all that node is doing is appending text to the Flux Prompt Enhance output
@gjohgj22 күн бұрын
Amazing vid, thx!
@TheArt-OfficialTrainer22 күн бұрын
Glad it helped!
@toketokepass22 күн бұрын
Whenever I try to run the Hunyuan video wrapper workflow to use LoRAs, I get this error: "Error(s) in loading state_dict for AutoencoderKLCausal3D: Missing key(s) in state_dict: "encoder.down_blocks.0.resnets.0.norm1.weight" etc." Whereas when I use the native Hunyuan workflow I get no such error. I'm wondering if I need a different VAE or something...
@TheArt-OfficialTrainer22 күн бұрын
Are you using the “Hunyuan Decode” node? There is a specific one for Kijai’s nodes
@toketokepass22 күн бұрын
@@TheArt-OfficialTrainer I'm using all the Kijai-specific nodes, though I think I've figured out the issue: I don't have SageAttention and Triton installed, and they look like a pain to install.
@TheArt-OfficialTrainer22 күн бұрын
You don’t need to use sageattention, you can just use flash attention in that dropdown instead. I’m not sure whether Triton is required if you don’t use sageattention.
@toketokepass22 күн бұрын
@@TheArt-OfficialTrainer I didn't have flash attention installed either, so I used comfy attention, which seems to be preinstalled. Is there a big difference between sage, flash, and comfy attention?
@toketokepass22 күн бұрын
@@TheArt-OfficialTrainer Cheers for your help so far, man. I'm currently trying to get Kijai's img2vid (IP2V) working but get this error: AttributeError: 'LlavaForConditionalGeneration' object has no attribute 'final_layer_norm'. ChatGPT seems to think this is related to not having the transformers lib installed, but there was no mention of needing to install a transformers lib on Kijai's git page.
@Jutochoppa123 күн бұрын
What's the purpose of the block edit node and the torch compile settings node (noob question)? Why not just use the LoRA select?
@TheArt-OfficialTrainer23 күн бұрын
They help save VRAM! With Kijai's custom nodes, if you don't use torch compile and block edit, you won't be able to fit the full 720x1280x129-frame generation on 24GB VRAM. I think you can fit the full resolution into less than 16GB VRAM with torch compile and block swap.
@DM-dy6vn23 күн бұрын
Obviously, ComfyUI saves the last used workflow somewhere and reloads it automatically, even after a clean fresh installation. I don't know how to feel about it.
@TheArt-OfficialTrainer23 күн бұрын
If you’re using RunPod, I actually think it’s cached on the webpage, so if you use the same pod and do a clean install, it uses the previously cached webpage to reload. If you’re using your own local machine, I’m not sure..
@miken3d24 күн бұрын
great video, thanks!!!
@ZacMagee24 күн бұрын
Love your runpod content man. Keep it up. Question, why do you use the notebook rather than the docker setups? Is it possible to expose an API this way?
@TheArt-OfficialTrainer23 күн бұрын
I've been meaning to learn Docker, and I know it would be beneficial, but I just haven't gotten around to it yet. I'm also typically playing around with different environment setups; if I ever went to productionize a workflow, I would set up a Dockerfile for it. You can definitely run ComfyUI workflows through an API; that's on the list of videos for me to make.
@bit.chan.0124 күн бұрын
It works with Pinokio; it should be easier right from the emulator.
@bit.chan.0124 күн бұрын
Yeah, it takes too much, man. The info needed is: the steps to take to get it working, the hardware required, and the time per piece with each level of hardware.
@TheArt-OfficialTrainer21 күн бұрын
Thanks for the tip! This is more of a live stream type video where I show my process for trying a model the first time so that others can learn on their own and won’t have to wait for a tutorial to come up showing them how.
@임수경-e5v25 күн бұрын
Following your video, I encountered the following error: "No module named 'hyvideo'". Is there a solution for this error?
@TheArt-OfficialTrainer24 күн бұрын
During Lora Training? Or during ComfyUI workflow?
@임수경-e5v24 күн бұрын
@@TheArt-OfficialTrainer Lora
@TheArt-OfficialTrainer24 күн бұрын
@임수경-e5v can you share the actual error? I would guess either you’re missing a package or the path to your hunyuan video model is incorrect
@dewabrata8325 күн бұрын
Really cool tutorial, detailed and clear, thank you
@TheArt-OfficialTrainer25 күн бұрын
Thank you for watching, I'm glad I could help!
@Ganymede198625 күн бұрын
I tried these steps on QuickPod, which has much better prices; however, SageAttention fails to compile using the template they have. They've got CUDA 12.4 and 12.6 bundled with Python 3.10. I noticed you selected a template with Python 3.11 and CUDA 12.4, and I did try this on RunPod and SageAttention installs just fine. Do you know why it might fail with 3.10 and CUDA 12.4?
@TheArt-OfficialTrainer24 күн бұрын
I'm not sure; I would try two things. First, you could just do "pip install sageattention", which will download v1.0.6, which is only about 2% slower than v2. The other option would be to install your virtual environment with Python 3.11. You can do "apt-get -y install python3.11 python3.11-venv" and then "python3.11 -m venv venv", which will use Python 3.11 for your environment and hopefully fix the issue. If that doesn't work, let me know what the actual compile error is.
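Put together, the fallback path would look roughly like this (assuming you run it from the project folder; adjust paths to your setup):
    # option 1: fall back to SageAttention v1 instead of compiling v2
    pip install sageattention
    # option 2: rebuild the venv on Python 3.11
    apt-get -y install python3.11 python3.11-venv
    python3.11 -m venv venv
    source venv/bin/activate
    pip install -r requirements.txt    # reinstall the project's dependencies into the fresh venv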
@Ganymede198624 күн бұрын
Yeah, I already tried a venv with Python 3.11 and it didn't fix the issue. Regardless, I did find a workaround: I copy-pasted the same Docker template URL for QuickPod and it now works fine. Once again, thank you for your tutorials so far!
@TheArt-OfficialTrainer24 күн бұрын
Nice! Glad you found a fix. Happy you’re enjoying the videos!
@TheArt-OfficialTrainer24 күн бұрын
@ganymede1986 What's your experience with QuickPod been like? It looks much cheaper than RunPod; I'd be interested in looking at it as an alternative if it's reliable.
@Ganymede198624 күн бұрын
@ Had a great experience with them. They even gave me extra credits on the first top-up. They don't have H100s, mostly 4090, 3090, Tesla, and A-series types. I was able to get a 4090 pod for $0.20 per hour with 100GB storage. Also, their support was pretty helpful when I needed assistance with opening some ports (they even threw in more extra credit during that conversation). Would recommend them.
@Falkonar26 күн бұрын
Absolutely stunning ! Thank you !
@gamersgabangest317926 күн бұрын
The problem with the workflow on my computer is that the first generation runs fine, but the subsequent generations are slow even if I unload the model or cache.
@TheArt-OfficialTrainer26 күн бұрын
That sounds like a memory leak. If you restart ComfyUI using the restart button in ComfyUI Manager, does the generation go fast again? Unfortunately, memory leaks are typically due to the source code... it could be another custom node you have installed, or something within the LTX Video custom nodes.
@nemesisleather26 күн бұрын
Nice work dude. Very helpful, clear, and detailed tutorial. I really appreciate the effort that went into this.
@TheArt-OfficialTrainer26 күн бұрын
Glad I could help!
@PhantasyAI026 күн бұрын
You are the best, THANK YOU SO MUCH!!!!!!!!! Hands down one of the best AI channels.
@TheArt-OfficialTrainer26 күн бұрын
Thank you for your support! Glad I could help.
@toketokepass27 күн бұрын
I've seen an FP8 version and a 12GB version of Hunyuan on Civitai; which of these would be best for a 3090?
@lucifer981427 күн бұрын
I used the fp8 version with the 12GB VRAM workflow on my 4060 (8GB VRAM) and it works fine; it takes me around 10 minutes to generate a 3-second video. So I guess the fp8 version should run perfectly on a 3090. Don't overthink it: a lot of people have claimed Hunyuan took them an hour on their 4090, but those claims simply aren't true. If my 8GB card can do it in 10 minutes, it's not impossible, so don't blindly listen to such claims; give it a try yourself and you'll understand better.
@TheArt-OfficialTrainer27 күн бұрын
These are the two repos for Hunyuan models. I would not download them from Civitai; download them directly from the source:
Hunyuan: tinyurl.com/ypjmc4ns
Kijai: tinyurl.com/y4vwrjet
I would start with Kijai's cfgdistill_fp8 model because it is the best balance of quality, speed, and VRAM usage. As the other poster said, you won't have any trouble running on a 3090, but you may not be able to run at the full 720p resolution without Torch Compile and Block Swap working. You will just need to adjust the resolution and number of frames until it fits in your VRAM.
@lucifer981426 күн бұрын
@@TheArt-OfficialTrainer No offense, but in one of these links you've shared there is a pickletensor file, and as you very well know, pickletensors aren't known for their safety.
@TheArt-OfficialTrainer26 күн бұрын
That’s the official Hunyuan Video repository from Tencent. All models came from those files.
@PhantasyAI029 күн бұрын
What I wish existed is a custom node allowing me to run ComfyUI locally but connected to a RunPod serverless API. Do you know if anything like that exists? Also, a serverless guide for ComfyUI would be a cool addition to the series.
@TheArt-OfficialTrainer29 күн бұрын
I think you would just need to create your own Docker image with all of the custom nodes you need installed, and then give it a JSON file that executes the workflow you want. I haven't worked with serverless before, but I'll add it to my list to learn, and I'll make a video if I find it useful!
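For reference, a plain ComfyUI instance already exposes an HTTP endpoint you can post a workflow to, so something like this should queue a job (assuming the default port 8188 and a workflow exported with "Save (API Format)"; I haven't productionized this myself):
    # workflow_api.json is the workflow exported in API format from ComfyUI
    curl -X POST http://127.0.0.1:8188/prompt \
      -H "Content-Type: application/json" \
      -d "{\"prompt\": $(cat workflow_api.json)}"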
@drmuradkhan29 күн бұрын
I plan to buy an RTX 3090 with 24GB VRAM. Will it work for this workflow?
@drmuradkhan29 күн бұрын
and thank you for the tutorial
@TheArt-OfficialTrainer29 күн бұрын
Yes, this will absolutely run on a 3090! I'm showing Linux here, so if you use Windows, you need to set up WSL to use SageAttention. The only problem I've heard of with the 3090 is that there have been some torch compile issues reported (the node I show around 15 minutes in). If torch compile doesn't work, you'll just have to lower the resolution (height x width) until it fits in your VRAM.
@drmuradkhan29 күн бұрын
@TheArt-OfficialTrainer Thank you for your reply. I will message again if I find any issues. I hope you will respond, and I wish you all the best for your channel.
@PhantasyAI029 күн бұрын
First subscriber! Great videos man. Learning runpod because of you :)
@TheArt-OfficialTrainer29 күн бұрын
@@PhantasyAI0 Glad to hear it! I’ll have a lot more videos coming out, so stay tuned! :)
@Marco-jg8dz29 күн бұрын
thank you so much!
@TheArt-OfficialTrainer29 күн бұрын
Absolutely, glad to help! I hope you'll consider subscribing; I have a lot more videos coming, and I am aiming to be the first to have videos out for all the new AI models!
@PhantasyAI029 күн бұрын
Can you make a video using RunPod's "Hunyuan Lora Train Simple Interface" template, please? No videos exist showing us how to train Hunyuan Video LoRAs. I'm trying to train it on videos in order to teach it motion. Please make a video on this if you can.
@TheArt-OfficialTrainer29 күн бұрын
Absolutely! I already had that in the pipeline. Probably before Jan. 1st, I just need to collect a dataset