Another winning install guide with your tested steps. Well done!!!
@StableAIHub (2 months ago)
Thank you.
@jyotishuniverse (2 months ago)
So easy to install. Thank you. There were some errors after installation, but it worked fine.
@StableAIHub (2 months ago)
You're welcome.
@nitsgamer7740 (2 months ago)
A pre-quantized version of the models would be helpful; it takes time to quantize during launch. Worked fine on 12 GB. Thank you.
@TheSslawek89 (2 months ago)
Where do I find this model and how do I install it? Thanks 😀
@StableAIHub (2 months ago)
You don't need a separate model. Just set it up as shown in the video.
@indecomsh (2 months ago)
Working fine on 12 GB VRAM and very fast too. Appreciate the guide.
@nguyenhongduong2906 (1 month ago)
Awesome, thank you so much, this tutorial is so convenient and easy!
@trishul1979 (2 months ago)
Thank you. Very easy guide.
@johnnyshand6514 (2 months ago)
On my RTX 3060 12 GB it takes an eye-watering nine and a half minutes per image! The Pinokio version is about the same, using the full-size model. Surely this one should be faster? Is there any way to speed things up at all?
@StableAIHub (2 months ago)
I don't think the speed can be increased unless you have lots of VRAM. A lot of users have complained on their GitHub page.
@alexanderkuo1578 (1 month ago)
I feel like I'm doing something wrong. Using the Pinokio version on an RTX 3080 8 GB, it took me 1 hr 40 mins for an image...
@StableAIHub (1 month ago)
@@alexanderkuo1578 Sorry, I don't know about the Pinokio version. You can try this standalone version.
@garciagarikaperez7023 (2 months ago)
How do I install LivePortraitTalker?
@StableAIHub (2 months ago)
Let me check.
@StableAIHub (2 months ago)
The results are bad. Do you still want an installation guide for it? github.com/mkara44/liveportrait_talker/issues/8
@ROKKor-hs8tg (2 months ago)
Thank you. Where is the path to the compressed version? Can you show just the compression code? Can you upload an 8-bit compressed version?
@StableAIHub (2 months ago)
Everything remains as-is. The compression/quantization happens through code at launch.
@QorQar (2 months ago)
How do I save a compressed version and use it instead of the original version, since the original needs more than 12 GB of RAM?
@QorQar (2 months ago)
What is the quantization code to quantize the original version on a Colab TPU and then use the quantized version on a Colab T4?
@StableAIHub (2 months ago)
@@QorQar I am also looking to understand how to create a quantized version and then use it. So far I have only been able to do that with GLM-4.
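If it helps, here is a minimal sketch of the generic Hugging Face route I used for GLM-4 (8-bit loading with bitsandbytes via transformers). This is not OmniGen's own quantization code, and the model ID and output folder are just examples:

    # Generic 8-bit quantization sketch using transformers + bitsandbytes.
    # Assumptions: a CUDA GPU runtime (bitsandbytes does not run on a Colab TPU)
    # and a recent transformers version that can serialize 8-bit weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "THUDM/glm-4-9b-chat"  # example model; replace with the one you need
    bnb_config = BitsAndBytesConfig(load_in_8bit=True)

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb_config,
        device_map="auto",  # lets accelerate place layers on the available GPU
        trust_remote_code=True,
    )

    # Save the quantized copy once, then reload it later without re-quantizing.
    model.save_pretrained("glm4-8bit")
    tokenizer.save_pretrained("glm4-8bit")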
@ROKKor-hs8tg (2 months ago)
@@StableAIHub ❤❤😂
@Vanced2Dua (1 month ago)
Thanks.
@rogersnelson7483 (2 months ago)
11 minutes in and only at 4%. Useless for me with only 8 GB. Also, the model is over 15 GB. I tried the Hugging Face page, did one image, tried a second image, and it said I had run out of the daily usage.
@StableAIHub (2 months ago)
Yeah, Hugging Face has changed the usage limit per user. Not sure why it is not working for you. If you watch the video, I tested on 8 GB VRAM myself.
@rogersnelson7483 (2 months ago)
@@StableAIHub My guess is it's using the CPU instead of the GPU. I have had a few problems with other programs because of this, or it could be my card. Maybe I will try again. I may need to learn more of the Python command line.
@StableAIHub (2 months ago)
It's easy to check:
1. Activate the virtual environment.
2. Run pip list.
See whether torch is installed with CUDA support or not.
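If pip list is ambiguous, a quick check from Python works too (assuming torch is already installed in the activated environment):

    import torch

    print(torch.__version__)          # a "+cuXXX" suffix means a CUDA build
    print(torch.cuda.is_available())  # False means generation will fall back to the CPU
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # e.g. the name of your RTX card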
@letsgobro16 (2 months ago)
So slow, and the results are not good in my experience.
@StableAIHub (2 months ago)
Yeah, a bit slow. I am happy with the results considering it is free. In a year's time we will see good progress and tools with better output.
@abujr101Ай бұрын
getting error " need conda init first before activating conda" when trying to activate conda omnigen
@StableAIHub (1 month ago)
Don't use PowerShell; use Command Prompt.
@ZorlacSkater (1 month ago)
I get "you need to call conda init first", but if I do that, it gives me another error.
@StableAIHub (1 month ago)
For which step are you getting this error?
@ZorlacSkater (1 month ago)
@@StableAIHub For "conda activate omnigen".
@alexanderkuo1578 (1 month ago)
I got a bunch of warnings (namely about how I should enable developer mode or run Python as admin for caching assets)... but it still worked! Thanks. Periodically I get an error (I forget what it is), but closing and restarting the whole thing "fixes" it. And some other times when I start it and click generate, it says "Loading safetensors" for about 2-3 minutes and then just exits by itself. But is there a way I can update OmniGen to the newest version? I see from their Git demo that they've added a few options for low VRAM, as well as an easy "set output to input dimension" option. Not sure what else they have under the hood.
@StableAIHub (1 month ago)
If they have added options to optimize memory usage, I suggest using their repo and seeing if it works. Let me know if it is not working and I will have another look.
@alexanderkuo1578 (1 month ago)
@@StableAIHub Thanks. Is it as easy as running the git command from your instructions, pointing to their repo? Do I need to then repeat the other steps, or will it overwrite the existing files (versus thinking it's a new "install" because it's a different repo)? Sorry, I'm new to this, so very basic question. Thanks!
@StableAIHub (1 month ago)
@@alexanderkuo1578 Follow the instructions on their GitHub: github.com/VectorSpaceLab/OmniGen?tab=readme-ov-file#5-quick-start
@alexanderkuo1578 (1 month ago)
@@StableAIHub I liked your pip install from requirements; I guess I can't do that then? I don't know how to find out which CUDA version I need to install. Did yours detect that automatically, or just default to one of the versions? But if I install into the same conda environment I created following your install, can I just clone their git repo directly, since everything else should already be installed?
@alexanderkuo1578 (1 month ago)
I think I figured it out myself. From within the same conda venv, I deleted the OmniGen folder and cloned it again from their repository. I then had to re-install Gradio spaces (it threw an error that spaces wasn't found when I didn't). Then I was able to re-run it. The install was "smart" enough to know I had already installed everything, so nothing new was downloaded/installed when I ran the pip command for their repository and Gradio. [edit] But now my images are taking 1 hr+, whereas with your install they were ~3 mins each. Maybe I'll just go back to your install. No idea what's going on behind the scenes.
@AInfectados (2 months ago)
Are 50 steps really necessary?
@StableAIHub (2 months ago)
That is the default value. Try decreasing it and see if the quality suffers.
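If you are calling it from Python instead of the Gradio UI, a rough sketch based on the examples in the OmniGen README looks like the following. Argument names may differ between versions, and the prompt is just an example, so treat this as illustrative only:

    from OmniGen import OmniGenPipeline

    # Load the pipeline; "Shitao/OmniGen-v1" is the model ID used in their README.
    pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1")

    images = pipe(
        prompt="a red vintage car parked on a mountain road",  # example prompt
        height=1024,
        width=1024,
        guidance_scale=2.5,
        num_inference_steps=30,  # UI default is 50; fewer steps is faster but may reduce quality
        seed=0,
    )
    images[0].save("output.png")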