LM Studio Local Inference Server: Voice Conversation with Text Input Option and Code (Part 2)

  6,572 views

VideotronicMaker

1 day ago

Comments: 18
@amrohendawi6007 5 months ago
Very nice video; comprehensive and easy to follow.
@godned74 1 month ago
Just to let you know: it is possible to run a powerful LLM without a GPU using the GPT4All interface, an LM Studio alternative. The main difference is that you can run a powerful open-source LLM on even the most modest computer, even with no GPU. The trick is to add the token count to the Python script.
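The "token count" trick the comment describes can be sketched with the GPT4All Python bindings. This is a minimal, hedged example: the model filename and the 200-token budget are illustrative assumptions, not values from the video.

```python
# Sketch of CPU-only inference via the gpt4all Python bindings (pip install gpt4all).
# MAX_TOKENS is the "token count" the comment refers to: it caps the response
# length so generation stays fast on modest hardware. Values are illustrative.
MAX_TOKENS = 200

def build_prompt(user_text: str) -> str:
    """Wrap user input in a simple instruction template."""
    return f"### Instruction:\n{user_text}\n### Response:\n"

try:
    from gpt4all import GPT4All

    # Model name is an assumption; pick any small GGUF model from the GPT4All catalog.
    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
    with model.chat_session():
        reply = model.generate(build_prompt("Hello!"), max_tokens=MAX_TOKENS)
        print(reply)
except ImportError:
    # gpt4all not installed; show the prompt that would be sent.
    print(build_prompt("Hello!"))
```

Capping `max_tokens` is what keeps responses snappy without a GPU; raise it if you can tolerate longer generation times.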
@XinYue-ki3uw 5 months ago
I had fun watching! Great learning pal.
@videotronicmaker 5 months ago
Glad you enjoyed it
@Jesulex82 3 months ago
But can you hold a conversation with this model? Does it speak Spanish? Can we download it?
@videotronicmaker 3 months ago
The model used in this video is Microsoft's Phi-2, which only understands English. Here is the Hugging Face page with more information: huggingface.co/microsoft/phi-2. You would need a different model for Spanish, and I'm not sure which models of this size understand it. However, I have seen people fine-tune a base model into a different language, so I would check Hugging Face and search for something like "Phi-3-sp".
@Techn0man1ac 5 months ago
KITT, is that car KITT?
@videotronicmaker 5 months ago
Yes. Next will be Twiggy.
@Techn0man1ac 5 months ago
@@videotronicmaker 👍
@Alex29196 8 months ago
Cool, bro. Please integrate it with an AI-style TTS instead of the local TTS. Thanks!
@videotronicmaker 8 months ago
Definitely. Coming shortly.
@hannahpadilla2699 8 months ago
I found this video very helpful!!
@videotronicmaker 8 months ago
Glad I could be of service ;-)
@videotronicmaker 8 months ago
Here you go: better natural language processing. Latest video with code: kzbin.info/www/bejne/ooTRqod5aNGba9E
@MaxJM74 2 months ago
👀