Local LLM with Ollama, LLAMA3 and LM Studio // Private AI Server

  14,931 views

VirtualizationHowto

A day ago

Comments: 22
@kenmurphy4259 7 months ago
Thanks Brandon, nice review of what’s out there for local LLMs
@SteheveRodriguez 7 months ago
It's a great idea, thanks Brandon. I will test it in my homelab.
@fermatdad 7 months ago
Thank you for the helpful tutorial.
@romayojr 7 months ago
This is awesome and I can't wait to try it. Is there a mobile app for Open WebUI?
@jjaard 7 months ago
I suspect that technically it can easily run in any browser.
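There doesn't appear to be a dedicated mobile app, but Open WebUI is responsive and can be added to a phone's home screen like a web app. A minimal sketch of running it with Docker so it is reachable from any device on the LAN (assuming Ollama is already serving on the same host; port 3000 is just a common choice, not something from the video):

    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data --name open-webui --restart always \
      ghcr.io/open-webui/open-webui:main

Then browse to http://<server-ip>:3000 from a phone or desktop browser.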
@mjes911 6 months ago
How many concurrent users can this support for business cases?
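Rough answer, not from the video: it depends mostly on VRAM and model size. Recent Ollama builds can serve requests in parallel, tunable with environment variables; a sketch with illustrative values:

    # allow 4 simultaneous requests per loaded model and 2 models in memory
    OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=2 ollama serve

Each parallel slot needs its own context/KV-cache memory, so a handful of concurrent users per GPU on a small model is a realistic starting point.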
@trucpham9772 7 months ago
How do I run llama3 with Ollama on macOS? I want to expose it beyond localhost so I can use nextchatgpt. Can you share the commands for this solution?
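A rough sketch for macOS, assuming a standard Ollama install (the OLLAMA_HOST variable makes the API listen on all interfaces instead of only 127.0.0.1):

    # run the server bound to all interfaces, then pull the model
    OLLAMA_HOST=0.0.0.0 ollama serve
    ollama pull llama3

    # or, if you use the menu-bar app, set the variable persistently and restart Ollama
    launchctl setenv OLLAMA_HOST "0.0.0.0"

A web front end on another machine can then point at http://<mac-ip>:11434.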
@kderectorful 6 months ago
I am accessing the Open WebUI server from a Mac, and my guess is that the netsh command is about the Windows workstation you are accessing the server from. Is there a similar command that would need to be run, or if I do this on my Linux server via Firefox, will I still have the same issue? I cannot seem to get ollama3:latest installed for Open WebUI. Any insight would be greatly appreciated, as this was the most concise video I have seen on the topic.
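For what it's worth, the netsh portproxy step is Windows-only; it exists purely to forward traffic from the Windows host into the WSL VM's internal IP, roughly like this (the address is a placeholder):

    netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=11434 connectaddress=<wsl-ip> connectport=11434

On a native Linux server there is no WSL layer, so no equivalent is needed; binding Ollama to 0.0.0.0 via OLLAMA_HOST and opening the firewall port should suffice. Also note the model tag is llama3:latest, not ollama3:latest, which may be why the pull fails.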
@LibyaAi 7 months ago
Nobody explained how to install Ollama and run it the proper way; it should be step by step. Is Docker required before installing Ollama? I tried to install Ollama on its own and it didn't install completely!! I don't know why.
@kironlau 7 months ago
1. You should mention what your OS is.
2. Read the official documentation.
3. If you run on Windows, just download the exe/msi file and install it with one click (and click yes...).
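For reference, Docker is not required for Ollama itself; it is mainly the easiest way to run Open WebUI alongside it. A minimal sketch of a bare-metal install on Linux or WSL (the Windows and macOS installers do the equivalent in one click):

    # official install script, then pull and chat with a model
    curl -fsSL https://ollama.com/install.sh | sh
    ollama run llama3

If that completes, the API is listening on http://localhost:11434 for front ends like Open WebUI.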
@TheTricou 5 months ago
So this "guide" is missing some key things, like how to change the IP for WSL and how to run Ollama as a service. Even his written guide doesn't explain how to do this.
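On the service part: the Linux install script normally registers an ollama systemd unit, and WSL2 can run systemd if it is enabled, so a rough sketch (assuming a recent WSL build) would be:

    # /etc/wsl.conf inside the distro, then restart with: wsl --shutdown
    [boot]
    systemd=true

    # after the restart, Ollama runs as a normal service
    sudo systemctl enable --now ollama
    systemctl status ollama

The WSL IP can change on reboot, which is why guides either re-run the netsh portproxy with the new address or script that step.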
@thiesenf 5 months ago
GPT4All is another good locally running chat interface... it can run on both the CPU and the GPU using Vulkan...
@totallyperfectworld 15 days ago
How many CUDA cores do you need to run this without getting frustrated? I know, the more the better. But what really makes sense? Just trying to find out what graphics card I should get without busting my bank account…
@r3furbish3dbrain12 1 day ago
The Nvidia RTX 3060 is currently the best option for those looking to explore local LLMs without breaking the bank. It offers 12GB of VRAM and a solid amount of CUDA cores, making it capable of running numerous models effectively. If you later decide to invest in a more powerful GPU, the RTX 3060 can still complement the new GPU, ensuring it remains useful. In my opinion, if you're uncertain about your needs, it's wise to avoid spending on high-end Nvidia models. I made that mistake three months ago by almost purchasing a 4090, only to realize that local LLMs are primarily effective for specific tasks, such as average-quality code assistance. The technology is advancing rapidly, but we still lag behind major players. Additionally, with the RTX 3060, you can utilize applications like Stable Diffusion to create quality images. This card strikes a balance between performance and affordability, making it an excellent choice for many users. Edited with my Local LLM running with the 3060, because English is my second language 😉
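As a rough rule of thumb (my own estimate, not from the video): a 4-bit quantized model needs about half a byte per parameter plus 1-2 GB for context/KV cache and overhead. An 8B model like llama3 therefore lands around 8 × 0.5 ≈ 4 GB plus overhead ≈ 5-6 GB, which fits comfortably in the 3060's 12 GB, while a 70B model at roughly 35 GB+ would spill into system RAM and slow to a crawl.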
@SyamsQbattar 6 months ago
Is LM Studio better than Ollama?
@camsand6109 6 months ago
No, but it's a good option.
@SyamsQbattar 6 months ago
@camsand6109 Then Ollama is better?
@CyberSonic157 17 hours ago
@camsand6109 Why is Ollama better?
@nobody-P 7 months ago
😮I'm gonna try this now
@klovvin 6 months ago
This would be better content if done by an AI
@thiesenf 5 months ago
At least we got the usual extremely boring stock videos as B-roll... *sigh*...