Thanks Brandon, nice review of what’s out there for local LLMs
@SteheveRodriguez 7 months ago
It's a great idea, thanks Brandon. I will test it on my homelab.
@fermatdad 7 months ago
Thank you for the helpful tutorial.
@romayojr 7 months ago
This is awesome and I can't wait to try it. Is there a mobile app for Open WebUI?
@jjaard 7 months ago
I suspect technically it can easily run via any browser
@mjes911 6 months ago
How many concurrent users can this support for business cases?
@trucpham9772 7 months ago
How do I run llama3 with Ollama on macOS? I want to make the localhost server publicly reachable so I can use NextChat with it. Can you share the commands for this setup?
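A rough sketch of what that setup usually involves, assuming a default Ollama install: pull the model (`ollama pull llama3`), start the server with `OLLAMA_HOST=0.0.0.0` so it listens beyond localhost, and then point a client at Ollama's OpenAI-compatible endpoint. The Python below only illustrates the client side; the URL, port 11434, and model name `llama3` are Ollama defaults, not anything specific from the video.

```python
# Minimal client sketch (assumptions: `ollama pull llama3` already run, and the
# server started with: OLLAMA_HOST=0.0.0.0 ollama serve).
# Ollama exposes an OpenAI-compatible endpoint on port 11434 by default; a
# frontend like NextChat can be pointed at the same base URL used here.
import requests

BASE_URL = "http://localhost:11434/v1"  # use the Mac's LAN IP instead of localhost for remote clients

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```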
@kderectorful 6 months ago
I am accessing the OpenAI-compatible server from a Mac, and my guess is that the netsh command applies to the Windows workstation you are accessing the server from. Is there a similar command that would need to be run, or if I do this on my Linux server via Firefox, will I still have the same issue? I cannot seem to get llama3:latest installed for Open WebUI. Any insight would be greatly appreciated, as this was the most concise video I have seen on the topic.
@LibyaAi 7 months ago
Nobody explained how to install Ollama and run it the proper way; it should be step by step. Is it important to install Docker before Ollama? I tried to install Ollama alone and it didn't install completely!! I don't know why.
@kironlau 7 months ago
1. You should mention what your OS is. 2. Read the official documentation. 3. If you run on Windows, just download the exe/msi file and install it with one click (and click Yes...).
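If the worry is whether the install actually completed, one quick check that works the same on Windows, macOS, or Linux is to query Ollama's local API. A small sketch, assuming the default port 11434 and that the Ollama service or desktop app is running:

```python
# Quick health check for a local Ollama install (assumes the default port 11434).
# /api/tags lists the models that have been pulled; an empty list means the
# server is up but no model has been downloaded yet (e.g. run `ollama pull llama3`).
import requests

try:
    resp = requests.get("http://localhost:11434/api/tags", timeout=5)
    resp.raise_for_status()
    models = [m["name"] for m in resp.json().get("models", [])]
    print("Ollama is running. Installed models:", models or "none yet")
except requests.ConnectionError:
    print("Ollama is not reachable; is `ollama serve` (or the desktop app) running?")
```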
@TheTricou 5 months ago
So this "guide" is missing some key things, like how to change the IP for WSL and how to run Ollama as a service. Even his written guide doesn't explain how to do this.
@thiesenf 5 months ago
GPT4All is another good locally running chat interface... it can run on both the CPU and on the GPU using Vulkan...
@totallyperfectworld 15 days ago
How many CUDA cores do you need to run this without getting frustrated? I know, the more the better. But what really makes sense? Just trying to find out what graphics card I should get without busting my bank account…
@r3furbish3dbrain12 a day ago
The Nvidia RTX 3060 is currently the best option for those looking to explore local LLMs without breaking the bank. It offers 12GB of VRAM and a solid amount of CUDA cores, making it capable of running numerous models effectively. If you later decide to invest in a more powerful GPU, the RTX 3060 can still complement the new GPU, ensuring it remains useful. In my opinion, if you're uncertain about your needs, it's wise to avoid spending on high-end Nvidia models. I made that mistake three months ago by almost purchasing a 4090, only to realize that local LLMs are primarily effective for specific tasks, such as average-quality code assistance. The technology is advancing rapidly, but we still lag behind major players. Additionally, with the RTX 3060, you can utilize applications like Stable Diffusion to create quality images. This card strikes a balance between performance and affordability, making it an excellent choice for many users. Edited with my Local LLM running with the 3060, because English is my second language 😉
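For the CUDA-core question above, VRAM is usually the limiting factor before core count. A rough back-of-the-envelope estimate (an approximation that ignores long-context KV-cache growth) is parameters times bits-per-weight divided by eight, plus some overhead:

```python
# Rough VRAM estimate for a quantized model: parameters * bits-per-weight / 8,
# plus a loose ~20% allowance for activations and KV cache. Ballpark only.
def approx_vram_gb(params_billion: float, bits_per_weight: float, overhead: float = 0.2) -> float:
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb * (1 + overhead)

for name, params, bits in [("8B @ 4-bit", 8, 4), ("8B @ 8-bit", 8, 8), ("70B @ 4-bit", 70, 4)]:
    print(f"{name}: ~{approx_vram_gb(params, bits):.1f} GB VRAM")
```

By that estimate an 8B model at 4- to 8-bit quantization fits comfortably in a 12 GB card like the 3060, which lines up with the recommendation above.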
@SyamsQbattar 6 months ago
Is LM Studio better than Ollama?
@camsand6109 6 months ago
No, but it's a good option.
@SyamsQbattar 6 months ago
@@camsand6109 So Ollama is better, then?
@CyberSonic157 17 hours ago
@@camsand6109 Why is Ollama better?
@nobody-P 7 months ago
😮I'm gonna try this now
@klovvin 6 months ago
This would be better content if done by an AI
@thiesenf 5 months ago
At least we got the usual, extremely boring stock videos as B-roll... *sigh*...