Getting Started with Ollama and Web UI

84,504 views

Dan Vega

1 day ago

Comments: 92
@hfislwpa 6 months ago
2 videos in 1 day? Woah! Thanks
@aonangthailandboattours4757 2 months ago
Indeed, and since the second video will reduce views of the first, uploading two wasn't for self-benefit.
@RamkumarL-o9z 6 months ago
Interesting tutorial with Web UI and Ollama, thanks!!!
@AleksandarT10 6 months ago
Great one Dan! Keep us updated on the AI stuff!
@abrahammonteza 2 months ago
Excellent explanation!!!!! Simple and straight to the vein, as they say here in my country.
@aonangthailandboattours4757 2 months ago
Careful with Open WebUI. Really, it's malicious. There are better ones: GPT4All, Follamac, Alpaca.
@Fayaz-Rehman 1 month ago
Five Stars ***** - Thanks for sharing.
@khalildureidy 6 months ago
Big thanks from Palestine
@ilkou 6 months ago
❤💚🖤
@elhadjibrahimabalde1234 5 months ago
be safe
@kashifmanzoor7949 5 months ago
Stay strong
@sarbabali 2 months ago
Stay safe from evil zionists
@fokyewtoob8835 1 month ago
Love from USA free Palestine
@hoomtal 4 days ago
Useful 👌🏼
@user-yz2ct2sy3l 21 hours ago
Thx a lot sir.
@quarteratom 2 months ago
Which program stores the local user data? Ollama or Web UI? Data like improvements to the model, chat history. How do multiple users work, which program does that? Can different users access other users' data? Does 1 user "improving" the model affect other users' conversations? How can you completely reset all the environment?
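Roughly, as I understand the two projects: Ollama stores the model weights themselves (under ~/.ollama), while Open WebUI keeps the per-user data (accounts, chat history, settings) in its own data directory, the open-webui Docker volume in the documented setup, and users cannot read each other's chats. Chatting does not retrain the shared model weights. A sketch of a full reset, assuming the default names from the docs:

    ollama rm llama3.1            # remove a downloaded model from Ollama
    docker rm -f open-webui       # stop and remove the Open WebUI container
    docker volume rm open-webui   # wipe Open WebUI accounts and chat history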
@borntobomb 5 months ago
Note for 405B: We are releasing multiple versions of the 405B model to accommodate its large size and facilitate multiple deployment options:

MP16 (Model Parallel 16) is the full version of BF16 weights. These weights can only be served on multiple nodes using pipelined parallel inference. At minimum it would need 2 nodes of 8 GPUs to serve.

MP8 (Model Parallel 8) is also the full version of BF16 weights, but can be served on a single node with 8 GPUs by using dynamic FP8 (Floating Point 8) quantization. We are providing reference code for it. You can download these weights and experiment with different quantization techniques outside of what we are providing.

FP8 (Floating Point 8) is a quantized version of the weights. These weights can be served on a single node with 8 GPUs by using static FP8 quantization. We have provided reference code for it as well.

The 405B model requires significant storage and computational resources, occupying approximately 750GB of disk storage space and necessitating two nodes on MP16 for inferencing.
@AlexSmile-y2x 5 months ago
And what about 70B? How could it be served? Could one of the Llama 3.1 models be used on a simple 16-core laptop with an integrated GPU and 32GB of RAM?
@isaac10231 5 months ago
When you say "we", do you work for Meta?
@borntobomb 3 months ago
@isaac10231 I'm reprinting from the release notes. Understand?
@bause6182 6 months ago
Ollama should integrate a feature like Artifacts that allows you to test your HTML/CSS code in a mini webview
@aonangthailandboattours4757 2 months ago
You should integrate a monthly 1000 dollar payment into my bank account... that's a good idea too. I'm afraid LLMs only handle input and output; it's other applications, software, and hardware that do stuff like that, i.e. a browser to display CSS. The web UI and LLMs use Markdown, not HTML, so they cannot do stuff like YouTube embeds. Besides, F12 on most browsers will give you that anyway.
@CortezLabs 1 month ago
Thank you
@kasirbarati3336 1 month ago
Loved this 🤩😍
@elhadjibrahimabalde1234 5 months ago
Hello. After installing Open WebUI, I am unable to find Ollama under 'Select a Model'. Is this due to a specific configuration? For information, my system is running Ubuntu 24.04.
@SayemHasnat-e4h 5 months ago
How can I connect my local ollama3 with the WebUI? My WebUI couldn't find the locally running ollama3.
@MURD3R3D 5 months ago
same problem
@MURD3R3D 5 months ago
From the home page of your WebUI (localhost:3000 in your browser), click on your account name in the lower left, then click Settings, then "Models". There you can pull llama3.1 by typing it in the "pull" box and clicking the download button. When it completes, close the WebUI and reopen it. Then I had the option to select 3.1 8B from the models list.
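If you prefer the terminal, the same pull can be done with the Ollama CLI; a minimal sketch, assuming Ollama is installed and on your PATH:

    ollama pull llama3.1   # download the model from the Ollama library
    ollama list            # verify it appears, then refresh Open WebUI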
@SayemHasnat-e4h 5 months ago
@MURD3R3D I found that this happens due to Docker networking.
@manojkl1323 4 months ago
I faced a similar problem. Restarting the system, starting Ollama, and starting Docker Desktop and the container solved the issue for me.
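For anyone hitting the same Docker networking issue: the Open WebUI docs suggest starting the container so it can reach an Ollama server running on the host; a sketch of that documented command (flags may differ between releases):

    docker run -d -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data \
      --name open-webui \
      ghcr.io/open-webui/open-webui:main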
@vrynstudios 5 months ago
A perfect tutorial.
@KennylexLuckless 1 month ago
In the beginning you asked "why" use a local LLM; I think you forgot "no online connectivity". I sometimes take my laptop to places where I have no WiFi or don't think the WiFi is secure, but I still want to use an LLM to analyze text and scripts.
@nikunjdhawan1 1 month ago
Very helpful
@lwjunior2 6 months ago
This is great. Thank you
@billblair3155 4 months ago
Good stuff Big Dawg!
@MrI8igmac 3 months ago
I have spent all morning trying to get up and running. I can get Ollama running and also Open WebUI on port 3000, but there are no models in the web UI.
@DanVega 3 months ago
If you've got Ollama installed you need to install a model. What happens if you run ollama list?
@mikeyz8775 3 months ago
@DanVega
deepseek-coder-v2:latest    63fb193b3a9b    8.9 GB    2 hours ago
llama3.1:latest             42182419e950    4.7 GB    6 hours ago
@mikeyz8775 3 months ago
This is my desktop.
@MrI8igmac 3 months ago
@DanVega ollama list shows deepseek-coder-v2 (id: 63fb) and llama3.1:latest (id: 4218).
@rockylau4267 4 months ago
Thanks Dan, good video! It runs so smoothly. Sorry, I am a new subscriber; I'd like to know your computer hardware for my reference. Many thanks!!
@landsman737 1 month ago
Very nice
@abhinaysingh1420 4 months ago
This is really helpful
@majithg 21 hours ago
Are image-to-video generator AI models available on Ollama?
@je2587 5 months ago
Love your terminal! Which tools do you use to customize it?
@expire5050 5 months ago
Finally set up Open WebUI thanks to you. I'd approached it before, seen "Docker", and left it on my todo list for weeks/months. I'm running Gemma 2 2B on my GTX 1060 with 6GB VRAM. Any suggestions on good models for my size?
@Peter-x29 2 months ago
How did you connect to the API?!
@quanbuiinh604 3 months ago
Hello, thank you for your video. Could you please let me know if I can use Llama 3.1 on my laptop, which only has an NVIDIA GeForce MX330?
@dsmith004 1 month ago
I am running llama3.1 on my Alien R17 without issue.
@haidersyed6554 5 days ago
How can we access its API programmatically?
@DanVega 4 days ago
If you're using Java you can use Spring AI to talk to Ollama.
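For anyone curious what that looks like, here is a minimal sketch of a Spring Boot endpoint backed by a local Ollama model. It assumes the spring-ai-ollama-spring-boot-starter dependency, a model configured in application.properties (e.g. spring.ai.ollama.chat.options.model=llama3.1), and Ollama running on its default port 11434:

    import org.springframework.ai.chat.client.ChatClient;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    class OllamaChatController {

        private final ChatClient chatClient;

        // Spring AI auto-configures a ChatClient.Builder for the Ollama starter
        OllamaChatController(ChatClient.Builder builder) {
            this.chatClient = builder.build();
        }

        @GetMapping("/chat")
        String chat(@RequestParam String message) {
            // Send the prompt to the local model and return the reply text
            return chatClient.prompt()
                    .user(message)
                    .call()
                    .content();
        }
    }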
@chameleon_bp 6 months ago
Dan, what are the specs of your local machine?
@termino2184 4 months ago
Does Open WebUI support creating an API endpoint for AI models, or is it just a chat UI? Does it expose the models as a RESTful API?
@transmeta01 1 month ago
No, but Ollama does. See the docs.
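Concretely, Ollama listens on port 11434 by default and exposes REST endpoints described in its API docs; a quick sketch (the model name is whatever you've pulled):

    curl http://localhost:11434/api/generate -d '{
      "model": "llama3.1",
      "prompt": "Why is the sky blue?"
    }'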
@mochammadrevaldi1790 5 months ago
In Ollama, is there an admin dashboard for tuning the model, sir?
@vactum0 5 months ago
My Ollama running the same model is dead slow, on an 11th-gen i5 laptop with 26GB of RAM and no GPU. Is it because there's no dedicated GPU?
@MarincaGheorghe 3 months ago
No GPU is a deal breaker for perf.
@Marssonde1 3 months ago
Despite my model being listed by ollama list, it unfortunately doesn't show up in the WebUI as an option. Not sure what to do, since I am not skilled in such things.
@abubakkarsiddique13 4 months ago
Hey, it's nice! Can you list all the specs of your machine for running an 8B/9B model?
@BoredThatsWhy 1 day ago
How do I add DeepSeek to localhost?
@CleoCat75 1 month ago
I installed this under WSL on Windows 11 and it's really slow. Is it because it's under WSL and not native on my Windows box?! I have a 3080 Ti GPU and an i9 processor, and yours is MUCH faster than mine.
@JREGANZOCLIPS 1 month ago
Hello! Which software is used to make this video? Thanks in advance.
@carlp4954 4 months ago
Do you mind telling us what your MacBook specs are?
@trapez_yt 5 months ago
Hey, could you make a video on how to edit the login page? I want to make the login page to my liking.
@aonangthailandboattours4757 2 months ago
Ask your LLM to restyle it for you... same as when you want to know the time: you don't ask your friend, you look at your phone.
@MarvinEstrada-q5l 3 days ago
Hey Dan, really like it. I created my own chat and it works really cool. I told my son about my own chat and he said "Oh wow, that's cool". If you could explain the installation of the components step by step it would be easy to follow; I looked at other videos to follow the installation, otherwise I would be lost.
@GlobalTraveler-c9c 1 month ago
How does one install Web UI?!!! Went to the link and saw these .dll files. Please advise. Thanks.
@zo7lef 6 months ago
Would you make a video on how to integrate Llama 3 into a WordPress website, making a chatbot or copilot?
@Enki-AI 4 months ago
Hey Dan, can you help me out? I have an issue I can't figure out. I used to host the Ollama WebUI locally and online on a server, but I'm not sure why it's not working anymore.
@abiolambamalu7061 4 months ago
Thanks so, so much for this; I'd been struggling with it for so long. I usually have this problem where it's really slow, and if I try to reference a document like you did, it just keeps loading and never responds. I did everything you did except that I use the Phi model instead of Llama 3.1. Could this be the reason? Thanks in advance 😊
@NikolaiMhishi 6 months ago
Bro you the G
@stoicguac9030 6 months ago
Is WebUI a replacement for aider?
@meshuggah24 4 months ago
Is it normal for Docker to take up 15GB of RAM on your machine?
@DrMacabre 5 months ago
Hello, any idea how to set keep_alive when running the Windows exe?
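One option the Ollama FAQ documents (worth double-checking against the current docs) is the OLLAMA_KEEP_ALIVE environment variable, set before the server starts, e.g. in a Windows command prompt:

    set OLLAMA_KEEP_ALIVE=24h
    ollama serve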
@jaroslavsedlacek7077 6 months ago
Is there an integration for Open WebUI + Spring AI?
@vikas-jz3tv 5 months ago
How can we tune a model with custom data?
@kelthekonqrr 4 months ago
Is it possible to build it out to monetize?
@AliHelmi-GET 5 months ago
Thank you, I tried it but it is very slow, running it on a laptop with 16GB RAM!
@kevinladd2583 3 days ago
If you don't have a GPU it will be very slow; try a smaller model, like 8B or smaller.
@Statvar 6 days ago
Kinda dumb that you have to make an account for Docker
@9A4GEMilan 1 month ago
But it is for Linux, and I am searching for Windows. Darn!
@chucky_genz 3 months ago
Talks too much
@rh3dstroke 17 days ago
😁👍👏🏼👏🏼👏🏼🦿🦾🧠
@selub1058 1 month ago
You skipped the configuration of the WebUI. It's unfair. 😢 Excellent video, but without this important thing it will not work. 👎
@betterwithmaul 4 months ago
Finally my GPU has a task other than gaming
@shuangg 6 months ago
6 months behind everyone else.
@MAKU011111 1 month ago
Where do I get spring-boot-reference.pdf?