Indeed, as the second would reduce views of the first, so it wasn't for self-benefit to upload two.
@RamkumarL-o9z · 6 months ago
Interesting tutorial with Web UI and Ollama, Thanks!!!
@AleksandarT10 · 6 months ago
Great one Dan! Keep us updated on the AI stuff!
@abrahammonteza · 2 months ago
Excellent explanation!!! Simple and straight to the point, as we say here in my country.
@aonangthailandboattours4757 · 2 months ago
Careful with Open WebUI. Really, it's malicious. There are better ones: GPT4All, Follamac, Alpaca.
@Fayaz-Rehman · a month ago
Five Stars ***** - Thanks for sharing.
@khalildureidy · 6 months ago
Big thanks from Palestine
@ilkou · 6 months ago
❤💚🖤
@elhadjibrahimabalde1234 · 5 months ago
be safe
@kashifmanzoor7949 · 5 months ago
Stay strong
@sarbabali · 2 months ago
Stay safe from evil zionists.
@fokyewtoob8835 · a month ago
Love from USA free Palestine
@hoomtal · 4 days ago
Useful 👌🏼
@user-yz2ct2sy3l · 21 hours ago
Thx a lot sir.
@quarteratom · 2 months ago
Which program stores the local user data, Ollama or Open WebUI? Data like improvements to the model and chat history. How do multiple users work, and which program handles that? Can different users access other users' data? Does one user "improving" the model affect other users' conversations? How can you completely reset the whole environment?
@borntobomb · 5 months ago
Note for 405B: We are releasing multiple versions of the 405B model to accommodate its large size and facilitate multiple deployment options:
- MP16 (Model Parallel 16) is the full version of BF16 weights. These weights can only be served on multiple nodes using pipelined parallel inference. At minimum it would need 2 nodes of 8 GPUs to serve.
- MP8 (Model Parallel 8) is also the full version of BF16 weights, but can be served on a single node with 8 GPUs using dynamic FP8 (Floating Point 8) quantization. We are providing reference code for it. You can download these weights and experiment with quantization techniques beyond what we are providing.
- FP8 (Floating Point 8) is a quantized version of the weights. These weights can be served on a single node with 8 GPUs using static FP8 quantization. We have provided reference code for it as well.
The 405B model requires significant storage and computational resources, occupying approximately 750 GB of disk storage space and necessitating two nodes on MP16 for inferencing.
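For rough context, those sizes follow from parameter count times bytes per weight. A back-of-envelope sketch, weights only, ignoring KV cache and activations (the 8x80 GB node capacity is an assumption based on common A100/H100 configurations):

```bash
# Weights-only estimates for a 405B-parameter model.
# BF16 at 2 bytes/weight: ~810 GB, more than one 8x80 GB node (640 GB), hence 2 nodes for MP16.
# FP8 at 1 byte/weight:   ~405 GB, fits on a single 8x80 GB node.
echo "BF16: $((405 * 2)) GB, FP8: 405 GB (approximate, weights only)"
```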
@AlexSmile-y2x · 5 months ago
And what about 70B? How could it be served? Could some version of Llama 3.1 be used on a simple 16-core laptop with an integrated GPU and 32 GB of RAM?
@isaac10231 · 5 months ago
When you say "we", do you work for Meta?
@borntobomb · 3 months ago
@isaac10231 I'm reprinting from the release notes. Understand?
@bause6182 · 6 months ago
Ollama should integrate a feature like Artifacts that allows you to test your HTML/CSS code in a mini webview.
@aonangthailandboattours4757 · 2 months ago
You should integrate a monthly 1000 dollar payment into my bank account... that's a good idea too. I'm afraid LLMs are just the way of inputting and outputting; it's other applications, software, and hardware that do stuff like that, i.e. a browser to display CSS. The web UI and LLMs use Markdown, not HTML, so they cannot do things like YouTube embeds. Besides, F12 in most browsers will give you that anyway.
@CortezLabs · a month ago
Thank you
@kasirbarati3336 · a month ago
Loved this 🤩😍
@elhadjibrahimabalde1234 · 5 months ago
Hello. After installing Open WebUI, I am unable to find Ollama under 'Select a model'. Is this due to a specific configuration? For information, my system is running Ubuntu 24.04.
@SayemHasnat-e4h · 5 months ago
How can I connect my local llama3 with the WebUI? My WebUI couldn't find the locally running llama3.
@MURD3R3D · 5 months ago
same problem
@MURD3R3D · 5 months ago
From the home page of your WebUI (localhost:3000 in your browser), click on your account name in the lower left, then click Settings, then "Models". There you can pull llama3.1 by typing it in the "pull" box and clicking the download button. When it completes, close WebUI and reopen it. Then I had the option to select the 3.1 8B model from the models list.
@SayemHasnat-e4h · 5 months ago
@MURD3R3D I found that happens due to Docker networking.
@manojkl1323 · 4 months ago
I faced a similar problem. Restarting the system, starting Ollama, then starting Docker Desktop and the container solved the issue for me.
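For anyone hitting this: a minimal sketch of the commonly cited fix, assuming Ollama runs on the host and Open WebUI runs in Docker (flags follow Open WebUI's quick-start; check the current docs for your setup):

```bash
# Map the host into the container so Open WebUI can reach Ollama on port 11434.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

Without the extra host mapping, "localhost" inside the container points at the container itself, not at the machine running Ollama, which is why the model list can come up empty.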
@vrynstudios · 5 months ago
A perfect tutorial.
@KennylexLuckless · a month ago
In the beginning you asked "why" use a local LLM; I think you forgot one: offline use. I sometimes take my laptop to places where I have no WiFi, or where I don't think the WiFi is secure, but I still want to use an LLM to analyze text and scripts.
@nikunjdhawan1 · a month ago
Very helpful
@lwjunior2 · 6 months ago
This is great. Thank you
@billblair3155 · 4 months ago
Good stuff Big Dawg!
@MrI8igmac · 3 months ago
I have spent all morning trying to get up and running. I can get Ollama running, and also Open WebUI on port 3000, but there are no models in the web UI.
@DanVega · 3 months ago
If you've got Ollama installed you need to install a model. What happens if you run ollama list?
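As a reference, a minimal terminal session for that step (the model name is just an example):

```bash
# Download a model, verify Ollama can see it, then try a prompt.
ollama pull llama3.1
ollama list
ollama run llama3.1 "Hello"
```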
@mikeyz8775 · 3 months ago
@DanVega
deepseek-coder-v2:latest   63fb193b3a9b   8.9 GB   2 hours ago
llama3.1:latest            42182419e950   4.7 GB   6 hours ago
@mikeyz8775 · 3 months ago
This is my desktop.
@MrI8igmac · 3 months ago
@DanVega ollama list shows: deepseek-coder-v2 (id 63fb), llama3.1:latest (id 4218).
@rockylau4267 · 4 months ago
Thanks Dan, good video! It runs so smoothly. Sorry, I am a new subscriber. I want to know your computer hardware for reference. Many thanks!!
@landsman737 · a month ago
Very nice
@abhinaysingh1420 · 4 months ago
This is really helpful.
@majithg · 21 hours ago
Are image-to-video generator AI models available in Ollama?
@je2587 · 5 months ago
Love your terminal, which tools do you use to customize it?
@expire5050 · 5 months ago
Finally set up Open WebUI thanks to you. I'd approached it before, seen "Docker", and left it on my todo list for weeks/months. I'm running Gemma 2 2B on my GTX 1060 with 6 GB VRAM. Any suggestions on good models for my size?
@Peter-x29 · 2 months ago
How did you connect to the API?!
@quanbuiinh604 · 3 months ago
Hello, thank you for your video. Could you please let me know if I can use Llama 3.1 on my laptop, which only has an NVIDIA GeForce MX330?
@dsmith004 · a month ago
I am running llama3.1 on my Alien R17 without issue.
@haidersyed6554 · 5 days ago
How can we access its API programmatically?
@DanVega · 4 days ago
If you're using Java you can use Spring AI to talk to Ollama.
@chameleon_bp · 6 months ago
Dan, what are the specs of your local machine?
@termino2184 · 4 months ago
Does Open WebUI support creating an API endpoint for AI models, or is it just a chat UI? Does it expose the models as a RESTful API?
@transmeta01 · a month ago
No, but Ollama does. See the docs.
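For illustration, a minimal sketch against Ollama's local REST API, which listens on localhost:11434 by default (the model name is an example; see Ollama's API docs for the full surface):

```bash
# Request a single, non-streamed completion from a locally installed model.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```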
@mochammadrevaldi1790 · 5 months ago
In Ollama, is there an admin dashboard for tuning the model, sir?
@vactum0 · 5 months ago
My Ollama running the same model is dead slow, on an 11th-gen i5 laptop with 26 GB RAM and no GPU. Is it because there's no dedicated GPU?
@MarincaGheorghe · 3 months ago
No GPU is a deal breaker for perf.
@Marssonde1 · 3 months ago
Despite my model being listed with ollama list, it unfortunately doesn't show up in the WebUI as an option. Not sure what to do, since I am not skilled in such things.
@abubakkarsiddique13 · 4 months ago
Hey, it's nice. Can you list all the specs of your machine for running an 8B/9B model?
@BoredThatsWhy · a day ago
How do I add DeepSeek to localhost?
@CleoCat75 · a month ago
I installed this under WSL on Windows 11 and it's really slow. Is it because it's under WSL and not native on my Windows box?! I have a 3080 Ti GPU and an i9 processor, and yours is MUCH faster than mine.
@JREGANZOCLIPS · a month ago
Hello! Which software is used to make this video? Thanks in advance.
@carlp4954 · 4 months ago
Do you mind telling us what your MacBook specs are?
@trapez_yt · 5 months ago
Hey, could you make a video on how to edit the login page? I want to make the login page to my liking.
@aonangthailandboattours4757 · 2 months ago
Ask your LLM to restyle it for you... same as when you want to know the time: you don't ask your friend, you look at your phone.
@MarvinEstrada-q5l · 3 days ago
Hey Dan, really like it. I created my own chat and it works really cool. I told my son about my own chat and he said, "Oh wow, that's cool." If you can explain the installation of the components step by step it will be easy to follow; I looked at other videos to follow the installation, otherwise I would be lost.
@GlobalTraveler-c9c · a month ago
How does one install Web UI?!!! I went to the link and see these .dll files. Please advise. Thanks.
@zo7lef · 6 months ago
Would you make a video on how to integrate Llama 3 into a WordPress website, making a chatbot or copilot?
@Enki-AI · 4 months ago
Hey Dan, can you help me out? I have an issue I can't figure out. I used to host the Ollama web UI locally and online on a server, but I'm not sure why it's not working anymore.
@abiolambamalu7061 · 4 months ago
Thanks so, so much for this; I'd been struggling with it for so long. I usually have this problem where it's really slow, and if I try to reference a document like you did, it just keeps loading and never responds. I did everything you did except that I use the Phi model instead of Llama 3.1. Could this be the reason? Thanks in advance 😊
@NikolaiMhishi · 6 months ago
Bro you the G
@stoicguac9030 · 6 months ago
Is WebUI a replacement for aider?
@meshuggah24 · 4 months ago
Is it normal for Docker to take up 15 GB of RAM on your machine?
@DrMacabre · 5 months ago
Hello, any idea how to set keep_alive when running the Windows exe?
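A sketch of the two commonly documented options, assuming a recent Ollama build (names per Ollama's FAQ; verify against the current docs): keep_alive can be passed per request, or set globally via the OLLAMA_KEEP_ALIVE environment variable before launching Ollama.

```bash
# Per-request: ask Ollama to keep the model loaded for 30 minutes after this call.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "hi",
  "keep_alive": "30m"
}'
# Global alternative on Windows (PowerShell), set before starting Ollama:
#   $env:OLLAMA_KEEP_ALIVE = "30m"
```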
@jaroslavsedlacek7077 · 6 months ago
Is there an integration for Open WebUI + Spring AI?
@vikas-jz3tv · 5 months ago
How can we tune a model with custom data?
@kelthekonqrr · 4 months ago
Is it possible to build it out to monetize?
@AliHelmi-GET · 5 months ago
Thank you, I tried it but it is very slow, running it on a laptop with 16 GB RAM!
@kevinladd2583 · 3 days ago
If you don't have a GPU it will be very slow; try a smaller model, like 8B or smaller.
@Statvar · 6 days ago
Kinda dumb that you have to make an account for Docker.
@9A4GEMilan · a month ago
But it is for Linux, and I am searching for Windows. Darn!
@chucky_genz · 3 months ago
Talk too much.
@rh3dstroke · 17 days ago
😁👍👏🏼👏🏼👏🏼🦿🦾🧠
@selub1058 · a month ago
You skipped the configuration of WebUI. It's unfair. 😢 Excellent video, but without this important thing it will not work. 👎