ULTIMATE Llama 3 UI: Dive into Open WebUI & Ollama!

9,897 views

AI DevBytes

A day ago

Ollama + Llama 3 + Open WebUI: In this video, we walk you through, step by step, how to set up Open WebUI on your computer to host Ollama models.
🚀 What You'll Learn:
* How to install Docker
* Set up Open WebUI with Docker
* Basics of using Open WebUI
* Pull down new Ollama models using Open WebUI
Chapters:
00:00:00 - Intro
00:00:30 - Installing Docker
00:02:13 - Installing Open WebUI
00:05:15 - Setting up and Reviewing Open WebUI
00:11:28 - Pulling Ollama models with Open WebUI
🔗 Docker
www.docker.com/products/docke...
Docker Open WebUI Command:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
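After running the command, you can confirm the container came up and then open the UI in your browser. These are generic Docker checks (not quoted from the video), assuming the port mapping from the command above:
docker ps --filter name=open-webui
If the container is listed as running, browse to http://localhost:3000; the -p 3000:8080 flag maps the container's internal port 8080 to port 3000 on your machine.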
Ollama Models
ollama.com/library
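Models from the library can also be pulled from a terminal with the Ollama CLI instead of through Open WebUI (the video covers the UI route; the commands below are just the standard CLI equivalent, using the library's model:tag names):
ollama pull llama3
ollama pull llama3:70b-instruct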
Open WebUI Docs
docs.openwebui.com/getting-st...
📺 Other Videos you might like:
🧑‍💻 PART 2 - ULTIMATE Llama 3 UI: Chat with Docs | Open WebUI & Ollama!: • ULTIMATE Llama 3 UI: C...
🧑‍💻 MACS OLLAMA SETUP - How To Run UNCENSORED AI Models on Mac (M1/M2/M3): • OLLAMA | Want To Run U...
🪟 WINDOWS OLLAMA SETUP - Run FREE Local UNCENSORED AI Models on Windows with Ollama: • OLLAMA | Want To Run U...
📄 Create your own CUSTOMIZED Llama 3 model using Ollama • Create your own CUSTOM...
🖼️ Ollama & LLava | Build a FREE Image Analyzer Chatbot Using Ollama, LLava & Streamlit! • Mastering AI Vision Ch...
🤖 Streamlit & Ollama | How to Build a Local UNCENSORED AI Chatbot: • Streamlit & Ollama | H...
🧑‍💻 My MacBook Pro Specs:
Apple MacBook Pro M3 Max
14-Core CPU
30-Core GPU
36GB Unified Memory
1TB SSD Storage
_____________________________________
🔔 Subscribe to our channel (@aidevbytes) for more tutorials and coding tips
👍 Like this video if you found it helpful!
💬 Share your thoughts and questions in the comments section below!
GitHub: github.com/AIDevBytes
🏆 My Goals for the Channel 🏆
_____________________________________
My goal for this channel is to share the knowledge I have gained over 20+ years in the field of technology in an easy-to-consume way. My focus will be on offering tutorials related to cloud technology, development, generative AI, and security-related topics.
I'm also considering expanding my content to include short videos focused on tech career advice, particularly aimed at individuals aspiring to enter "Big Tech." Drawing from my experiences as both an individual contributor and a manager at Amazon Web Services, where I currently work, I aim to share insights and guidance to help others navigate their career paths in the tech industry.
_____________________________________

Comments: 45
@VjRocker007 (a month ago)
You're a legend, thank you for posting these how-to videos. Keep it up. For a person who doesn't have time to go through all of these docs, these have been incredibly helpful. Appreciate all of your hard work!!
@proterotype (a month ago)
Dude you’re becoming one of my go-to guys for this stuff
@GetzAI (a month ago)
Agree, very happy I found his channel.
@gewoonrescalt4162 (a month ago)
Your videos are really good, ngl. You helped me a lot with some things, thank you.
@stanleykwong6438 (a month ago)
I've been trying to get my own models set up, and your series of videos has been exceptionally clear. Please post more videos on in-depth use of Open WebUI, and perhaps on the following, so that non-technical people like me can implement it: 1. How to set up and allow other computers with WebUI installed to access the computer that has Ollama installed on the same network. 2. How to set up "My Modelfiles" within WebUI, specifically how the Content section of the Modelfile is compiled, and how to incorporate other parameters beyond temperature. Exceptional work! I will spread the word with my team. Keep it up.
@AIDevBytes (29 days ago)
I will look at creating separate videos on those two topics. Thanks for the feedback.
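In the meantime, for anyone curious about point 2, here is a minimal Modelfile sketch based on Ollama's standard Modelfile syntax. The model name, system prompt, and parameter values below are placeholders, not something taken from the video:
# hypothetical Modelfile - all values are illustrative
FROM llama3
SYSTEM "You are a concise assistant for my team."
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
PARAMETER top_p 0.9
You would then build it with something like: ollama create my-custom-llama3 -f ./Modelfile, and the resulting model shows up in Open WebUI's model list.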
@victor_ndambani (a month ago)
Thank you man, you now have a new sub.
@anthanh1921 (a month ago)
Thanks a lot ❤
@GetzAI (a month ago)
This is great, thanks. How about models on external drives?
@AIDevBytes (a month ago)
Are you asking if you can store the models on an external drive and use them with Ollama or Open WebUI? Just want to make sure I'm following your question.
@GetzAI (a month ago)
@AIDevBytes Yes. Also, I tried to download with 'ollama run llama3:70b-instruct' and it gets to 100%, but it never appears in the list.
@AIDevBytes (a month ago)
There is no configuration in Ollama to use models on external drives. Also, the performance would probably be very bad even if you could, because the model would need to transfer from the external disk to your GPU each time it needed to load. If you are downloading the 70B parameter model and it's not running, you probably don't have the GPU power to run it. Check out this tool to see if you have the GPU power to run a 70B model: huggingface.co/spaces/Vokturz/can-it-run-llm. I have a pretty good computer, but the largest model I can run is 40B.
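A quick, generic way to check whether the 70B download actually registered (not something shown in the video) is to list your local models from a terminal:
ollama list
If llama3:70b-instruct is missing from the output, the pull likely failed or was interrupted; re-running ollama pull llama3:70b-instruct should resume the download.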
@GetzAI (a month ago)
@AIDevBytes The 70B isn't even showing. I presume I can't run it on my old MBP, but I was going to give it a go :) Loading from the external drive would only happen once each time you run it, yes? This is great information, as I will have to be sure my new Mac has the space. I did use that tool, but on my old Intel MBP it just redirected me to the Apple Store LOL. Would love to hear your opinion in a video on the new M4 Studios coming out soon.
@AIDevBytes (a month ago)
@GetzAI Correct, an Intel MBP cannot run these models because of the lack of GPU computing power. You really need an Apple Silicon chip to run these models on Macs. I am running an M3 Max with 36 GB of RAM. I'll have to check out the M4 Studios.
@stephenzzz (a month ago)
A question, if you don't mind. I want to have my sales information content incorporated behind a chat/RAG setup to answer questions from that content. Which low-code system out there do you think would work best? Ideally, the next part will be to access this via a membership website.
@TokyoNeko8 (a month ago)
This tool now has built-in RAG... just put in a PDF and upload.
@AIDevBytes (a month ago)
@stephenzzz, as @TokyoNeko8 stated, you can upload documents to chat with in the Open WebUI. I did not cover this in the video since I wanted to make it a short intro to using the Open WebUI. I may release another short video showing how to do this for those who are wondering, but it's pretty straightforward to do.
@AIDevBytes (a month ago)
@stephenzzz Check out this video for how you can chat with your docs: kzbin.info/www/bejne/oXXadnydotaUe6c. You should be able to upload your sales data using the document feature of Open WebUI and then ask questions about the data. Hope this helps.
@conerwei6720 (a month ago)
My computer does not have a good GPU. Can you make a video about how to use cloud Ollama with a local WebUI?
@conerwei6720 (a month ago)
Did not have a good GPU, sorry.
@AIDevBytes (a month ago)
Yes, I'll create a video on how to run Ollama in a cloud environment. I'll also look at how to set up Open WebUI in a cloud environment. Probably two different videos at later dates.
@0x-003 (a month ago)
Another question: which model is best suited for programming? I see many different ones, but it's a bit hard to find the best of them. If I already have Llama 3 70B installed, is there any reason to install CodeLlama 70B, or are they two completely different things?
@AIDevBytes (a month ago)
For coding-specific models, yes, CodeLlama. Which parameter count to use is going to depend on your hardware capabilities. If you have hardware that can run the 70B model, I would use that.
@0x-003 (a month ago)
@AIDevBytes Which model is good for overall questions, like everyday things, etc.?
@AIDevBytes (a month ago)
For open-source models, I like Llama 3 for general-purpose use. I also like Mixtral 8x7B if you have the hardware to run it.
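If anyone wants to grab those from the command line rather than through Open WebUI's model manager, the standard library tags look like this (same models, just the CLI route):
ollama pull codellama:70b
ollama pull mixtral:8x7b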
@0x-003 (a month ago)
I've just installed Ollama, the Windows version (preview), and I want to use Open WebUI with it. Can I do that inside Docker, even though I installed Ollama on my Windows PC? I didn't use WSL; it shows it's running on localhost:11434.
@AIDevBytes (a month ago)
Yes, running this on Windows should work in a similar manner. Are you running into any particular issue?
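For anyone in a similar Windows setup, a quick sanity check (a generic step, not from the video) is to confirm Ollama is answering on the host before starting the container:
curl http://localhost:11434
A running Ollama instance responds with "Ollama is running". The --add-host=host.docker.internal:host-gateway flag in the docker run command above is what lets the Open WebUI container reach that same host port at http://host.docker.internal:11434.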
@0x-003 (a month ago)
@AIDevBytes No, just wanted to make sure, and yeah, it seems to be working.
@0x-003 (a month ago)
How much better is this than the paid version of ChatGPT?
@AIDevBytes (a month ago)
I have not done a lot of side-by-side comparisons, but based on my tests and reviews from others, it's on par with ChatGPT. I will say that I have found it does what I need it to do.
@wgreric8427 (a month ago)
Can I generate images with this?
@TokyoNeko8 (a month ago)
You need another tool such as AUTOMATIC1111, but the integration is still very limited and basic. I have it running, but I prefer going to AUTOMATIC1111 directly by a big margin.
@AIDevBytes (a month ago)
You can if you integrate Stable Diffusion through the configuration settings. Here is a link to the Stable Diffusion web UI: github.com/AUTOMATIC1111/stable-diffusion-webui
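A rough sketch of how that integration usually works (not covered in this video, and menu labels may vary between Open WebUI versions): start the AUTOMATIC1111 web UI with its API enabled, for example
./webui.sh --api --listen
then, in Open WebUI's image generation settings, choose AUTOMATIC1111 as the engine and point the base URL at the web UI, e.g. http://host.docker.internal:7860 if Open WebUI is running in Docker and AUTOMATIC1111 on the host (7860 is AUTOMATIC1111's default port).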
@alphaobeisance3594 (a month ago)
I can't seem to figure out how to provide access to my AI through Open WebUI via my domain. I'd like to grant access to my family, but for the life of me I can't seem to get it set up for public access.
@AIDevBytes (a month ago)
In this video, I demonstrate local access only. I will explore how to host this on a server and create a follow-up video on the process. Note that hosting on a server will incur costs due to GPU pricing.
@alphaobeisance3594 (a month ago)
@AIDevBytes I've got a homelab and the hardware. I just can't seem to figure out the networking side of things for some reason. I tried proxying through Apache, but it must be over my head, as I can't get it to function correctly.
@AIDevBytes (a month ago)
Gotcha. Yeah, unfortunately I wouldn't really be able to help there, since I don't know your network configuration or topology. You should be able to go into your router settings and set up port forwarding to your homelab server.
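For plain local-network access (before worrying about a domain or reverse proxy), the usual approach, assuming the port mapping from the docker run command above, is simply to browse from another computer to the host machine's LAN IP, for example:
http://192.168.1.50:3000
Here 192.168.1.50 is a placeholder for the IP of the machine running the Open WebUI container. Exposing that same port to the internet through router port forwarding is what needs extra care around authentication and HTTPS.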
@art3mis635 (a month ago)
When running the docker command I get this error: %2Fdocker_engine/_ping": open //./pipe/docker_engine: The system cannot find the file specified. See 'docker run --help'. Any help?
@AIDevBytes (a month ago)
Did you already have Docker installed, or did you install a new version of Docker?
@art3mis635 (a month ago)
@AIDevBytes Yes, I have Docker Desktop installed, but there is a newer version. Should I update? Current version: 4.29.0 (145265). New version: 4.30.0 (149282).
@AIDevBytes (a month ago)
I recommend running the latest version to see if that fixes the issue. I am running the newest version of Docker.
@art3mis635 (a month ago)
@AIDevBytes It worked, thank you!
@AIDevBytes (a month ago)
Glad that fixed your issue.