Use Your Self-Hosted LLM Anywhere with Ollama Web UI

61,374 views

Decoder

1 day ago

Take your self-hosted Ollama models to the next level with Ollama Web UI, which provides a beautiful interface and features like chat history, voice input, and user management. We'll also explore how to use this interface and the models that power it on your phone using the powerful Ngrok tool.
Watch my other Ollama videos - • Get Started with Ollama
Links:
Code from the video - decoder.sh/videos/use-your-se...
Ollama - ollama.ai
Docker - docs.docker.com/engine/install/
Ollama Web UI - github.com/ollama-webui/ollam...
Ngrok - ngrok.com/docs/getting-started/
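For reference, the container launch from the Docker walkthrough looks roughly like this - a sketch based on the Ollama Web UI README at the time, so treat the exact flags as assumptions and check the repo above for the current ones:

    # Run Ollama Web UI in Docker, pointing it at the Ollama server on the host.
    # host.docker.internal lets the container reach the host's Ollama on port 11434,
    # and the named volume keeps chat history across container restarts.
    docker run -d \
      -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -v ollama-webui:/app/backend/data \
      --name ollama-webui \
      --restart always \
      ghcr.io/ollama-webui/ollama-webui:main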
Timestamps:
00:00 - Is this free ChatGPT?
00:16 - Tools Needed
00:19 - Tools: Ollama
00:25 - Tools: Docker
00:38 - Tools: Ollama Web UI
00:55 - Tools: Ngrok
01:12 - Ollama status check
01:37 - Docker command walkthrough
04:20 - Starting the docker container
04:33 - Container status check
04:53 - Web UI Sign In
05:17 - Web UI Walkthrough
07:11 - Getting started with Ngrok
07:55 - Running Ngrok
08:29 - Ollama Web UI on our Phone!!
09:37 - Outro - What's Next?
Credits:
Wikimedia.org for the photo of Earth

Comments: 212
@kameit00 4 months ago
Just in case you missed it, your auth token was visible when its position on your screen changed. You might want to regenerate it. Thanks for posting your videos!
@decoder-sh 4 months ago
Good eye! That was one of the things I told myself I wouldn’t do as I started this process, and of course it’s the first thing I did 😰 But don’t worry, I regenerated it before publishing the video as part of good hygiene. Stay tuned to see what other PII I leak 😅
@proterotype 3 months ago
Good man @kameit00
@kornflakesss 3 months ago
Such a good person
@JarppaGuru 2 months ago
If we can regenerate it, it won't matter? LOL. There is nothing good about keys, they're only for tracking.
@steftrando 3 months ago
See, these are the types of YouTube tech videos I like to watch. This guy is clearly a very knowledgeable senior dev, and he puts more priority on the tech than on a fancy YouTuber influencer setup.
@decoder-sh 3 months ago
Thank you for watching, and for the kind words! Don't expect to see me on a sound stage anytime soon 😂
@mikect05 3 months ago
Like how YouTube used to be. Less loud intro music. Less advertising. Fewer sponsor segments. Fewer clickbait titles. Less hiding the actual valued info among wasted time. Makes sense though... if you're explicit about what your video is about, the people interested will watch, but if you make it a mystery then hopefully anyone who thinks it might be helpful will click, and then they have to parse through... Fack, "don't recommend channel!!"
@imadsaddik 4 months ago
Oh man, I don't know how and why YouTube recommended your video, but I am very happy that they did. I enjoyed this video a lot.
@decoder-sh 4 months ago
Happy to hear it, welcome to my channel!
@xXWillyxWonkaXx 3 months ago
I second that. Straight to the point, very sharp with the info, thank you bro
@decoder-sh 3 months ago
@ts757arse I'm thrilled to hear that! I'd love to hear more about your business if you're willing to share.
@decoder-sh 3 months ago
@ts757arse Looks like it got nuked :( I need to set up a Google org email with my domain so I can talk to viewers 1:1.
@decoder-sh 3 months ago
@ts757arse Ah, so it's like pen testing simulation and planning? Very cool, that's a necessary service. Self-hosting an uncensored model seems like the perfect use case. Nuke test still fails, but I finally set up "david at decoder dot sh"!
@SashaBaych 4 months ago
You are really good at explaining things! Thank you so much. No useless hype, just plain useful hands on information that is completely understandable.
@decoder-sh 4 months ago
Thank you so much for watching and leaving a comment! I’ll continue to do my best to make straightforward and easy to understand videos in the future 🫡
@TheColdharbour 4 months ago
Really enjoyed this video too! Complete success, really well paced and carefully explained! Looking forward to the next one (open source LLMs) - thanks for the great tutorials! :)
@NevsTechBits 8 days ago
Great info! Commenting to show support! Keep going my guy!
@decoder-sh 5 days ago
Thanks for your support, I’m looking forward to making more!
@Bearistotle_ 4 months ago
Amazing tutorial, all the steps are broken down and explained very well
@annbjer 4 months ago
Really cool stuff, thanks for keeping it clear and to the point. It’s awesome that experimenting with local and custom models is becoming more accessible. I’m definitely planning to give it a try and hope to design my own custom interfaces someday. Just subbed and looking forward to learning more!
@decoder-sh 4 months ago
I look forward to seeing what you create! I have some really fun videos planned, thanks for the sub :)
@RamseyLEL 4 months ago
Solid, detailed, and thorough video tutorial
@anand83r 3 months ago
Very useful, simple to understand, and very focused on the subject 👌. It's hard to find Americans like this who deliver the message without sugarcoating or too much filler content. Good job 👌. People, it's worth supporting this person 👏
@decoder-sh 3 months ago
Thank you for your support!
@mikect05 3 months ago
So excited to find your channel... looking forward to more videos. I'm a total noob, so I feel a bit like I'm floating out in space.
@PublikSchool 9 days ago
Great video! It was the most seamless of any video I've watched.
@decoder-sh 5 days ago
Thank you for watching!
@eric.o 4 months ago
Excellent video, super easy to follow
@paoloavogadro7329 4 months ago
Very well done; quick, clean, and to the point.
@decoder-sh 4 months ago
I'm glad you think so, thanks for watching!
@chrisumali9841 3 months ago
Thanks for the demo and info, have a great day
@aolowude 2 months ago
Worked like a charm. Great walkthrough!
@decoder-sh 2 months ago
Happy to hear it!
@dhmkkk 3 months ago
What a great tutorial! Please keep on making more content!
@decoder-sh 3 months ago
Thanks for watching, I certainly will!
@WolfeByteLabs 29 days ago
Thanks so much for this video man. Awesome entry point to local + private LLMs.
@decoder-sh 27 days ago
My pleasure, thanks for watching!
@adamtechnology3204 3 months ago
This was really beneficial, thank you a lot!
@anthony.boyington 3 months ago
Very good video and easy to follow.
@scott701230 4 months ago
Awesomeness! Thank you for the Tutorial!
@decoder-sh 4 months ago
My pleasure, thanks for watching!
@ronaldokun 2 months ago
Thank you for the exceptional tutorial!
@decoder-sh 2 months ago
My pleasure, thanks for subscribing!
@collinsk8754 4 months ago
Excellent tutorial 👏👏!
@decoder-sh 4 months ago
I’m glad you enjoyed it!
@JacobLehman-ov4eu 1 month ago
Thanks, very helpful and simple. I'm very new to all of this (and coding) but it really fascinates me. I would love to be able to set up an LLM with RAG and use it in a web UI so that my coworkers could test projects. I will get there, and your content is very helpful!
@bhagavanprasad 1 month ago
Excellent, thank you!
@rgm4646 24 days ago
This works great! Thanks!!
@decoder-sh 18 days ago
Thank you so much!
@bndy0 3 months ago
Ollama WebUI has been renamed to Open WebUI; a video tutorial on how to update would be helpful!
@decoder-sh 3 months ago
Looks like it's the same codebase, but I could possibly go over the migration? It appears to be just a couple of commands: github.com/open-webui/open-webui?tab=readme-ov-file#moving-from-ollama-webui-to-open-webui
@Uconnspartan 3 months ago
Great content!
@decoder-sh 3 months ago
Thanks for watching!
@MacProUser99876 4 months ago
Beautiful stuff, mate!
@decoder-sh 4 months ago
Cheers, thank you!
@synaestesia-bg3ew 4 months ago
@decoder-sh Everything seems to look so easy with you. I did this a month ago, but it was not so easy.
@decoder-sh 4 months ago
@synaestesia-bg3ew Thank you for the kind words!
@CodingScot 3 months ago
Could it be done through Docker, Portainer, and Nginx Proxy Manager as well?
@kashifrit 1 month ago
It's an extremely good video.
@decoder-sh 1 month ago
Thank you for watching!
@UnchartedWorlds 4 months ago
Thank you, keep it up! Subbed.
@safetime100 3 months ago
Legend ❤
@yashkaul802 1 month ago
Please make a video on deploying this on Hugging Face Spaces or AWS ECS. Great video!
@keylanoslokj1806 3 months ago
Great info. What kind of beast workstation/server do you need to set up, though, to run your own GPT?
@decoder-sh 3 months ago
Depends what your needs are! If you just want to use a small model for simple tasks, any GPU from the last 5(?) years should be fine, or a beefy CPU. I'm using an M1 MacBook Pro, though I've also had requests for Linux demos and would be happy to show you how models run on a 2080 Ti.
@iseverynametakenwtf1 4 months ago
This is cool. Might see if I can get LM Studio to work. Why not host your own server too?
@Candyapplebone 2 months ago
Interesting. You really didn’t have to code that much to actually get it all up and running.
@decoder-sh 2 months ago
Yes indeed! There will be more coding in future videos, but in the beginning I’d like to show what’s possible without much coding experience
@ollimacp 3 months ago
Splendid tutorial, thanks a lot :) You got a like and a sub from me! And if I write a custom model (MemGPT + CrewAI) and want to use the WebUI, would it be better to try to get the model into an Ollama modelfile, or just expose the model via an API that mimics the standard (OpenAI)?
@decoder-sh 3 months ago
Thanks for watching! It looks like MemGPT isn't a model so much as a library that uses models (via OpenAI and their own endpoint) to act as agents. So a modelfile wouldn't work, but it does look like they have some instructions for connecting to a UI (Oobabooga in this case: memgpt.readme.io/docs/local_llm). Best of luck, let us know how it goes!
@aimademerich 4 months ago
Phenomenal
@khalidkifayat 3 months ago
Great one. A few questions here: 1. Can you throw some light on input/output token consumption to/from the LLM? 2. How can we offer this app to clients as a service provider? Thank you.
@spencerfunk6697 1 month ago
Integration with Open Interpreter would be cool.
@baheth3elmy16 3 months ago
I really liked your video, I subscribed of course. I don't think Ollama adds much, though, given the abundant services currently available for mobile.
@decoder-sh 3 months ago
Thanks for watching and subscribing! What are your current favorite LLM apps?
@baheth3elmy16 3 months ago
@decoder-sh I use Oobabooga, sometimes on its own and sometimes with SillyTavern as a front end, and Faraday, for local LLMs.
@VimalMeena7 4 months ago
Everything is working fine locally, but when I run it over the internet using ngrok it shows "Ollama WebUI Backend Required", although my backend is running... On my local system I am getting responses to my queries. Please help, I am not able to resolve it.
@soyhenryxyz 4 months ago
For cloud hosting of the Ollama web UI, which services do you suggest? Additionally, are there any services you recommend for API use, to avoid installing and storing large models? Appreciate any insight here, and great video!
@simonbrennan7283 3 months ago
Most people considering self-hosting would be doing so because of privacy and security concerns, which I think is the target audience for this video. Cloud hosting totally defeats the purpose.
@decoder-sh 3 months ago
I don't have any recommended services at the moment, but I would like to research and create a video reviewing a few of the major providers in the near future. Ditto for API providers; I've been mostly focused on self-hosting at the moment. Some that come to mind are OpenAI (obviously), Mistral (mistral.ai/product/), and one that was just announced is Groq (wow.groq.com/).
@danteinferno8983 2 months ago
Hi, can we have a local AI model installed on our Linux VPS and then use its API to integrate it into our WordPress website or something like that?
@jayadky5983 4 months ago
Hey, good work mate! I wanted to know if we could expose our self-hosted Ollama API through ngrok just as we hosted the WebUI? I am using a server to run Ollama and I have to SSH in every time to use it. So, can we instead forward the Ollama localhost API to ngrok and then use it on my machine?
@decoder-sh 4 months ago
Yeah you could definitely do that! Let me know how it works out for you :)
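A minimal sketch of that idea, assuming Ollama is serving on its default port 11434 - the --host-header flag rewrites the Host header to localhost, which Ollama expects, and the tunnel URL is a placeholder for whatever ngrok prints:

    # Tunnel the local Ollama API itself, rather than the web UI
    ngrok http 11434 --host-header="localhost:11434"

    # Then, from any machine, hit the API through the tunnel URL:
    curl https://<your-tunnel>.ngrok-free.app/api/generate \
      -d '{"model": "phi", "prompt": "Hello from the road"}'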
@SODKGB 3 months ago
I would like to make changes to the provided interface, for example hide/remove the left menu bar, change colors, change fonts, or add some graphics. Any pointers in the right direction would be great. Thinking I might need to download the web UI and edit the source before starting Docker and Ollama?
@decoder-sh 3 months ago
The UI already allows you to show/hide the left menu (there's a tiny button that's hard to see, but it's there). Beyond that, yes you'd need to download their repo and manually edit their frontend code. Let me know how it turns out!
@SODKGB 3 months ago
@decoder-sh It's been a lot of hacking. At least Ollama for Windows in combination with Docker is fast and easy. Potential exists to use Python to send and receive content from the local server and modify the content to accept variables via GET or POST.
@alizaka1467 3 months ago
Can we use GPT models with this? Thanks. Great video as always
@decoder-sh 3 months ago
Do you mean OpenAI? Yes you can add your OpenAI API key to the webui in Settings. Sorry for not showing that!
@YorkyPoo_UAV 2 months ago
At first I thought it was great, but since I turned a VPN on and then off, I can't get models to load on the remote page. Also, every time I start an instance a new code is generated, so I can't keep using the same URL.
@luiferreira8437 4 months ago
Thanks for the video. I would like to know if it is possible to do this with a RAG system built on Ollama and also add a diffusion model (like Stable Diffusion) to generate images.
@decoder-sh 4 months ago
This is my first time hearing someone talk about combining RAG with image generation - what kind of use case do you have in mind?
@luiferreira8437 4 months ago
@decoder-sh The idea that I have is to improve model accuracy on a certain topic, while having the option to generate images if needed. A use case would be writing a book, keeping character descriptions and images consistent. I actually didn't have both in mind simultaneously, but it could be interesting.
@decoder-sh 4 months ago
That seems a bit more like a knowledge graph where you update connections or attributes of entities as the model parses more text. I'll be covering some RAG topics in the near future and would like to eventually get to knowledge graphs and their use with LLMs
@hmdz150 4 months ago
This is amazing, does the Ollama Web UI work with PDF files too?
@decoder-sh 4 months ago
It does have document uploading abilities, but I haven’t looked at their code to see how that actually works under the hood. I believe it does do some naive parsing and embedding generation. Try uploading a document and asking a question about it!
@Shivam-bi5uo 3 months ago
Can you help me? If I want to host a fine-tuned LLM, how can I do so?
@sitedev 4 months ago
This is nuts. Imagine if you could (you probably can) connect this with a RAG system running on the local machine which contains a business's entire knowledge base and then deploy it to your entire sales/support team.
@decoder-sh 4 months ago
You totally can! Maybe as a browser extension that integrates with gmail? I'm planning a series on RAG now, and may eventually discuss productionizing and use cases as well. Stay tuned 📺
@sitedev 4 months ago
@decoder-sh Cool. I saw another video yesterday discussing very small LLMs fine-tuned for specific function calling - I can imagine this would also be a neat method of extending the local AI to perform other tasks too (replying to requests via email, etc). Have you experimented with local LLMs and function calling?
@thegamechanger3793 1 month ago
Do you need a good CPU/RAM to run this? Just trying to see whether installing Docker/an LLM/ngrok requires a high-end system?
@decoder-sh 1 month ago
It depends on the model you want to run. Docker & ngrok don't require many resources at all, and I've seen people run (heavily quantized) 7B models on a Raspberry Pi. I'm using an M1 MacBook, but it's overkill for smaller models.
@peterparker5161 22 days ago
You can run Phi-3 mini quantized on an entry-level laptop with 8 GB of RAM. If you have 4 GB of VRAM, the response will be very quick.
@Enkumnu 17 days ago
Very interesting! However, can we configure Ollama on a specific port? The default is localhost, but how do we use a server with a specific IP address (e.g., 192.168.0.10)?
@JenuelDev 18 days ago
Hi! I want to deploy this on my own server, how do I do that?
@Wade_NZ 1 month ago
My AV (Bitdefender) goes nuts and won't allow the ngrok agent to remain installed on my PC :(
@rajkumar3433 3 months ago
What will the deployment command be on an Azure Linux machine?
@adamtechnology3204 3 months ago
How can I see the hardware requirements for each model? Even phi doesn't give me a response back after minutes of waiting; I have a really old laptop XD
@Fordtruck4sale 3 months ago
How does this handle multiple users wanting to load multiple different models at the same time? FIFO?
@decoder-sh 3 months ago
Yes I believe so
@hypergraphic 2 months ago
Great walk-through, although I think I will just install it on a VPS instead.
@decoder-sh 2 months ago
A VPS also works! Which would you use?
@kevinfox9535 1 month ago
I used the web UI to run Mistral but it's very slow. I have a 3050 with 6 GB VRAM and 16 GB RAM. However, I can run the Ollama Mistral model fine from the command prompt.
@big_sock_bully3461 3 months ago
Is there any other way I can keep ngrok running in the background? I want to integrate it with my own personal website, so ngrok won't work as-is. Do you have any other solution?
@decoder-sh 3 months ago
If you want to run ngrok (or anything) as a background task, you can just add "&" after the command. See here: www.digitalocean.com/community/tutorials/how-to-use-bash-s-job-control-to-manage-foreground-and-background-processes#starting-processes If you're on Linux, you could also create a service for it, which is a more sustainable way of accomplishing this.
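As a concrete sketch (assuming the web UI is on port 3000): nohup keeps the tunnel alive after the terminal closes, and ngrok's local inspection API on port 4040 reports the public URL afterwards.

    # Start ngrok detached from the terminal, logging to a file
    nohup ngrok http 3000 > ngrok.log 2>&1 &

    # Recover the public URL later from ngrok's local inspection API
    curl -s http://localhost:4040/api/tunnels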
@matthewarchibald5118 1 month ago
Would it be possible to use Tailscale instead of ngrok?
@decoder-sh 1 month ago
If you're just using it for yourself, or with other people you trust to share a VPN with, then Tailscale definitely works! In that case your UI address will be either localhost or whatever your Tailscale DNS name is. I use Tailscale myself for networking my devices.
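A rough sketch of the Tailscale route, assuming MagicDNS is enabled and the web UI is mapped to host port 3000 (the machine name is a placeholder):

    # On the host, confirm the machine's tailnet name and that it's connected
    tailscale status

    # From your phone or laptop on the same tailnet, browse to:
    #   http://<your-machine-name>:3000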
@nachesdios1470 2 months ago
This is really cool, but for anyone who wants to try this out, be careful when exposing services on the internet: check for updates regularly, try to break the app yourself before exposing it, and monitor activity closely.
@maidenseddie1701 3 months ago
Thanks for the clear video. What are the use cases for a non-technical person like me to use a self-hosted LLM? Or is this video only for developers working at businesses? I would like to understand why a person would use a self-hosted LLM when there are already LLMs like Llama, GPT-4, Claude 2.0, and Google's Gemini. I still don't understand the use case for self-hosted LLMs.
@decoder-sh 3 months ago
Fair question! You might self-host an LLM if you wanted to keep your data private, or if you wanted greater flexibility in which models you use, or if you didn't want to pay for API access, or if you didn't want to be constrained by OpenAI's intentionally restrictive system prompt. Let us know if you decide to give self-hosting a try!
@maidenseddie1701 3 months ago
@decoder-sh Thank you, will give it a shot. I'm a non-technical person!
@maidenseddie1701 3 months ago
I'm trying to follow your steps but I'm stuck at the command line on Mac. I can't seem to add more than one line of code, as whenever I hit the return key the command line processes that single line. I'm unable to paste the entire code. Can you message the code so I can paste it in its entirety? Your help will be greatly appreciated!
@decoder-sh 3 months ago
@maidenseddie1701 Ah, you either need to put it all on one line OR end each line with a backslash \. This has the effect of escaping the newline character that follows it. See the code here: decoder.sh/videos/use-your-self_hosted-llm-anywhere-with-ollama-web-ui
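To illustrate, the shell reads these two forms as the same single command (an abbreviated example, not the full command from the video):

    # One line:
    docker run -d -p 3000:8080 --name ollama-webui ghcr.io/ollama-webui/ollama-webui:main

    # Same command split across lines; each trailing backslash escapes the newline:
    docker run -d \
      -p 3000:8080 \
      --name ollama-webui \
      ghcr.io/ollama-webui/ollama-webui:main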
@maidenseddie1701 3 months ago
@decoder-sh Thank you for the link, pasting the full code helped. I have other issues though, and I really want to build this out, so I will appreciate your help until I get this right! Can I DM you on LinkedIn or anywhere else? 1. The Ollama interface doesn't load the response from Llama 2 when I test it with 'Tell me a random fun fact about the Roman Empire.' Is my computer too slow? I have 8GB RAM and am using the Chrome browser. Only 1 out of 4 attempts has returned an answer so far, so it has worked just once. 2. ngrok: The terminal keeps saying "ngrok: command not found" when I run "ngrok config check". How do I proceed? I'm desperate to make this work :)
@leandrogoethals6599 4 months ago
Oh thanks man, I was tired of going through RDP with port forwarding to where it ran locally ;)
@decoder-sh 4 months ago
I actually do something a little similar - I use Tailscale as a VPN into my home network, then I can easily access whatever services are running. Ngrok is great for a one-off, but I use the VPN daily since I don't need to share it with anyone else.
@leandrogoethals6599 4 months ago
@decoder-sh But don't you lose the ability to use the foreign network when connecting without virtual adapters? Which is a pain on phones.
@shobhitagnihotri416 4 months ago
I am not able to understand the Docker part; maybe some glitch on my MacBook. Is there any way we can do it without the use of Docker?
@decoder-sh 3 months ago
It will be a bit messier, but they do provide instructions for non-Docker installation. Docker Desktop should just be a .dmg you open to install: github.com/ollama-webui/ollama-webui?tab=readme-ov-file#how-to-install-without-docker
@bhagavanprasad 1 month ago
Question: the Docker image is running, but the web UI is not listing any models that are installed on my PC. How do I fix it?
@riseupallday 15 days ago
Download any model of your choice using "ollama run name_of_model".
@johnmyers3233 1 month ago
Does a pile downloaded seems to be coming up with some malicious software
@dvn8ter 3 months ago
⭐️⭐️⭐️⭐️⭐️
@decoder-sh 3 months ago
Thanks for watching!
@michamohe 4 months ago
I'm on a Windows 11 machine, is there anything I would do differently with that in mind?
@decoder-sh 4 months ago
Ollama is working on Windows support now! x.com/ollama/status/1757560242320408723 For now, you can still run Ollama on Ubuntu in Windows via WSL.
@gold-junge91 3 months ago
On my root server it's not working; it looks like the Docker container has no access to Ollama, and the troubleshooting section doesn't help.
@decoder-sh 3 months ago
Do you have any logs that you could share? Is Ollama running? What URL is listed when you go into the web UI settings and look at the "Ollama API URL"?
@kashifrit 1 month ago
ngrok keeps changing the link every time it gets started up?
@decoder-sh 1 month ago
Yes, each session's link will be unique. It may be possible to have consistent links if you pay for their service
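For example, with a reserved domain on your ngrok account, the tunnel can be pinned to a stable address (a sketch; the domain here is a placeholder for whatever name you reserve in the ngrok dashboard):

    # Reuse the same public address across sessions
    ngrok http 3000 --domain=my-reserved-name.ngrok-free.app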
@neuralgarden 3 months ago
Looks like Docker only works on Windows, Linux, and Intel Macs, but not M1 Macs... Are there any alternatives?
@decoder-sh 3 months ago
This video was made on an M1 Mac, docker should work!
@neuralgarden 3 months ago
@decoder-sh Oh wait, never mind, I got it to work. For some reason it says only Intel Macs on the Docker website, but I scrolled down to the bottom of the page and found the download button for M1 Macs. Thanks, great tutorial btw.
@mernik5599 1 month ago
Please! How can I add function calling to this Ollama-served web UI? And is it possible to add internet access, so that if I ask for today's news highlights it can give a summary of news from today?
@decoder-sh 1 month ago
I'm not sure if open-webui supports function calling from their UI, unfortunately.
@shanesteven4578 4 months ago
Would love to see what you could do with something like the 'Arduino GIGA R1 WiFi' with a screen, and other such small devices as the ESP32 with Meshtastic - LLMs being accessible on such devices, limited to specific subjects such as: emergency services, medical, logistics, finance, administration, sales & marketing, radio communications, agriculture, math, etc.
@decoder-sh 4 months ago
As long as it has a screen and an internet connection, you can use this method to interact with your LLMs on the device!
@albertlan 3 months ago
Anyone know how to access Ollama via API like you would with ChatGPT? I got the web UI working; would love to be able to code on my laptop and utilize the remote PC's GPU.
@decoder-sh 3 months ago
I find that the easiest way to use services on another machine is just to SSH into it. So if you have Ollama serving its API on your beefy machine on port 11434, then from your local machine you'd run: ssh -L 11434:11434 beefy-user@beefy-local-ip-address. This assumes you have sshd running on the other machine, but it's not hard to set up.
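A sketch of that workflow with placeholder names, using Ollama's /api/generate endpoint once the tunnel is up:

    # Terminal 1: forward the remote machine's Ollama port to this laptop
    ssh -L 11434:11434 beefy-user@beefy-local-ip-address

    # Terminal 2: talk to the remote Ollama as if it were local
    curl http://localhost:11434/api/generate \
      -d '{"model": "mistral", "prompt": "Why is the sky blue?"}'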
@albertlan 3 months ago
@decoder-sh How did you know my user name lol. I finally got it working through nginx, but the speed was too slow to be useful, unfortunately.
@ArtificialChange 3 months ago
My Ollama won't install models and I don't know where to put them; there's no folder called "models".
@decoder-sh 3 months ago
Once you have ollama installed, it should manage the model files for you (you shouldn't need to put them anywhere yourself). If `ollama pull [some-model]` isn't working for you, you may need to re-install ollama
@ArtificialChange 2 months ago
@decoder-sh I will give it another try. I want to know where to put my own models.
@Rambo51TV 3 months ago
Can you show how to use it offline with personal information?
@decoder-sh 2 months ago
I will have videos about this coming soon!
@JT-tg9uo 4 months ago
Everything works but I can't select a model. I can access it from my phone, etc., but cannot select a model.
@decoder-sh 4 months ago
It may be that you don't have any models installed yet? I didn't actually call that out in the video, so that's my bad! In the web UI go to Settings > Models, and then type in any of the model names you see here: ollama.ai/library ("phi" is an easy one to start with). Let me know if that was the issue! Thanks for watching.
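Equivalently from a terminal on the host (model names as listed in the Ollama library):

    ollama pull phi   # download the model so the web UI can see it
    ollama run phi    # optional: sanity-check it from the CLI first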
@JT-tg9uo 4 months ago
Thank you sir, I'll give it a whirl.
@JT-tg9uo 4 months ago
Yeah, it says "Ollama WebUI server connection error" when trying to pull phi or any other model. But other than that it works from my phone, etc.
@JT-tg9uo 4 months ago
Ollama works fine from the terminal with phi, etc. Maybe Docker is not configured right - I've never used Docker before.
@ANIMATION_YT520 3 months ago
Bro, how do you connect it to the internet for free using a domain host?
@decoder-sh 3 months ago
Do you mean how would you use a custom domain with ngrok for free? I'm not sure if that's possible, that's probably something they'd make you pay for.
@fedorp4713 3 months ago
Wow, hosting an app on a free hostname from your home, it's just like 2002.
@decoder-sh 3 months ago
Next I'll show you how to use an LLM to create your very own ringtone
@fedorp4713 3 months ago
@@decoder-sh How will that work with my pager?
@decoder-sh 3 months ago
@fedorp4713 I've seen people make music with HDDs, I'm sure we can quantize some Beach Boys to play on DTMF.
@fedorp4713 3 months ago
@@decoder-sh Love it! Subbed, can't wait for the boomer pager LLM series.
@samarbid13 1 month ago
Ngrok is considered a security risk because it is closed-source, leaving users uncertain about how their data is being handled.
@decoder-sh 1 month ago
Fair enough! One could also just use a VPN of their choice (including self-hosted Wireguard) to connect their phone to the host device, and reach the webui on localhost
@MrMehrd 3 months ago
Do we need to be connected to the internet?
@decoder-sh 3 months ago
Yes you'll need to be connected to the internet
@optalgin2371 1 month ago
Do you have to use 3000:8080?
@decoder-sh 1 month ago
No, you can change the docker config to use whatever host ports you want
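For example, only the host side of the mapping changes; the container still listens on 8080 internally (a sketch with an arbitrary host port):

    # Serve the UI at http://localhost:8081 instead of :3000
    docker run -d -p 8081:8080 --name ollama-webui ghcr.io/ollama-webui/ollama-webui:main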
@optalgin2371 1 month ago
@decoder-sh What if I want to use the Ollama server on my Windows machine and connect Open WebUI to it from a different Mac machine? I've seen there's a command for using Ollama on a different host, but whenever I use that command with 3000:8080 the UI page opens and I can register and change things, but it doesn't connect; however, when I use the network-flag fix it doesn't even load the web UI page.
@optalgin2371 1 month ago
@decoder-sh Is there a way to use this method to connect two machines?
@J4M13M1LL3R 4 months ago
Please wen llamas with image recognition
@decoder-sh 4 months ago
The llamas have eyes! You can use multimodal models with Ollama NOW. Currently the two models that support images are llava and bakllava, and both are sub-7B params I believe.
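A sketch of one way to try that, sending a base64-encoded image to Ollama's /api/generate endpoint (the images field is part of the Ollama API; photo.jpg is a placeholder):

    ollama pull llava

    # Encode the image and ask llava about it
    IMG=$(base64 < photo.jpg | tr -d '\n')
    curl http://localhost:11434/api/generate \
      -d "{\"model\": \"llava\", \"prompt\": \"What is in this picture?\", \"images\": [\"$IMG\"]}"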
@PhenomRom 3 months ago
Why didn't you put the commands in the description?
@decoder-sh 3 months ago
YouTube doesn't support code blocks in the description, so I spent the day writing code to generate a static site for each video, so I can post the code there. Enjoy! decoder.sh/videos/use-your-self_hosted-llm-anywhere-with-ollama-web-ui
@PhenomRom 3 months ago
Oh wow, thank you @decoder-sh
@decoder-sh 3 months ago
@PhenomRom My pleasure! Might do a video about how to make my website too 😂
@BogdanTestsSoftware 4 months ago
What hardware do I need to run this container? A GPU? Ah, found it: "WARNING: No NVIDIA GPU detected. Ollama will run in CPU-only mode."
@ArtificialChange 3 months ago
Remember, your Docker may look different.
@gabrielkasonde367 3 months ago
Please add the commands to the description, thank you.
@decoder-sh 3 months ago
Good call, will do!
@JarppaGuru 2 months ago
Yes, now we can use AI to answer what it was trained on. This is "question, answer this". We already had Jarvis with voice LOL. Now we're back to text LOL.
@arupde6320 4 months ago
be regular
@decoder-sh 3 months ago
L1 or L2?
@photize 3 months ago
Great video, but macOS? What happened to the majority vote? You lost me there - not even a mention for the non-lemming Nvidia crew!
@decoder-sh 3 months ago
Fair enough, I'd be happy to do some videos for Linux as well! Thanks for watching.
@photize 3 months ago
@decoder-sh I'm presuming the majority to be Windows; it amazes me how many AI guys have CrApple, when in the real world many are using gaming machines for investing time in AI. (Not me, I'm just an Apple hater)
@garbagechannel6514 4 months ago
Isn't the electric bill higher than just paying for ChatGPT?
@decoder-sh 4 months ago
Depends on the price of electricity where you are, and how much you use it! But running LLMs locally has other benefits as well: no need for an internet connection, no vendor lock-in, no concern about sending your data to Meta or OpenAI, the ability to use different models for different jobs - plus some people just like to own their whole stack. It would be interesting to figure out the electricity utilization per token for an average GPU though…
@garbagechannel6514 4 months ago
@decoder-sh True enough, the concept is appealing, but that's what holds me back at the moment. I was also looking at on-demand cloud servers, but it seems like it would get either very expensive or very slow if you let an instance spin up for every query. The most effective does seem to be anything with shared resources, like ChatGPT.
@Soniboy84 4 months ago
You forgot to mention that you need a chunky computer at home running those models, potentially costing $1000s
@decoder-sh 4 months ago
It doesn't hurt! But even small models like Phi are pretty functional and don't have very high hardware requirements. Plus, if you're a gamer then you've already got a chunky GPU, and LLMs give you one more thing to use it for 👨‍🔬
@razorree 3 months ago
another 'ollama' tutorial....
@decoder-sh 3 months ago
Guywhowatchesollamatutorialssayswhat
@arquitectoqasdetautomatiza5373 2 months ago
You're the absolute best, bro, please keep uploading videos!