The Secret Behind Ollama's Magic: Revealed!

28,427 views

Matt Williams

4 months ago

Ollama is amazing and lets you run LLMs locally on your machine. But how does it work? What are the pieces you need to use? This video covers it all.
Be sure to sign up to my monthly newsletter at technovangelist.com/newsletter
And if interested in supporting me, sign up for my patreon at / technovangelist

Comments: 102

@erniea5843 · 4 months ago
Appreciate you taking the time to put this video together!

@mo_i_nas · 3 months ago
that 10 second pause at the end of the video was like expecting a post-credits scene from a Marvel movie :D

@piero957 · 4 months ago
Excellent content and explanation. Your videos get my like as soon as they start, just to be sure I don't forget.
@darenbaker4569 · 4 months ago
Sorry, but you are so cool. I'm enjoying your videos so much, and they're saving me months of reading the docs. This AI world is moving so fast for an old person like me, but it's making me feel young again. It feels like we are still at the 14.4k modem stage of development, though. Love what you're doing.

@Slimpickens45 · 4 months ago
Thanks for the in-depth review. Good stuff, sir!

@aaronk9910 · 2 months ago
Awesome and calm explanation. By far the best channel about this topic.

@lucioussmoothy · 4 months ago
Great job compiling and explaining this information. Fantastically useful video - keep up the great work, brother.

@Autom4tic · 3 months ago
Hey, Matt! Thank you for such a clear explanation! Cheers!

@lukasksiezak4929 · 1 month ago
It is still unclear to me how Ollama really works; I was hoping for a deeper dive into that topic.
@jayk806 · 1 month ago
Really enjoyed this video. I especially appreciated the section on creating derivative models. Very informative. Thanks!

@bjaburg · 3 months ago
And I thought I knew everything. Until I found your video(s). Thanks again!!

@fabriai · 4 months ago
Thanks! Good architecture video. Amazing.

@AnythingGodamnit · 4 months ago
Thanks for this. This helped me click that the model is not the big thing, but rather the weights are. I find that a bit confusing because people talk about training a model, not training the weights.

@s11-informationatyourservi44 · 4 months ago
Awesome tutorial.

@isaacyimgaingkuissu3720 · 3 months ago
Great job. Thanks for going deep into how the architecture works and where the possible break points are. One question: do you have an architecture diagram of each block of Ollama and how they interact with each other?
@TheOnlyEpsilonAlpha · 3 months ago
The explanation that Ollama doesn't upload things was the most interesting part for me, and it got answered in this video. It's no wonder, based on where Ollama comes from.

@Techonsapevole · 4 months ago
Great video. Can you make a video on autogen-studio + ollama? Is there a local model able to run the playground examples?

@technovangelist · 4 months ago
Yes. Been meaning to cover that.

@pschakravarthi · 29 days ago
Thanks for the details. Can you create a video on running 70B (or larger) models that don't fit entirely in GPU memory?

@richardpulliam9096 · 1 month ago
Great presentation, @Matt. I do have one question about the save option: if you save once and then continue asking questions, will it continue to save, or do you need to do this after or during each session?

@technovangelist · 1 month ago
That's not really the purpose of the save. Save creates a new model based on the current state. Once you create the system prompt and parameters, save it so that you can use it later.
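As a sketch, a save from Ollama's interactive session looks roughly like this (the model name pirate-llama is made up for illustration):

```
$ ollama run llama2
>>> /set system Answer every question the way a pirate would.
>>> /save pirate-llama
>>> /bye
$ ollama run pirate-llama
```

The saved model shares the base model's weights layer, so it only adds a few kilobytes on disk.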
@timtensor6994 · 3 months ago
Hi Matt, great content. I think it would be great if you had some kind of cookbook with your experiments, similar to Hugging Face. On another note, it would be great to have videos on RAG + open source and running agents. They seem to be the hot topics for now.

@DonBranson1 · 4 months ago
Another great video. I just need to implement RAG so I can remember it.

@m12652 · 3 months ago
Thanks for this... just what I needed 👍 What would really help would be a list of example prompts, or sequences of prompts, that produce workable code. So far I've had almost no good results for anything more than a very simple question (which I likely would never ask)... for example, I asked for a popup list that, when a list item is clicked, dismisses the popup and returns the item. What I got was a non-modal contact form with a select list. Many iterations later, still nothing useful. Cheers...

@MrJaggy123 · 4 months ago
I do have an "anything else" question: it seems like Ollama has its own custom place and way of storing models. Does this mean that if I want to do other stuff with the models, say for instance using EleutherAI's benchmarking stuff, I'll need a copy of all the models? I currently have lots and lots of models, almost 3T worth, so I'd rather not have 6T if I can help it...

@xugefu · 3 months ago
Thanks!

@natekidwell · 2 months ago
Amazing presentation. Is there a paper or post discussing the Docker-esque style of Ollama's manifest?

@technovangelist · 2 months ago
I don't think so.
@sandrocavali9810 · 4 months ago
Excellent

@bobbaganush1 · 2 months ago
Why do none of the models work on my end? When I try mistral or llama 2, I get a message stating those terms are considered hate speech. Is there a workaround for this?

@felixchin1 · 4 months ago
Hey! Big fan and user of Ollama; thank you for everything you do! Just out of curiosity, how does Ollama work on M1/M2 Macs, which lack NVIDIA GPUs? When I try to run Mixtral through PyTorch, I get an error about not having NVIDIA CUDA. How does Ollama bypass this problem?

@technovangelist · 4 months ago
The team runs on Apple silicon Macs, so that came first. Metal and Apple silicon are super powerful. Ollama doesn't use PyTorch or Python, which avoids a lot of problems.

@felixchin1 · 4 months ago
@technovangelist So it's PyTorch that's dependent on CUDA, not the LLM model itself? Are there any frameworks besides Ollama that work with Apple silicon? It seems like TensorFlow doesn't work on Apple silicon either? Asking because I'm trying to fine-tune models locally. Wishing for the day Ollama adds local fine-tuning as a feature!

@jdray · 4 months ago
This is super helpful as a reference for my efforts to get the Security people to stop freaking out about what demons might be infecting people's computers if they allow something like Ollama to be distributed. Can you comment further on what happens with those "memory" files? I presume they stay local even if someone pushes the model that they're associated with. Also, I'd like to see a short video (or maybe there is one) on the concept of someone "customizing a model and pushing it to the Library". What customizations are available? If customizing doesn't change the weights, why would someone do it? How is a "model" different from "the weights"? These are all things that Security is going to ask.
@podunkman2709 · 4 months ago
Hi Matt! Thanks for your videos - you inspired me to get into the subject. How can I use my own data in Ollama? I would like to build a search engine that finds products according to some given attributes. I have product title, SKU, and description. It would be good if I could get product details (including the product number) when posting a query like "mineral oil, 5w40" or so. How can I start with that?

@technovangelist · 4 months ago
If you have structured data and you want to search for things that match, then a more traditional DB lookup is better and faster.

@podunkman2709 · 4 months ago
@technovangelist Well, it is not so structured 🙂 Very often product descriptions are quite chaotic, and users don't type exactly what's in the descriptions. Anyway, whatever it is: how do I use my own data?

@brinkoo7 · 4 months ago
For those on Arch Linux, don't make the mistake I did if you have NVIDIA: I installed the ollama package from the AUR, but never noticed the ollama-cuda package, which is so much faster.

@karlofranic1299 · 3 months ago
Can someone explain how Ollama manages to run large models on my server when I can't even run smaller ones? (Ollama can run 34b versions of models, while without it I run out of memory already on 7b versions.)

@technovangelist · 3 months ago
I assume that without Ollama you are trying to work with unquantized models? A 7b unquantized model will take 32GB of VRAM at least, because it's 4 bytes per parameter. Quantization reduces the size of the parameters with almost no impact on precision. So it can be reduced to needing 3.5GB for best performance, or as little as under 2GB. A 34b model could fit in as little as 9GB.
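The arithmetic in this reply can be sketched as a quick back-of-envelope estimate (it ignores runtime overhead, which is why the unquantized 7b figure lands slightly below the 32GB quoted above):

```python
# Rough VRAM estimate for holding a model's weights:
# memory ≈ parameter count × bytes per parameter.

def estimate_vram_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate gigabytes needed for the weights alone."""
    bytes_per_param = bits_per_param / 8
    # billions of params × bytes each = gigabytes (1e9 cancels out)
    return params_billions * bytes_per_param

# Unquantized fp32 (32 bits, i.e. 4 bytes per parameter): 7b needs ~28 GB.
print(estimate_vram_gb(7, 32))   # 28.0
# 4-bit quantization: the same 7b model fits in ~3.5 GB.
print(estimate_vram_gb(7, 4))    # 3.5
# ~2-bit quantization brings a 34b model down to ~8.5 GB.
print(estimate_vram_gb(34, 2))   # 8.5
```

This is why quantized 34b models can run on hardware that chokes on an unquantized 7b.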
@daryladhityahenry · 4 months ago
Question about Ollama: I have 8GB VRAM. In the first request I ask it to run mistral7b, for example. In the second request I ask it to run llama2. Does it automatically swap the model, or what? Thanks. Is it able to do what I said? Because if yes, it's very useful for small VRAM like mine :D

@technovangelist · 3 months ago
Yeah. With smaller models this is closer to magic. It dumps the first model from memory, then loads the second and answers the question. With larger models you feel the time taken to load and unload models.

@daryladhityahenry · 3 months ago
@technovangelist Woah... why am I only learning about this now? Thank you so much. This is exactly what I need for my personal project.

@atrocitus777 · 4 months ago
How can I point Ollama to my own registry, where, say, I'm hosting models on my own on-prem hardware?

@technovangelist · 4 months ago
That will come soon. The person on the team who wrote the registry was, I think, able to reuse his code from Docker Hub when he created it. But a lot had to change to support Ollama. Docker images are minuscule compared to LLMs.

@Xiripyu · 4 months ago
Ollama vs LM Studio: which is better, and what is the difference?

@technovangelist · 4 months ago
The way I have heard most describe it is that LM Studio is great to start with, but you quickly hit walls that are difficult to deal with. Ollama will let you do far more with better speed and efficiency. Plus, Ollama is open source.
@FloodGold · 4 months ago
Discord is chaos, and I don't like chaos, but I really like your videos, so thanks.

@technovangelist · 4 months ago
I think the Discord is lining up nicely with what the maintainer team wants. It's not chaos, it's activity.

@user-xh3ig8qq7j · 3 months ago
Do I have to duplicate the model to add my permanent system prompt?

@technovangelist · 3 months ago
Creating a new model based on another with a new system prompt will mean an additional few KB. It will use the same weights layer.

@user-xh3ig8qq7j · 3 months ago
Great, @technovangelist, but how do I do this?
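A minimal sketch of how that works with a Modelfile (the base model and the new name here are just examples). Put the following in a file called Modelfile:

```
FROM llama2
SYSTEM "You are a concise assistant. Keep every answer under three sentences."
```

Then build and run it with `ollama create my-llama2 -f Modelfile` and `ollama run my-llama2`. Only the new system-prompt layer is stored; the weights layer is shared with the base model.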
@nevokrien95 · 4 months ago
For Linux, you can just Ctrl-C out of ollama serve, and that's good enough to free the GPU memory.

@technovangelist · 4 months ago
Yes. Do that and wait 5 minutes.

@nevokrien95 · 4 months ago
@technovangelist Doesn't it force the GPU to clear? On my machine at least it seems to, because when you kill the ollama parent process the OS clears the GPU memory automatically.

@technovangelist · 4 months ago
Ah, sorry - you are running ollama serve at the CLI rather than the recommended way of running it as a service. Got it. I think most use it the normal way, and that's what I cover in the video.
@fuba44 · 4 months ago
If you "export OLLAMA_HOST=YOUR_IP_HERE" to serve Ollama on another IP, like inside your tailnet as I do, the Ollama instance you get cannot use the models downloaded by the "normal" "ollama run" instance. You have to download every model again and manage them in two (or more) places. Is that a bug, or is there a reason for doing it like that? Also, programs like AutoGen Studio don't set "num_ctx", so it defaults to 2048. Is there a way to set that on a per-model basis, like just setting it for "mistral:7b"? I guess I could create a model file, maybe? But then I have to do it for each model I have or get. It would be so much easier if it just pulled that from the ollama site as a model-specific default value. ALSO, love your videos - just subbed!

@technovangelist · 4 months ago
Yes, if you set up environment variables the wrong way, as you showed, then it's a different user accessing them, so the models aren't where they are expected to be. I have a couple of videos on here that show how to set environment variables for Mac, Linux, and Windows. When you set them properly you won't have any issues.
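For reference, the service-level way to set this on Linux looks roughly like the following (a sketch; check Ollama's documentation for your platform). Run `systemctl edit ollama.service` and add an override so the variable applies to the service user rather than your shell:

```
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```

Then reload and restart with `systemctl daemon-reload` and `systemctl restart ollama`. Because the variable is set on the service itself, the service keeps using its own model directory and nothing gets downloaded twice.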
@technovangelist · 4 months ago
For the num_ctx question, that's where a Modelfile comes in. You can set that parameter and get the larger context, but be warned: a large context takes a huge amount of memory.
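A sketch of such a Modelfile (the base model, context size, and name below are just example values):

```
FROM mistral:7b
PARAMETER num_ctx 8192
```

Build it with `ollama create mistral-8k -f Modelfile`; requests against mistral-8k then default to the 8192-token context, at the cost of considerably more memory.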
@fuba44 · 4 months ago
@technovangelist Thank you, I will go find those videos.

@technovangelist · 4 months ago
Actually, you are right. I should do another that shows that scenario a bit more. Thanks.

@sarthaksarangi1 · 4 months ago
@technovangelist Can you please provide more context on how to do it, for the num_ctx question?

@sarthaksarangi1 · 4 months ago
Can you help us out with a detailed video on how to add a new layer to an Ollama model? 6:05

@technovangelist · 4 months ago
Are you asking about doing the fine-tuning, or adding the fine-tune adapter to Ollama?

@sarthaksarangi1 · 4 months ago
@technovangelist I meant doing the fine-tuning.

@sarthaksarangi1 · 4 months ago
@technovangelist And I'll also be very thankful if you provide more context on adding the fine-tune adapter to Ollama. (I'm new to all these concepts; appreciate your advice 😃🫡)

@sarthaksarangi1 · 3 months ago
Waiting on your knowledge 😉😀 @technovangelist
@tuurblaffe · 4 months ago
lol at surfin' the waves xD

@tiredofeverythingnew · 4 months ago
Brilliant, thanks, Matt, but you do look a bit thirsty after that video; better keep your bottle closer.

@technovangelist · 4 months ago
That water bottle lights up when it's time to drink, and I still forget.

@synaestesia-bg3ew · 4 months ago
@technovangelist I have two good suggestions for new videos: 1) Show a way to use the Ollama-downloaded models from both Windows and Linux (WSL2) on the same machine, avoiding downloading the same models twice. 2) What is the best hardware spec for using Ollama on a fast and reliable flash drive over USB 3.2?

@technovangelist · 4 months ago
I'm unable to do the first. I don't have a Windows machine... just 6 Macs in this house, plus a few Proxmox machines mostly running Kubernetes. And since I can't get WSL on any cloud instances, I can't get it working. If you're asking me for the best hardware spec to use Ollama... well, based on what I said above, it's hard to beat a Mac Studio.
@nathank5140 · 4 months ago
You should cover whether it's running in a container runtime. I think it is.

@technovangelist · 4 months ago
If what's in a container runtime? Ollama? No, Ollama by default does not use Docker. That said, some choose to run it in Docker, and members of the Ollama team created Docker Desktop and Docker Hub when they worked at Docker.

@nathank5140 · 4 months ago
@technovangelist Then I'm curious how the NVIDIA drivers work, and how they are isolated and don't conflict with my system ones.

@technovangelist · 4 months ago
It is using the system ones.

@santoshshetty6 · 3 months ago
Ollama doesn't support parallel processing, so it can't be used for commercial purposes.

@technovangelist · 3 months ago
Those are two completely unrelated statements.

@technovangelist · 3 months ago
A well-architected solution will get you everything you want. But plenty are using it as-is today for commercial use.
@TalkingWithBots · 4 months ago
☀🌊🏄‍♀

@ikjb8561 · 3 months ago
The issue with Ollama is that it does not scale beyond the user using it. Requests get queued.

@technovangelist · 3 months ago
Not everyone has to like it. You don't like that it does what it's designed to do: serve one user really, really well, rather than slowing it down for everyone.

@HUEHUEUHEPony · 4 months ago
How does it not run on Windows?

@technovangelist · 4 months ago
Huh? It does run on Windows. Last week they released the native app, and for months before that it worked with WSL.

@ShanyGolan · 4 months ago
Problem is that even Mixtral sucks compared to GPT-4.

@technovangelist · 4 months ago
There are some things that GPT does better and others that open source models do better. Even some of the smallest models are on par with what ChatGPT offers but far faster, which is a pretty big deal. Now, if you just look at benchmarks, it might look like ChatGPT wins everywhere, but I haven't seen a benchmark that reflects reality yet.
@patriot0971 · 10 days ago
I am just sick of all the AI applications explicitly using CUDA drivers. Anyone with AMD and Intel GPUs is out of luck.

@technovangelist · 10 days ago
So you aren't using Ollama? It has support for AMD, and Intel Arc support is trickling in now.

@patriot0971 · 9 days ago
@technovangelist I am using Ollama, but on my M2 Mac mini. I would love to use it on my gaming rig, which has 64GB RAM and a dedicated Intel Arc GPU.

@technovangelist · 9 days ago
Intel Arc is being added... not sure if you need to do anything special to enable it, but the last couple of versions have had it. Nothing in the release notes yet.

@technovangelist · 9 days ago
I was really surprised to see this coming, because Intel has never made it easy to build for their GPUs, compared to NVIDIA or Apple.

@patriot0971 · 9 days ago
@technovangelist Thank you for the update. I just updated Ollama and retested on my gaming rig, and same result: it just uses the AMD CPU (slow) instead of the Intel Arc GPU. I will keep an eye on the release notes.

@timmygilbert4102 · 4 months ago
Insert a Palworld joke here

@technovangelist · 4 months ago
Hmm, never heard of Palworld, and looking it up I don't see the connection. Can you clue me in?