Getting Started on Ollama

63,460 views

Matt Williams

1 day ago

Join Matt Williams, a founding member of the Ollama team, as he guides you through a comprehensive journey from installation to advanced usage of Ollama - a powerful tool for running AI models locally on your machine.
In this fast-paced, information-packed video, you'll learn:
Hardware requirements for Mac, Windows, and Linux
Step-by-step installation process
How to download and run AI models
Creating custom models with specific instructions
Tips for managing model storage and performance
Whether you're a beginner or an experienced user, this video will equip you with the knowledge to leverage Ollama's capabilities on your local machine. Discover how to interact with various models, understand their differences, and even create your own specialized AI assistants.
Perfect for developers, AI enthusiasts, or anyone interested in running powerful language models without relying on cloud services. Get ready to unlock the full potential of local AI with Ollama!
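The flow above maps to a handful of CLI commands — install, then `ollama pull mistral` to download a model and `ollama run mistral` to chat — and the same local server also exposes an HTTP API on port 11434. A minimal sketch of calling it from Python (the `mistral` model name is just an example):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default address of the local Ollama server

def build_generate_payload(model: str, prompt: str) -> dict:
    # Non-streaming request body for Ollama's /api/generate endpoint
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # Send the prompt to the local server and return the model's text
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(build_generate_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# With the server running and the model pulled (`ollama pull mistral`):
#   print(generate("mistral", "Why is the sky blue?"))
```

Nothing leaves your machine here: the request goes to the server Ollama runs locally.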
#Ollama #LocalAI #MachineLearning #AITutorial
My Links 🔗
👉🏻 Subscribe (free): / technovangelist
👉🏻 Join and Support: / @technovangelist
👉🏻 Newsletter: technovangelis...
👉🏻 Twitter: / technovangelist
👉🏻 Discord: / discord
👉🏻 Patreon: / technovangelist
👉🏻 Instagram: / technovangelist
👉🏻 Threads: www.threads.ne...
👉🏻 LinkedIn: / technovangelist
👉🏻 All Source Code: github.com/tec...
Want to sponsor this channel? Let me know what your plans are here: technovangelis...

Comments: 143
@milorad9301 8 months ago
Thank you, Matt! Please create more videos like this; they're really clear and simple.
@zerotheory941 8 months ago
If you can make a video about Crew AI explaining it as simply as you did here, you'd be my hero.
@edwardrhodes4403 8 months ago
And also Autogen and other agents like Devika and how to integrate them
@MonsieurGinger 5 months ago
I just did all of this yesterday and still watched your video from start to finish. Very clear and concise. I look forward to your other videos.
@D1s0rdr 6 months ago
I'm so happy there's someone in the mix who actually has a career in AI/dev. Seriously, really enjoying your content. Don't listen to any of these jerks.
@sdaiwepm 6 months ago
Thank you for such a helpful explanation. I wish more tech explainers and presenters were this clear and structured.
@continuouslearner 5 months ago
Would have been good to cover what Ollama is and what problems it solves, for about 30 seconds to a minute, before going into hardware requirements etc.
@bens4446 8 months ago
Thanks! Just downloaded Ollama and was feeling a bit lost. Would really appreciate some guidance on integrating speech recognition and text to speech into the chatbot. But just about anything you say will probably be useful. Please keep 'em coming!
@exxonrcg 4 months ago
You are a good educator. Thanks for this.
@volt5 3 months ago
Thanks, this was crystal clear. I just started trying Llama and your video helps me to orient myself.
@hiltonwong5419 5 months ago
I have watched a few of your videos. I love the way you explain things simply and clearly. Keep it up. Thank you for your work to all of us.
@technovangelist 5 months ago
Thanks so much
@Cube_Box 3 months ago
Absolutely love this channel
@German_dude175 2 months ago
This is the quality content I'm looking for! Great video!
@technovangelist 2 months ago
Glad you enjoy it!
@Filipe9171 5 months ago
This is gold. Thank you, Mr Williams!
@ValentinPletzer 8 months ago
Thanks. I really learned a lot by watching your videos. I recently ran into an issue when writing a new template model for few-shot learning. Most of the time it responds as expected, but sometimes it responds to my prompt and then also inserts its own command by adding [INST] some other prompt … and answers that too. I probably made some mistake but I cannot figure it out. That's why I would love to see you make a video on templates (if this isn't too much to ask).
@jahbini 8 months ago
I second that request!
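On the template question above: stray `[INST]` turns usually mean the template or stop tokens in the Modelfile don't match the chat format the model was trained with. A sketch of a llama2-style template, written out programmatically; the exact template text and stop tokens are assumptions to check against your model:

```python
# Sketch of a llama2-style chat template in a Modelfile.
# The TEMPLATE text and stop tokens below are assumptions; mismatches
# with the model's training format are what produce stray "[INST]" turns.
MODELFILE = '''FROM llama2
TEMPLATE """[INST] {{ if .System }}<<SYS>>{{ .System }}<</SYS>>

{{ end }}{{ .Prompt }} [/INST]"""
PARAMETER stop [INST]
PARAMETER stop [/INST]
'''

def write_modelfile(path: str = "Modelfile.fewshot") -> str:
    # Write the Modelfile; build it afterwards with:
    #   ollama create my-llama2 -f Modelfile.fewshot
    with open(path, "w") as f:
        f.write(MODELFILE)
    return path
```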
@incrastic6437 8 months ago
Excellent introduction. Thanks for the help
@RobCowie 6 months ago
Does it phone home at all? Assuming the machine is connected to the Internet, is the model I use locally shared publicly at all, and is it secure?
@technovangelist 6 months ago
It doesn’t reach out anywhere unless you write a program to have it do something like that.
@talktoeric 5 months ago
I really like this channel! The presentation is great and understandable. It is easy to follow along. Thanks.
@enmingwang6332 5 months ago
What a great tutorial, clear, concise and informative!!!
@technovangelist 5 months ago
Glad you liked it.
@sebingtoon 8 months ago
Hi Matt, do you know what determines the length of a model's answer? How does the model 'know' when to stop? Is it hard coded into the model or is it controlled by Ollama? Thanks
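Roughly: a model stops when it emits its end-of-sequence token, which is learned behavior baked into the weights, and Ollama additionally honors per-request options such as `num_predict` (a hard token cap) and `stop` (extra stop sequences). A sketch of those options in an `/api/generate` request body; the values are illustrative:

```python
def build_options_payload(model: str, prompt: str) -> dict:
    # Request body with options that influence when generation stops
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {
            "num_predict": 128,   # cap on generated tokens (-1 means no limit)
            "stop": ["\n\n"],     # stop as soon as a blank line is produced
        },
    }

payload = build_options_payload("mistral", "List three facts about llamas.")
```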
@samsquamsh78 8 months ago
I like your videos, always spot on and pedagogical! Why did you leave ollama?
@technovangelist 8 months ago
If we find ourselves in the same room I’ll talk about it there.
@mister-ace 4 months ago
Thank you for the video! I'm very interested in how to conduct sentiment analysis of data. For example, I have 100 texts and I would like to know their sentiment. (At the same time, BERT etc. did not work for me, because they are only for English, and for other languages the alternatives are very bad.) What could you recommend (as if I were 5 years old, lol)?
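One hedged way to do the sentiment task with a local model: prompt it once per text with a fixed label set, then normalize whatever comes back. The prompt wording and labels below are illustrative, and a multilingual model would be needed for non-English texts:

```python
PROMPT_TEMPLATE = (
    "Classify the sentiment of the following text as exactly one word: "
    "positive, negative, or neutral.\n\nText: {text}\nSentiment:"
)

def build_prompt(text: str) -> str:
    # Fill the classification prompt for one input text
    return PROMPT_TEMPLATE.format(text=text)

def parse_sentiment(raw: str) -> str:
    # Normalize a free-form model reply to one of three labels
    raw = raw.strip().lower()
    for label in ("positive", "negative", "neutral"):
        if label in raw:
            return label
    return "unknown"

# For each of the 100 texts, send build_prompt(text) to the local model
# (e.g. via Ollama's /api/generate) and run parse_sentiment on the reply.
```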
@cankobebryant 3 months ago
Thanks, great video!
@nicosilva4750 8 months ago
Do the models return Markdown, like lists? Extended Markdown, like tables and LaTeX? I have written my own desktop client that I put on all our machines to use OpenAI and their API (cheaper than $20 * 5/month). So I would like to have a network server for my home to run a local model. Can I set it up there and have everyone use it, or would there be performance issues? What about simultaneous usage?
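On the home-server part of the question: by default the server binds to localhost:11434, and starting it with the `OLLAMA_HOST` environment variable set (for example `OLLAMA_HOST=0.0.0.0:11434 ollama serve`) exposes it on the LAN; simultaneous requests are queued rather than failing. A client-side sketch, where the LAN address is hypothetical:

```python
import os

# Each machine in the house points its client at the shared server
# instead of localhost; "192.168.1.50" is a hypothetical LAN address.
def ollama_base_url() -> str:
    return os.environ.get("OLLAMA_SERVER", "http://192.168.1.50:11434")

generate_endpoint = ollama_base_url() + "/api/generate"
```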
@emil8367 8 months ago
Thanks for sharing. Prune is something I had missed, but it's very useful: downloading large files and losing them after each restart was very annoying. I see Ollama didn't document it well, or maybe I overlooked it.
@SergiySev 8 months ago
Great video, thank you for the Ollama introduction! Is there a way to add my own data to the model, or shrink the model to a particular topic? For example TailwindCSS: there is the source code, docs, and library of the project. Is there any way to train the model to be able to generate layouts and components based on the provided data?
@fastmamajama 4 months ago
Wow, thanks. I am trying to use AI to capture UFOs. I've been using OpenCV to detect saucers, tic-tacs, and orbs. I wonder if I could use Ollama to detect UFOs in videos that OpenCV records, or make the process more effective.
@dirtydevotee 27 days ago
An excellent video. Thank you!
@liammcmullen4497 8 months ago
Great overview, Matt, you're a star!
@wirreswuergen 6 months ago
Thank you, Matt! Your videos are awesome and already helped me a lot :)
@NoHack_Know_How 4 months ago
Question: I have a mining board, an "ASUS B250 MINING EXPERT LGA1151 DDR4". Do you think I can use that to host Ollama?
@technovangelist 4 months ago
Depends on the gpus.
@technovangelist 4 months ago
But what is the gpu that you have
@NoHack_Know_How 4 months ago
@@technovangelist I have 8 video cards at the moment; not sure how many CUDA cores yet.
@technovangelist 4 months ago
What gpu do you have? If you have 8 old gpus that aren’t supported it won’t help. That’s why I keep asking what gpu you have.
@NoHack_Know_How 4 months ago
@@technovangelist Hey, sorry for the delay, I was actually getting that information; they are all Radeon RX 580s, and from what I read they aren't supported.
@ftlbaby 7 months ago
Thanks for this! I just set up Ollama with wizard-vicuna-uncensored:30b-q8_0. Do you know what's different in the fp16 models?
@ec_gadgets 8 months ago
You explained it perfectly, thank you
@technovangelist 8 months ago
Glad it was helpful!
@nholmes86 7 months ago
I successfully ran Ollama with Llama 3 on a Mac with an M1 and 8 GB; it runs better when you close other apps.
@harumambaru 5 months ago
Thanks for explanation! Love the outro :)
@YotamGuttman 5 months ago
fascinating. thank you for these videos!
@sci1200 2 months ago
Thank you, Matt! BTW, I saw you're drinking at the end of video😄
@vicnent75 6 months ago
thank you for you job Matt.
@Leon-AlexisSauer 2 months ago
Yo, as far as I understand, Ollama is not an application, right? Or is there a way to get it like that? I am new to this.
@technovangelist 2 months ago
Not sure I understand. Ollama is an application to run ai models
@JoaoKruschewsky 7 months ago
Hello from Brazil. I really liked your content ! thanks
@alexsnow2993 7 months ago
Hello! My video card is an RX580. Is there a way to make it work?
@alexsnow2993 7 months ago
Using the RX 580, will it be slow, or not work at all?
@technovangelist 7 months ago
I don’t see it on the compatibility list. github.com/ollama/ollama/blob/main/docs/gpu.md
@technovangelist 7 months ago
Just won’t work at all. I think ollama requires the newer amd drivers and amd didn’t make it backwards compatible with older cards.
@alexsnow2993 7 months ago
Thanks for the info! I can't get another video card at the moment, and using the CPU is a no-go. Is there any version, or any other AI out there, that can be configured locally?
@technovangelist 7 months ago
Everything I know of is going to need a decent recent gpu.
@mshonle 8 months ago
Here’s a video request: can you do one on LMSys’s SGLang? Particularly using constrained decoding?
@sanzzrulezz 4 months ago
Hi Matt, after installation, Ollama is not opening on my Mac. Any tips? Thank you.
@technovangelist 4 months ago
So when you run ollama in the terminal nothing happens?
@sanzzrulezz 4 months ago
​@@technovangelist Hi Matt, I'm new to this terminal and coding, but I love to learn how to execute this. Currently, I have installed Ollama and downloaded your Mac zip file from GitHub, but I'm not sure how to run this and get it working as software to rename images. Can you guide me on this? It would be really helpful. Thank you.
@sanzzrulezz 4 months ago
@@technovangelist I have pulled llava:13b in Terminal. However, I don't understand how to run the macOS x86 file from GitHub, which is renamed as airenamer, in Terminal.
@juanjesusligero391 8 months ago
Thank you so much for your tutorials! :D I would like to suggest an idea for a future video that I would be really interested in watching: a more detailed exploration of the various models (such as the instruct/base/etc. ones you've mentioned). Again, thank you very much! You rock! ^^/
@Delchursing 8 months ago
Great video. The costs are a bit unclear to me. Would a local ollama/llm be free to use?
@technovangelist 8 months ago
What costs? You have to own a computer with a gpu. That’s it
@piotrnakonieczka1111 1 month ago
Hi, I like your channel 😊 Could you touch on the topic of how to use Ollama with two (or more) GPUs? Option one: use them to speed up compute. Option two: use the VRAM of both GPUs to be able to load a larger model. Thanks in advance 😊
@abhijeetkumar8044 8 months ago
Please create videos on how to fine tune these models 🙏
@thepassionatecoder5404 8 months ago
Do I need to know math, statistics, etc., apart from programming?
@AwesomeCanadianHomes 8 months ago
I have a feeling Duncan Trussell is a subscriber : )
@blackwinegum 6 months ago
I just don't get any sort of CLI when I install Ollama; the app just shows "view logs" and "Quit Ollama".
@technovangelist 6 months ago
so when you run ollama at the command line you don't see anything?
@blackwinegum 6 months ago
@@technovangelist I think I've figured it out; I think my firewall was blocking something. Thanks for replying.
@richardurwin 6 months ago
Thank you for the video
@PoGGiE06 8 months ago
Thanks Matt, why does everyone use Mistral rather than Mixtral?
@technovangelist 8 months ago
Too slow
@bens4446 8 months ago
FYI: my llama2 install is working reasonably fast without a GPU, just a Ryzen 5600G CPU, which has some rudimentary graphics capability built into it.
@Thymed 5 months ago
More than rudimentary, but still, I get your point: much less than modern GPUs or recent APUs.
@flexchamp 5 months ago
10 out of 10!
@hotbird3 6 months ago
You're a very smart person 👍👊
@makesnosense6304 8 months ago
9:40 To have the same result you just need the same input, seed (and other parameters), no? The reason it's different every time is that the seed is random for every request, right? The seed used (and the other parameters) make the result different because generation takes a different path through the model weights.
@technovangelist 8 months ago
Using the same seed and temp doesn’t always guarantee the same result
@makesnosense6304 8 months ago
@@technovangelist Ah, because temp is a percentage randomness scale of sort.
@makesnosense6304 8 months ago
@@technovangelist What if temp is 0? Or 1?
@technovangelist 8 months ago
It’s not guaranteed
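For reference, the knobs under discussion are request-level options in Ollama's API; as noted above, pinning seed and temperature makes runs more repeatable but still doesn't guarantee identical output:

```python
def build_deterministic_payload(model: str, prompt: str) -> dict:
    # Pin the sampling seed and zero the temperature; this makes runs
    # *more* repeatable, but identical output is still not guaranteed.
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"seed": 42, "temperature": 0},
    }
```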
@sepulchral. 4 months ago
Does anyone know where the models save to on Windows?
@technovangelist 4 months ago
The Windows docs have the answer to this and other questions: github.com/ollama/ollama/blob/main/docs/windows.md
@sepulchral. 4 months ago
@@technovangelist explorer %HOMEPATH%\.ollama - thanks.
@mrrohitjadhav470 8 months ago
It would have been great to know how to install other models not mentioned in the Ollama library, with specific types like low-VRAM and GGUF.
@technovangelist 8 months ago
check out kzbin.info/www/bejne/ZqDYZmSiYrJ_edE
@mrrohitjadhav470 8 months ago
@@technovangelist Thanks a lot❤
@qewolf 6 months ago
Very cool, thank you 🙏
@axeljohannes3464 7 months ago
Wait, do I need to download anything or not? You say "Now the model should be downloaded, so you can run it with ollama run mistral". Why would it be downloaded? I just installed the Ollama software. Does it download all the models automatically? This seems very unclear.
@technovangelist 7 months ago
I think you must have skipped around a bit. I very clearly said to install and then run ollama pull to download the model. Then while downloading talked about what’s going on. Then the model is downloaded and you can run it. When you downloaded the model you only downloaded that model. Why download anything? Because you want to run it.
@axeljohannes3464 7 months ago
@@technovangelist Thanks! I got it to work.
@axeljohannes3464 7 months ago
I think what confused me was the term "pull" and what that actually meant. So when you got to the point of speaking about downloading, I was like "Hey, no one said anything about downloading anything".
@Drkayb 8 months ago
Excellent summary, thanks a lot.
@tecnopadre 8 months ago
Sometimes your level is too high, other times too simple. Cheers.
@userou-ig1ze 8 months ago
I wish there was an easier way to fill in template text and parse PDFs. I've seen the 'function calling' videos, but somehow it's still eluding me how to get this done as easily as possible (e.g. sending a PDF over the API in a curl request from another machine, and renaming it sensibly according to content).
@technovangelist 8 months ago
The biggest problem there is the pdf. You can’t easily get to the contents of the pdf. The text. It’s often jumbled up. PDF is the worst format you can use if you want the text and to do something with it. That’s also one of the benefits of pdf. It obfuscates the source text so folks can’t do anything with the text.
@userou-ig1ze 8 months ago
@@technovangelist Thanks for the reply! I used pdf2text but it was not exactly perfectly successful. I wonder how Ollama frontends (e.g. webgui or webui) solve this for their RAG? Gives me hope that there is a good way of doing it 🎉
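A hedged sketch of the rename-by-content idea discussed in this thread: extract whatever text the PDF gives up, ask a local model for a short title, and sanitize that into a file name. The extraction line assumes the third-party `pypdf` package, and `generate()` stands in for any helper that posts to Ollama's `/api/generate` (not shown):

```python
import re

def sanitize_filename(name: str, max_len: int = 60) -> str:
    # Turn a model-suggested title into a safe, readable file name
    name = re.sub(r"[^\w\s-]", "", name).strip().lower()
    name = re.sub(r"[\s_-]+", "-", name)
    return name[:max_len] or "untitled"

# Sketch of the full flow (pypdf and a running Ollama server assumed;
# generate() is a hypothetical helper that posts to /api/generate):
#   text = "".join(p.extract_text() or "" for p in pypdf.PdfReader(path).pages)
#   title = generate("mistral", f"Suggest a short title for:\n{text[:2000]}")
#   os.rename(path, sanitize_filename(title) + ".pdf")
```

As Matt notes, extraction quality is the weak link: PDFs often scramble their text layer, so any model step downstream inherits that noise.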
@anshulsingh8326 6 months ago
Subbed ❤️ If only you taught maths too.
@Mike-vj8do 7 months ago
AMAZING!
@sergey_a 8 months ago
Thanks for the informative video. Some examples should be displayed in the video, rather than spoken; for example, showing how to use environment variables.
@technovangelist 8 months ago
There are a number of videos pointed out throughout the video that provide all the examples
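To make the environment-variable point concrete: the Ollama server reads its settings from the environment of the `ollama serve` process — for example `OLLAMA_HOST` (bind address), `OLLAMA_MODELS` (model storage directory), and `OLLAMA_KEEP_ALIVE` (how long a model stays loaded). A sketch with a hypothetical storage path:

```python
import os
import subprocess

# Common Ollama server environment variables:
#   OLLAMA_HOST       - address to bind, e.g. "0.0.0.0:11434"
#   OLLAMA_MODELS     - directory where downloaded models are stored
#   OLLAMA_KEEP_ALIVE - how long a model stays loaded, e.g. "10m"
env = dict(os.environ, OLLAMA_MODELS="/data/ollama-models")  # hypothetical path

def serve_with_custom_storage() -> None:
    # Requires Ollama installed; the server inherits the variables above
    subprocess.run(["ollama", "serve"], env=env, check=True)
```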
@briann1233 7 months ago
Can Ollama be used in production on a Linux server?
@technovangelist 7 months ago
absolutely. lots of folks are doing just that.
@briann1233 7 months ago
@@technovangelist Wow, that is amazing. I would appreciate it if you could provide documents that would guide me in deploying a model for production use and "function calling". Ollama is an excellent tool for a startup to keep costs down and avoid OpenAI usage costs.
@briann1233 7 months ago
It also gives us flexibility to keep our data in-house.
@briann1233 7 months ago
@@technovangelist This is awesome! Do you have any references for deploying successfully into production? We are trying to avoid OpenAI and are looking for open-source AI models with function calling.
@technovangelist 7 months ago
All the docs are in the GitHub repo, but it's a pretty simple app without many dependencies. I don't know of any guidance though.
@user-wr4yl7tx3w 8 months ago
Great content
@JeppeGybergyoutube 7 months ago
Nice video
@stebansb 6 months ago
Great content; a Telegram group would be great!
@technovangelist 6 months ago
Telegram??? I think I used it once at an Idan Raichel concert but never since. What’s special about a telegram group?
@stebansb 6 months ago
@@technovangelist Telegram, I feel, is simpler, with a cleaner user interface, yet very powerful; it's popular with business and a slightly more mature cohort. The other option is Discord: slower, more complex, and popular among gamers. Either way, it would be cool to have something to build a community beyond YouTube.
@Andres-m2u 8 months ago
Ollama runs very well on a M3 Max
@jovwvy 1 month ago
Very helpful.
@thiagoassisfernandes 8 months ago
Arch and Nix are systemd distros.
@technovangelist 8 months ago
Doh!
@viniciussilvano4177 5 months ago
Please add compatibility for more GPUs. I have an RX 580; my processor is crying, hehe.
@technovangelist 5 months ago
That’s a request for AMD to add support to those older lower end cards I think.
@viniciussilvano4177 5 months ago
@@technovangelist Is there any way I can use a library that allows me to do this, or is it actually something that depends on AMD? I'm really impressed with what using Ollama as an API has added to my projects. I would like to find some way to speed up processing without having to spend money, at least for now.
@technovangelist 5 months ago
But amd support requires a certain level of the drivers which amd only has working for newer cards. I think the only option is to buy a more recent card. The 580 is 5 years old.
@MrI8igmac 1 month ago
I could make the console work if it displayed code with syntax color highlighting.
@technovangelist 1 month ago
if highlighting is holding you back, you have other issues...
@MrI8igmac 1 month ago
@@technovangelist \033[91m definitely
@K600K300 8 months ago
thank you
@dmbrv 8 months ago
thanks
@robert_kotula 7 months ago
Booted up Ollama with the llama2 model and my M1 MBP just froze 💀
@technovangelist 7 months ago
That is bizarre…as in you would be the first person that has happened to. Running macOS I assume. Installed using the installer? What else was running? So you installed and then opened a terminal and run ollama run llama2 and then nothing? Probably easiest to solve on the discord.
@robert_kotula 7 months ago
@@technovangelist I'll join the Discord channel and try to troubleshoot. I had a couple of tabs open in Safari and one tab in Firefox Developer Edition, nothing else. Will need to dig into the performance stats on the laptop.
@mcawesome4150 8 months ago
you should have more views and subscribers
@technovangelist 8 months ago
Thanks. Both are accelerating quickly. But feel free to share. I like to say I am working on my first million subscribers. Only 985,000 short.
@florentflote 8 months ago
@jyashi1 8 months ago
First
@technovangelist 8 months ago
First what.
@technovangelist 8 months ago
About 6 hours late to be first comment
@lucasmuso 4 months ago
Bro did not just use FAQ as a whole-assed word.
@technovangelist 4 months ago
Ahh you must be new to this technology world. Welcome.