Writing Better Code with Ollama

40,168 views

Matt Williams

4 months ago

Copilot changed everything for developers around the world. Then they started charging for it. And it won't work offline. And there is the security and privacy thing. Well, you can have the functionality of Copilot without all the headaches.
Be sure to sign up for my monthly newsletter at technovangelist.com/newsletter
And if you're interested in supporting me, sign up for my Patreon at / technovangelist

Comments: 107
@wadejohnson4542 4 months ago
Hi Matt. I love Ollama and you just make me love it even more. I look forward to your videos. They are always concise, informative and intelligent. Thank you for your work.
@technovangelist 4 months ago
Thanks so much for saying that. Let me know if there are any topics you would like to see.
@davidjflorezr3099 13 days ago
I was already appreciating you when you explained how to code with Llama, and you had already said you were into building cool things, but when you made the joke about being stuck coding rather than having to put up with the nature of Puget Sound, I subscribed.
@thegrumpydeveloper 4 months ago
Great content! So much here and didn’t even feel rushed in the short amount of time to cover all this.
@dngoldn 4 months ago
Great video. I've found those two extensions to be the best as well. The small, fast model for the autocomplete; the bigger, better model for Continue. DeepSeek for both, but I haven't tried CodeLlama. Complete game changer for offline coding!
@moonfan2493 4 months ago
Really good stuff Matt. I'm excited about the Ollama python stuff. Good to know we've got some coding support as you highlighted. Cheers.
@technovangelist 4 months ago
Thanks so much for the comment. Let me know if you have any ideas for content to cover on here.
@tiredofeverythingnew 4 months ago
Matt, you are spoiling us with the number of uploads. Keep it up, but remember to rest; it's the weekend.
@technovangelist 4 months ago
I wish I could go faster.
@technovangelist 4 months ago
I’m not planning to slow down for a few months
@nofoobar 4 months ago
There is a tremendous amount of work the Ollama team is doing.
@technovangelist 4 months ago
Thanks a lot 😊
@roz1 4 months ago
I'm subscribed. Really awesome, high-quality video with a good demo.
@dr.mikeybee 4 months ago
Terrific. Thanks for contributing so much to the community!
@technovangelist 4 months ago
Thanks. Making these is a lot of fun
@bahmanasheghi 1 month ago
Hi Matt, Ollama is running slower than LM Studio with full GPU offload. Is there a way to configure this with Ollama as well? Thanks for the great content.
@hmdz150 4 months ago
I tested the pre-release version of the Continue extension for VS Code with Ollama and set DeepSeek as the model. Amazing! I can't believe I can use such a powerful AI autocomplete in VS Code for free… For free! And it works so well.
@jdray 4 months ago
Great stuff, Matt. I'm going to try this on my machine. So far I've been using ChatGPT 4 as my coding companion because my results with CodeLlama (running in Ollama) haven't produced code as good as what comes out of ChatGPT, and it's slower on my 16GB M1 MBP. However, I'd like to play around with different models and see how they do. Cheers.
@technovangelist 4 months ago
Using CodeLlama at the CLI isn't always convenient. But having Llama Coder make suggestions for me is pretty great.
@avramgrossman6084 1 month ago
Matt: great video and very timely. I'm tasked with writing a web dashboard to talk with Ollama and the Llama 3 LLM. I have it downloaded on Windows and talking to me. I also downloaded it under Ubuntu, and it's working there too. Using VS Code, I installed the Llama Coder addon and I see it in the lower right. However, when I write import ollama from "ollama" I get the message "Cannot find module 'ollama' or its corresponding type declarations." Getting the environment and tools for development set up and working is probably a bigger obstacle than learning how to use the API. Any advice on the missing module? Thanks
@technovangelist 1 month ago
Sounds unrelated to the plugin. It just means you haven't added the ollama package to your project's environment.
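If it helps, here is a minimal sketch of the happy path, assuming the official ollama npm package and a model you have already pulled (the model name below is just an example):

npm install ollama

// main.mjs - run with "node main.mjs"; an ES module context allows top-level await
import ollama from 'ollama'

// talks to the local Ollama service on its default port (11434)
const response = await ollama.chat({
  model: 'llama2', // swap in whatever model you pulled
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)

Once the import resolves, the "Cannot find module" error goes away; the rest is just having the service running.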
@WillWillWill 4 months ago
Great video Matt! After an Ollama upgrade, I had the Continue integration working without issue. What config is required for code suggestion/completion? And would the process be different for Python code completion (as opposed to the TS/JS you demonstrated)?
@technovangelist 4 months ago
I don't think there is any difference with Python or Rust or C# or anything else. But Continue isn't for tab completion, only the chat interface… I think.
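For the chat side, pointing Continue at Ollama is just a models entry in its config.json. Something like the following worked for me, but the extension evolves quickly, so treat the exact schema as an assumption and check the Continue docs:

{
  "models": [
    {
      "title": "DeepSeek Coder via Ollama",
      "provider": "ollama",
      "model": "deepseek-coder:6.7b"
    }
  ]
}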
@kaimildner2153 3 months ago
Hi. I'm currently playing around with Ollama in VS Code, so I have a question. What is the Llama Coder extension for? I installed it but couldn't figure out what it does or how to use it. Maybe I configured something wrong? The documentation also isn't clear on how to properly use it, so now I don't know if I'm doing something wrong 😢
@LeetLife 4 months ago
Love it! :D It's like I'm on the island with you, man! I live in Sammamish.
@technovangelist 4 months ago
Nice. I used to work over there when I was at Microsoft. I was a sales engineer and my sales guy was based there. That’s when I was living in Bellevue.
@JohnMitchellCalif 4 months ago
Intriguing and useful! Subscribed.
@technovangelist 4 months ago
Awesome. If you have anything you would like to see in the future, let me know.
@mercadolibreventas 3 months ago
Can you make a video on eGPU settings and getting Ollama to use one as a preferred setup?
@TimothyGraupmann 4 months ago
I'd love to hear any recommendations for AI Coders that can make UI mockups and iterate on UI mockups.
@technovangelist 4 months ago
That would be interesting
@ischmitty 4 months ago
Greetings from Victoria!
@technovangelist 4 months ago
Hello there! Victoria is beautiful. We were there a year ago or so to attend a wedding at .... Hatley Castle in Colwood. Stayed at the Empress and it was amazing... always wanted to stay there after going to the Indian buffet that has been closed for years.
@maximood-tired 4 months ago
Hey, cool video! Could you maybe do a video about Mixtral 8x7B?
@c0t1 4 months ago
Thank you, Matt! I've been looking for an extension just like this. I looked at Cody, but it uses LM Studio to interface with the LLM, and I haven't messed with LM Studio yet (Linux guy, and that product is a version behind and in beta for Linux.)
@technovangelist 4 months ago
Actually it works with Ollama too, and has since about 2 weeks after we started building Ollama.
@c0t1 4 months ago
@technovangelist Thanks! I didn't know.
@c0t1 4 months ago
Perhaps a video on hooking up Cody with Ollama would be popular. I'd sure be interested.
@technovangelist 4 months ago
Yup. That’s definitely one to do. Thanks.
@brunoais 4 months ago
2:50: Where can I find it for VSCodium?
@mayorc 4 months ago
Cody is not local; if I remember correctly, it has a free plan with a very limited number of requests per month to the endpoint.
@technovangelist 4 months ago
Yeah. Quinn mentioned that they still tokenize and cache on their services even if the model is local.
@bermuda6877 4 months ago
That's fantastic! I'm curious about availability: which languages can it autocomplete?
@technovangelist 4 months ago
I'm not sure what the full list is. It's most of the popular languages.
@bermuda6877 4 months ago
@technovangelist Golang?
@technovangelist 4 months ago
Golang is definitely among the most popular. I would expect to see Go, Java, JS, TS, Python, and Rust. I plan to do a video on this.
@bermuda6877 4 months ago
@technovangelist Looking forward to it :)
@spite07 4 months ago
Great content! Which font is that in VS Code?
@technovangelist 4 months ago
Hmmm, I think I set it to JetBrains Mono ages ago.
@yagoa 4 months ago
How do I do it if Ollama is on my LAN?
@NuncNuncNuncNunc 3 months ago
I'd love to see real metrics: speed improvement, code quality, etc. Also, am I using up 4 GB for each application/plugin that uses the same model?
@technovangelist 3 months ago
Haven't seen any metrics that accurately reflect stuff like that. Can you point to anything? I don't know about other tools, but if they use Ollama, then all the tools share the memory.
@NuncNuncNuncNunc 3 months ago
@technovangelist I meant disk storage for each application's copy of the model data. E.g., I just downloaded the codellama params from Meta. Will a VS Code plugin use those parameters or download another set? If I have two plugins, will each have its own copy of the model parameters? Regarding sharing memory, I don't understand how two applications could share memory unless there were something like a model server protocol, where a server loaded the models and the applications communicated with that server.
@technovangelist 3 months ago
Both apps are making a connection to Ollama, and Ollama is actually running the model, so disk and memory are shared.
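You can see this from the outside: every plugin is just talking to the same local HTTP server (port 11434 by default). A quick sketch, assuming a default install:

curl http://localhost:11434/api/tags
(lists the models Ollama has on disk; every client sees the same list)

curl http://localhost:11434/api/generate -d '{"model": "codellama", "prompt": "write hello world in go"}'
(any app generating with codellama hits the same copy loaded in memory)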
@NuncNuncNuncNunc 3 months ago
@technovangelist Thanks. I've finally got a couple of models downloaded. Stuck getting the torchrun example in the README to work. Will look for a setup tutorial. Thanks again.
@technovangelist 3 months ago
Torchrun?? What's that? Whose README are you talking about? Sounds like a Python thing. Ollama doesn't use that, apart from in its Python library.
@stinkymccheese8010 3 days ago
It would be interesting to see what AI and machine learning could come up with as far as Neuro-training and neuro-therapy protocols.
@technovangelist 3 days ago
Can you tell me more about that? What does that mean?
@stinkymccheese8010 3 days ago
@technovangelist It's a behaviorist approach to mental health. Basically, a psychoanalyst does an initial assessment, then a technician uses neurofeedback equipment to generate a baseline of the client's brain function. The psychoanalyst then uses the baseline, combined with the initial consult, to identify issues and develop a course of treatment, which involves using the neurofeedback to interact with a program that helps restore the problem areas of the brain to some degree of normality. All this is based on the theory that mental illness is a function of neurological dysregulation, which is just a fancy way of saying that, for any number of reasons, the sundry Brodmann areas of the client's brain are out of sync, not communicating properly, and need help getting back into sync.
@technovangelist 3 days ago
Interesting. That's a world I know nothing about. Thanks so much for filling me in.
@stinkymccheese8010 3 days ago
@technovangelist Yeah, after I asked the question I realized it was probably a little too niche. I sometimes forget not everyone has read the same books as me.
@MegaQseft 4 months ago
Before running the JavaScript code, you need to run $ ollama run first, right?
@technovangelist 4 months ago
No, the service is already running. ollama run just starts the command-line UI for interactive use.
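A quick way to check is to hit the service directly; by default it listens on port 11434:

curl http://localhost:11434
(should reply with "Ollama is running")

If that connection is refused, the service isn't up, and running ollama serve in a terminal, or launching the desktop app, will start it.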
@MegaQseft 4 months ago
@technovangelist When I just run the JS code I get this error:
TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11730:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
{ cause: Error: connect ECONNREFUSED 127.0.0.1:11434
@technovangelist 4 months ago
Can you tell me more about what you are running?
@oz5277 1 month ago
"Continue" can't use Ollama hosted on another laptop of mine, while Llama Coder can.
@oz5277 1 month ago
I suppose it can, but I'll need to dig deeper into the settings.
@lancemarchetti8673 4 months ago
Brilliant!
@technovangelist 4 months ago
Thanks for the comment. Glad you enjoyed the video.
@cig_in_mouth3786 4 months ago
I tried Llama Coder and it didn't work; Continue worked. Others are flaky at best. Thanks for the videos, I will look at more extensions and models for my needs. If you have any ideas, that would be great. Question: Llama Coder always said the model was not available, but it is there, and Continue can use it and get responses back.
@technovangelist 4 months ago
Sorry you had problems. Have you asked in the Ollama Discord? discord.gg/ollama
@Vedmalex 4 months ago
Cool! Thank you.
@technovangelist 4 months ago
Thanks for giving it a watch. And for leaving a comment
@Tarbard 4 months ago
Cool. I'll have to look for something similar for PyCharm.
@technovangelist 4 months ago
I feel like Continue worked with JetBrains. Is that who makes PyCharm?
@Tarbard 4 months ago
@technovangelist Yep, thanks. Continue looks good, though it gets bad reviews for PyCharm, so I'll have to see; CodeGPT seems to be another option.
@surajthakkar3420 3 months ago
Llama Coder vs Aider: which is better?
@technovangelist 3 months ago
Better???? Hard to say. For me, personally, I think there is no question that Llama Coder is better. Aider seemed hard to get started with and a bit kludgy, in my opinion. Once they are installed, and just looking at what they do rather than the UI, they are identical. All of the tools in this space are basically identical; the models do the hard work here.
@AjayKumar-nt7lx 4 months ago
How can one contact you for consulting engagements?
@technovangelist 4 months ago
There is probably an email on this account somewhere. Or you can DM me on the Discord: discord.gg/ollama. I don't think I want to do that, but I'm open to a conversation.
@adriansrfr 4 months ago
What are the hardware requirements?
@technovangelist 4 months ago
Any machine that runs Ollama should be fine. Or are you asking what's needed to run Ollama? On Mac it's best on Apple silicon. On Windows or Linux it's best for now with an Nvidia GPU, and soon with AMD GPUs.
@adriansrfr 4 months ago
@technovangelist Yes, I figured it would need a GPU; otherwise, it would be ridiculously slow.
@MelroyvandenBerg 4 months ago
Mac or Windows?? You forgot the Linux shortcut, sir.
@technovangelist 4 months ago
I'm on Mac. Ollama works on Mac and Linux, and for now on Windows with WSL2.
@VaibhavShewale 4 months ago
My system can't handle it and would just crash.
@sanjamkapoor9843 4 months ago
These things require a minimum of 64 GB of RAM to get anywhere near GPT-3.5; below that it's not worth it. They also need a good GPU with around 16 to 24 GB of VRAM, which is damn expensive too. Better to use GPT-3.5 or pay for GPT-4. Bard sucks damn hard at coding and nowhere near gives responses as nice as GPT.
@technovangelist 4 months ago
Most of the models require 16 GB to work well, though some of the more exciting models are great in as little as 4-8 GB. DeepSeek Coder 1.6 is pretty amazing and will definitely run on low-end hardware.
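If you want to try the low end, the small DeepSeek Coder base model is a quick pull. Assuming the tag on the Ollama library hasn't changed, it's along the lines of:

ollama pull deepseek-coder:1.3b-base

That one fits in a few GB of memory and works well for autocomplete.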
@user-uw7st6vn1z 4 months ago
I have already paid for Copilot for 1 year….
@technovangelist 4 months ago
I remember that problem happening a lot with New Relic. Folks would see Datadog and want to switch over because it was multiple orders of magnitude cheaper than anything else on the market, but they had already signed multi-year contracts with New Relic and were stuck.
@sarafarron7844 4 months ago
WTF, why the white theme?
@technovangelist 4 months ago
Because it's better. And easier on the eyes.
@Soniboy84 4 months ago
Windows users cry in the corner.
@technovangelist 4 months ago
Why? Ollama has worked on Windows for months.
@illogicallogic2039 4 months ago
@technovangelist When I visited the Ollama website it said "available for macOS & Linux, Windows coming soon".
@ROKIBULHASANTANZIM 4 months ago
Just use it via WSL2.
@Soniboy84 4 months ago
@technovangelist It says on the download page that Windows is "coming soon". Running this under WSL2 is dog-slow, about 0.5 tokens per second on a $3000 gaming laptop.
@technovangelist 4 months ago
If you don't have a GPU it will be slow. A native Windows app won't be able to go any faster.