Local AI Coding in VS Code: Installing Llama 3 with continue.dev & Ollama

30,886 views

Jan Koch

1 day ago

Want to take your VS Code experience to the next level with AI-powered coding assistance? In this step-by-step tutorial, discover how to supercharge Visual Studio Code with the incredible Llama 3 AI model using the game-changing continue.dev extension and Ollama.
Resources mentioned in the episode:
🤖 continue.dev
🤖 github.com/con...
🤖 ollama.com
Learn how to:
Install the continue.dev extension in VS Code
Download and set up the powerful Llama 3 AI model with Ollama
Configure continue.dev to work seamlessly with Llama 3
Leverage Llama 3's AI capabilities for code completion, explanations, and refactoring
Boost your coding productivity and efficiency with this unbeatable setup (a sample config sketch follows below)
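For reference, here is a minimal sketch of what the finished Continue configuration can look like once the model has been downloaded with "ollama pull llama3". This assumes the JSON-based config.json format Continue used at the time of the video; newer releases may use different field names:

{
  "models": [
    {
      "title": "Llama 3",
      "provider": "ollama",
      "model": "llama3"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Llama 3",
    "provider": "ollama",
    "model": "llama3"
  }
}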
Whether you're a beginner looking to enhance your coding experience or a seasoned developer seeking to optimize your workflow, this video will guide you through the process of integrating Llama 3 AI into your VS Code environment using the intuitive continue.dev extension.
Don't miss out on this opportunity to revolutionize the way you code. Watch now and unlock the full potential of AI-assisted programming in Visual Studio Code!
#vscode #Llama3 #ContinueDev #AICoding #CodeWithAI #Ollama #Productivity

Comments: 55
@hackeymabel1617
@hackeymabel1617 2 months ago
Mine is still working without the apiBase field... Tutorials on Continue and Ollama are so rare. Thank you, man
@iamjankoch
@iamjankoch 1 month ago
@@hackeymabel1617 so glad you got value from it!
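For context on the thread above: the apiBase field is optional because Continue falls back to Ollama's default local endpoint when it is omitted. A minimal sketch of a model entry that sets it explicitly, assuming the same JSON config format as in the video:

{
  "title": "Llama 3",
  "provider": "ollama",
  "model": "llama3",
  "apiBase": "http://localhost:11434"
}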
@eckhardEhm
@eckhardEhm 8 months ago
Nice clip, Jan, thanks! It's working like a charm. The hardest part was getting rid of the Amazon AWS stuff in VS Code that kept installing Amazon Q and stealing the code completion ^^
@iamjankoch
@iamjankoch 8 months ago
I'm glad I didn't have AWS connected to VS Code in that case :D Glad it worked well for you!
@HowardGil
@HowardGil 8 months ago
Dope. I'm going on an RV trip for two weeks, so I'll have spotty service but can still ship 🚢
@iamjankoch
@iamjankoch 8 months ago
Sounds awesome, enjoy the trip!
@ai9001-so8lj
@ai9001-so8lj 14 days ago
The topic is super exciting, and your calm, focused manner is didactically great too! Is a small update planned for newer local AI on the Mac, or is there nothing interesting out there? And how can I make various documentation accessible to the AI? I'm currently wondering whether the AI could learn PineScript v6 without constantly confusing it with v4 or v5 code. A thousand thanks! :)
@iamjankoch
@iamjankoch 13 days ago
@@ai9001-so8lj Unfortunately I'm very busy with projects right now and therefore have little time for an update. But topics like context from documentation are super exciting 🧐🤩
@jasonp3484
@jasonp3484 8 months ago
Great video, my friend. Worked like a charm. Thank you very much!
@iamjankoch
@iamjankoch 8 months ago
Glad to hear that, happy coding!
@codelinx
@codelinx 5 months ago
This is awesome. Currently using Codeium, but I will install this later and give it a go.
@iamjankoch
@iamjankoch 5 months ago
@@codelinx let me know how it goes 💪🤖
@PythonApps-m1s
@PythonApps-m1s 15 days ago
Hi @iamjankoch, I am getting the following error while pulling any model from Ollama: "Error: digest mismatch, file must be downloaded again". Is there any other way to download a local LLM from Ollama except for a pull?
@Ilan-Aviv
@Ilan-Aviv 5 months ago
Simple and great explanation! Thank you
@iamjankoch
@iamjankoch 5 months ago
@@Ilan-Aviv you bet, glad the tutorial was useful! What are you building with AI?
@Ilan-Aviv
@Ilan-Aviv 5 months ago
@@iamjankoch Building a code assistant agent to work inside the VS Code editor, to help me with other projects. Actually, I will ask for your advice: I have a trading bot written in Node.js and React. The project was written by another programmer, and I'm struggling with developing some parts of it. I'd like to have a useful AI assistant to help me find bugs and understand the app structure. The app runs a server with a browser client and has about 130 files. I tried to use OpenAI GPT, but it's too many files for it and it loses the context, on top of the other issues it has. I came to the conclusion that the best way is to have a local LLM running on my machine. If you have any recommendations for the right AI assistant you would use, I'll appreciate your advice. 🙏
@scornwell100
@scornwell100 5 months ago
It doesn't work for me. I can do it from the command line, but the Continue plugin seems not to work at all. I did all the configuration and it responds with nothing.
@iamjankoch
@iamjankoch 5 months ago
@@scornwell100 Did you check the issues listed in their GitHub repo (github.com/continuedev/continue)? You can also join their Discord server to get more detailed help: discord.gg/vapESyrFmJ
@caiohrgm22
@caiohrgm22 7 months ago
Great video! It really helped me set things up!!
@iamjankoch
@iamjankoch 7 months ago
@@caiohrgm22 glad to hear that!!!
@almaoX
@almaoX 7 months ago
Thanks a lot! =)
@iamjankoch
@iamjankoch 7 months ago
You’re welcome!
@rxGG12
@rxGG12 11 days ago
How high do a laptop's specs need to be to run Ollama locally?
@iamjankoch
@iamjankoch 11 days ago
@rxGG12 I use an MBP M2 Pro with 16 GB in this video
@rxGG12
@rxGG12 11 days ago
@iamjankoch Ohh, thanks 😁👍 I think I can't run it because my laptop has 4 GB of RAM XD
@uwegenosdude
@uwegenosdude 2 months ago
Thanks for the interesting video. Can I run this setup on a PC with an RTX 2060 GPU that has only 6 GB of VRAM? Or do I need at least 4.7 GB (llama3.1:8b) + 1.8 GB (starcoder2)? Or does my GPU's VRAM only have to be large enough for the StarCoder LLM to fit into it?
@33_Sifly
@33_Sifly 2 months ago
From what I could do with a 3070, take the lighter/medium models so that your GPU won't struggle. Well, now I have a 4090, and I still don't know what can stop this card, lol.
@uwegenosdude
@uwegenosdude 2 months ago
@sifly4683 Thanks for the answer. I tried it, and it works with my 6 GB GPU and qwen-coder:3b
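A quick sanity check on the numbers in this thread: keeping both models resident at once would need roughly

4.7 GB + 1.8 GB = 6.5 GB,

which is more than the 2060's 6 GB of VRAM. In practice that means either picking smaller models (as the qwen-coder:3b result above confirms) or relying on Ollama unloading one model before loading the other, since only the currently loaded model has to fit.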
@PedrinbeepOriginal
@PedrinbeepOriginal 8 months ago
Thank you! I was looking for this. What are the specs of your Mac?
@iamjankoch
@iamjankoch 8 months ago
Glad you enjoyed the video! It’s a 2023 MacBook Pro with M2 Pro and 16 GB RAM
@PedrinbeepOriginal
@PedrinbeepOriginal 8 months ago
@@iamjankoch Nice, thank you for the fast answer. I was wondering if I would need an M3/M2 Max with a lot of RAM to load Llama 3 on a MacBook.
@iamjankoch
@iamjankoch 8 months ago
@@PedrinbeepOriginal Not really. Granted, I don't do much other heavy work when I'm coding, but it runs super smoothly with the 16 GB. The only time I wish I had more RAM is when doing video editing lol
@tanercoder1915
@tanercoder1915 5 months ago
Great explanation
@iamjankoch
@iamjankoch 5 months ago
@@tanercoder1915 thank you!
@JaiRaj26
@JaiRaj26 7 months ago
Is 8 GB of RAM sufficient? I have enough storage, but when I try to use this after installing, it just doesn't work. It keeps loading.
@iamjankoch
@iamjankoch 7 months ago
I run it on 16 GB. The processor and GPU are quite important for Ollama as well. 16 GB of RAM is recommended: github.com/open-webui/open-webui/discussions/736#
@scornwell100
@scornwell100 5 months ago
I run it with 11 GB of VRAM from the command line and it seems fine, but inside VS Code I can't get it to respond; it throws errors that the stream is not readable.
@rodrigoaaronmartineztellez3572
@rodrigoaaronmartineztellez3572 4 months ago
What about using an RTX 4060 Ti with 16 GB of VRAM and a Ryzen 9 5950X? Could that work well?
@iamjankoch
@iamjankoch 3 months ago
@@rodrigoaaronmartineztellez3572 yes, that should handle Ollama quite well
@ctopedja
@ctopedja 3 months ago
I wish you had posted your full config.json on Pastebin or somewhere, since the default config is nowhere near what you show, and the guide is useless without the full setup.
@iamjankoch
@iamjankoch 3 months ago
Here you go: gist.github.com/jan-koch/9e4ea0a9e0c049fe4e169d6a5c1e8b74 Hope this helps
@nooov7220
@nooov7220 2 months ago
I can't believe no one else here mentions this in the comment section: he just skipped a couple of steps that he didn't even mention, to make a very short video.
@dzimoremusic5515
@dzimoremusic5515 3 months ago
Thank you very much ... stay healthy, bro
@2ru2pacFan
@2ru2pacFan 7 months ago
Thank you so much!
@iamjankoch
@iamjankoch 7 months ago
@@2ru2pacFan glad you enjoyed the tutorial!
@superfreiheit1
@superfreiheit1 4 months ago
The code area is too small, I can't see it.
@user-13853jxjdd
@user-13853jxjdd 6 months ago
luv u bro
@iamjankoch
@iamjankoch 6 months ago
@@user-13853jxjdd glad you enjoyed the video!
@josersleal
@josersleal 3 months ago
You forgot to mention how much you need to pay for a computer that can handle this. Otherwise it will not even start, it will take literally ages to do anything, or you have to use the smallest LLMs that can't do s**t. It's all hype to get money into OpenAI and others like it.
@iamjankoch
@iamjankoch 3 months ago
@@josersleal I have an M2 Pro MacBook Pro with 16GB, for reference
@ybirdinc
@ybirdinc 1 month ago
I don't know if I did it right, but eight months after this video was published some things were different. So I copied the "models" array of your config.json file, which is:
"models": [
  {
    "title": "Gemini 1.5 Flash",
    "model": "gemini-1.5-flash-latest",
    "contextLength": 1000000,
    "apiKey": "AIzaSyAIJ49oNQAMSEionN0v5R7fWE1fHgmeMuo",
    "provider": "gemini"
  },
  {
    "title": "Llama 3",
    "provider": "ollama",
    "model": "llama3"
  },
  {
    "title": "Ollama",
    "provider": "ollama",
    "model": "AUTODETECT"
  },
  {
    "model": "AUTODETECT",
    "title": "Ollama (1)",
    "completionOptions": {},
    "apiBase": "localhost:11434",
    "provider": "ollama"
  }
],
@zsoltandraspatka6056
@zsoltandraspatka6056 7 days ago
Hey, it's probably too late and you already noticed, but you've copied out your Gemini API key as well. Please make sure you revoke this key, as it could be used by malicious actors to rack up costs on your account.