FINALLY! Open-Source "LLaMA Code" Coding Assistant (Tutorial)

144,353 views

Matthew Berman

1 day ago

This is a free, 100% open-source coding assistant (a Copilot alternative) based on Code Llama, living in VS Code. It is super fast and works incredibly well. Plus, no internet connection is required!
Download Cody for VS Code today: srcgr.ph/ugx6n
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewber...
Need AI Consulting? ✅
forwardfuture.ai/
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew...
USE CODE "MatthewBerman" for 50% discount
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V

296 Comments
@matthew_berman 7 months ago
Llama code 70b video coming soon!
@DopeTropic 7 months ago
Can you make a video with a fine-tuning guide for a local LLM?
@orangeraven3869 7 months ago
codellama 70b has been amazing for me so far. It's definitely SOTA for a local code model. Can't wait to see fine-tunes and merges like Phind or DeepSeek in the near future. Will you cover miqu 70b too? Rumors aside, it's the closest to GPT-4 of any local model yet, and I predict it will produce a surprise or two if you put it through your normal benchmarks.
@Ricolaaaaaaaaaaaaaaaaa 7 months ago
@orangeraven3869 How does it compare to the latest GPT-4 build?
@SaveTheDoctorREAL 7 months ago
LOL, can't wait!
@Chodak166 7 months ago
How about the current Hugging Face leaderboard leader, the Moreh MoMo 72B model?
@rohithgoud30 7 months ago
I typically don't rely too heavily on AI when coding. I use TabbyML, which supports a limited set of models, but it works for me. It's completely open source and includes a VS Code extension too. It's free and doesn't require a login. I use the DeepSeekCoder 6.7B model locally.
@hrgdavor 7 months ago
Thanks for the hint, I was looking for that. I hate that cloud crap.
@haroldasraz 7 months ago
Cheers for the suggestion.
@YadraVoat 6 months ago
VSCode? Why not VSCodium?
@justingolden21 4 months ago
Just tried tabby, thanks!
@lynxinaction 1 month ago
How did you run it? Can you give me the steps 😅?
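For reference, Tabby's public docs describe a Docker-based setup along these lines (the image tag, model name, and flags below are assumptions based on those docs, not the poster's exact steps):

    # run the Tabby server locally via Docker, keeping model data in ~/.tabby
    docker run -it --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
      tabbyml/tabby serve --model DeepseekCoder-6.7B --device cuda
    # then install the Tabby VS Code extension and point it at http://localhost:8080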
@5Komma5 7 months ago
Need to sign in to use the plugin. No thanks. That is not completely local.
@carktok 7 months ago
Are you saying you had to log in to authenticate your license to use a local instance of their software for free? 🤯
@nicolaspace1182 7 months ago
@carktok Yes, and that is a deal breaker for many people, believe it or not.
@cesarruiz1202 7 months ago
Yeah, but that's mainly because they're paying for the OpenAI and Claude 2 completion APIs so you can use them at no cost. Also, I think you can self-host Cody without logging in to Sourcegraph if you want to.
@vaisakhkm783 7 months ago
Cody is open source; you can run it completely locally.
@SFSylvester 7 months ago
@vaisakhkm783 It's not open source if they force you to log in. My machine, my rules!
@jbo8540 7 months ago
Matt Williams, a member of the Ollama team, shows how to make this work 100% free and open source in his video "Writing Better Code with Ollama".
@mickelodiansurname9578 7 months ago
Thanks for the heads-up, man.
@brian2590 7 months ago
This is how I'm set up. Works great!
@LanceJordan 7 months ago
link please?
@mickelodiansurname9578 7 months ago
@LanceJordan "Writing Better Code with Ollama". BTW, there's an issue on YT with putting links into comments, even YT links; a lot of comments with links seem to go missing!
@ArthurMartins-jw8fq 6 months ago
Does it have knowledge of the entire codebase?
@AlexanderBukh 7 months ago
How is it local if I have to authorize with a 3rd party? 😮
@HUEHUEUHEPony 7 months ago
it is not, it is clickbait
@hqcart1 7 months ago
nothing is free dude.
@zachlevine1857 7 months ago
Pay a little money and have fun my people!
@mayorc 7 months ago
The problem with Cody is that local models only give you autocomplete, which you can already get from many VS Code extensions like LLaMA Coder and others. All the nice features use the online version, which is extremely limited in the number of requests on the free plan (expanding the monthly numbers a bit would make it easier to test, or to build the serious interest that leads to a paid plan later). There is also a fair number of extensions that offer those same nice features (chat, document, smells, refactoring, explain, tests) all in one extension and for free, using local models (Ollama or OpenAI-compatible endpoints). Cody does these features a little better and interacts better with the codebase, probably thanks to the bigger context window (at least in my tests) and a nicer implementation/integration in VS Code. But unless you pay, you won't really benefit from them, because the low number of free requests isn't enough to seriously dive in.
@ruifigueiredo5695 7 months ago
Matthew just confirmed in a post above that the limitations on the free tier do not apply if you run the model locally.
@alx8439 7 months ago
Can you suggest any particular alternatives among those extensions?
@mayorc 7 months ago
@alx8439 There are many. I've tested a few so far, but I'm not using them at the moment, so I don't remember the names. What I did was search for extensions with names like "chat", "gpt", "AI", "code", "llama"; many will come up, and then you have to test them one by one (that's what I did). I suggest going for the ones whose description and screenshots already show customization options, like a base URL for Ollama or OpenAI-compatible local servers. I think one of them has "genie" in the name.
@woozie_tv 7 months ago
I'm curious about those too, @alx8439
@alx8439 7 months ago
I'll answer myself then: Twinny, Privy, Continue, TabbyML
@KodandocomFaria 7 months ago
I know it's a sponsored video, but is there any open-source alternative to the Cody extension? We need a completely local solution, because Cody may use telemetry and gather some information behind the scenes.
@Nik.leonard 7 months ago
Continue does chat and fix, but doesn't do autocompletion, and is quite unstable. There is another one that does autocomplete with Ollama (LlamaCode).
@UvekProblem 7 months ago
You have collama, which is a fork of Cody that uses llama.cpp.
@hqcart1 7 months ago
@Nik.leonard Phind, best free one.
@alx8439 7 months ago
Twinny, Privy, TabbyML
@kartiknarang3152 5 months ago
One more issue with Cody: it can only take 15 files as context at a time, while I need an assistant that can take the whole project folder.
@RichardGetzPhotography 7 months ago
Is it Cody that understands? I think it's the LM that does. Also, why $9 if I'm running everything locally?
@mc9723 7 months ago
Even if it's not world-changing breakthroughs, the speed at which all this tech is expanding cannot be overstated. I remember one of the research labs talking about how every morning they would wake up and find another lab had solved something they had just started or were about to start. This is a crazy time to be alive. Stay healthy, everyone.
@a5tr00 7 months ago
Since you have to sign in, does it send any data upstream when you use local models?
@supercurioTube 7 months ago
Wait, you have GitHub Copilot enabled there too, and it shows up in your editor. Are you sure the completions are provided by Cody with the local model and not by the GitHub Copilot extension?
@kate-pt2ny 7 months ago
In the video, the suggested text has Cody's icon, so you can see it's Cody generating the code.
@SageGoatKing 7 months ago
Am I misunderstanding something, or are you advertising this as an open-source solution while it's still dependent on a 3rd-party service? What exactly is Cody? I would have assumed that if it's completely local, it's just a plugin that lets you use local models on your machine. Yet you describe it as having multiple versions with different features in each tier, including a paid tier. How exactly does that qualify as open source?
@zachlevine1857 7 months ago
He shows you how fast it is.
@Joe_Brig 7 months ago
I'm looking for a local code assistant. I don't mind supporting the project (with a license, for example), but I don't want to log in on each use, or at all. How often does this phone home? Will it work if my IDE is offline? Pass.
@ryzikx 7 months ago
wait for llama code 70b tutorial
@InnocentiusLacrimosa 7 months ago
@ryzikx That should require >40 GB of VRAM.
@AlexanderBukh 7 months ago
@ryzikx 70B would require two 4090s or 3090s; 34B takes one.
@kshitijnigam 7 months ago
Tabby and Code Llama can do that; let me find the link to the playlist.
@Resursator 7 months ago
The only time I code is on a flight. I'm so glad I can use an LLM from now on!
@AlexanderBukh 7 months ago
About 40 minutes of battery life. Yep, I ran LLMs on my 15-watt 7520U laptop. My 5900HX would gobble the battery even faster, I think.
@themaridv2000 6 months ago
Apparently they only support the given models, and the Llama one actually only uses codellama:13b. Basically it can't run something like Mistral or other Llama models. Am I right?
@WhiteDragon103 6 months ago
I get "ollama : The term 'ollama' is not recognized as the name of a cmdlet, function, script file, or operable program". Is there a working tutorial for Windows 10?
@xCallMeLucky 3 months ago
restart your pc
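If a restart doesn't help, a quick sanity check (assuming a standard Windows install of Ollama):

    # PowerShell: is the ollama binary on PATH?
    Get-Command ollama
    # if it isn't found, reopen the terminal (or re-run the installer)
    # so the installer's PATH change takes effect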
@olimpialucio 7 months ago
Is it possible to use it on a Windows + WSL system? If yes, how should we install LLaMA?
@thethiny 7 months ago
Same steps
@mdazhardware 7 months ago
Thanks for this awesome tutorial. How do you do this on Windows?
@ruifigueiredo5695 7 months ago
Does anyone know if the 500 autocompletions per month on the free tier also apply if we run Code Llama locally?
@matthew_berman 7 months ago
You get unlimited code completions with a local model.
@synaestesia-bg3ew 7 months ago
@matthew_berman It said "Windows version is coming soon", so I had to stop at the download step and cannot continue this tutorial. Not everyone has a Linux machine or a powerful Mac. Could you warn people about prerequisites at the start of new videos? That would help, thanks.
@skybuck2000 5 months ago
Ok, it worked, kinda funny: I wrote the first two lines and the last line, and Cody did the rest after being told to "generate fibonacci sequence code"... thanks, might be useful some day. A bit flimsy, but interesting. Next I'll try whether it can translate code too:

    function Fibonacci: integer;
    var
      a, b, c: integer;
    begin
      a := 0;
      b := 1;
      while b < 100 do
      begin
        writeln(b);
        c := a + b;
        a := b;
        b := c;
      end;
      Fibonacci := b; { assign the result so the function returns a value }
    end;
@Ray88G 7 months ago
Can you please also include steps for those who are using Windows?
@TheEasyWay2 7 months ago
I followed your instructions and got stuck at 2:38; because I'm using Linux, I'm seeing a different output. And thanks for your assistance.
@captanblue 7 months ago
Unfortunately Ollama is still not on Windows, and Linux/Mac isn't an option for me, so I'll have to give this a pass unless it can work with LM Studio.
@ruifigueiredo5695 7 months ago
Has anyone managed to make it work with LM Studio?
@Ludecan 7 months ago
This is so cool, but doesn't the Cody login kind of invalidate the local benefits? A 3rd party still gets access to your code.
@mayorc 7 months ago
Yes. I also don't know how, or whether, your code is retained long-term once you start chatting with your codebase. Plus the free tier allows a very limited number of requests per month: 500 autocomplete requests (which you'd probably burn through in a day or two, since a request fires within a few seconds of you pausing typing). That part is solvable with a local model, but then you only get 20 chat messages or built-in commands per month, which makes those features useless unless you choose the paid plan.
@DanVoronov 7 months ago
Despite the extension being available in the VSCodium marketplace, after registration it attempts to open regular Visual Studio Code and doesn't function properly. It's unfortunate to see developers creating coding helpers that turn out to be broken tools.
@haydnrayturner1383 7 months ago
*sigh* Any idea when Ollama is coming to Windows?
@Krisdomain 7 months ago
How can you not enjoy creating unit tests?
@rbrcurtis 7 months ago
The default model for Cody to use with Ollama is deepseek-coder:6.7b-base-q4_K_M. You have to change this in the raw JSON settings if you want to use a different model.
@amitdingare5064 7 months ago
How would you do that? Appreciate the help.
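A minimal sketch of that raw settings change, assuming the experimental Ollama provider in the Cody version discussed here (setting names may differ across releases):

    // VS Code settings.json (JSONC, so comments are allowed)
    {
      "cody.autocomplete.advanced.provider": "experimental-ollama",
      "cody.autocomplete.experimental.ollamaOptions": {
        "url": "http://localhost:11434",   // Ollama's default endpoint
        "model": "codellama:13b-code"      // any locally pulled model tag
      }
    }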
@jeffspaulding43 7 months ago
Don't think this is an option unless you've got a pretty good graphics card. I set mine up and gave it an autocomplete task. I heard my Mac's CPU fan going crazy, and it took about 20 seconds to get a 5-token suggestion (it was correct tho :P).
@SahilP2648 7 months ago
Get an M3 Max 64 or 96GB MacBook Pro. The inference speed is really good. For development it seems like you need a really good Mac nowadays.
@JonathansaidMorenonuñez-x7w 7 months ago
Enterprise AI is the best alternative to OpenAI; always helpful with coding questions.
@henrychien9177 7 months ago
What about Windows? Any way to run Llama?
@Baleur 7 months ago
So the local one is the 7B version, not the 70B? Or is it a typo in the release?
@InnocentiusLacrimosa 7 months ago
70B was released and it can be run locally, but it is a massive model and should require around 40 GB of VRAM.
@keithprice3369 7 months ago
So, Pro is free for 2 more days? 😁
@planetchubby 7 months ago
Nice! Seems to work pretty well on my Linux laptop. It would be great if I could save the 10 euros a month I pay for Copilot.
@skybuck2000 5 months ago
Now the only thing I need to figure out is how to add a command to the Cody pop-up menu, something like "translate from Go to Pascal", so I don't have to retype it constantly... testing a big translation now...
@maddoglv 5 months ago
If you get an error when running `ollama pull codellama:7b-code` in the terminal, just close and reopen VS Code.
@technovangelist 7 months ago
It’s not actually fully offline. It still uses their services for embedding and caching even when using local models.
@michai333 7 months ago
Thanks so much! We need a video on how to train a local model via LM Studio / VS / Python
@Tom_Neverwinter 7 months ago
or just use oobabooga and stop using junk?
@scitechtalktv9742 7 months ago
What an amazing new development! Thanks for your video. A question: can I use this to completely translate a Python code repository to C++, with the goal of making it run faster? How exactly would we go about doing this?
@paolovolante 7 months ago
Hi, thanks! I use ChatGPT 3.5 to generate Python code by just describing what I want. It kind of works... In your opinion, is the solution you propose better than GPT-3.5?
@Daniel-xh9ot 7 months ago
Way better than GPT-3.5; GPT-3.5 is pretty outdated even for simple tasks.
@vransomware7601 7 months ago
Can it be run using text-generation-webui?
@first-thoughtgiver-of-will2456 7 months ago
Awesome video! This video series is the best source for cutting edge practical AI applications bar none. Thanks for all the work you do.
@vivekpadman5248 7 months ago
thanks for the video, this is absolutely a blessing of an assistant
@ArturRoszczyk 7 months ago
It does not work for me. The extension seems to prefer connecting to Sourcegraph over the internet, even though it shows it has selected codellama from unstable-ollama. Inference simply does not work if I unplug the wire.
@alx8439 7 months ago
Try other, better extensions. There are a number of truly open-source ones that run locally, unlike this gimmick: Privy, Twinny, TabbyML, Continue, and many more.
@evanmarshall9498 7 months ago
Does this method also allow completion for large codebases, like you covered in a previous tutorial using universal-ctags? Or do you still have to download and use universal-ctags? I think it was your aider-chat tutorial. I don't work with Python, so using this VS Code extension and Cody is much better for me (front-end developer using HTML, CSS, and JS).
@TrevorSullivan 21 days ago
Why does this extension force you to sign in if you are using a self-hosted Ollama service? That's really weird. Seems like they just want to collect your data.
@gainzflex4125 14 days ago
because they plan to monetize
@alx8439 7 months ago
If you love open source and hate products with strings attached that spy on you, use VSCodium instead of VS Code, which ships with a lot of telemetry enabled by default.
@skybuck2000 5 months ago
I get some strange window that says "edit instruction code". I guess I have to tell it what to do... "generate fibonacci sequence code", perhaps?
@iseverynametakenwtf1 7 months ago
Can you select the OpenAI one and run it through LM Studio locally too?
@skybuck2000 5 months ago
You lost me at the terminal step. How do you get into ollama? Is that its folder?
@nobound 6 months ago
I have a similar setup, but I'm encountering difficulty getting Cody to function offline. Despite specifying the local model (codellama) and disabling telemetry, the logs indicate that it's still attempting to connect to Sourcegraph for each operation.
@fairyroot1653 7 months ago
Cody is love. I'll continue using it even after the 14th of February because it's just that amazing.
@edisdead2008 7 months ago
I thought I was getting a video on Llama, not a sales pitch for Cody or whatever it's called (I don't care).
@Sigmatechnica 5 months ago
What's the point of a local model if you have to sign in to some random service to use it???
@jawadmansoor6064 7 months ago
Can it only work with Ollama? What if I have a llama.cpp server running on the port Ollama would use; will that not work? What URL (complete, including port) does Ollama serve, so that I can make my server run on the same URL? It will be localhost, of course: localhost:8080 is where the llama.cpp server runs by default, or localhost:8081/v1/chat/completions if api_like_OAI is used. So what does Ollama use?
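For reference: Ollama serves its HTTP API on localhost:11434 by default. A quick way to check what it exposes (assumes the model has already been pulled):

    # ask Ollama's generate endpoint for a one-off completion
    curl http://localhost:11434/api/generate -d '{
      "model": "codellama:7b-code",
      "prompt": "def fibonacci(",
      "stream": false
    }'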
@YadraVoat 6 months ago
1:17 - Um, why Visual Studio Code when there's VSCodium available?
@micknamens8659 7 months ago
The code for the Fibonacci function is correct in the sense of a specification. But as an implementation it's totally inefficient, with exponential time O(2^n). (In functional languages, where all functions are referentially transparent, results can be cached transparently; this is called "memoization". Python doesn't do it automatically.)
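A small sketch of the point above: Python won't memoize automatically, but it can do so explicitly via functools, turning the naive exponential version into a linear one.

    from functools import lru_cache

    @lru_cache(maxsize=None)  # cache results; safe because fib(n) is referentially transparent
    def fib(n: int) -> int:
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)  # each value is now computed once: O(n), not O(2^n)

    print(fib(40))  # 102334155, returned instantly instead of after billions of calls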
@skybuck2000 5 months ago
However, I have not yet installed the Go extension. Maybe if the Go extension is installed, Cody can translate code from Go? Hmm, not sure yet... probably not... but just maybe.
@DaivionOsaghae 6 months ago
Who's here for the chaos that comes when he starts evaluating the models on BabyAGI?
@bradstudio 7 months ago
Nova editor needs support for this.
@mrtn5882 2 days ago
What hardware do you need to run this on-premises?
@cyanophage4351 6 months ago
Tried it on Windows and couldn't get it to connect to my Ollama. The dropdown was set to "experimental-ollama" and "codellama", but when I asked in the chat "what can you do" it replied "I'm Claude from Anthropic", so not sure what is up with that.
@skybuck2000 5 months ago
Seems to conflict with the Omni Pascal extension's code completion. Not sure if both can be used? Any ideas?
@skybuck2000 5 months ago
Does the pulled model have to be placed in some special folder? This is not explained, and I doubt it will work the way I did it. I don't want models on my C: SSD but on my G: HDD, to experiment with it and save space on the SSDs that really need it (Windows updates and such). I've got 2x 4 TB of SSD, but still...
@freaq.creation 7 months ago
It's not working... I get an error saying it can't find the model :(
@skybuck2000 5 months ago
It also automatically opened a command prompt... I can proceed from there... plus there is an item in the Start menu... probably linked to this messy installation.
@piero957 6 months ago
Hi Matthew, it seems Cody Pro is not free anymore, or at least it isn't obvious how to get it free with either the current stable or the beta. Arch Linux and VSCodium here. I would like more AutoGen Studio low-code real-world examples, including agents that do web search + retrieval and embed the results together with uploaded PDFs to get an offline RAG with up-to-date online search.
@monaluthra4769 7 months ago
Please make a tutorial on how to use AlphaGeometry
@skybuck2000 5 months ago
Tried it with C, because Python apparently isn't installed in VS Code by default. It didn't work for C code, but I can see Cody is doing something: a yellow light bulb appears. I came here for code translation, though code generation is interesting too and similar. But can Cody translate code too, from Go to Delphi/Pascal? That's what I'm interested in...
@JoeBrigAI 6 months ago
No local models when using JetBrains plugin?
@warezit 7 months ago
🎯 Key Takeaways for quick navigation:
00:00 💻 Introduction to Local Coding Assistants
- Introduction to the concept of a local coding assistant and its advantages
- Mention of the coding assistant Cody set up with Ollama for local development
01:07 🔧 Setting Up the Coding Environment
- Guide on installing Visual Studio Code and the Cody extension
- Instructions on signing in and authorizing the Cody extension for use
02:00 🚀 Enabling Local Autocomplete with Ollama
- Steps to switch from GPT-4 to local model support using Ollama
- Downloading and setting up the Ollama model for local inference
03:39 🛠️ Demonstrating Local Autocomplete in Action
- A practical demonstration of the local autocomplete feature
- Examples include writing a Fibonacci method and generating code snippets
05:27 🌟 Exploring Additional Features of Cody
- Description of other useful features in Cody not powered by local models
- Examples include chatting with the assistant, adding documentation, and generating unit tests
07:04 📣 Conclusion and Sponsor Acknowledgment
- Final thoughts on the capabilities of Cody and its comparison to GitHub Copilot
- Appreciation for Cody's sponsorship of the video
Made with HARPA AI
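A condensed sketch of the steps summarized above (the model tag is the one used in the video; the install command and setting name are assumptions that may vary by platform and version):

    # 1. install the Ollama runtime (macOS/Linux at the time of the video)
    curl -fsSL https://ollama.com/install.sh | sh
    # 2. pull the local completion model used in the video
    ollama pull codellama:7b-code
    # 3. in VS Code: install the Cody extension, sign in, then switch
    #    Cody > Autocomplete > Advanced: Provider to "experimental-ollama"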
@olimpialucio 7 months ago
Thank you very much for your reply. What type of hardware is required to run this model locally?
@kritikusi-666 7 months ago
Correction: by default, it uses Claude 2.0.
@skybuck2000 5 months ago
Cody settings, provider: now it says "experimental-ollama". Curious how to connect it to the pulled model... watching the video and continuing.
@skybuck2000 5 months ago
OH WOW! After installing the official Go language extension from Google (and also the Omni Pascal extension), CODY can translate from Go to Pascal! LOL, VERY IMPRESSIVE! THANKS!!!! =D
@RebelliousX 7 months ago
Ollama for Windows is available now.
@neronenerone7366 7 months ago
How about using the same idea but with GPT Pilot?
@d-popov 7 months ago
That's great! But how is it magically linked to Ollama? How do you specify a different Ollama-hosted model (13B/34B)?
@Yorley-v6f 7 months ago
Curious how they compare.
@georgeknerr 7 months ago
Love your channel, Matthew! For me, however, 100% local means not having to have an account with an external vendor to run your coding assistant entirely on your own machine. I'm looking for just that.
@skybuck2000 5 months ago
It now looks right once the code model is selected from the pull-down list: CodyCompletionProvider:initialized: experimental-ollama/codellama:7b-code
@michaelvarney. 7 months ago
How do you deploy this on a completely air-gapped network? No network connections during install.
@lynxinaction 1 month ago
Why isn't it working?? Cody just keeps cycling my message back at me.
@ew3995 7 months ago
can you use this for reviewing PRs?
@DiomedesDominguez 6 months ago
Do I need a GPU with 4 GB of VRAM or more for the 7B? Also, Python is the easiest of the programming languages; can I use Cody locally for C/C++, C#, and other more robust languages?
@rafaeldelgrossi 7 months ago
There is no unstable-ollama option for me.
@skybuck2000 5 months ago
It installed itself on Windows in something messy like: C:\Users\skybu\AppData\Local\Programs\Ollama
@shaileshsundram 7 months ago
I am using a 2017 MacBook Air. Will using it be instantaneous?
@manhomme3870 7 months ago
What do you do when you love the idea but have a Windows machine? Is there another interesting alternative to Ollama that's available for Windows?
@skybuck2000 5 months ago
Also, it's already downloaded; what is the pull for?
@skybuck2000 5 months ago
This ollama pull is apparently to download a model... I guess it can be stored anywhere...
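On the storage question: Ollama reads its model-store location from the OLLAMA_MODELS environment variable, so pulled models can live on another drive (the paths below are examples, not recommendations):

    # Windows (PowerShell): persist a custom model directory, then reopen the terminal
    setx OLLAMA_MODELS "G:\Models\ollama"
    # Linux/macOS equivalent:
    # export OLLAMA_MODELS="$HOME/models/ollama"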
@Novelltrade 7 months ago
Please make a video using GPT Pilot with LLaMA locally. I appreciate all the videos you do; they have taught me a lot.
@peterfallman1106 7 months ago
Great, but what are the requirements for Microsoft servers and clients?
@pierruno 7 months ago
Can you put in the title which OS this tutorial is for?
@user-qr4jf4tv2x 7 months ago
small models are the future
@piratepartyftw 7 months ago
Will the Chat function be available with Ollama soon?
@skybuck2000 5 months ago
It worked. I made a folder, went there, and ran:
PS G:\Models\ollama> ollama pull codellama:7b-code
@BrandosLounge 7 months ago
No matter what I do, I always get this when asking for instructions: "retrieved codebase context before initialization". Is there a Discord where we can get support for this?
@yvesdaniellontsi 4 months ago
Thanks for this awesome tutorial
@aijokker 7 months ago
Is it better than GPT-4?