Thank you, Dave, for these great tutorials. As a non-native English speaker, your English is so clear that I can understand everything. I'm still learning HTML and CSS with your tutorials, Dave; they are invaluable 🌻❤
@DaveGrayTeachesCode · 7 months ago
Glad I can help!
@DaanVanCamp · 5 months ago
Thank you for the tutorial. I found out that deepcoder v2 runs faster (and better, according to the leaderboard) than Qwen on my computer.
@MunyaradziRangaIncrease · 7 months ago
Exactly what I was looking for. Thank you very much!
@BrettCooper4702 · 7 months ago
Thanks, that worked really well. I used it to write some Node.js / Express / EJS code, a combo I had not done before, and it got the code running nicely.
@DaveGrayTeachesCode · 7 months ago
Glad to hear that!
@oliverodell3105 · 13 days ago
Just such an awesome tutorial, thank you!
@louicoder · 7 months ago
Thanks so very much. Local LLMs are the next big thing; so thankful for this video. Just earned a new sub here 🌟🌟🌟🌟🌟
@DaveGrayTeachesCode · 7 months ago
You're welcome!
@grumpykoala6896 · 4 months ago
Thank you. Exactly what I was looking for.
@EricOnYouTube · 4 months ago
Things you are showing here no longer align with Continue as of when I installed it today, in Oct '24. :(
@yacineelhakimhaddouche6805 · 7 months ago
Thank you Dave, good quality content right there ❤
@DaveGrayTeachesCode · 7 months ago
You're welcome!
@nitinkumar7225 · 5 months ago
Thanks a lot Dave!
@MOJICA7257 · 7 months ago
Great work Dave!!! 🎉🎉🎉🎉❤❤❤
@DaveGrayTeachesCode · 7 months ago
Thanks so much!!
@mikevaleriano9557 · 7 months ago
I'll take a look, but I'm finding it hard to believe there's something out there to replace Supermaven. Copilot is trash compared to it now.
@DaveGrayTeachesCode · 7 months ago
Yes, so many new things. I like this because you can keep changing the models as they improve, and the extension is constantly being updated as well.
@Getfit-us · 7 months ago
I agree, Supermaven is awesome. Switched from Copilot.
@RabahTaib-mn4fs · 7 months ago
I tried it and instantly loved it; it's even better than my Tabnine Pro subscription! Thanks for letting me know it exists.
@coders_rant · 7 months ago
What's the model's name?
@lazymass · 7 months ago
Supermaven? It's fast, but the code it produces is... not that great... It completely neglects my coding style. Cursor is much better.
@yaryariseating · 2 months ago
Thank you for covering this new AI code completion. I have a question: what are the minimum system requirements to run this local code completion?
@bernardiho · 7 months ago
From Nigeria, Africa! Love your content.
@DaveGrayTeachesCode · 7 months ago
Thank you!
@shivanshsharma4916 · 6 months ago
How does it compare to Codeium?
@evheniydan · 4 months ago
One important caveat (which wasn't mentioned) is that you need a pretty good PC to use this comfortably.
@shreyashraj · 4 months ago
When it comes to running an AI model locally, it's kind of a given that you will need strong computing power.
@ZoharYosef · 1 month ago
GPU
@ajaytechandgaming7487 · 4 months ago
You're a big help, mate.
@miraclemilo6953 · 6 months ago
Thanks for your tutorial.
@MarcoLombardo7 · 5 months ago
Is the model aware of the entire codebase of the project?
@chadvavra · 4 months ago
You can do this with Aider also; while it has a map of the repo, it only knows about the files you add.
@anasselaoufi7620 · 3 months ago
A Next.js frontend with Django backend tutorial would be nice 🙏
@mortezafarhangpanah256 · 6 months ago
Thanks Dave
@louicoder · 7 months ago
I tried this and it's amazing. The only drawback is that the response is really slow. Do you have any idea about that? I'm running on a MacBook Pro 2019 16-inch Intel with 32GB RAM and a 2.6GHz 6-core CPU.
@mikrowizja1130 · 7 months ago
I have the same issue. Did you manage to fix it?
@codecaine · 6 months ago
Intel Macs are slow, not the ARM versions.
@savemybutt · 6 months ago
I also have this issue. Sometimes it never responds and I have to cancel the question. Not having a good experience. How do I uninstall the 4GB LLM? Going back to ChatGPT-4.
@codecaine · 5 months ago
@q_pato_o I have a 64GB ARM Mac and an 8B model will still run slow. It's because it is using the CPU.
@codecaine · 5 months ago
An M1 8GB ARM MacBook Air can run an 8B model no problem, for me.
@noweare1 · 24 days ago
This is crazy. This will eventually greatly reduce the need for programmers.
@catreunion · 7 months ago
Thank you, teacher.
@JonBrookes · 7 months ago
Thanks Dave, you're always, as ever, on the money, so to speak. Local LLMs may well be the next big thing, and privacy is king, so this is well worth the time and investment. Thanks again, you're a ⭐ cheers
@DaveGrayTeachesCode · 7 months ago
You're welcome and I agree!
@JonBrookes · 7 months ago
@@DaveGrayTeachesCode Yep, I've got deepseek-coder-v2 running already, thanks to your prompting me to take a look. Up to now I've been running on WSL, so to get this working natively on Windows I first had to stop Ollama in WSL with systemctl commands (stop and disable). That aside, it's running in Windows now, which is terrific. It's even working with Dart/Flutter, which is amazing.
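For anyone in the same WSL situation, the steps described above can be sketched like this (this assumes the WSL install registered Ollama as a systemd service named `ollama`; the service name may differ on your setup):

```shell
# Inside the WSL terminal: stop the running Ollama service and
# keep it from starting again on boot, so the Windows install
# can bind to the default port (11434) instead.
sudo systemctl stop ollama
sudo systemctl disable ollama
```

After that, the Windows Ollama install should be the one answering on localhost:11434.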
@MarkWilliamson-n4s · 6 months ago
Thanks Dave. ❤❤❤
@tpevers1048 · 1 month ago
Do you have a GPU for powering that beast?
@alijvn3783 · 3 months ago
This is great! Thank you! How does this compare with Codeium Free? Which one is better?
@CanDoSo_org · 5 months ago
Thank you, Dave. Have you tried Cursor? How do you like it?
@forcebrew · 4 months ago
I really appreciate your videos! Could you recommend a large language model for coding that I can run locally without worrying about my computer slowing down?
@m.f.mfazrin8720 · 4 months ago
If you want to try any local LLM, make sure you have a powerful workstation.
@mhl_5 · 7 months ago
Thank you Dave, you are amazing.
@DaveGrayTeachesCode · 7 months ago
You're welcome!
@canardeur8390 · 7 months ago
You definitely deserve your entry to Nirvana! (The thing is: you bring so much enlightenment to our world that if you do so, you will be missed a lot! Hopefully you volunteer to reincarnate on this planet to bring your light!)
@bwsstha8864 · 7 months ago
Thanks a lot, Dave. As a request for upcoming videos, I would be happy if you could provide a HonoJS, React/Next.js, and Postgres tutorial.
@DaveGrayTeachesCode · 7 months ago
Many possibilities there! Thanks for the request!
@SouravAhmed-b7m · 7 months ago
Does it have inline suggestions?
@DaveGrayTeachesCode · 7 months ago
Yes, it does.
@sheri969 · 5 months ago
Awesome, thanks.
@VahidArjmand66 · 1 month ago
What are the hardware requirements to run it locally?
@MunyaradziRangaIncrease · 7 months ago
With the advent of *Apple Intelligence*, I am thinking there may be a way to integrate your code editor with a model like this. If and when it's possible, PLEASE make a video on that. 😄
@boopfer387 · 7 months ago
Great, Dave!
@DaveGrayTeachesCode · 7 months ago
Thank you!
@PrensCin · 3 months ago
You are like a big brother, thank you.
@alexanderkomanov4151 · 7 months ago
Thanks Dave!
@DaveGrayTeachesCode · 7 months ago
You're welcome!
@helloworldcsofficial · 7 months ago
Great! More of this please!
@DaveGrayTeachesCode · 7 months ago
Thanks for the feedback! I do want to compare more of these solutions. I really like this one because it is local and you choose your own model, which allows you to upgrade as models improve.
@helloworldcsofficial · 7 months ago
@@DaveGrayTeachesCode 🙏
@maciek367 · 3 months ago
You're a legend.
@AlexanderBelov-y8o · 7 months ago
Does this also do code completion / suggestions?
@DaveGrayTeachesCode · 7 months ago
Yes
@Or1g3nn · 6 months ago
What hardware are you running this with? I am running an 11th-gen i7 with 32GB RAM and an SSD, and unfortunately CodeQwen is running at an absolute snail's pace, to the point that it is unusable 😢
@lead-nik · 3 months ago
Same on a MacBook Pro M1 Pro with 32GB RAM: unusable. A few prompts in the terminal and the CPU goes to 100% load. I will try to move Ollama with CodeQwen to a server with a GPU.
@Peacemaker.404 · 7 months ago
Hey Dave, do you use Linux? If you do, which distro do you use? I'm trying to switch from Windows.
@DaveGrayTeachesCode · 7 months ago
I used to. My favorite for a long time was Debian. Then everyone went to Ubuntu. Last I knew, Ubuntu was the easiest to switch to.
@Peacemaker.404 · 7 months ago
@@DaveGrayTeachesCode Thanks, I'll try that.
@drkgumby · 7 months ago
Everybody has an opinion on which Linux distro is best, and all of us are 100% right. :) I use PopOS as a daily driver and suggest you give it a try. Zorin and Mint are also often suggested for somebody just coming over from Windows. Ubuntu is a good choice as well. Depending on your hardware, you should be able to boot from a USB stick and try any of these before you commit to an installation.
@Peacemaker.404 · 7 months ago
@@drkgumby Yes, exactly. I was torn between PopOS and Mint, and I've also used Ubuntu in the past.
@drkgumby · 7 months ago
@@Peacemaker.404 If you can boot from USB for a test drive, then it's easier to try them all and pick the one that best meets your needs. Unless you are doing something that requires specific Windows-only software, you will probably find they all work for you. Then you can pick the one that looks/feels the best.
@anshulsingh8326 · 5 months ago
Hi, anything for Visual Studio?
@youlearn7474 · 5 months ago
Hey Dave, your tutorials are awesome and provide deep insights. I tried this with the Qwen2 model, but it's not working as expected. I have set up everything, but it's very slow on my system, so it's better to go with ChatGPT. But I wonder, is there any lightweight application we can use for auto code completion, similar to Copilot?
@aymenbachiri-yh2hd · 7 months ago
Thank you so much
@DaveGrayTeachesCode · 7 months ago
You're welcome!
@wolfganggrojcig2528 · 2 months ago
This sounds pretty powerful. I'll give it a try. Does anyone have a model they would recommend for the following programming languages: x64 assembly, Kotlin, JavaScript, C/C++, HTML/CSS, and Java?
@BBBhop · 6 months ago
Hi mate, does this work for regular Visual Studio too? I can't stand VS Code, haha.
@zenguitarankh · 7 months ago
Mine is "indexing" before my first code generation. I'm guessing it's a one-time thing, though.
@PeterBolke · 1 month ago
I am using Cursor AI currently... I have a gut feeling that those guys use Qwen in the background...
@slimbennasrallah2351 · 7 months ago
Thank you, this is an amazing video! 🎉 I have a question concerning its ability (or not) to understand the full context of a codebase/project/folder. Or is it limited to selected code and a limited number of open files?
@mhl_5 · 7 months ago
From the Continue extension docs: "Completions don't know about my code. We are working on this! Right now Continue uses the Language Server Protocol to add definitions to the prompt, as well as using similarity search over recently edited files. We will be improving the accuracy of this system greatly over the next few weeks."
@DaveGrayTeachesCode · 7 months ago
Which model you choose and how many tokens it supports will impact that, but yes, Continue supports that. Reference: docs.continue.dev/walkthroughs/codebase-embeddings
@mayr.i · 6 months ago
Does it have auto-complete?
@EdmundCiegoBelize · 7 months ago
Would this be able to access all the code in the codebase, or only the opened files?
@DaveGrayTeachesCode · 7 months ago
Yes, you can use @codebase or @folder. Reference: docs.continue.dev/walkthroughs/codebase-embeddings
@aakashswastik9458 · 7 months ago
Can we get a video on the details of the ECMAScript features used for JS?
@chadvavra · 4 months ago
Compliment approved
@TSMohanKumar · 7 months ago
Hello sir, I'm learning Python from your channel, but I forget everything after some days, and after a long time I'm learning Python again. Please give me some suggestions for learning Python.
@DaveGrayTeachesCode · 7 months ago
Don't rush. Just learn one thing and then try to apply it. The more you use it, the easier it is to remember. Just learn a little something new every day.
@TSMohanKumar · 7 months ago
@@DaveGrayTeachesCode Ok, thank you, sir, but in the Python course I couldn't understand the operators; you didn't explain some of them. How can I understand them, and where did you learn Python and other new technologies?
@DaveGrayTeachesCode · 7 months ago
@@TSMohanKumar There are many Python resources available. If what I said or taught did not stick with you, sometimes it is good to reference other sources of information. Putting all of these together will help your understanding.
@douglaskipyegon2183 · 7 months ago
@@TSMohanKumar You don't have to understand everything. Be kind to yourself and everything will fall into place the more you code. So don't stress if you don't understand something.
@gddrew · 6 months ago
Installed in VS Code, but I can't get the chat in IntelliJ.
@matt112fly · 6 months ago
You need a helluva PC to run this, but let's give it a try... if anything, I can just buy a bit more RAM ;p
@skewlines4152 · 2 months ago
For me it seems way slower than it should be, running on an AMD 4800U with 16GB of RAM and the starcoder2:3b model.
@DaveGrayTeachesCode · 2 months ago
@@skewlines4152 Might need more RAM to speed it up.
@skewlines4152 · 2 months ago
@@DaveGrayTeachesCode I'll try it on my gaming PC tomorrow and report the difference.
@habibium · 7 months ago
Hey Dave, can you share the hardware specs of the machine where you tried this? I am curious whether my MBA M1 with 16GB can handle it or not.
@DaveGrayTeachesCode · 7 months ago
I'm using a Windows PC that's about 3 years old. Your M1 is good, but RAM might be a concern. I added a lot of RAM to render my videos faster.
@akshaykumarsingh8715 · 7 months ago
I installed DeepSeek, but the extension is not picking it up by itself. Can you tell me what I should write in the config file? Please help.
@MilindP · 7 months ago
Can we set the token size, since there is a limit with Copilot? Thank you for another wonderful tutorial.
@DaveGrayTeachesCode · 7 months ago
It will only be limited by the limits of the model you choose.
@MilindP · 7 months ago
@@DaveGrayTeachesCode Thank you.
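For reference, a model entry in Continue's config.json for a locally pulled model looks roughly like this (the exact model tag below is an assumption; use whatever `ollama list` shows on your machine):

```json
{
  "models": [
    {
      "title": "DeepSeek Coder (local)",
      "provider": "ollama",
      "model": "deepseek-coder-v2"
    }
  ]
}
```

The `model` value must match the Ollama tag exactly, or the extension won't find it.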
@rahu1gg · 7 months ago
Will the assistant be able to auto-complete React and Next.js code?
@DaveGrayTeachesCode · 7 months ago
Yes, it has auto-complete. The setting I change in the video is tab auto-complete.
@hbela1000 · 7 months ago
Thanks. Which is the best Ollama LLM for Next.js 14, free or licensed?
@DaveGrayTeachesCode · 7 months ago
I don't think you can target a framework like that unless someone specifically creates an LLM for it. Just go with the rankings for coding, like I do with EvalPlus in this video.
@snivels · 7 months ago
Does this all happen locally? No posting your code to some server in Vietnam somewhere?
@DaveGrayTeachesCode · 7 months ago
Right! And yes, 100% local.
@andromilk2634 · 7 months ago
@@DaveGrayTeachesCode This assumes we have a really good computer?
@nix7705 · 7 months ago
Hello, I can see that you have Python and JS lessons on your channel, but which language has more opportunities for starting a job, at least for free or for food? :/ I learned Python and Django before, but dropped them because people said Django is too slow, and I'm practicing MERN now; of course it feels a bit harder than Django. I'm a bit confused now: was it the right or wrong choice to drop Python web development?
@DaveGrayTeachesCode · 7 months ago
There are jobs for both. It is difficult to say which would be better. Both are among the most popular programming languages.
@timtanhueco1990 · 7 months ago
Hi! Thanks for this video! Are Ollama and CodeQwen 100% free, and do they have IntelliSense and autocomplete aside from code chat?
@DaveGrayTeachesCode · 7 months ago
Yes to all
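For anyone hunting for that setting, Continue's tab auto-complete is driven by a `tabAutocompleteModel` entry in config.json; a minimal sketch (the model tag here is just an example, swap in any completion-capable model you have pulled):

```json
{
  "tabAutocompleteModel": {
    "title": "Tab Autocomplete",
    "provider": "ollama",
    "model": "starcoder2:3b"
  }
}
```

Smaller models tend to work better here, since completions need to come back fast as you type.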
@yoJuicy · 7 months ago
Dave, you missed the entire install on VS Code at 4:18.
@DaveGrayTeachesCode · 7 months ago
I installed at 3:56, so I'm not sure what you mean? I do mention that I previously had it installed, so I did not get the splash screen again. I talk about what you should see.
@yoJuicy · 7 months ago
@@DaveGrayTeachesCode Thanks for the reply. It's a screen that tells users to install llama3 and run it, as well as starcoder2. An easy spot to get lost. Thanks!
@togya4 · 7 months ago
Dave, will you ever make a course about the new Next.js features?
@DaveGrayTeachesCode · 7 months ago
I'm waiting until Next.js 15 is promoted beyond release candidate. Then I will consider it.
@RobertMcGovernTarasis · 7 months ago
Cheers. I haven't tried CodeQwen yet. I prefer LM Studio to Ollama, if nothing else because it keeps the downloaded file in a form that's usable by other programs, whereas Ollama just hides the download in weirdly named files.
@DaveGrayTeachesCode · 7 months ago
Good info! Thanks!
@smhameed1918 · 22 days ago
I would like to see Python to Adobe PDF.
@this.tushar · 7 months ago
I want to see Docker and Docker Compose from scratch.
@eleah2665 · 7 months ago
Did I miss something? I did not see code completion as you type your own code.
@DaveGrayTeachesCode · 7 months ago
There is tab auto-completion; that is the setting I changed in the config. But yeah, trying to fit it all into 10 minutes or less, I didn't demo everything.
@KidusDawit-h9g · 7 months ago
I was wondering if you could make a PostgreSQL tutorial.
@DaveGrayTeachesCode · 7 months ago
Nice request! I've been thinking about that 🙌
@visheshbajpayee9308 · 7 months ago
I am using a MacBook M1 and did everything as mentioned in the video. It seems like VS Code is lagging after applying all the configuration. Also, auto-suggestion is not working for me. Is there anything I'm missing?
@DaveGrayTeachesCode · 7 months ago
I didn't add it to my Mac yet to compare, but installed locally it shouldn't create a lag. I think you've got plenty of power, too. Maybe a quick restart of VS Code? As mentioned, I did have to restart Windows.
@christerjohanzzon · 7 months ago
I've tried a few models locally, and I really like having an AI assistant locally. Too bad I can't run it on my laptop. Does anyone know of any good free or low-cost alternatives?
@DaveGrayTeachesCode · 7 months ago
Yeah, running locally will take some power. If your laptop runs short on that, you might want to look at services that don't run locally.
@christerjohanzzon · 7 months ago
As an answer to myself, I just found out about Chat RTX from Nvidia... it can run local LLMs and train on your local data as well. Now I only need an extension to integrate it into VS Code.
@hornickt · 7 months ago
Thank you for the great content. I have a private Ollama server running in my company with a dedicated video card. Is it possible to connect Continue to this server on my LAN? Ollama is not installed on my local workstation; it is on a server in my environment.
@DaveGrayTeachesCode · 7 months ago
I don't know, but if you find out, please share here. Interesting!
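It should be possible: Continue's Ollama provider can point at a remote host via an `apiBase` URL. A sketch, assuming the current config format (the hostname and port are placeholders for your own server, and the server-side Ollama must be configured to listen on the LAN, e.g. via the OLLAMA_HOST environment variable):

```json
{
  "models": [
    {
      "title": "CodeQwen (LAN server)",
      "provider": "ollama",
      "model": "codeqwen",
      "apiBase": "http://192.168.1.50:11434"
    }
  ]
}
```

With that in place, nothing needs to be installed on the local workstation except the extension itself.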
@twd2 · 7 months ago
Awesome 😍....
@fabrice9848 · 7 months ago
You're the best (must be exhausting).
@BilalAulakh23 · 7 months ago
Legend
@nileshgosavii · 7 months ago
It is so cool, but it crashes my PC every few minutes, even though my PC meets the requirements.
@DaveGrayTeachesCode · 7 months ago
Hmm, I'm on a PC and that hasn't happened. Strange indeed. It does seem to be a bit power-hungry according to some comments, but mine is far from a recent PC.
@personone6881 · 7 months ago
Excuse me please - *Oi!* - :D - thank you, hello! So at 3:07 you explain "...go back to your Terminal in VS Code and run that with Ollama" - and I understand that, and that's fine... But what NOBODY EVER EXPLAINS, or more accurately what EVERYONE FAILS TO RESPONSIBLY ADVISE, is: in WHICH directory? {C:\in\which\gosh\darnit\someone\tellme\please\directory}? Let's try multiple choice for ease and efficiency for all. Do I run the install command here (*): A) .../Users//> B) .../Users//CODE/> C) .../Users//CODE/Projects/> D) .../Users//CODE/Projects/myProjectsRootDir> or, simply, E) C:/ ? Or are model packages installed as global dependencies, so it matters none what directory your terminal is currently pointing to when sending the ollama run command? If so, is this true for all (or "most") AI chat models, or just in this case for Ollama? THANK YOU IN ADVANCE TO ANYONE WHO CAN CLARIFY THIS FOR ME.
@personone6881 · 7 months ago
Wow, erm, there's a part in there that translates a bit harsh, lol. Wasn't trying to be... wasn't pointing fingers at anyone... sorry if it came across as a little fiery.
@DaveGrayTeachesCode · 7 months ago
Someone else commented about how Ollama stores the LLM models with weird filenames, but they didn't say where. They did mention another choice of theirs that makes the models available to other software, too. Might look for that comment from earlier today.
@personone6881 · 6 months ago
@@DaveGrayTeachesCode Did you ever locate that post/choice?
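To the directory question above: Ollama keeps models in a global per-user store, not in your project folder, so it does not matter which directory the terminal is pointing at. A quick check (the store locations in the comments are the usual defaults to my knowledge; verify on your own machine):

```shell
# Run from any directory; Ollama resolves models from its
# global store, not from the current working directory.
ollama run codeqwen

# Usual default model store locations:
#   Linux/macOS: ~/.ollama/models
#   Windows:     C:\Users\<you>\.ollama\models
ls ~/.ollama/models
```

This is specific to Ollama; other tools (e.g. LM Studio) keep their downloads in their own locations.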
@mfpears · 5 months ago
Better results
@mrelqori7931 · 7 months ago
But you need a strong, strong CPU.
@DaveGrayTeachesCode · 7 months ago
Mine isn't too strong. I'd say mid.
@ashutosh9 · 7 months ago
This is lagging so much on my M2 MacBook Air.
@DaveGrayTeachesCode · 7 months ago
From what I'm hearing in the comments, it seems like more RAM helps. I have a PC that's a few years old, but lots of RAM.
@9622AX · 7 months ago
Well, it's good, but it takes away many system resources.
@DaveGrayTeachesCode · 7 months ago
Could be a drawback of keeping everything local, depending on machine power. I'm not usually running many other tasks while coding/chatting with it.
@darwinmanalo5436 · 7 months ago
You need a high-end computer to run that locally, though. My Mac M1 lags, lol, so I'll stick to Copilot.
@DaveGrayTeachesCode · 7 months ago
Hmm, I have a mid PC running it without issues. I do have extra RAM, though. Might make a difference.
@Kricke87 · 7 months ago
Can confirm as well. I did the same test as Dave, and it took about 1-2 minutes for the entire code to be written using CodeQwen. I don't know if it's the CPU or GPU that slows it down. I have 32GB RAM, a 9400K, and an AMD RX 580, so I guess my PC is not as fast as Dave's. But it's a fun little piece of software to use. I also use the free version of Cody as an alternative to Copilot, as I'm still learning programming and only do a little coding at my current job, so I don't feel it's worth forking out $ yet.
@alexeyfilippov42 · 7 months ago
Thank you so much
@DaveGrayTeachesCode · 7 months ago
You're welcome!
@palashjyotiborah9888 · 7 months ago
It's old news. 😢
@DaveGrayTeachesCode · 7 months ago
Definitely not for everyone. But yeah, we do hear about things at different times.
@nuttbaked · 7 months ago
First time hearing about this.
@vivekkaushik9508 · 7 months ago
I think Codeium is better, even the free version.
@DaveGrayTeachesCode · 7 months ago
You've already had time to compare both? I want to compare others. Can you choose your own free model with Codeium? If so, it comes down to the extension features and UI comparison.
@vivekkaushik9508 · 7 months ago
@@DaveGrayTeachesCode Good sir: 1. Codeium doesn't require me to install and run a local Ollama instance, which hogs the compute and memory of my MBA, making it unbearable to code. 2. The Codeium free version doesn't give you the ability to choose models, but the Pro version can use GPT-4 models. I haven't tried that, as I don't have that kind of money, but free is fast and good enough for my use case: web dev. 3. Codeium setup is just 1 click. 🙂
@unknotmiguel · 7 months ago
The goal usually is to run locally due to concerns about code privacy, I believe...
@vivekkaushik9508 · 7 months ago
@@unknotmiguel Umm, if someone actually reads through the privacy section of these Copilot-as-a-service apps, they don't send your code back to their servers, lol. But they do send metrics, and depending on the paid version, that can be turned off!!! Also, PRIVACY is a MYTH, and privacy in code is a joke. My poor 2 cents.