How to Run Llama 3 Locally on your Computer (Ollama, LM Studio)

34,037 views

Mervin Praison

1 day ago

🌟 Welcome to today's exciting tutorial where we dive into running Llama 3 completely locally on your computer! In this video, I'll guide you through the installation process using Ollama, LM Studio, and Jan AI, ensuring your data stays private while harnessing the power of AI. Whether you're a Mac, Windows, or Linux user, I've got you covered. Don't forget to hit the like button and subscribe for more AI-focused content. Let's jump right in!
👉 What you'll learn:
Downloading and installing Llama 3 on different operating systems.
Running Llama 3 using Ollama, LM Studio, and Jan AI (see the quick-start commands below).
Tips to optimise your local AI setup for speed and efficiency.
Real-time demonstrations and meal plan generation using Llama 3.
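As a quick-start reference, the Ollama route in the video boils down to a single terminal command: `ollama run llama3` pulls the default 8B instruct model on first use (roughly 4.7 GB in its 4-bit quantized form) and then drops into an interactive chat, while `ollama serve` starts the local API server used later in the video. LM Studio and Jan AI are point-and-click by comparison: search for "llama 3" in their built-in model browsers, download, and chat.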
🔗 Useful Links:
Download Ollama: ollama.com/
LM Studio Website: lmstudio.ai
Jan AI Downloads: jan.ai
🔗 Resources:
Sponsor a Video: mer.vin/contact/
Do a Demo of Your Product: mer.vin/contact/
Patreon: / mervinpraison
Ko-fi: ko-fi.com/mervinpraison
Discord: / discord
Twitter / X : / mervinpraison
Code: mer.vin/2024/04/llama-3-run-l...
Jan AI Advanced: • Jan AI: Run Open Sourc...
📌 Timestamps:
0:00 - Introduction to Running Llama 3 Locally
0:24 - Starting the Installation Process
0:47 - Downloading Llama 3 via Ollama
1:26 - Setting up Llama 3 with LM Studio
2:32 - Installing Llama 3 with Jan AI
3:12 - Using Ollama API with Llama 3
3:55 - Running Local Servers with LM Studio (code sketches for both below)
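Since two of the chapters above cover API usage, here is a minimal sketch of each (written for this description, not taken from the video; the exact code is at the blog link below). First, calling Llama 3 through Ollama's local REST API, assuming the Ollama server is running on its default port 11434:

```python
# Minimal sketch: ask Llama 3 a question via the local Ollama API.
# Assumes `ollama run llama3` has already pulled the model and the
# Ollama server is listening on its default port, 11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Give me a meal plan for today",
        "stream": False,  # return one JSON object instead of a token stream
    },
)
print(resp.json()["response"])
```

And second, LM Studio's local server, which speaks the OpenAI chat-completions format (enable it from the Local Server tab; port 1234 is the default, and the API key can be any placeholder string):

```python
# Minimal sketch: the same question through LM Studio's local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
chat = client.chat.completions.create(
    model="local-model",  # LM Studio routes this to whichever model is loaded
    messages=[{"role": "user", "content": "Give me a meal plan for today"}],
)
print(chat.choices[0].message.content)
```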
👍 If you find this video helpful, please like, share, and subscribe for more tech tutorials. Your support helps us bring more useful content to you!

Comments: 49
@JohnCorrUK 2 months ago
Mervin ... My project for the week and you have just published this very helpful video 😊
@sean_vikoren 2 months ago
I am hoping to get this running in a VM. Hyper-V was not good at sharing the GPU, but VirtualBox is going fine so far, and the rumor is that it shares GPUs better. Good luck!
@pratikkumar939 2 months ago
You are brilliant. I was struggling and you came at just the perfect time.
2 months ago
And that's why I expanded the GPU in Mac Studio! Fast-and-furious 🚀🔥👏
@user-mk9tv3by4n 2 months ago
very useful
@MeinDeutschkurs 2 months ago
Woohoo! This model will help me to place some decisions before processing the prompt. Not really agents, but a batch of to-dos. Typical function calling, I think. 🎉🎉🎉🎉🎉
@nufh 2 months ago
Which one is the best?
@felipeclavijo1736 25 days ago
You are incredible! Do you know of any LLM running locally that can read and analyse data on my computer? For instance, being able to access Excel files and provide analysis within the laptop.
@Mario_Kart_567 1 month ago
When asking "Do you send data to Meta?", here is the Llama 3 answer: "It's possible that some of the data I process or generate during our conversation might be stored on Meta's servers as part of their infrastructure. This could include:
1. Logs: Meta may store logs of our conversation, including the text we've exchanged, to help improve their platform and services.
2. Training data: The conversations I have with users like you can contribute to the training data used to improve my language models.
3. User behavior data: Meta might collect data on your interactions with their platform, such as search queries or clicks, which could be used to personalize your experience."
You've been warned...
@joseph-ianex 1 month ago
Pre-programmed response; it doesn't send anything if run locally. You can run it without internet. If you run it on Meta's side or use their servers, they are 100% taking your data.
@jini611 1 month ago
Mervin, thanks for the amazing video. Could you please create a video that shows a local Llama 3 connecting to a SQL database and generating SQL queries? I know you have a video that creates SQL queries, but I need it to connect to a local database.
@schmutz06 2 months ago
Just dabbling in this for the first time: when I ran that terminal command, where did it download Llama 3? Presumably to the System32 folder that Ollama defaulted to, but I don't see it there. New to this.
@user-wr4yl7tx3w 2 months ago
Can you do a video on Jan AI? Not sure what it is exactly.
@eduardocruzism 1 month ago
How do I know if it's using the CPU or the GPU? When I ask a question, my GPU usage goes from 1% to 30% and then back to 1% when it's finished, but my CPU usage does the same. So which is it using?
@FusionDeveloper 2 months ago
Thanks, I didn't realize I could just open the command prompt to launch it. I assumed Ollama had its own window, and I was struggling to find where to open it.
@magn8 1 month ago
Same. I kept opening it.
@jalam1001 2 months ago
Thanks for the video. I have been using LM Studio; it's very slow. What's the hardware specification of your system?
@dennissdigitaldump8619 1 month ago
You absolutely have to have a GPU. The more VRAM the better; 12GB is kind of the minimum.
@stanTrX 2 months ago
Thanks, but why do we have to download the model both for the command line and for LM Studio? Aren't they the same model file? Can't we use `ollama serve`?
@anindabanik208 2 months ago
Please make a video on a local agent that runs on Kaggle/Colab using Llama 3.
@firstlast493 2 months ago
How about code completion in VS Code?
@jets115 2 months ago
Can you do a video on llama.cpp, its API, and concurrent users?
@eduardatonga7056 2 days ago
I'm new to this; is LM Studio better than using Chainlit?
@jennilthiyam980 1 month ago
Is your approach totally safe for sensitive data? Is the model completely local, or are you just using an API?
@MervinPraison 1 month ago
Safe to use, as it's running locally and doesn't use an API.
@secaja92 1 month ago
Hi Mervin, could you tell me what the specifications of your Mac are? I recently ran LM Studio and noticed a spike in CPU usage after sending a prompt. I just want to confirm whether this could be related to the specifications. My Mac is an M2 Pro with 16GB of RAM.
@MervinPraison 1 month ago
Yes, it will spike. I use an M2 Max with 32GB; for a normal model it works fine, but you can expect a spike.
@themanavpaul 1 month ago
No one would believe me: I ran it on my i5 8th-gen U-series CPU with a 2GB Nvidia MX250. One query takes 50 minutes to answer.
@nhtna4706 2 months ago
Please make a video on running Grok v1.5 locally, can you?
@JarppaGuru 2 months ago
same as 2 and 1?
@mikemartin8444 1 month ago
Please answer this. I have an Nvidia 3090 (24GB) in a home-brew PC. Can I run it on that? I just want to try running the models locally and don't want to spend cloud dollars.
@hardwalker95 1 month ago
It should be alright for Llama 3 8B; I read it requires 20GB of VRAM.
@negibamaxim9851 1 month ago
I am doing that, but instead of Llama 3 I get the first Llama.
@fiorellademedina8419 2 months ago
Is this Llama 70B or 30B?
@quenlood70 21 days ago
8B
@Shaylenhira 1 month ago
Is this free? Or does it cost you per API call you make?
@joygumero 12 days ago
How do I make Llama 3 recognize my mic and respond by voice?
@eprd313 6 days ago
I guess you'd have to install a good speech-recognition model as well as a text-to-speech model and integrate the three. Or maybe you can find something like that already done on Hugging Face.
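Along the lines of that reply, a rough sketch of the three-part pipeline might look like the following (the library choices here, openai-whisper for speech-to-text and pyttsx3 for text-to-speech, are illustrative assumptions, not tools from the video):

```python
# Hypothetical voice loop: transcribe a spoken question, send it to a
# local Llama 3 via Ollama, and speak the answer back.
import whisper   # pip install openai-whisper
import pyttsx3   # pip install pyttsx3
import requests

# 1. Speech to text: transcribe a question recorded from the mic.
stt = whisper.load_model("base")
question = stt.transcribe("question.wav")["text"]

# 2. LLM: send the transcript to Llama 3 running under Ollama.
answer = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": question, "stream": False},
).json()["response"]

# 3. Text to speech: read the answer out loud.
tts = pyttsx3.init()
tts.say(answer)
tts.runAndWait()
```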
@sangu_akhirat 2 months ago
Bro, let me know the specification of your computer; mine runs Ollama so slowly. My spec: Intel Core i7-6700HQ CPU @ 2.60GHz, 32GB DDR4 RAM.
@etcgroup8811 1 month ago
+10
@Ginto_O 1 month ago
Does Ollama use the GPU?
@quenlood70 21 days ago
Yes
@emanuelec2704 2 months ago
When I use Llama 3 8B on Ollama or LM Studio, it is much dumber than on OpenRouter, even after resetting all parameters to factory defaults and loading the Llama 3 preset, and even with the full non-quantized 8-bit version on LM Studio.
@fiorellademedina8419 2 months ago
How do you know if it's the 8B or 70B version?
@emanuelec2704 2 months ago
@fiorellademedina8419 For the local model, it's at the beginning of the filename, and you can also tell by the size. For the OpenRouter version, it's always stated in the name of the model you are using.
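A detail worth checking in cases like the one above (an aside, not from this thread): local front-ends generally only match hosted quality when the Llama 3 Instruct prompt template and its `<|eot_id|>` stop token are applied correctly, which is what the LM Studio preset is supposed to do. For reference, the expected format looks roughly like this:

```python
# The Llama 3 Instruct prompt format, for comparison against whatever
# preset your front-end applies (Ollama ships this template built in).
LLAMA3_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
```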
@socialtraffichq5067 19 days ago
Hold on to your papers
@podunkman2709 1 month ago
The quality of Llama 3 (via Ollama) is just hopeless. Just take a look at this ****:

>>> How many liters of water per minute can a Dutch windmill pump out?
That's an interesting question! The answer depends on the specific design and size of the windmill. However, I can give you some general information.
Traditionally, Dutch windmills are designed to pump water from shallow sources, such as polders or wells, for irrigation purposes. The capacity of these windmills varies greatly, but a typical small to medium-sized windmill might be able to pump around 0.5 to 2 liters per minute (L/min).
Some larger industrial-scale windpumps can pump much more, up to 10-20 L/min or even more, depending on the turbine design and the pressure head of the water.

Really?
@srenlarsen3148 2 months ago
And still it can't do math. And still it hallucinates. And still it's old data, with a training cutoff back in 2023, almost a year ago, and people still don't get that it's not up to date. But anyway, sure, it's a bit better than the Llama 2 version. And still the 70B online version is too big to run locally; only the 7-8B variants of versions 2 and 3 can be run locally. But everyone has this, so it's not really anything new, whether in CMD, in a web UI, in Python, JavaScript, or whatever online platform they use. The only real hype about it is that the models contain some more data; otherwise it's all the same thing. And yes, free for everyone to use as they please, like the older models.