How To Run Llama 3 8B, 70B Models On Your Laptop (Free)

8,999 views

School of Machine Learning

1 month ago

Written guide: schoolofmachinelearning.com/2...
Unlock the power of AI right from your laptop with this comprehensive tutorial on how to set up and run Meta's latest Llama 3 models (8B and 70B). We will use Ollama to run these models locally on your laptop, completely free.
What You'll Learn:
- An overview of the Llama 3 models and their capabilities.
- Step-by-step instructions on setting up your system for Llama 3.
- Tips on optimizing performance for both the 8B and 70B models.
- Troubleshooting common issues to ensure smooth operation.
#LLaMA3 #MetaAI #AITutorial #MachineLearning #Coding #TechTutorial
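
For reference, once Ollama is installed (download from ollama.com), each model is pulled and started with a single command; the sizes below are the 4-bit default quants and are approximate:

    ollama run llama3:8b     # ~4.7 GB download
    ollama run llama3:70b    # ~40 GB download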

Comments: 37
@PJ-hi1gz
@PJ-hi1gz 26 days ago
Informative and straight to the point, thank you!
@SchoolofMachineLearning
@SchoolofMachineLearning 25 days ago
thank you :)
@dosomethingwild4999
@dosomethingwild4999 4 days ago
NEAT!
@MiraclesofCreation
@MiraclesofCreation 16 days ago
Nice guide with easy written instructions, thanks
@SchoolofMachineLearning
@SchoolofMachineLearning 15 days ago
Glad you liked it
@sphansel3257
@sphansel3257 19 days ago
Most underrated channel. You deserve way more, dude!☺
@SchoolofMachineLearning
@SchoolofMachineLearning 18 days ago
thank you :)
@mustafamohsen
@mustafamohsen 25 days ago
Thank you for the guide, great stuff! Just a heads up, there's a slight error in the command table within the written guide. The command for the 70B should be `ollama run llama3:70b` instead of `ollama run llama3:8b`
@SchoolofMachineLearning
@SchoolofMachineLearning 25 days ago
Thanks, fixed!
@thesattary
@thesattary 7 days ago
I'm jealous of your internet speed bro :(
@SchoolofMachineLearning
@SchoolofMachineLearning 6 days ago
haha :)
@nqaiser
@nqaiser 1 month ago
Hello, what would be the recommended hardware specs to run Llama 3 70B at good performance for multiple users (~5 users)?
@SchoolofMachineLearning
@SchoolofMachineLearning 29 days ago
For what you require, it makes more sense to call Llama via an API, as it will be much cheaper. It's currently $0.64/1M input and $0.80/1M output tokens on Groq (that's the cheapest one I've seen). For hardware, I haven't built anything like that so I'm not sure, maybe an A100? :D
For a single user, from what I've seen online, good specs are: an Apple M2 Ultra w/ 24-core CPU, 60-core GPU, and 128GB RAM (costs $8000 with the monitor), which runs Meta-Llama-3-70B-Instruct.Q4_0.llamafile at 14 tok/sec (prompt eval is 82 tok/sec).
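As a sketch of that API route: Groq exposes an OpenAI-compatible endpoint, so a request looks roughly like the following (the endpoint path and the llama3-70b-8192 model id are taken from Groq's docs at the time and may have changed):

    curl https://api.groq.com/openai/v1/chat/completions \
      -H "Authorization: Bearer $GROQ_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"model": "llama3-70b-8192",
           "messages": [{"role": "user", "content": "Hello"}]}'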
@nqaiser
@nqaiser 28 days ago
@@SchoolofMachineLearning The sort of application I am considering requires an on-premise deployment, so deploying it in the cloud / consuming via API isn't an option. I am a bit more inclined towards the Linux/Windows ecosystem. What would be the total VRAM/RAM required for the 70B model? Also, does using a 4-bit quantized model result in some loss of accuracy, and is that noticeable in the output?
@qtUnluckyThreshh
@qtUnluckyThreshh 29 days ago
Does it have an endpoint I can access from localhost so I can make my own HTML interface?
@SchoolofMachineLearning
@SchoolofMachineLearning 28 days ago
Meta doesn't directly provide API access, but you can access Llama 3 via Groq/Replicate/Microsoft/Databricks.
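On the localhost question specifically: Ollama itself does expose a local endpoint. While it is running, it serves a REST API on http://localhost:11434, which a custom HTML front end can call directly. A minimal request (endpoint shape per Ollama's API docs):

    curl http://localhost:11434/api/generate -d '{
      "model": "llama3",
      "prompt": "Why is the sky blue?",
      "stream": false
    }'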
@WatsitTooyah
@WatsitTooyah 11 days ago
Open WebUI already exists too
@Muzick
@Muzick 1 month ago
I've installed the 70B model on my desktop, which has 64GB of memory. But it is running super slow. Any tips? Thanks!
@SchoolofMachineLearning
@SchoolofMachineLearning 1 month ago
The short answer is to get a more powerful GPU :D
@swarupkumar2
@swarupkumar2 1 month ago
@@SchoolofMachineLearning What should be the minimum GPU? Is an RTX 3060 12GB enough?
@SchoolofMachineLearning
@SchoolofMachineLearning 1 month ago
I don't think that is going to be enough. By default, Ollama downloads a 4-bit quant, which for Llama 3 70B is 40 GB. Your GPU has only 12 GB of VRAM, so the rest has to be offloaded into system RAM, which is much slower. You have two options:
- Use the 8B model instead: `ollama run llama3:8b`
- Use a smaller quant: `ollama run llama3:70b-instruct-q2_K`
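As a rough way to compare the options before committing to a download, a tag can be pulled without starting a chat, and local model sizes listed:

    ollama pull llama3:70b-instruct-q2_K   # download only
    ollama list                            # shows the size of each local model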
@schmutz06
@schmutz06 1 month ago
I ran into the same thing, and having looked around, it appears £20-30K GPUs with ~40GB of VRAM are the type you'd need to manage the 70B model. It is, after all, 40GB of data; where your GPU is insufficient, it will be loaded into your RAM, which is far slower than video card memory at this kind of work.
@schmutz06
@schmutz06 1 month ago
@@SchoolofMachineLearning What is that q2_K? I have a 12GB 3080 Ti; is that the best option for me? I read that some who attempted this found the 8B model was superior.
@ElcoolMo
@ElcoolMo 21 days ago
Forgive me, I am new to coding, but could I get it running outside the terminal so it can have a nice GUI?
@SchoolofMachineLearning
@SchoolofMachineLearning 21 days ago
Yes, you can. Here is a tutorial for a nice interface using Open WebUI: github.com/open-webui/open-webui. You can also use it directly on Meta.ai.
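A sketch of the Open WebUI route, assuming Docker is installed (the command follows the project's README at the time; check the repo for the current version):

    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data --name open-webui \
      ghcr.io/open-webui/open-webui:main

Then open http://localhost:3000 in a browser; it connects to the local Ollama instance automatically.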
@hunterking4228
@hunterking4228 1 month ago
Can I run the 8B model with 8GB of memory? Will it work? I don't mind it being slow.
@SchoolofMachineLearning
@SchoolofMachineLearning 1 month ago
It will have extremely poor performance; even then, I don't think you will be able to run it. But you can give it a shot.
@nastastic
@nastastic 24 days ago
I tried it and it's a waste of time. The computer freezes with simple commands and takes ages to come out of the freeze. M3 MacBook Pro with 8GB RAM.
@juritronics
@juritronics 1 month ago
Doesn't it have an API that we can use instead of installing it on our own PCs?
@SchoolofMachineLearning
@SchoolofMachineLearning 1 month ago
Meta doesn't provide a Llama 3 API directly, afaik, but if you want to try out Llama 3 you can do so on Meta.ai. A lot of other companies provide a Llama 3 API, such as Databricks, Replicate, Microsoft, etc.
@maizizhamdo
@maizizhamdo 1 month ago
Groq offers Llama 3 70B for free with an API
@KnutJohannessen
@KnutJohannessen 1 month ago
How slow is 70B on your laptop?
@SchoolofMachineLearning
@SchoolofMachineLearning 1 month ago
The requirements are:
- 16GB memory for the 8B model.
- 32GB memory for the 70B model (even then it is very slow).
I have not tried the 70B model on my laptop, but I'm assuming it is almost unusable.
@behunkydory9966
@behunkydory9966 26 days ago
@@SchoolofMachineLearning How can I check the memory requirements for the Llama 3 models? I especially want to know the requirements for the 70B model.
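A practical way to estimate this: each tag's download size is listed on ollama.com/library/llama3, and a model needs roughly that much free RAM/VRAM (plus some overhead) to run well; for the 4-bit defaults that is roughly 4.7 GB for 8B and 40 GB for 70B. Locally, the size of each downloaded model can be checked with:

    ollama list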
@WatsitTooyah
@WatsitTooyah 11 days ago
The 70B model on a 32GB Mac M1 Max is taking like a minute per word... The 8B model is very fast.