Efficient Fine-Tuning for Llama-v2-7b on a Single GPU

82,820 views

DeepLearningAI


1 day ago

The first problem you’re likely to encounter when fine-tuning an LLM is the “CUDA out of memory” error. It is especially acute with the 7B-parameter Llama-2 model, which requires a substantial amount of GPU memory. In this talk, Piero Molino and Travis Addair from the open-source Ludwig project show you how to tackle this problem.
The good news is that, with an optimized LLM training framework like Ludwig.ai, you can bring the memory overhead back down to a reasonable level, even when training on a single GPU.
In this hands-on workshop, we’ll discuss the unique challenges of fine-tuning LLMs and show you, through a demo, how to tackle those challenges with open-source tools.
By the end of this session, attendees will understand:
- How to fine-tune LLMs like Llama-2-7b on a single GPU
- Techniques like parameter-efficient fine-tuning and quantization, and how they help
- How to train a 7B-parameter model on a single T4 GPU using QLoRA
- How to deploy tuned models like Llama-2 to production
- How to continue training with RLHF
- How to use RAG for question answering with trained LLMs
This session will equip ML engineers to unlock the capabilities of LLMs like Llama-2 for their own projects.
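The memory arithmetic behind these techniques can be sketched with a back-of-envelope estimate. This is illustrative only: it ignores activations, gradients, and framework overhead, and the byte counts are common rules of thumb, not measurements from the workshop.

```python
def estimate_vram_gb(n_params_b: float, bytes_per_param: float,
                     trainable_frac: float = 1.0) -> float:
    """Rough VRAM estimate in GB: base weights plus AdamW optimizer
    state (~12 bytes per trainable param: fp32 master copy + two
    moment estimates)."""
    weights = n_params_b * 1e9 * bytes_per_param
    optimizer = n_params_b * 1e9 * trainable_frac * 12
    return (weights + optimizer) / 1e9

# Full fine-tune of a 7B model in fp16: ~14 GB of weights alone,
# plus ~84 GB of optimizer state -- far beyond a 16 GB T4.
full = estimate_vram_gb(7, 2.0)

# QLoRA: 4-bit base weights (~0.5 byte/param), with only ~0.1% of
# parameters trainable through LoRA adapters.
qlora = estimate_vram_gb(7, 0.5, trainable_frac=0.001)

print(round(full), round(qlora))  # prints: 98 4
```

This is why the bullet list above pairs quantization with parameter-efficient tuning: quantization shrinks the weights, and LoRA shrinks the optimizer state by making only a tiny fraction of parameters trainable.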
This event is inspired by DeepLearning.AI’s GenAI short courses, created in collaboration with AI companies across the globe. Our courses help you learn new skills, tools, and concepts efficiently within 1 hour.
www.deeplearning.ai/short-cou...
Here is the link to the notebook used in the workshop:
pbase.ai/FineTuneLlama
Speakers:
Piero Molino, Co-founder and CEO of Predibase
/ pieromolino
Travis Addair, Co-founder and CTO of Predibase
/ travisaddair

Comments: 61
@thelinuxkid · 9 months ago
Very helpful! Already trained llama-2 with custom classifications using the cookbook. Thanks!
@dinupavithran · 6 months ago
Very informative. Direct and to-the-point content in an easily understandable presentation.
@manojselvakumar4262 · 6 months ago
Great content, well presented!
@Ev3ntHorizon · 8 months ago
Excellent coverage, thank you.
@karanjakhar · 8 months ago
Really helpful. Thank you 👍
@msfasha · 9 months ago
Clear and informative, thanx.
@thedelicatecook2 · 1 month ago
Well this was simply excellent, thank you 🙏🏻
@Ay-fj6xf · 7 months ago
Great video, thank you!
@tomhavy · 9 months ago
Thank you!
@nguyenanhnguyen7658 · 8 months ago
Very helpful. Thanks.
@andres.yodars · 9 months ago
One of the most complete videos. Must watch
@jirikosek3714 · 9 months ago
Great job, thumbs up!
@ab8891 · 9 months ago
Excellent crystal-clear surgery on GPU VRAM utilization...
@goelnikhils · 9 months ago
Amazing content on fine-tuning LLMs.
@KarimMarbouh · 9 months ago
🖖 alignment by sectoring hyperparameters in behaviour, nice one
@rgeromegnace · 9 months ago
Eh, that was great. Thanks a lot!
@rajgothi2633 · 7 months ago
amazing video
@bachbouch · 7 months ago
Amazing ❤
@hemanth8195 · 9 months ago
Thank you
@ggm4857 · 9 months ago
I would like to kindly request @DeepLearningAI to prepare a similar hands-on workshop on fine-tuning source-code models.
@Deeplearningai · 9 months ago
Don't miss our short course on the subject! www.deeplearning.ai/short-courses/finetuning-large-language-models/
@ggm4857 · 9 months ago
@Deeplearningai Wow, thanks.
@user-fc5nz9wp2o · 9 months ago
Cool video. If I want to fine-tune it on a single specific task (keyword extraction), should I first train an instruction-tuned model and then train that on my specific task? Or mix the datasets together?
@shubhramishra8698 · 9 months ago
Also working on keyword extraction! I was wondering if you'd had any success fine-tuning?
@nekro9t2 · 9 months ago
Please can you provide a link to the slides?
@TheGargalon · 8 months ago
And I was under the delusion that I would be able to fine-tune the 70B param model on my 4090. Oh well...
@iukeay · 7 months ago
I got a 40b model working on a 4090
@TheGargalon · 7 months ago
@iukeay Did you fine-tune it, or just run inference?
@ahsanulhaque4811 · 3 months ago
70B param? hahaha.
@ayushyadav-bm2to · 4 months ago
What's the music in the beginning? Can't shake it off.
@zubairdotnet · 9 months ago
An Nvidia H100 GPU on Lambda Labs is just $2/hr; I've been using it for the past few months, unlike the $12.29/hr on AWS shown in the slide. I get it, it's still not cheap, but worth mentioning here.
@pieromolino_pb · 9 months ago
You are right. We reported the AWS price there as it's the most popular option, and it was not practical to show the pricing of every vendor. But yes, you can get them cheaper elsewhere, like from Lambda; thanks for pointing it out.
@rankun203 · 9 months ago
Last time I tried, H100s were out of stock on Lambda.
@zubairdotnet · 9 months ago
@rankun203 They are available only in specific regions (mine is in Utah). I don't think they have expanded yet, plus there is no storage available in that region, meaning if you shut down your instance, all data is lost.
@Abraham_doestech · 9 months ago
Together AI is $1.4/hr on your own fine-tuned model :)
@PieroMolino · 9 months ago
@Abraham_doestech Predibase is cheaper than that.
@ggm4857 · 9 months ago
Hello everyone, I would be so happy if the recorded video had captions/subtitles.
@kaifeekhan_25 · 9 months ago
Right
@dmf500 · 9 months ago
it does, you just have to enable it! 😂
@kaifeekhan_25 · 9 months ago
@dmf500 Now it is enabled 😂
@stalinamirtharaj1353 · 8 months ago
@pieromolino_pb Does Ludwig allow locally downloading and deploying the fine-tuned model?
@nminhptnk · 8 months ago
I ran Colab on a T4 and still got "RuntimeError: CUDA out of memory". Anything else I can do, please?
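When the T4's 16 GB still isn't enough, the usual next step is to shrink the per-step footprint further. A sketch of the memory-relevant knobs, written as a Ludwig-style config dict (key names follow Ludwig's LLM fine-tuning schema as presented in the workshop era; verify them against your installed version before relying on them):

```python
# Hypothetical Ludwig-style config. The memory-relevant knobs are
# 4-bit quantization, a LoRA adapter, and a batch size of 1 with
# gradient accumulation to preserve the effective batch size.
config = {
    "model_type": "llm",
    "base_model": "meta-llama/Llama-2-7b-hf",
    "quantization": {"bits": 4},       # QLoRA: 4-bit base weights
    "adapter": {"type": "lora"},       # train only small adapter matrices
    "trainer": {
        "type": "finetune",
        "batch_size": 1,               # smallest per-step footprint
        "gradient_accumulation_steps": 16,  # effective batch size of 16
    },
}
```

Shorter sequence lengths also help, since activation memory grows with sequence length even when weights are quantized.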
@feysalmustak9604 · 9 months ago
How long did the entire training process take?
@edwardduda4222 · 2 months ago
It depends on your hardware, dataset, and the hyperparameters you're manipulating. The training process is the longest phase in developing a model.
@arjunaaround4013 · 9 months ago
❤❤❤
@PickaxeAI · 9 months ago
At 51:30 he says not to repeat the same prompt in the training data. What if I am fine-tuning the model on a single task but with thousands of different inputs for the same prompt?
@brandtbeal880 · 9 months ago
It will cause overfitting. It would be similar to training an image classifier with 1,000 pictures of roses and only one lily, then asking it to predict both classes with good accuracy. You want the data to have a normal distribution around your problem space.
@satyamgupta2182 · 8 months ago
@PickaxeAI Did you come across a solution for this?
@manojselvakumar4262 · 6 months ago
Can you give an example of the task? I'm trying to understand in what situation you'd need different completions for the same prompt.
@kevinehsani3358 · 9 months ago
epochs=3: since we are fine-tuning, would epochs=1 suffice?
@pieromolino_pb · 9 months ago
It really depends on the dataset. Ludwig also has an early-stopping mechanism where you can specify the number of epochs (or steps) without improvement before stopping, so you could set epochs to a relatively large number and have early stopping take care of not wasting compute time.
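The pattern Piero describes can be sketched as a trainer config fragment (key names assumed from Ludwig's trainer schema, where `early_stop` counts rounds of evaluation without improvement; check your version's docs):

```python
# Set epochs generously and let early stopping end the run once the
# validation metric plateaus, rather than guessing the right epoch
# count up front. Key names are assumptions based on Ludwig's schema.
trainer_config = {
    "type": "finetune",
    "epochs": 20,      # upper bound, rarely reached in practice
    "early_stop": 3,   # stop after 3 evaluations with no improvement
}
```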
@leepro · 1 month ago
Cool! ❤
@Neberheim · 6 months ago
This seems to make a case for Apple Silicon for training. The M3 Max performs close to an RTX 3080, but with access to up to 192 GB of memory.
@ahsanulhaque4811 · 3 months ago
Did you try on an Apple Silicon M1 Max?
@mohammadrezagh4881 · 8 months ago
When I run the code in Perform Inference, I frequently receive "ValueError: If `eos_token_id` is defined, make sure that `pad_token_id` is defined." What should I do?
@arnavgrg · 8 months ago
This is now fixed on Ludwig master!
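For anyone hitting this outside Ludwig: Llama-style tokenizers ship without a padding token, and a common workaround with Hugging Face tokenizers is to reuse the EOS token as the pad token before generation. A minimal sketch (the helper function and the stub class are illustrative; a real `AutoTokenizer` exposes the same two attributes):

```python
def ensure_pad_token(tokenizer):
    """If the tokenizer has no pad token, fall back to its EOS token.
    Mirrors the common fix for pad_token_id errors during generate()."""
    if getattr(tokenizer, "pad_token", None) is None:
        tokenizer.pad_token = tokenizer.eos_token
    return tokenizer

# Stand-in object for illustration only; in practice you would pass
# the tokenizer returned by AutoTokenizer.from_pretrained(...).
class StubTokenizer:
    pad_token = None
    eos_token = "</s>"

tok = ensure_pad_token(StubTokenizer())
print(tok.pad_token)  # prints: </s>
```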
@SDAravind · 8 months ago
Can you share the slides, please?