Fine-tune LLama2 w/ PEFT, LoRA, 4bit, TRL, SFT code

16,676 views

code_your_own_AI
10 months ago

Code walkthrough of how to fine-tune the Llama 2 model with parameter-efficient fine-tuning (PEFT): LoRA, a low-rank approximation of matrix and tensor structures; 4-bit quantization of tensors; transformer-based Reinforcement Learning (TRL); and Hugging Face's Supervised Fine-tuning Trainer (SFT).
Plus, we code a synthetic dataset for our Llama 2 model to fine-tune on, with GPT-4 (or your preferred Claude 2 or ....) as the central intelligence, creating task-specific datasets from a given user query to fine-tune LLMs on.
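For orientation before watching: below is a minimal sketch of the stack the description names (4-bit quantization via bitsandbytes, a LoRA adapter via PEFT, and TRL's SFTTrainer). The model name, dataset file, and every hyperparameter here are illustrative rather than the video's exact values, and the SFTTrainer keyword arguments follow the 2023-era trl API (newer releases move some of them into SFTConfig).

```python
# Minimal sketch: QLoRA-style fine-tuning of Llama 2 with PEFT + TRL's SFTTrainer.
# All names and hyperparameters are illustrative, not the video's exact code.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig
from trl import SFTTrainer

model_name = "meta-llama/Llama-2-7b-hf"  # gated on Hugging Face; access approval required

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit quantization of the base weights
    bnb_4bit_quant_type="nf4",              # NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # matmuls run in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token   # Llama ships without a pad token

peft_config = LoraConfig(                   # low-rank adapters on attention projections
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Hypothetical JSONL file with a "text" column (see the dataset sketch below).
dataset = load_dataset("json", data_files="synthetic_dataset.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=512,
    args=TrainingArguments(
        output_dir="llama2-sft",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        logging_steps=10,
    ),
)
trainer.train()
```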
All rights remain with Matt Shumer for his Jupyter notebook on fine-tuning the Llama 2 model:
colab.research.google.com/dri...
See also Matt Shumer's Github repo for the GPT-LLM-Trainer:
github.com/mshumer/gpt-llm-tr...
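The dataset generation itself is implemented in Matt Shumer's notebook; purely as an illustration of the idea (not his code), here is a minimal sketch that asks GPT-4 for prompt/response pairs using the current openai Python client. The task string, example count, and output file name are all assumptions.

```python
# Minimal sketch of synthetic-dataset generation with GPT-4 as the "central
# intelligence". Not the GPT-LLM-Trainer code; all specifics are illustrative.
# Assumes OPENAI_API_KEY is set in the environment.
import json
from openai import OpenAI

client = OpenAI()
task = "Explain Kubernetes concepts to a junior engineer."  # hypothetical user query

pairs = []
for _ in range(50):  # number of examples is illustrative
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You write training examples for fine-tuning an LLM. "
                        'Reply with JSON: {"prompt": ..., "response": ...}.'},
            {"role": "user", "content": f"Create one new example for this task: {task}"},
        ],
    )
    try:
        pairs.append(json.loads(resp.choices[0].message.content))
    except json.JSONDecodeError:
        continue  # a sketch, not production code: skip malformed generations

# Flatten into the single "text" column the SFT sketch above expects.
with open("synthetic_dataset.jsonl", "w") as f:
    for p in pairs:
        text = f"### Instruction:\n{p['prompt']}\n\n### Response:\n{p['response']}"
        f.write(json.dumps({"text": text}) + "\n")
```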
#gpt
#finetuning
#llama2

Comments: 23
@lifsys 7 months ago
Fantastic! Appreciate the knowledge you are sharing.
@lifeofcode 9 months ago
Bro, I appreciate you so much for this fire content you've been pumping out. After checking you out over the past week, you have gained a subscriber for sure. Great stuff, please keep this up!!
@echofloripa 9 months ago
I guess you need a better LLM in order to improve Llama2.
@code4AI 9 months ago
No, not a better LLM. In my next video I show a different way ...
@echofloripa 9 months ago
@@code4AI interesting, looking forward to that!
@elrecreoadan878 8 months ago
Awesome content! When is it adequate to fine-tune an LLM instead of, or as a complement to, the Botpress knowledge base?
@dustingifford214 9 months ago
Do you have a Discord community? I have been following you for a while now and have so many questions. BTW, this is amazing, but I really want to talk more about Instructor embeddings, a FAISS DB, and instruction fine-tuning something really small like Flan-T5 small/base. I'm curious: with PEFT/LoRA's ability to freeze the base model's weights and manipulate only the adapters, would we be able to run a real form of intelligence on a CPU? I know the amount of data would be a lot, but would we be able to see fair results? Sorry in advance if this is the wrong place for this question.
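On the CPU question above: as a rough sketch only (model choice and LoRA settings are illustrative), LoRA via PEFT does freeze the base weights and trains only small adapters, and a model as small as Flan-T5 small is generally light enough to run on a CPU.

```python
# Sketch: LoRA on flan-t5-small with frozen base weights, small enough for CPU.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")  # ~80M params

config = LoraConfig(
    r=8, lora_alpha=16,
    target_modules=["q", "v"],        # T5's attention projection module names
    task_type=TaskType.SEQ_2_SEQ_LM,
)
model = get_peft_model(base, config)  # base model frozen, only adapters trainable
model.print_trainable_parameters()    # typically well under 1% of total parameters
```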
@echofloripa 9 months ago
Is it possible to train, in the same training run, on a dataset made of prompt/response pairs and full text files?
@moonly3781 3 months ago
Thank you for this video!! I'm new to fine-tuning and trying to understand more about it. Can someone explain whether test and evaluation datasets are needed for instruction datasets? I'm not quite sure how test and evaluation datasets work with instruction data. Additionally, I'd love to know the best percentage split for instruction fine-tuning on a dataset of 5K rows. Would a 10-10-80 or a 20-20-60 split be more suitable? Any advice would be greatly appreciated!
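On the split question: held-out evaluation and test sets are still useful with instruction data, to monitor loss during training and to judge the final model. A common convention is roughly 80/10/10; as a sketch with a hypothetical file name:

```python
# Sketch: 80/10/10 train/eval/test split of a 5K-row instruction dataset.
from datasets import load_dataset

ds = load_dataset("json", data_files="instructions.jsonl", split="train")   # hypothetical file
split = ds.train_test_split(test_size=0.2, seed=42)                         # 80% train / 20% held out
held = split["test"].train_test_split(test_size=0.5, seed=42)               # halve the held-out part
train_ds, eval_ds, test_ds = split["train"], held["train"], held["test"]
```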
@vs7438 9 months ago
Thanks for sharing.... Do you know if this one can be tuned in 8-bit? The 8-bit method you mentioned does not apply to this one.
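For what it's worth, the same bitsandbytes path used for the 4-bit load also exposes an 8-bit variant; whether it fits a given GPU is a separate question. A sketch with an illustrative model name:

```python
# Sketch: loading the base model in 8-bit instead of 4-bit via bitsandbytes.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",              # illustrative model choice
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
```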
@akeshagarwal794 9 months ago
So in reinforcement learning, was the reward model Llama 2 itself or GPT-4?
@user-wr4yl7tx3w 9 months ago
How long did it take to run the Colab notebook using a T4 GPU or TPU?
@hunkims 8 months ago
Why do we need to merge the model again in the last stage?
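For context on the merge question: LoRA training saves only the small adapter weights, so a final step folds them back into the base model to produce one standalone checkpoint. A sketch, with assumed checkpoint paths:

```python
# Sketch: merging LoRA adapters back into the base weights after training.
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("llama2-sft")  # adapter checkpoint (assumed path)
merged = model.merge_and_unload()          # bake the low-rank deltas into the base weights
merged.save_pretrained("llama2-merged")    # loads with plain transformers, no peft required
```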
@echofloripa 9 months ago
Channel: "You know this..." Myself: "nooo, I don't, go back... " 😅😅😅
@code4AI 9 months ago
smile.... I know this feeling ...
@echofloripa 9 months ago
@@code4AI 😅😅😅
@echofloripa 9 months ago
Can I run the Colab NB on a free account?
@redgenAI 9 months ago
Could we do this without OpenAI, using something completely offline?
@AdamBrusselback 9 months ago
Honestly, no, not yet. As the local 70B models improve, they will become better at extracting and generating synthetic data, so it may become possible. I wasted a bunch of time trying to use local models for parts of my data pipeline and couldn't get anything close to as reliable as GPT-3.5 Turbo, and even that was only capable of handling a handful of the tasks in my data pipeline.
@phoenixfire6559 9 months ago
Yes, you can: just skip the OpenAI code and load in your own dataset. There are quite a few optimisations that can be applied to this code (flash attention, packing, higher gradient accumulation, more regular validation checking, etc., as in the sketch below), but it's a decent place to start fine-tuning. There are plenty of examples online of fine-tuning on personal datasets.
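A sketch of the optimisations that comment lists, with illustrative values; the flag names follow the transformers/trl releases of that era and may differ in newer or older versions:

```python
# Sketch: the optimisations mentioned above, layered onto the earlier SFT setup.
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

# Flash Attention 2 (requires the flash-attn package and a supported GPU):
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",              # illustrative model choice
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)

args = TrainingArguments(
    output_dir="llama2-sft-optimised",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,   # higher accumulation -> larger effective batch
    evaluation_strategy="steps",      # more regular validation checking
    eval_steps=50,
    logging_steps=10,
)

# In trl's SFTTrainer, packing=True additionally concatenates short samples
# into full-length sequences so fewer tokens are wasted on padding.
```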
@phoenixfire6559 9 months ago
@@AdamBrusselback Generally, an LLM fine-tuned on a SINGLE task is better than GPT-3.5 Turbo. A single task does not mean "summarisation"; it means "summarise this medical document in this style", i.e. something specific. If GPT-3.5 is still better after a fine-tune on a specific task, then there is a whole host of reasons, usually user error, why the fine-tune failed, e.g. poor initial model choice (e.g. an inappropriate vocab list), poor-quality data, not enough data, poor fine-tuning parameters, etc. Remember, this holds only for a specific task. However, if you go beyond a limited scope or need generalisation, GPT-3.5 will trounce it, simply because it has more processing power and better training data.
@madarauchiha2584 8 months ago
V400 is not free though