Need help with AI? Book a call: calendly.com/shawhintalebi
In this video, I discuss how to fine-tune an LLM using QLoRA (i.e., Quantized Low-Rank Adaptation). Example code is provided for training a custom YouTube comment responder using Mistral-7b-Instruct.
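For reference, the approach can be sketched as a short configuration using the Hugging Face transformers, peft, and bitsandbytes libraries. This is a minimal sketch, not the exact code from the video or Colab; the checkpoint name and hyperparameters (r, lora_alpha, target_modules) are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model

# Ingredients 1-2: load the frozen base model in 4-bit NormalFloat (NF4)
# with double quantization (the quantization constants are themselves quantized)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",  # illustrative checkpoint name
    quantization_config=bnb_config,
)

# Ingredient 4: attach small trainable low-rank (LoRA) adapters to the frozen base
lora_config = LoraConfig(
    r=8,                                  # illustrative rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # which layers get adapters
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Ingredient 3: a paged optimizer to absorb GPU memory spikes during training
training_args = TrainingArguments(output_dir="qlora-out", optim="paged_adamw_8bit")
```

See the Colab and GitHub links below for the complete, runnable version used in the video.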
👉 Series Playlist: • Large Language Models ...
🎥 Fine-tuning with OpenAI: • 3 Ways to Make a Custo...
📰 Read more: medium.com/towards-data-scien...
💻 Colab: colab.research.google.com/dri...
💻 GitHub: github.com/ShawhinT/YouTube-B...
🤗 Model: huggingface.co/shawhin/shawgp...
🤗 Dataset: huggingface.co/datasets/shawh...
Resources
[1] Fine-tuning LLMs: • Fine-tuning Large Lang...
[2] ZeRO paper: arxiv.org/abs/1910.02054
[3] QLoRA paper: arxiv.org/abs/2305.14314
[4] Phi-1 paper: arxiv.org/abs/2306.11644
[5] LoRA paper: arxiv.org/abs/2106.09685
--
Homepage: shawhintalebi.com/
Socials
/ shawhin
/ shawhintalebi
/ shawhint
/ shawhintalebi
The Data Entrepreneurs
🎥 YouTube: / @thedataentrepreneurs
👉 Discord: / discord
📰 Medium: / the-data
📅 Events: lu.ma/tde
🗞️ Newsletter: the-data-entrepreneurs.ck.pag...
Support ❤️
www.buymeacoffee.com/shawhint
Intro - 0:00
Fine-tuning (recap) - 0:45
LLMs are (computationally) expensive - 1:22
What is Quantization? - 4:49
4 Ingredients of QLoRA - 7:10
Ingredient 1: 4-bit NormalFloat - 7:28
Ingredient 2: Double Quantization - 9:54
Ingredient 3: Paged Optimizer - 13:45
Ingredient 4: LoRA - 15:40
Bringing it all together - 18:24
Example code: Fine-tuning Mistral-7b-Instruct for YT Comments - 20:35
What's Next? - 35:22