Fine-tune my Coding-LLM w/ PEFT LoRA Quantization

4,489 views

Discover AI

A day ago

Coding LLMs are trained on old data. Even the latest GPT-4 Turbo Code Interpreter (CI) has a knowledge cut-off of April 2023, so none of the AI research from the last seven months is in the training data of commercial coding LLMs. And retrieving lines of code via RAG does not help at all, given the complex interdependencies of code libraries.
An elegant solution for AI researchers is therefore to fine-tune their own coding LLM on the latest GitHub repos and coding data. That is exactly the content of this video: how to fine-tune your personal coding LLM (or a copilot like Microsoft's GitHub Copilot, or any code LLM like StarCoder).
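
As a rough sketch of that workflow, the example below loads a StarCoder-style base model in 4-bit, attaches LoRA adapters via PEFT, and trains on a plain-text file of recent repository code. The model id (bigcode/starcoderbase-1b), the dataset file name, the target modules, and the hyperparameters are illustrative assumptions, not the exact configuration shown in the video.

# Minimal sketch of quantized PEFT LoRA fine-tuning for a coding LLM.
# Model name, dataset file, and hyperparameters are illustrative assumptions.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          TrainingArguments, Trainer, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "bigcode/starcoderbase-1b"  # assumed small code model for the sketch

# 4-bit NF4 quantization so the frozen base model fits on a single consumer GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters: only a few million trainable parameters on top of the 4-bit base
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPTBigCode/StarCoder-style models (assumption)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Recent code as training data, e.g. freshly scraped GitHub files (placeholder file name)
dataset = load_dataset("text", data_files={"train": "recent_repo_code.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="coding-llm-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        num_train_epochs=1,
        logging_steps=10,
        bf16=True,  # assumes an Ampere-or-newer GPU; use fp16=True otherwise
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("coding-llm-lora")  # saves only the LoRA adapter weights

Because only the LoRA matrices are trained while the quantized base stays frozen, a run like this typically fits on one consumer GPU and the saved adapter is only a few dozen megabytes.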
#ai
#coding
#pythonprogramming

Comments: 8
@javiergimenezmoya86 10 months ago
This channel has supreme quality.
@sup5356 10 months ago
Outstanding content, beautifully conveyed
@FaizanKhan-x1g9u 10 months ago
Thanks for sharing the knowledge. ❤
@unclecode 10 months ago
I am at 00:02:40 and already super excited about what you decided to create a video on. Amazing. Thanks
@unclecode 10 months ago
I finished the video. I really appreciate the way you put all this valuable information together. Looking forward to part 2.
@shayanshamsi7540 10 months ago
Thank you for this amazing video! Could you please make a coding walkthrough for soft prompts?
@daryladhityahenry 10 months ago
Hi! I have a question about the DeepSpeed and Accelerate libraries from Hugging Face. They say they can train a portion of the model on each device, so a model can be spread across multiple GPUs. Can they also act as a scheduler? I mean, if yes, then we could just use one GPU and train for a longer time, running in sequence the training batches that would otherwise be spread across GPUs. I hope I can use my own GPU to train @_@.
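
On the question above: sharding a model across devices is not the same thing as scheduling, but the usual single-GPU substitute for a large multi-GPU batch is gradient accumulation, which Hugging Face Accelerate supports directly. A minimal sketch with a toy model and assumed accumulation settings (not taken from the video):

# Emulating a larger multi-GPU batch on one GPU by accumulating gradients
# over several micro-batches with Hugging Face Accelerate.
# Toy model, random data, and step counts are placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=8)  # 8 micro-batches ~ one "virtual" large batch

model = torch.nn.Linear(128, 2)  # stand-in for the real LLM
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
data = TensorDataset(torch.randn(256, 128), torch.randint(0, 2, (256,)))
loader = DataLoader(data, batch_size=4)

model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

model.train()
for inputs, labels in loader:
    # Gradients are applied only every 8th micro-batch; in between they
    # accumulate, trading wall-clock time for memory instead of extra GPUs.
    with accelerator.accumulate(model):
        loss = torch.nn.functional.cross_entropy(model(inputs), labels)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()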
@idoronen9497 10 months ago
Appreciate the video! Can I apply the same technique to fine-tune a model like LLaVA?
Fine-tune my Coding-LLM w/ PEFT LoRA Quantization - PART 2
24:45
Discover AI
2.2K views
RAG optimized PEFT-LoRA: Your Questions answered
32:14
Discover AI
4.7K views
New LLM-Quantization LoftQ outperforms QLoRA
14:15
Discover AI
4.6K views
Fine-tuning LLMs with PEFT and LoRA
15:35
Sam Witteveen
126K views
Fine-tuning Large Language Models (LLMs) | w/ Example Code
28:18
Shaw Talebi
321K views
New: AI Agent Self-Improvement + Self-Fine-Tune
37:46
Discover AI
10K views
LoRA explained (and a bit about precision and quantization)
17:07
GraphRAG: The Marriage of Knowledge Graphs and RAG: Emil Eifrem
19:15
Graph-of-Thoughts (GoT) for AI reasoning Agents
41:34
Discover AI
14K views
StarCoder - The LLM to make you a coding star?
17:11
Sam Witteveen
18K views