WizardCoder 34B: Complex Fine-Tuning Explained

4,276 views

code_your_own_AI

1 day ago

The significant performance difference of WizardCoder-Python-34B-V1.0 (based on Code Llama 2), compared to plain Code Llama - Python 34B, explained.
The difference is the added complexity cascade in the Evol-Instruct (evolving instruction) fine-tuning.
original source (all rights with authors):
github.com/nlpxucan/WizardLM
#agents
#llama2
#codegeneration
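The "complexity cascade" refers to WizardLM's Evol-Instruct idea: starting from a seed instruction, an LLM repeatedly rewrites it into a harder version, and the model is fine-tuned on the evolved set. Below is a minimal Python sketch of how such evolution prompts could be constructed; the operation wordings and the `evolve` helper are illustrative assumptions, not the authors' exact prompts, and the actual rewriting step (done by ChatGPT in the paper) is only indicated by a comment.

```python
import random

# Illustrative in-depth evolution operations (assumed wording, not the paper's exact text).
DEPTH_OPS = [
    "Add one more constraint or requirement to the task.",
    "Require handling of an edge case (e.g. empty input).",
    "Ask for reasoning about time and space complexity in the solution.",
]

def evolve(instruction: str, rounds: int = 3, seed: int = 0) -> list[str]:
    """Return the cascade of increasingly complex instruction prompts."""
    rng = random.Random(seed)
    cascade = [instruction]
    for _ in range(rounds):
        op = rng.choice(DEPTH_OPS)
        # In the real pipeline an LLM rewrites the task given this prompt;
        # here we only show the prompt that would be sent.
        prompt = f"{op}\n\n#Given Task#:\n{cascade[-1]}"
        cascade.append(prompt)
    return cascade

cascade = evolve("Write a function that reverses a string.")
```

Each element of `cascade` after the first wraps the previous instruction in a harder rewrite request, which is what produces the escalating difficulty of the training data.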

Comments: 5
@Nick_With_A_Stick 10 months ago
This video is just perfect, literally exactly what I needed. Thanks for your hard work! When I make my datasets I use Evol and Orca with a human-guided prompting technique; it tends to work pretty well.
@cecilsalas8721 10 months ago
Excellent content! 😊
@yannickpezeu3419 10 months ago
Thanks!
@ashwinkotgire1372 8 months ago
Can you please tell me whether the instruction fine-tuning was done on the Llama-chat or the base Llama model? And was the GPT model used to generate the data ChatGPT or a base GPT model provided by OpenAI?
@fernandofernandesneto7238 10 months ago
Fantastic explanation. By the way, the LangChain LLM does not exhibit critic/reasoning capabilities... that's what would be missing (am I right?)