
Fine-tuning Open Source LLMs with Mistral | Tokenization & Model Performance

554 views

DataCamp

1 day ago

While cutting-edge large language models can write almost any text you like, they are expensive to run. You can get the same performance for less money by using a smaller model and fine-tuning it to your needs.
In this session, Andrea, a Computing Engineer at CERN, and Josep, a Data Scientist at the Catalan Tourist Board, will walk you through the steps needed to customize the open-source Mistral LLM. You'll learn about choosing a suitable LLM, getting training data, tokenization, evaluating model performance, and best practices for fine-tuning.
Key Takeaways:
- Learn how to fine-tune a large language model using the Hugging Face Python ecosystem.
- Learn about the steps to prepare for fine-tuning and how to evaluate your success.
- Learn about best practices for fine-tuning models.
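The session works with the Hugging Face Python ecosystem (transformers, datasets, peft). As a rough illustration of the workflow it covers, here is a minimal sketch of tokenizing a dataset and fine-tuning Mistral-7B with LoRA adapters. The checkpoint, dataset, and hyperparameters below are placeholder assumptions for illustration, not the session's own code.

```python
# Minimal sketch (assumptions: mistralai/Mistral-7B-v0.1 as the base model,
# a small slice of the IMDB dataset as placeholder training data).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint

# Tokenization: turn raw text into the token IDs the model expects.
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token

dataset = load_dataset("imdb", split="train[:1%]")  # placeholder training data

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Wrap the base model with LoRA adapters so only a small fraction of
# parameters is trained, which keeps fine-tuning affordable.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="mistral-finetuned",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Parameter-efficient approaches like LoRA are what make tuning a 7B-parameter model practical on modest hardware; checking perplexity or task-specific metrics on a held-out split is a common first way to evaluate whether the fine-tuning actually helped.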
