Fine-tuning Open Source LLMs with Mistral | Tokenization & Model Performance

554 views

DataCamp

1 day ago

While cutting-edge large language models can write almost any text you like, they are expensive to run. You can often get comparable performance at a fraction of the cost by taking a smaller model and fine-tuning it for your needs.
In this session, Andrea, a Computing Engineer at CERN, and Josep, a Data Scientist at the Catalan Tourist Board, walk you through the steps needed to customize the open-source Mistral LLM. You'll learn how to choose a suitable LLM, obtain training data, handle tokenization, evaluate model performance, and apply best practices for fine-tuning.
Key Takeaways:
- How to fine-tune a large language model using the Hugging Face Python ecosystem.
- How to prepare for fine-tuning and evaluate whether it succeeded.
- Best practices for fine-tuning models.
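The tokenization step mentioned above can be illustrated with a toy byte-pair-encoding (BPE) pass. BPE is the family of algorithm behind most modern LLM tokenizers, including Mistral's, but this pure-Python sketch is a simplification for intuition only, not the Hugging Face implementation used in the session:

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Return the most common adjacent pair of tokens."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with one merged token."""
    merged, i = [], 0
    while i < len(tokens):
        if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

def bpe_tokenize(text, num_merges):
    """Start from characters and greedily merge the most frequent pair."""
    tokens = list(text)
    for _ in range(num_merges):
        if len(tokens) < 2:
            break
        tokens = merge_pair(tokens, most_frequent_pair(tokens))
    return tokens

print(bpe_tokenize("low lower lowest", 4))
```

Each merge shortens the token sequence while keeping the text recoverable by concatenation; real tokenizers learn these merges once from a large corpus and then reuse the fixed merge table at inference time.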
