Fine-tuning Tiny LLM on Your Data | Sentiment Analysis with TinyLlama and LoRA on a Single GPU

14,164 views

Venelin Valkov

A day ago

Comments: 15
@venelin_valkov 7 months ago
Full text tutorial (requires MLExpert Pro): www.mlexpert.io/bootcamp/fine-tuning-tiny-llm-on-custom-dataset
@ansea1234 7 months ago
Thank you very much for this wonderful video. Among other things, the details you give are really useful!
@geniusxbyofejiroagbaduta8665 7 months ago
Thanks for this tutorial
@xugefu 7 months ago
Thanks!
@researchforumonline 7 months ago
Thanks
@ziddiengineer 7 months ago
Can you send the notebook for this tutorial?
@temiwale88 5 months ago
Hello. Thank you for this work! I don't see the Jupyter notebook in the GitHub repo.
@unclecode 3 months ago
Such a timely tutorial! I'm working on some SLMs and need insights on fine-tuning parameters. Your video is a huge help, thanks for that! I couldn't find the Colab for this project in the repository; any chance the Colab is available? By the way, I'm one of your MLExpert members.
@venelin_valkov 3 months ago
Here is the Colab link: colab.research.google.com/github/curiousily/AI-Bootcamp/blob/master/08.llm-fine-tuning.ipynb (from the GitHub repo: github.com/curiousily/AI-Bootcamp). Thank you for watching and subscribing!
@devtest202 6 months ago
Hi, thanks!! A question about a model for which I have more than 2,000 PDFs: do you recommend improving the handling of vector databases? When do you recommend fine-tuning and when do you recommend a vector database?
@Iiochilios1756 6 months ago
Please explain one interesting point: first you add a special token and then enlarge the embedding matrix to take this new token into account. At that point the new embedding is initialized with random values. Later you apply LoRA to the target modules, and the embedding layer is absent from that list. My questions: 1) When is the new embedding you just added actually trained? The original model is frozen; only the LoRA layers will be trained by the trainer. 2) Why not add ### Title, ### Text, and ### Prediction as special tokens and let them be part of the text?
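One common way to square this (an assumption on my part, sketched below; the video's exact config may differ) is PEFT's `modules_to_save`: it marks the resized embedding matrix and output head as fully trainable, so the new token's randomly initialized row does get gradient updates even though the rest of the base model stays frozen.

```python
# Hedged sketch, not the video's exact code: module names assume a
# Llama-style architecture (which TinyLlama uses), and the "<|pad|>"
# special token is a placeholder assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Add the new special token and enlarge the embedding matrix;
# the new row starts out randomly initialized.
tokenizer.add_special_tokens({"pad_token": "<|pad|>"})
model.resize_token_embeddings(len(tokenizer))

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    # LoRA adapters go on the attention projections.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    # Fully train (and save) the resized embedding and output head,
    # so the new token's embedding is actually learned.
    modules_to_save=["embed_tokens", "lm_head"],
)
model = get_peft_model(model, lora_config)
```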
@mohamedkeddache4202 7 months ago
Thanks for the video. But... is that a language model? I don't know a lot about AI, but it looks like multi-class classification. An LLM is supposed to be like ChatGPT, right?
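For what it's worth, it is still a language model: the fine-tuned model generates the sentiment label as text rather than producing class logits the way a classification head would. A hedged sketch of what inference might look like (the prompt format below is an assumption, not taken from the video):

```python
# Hedged sketch: a causal LM "classifies" by generating the label
# as text. The "### Text / ### Prediction" format is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "### Text: I loved this movie!\n### Prediction:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=4)
# Decode only the newly generated tokens (the predicted label).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```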
@onoff5604 7 months ago
Kind of hard to tell if this is a close match to my needs... when I can't see anything at all...
@trollerninja4356 4 months ago
I'm getting NameError: DataCollatorForCompletionOnlyLM not found. I also checked the docs and didn't find any class named DataCollatorForCompletionOnlyLM.
@kekuramusa 3 months ago
from trl import DataCollatorForCompletionOnlyLM
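That import is the fix: the class lives in trl, not transformers (and its availability is version-dependent, so pin a trl release that ships it). A minimal usage sketch, with the response template string assumed rather than taken from the video:

```python
# Minimal sketch; assumes a trl version that provides
# DataCollatorForCompletionOnlyLM and a TinyLlama tokenizer.
from transformers import AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Masks everything before the response template so the loss is
# computed only on the completion (the predicted label).
collator = DataCollatorForCompletionOnlyLM(
    response_template="### Prediction:",
    tokenizer=tokenizer,
)
```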