LLAMA-3 🦙: EASIEST WAY TO FINE-TUNE ON YOUR DATA 🙌

44,158 views

Prompt Engineering

1 day ago

Learn how to fine-tune the latest Llama 3 on your own data with Unsloth.
🦾 Discord: / discord
☕ Buy me a Coffee: ko-fi.com/promptengineering
🔴 Patreon: / promptengineering
💼Consulting: calendly.com/engineerprompt/c...
📧 Business Contact: engineerprompt@gmail.com
Become Member: tinyurl.com/y5h28s6h
💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
Signup for Advanced RAG:
tally.so/r/3y9bb0
LINKS:
Announcement: llama.meta.com/llama3/
Meta Platform: meta.ai
unsloth.ai/
huggingface.co/unsloth
Notebook: tinyurl.com/4ez2rprt
Github Tutorial: github.com/PromtEngineer/Yout...
TIMESTAMPS:
[00:00] Fine-tuning Llama 3
[00:30] Deep Dive into Fine-Tuning with Unsloth
[01:28] Training Parameters and Data Preparation
[05:36] Setting training parameters with Unsloth
[11:03] Saving and Utilizing Your Fine-Tuned Model
All Interesting Videos:
Everything LangChain: • LangChain
Everything LLM: • Large Language Models
Everything Midjourney: • MidJourney Tutorials
AI Image Generation: • AI Image Generation Tu...

Comments: 77
@spicer41282 1 month ago
Thank you! More fine-tuning case studies on Llama 3, please! Your presentation on this is much appreciated 🙏
@engineerprompt 1 month ago
Will be making a lot more on it. Stay tuned.
@lemonsqueeezey 1 month ago
thank you so much for this useful video!
@Joe-tk8cx 1 month ago
Thank you so much for sharing, this was wonderful. I have a question: I'm a beginner in the LLM world, which playlist on your channel should I start from? Thank you
@mrtwtrn 20 days ago
Was having such a hard time training LLMs before this, thank you
@engineerprompt 20 days ago
glad it was helpful
@user-vt1qs1ge7m 1 day ago
Can you make a video on how to pass a test CSV to the fine-tuned model and get a response column?
@hadebeh2588 1 month ago
Thank you very much for your great video. I ran the notebook but did not manage to find the GGUF files on Hugging Face. I put in my HF token, but that did not work. Do I have to change the code?
@KleiAliaj-us9ip 1 month ago
Great video. But how do I add more than one dataset?
@agedbytes82 1 month ago
Amazing, thanks!
@engineerprompt 1 month ago
Glad you like it!
@loicbaconnier9150 1 month ago
Hello, impossible to generate GGUF, compilation problem… Did you try it?
@KleiAliaj 1 month ago
Great video mate. How can I add more than one dataset?
@scottlewis2653 29 days ago
Mediatek's Dimensity chips + Meta's Llama 3 AI = The dream team for on-device intelligence.
@StephenRayner 1 month ago
Excellent thank you
@skeiriyalance7274 10 days ago
How can I use my CSV as a dataset? I'm new to this.
@danielhanchen 1 month ago
Fantastic work and always love your videos! :)
@engineerprompt 1 month ago
Thank you
@pfifo_fast 25 days ago
This video lacks a lot of helpful info... Anyone can just open the examples and read them, just the same as you did. I would have liked extra detail and tips about how to actually do fine-tuning... Some of the topics I am struggling with include: how to load custom data, how to use a different prompt template, how to define validation data, when to use validation data, what learning rates are good, and how to determine how many epochs to run... I'm sorry buddy, but I have to give this video a thumbs down, as it honestly doesn't provide any useful info that isn't already in the notebook.
@shahzadiqbal7646 1 month ago
Can you make a video on how to use local Llama 3 to understand a large C++ or C# code base?
@iCode21 17 days ago
Search for Ollama.
@jannik3475 1 month ago
Is there a way to sort of "brand" Llama 3, so that the model responds to "Who are you?" with a custom answer? Thank you!
@engineerprompt 1 month ago
Yes, you can just add that as part of the system message.
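(For illustration, a minimal sketch, not from the video: prepending a custom identity as the system message with the Llama 3 Instruct chat template. The persona text is a made-up placeholder.)

```python
# Hypothetical sketch: "brand" the model by always prepending a custom system message.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

messages = [
    {"role": "system", "content": "You are Acme Assistant, a fine-tuned Llama 3 built by Acme Corp."},
    {"role": "user", "content": "Who are you?"},
]

# Builds the Llama 3 chat prompt; feed the result to model.generate() as usual.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```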
@metanulski 1 month ago
Regarding the save options: do I have to delete the parts that I don't want, or how does this work?
@engineerprompt 1 month ago
You can just comment out those parts. Put # in front of the lines you don't need.
@VerdonTrigance 1 month ago
How do you actually train models? I mean unsupervised training, where I have a set of documents and want the model to learn from them and perhaps pick up the author's 'style' or tendencies.
@PYETech 28 days ago
You need to create a process that transfers the knowledge in those documents into "prompt": "best output" pairs. Usually we use a team of agents to do it for us.
@metanulski 1 month ago
One more comment :-). This video is about fine-tuning a model, but there is no real explanation of why. We fine-tune with the standard Alpaca dataset, but there is no explanation of why. It would be great if you could do a follow-up and show us how to create datasets.
@SeeFoodDie 1 month ago
Thanks
@RodCoelho 1 month ago
How do you train a model by adding the knowledge from a book, which will likely only have one column of text?
@engineerprompt 1 month ago
In that case, you will have to convert the book into question-answer pairs and format them in a similar fashion. You can use an LLM to generate the QA pairs from the book.
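(A rough sketch of that idea, assuming the OpenAI Python client is available; the model name, chunk size, and prompt wording are placeholder choices, and the JSON parsing would need error handling in practice.)

```python
# Hypothetical sketch: split a book into chunks and ask an LLM for QA pairs per chunk.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def book_to_qa(book_text, chunk_size=2000):
    chunks = [book_text[i:i + chunk_size] for i in range(0, len(book_text), chunk_size)]
    pairs = []
    for chunk in chunks:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[{
                "role": "user",
                "content": 'Write 3 question-answer pairs about this passage as a JSON list '
                           'of {"question": ..., "answer": ...} objects:\n\n' + chunk,
            }],
        )
        pairs.extend(json.loads(resp.choices[0].message.content))
    # Map the pairs into the instruction/input/output format used in the notebook.
    return [{"instruction": p["question"], "input": "", "output": p["answer"]} for p in pairs]
```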
@kingofutopia 1 month ago
Awesome, thanks
@engineerprompt 1 month ago
🙏
@researchpaper7440 1 month ago
Great, that was quick!
@dogsmartsmart 1 month ago
Thank you! But can a Mac M3 Max use MLX to fine-tune?
@engineerprompt 1 month ago
Yes
@CharlesOkwuagwu 1 month ago
Hi, what if we have already downloaded a GGUF file? How do we use that locally?
@engineerprompt 1 month ago
I am not sure if you can do that. Will need to do further research on it.
@DemiGoodUA 1 month ago
Hi, nice video. But how do I fine-tune the model on my codebase?
@engineerprompt 1 month ago
You can use the same setup. Just replace the instruction and input with your code.
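(A hypothetical example of what such a record could look like, in the same instruction/input/output shape the Alpaca-style notebook expects; the content is made up.)

```python
# Hypothetical code fine-tuning record; build many of these from your own codebase.
code_record = {
    "instruction": "Explain what this function does and suggest a clearer name.",
    "input": "def f(xs):\n    return [x for x in xs if x % 2 == 0]",
    "output": "It returns only the even numbers from the list; a clearer name would be filter_even.",
}

# Load a list of such records in place of the Alpaca dataset, e.g.:
# from datasets import Dataset
# dataset = Dataset.from_list(list_of_records)
```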
@DemiGoodUA 1 month ago
@engineerprompt How do I divide code into "question-answer" pairs? Or can I place the whole codebase into a single instruction?
@pubgkiller2903 1 month ago
I have already fine-tuned using Unsloth for testing purposes.
@engineerprompt 1 month ago
Great, how are the results looking?
@pubgkiller2903 1 month ago
@engineerprompt Great results, and thanks for your support of the AI community.
@TheIITianExplorer 1 month ago
Bro, can you tell me about Unsloth, how is it different from plain QLoRA? Also, I used QLoRA for fine-tuning Llama 2, can I just paste in the Llama 3 model ID in its place? I hope you understood my question, waiting for your reply 😊
@pubgkiller2903 1 month ago
@TheIITianExplorer The Unsloth library is very useful for fine-tuning with the LoRA technique. QLoRA is quantization plus LoRA, so if you use Unsloth you will get the same output, as Unsloth already quantizes the LLMs.
@roopad8742 1 month ago
What datasets did you fine-tune it on? Have you run any benchmarks?
@georgevideosessions2321 1 month ago
Have you ever thought about writing a no-code, on-premise fine-tuning app?
@engineerprompt 1 month ago
There is AutoTrain for that.
@modicool 1 month ago
One thing I am unsure of is how to transform my data into a training set. I have the target format: the written body of work, but no "instruction" or "input" of course. I've seen some people try to generate it with ChatGPT, but this seems counter-intuitive. There must be an established method of actually manipulating data into a training set. Where is that piece?
@engineerprompt 1 month ago
You will need to have an {input, response} pair in order to fine-tune an instruct model. Unfortunately, there is no way around it unless you are just pre-training the base model.
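(A minimal sketch of that formatting step, modeled on the Alpaca-style template the notebook uses; `my_records` and the reuse of `tokenizer` from earlier cells are assumptions.)

```python
# Sketch: turn {instruction, input, output} records into prompt strings for the trainer.
from datasets import Dataset

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

def formatting_prompts_func(examples):
    texts = [
        alpaca_prompt.format(ins, inp, out) + tokenizer.eos_token  # EOS so generation stops
        for ins, inp, out in zip(examples["instruction"], examples["input"], examples["output"])
    ]
    return {"text": texts}

dataset = Dataset.from_list(my_records)  # my_records: your own {instruction, input, output} dicts
dataset = dataset.map(formatting_prompts_func, batched=True)
```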
@ashwinsveta 1 month ago
We fine
@jackdorsey3504 27 days ago
Sir, we cannot open the Colab website...
@jackdorsey3504 27 days ago
Already solved...
@user-lz8wv7rp1o 1 month ago
great
@tamim8540 1 month ago
Hello, can I fine-tune it using the Colab free version?
@engineerprompt 1 month ago
This is using the free version.
@cucciolo182 1 month ago
Next week Gemini 2 with text to video 😂
@metanulski 1 month ago
So 60 steps is too low. But what is a good number of steps?
@engineerprompt 1 month ago
Usually you want to set epochs to 1 or 2.
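(A minimal sketch, assuming the transformers/TRL training arguments used in the notebook: swap the fixed max_steps for full epochs. The other values are illustrative.)

```python
# Sketch: train for 1 full epoch instead of a fixed 60 steps.
from transformers import TrainingArguments

training_args = TrainingArguments(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    num_train_epochs=1,      # 1-2 full passes over the data, instead of max_steps=60
    learning_rate=2e-4,
    fp16=True,               # or bf16=True on GPUs that support it
    logging_steps=10,
    output_dir="outputs",
)
```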
@metanulski 1 month ago
@engineerprompt So 60 to 120 steps max, since one epoch is 60 steps?
@asadurrehman3591 1 month ago
Can I fine-tune using the free Colab GPU?
@engineerprompt 1 month ago
Yes, this uses the free Colab.
@asadurrehman3591 1 month ago
@engineerprompt Love you broooo
@HoneIrimana 1 month ago
They messed up releasing Llama 3 because it believes it is sentient.
@nikolavukcevic360 24 days ago
Why didn't you provide any examples of training? It would make this video 10 times better.
@engineerprompt 24 days ago
that is coming...
@anantkabra6825 7 days ago
Has anybody tried pushing to Hugging Face? I need help with that part, please reply to this message in case you have.
@engineerprompt 7 days ago
When you create an API key, make sure to enable the write permission on that key; otherwise, it won't upload the model.
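(A minimal sketch, not the exact notebook code: pushing the fine-tuned model and tokenizer to the Hugging Face Hub. `model` and `tokenizer` are the objects from the notebook, the repo name is a placeholder, and the token must have write access.)

```python
# Sketch: upload the fine-tuned weights and tokenizer; requires a token with "write" permission.
model.push_to_hub("your-username/llama3-finetuned", token="hf_...")      # model/adapter weights
tokenizer.push_to_hub("your-username/llama3-finetuned", token="hf_...")  # tokenizer files
```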
@Matlockization 15 days ago
It's a free AI from Zuckerberg........ that makes me wonder. And you have to agree to hand over contact info, and what else, I wonder?
@user-hn7cq5kk5y 13 days ago
Don't share trash
@piffdaddy420 19 days ago
you really should just make videos in your own language because who the fk can even understand what you are saying?