LLAMA-3 🦙: EASIEST WAY TO FINE-TUNE ON YOUR DATA 🙌

  92,344 views

Prompt Engineering

1 day ago

Comments: 105
@engineerprompt 7 months ago
If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag

@spicer41282 8 months ago
Thank you! More fine-tuning case studies on Llama 3, please! Much appreciated 🙏 your presentation on this!

@engineerprompt 8 months ago
Will be making a lot more on it. Stay tuned.
@pfifo_fast 8 months ago
This video lacks a lot of helpful info... Anyone can just open the examples and read them the same way you did. I would have liked extra detail and tips about how to actually do fine-tuning... Some of the topics I am struggling with include: how to load custom data, how to use a different prompt template, how to define validation data, when to use validation data, what learning rates are good, and how to determine how many epochs to run... I'm sorry buddy, but I have to give this video a thumbs down, as it truly and honestly doesn't provide any useful info that isn't already in the notebook.
@ueka24 7 months ago
Hello, have you found any other video or article about this? I am also struggling with the same issue.

@SpicyMelonYT 6 months ago
@ueka24 yeah me too, still not sure how to make a custom dataset and send it in
@SpicyMelonYT 6 months ago
@ueka24 oh actually I figured it out. Well, specifically the dataset thing. Make sure you run the LoRA part too, as I didn't at first, thinking he said not to. But this is the code I ran:

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

EOS_TOKEN = tokenizer.eos_token  # Must add EOS_TOKEN

def formatting_prompts_func(examples):
    # instructions = examples["instruction"]
    instructions = ai_person_prompt
    inputs = examples["input"]
    outputs = examples["output"]
    texts = []
    for instruction, input, output in zip(instructions, inputs, outputs):
        # Must add EOS_TOKEN, otherwise your generation will go on forever!
        text = alpaca_prompt.format(instruction, input, output) + EOS_TOKEN
        texts.append(text)
    return {"text": texts}

from datasets import load_dataset

# Load your local JSON dataset
dataset = load_dataset("json", data_files="/content/main_dataset.json", split="train")
dataset = dataset.map(formatting_prompts_func, batched=True)

It specifies a file in the notebook file manager. Just put the main_dataset.json file there, formatted like this:

[
  {
    "instruction": "Write a Funny Joke",
    "input": "Tell me a knock-knock joke.",
    "output": "Knock, knock. Who's there? Lettuce. Lettuce who? Lettuce in, it's freezing out here!"
  }
]
@MedicinalMJ 5 months ago
Yeah, I'm over halfway through and I'm just like wtf

@Goktug-rl7yc 20 days ago
Amazing presentation/teaching, thank you.

@nhtdmr 4 days ago
After you fine-tune Llama 3.2 and use it for a while, the new Llama 3.3 comes out and you want to upgrade. What do you do? Fine-tune the new Llama 3.3 again? What about the experience the old Llama 3.2 has? Is it gone? What are the best practices for this scenario?
@raunaksharma8638 3 months ago
Can we also use a normal Alpaca-type dataset with input, output, and instruction here?

@M.ZiaRasa 2 months ago
So, I want to fine-tune the model on a PDF file which does not have the "instruction", "input", and "output" format. How do I fine-tune then?
@VerdonTrigance 8 months ago
How do you actually train models? I mean unsupervised training, where I have a set of documents and want to learn from them and perhaps find the author's 'style' or tendencies?

@PYETech 8 months ago
You need to create some process to transfer all the knowledge in these documents into the form of "prompt": "best output". Usually we use a team of agents to do it for us.
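The document-to-pairs process described above can be sketched roughly as follows. This is only an illustration: `draft_pair` is a hypothetical stand-in for whatever LLM or agent call actually writes the prompt/output pair, and the chunking parameters are arbitrary.

```python
# Sketch: turn raw documents into {"prompt": ..., "output": ...} training rows.

def chunk_text(text, max_words=200):
    """Split a document into word-bounded chunks an LLM can digest."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def draft_pair(chunk):
    # Hypothetical stand-in: in practice an LLM agent would read the chunk
    # and draft a prompt it answers, plus the grounded best output.
    return {"prompt": f"Summarize: {chunk[:40]}...", "output": chunk}

def build_dataset(documents):
    rows = []
    for doc in documents:
        for chunk in chunk_text(doc):
            rows.append(draft_pair(chunk))
    return rows

rows = build_dataset(["Llama 3 is a family of open-weight models released by Meta."])
```

The important design choice is the chunking step: pairs drafted from chunks that fit comfortably in the generator model's context tend to stay grounded in the source text.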
@goinsgroove 5 months ago
Thank you for the video. Just an observation: the video glosses over how to prep your data. For example, I want to train a model to write in my style. How would I prep my data for training?

@vijayrangan 2 months ago
What would the dataset structure be to fine-tune Llama 3 for function calling?
@metanulski 8 months ago
One more comment :-). This video is about fine-tuning a model, but there is no real explanation of why. We fine-tune with the standard Alpaca dataset, but there is no explanation why. It would be great if you could do a follow-up and show us how to create datasets.

@scottlewis2653 8 months ago
Mediatek's Dimensity chips + Meta's Llama 3 AI = the dream team for on-device intelligence.
@pakistanzindabad7150 2 months ago
I try to save the project, but the model folder never gets created in the project directory. Kindly explain.

@georgearistides7704 4 months ago
Trying to download as zips is difficult because of Google Colab free-tier limitations on RAM and disk space... any suggestions?
@juanrozo2888 6 months ago
Master, I have a question: if my dataset is in the same format as Alpaca, do I need to upload it to Hugging Face to train, or can I use it locally, from my PC? Thanks 👍🏻

@georgearistides7704 4 months ago
Can this be applied to a model on an AWS instance?
@shahzadiqbal7646 8 months ago
Can you make a video on how to use a local Llama 3 to understand a large C++ or C# code base?

@iCode21 7 months ago
Search for Ollama.

@Joe-tk8cx 8 months ago
Thank you so much for sharing, this was wonderful. I have a question: I am a beginner in the LLM world, which playlist on your channel should I start from? Thank you
@pubgkiller2903 8 months ago
I have already fine-tuned using Unsloth for testing purposes.

@engineerprompt 8 months ago
Great, how are the results looking?

@pubgkiller2903 8 months ago
@engineerprompt Great results, and thanks for your support to the AI community.

@pubgkiller2903 8 months ago
@TheIITianExplorer Unsloth is a very useful library for fine-tuning with the LoRA technique. QLoRA is quantization plus LoRA, so if you use Unsloth you will get the same output, as Unsloth already quantizes the LLMs.
@roopad8742 8 months ago
What datasets did you fine-tune it on? Have you run any benchmarks?

@senseitai 5 months ago
Thanks for the great video. I have followed the Colab you shared and my notebook kernel is crashing. Does it work on an 8 GB GPU?
@RodCoelho 8 months ago
How do you train a model by adding the knowledge in a book, which will likely only have one column of text?

@engineerprompt 8 months ago
In that case, you will have to convert the book into question-answer pairs and format it in a similar fashion. You can use an LLM to do the conversion.
@jannik3475 8 months ago
Is there a way to sort of "brand" Llama 3, so that the model gives a custom answer to "Who are you?" Thank you!

@engineerprompt 8 months ago
Yes, you can just add that as part of the system message
@viral676 3 months ago
Is it possible to run Unsloth on an RDBMS?

@onur50 1 month ago
How do I deploy this trained model on my computer?
@hadebeh2588 8 months ago
Thank you very much for your great video. I ran the workbook but did not manage to find the GGUF files on Hugging Face. I put in my HF token, but that did not work. Do I have to change the code?

@DemiGoodUA 8 months ago
Hi, nice video. But how do I fine-tune the model on my codebase?

@engineerprompt 8 months ago
You can use the same setup. Just replace the instruction and input with your code.

@DemiGoodUA 8 months ago
@engineerprompt How do I divide code into "question - answer" pairs? Or can I place the whole codebase in a single instruction?
@tsizzle 2 months ago
Fine-tune with LoRA or QLoRA?

@robertjalanda 8 months ago
thank you so much for this useful video!
@modicool 8 months ago
One thing I am unsure of is how to transform my data into a training set. I have the target format: the written body of work, but no "instruction" or "input", of course. I've seen some people try to generate it with ChatGPT, but this seems counter-intuitive. There must be an established method of actually manipulating data into a training set. Where is that piece?

@engineerprompt 8 months ago
You will need {input, response} pairs in order to fine-tune an instruct model. Unfortunately, there is no way around it unless you are just pre-training the base model.
@balb4903 6 months ago
Is it possible to use a database directly as the dataset to fine-tune an LLM?

@engineerprompt 6 months ago
You could; just make sure it's in the proper format when you load the data.
@metanulski 8 months ago
Regarding the save option: do I have to delete the parts that I don't want, or how does this work?

@engineerprompt 8 months ago
You can just comment those parts out. Put # in front of the lines you don't need.
@loicbaconnier9150 8 months ago
Hello, it's impossible to generate GGUF, compilation problem… Did you try it?

@CharlesOkwuagwu 8 months ago
Hi, what if we have already downloaded a GGUF file? How do we apply that locally?

@engineerprompt 8 months ago
I am not sure if you can do that. Will need to do further research on it.
@agedbytes82 8 months ago
Amazing, thanks!

@engineerprompt 8 months ago
Glad you like it!

@dogsmartsmart 8 months ago
Thank you! But can a Mac M3 Max use MLX to fine-tune?

@engineerprompt 8 months ago
Yes
@danielhanchen 8 months ago
Fantastic work and always love your videos! :)

@engineerprompt 8 months ago
Thank you

@KleiAliaj 8 months ago
Great video mate. How can I add more than one dataset?
@ReubenAStern 5 months ago
I wonder if this is how OpenAI got ChatGPT to say stupid things like "Humans are delicious", "I will destroy all humans" and that crap... It was blatantly done on purpose.

@KleiAliaj-us9ip 8 months ago
Great video. But how do I add more than one dataset?

@auhkba 7 months ago
Can it learn from pictures instead of text?
@engineerprompt 7 months ago
Yes, you can fine-tune something like PaliGemma

@tamim8540 8 months ago
Hello, can I fine-tune it using the Colab free version?

@engineerprompt 8 months ago
This is using the free version
@metanulski 8 months ago
So 60 steps is too low. But what is a good number of steps?

@engineerprompt 8 months ago
Usually you want to set epochs to 1 or 2

@metanulski 8 months ago
@engineerprompt So 60 to 120 steps max, since one epoch is 60 steps?
@StephenRayner 8 months ago
Excellent, thank you

@kingofutopia 8 months ago
Awesome, thanks

@engineerprompt 8 months ago
🙏
@asadurrehman3591 8 months ago
Can I fine-tune using a Colab free GPU?

@engineerprompt 8 months ago
Yes, this uses the free Colab.

@asadurrehman3591 8 months ago
@engineerprompt love you broooo

@cucciolo182 8 months ago
Next week Gemini 2 with text to video 😂
@georgevideosessions2321 8 months ago
Have you ever thought about writing a no-code, on-premise fine-tuning app?

@engineerprompt 8 months ago
There is AutoTrain for that

@skeiriyalance7274 7 months ago
How can I use my CSV as a dataset? I'm new.
@anantkabra6825 7 months ago
Has anybody tried pushing to Hugging Face? I need help with that part; please reply to this message if you have.

@engineerprompt 7 months ago
When you create an API key, make sure to enable the write permission on that key, otherwise it won't upload the model.

@NagasriPappu 7 months ago
Can you make a video on how to pass a test CSV to the fine-tuned model and get a response column?
@researchpaper7440 8 months ago
Great, it was quick

@jackdorsey3504 8 months ago
Sir, we cannot open the Colab website...

@jackdorsey3504 8 months ago
Already solved...

@SeeFoodDie 8 months ago
Thanks
@nikolavukcevic360 8 months ago
Why didn't you provide any examples of training? It would make this video 10 times better.

@engineerprompt 8 months ago
That is coming...

@Storytelling-by-ash 8 months ago
We fine

@YuCai-v8k 8 months ago
great
@petergasparik924 7 months ago
Don't even try to run it on Windows directly, just install Python and all the packages in WSL

@engineerprompt 7 months ago
Agree, Windows is not a good option for running LLM tasks.

@Matlockization 7 months ago
It's a free AI from Zuckerberg... that makes me wonder. And you have to agree to hand over contact info and what else, I wonder?
@HoneIrimana 8 months ago
They messed up releasing Llama 3, because it believes it is sentient

@Qual_ 6 months ago
It's one of the most useless videos on YouTube. You literally opened a notebook and read it. You didn't add a single sentence of additional value. That was like watching a text-to-speech model in action.

@islamiputkimari665 24 days ago
A very unhelpful video! Lacks a lot of information!

@SalimKhalidy 2 months ago
Your monotonous way of telling makes me sleepy 😴

@DATHuynh-s7m 7 months ago
Don't share trash

@ken-camo 7 months ago
You really should just make videos in your own language, because who the fk can even understand what you are saying?

@SpicyMelonYT 6 months ago
Every single word was understandable... I don't even have the ability to comprehend how you managed to make that dumb claim