LLAMA-3.1 🦙: EASIEST WAY TO FINE-TUNE ON YOUR DATA 🙌

49,992 views

Prompt Engineering

1 day ago

Comments: 63
@VerdonTrigance 6 months ago
The most complex problem is preparing the Q&A dataset the model is going to learn from. And that is what I'd like to see.
@engineerprompt 6 months ago
Here is a previous video I did on creating custom datasets: kzbin.info/www/bejne/sGO0dmRopZieg68
@unclecode 6 months ago
Fascinating! The two Australian brothers did a fantastic job of introducing Unsloth to the community.
@engineerprompt 6 months ago
Agree, they are doing a great job.
@paul1979uk2000 6 months ago
The more I see of AI advancement, the more I'm coming to the conclusion that bigger isn't always better. When many of the top AI models are so close to each other in quality, the real battleground isn't which is best; for me it's quality for the size. A bit like performance per watt for hardware, think performance per billion parameters: if you can maintain or improve quality at a smaller size, that is a major advantage, especially if the model is open source and can run locally on your hardware.

So as good as the big AI models are, they are too tightly controlled and very limited in how you can run them, in most cases online because of how big they are. The real game changer, I think, is the smaller open-source models you can run locally. Their advantage is that they can be fully integrated and specialised in the OS, apps and games, and they raise fewer privacy, security and similar concerns.

If the current pace of AI advancement continues and hardware keeps progressing, I suspect the big online models won't matter that much, as the smaller ones we can run locally will be able to do most of the things we want. That's when things get really interesting, as AI gets far more integrated into our daily lives, something that's really limited with centralised online AI models, for countless reasons.

At the end of the day, what wins out won't be the best model; good enough will do for most of us. What will really win is what is smaller, capable and can run locally, which basically rules out the big online AI services, as there are too many privacy and security concerns with them, especially as AI becomes more capable and integrated into our lives.
@MYPL89 5 months ago
You have no idea how much this video helped me!! THANK YOU SO MUCH
@sizilienrockt9457 6 months ago
I like this; it shows the easy way for people who aren't AI students. Complex tutorials aren't newbie-friendly.
@TeamDman 6 months ago
Haven't watched yet, but thank you for all your guides on this. I know where to come when I need to do this myself!
@engineerprompt 6 months ago
thank you!!!
@Kiran.KillStreak 6 months ago
Thanks for the video, every minute is detailed. Superb.
@DevtalTalks 4 months ago
Excellent explanation!!
@developerashish6849 6 months ago
This is awesome, and the tutorial is so easy to understand too.
@engineerprompt 6 months ago
:)
@avataraang3334 5 months ago
Thank You Brother, Truly
@advanceprogramming225 3 months ago
Thanks ❤
@bakegleeson8653 4 months ago
Thank you 😀
@robertjalanda 6 months ago
Thank you for this video tutorial, very helpful!
@johnclay7422 6 months ago
Hello sir! Wonderful contribution! Can you practically train the model on the data so that we can learn? I am new to this field and your channel is amazing. Thanks.
@engineerprompt 6 months ago
Here is a previous video I did on creating custom datasets: kzbin.info/www/bejne/sGO0dmRopZieg68
@truptimohanty9386 3 months ago
Thank you so much for this wonderful video! I have a couple of questions. For max_seq_length = 2048 # Choose any! We auto-support RoPE scaling internally!, could you clarify whether it handles cases where the input sequence length exceeds 2048 tokens? Also, when determining the max sequence length for custom data, should it include the combined length of the instruction, input, and output? Thank you again for your insights!
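For context, that setting comes from the model-loading cell. A minimal sketch of it, assuming the Unsloth API used in the video's notebook (the model name here is one of Unsloth's 4-bit Llama-3.1 checkpoints):

    from unsloth import FastLanguageModel

    # max_seq_length bounds the whole formatted example (instruction + input +
    # output concatenated into one prompt string); longer examples are truncated
    # at training time. Unsloth applies RoPE scaling internally when this value
    # exceeds the base model's native context, which is what the comment means.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/Meta-Llama-3.1-8B-bnb-4bit",
        max_seq_length=2048,
        load_in_4bit=True,
    )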
@wilfredomartel7781 19 days ago
You have excellent videos, and I really enjoy your content! I'm exploring the idea of working with an LLM capable of handling large contexts. For instance, I'd like to process a 200-page book as input while generating an output of around 16k tokens. Do you have any suggestions or ideas on how to achieve this? Thanks in advance for your insights!
@engineerprompt 18 days ago
For something like this, Gemini might be an option. Most of the other models don't have a large enough context window to support it. But this also depends on the complexity of the task.
@wilfredomartel7781 18 days ago
@@engineerprompt Thanks. I have seen that Unsloth supports a 128k context.
@GandalfTheBrown117 6 months ago
Thank you!!
@ngamcode2485 2 months ago
Hello, amazing video, thank you a lot. Is it possible to fine-tune Llama-3 for translation into a new language, like African languages?
@alexo7431 6 months ago
Thanks for sharing
@nikitagarashchuk2430 6 months ago
Thanks for the video!! Can you tell me how to deploy a fine-tuned model?
@engineerprompt 6 months ago
Check out this playlist on deployment: kzbin.info/www/bejne/haa0c6t4p7Rlo9U&ab_channel=PromptEngineering
@KrishnaBalajiPatilB22CS078 1 month ago
How can I save this fine-tuned model in a format that I can deploy on a website?
@akki_the_tecki 6 months ago
Why does nobody talk about "How should we convert my CONFIDENTIAL RAW text/PDF into datasets"?
@pladselsker8340 6 months ago
This is so funny, I've been looking for exactly this yesterday and today. Maybe, after 20 years of Google searching experience, I'm just now realizing that I'm bad at googling.
@engineerprompt 6 months ago
Here is a previous video I did on creating custom datasets: kzbin.info/www/bejne/sGO0dmRopZieg68
@karthikb.s.k.4486 6 months ago
Nice. Can we run this on our local machine, and what configuration is needed to run it on a local MacBook? Or is Colab preferred? Please let me know. Also, can you suggest whether a MacBook is good for handling LLMs?
@unclecode 6 months ago
Training should be conducted on a CUDA device, but the resulting model can be used on MPS devices (MacBook M series) and CPUs. For fine-tuning models on Mac using MLX, a powerful open-source array framework for Apple silicon, there's a vibrant community supporting it.
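To illustrate the Mac side of that answer, here is a minimal inference sketch assuming the mlx-lm package and an MLX-converted checkpoint from the mlx-community hub (both are assumptions, not something shown in the video):

    from mlx_lm import load, generate

    # Loads an MLX-format model; computation runs on Apple silicon via Metal.
    model, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-4bit")
    print(generate(model, tokenizer, prompt="Explain LoRA in one sentence.", max_tokens=64))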
@onur50 2 months ago
How do I deploy our model on my computer? :) Thx
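One common local route (a sketch, not from this video: it assumes Unsloth's GGUF export helper and a llama.cpp-based runtime such as Ollama; the folder name and quantization level are placeholders):

    # Export the fine-tuned model to GGUF, a single-file format that local
    # runtimes like Ollama and llama.cpp can load directly.
    model.save_pretrained_gguf("local_gguf", tokenizer, quantization_method="q4_k_m")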
@georgebongo-o6n 4 months ago
I have a problem creating my own datasets manually. For the CSV file format, how can I structure it in a CSV file and read it into the fine-tuning process?
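One way to wire a CSV into the pipeline (a sketch; the column names instruction, input, and output are assumptions, so rename them to match your header row): load it with the datasets library and map each row into the Alpaca-style prompt the video uses.

    from datasets import load_dataset

    PROMPT = (
        "### Instruction:\n{}\n\n"
        "### Input:\n{}\n\n"
        "### Response:\n{}"
    )

    def to_text(row):
        # Hypothetical column names; adjust to your CSV header.
        return {"text": PROMPT.format(row["instruction"], row["input"], row["output"])}

    dataset = load_dataset("csv", data_files="my_data.csv", split="train").map(to_text)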
@RajaReivan 5 months ago
Can I use a validation set with SFTTrainer?
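Yes, TRL's SFTTrainer accepts an eval_dataset. A sketch, assuming the model, tokenizer, and formatted dataset from the video's notebook; argument names vary across TRL/transformers releases (newer TRL moves dataset_text_field and max_seq_length into SFTConfig):

    from trl import SFTTrainer
    from transformers import TrainingArguments

    # Hold out 10% of the formatted dataset for validation.
    splits = dataset.train_test_split(test_size=0.1, seed=42)

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=splits["train"],
        eval_dataset=splits["test"],  # evaluated every eval_steps
        dataset_text_field="text",
        max_seq_length=2048,
        args=TrainingArguments(
            output_dir="outputs",
            per_device_train_batch_size=2,
            max_steps=60,
            eval_strategy="steps",  # "evaluation_strategy" on older transformers
            eval_steps=20,
        ),
    )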
@deepadharshinipalrajan8849 6 months ago
Can we directly fine-tune a model that is available on the Ollama server?
@sujjee 6 months ago
Hey, can you please explain how to fine-tune a model and deploy it to your own server if privacy is a concern? Please make a tutorial on it.
@engineerprompt 6 months ago
Here is a playlist on deployments: kzbin.info/www/bejne/haa0c6t4p7Rlo9U&ab_channel=PromptEngineering
@fl028 6 months ago
Is it normal that the fine-tuned version responds with the ### Instruction, ### Input, ### Response pattern? Is there an alternative in the training section if I want only the response?
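One workaround (a sketch, not from the video): keep the template during training, but split the decoded generation on the response header so only the answer is kept. Appending the tokenizer's EOS token to every training example also teaches the model to stop rather than continue the pattern.

    # `outputs` is assumed to come from model.generate(...) as in the notebook.
    decoded = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    answer = decoded.split("### Response:")[-1].strip()
    print(answer)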
@awu878 6 months ago
The Colab link doesn't seem to work.
@fra4897 6 months ago
The issue is that benchmarks are broken; looking at the graph is pointless at this point.
@engineerprompt 6 months ago
I agree but unfortunately that's the only thing we have at the moment.
@fra4897 6 months ago
@@engineerprompt Well, that's not entirely true. You can craft your own benchmark, as some people are doing, and share with us what you think of it.
@deepadharshinipalrajan8849 6 months ago
Does Unsloth support a CPU configuration?
@engineerprompt 6 months ago
It needs a GPU.
@deepadharshinipalrajan8849 6 months ago
@@engineerprompt Are we able to directly train a model that is available in Ollama, taking it as the base model?
@rainmaker1989 6 months ago
Hello bro, how are you? I just started here but I'm confused about where to begin. Can you guide me in a specific direction? Thank you :)
@engineerprompt 6 months ago
What exactly is your confusion? Are you interested in getting started with LLMs or in fine-tuning them?
@rainmaker1989 6 months ago
@@engineerprompt Which should I start with first, and with which video or playlist should I begin?
@ThaLiquidEdit 6 months ago
Could you make a video on how to create a training set to fine-tune a model? I want to fine-tune a model like LLAMA-3.1 that creates YAML sections for different tasks, similar to Ansible. For example, when I prompt "Create a user alice", it should generate YAML in a specific format like:

user:
  action: create
  username: alice

Can you show how we can create such a training set? I can't create thousands of training examples manually.
@AlpMercan14 6 months ago
You can also use function calling; it may be enough, rather than full fine-tuning.
@engineerprompt 6 months ago
You can achieve this through prompting. Fine-tuning should be a last resort; you don't need it in most cases.
@ThaLiquidEdit 6 months ago
@@engineerprompt Could you give an example? You mean like explaining the format of the YAML file, giving an example, and e.g. writing "Whenever I create a user, output this YAML file"?
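A sketch of what that prompting approach could look like (the wording and schema below are illustrative, not from the video):

    # A few-shot system prompt that pins the output format, so no fine-tuning
    # is needed; send it along with each user request.
    SYSTEM_PROMPT = (
        "You translate admin requests into YAML. Reply with YAML only, in exactly this shape:\n"
        "user:\n"
        "  action: <create|delete>\n"
        "  username: <name>\n"
        "\n"
        "Example:\n"
        "Request: Create a user alice\n"
        "Reply:\n"
        "user:\n"
        "  action: create\n"
        "  username: alice\n"
    )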
@AlpMercan14 6 months ago
Can you show how I can fine-tune it and store the new one locally? I already have Llama 3.1 on my local machine. I want to fine-tune it and use it.
@engineerprompt 6 months ago
Towards the end of the video, I show how to save and load the models locally. You can use that part of the code.
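For reference, those calls look roughly like this (a sketch of Unsloth's save helpers; the folder name is a placeholder):

    # Save the merged LoRA + base weights to a local folder.
    model.save_pretrained_merged("local_model", tokenizer, save_method="merged_16bit")

    # Reload later for inference.
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="local_model",
        max_seq_length=2048,
        load_in_4bit=True,
    )
    FastLanguageModel.for_inference(model)  # switches on Unsloth's faster inference path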
@caslor2002 6 months ago
Nice video, but like most others on this topic it uses an already-made dataset... I would prefer to see a video just on the basic construction of a custom Alpaca dataset. I think that's what's missing from most tutorials of this kind: the logic and the method to create your own Alpaca dataset. What if a question has more than one answer? What if a simple question needs to be clarified by the user depending on two possibilities, and then the answer follows based on the clarification the user inputs, etc.?
@engineerprompt 6 months ago
Here is a previous video I did on the topic: kzbin.info/www/bejne/sGO0dmRopZieg68
@caslor2002 6 months ago
@@engineerprompt Thanks for your reply. Just checked the link, awesome video... Thanks for sharing your knowledge!!
@One.manuel 6 months ago
You are not fine-tuning a damn thing bro
@mike6335 6 months ago
Meaning… he wants a step-by-step rather than a high-level how-to.
@ss56075 5 months ago
When I use the code "model.push_to_hub_merged("My_Modal_Path", tokenizer, save_method="merged_16bit")", it shows the error "TypeError: argument of type 'NoneType' is not iterable". All files are saved successfully, but when Unsloth tries to upload, it shows this error.
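A frequent cause of that error (an assumption, since the full traceback isn't shown) is a missing Hugging Face token or a repo id without a user namespace. Unsloth's push helper accepts the token directly; the repo id and token below are placeholders:

    model.push_to_hub_merged(
        "your-username/your-model",  # must include your HF username or org
        tokenizer,
        save_method="merged_16bit",
        token="hf_...",              # a write-scoped Hugging Face access token
    )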