Thanks for watching! If you'd like to see me cover another topic in the next video, let me know!
@alexdw5 1 year ago
Great video, thank you Maya! Can you do a video on other use cases for fine-tuning LLMs and provide some examples?
@getlucky8952 1 year ago
I'd like to get a deeper understanding of this topic. Can you please suggest any materials for this?
@maya-akim 1 year ago
@@alexdw5 Thanks for the feedback! That's a great idea and I think it might be the topic of my next video!
@alexdw5 1 year ago
@@maya-akim oh awesome! Your videos rule! I’ve subscribed and will keep an eye out for it
@maya-akim 1 year ago
@@getlucky8952 I just updated the description box with the resources that I've been using for this video, so you can check them out! One more thing I can recommend is perplexity.ai. It does all the research for you and provides links to sources, which makes it very easy to research new topics. It's GPT-4 + internet access.
@unshadowlabs 1 year ago
Thanks for the great video. Could you please provide a link to your dataset, or show an example of what the dataset looks like when it's formatted?
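The dataset itself isn't linked in the thread, but instruction-tuning data for this kind of fine-tune is commonly stored as JSONL, one record per line, in an Alpaca-style schema. The rows below are purely hypothetical, just to illustrate the shape:

```json
{"instruction": "Rewrite this sentence in my tone of voice.", "input": "The meeting is cancelled.", "output": "Quick heads-up: we're scrapping today's meeting!"}
{"instruction": "Write a tweet announcing my new video.", "input": "Topic: fine-tuning LLaMA 2 with LoRA", "output": "New video is up! I fine-tuned LLaMA 2 with LoRA so you don't have to guess the steps."}
```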
@medoeldin 1 year ago
Hi Maya, great video. I love your ability to convey dense material in an easy-to-follow manner. With respect to fine-tuning a model to possess your tone of voice, what criteria should be considered when selecting a base model? Also, what is the optimal structure of a writing sample, and how many samples should we incorporate for best results? Looking forward to more great videos! Thanks!
@1littlecoder 1 year ago
Nicely made video 🎉
@maya-akim 1 year ago
Thanks a lot 🙏🏻
@MaJetiGizzle 1 year ago
Great video on simple fine-tuning with LoRA for better quality model outputs!
@MoeMomin 9 months ago
Super smart! Deep understanding of complex topics! Amazing video, keep it up :)
@davesandberg 9 months ago
Awesome videos! Thank you for putting them together. Sincerely appreciated! The "pacing" can be a little intense on this one...
@tk0150 9 months ago
Thanks for your clear and educational videos!
@youtub-user 1 year ago
Thank you for this video! Could you provide the link to your dataset, or an example similar to it?
@volt8399 7 months ago
Thank you so much for making this video
@animehypeofficial 1 year ago
Thanks for the video! Can you do it using another open-source model instead of LLaMA-2, since Meta access takes time? And can you show your own dataset so we know how to organize ours? Also, could you cover what to do when you have three parameters, like the tweet disaster prediction task (Kaggle)?
@RichardGetzPhotography 9 months ago
Another great video- Thanks!!
@oryxchannel 1 year ago
Why does everyone gloss over the feeling I have that any quantized LLM is 20% of the full model experience? It's not you alone. Quantization is accepted under the unspoken implication that it's "local" or "native" and therefore private. Yeah, it's private, if your CPU/GPU is powerful enough that you're not sitting through one sentence output per minute instead of per second. And a quantized version is still subject to knowing things like system prompt versus standard prompt, as well as fine-tuning.
@jamesjonnes 1 year ago
QLoRA can fine-tune a quantized model. The video is very good, but I'm hesitant to request access to the Colab lol
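For context, here is a minimal sketch of the QLoRA recipe with Hugging Face transformers + peft: load the base model 4-bit quantized, then train LoRA adapters on top of the frozen quantized weights. The base checkpoint and the q_proj/v_proj module names are LLaMA-specific assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model with 4-bit NF4 quantization
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb, device_map="auto"
)

# Prepare the quantized model for training and attach LoRA adapters
model = prepare_model_for_kbit_training(model)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapters train; the 4-bit base stays frozen
```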
@zZzzZz_803 1 year ago
Thank you for this great video. Can you show how to use the saved fine-tuned model directly in a different Colab?
@Aurora12488 1 year ago
Fine-tuning is for shaping *behavior*, not adding knowledge. Results can end up worse if you try to use fine-tuning to add new knowledge to the LLM. Data lookups injected into the context are the only reliable way to give an LLM access to new data.
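A toy sketch of what "data lookups injected into the context" means in practice: retrieve relevant text at query time and prepend it to the prompt, instead of baking facts into the weights. The keyword lookup below is a hypothetical stand-in for a real vector-store search:

```python
# Tiny in-memory "knowledge base"; a real system would query a vector DB.
DOCS = {
    "returns": "Items can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(question: str) -> str:
    # Naive keyword match standing in for embedding similarity search
    hits = [text for key, text in DOCS.items() if key in question.lower()]
    return "\n".join(hits) or "No relevant documents found."

def build_prompt(question: str) -> str:
    # Inject the retrieved passages into the context ahead of the question
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{retrieve(question)}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_prompt("What is your returns policy?"))
```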
@maya-akim 1 year ago
Very well said!
@echofloripa 1 year ago
Thanks a lot for the video! 👏😻 Can I fine-tune with question-and-answer pairs first, and afterwards train with full text files?
@alasdairvfr 8 months ago
I'm getting this error: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable. How do I resolve this?
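That warning usually means bitsandbytes can't see a CUDA GPU, so it falls back to its CPU-only build. A quick sanity check, with the usual fixes as comments (hedged, since environments vary):

```python
import torch

# On a working GPU runtime this should print True
print(torch.cuda.is_available())

# If it prints False on Colab: Runtime > Change runtime type > select a GPU.
# If it prints True but the warning persists, reinstalling often helps:
#   pip install -U bitsandbytes
```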
@larawehbee 1 year ago
Great video!! Thanks for sharing. I didn't notice before that we cannot merge the quantized model! I have a few questions: 1. Can we quantize the merged model? I mean, after merging the PEFT adapters with the base model, can we quantize it and use the fine-tuned LLaMA-2 quantized model? 2. The merged model is now an HF model; how can I load it for inference? In the Google Colab code you used the merged model directly, but I want to load it from a file. Thanks!!
@echofloripa 1 year ago
Check the page "Baby Step: Fine-Tune Falcon-7B and LLaMa-7b Large Language Models for your custom datasets." It shows how to train and then load for inference.
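For the two questions above, a minimal sketch using the peft API (the base checkpoint and paths are assumptions): merge_and_unload() folds the adapter into the full-precision base weights, the result saves as a plain HF checkpoint, and that checkpoint can later be loaded from disk, optionally quantized at load time:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 1) Merge: load the base in half precision, apply the adapter, fold it in
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
merged = model.merge_and_unload()            # now a regular HF model
merged.save_pretrained("llama2-finetuned")   # plain checkpoint on disk

# 2) Inference later: load the merged checkpoint from file,
#    quantizing it at load time if desired (answering question 1)
model = AutoModelForCausalLM.from_pretrained(
    "llama2-finetuned",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
```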
@isidoras.radojkovic2074 1 year ago
Great video! Thank you! 👌
@kunalr_ai 9 months ago
Very beautiful video
@saviewer 1 year ago
Great video. What sort of computing power is needed to run this? Do you run it on the cloud or locally? Thanks.
@PredictAnythingSoftware 1 year ago
She is running it in the cloud, but you can also run it locally, in a much more user-friendly way, using the text-generation GUI, though this approach requires a substantial amount of RAM. The investment is worthwhile: you can train your own AI model that outperforms even GPT-4, but only in one particular area, rather than across the diverse range of tasks GPT-4 handles with its trillion parameters. Think of specialized tools versus a Swiss Army knife: a precision instrument designed solely for intricate woodwork excels at that task but is useless for opening a bottle, while a Swiss Army knife handles many tasks reasonably well. Likewise, if you want AI for medical diagnosis and language translation, instead of aiming for one all-knowing model on a consumer-grade PC, train one model meticulously for medical diagnostics and another for translation, like having a medical specialist and a linguistic expert. The principle is specialization: if you want an AI that is outstanding at a specific task, concentrate its training on that area, much as an Olympic athlete dedicates themselves to one sport. Cultivating experts in various fields rather than one omniscient AI optimizes resource usage and yields systems that are highly competent in their designated domains.
@gokulraja1580 1 year ago
I have a doubt: you are using a LoRA adapter, but where did you freeze the pre-trained weights in that code? I'm a beginner, please explain.
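On the freezing question: there is usually no explicit freeze line to find, because peft's get_peft_model() sets requires_grad=False on every base-model parameter when it wraps the model. A self-contained check, using a small model (gpt2) so it runs anywhere; the c_attn target module is GPT-2-specific:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
model = get_peft_model(
    base, LoraConfig(r=8, target_modules=["c_attn"], task_type="CAUSAL_LM")
)

# Count parameters: the base weights are frozen, only LoRA matrices train
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable: {trainable:,}  frozen: {frozen:,}")
model.print_trainable_parameters()  # peft's built-in summary
```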
@Udayanverma 1 year ago
Could you show, step by step, how to use this trained model? I have internally hosted wiki pages with loads of content on them. How can we train the model on such unstructured content?
@trackerprince6773 11 months ago
Is fine-tuning sufficient to make a domain-expert LLM? I mean, would it be better to combine fine-tuning + embeddings?
@khalilchi5726 1 year ago
Can we get a link to the dataset you used?
@amparoconsuelo9451 1 year ago
Can subsequent SFT and RLHF with different, additional, or less content change the character of a GPT model, improve it, or degrade it? Can you modify a GPT model? How?
@echofloripa 1 year ago
Did you say "the quantized version is 7 gigabytes while the full version is only 0.7 gigabytes"? Isn't it the other way around?
@waleedkazmi6680 11 months ago
Hi Maya, thank you for your video. Can you really dumb it down for a non-coder? I feel a lot of people would watch a video where you teach most of this by showing how you code and what your code does. There are a lot of us in communities who are turning toward programming because of AI.
@geekyprogrammer4831 1 year ago
Beautiful girls are also interested in AI now!
@verasalem5071 1 year ago
Can I suggest that you not move back and forth in your chair? It's quite distracting to focus on what you're saying with a large white chair moving behind you. Other than that, this is one of the more informative compact videos I've seen on fine-tuning.
@chrisBruner 1 year ago
Suggestion: I can't listen to your rapid-fire pace of speech. Please slow down and take a breath. I get exhausted just trying to keep up.