This is a very important learning point. When I was watching your video, I agreed with you that it was not good that the bot kept saying "I'm a very experienced person." But later, when I dove into your JSON file, I realized that the dialogue in the file was bad. When I searched for "experienced" in the JSON, I got 61 hits, with the same "I'm not sure. I'm a very experienced person." over and over again. So the model worked, your code worked, and the bot learned exactly what you taught it in the dataset. If your dataset is bad, your bot will talk nonsense.
@programming_hut 1 year ago
I am planning to release a series, about 30 days long, all about embeddings, LangChain, datasets, Hugging Face, and these LLMs. Currently I am researching these extensively and will surely upload it. And thanks for your insight; I will look into this too.
@gingerbread_GB 1 year ago
@@programming_hut Thanks for the response, I'm an amateur learning about this stuff too. I wish you would do a tutorial on LoRA with Hugging Face Transformers. For us at home, we don't have access to powerful GPUs to train big models; LoRA makes it more feasible. Another big part I'm struggling with is how to get big data. There are plenty of datasets out there scraped from the internet, but they are mostly filled with trash, like inappropriate language and messy grammar. Maybe that's something you could talk about.
@programming_hut 1 year ago
Sure, I will try to cover it after some research.
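For anyone curious before a full LoRA tutorial appears: the core idea is to freeze the pretrained weight matrix and train only a small low-rank update. A minimal pure-Python sketch of the forward pass (dimensions and names here are illustrative, not from the video's code):

```python
def matmul(a, b):
    # naive matrix multiply: (n x k) @ (k x m) -> (n x m)
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def lora_forward(x, w, a, b, alpha=1.0):
    # LoRA: keep the pretrained weight w frozen and learn only the
    # small factors a (d_in x r) and b (r x d_out); the effective
    # weight is w + alpha * (a @ b)
    delta = matmul(a, b)
    w_eff = [[wv + alpha * dv for wv, dv in zip(wr, dr)]
             for wr, dr in zip(w, delta)]
    return matmul(x, w_eff)
```

Because the rank r is tiny compared to the model dimension, the number of trainable parameters drops by orders of magnitude, which is why LoRA fits on consumer GPUs.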
@ilyas8523 1 year ago
Great video. I was stuck on some steps until I found your video.
@bennievaneeden2720 1 year ago
Great video. Glad to see people actually debugging their code. It really helps to better grasp what's going on.
@kaenovama 1 year ago
Ayy, this is what I've been looking for
@ammar46 1 year ago
Nothing is generated for me. I used the exact code you used. Can you help me understand why this is happening?
@Websbird 4 months ago
Great help. Can you recommend an editor for Windows just like yours? And is it possible for you to create a Colab or Kaggle notebook for the same project?
@elibrignac8050 1 year ago
Epic video, just one question: after saving the model, how can I load it and make inferences?
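A sketch of how loading typically works with Hugging Face, assuming the model and tokenizer were saved with `save_pretrained()` to some `save_dir` (if only a state_dict was saved with `torch.save`, load the base GPT-2 first and call `model.load_state_dict` instead). The `<startofstring>`/`<bot>:` strings stand in for whatever special tokens were used during training:

```python
def build_prompt(user_text, bos="<startofstring>", bot_tag="<bot>:"):
    # the prompt at inference time must mirror the format used in training
    return f"{bos} {user_text} {bot_tag}"

def load_and_reply(save_dir, user_text, max_length=60):
    # imported here so build_prompt stays usable without transformers
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained(save_dir)
    model = GPT2LMHeadModel.from_pretrained(save_dir)
    model.eval()

    inputs = tokenizer(build_prompt(user_text), return_tensors="pt")
    output = model.generate(**inputs, max_length=max_length,
                            pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

A mismatch between the training format and the inference prompt is a common reason a fine-tuned bot produces nothing useful.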
@jackeyhua 5 months ago
Hi there, very good video, really appreciate it. Currently we are facing a problem where the input does not generate any bot output. Can you help figure it out?
@amitvyas7905 1 year ago
Hi! It was a good video. I would like to know, once the model is trained, how can we check the accuracy? Can you generate a ROUGE score? That way we know how good or bad the model is.
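ROUGE is easy to compute; libraries such as `evaluate` provide it out of the box, but a minimal pure-Python ROUGE-1 sketch shows what the score actually measures (unigram overlap between a reference answer and the generated one):

```python
from collections import Counter

def rouge1(reference, candidate):
    # ROUGE-1: unigram overlap between a reference answer and the
    # model's generated answer
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 0.0 if overlap == 0 else 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall, "f1": f1}
```

For open-ended chat, ROUGE only rewards word overlap with a single reference, so treat it as a rough signal rather than true accuracy.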
@SabaMomeni-i1n 1 year ago
Great video. Just wondering why you used
@tobi418 1 year ago
Hello, thank you for sharing your code. I have a question: why don't you use the Trainer class of the Hugging Face Transformers library for training?
@programming_hut 1 year ago
Nvm, I just wanted to try it this way.
@tobi418 1 year ago
@@programming_hut Is there any advantage to this way? Can I use the Trainer class? Also, can I use the TextDataset class to import and tokenize my dataset?
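For reference, the Trainer route is a drop-in alternative to the manual loop; a hedged sketch (the argument values are illustrative, and the import is kept inside the function so defining it doesn't require transformers):

```python
def make_trainer(model, tokenizer, dataset, out_dir="out"):
    # Trainer wraps what the manual loop does by hand: batching,
    # the optimizer step, logging, and checkpointing
    from transformers import (DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    args = TrainingArguments(output_dir=out_dir,
                             num_train_epochs=3,
                             per_device_train_batch_size=8,
                             save_strategy="epoch")
    # mlm=False selects plain causal-LM (next-token) training for GPT-2
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
    return Trainer(model=model, args=args,
                   train_dataset=dataset, data_collator=collator)
```

Calling `.train()` on the returned object runs the loop. TextDataset still works for plain-text files, but recent transformers versions deprecate it in favour of the `datasets` library.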
@rakesh2233 1 year ago
Great video, thanks. I am a beginner studying these LLMs. I have a small doubt: I have seen people use different data formats to fine-tune different LLMs. For example, the following format can be used for Llama-2: { "instruction": "", "input": "", "output": "" }, and sometimes the format below is used for chatglm2-6b: { "content": "", "summary": "" }. Is it related to what format was used for pre-training, or can both actually be used for different LLMs? How do I organize my custom data if I want to fine-tune an LLM?
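On the format question: the JSON keys themselves are not magic. Each record is flattened into one training string, and what matters most is using the same template at training and inference time (ideally one matching how the base model was pre-trained or instruction-tuned). A sketch of both conversions, with illustrative templates:

```python
def format_alpaca(rec):
    # Alpaca/Llama-2 style record -> one training string
    prompt = rec["instruction"]
    if rec.get("input"):
        prompt += "\n" + rec["input"]
    return f"### Instruction:\n{prompt}\n### Response:\n{rec['output']}"

def format_chatglm(rec):
    # chatglm2-6b summarisation-style record -> one training string
    return f"{rec['content']}\n{rec['summary']}"
```

So to fine-tune with custom data, pick one consistent template, convert every record through it, and reuse the exact same template when prompting the trained model.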
@fp-mirzariyasatali1985 1 year ago
It's great to try multiple techniques.
@zimizi 1 month ago
When do you breathe?
@sanvisundarrajan 11 months ago
Hey, I debugged the exact same way, but the bot doesn't reply. Can someone help with this?
@plashless3406 1 year ago
This is amazing. Really appreciate your efforts, brother.
@ucduyvo4552 1 year ago
Great Video, thanks
@sharathmandadi 1 year ago
What changes are required to use this code for SQuAD question-answering training?
@rishabhramola448 1 year ago
Really helpful!!! Deserves more views.
@georgekokkinakis7288 1 year ago
What if we don't add the < , and to the training data? I have a dataset where each sample is formatted as [context] [user_question] [bot_answer], and each sample is separated from the next by an empty line. I am using the pretrained model lighteternal/gpt2-finetuned-greek.
@shubhamkumar8093 1 year ago
Very Informative 😊
@arjunparasher7005 1 year ago
🙌🙌
@muntazkaleem7977 1 year ago
Nice work
@programming_hut 1 year ago
Thanks
@nicandropotena2682 1 year ago
Is it possible to train this model with the history of the conversation too? To keep track of what the user said, in order to maintain a logical flow in the conversation.
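One common approach, pending a dedicated tutorial: concatenate the previous turns into the prompt and keep only the last few so it fits GPT-2's 1024-token context window. A pure-Python sketch (the tag strings are illustrative):

```python
def build_chat_prompt(history, user_msg, max_turns=3,
                      bos="<startofstring>", bot_tag="<bot>:"):
    # history is a list of (user_text, bot_text) pairs; keep only the
    # last few turns so the prompt stays inside GPT-2's context window
    turns = history[-max_turns:]
    lines = [f"{u} {bot_tag} {b}" for u, b in turns]
    lines.append(f"{user_msg} {bot_tag}")
    return bos + " " + " ".join(lines)
```

For best results the training data itself should also contain multi-turn samples in this format, otherwise the model never learns to use the history.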
@programming_hut 1 year ago
I'm working on that; I'll probably figure it out and then make a tutorial.
@miningahsean886 1 year ago
Good video, sir. Where did you source your dataset?
@manthanghonge007 6 months ago
Hey, can you tell me how to connect this GPT-2 model to a front end?
@jappanjotkaur5517 2 months ago
Did you figure it out?
@sanketgaikwad996 1 year ago
thanks a lot bro
@lekhwar 1 year ago
Great
@nitron3006 1 year ago
After I run your code, training doesn't work; it just stays at 0% | | 0/12 [00:00
@joshwaphly 6 months ago
Same error; try changing the runtime to a T4 GPU in the Runtime tab. Thanks a lot to @programming_hut, you have enlightened me.
@kotcraftchannelukraine6118 1 year ago
Is it possible to fine-tune OPT-125M or GPT-Neo-125M using this?
@MrJ3 1 year ago
It's using the Hugging Face API, so it's easy to swap models as long as they support training on the task you're interested in. Just swap the model name out.
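Concretely, with the Auto* classes the swap is just a name change; a hedged sketch (the import is kept inside the function so it can be defined without transformers installed):

```python
def load_causal_lm(name="gpt2"):
    # any causal-LM checkpoint on the Hub can be swapped in by name,
    # e.g. "facebook/opt-125m" or "EleutherAI/gpt-neo-125m"
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    return model, tokenizer
```

Both OPT-125M and GPT-Neo-125M are causal language models like GPT-2, so the rest of the training code should carry over unchanged.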
@quantumjun 1 year ago
Does fine-tuning require adding a new layer?
@programming_hut 1 year ago
Fine-tuning trains a pretrained model on a new dataset instead of training from scratch. Whether to add or remove a layer is your choice.
@quantumjun 1 year ago
@@programming_hut thank you very much
@quantumjun 1 year ago
I reckon if we add a new layer for the chatbot, it may overfit the chat data?
@quantumjun 1 year ago
Since we don't have enough data for that?
@coldbubbyguy6206 1 year ago
Is it normal for training to get stuck at 0% if I only have access to a CPU?
@insigniamalignia 1 year ago
Can't you use Google Colab? It's free.
@sibesss2950 19 days ago
You saved my life, wonderful video. Can you give me a valid Discord link? The one in the description is not working.
@甘楽-u7v 1 year ago
I wonder where I can find more datasets for fine-tuning a GPT-2 chatbot. If anybody has an idea, please tell me, thanks.
@programming_hut 1 year ago
You can use ChatGPT itself to generate more data, or you can look at the Alpaca model's dataset.
@UncleDavid 1 year ago
Why are you breathing so fast? Are you nervous?
@programming_hut 1 year ago
😂 Good catch / I had a cold 🥶 at that time.
@ucduyvo4552 1 year ago
Thanks sir, but the language in the fine-tuning GPT-2 video is English; what about languages other than English?
@programming_hut 1 year ago
I might work on that and will try making a video for it…
@neel_aksh 1 year ago
Can I use this with gpt-2-simple?
@mohammedismail6872 1 year ago
Bro, please place your mic away from the keyboard.
@robosergTV 1 year ago
Please speak faster; the video is too slow.
@programming_hut 1 year ago
YouTube has a playback-speed feature to make it faster 😬
@balarajagopalan4981 1 year ago
Talk slowly, man. I can't understand what you're saying.
@programming_hut 1 year ago
Sorry, but you can reduce the speed.
@rigeshyogi 1 year ago
@@programming_hut If you want to teach something, speak slowly and clearly so that you're understood. Otherwise, you're not going to reach a broader audience.
@programming_hut 1 year ago
Sure, I will keep that in mind next time.
@sloowed_reveerb 1 year ago
I'm not an English speaker and I understood everything! (And I'm not from India 😅😀.) You can turn on subtitles if you're struggling :) Amazing tutorial btw, thanks to the author!