HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning

116,999 views

Patrick Loeber

1 day ago

Comments: 93
@patloeber 3 years ago
Do you like HuggingFace?
@nicklansbury3166 3 years ago
Yes. Do I win? 😎
@trevormuchenje1553 3 years ago
very much!
@jieweiyang3567 2 years ago
yes
@scar2080 2 years ago
Yes, because of you 😁🤣. You are the best! 🥂
@saminchowdhury7995 2 years ago
I can't believe it's free
@JoshPeak 2 years ago
The last 6-8 minutes of this video is exactly what I have been trying to hunt down as a tutorial. Thank you!
@just_ign 1 year ago
There are so many videos out there that show how to use huggingface's models with a pipeline and making it seem so "easy" to do things, which it is. But unlike those videos, this one really shows how we can use models natively and train them with our own cycles. Instead of portraying things as "easy", you decided to show how to actually get things done and I absolutely loved that!! Thanks for the tutorial :D
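For anyone skimming, a minimal sketch of the two styles contrasted here (assuming the stock distilbert-base-uncased-finetuned-sst-2-english checkpoint; this is an illustration, not the exact code from the video):

import torch
import torch.nn.functional as F
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"

# 1) High-level pipeline API: tokenization, forward pass and softmax are hidden
clf = pipeline("sentiment-analysis", model=model_name)
print(clf(["I love this tutorial"]))

# 2) "Native" usage: the same steps done by hand
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
batch = tokenizer(["I love this tutorial"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits        # raw scores, shape (batch_size, num_labels)
probs = F.softmax(logits, dim=1)          # probabilities per class
print(probs, model.config.id2label[int(probs.argmax(dim=1)[0])])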
@shubhamgattani5357 6 months ago
Almost 3 years since this video, and it's still so relevant today. Thank you, sir.
@aidarfaizrakhmanov1901 2 years ago
Maaan! I liked how you started the tutorial: well explained and sweet for beginners. Starting from the PyTorch classification part, you probably assumed "enough with the beginners, let's level up 100x lol". Many of the lines of code and arguments you wrote require some googling, so a quick high-level explanation of those could do magic. Nevertheless, thanks for making this video, mate.
@netrahirani3147 2 years ago
I feel like I've hit a jackpot! It took me forever to find such an easy-to-learn video. That was very good! Thanks!
@oliverguhr8746 2 years ago
Thanks for using my model :)
@haralc 1 year ago
OMG! Thanks for this video! Don't have to deal with French accent anymore!
@CppExpedition 2 years ago
I've seen lots of tutorials... this is the best of all!
@WalkAloneLive 3 years ago
I was ready to subscribe to you for a second time :D
@patloeber 3 years ago
yeah :)
@parttimelarry 3 years ago
Excited about this one, thanks!
@caiyu538 1 year ago
Clear explanation for beginners. Great!
@Lakshraut 1 year ago
Your presentation is excellent.
@philipp5636 1 year ago
Holy shit, this just saved me and my thesis from a week of pain. Thank you very much!
@mairadebayser5383 2 years ago
Nice video. It seems that my work from 2015 at IBM Research, which was exactly what is presented in this video, has been widely accepted in the Machine Learning community. Cool. 🤗
@patloeber 2 years ago
Thank you! Yeah the ML community has grown a lot :)
@shubhamgattani5357 6 months ago
This earth needs more researchers like you. (Instead, the number of politicians keeps growing 🤣)
@imdadood5705 3 years ago
I am a simple man! I see Patrick, I like the video!
@patloeber 3 years ago
❤️
@HuevoFriteR 2 years ago
Thanks for the tutorial buddy, it was amazing!
@robosergTV 3 years ago
Please make a whole series on this :) There is also a very nice framework on top of this called "simple transformers"
@patloeber 3 years ago
thanks for the suggestion
@kinwong6383 3 years ago
This is really powerful and efficient for real-world usage. I wonder if Kaggle has a rule banning people from doing this in competitions. We almost heard Patrick speak German. That was so close! Thanks for the video!
@prettiestthing 2 years ago
Loving this ❤! Please do a series on this 🥳
@vijaypalmanit 3 years ago
Very nice explanation; many things I was confused about, e.g. tokenizers, got cleared up. Really liked the video and your way of teaching. Expecting more, like fine-tuning BERT on a custom dataset; please make a video on it.
@sanjaybhatikar 1 year ago
How would you do neural transfer learning (retraining) by unfreezing only the fully connected layers? I was given to understand that this is the proper way to fine-tune a deep learning model, not retraining all model parameters.
@SanataniAryavrat 3 years ago
Thank you Patrick... this was a much-awaited course... can you please create a full-length tutorial including deploying a "dashboard app" on Docker?
@juliank7408 8 months ago
Thanks! Well explained!
@haralc 1 year ago
Hi, would you please make a video on the text-generation and question-answering pipelines, dissecting how the pipeline does it and then fine-tuning?
@annarocha9769 2 years ago
Thank you soooooooo much for this, subscribed :)
@haralc 1 year ago
Would you please make another video with the latest version of the libraries?
@LuisMorales-bc7ro 1 year ago
I love you patrick
@canernm 3 years ago
Hello, thank you for the extremely valuable video. I do have one question, however. During the fine-tuning process, in the first case where we use Trainer(): as far as I can tell, the model and the data are not on the GPU by default, and we also do not move them there (as we do in the custom PyTorch training loop). I tried it in a notebook and when I run the command "next(model.parameters()).is_cuda", where model is the from_pretrained() model, it returns False. Still, moving the model to the GPU would be the same even in this case (with the Trainer), by doing from_pretrained('...').to('cuda'). However, when we only have a dataset and we don't create a dataloader, I am not sure how to move it to the GPU. Do you know, perhaps? I would appreciate it a lot!
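For what it's worth, a hedged sketch of the usual pattern (my understanding, not verified against every transformers version: the Trainer moves the model and each batch to the available GPU itself, while the Dataset stays on the CPU and its tensors are moved batch by batch):

import torch
from transformers import AutoModelForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Moving the model explicitly also works before handing it to Trainer:
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased").to(device)

# In a manual loop you move each batch as you use it, not the whole dataset:
# for batch in loader:
#     batch = {k: v.to(device) for k, v in batch.items()}
#     outputs = model(**batch)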
@abhishekriyer 3 years ago
@Patrick: Could you please share the code link for the above? Or if it's already there, I am unable to find it.
@UsmanMalik57 3 years ago
Hello, for fine-tuning a multiclass text classification model, does the approach remain the same?
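For illustration, a hedged sketch of the usual multiclass change: in broad strokes the approach stays the same and mainly the num_labels argument differs (the 4-class setup here is a made-up example):

from transformers import AutoModelForSequenceClassification

# the classification head gets 4 outputs instead of 2; labels should be integers 0..3
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=4)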
@jesusmtz29 2 years ago
I like your tutorials. However, just one small critique. Sometimes I feel you're just reading code to me. I can do that, but I think the value of YouTube tutorials is to explain why we do certain things; otherwise I'm just punching lines in. Sorry if this sounds harsh, I don't mean it that way.
@KamalSingh-zo1ol 1 year ago
Great video! Can you make a video on how to change the cache from the default directory to another drive?
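A hedged sketch of one way to do this (the paths are made up; the HF_HOME environment variable and the cache_dir argument are the mechanisms I believe control the download location, but check the docs for your version):

import os
os.environ["HF_HOME"] = "D:/hf_cache"   # set before importing transformers

from transformers import AutoModel
# alternatively, pass cache_dir per call:
model = AutoModel.from_pretrained("distilbert-base-uncased", cache_dir="D:/hf_cache")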
@mays7n 1 year ago
very helpful, thanks aaaaaa loooot
@straightup7up 1 year ago
I'm confused, if I'm using a model from hugging face on my desktop, does the model communicate with remote cloud services when running the model?
@mathsharking 1 year ago
Good tutorial
@komalkukreja4441 2 years ago
While loading XLM-RoBERTa from my machine, which I saved as .bin or .pth, I am getting an incompatible-key error when loading the saved model for evaluation.
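Hard to say without the code, but a common cause is mixing the two save/load styles; a hedged sketch (paths and num_labels are made up):

import torch
from transformers import AutoModelForSequenceClassification

# a) Saved with model.save_pretrained("my-xlmr")? Load it the same way:
model = AutoModelForSequenceClassification.from_pretrained("my-xlmr")

# b) Saved only the weights with torch.save(model.state_dict(), "my-xlmr.bin")?
#    Rebuild the exact same architecture first, then load the state dict:
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=2)
model.load_state_dict(torch.load("my-xlmr.bin", map_location="cpu"))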
@jaypie9092 1 year ago
I'm using Visual Studio and have done all the installs and it is not working. I have the venv started and installed PyTorch and the transformers library. I have it in the project directory. Am I missing something?
@NickPark-n2x 10 months ago
So for the German part, you can get the same result without the attention mask?
@ironF5 2 years ago
The fine-tuning is done with a supervised (labeled) dataset; how do you do it in the self-supervised case, where the data is not labeled but the model retrains on your data and makes judgments?
@xieen7976 1 year ago
Hi, where does the "train_test_split" function come from?
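If it is the same helper most tutorials use, it comes from scikit-learn; a minimal sketch with made-up data:

from sklearn.model_selection import train_test_split

texts = ["good movie", "bad movie", "great acting", "terrible plot"]
labels = [1, 0, 1, 0]
train_texts, val_texts, train_labels, val_labels = train_test_split(texts, labels, test_size=0.25)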
@yuandi9410 1 year ago
Hey, I can't find the model license to activate it, it doesn't show up????
@sumapriiya 2 years ago
I tried to fine-tune the RobertaModel on a custom dataset using the Trainer object, and then saved the model and tokenizer to my Google Drive. But retrieving the model and predicting on a validation dataset gives me the same class prediction for every example (all with negative values); do you have any idea why? Thanks for your help.
@nirash8018 2 years ago
36:02 How would you go on and make specific predictions?
@darraghcaffrey4082 3 years ago
Can someone explain what's going on with these two lines of code, as it's only explained with TensorFlow on Hugging Face? I understand it's a dictionary, but it's a little confusing:
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
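For context, a hedged sketch of the Dataset class these two lines typically sit in (the class name is just an example): the tokenizer returns a dict of lists such as input_ids and attention_mask, and __getitem__ picks the idx-th entry of each list, wraps it in a tensor, and attaches the label under the key "labels" that the Trainer expects:

import torch

class ReviewDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings   # dict of lists from the tokenizer
        self.labels = labels         # list of integer class labels

    def __getitem__(self, idx):
        # build one sample: the idx-th element of every tokenizer field, as tensors
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)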
@aseemsrivastava3995 3 years ago
A series on this would be really great! Like the one you have on PyTorch. In that series you could cover some very complex architectures from NLP publications. Using a standard LSTM/GRU over BERT tokens with linear layers + softmax is easy; showing how to implement other attention-tweaking strategies or similar complex architectures that people use in publications these days would really help us!
@Mike-jr7re 2 years ago
Patrick, do you know how to remove the models from the hard drive? I'm seeing that each model is downloaded directly onto the Mac. Due to space problems, how can I remove them if I don't use them anymore? Thanks a lot!
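A hedged sketch for inspecting and deleting downloaded models; the default location is an assumption (on recent library versions it is usually ~/.cache/huggingface/hub, older versions used ~/.cache/huggingface/transformers), so check what is there before deleting anything:

import shutil
from pathlib import Path

cache_dir = Path.home() / ".cache" / "huggingface" / "hub"
for entry in cache_dir.glob("models--*"):
    print(entry)   # list the cached model folders first

# then remove a folder you no longer need, e.g. (example name):
# shutil.rmtree(cache_dir / "models--distilbert-base-uncased")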
@xingyubian5654 2 years ago
goated video
@philcui9268 2 years ago
Hi Patrick, this is a nice tutorial. Can we have access to the code?
@茂张-y4s 2 years ago
Where can I get the source code?
@pandapanda5889 2 years ago
Hi, what should I do when I have a lot of comments and posts without labels? I'm a beginner and what I see on the Internet so far is always text data with labels such as movie reviews etc.
@v1hana350 2 years ago
What is the meaning of fine-tuning and Pre-trained in Transformers?
@soulwreckedyouth877 3 years ago
How do I fine-tune the German sentiment model by Oliver Guhr? Can I just follow your steps, or do I have to take care of a special tokenizer or anything? Cheers and thanks for your work.
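A hedged sketch: I believe the checkpoint meant here is oliverguhr/german-sentiment-bert, and its matching tokenizer ships with the checkpoint, so the same fine-tuning steps as in the video should carry over:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "oliverguhr/german-sentiment-bert"
tokenizer = AutoTokenizer.from_pretrained(model_name)   # loads the tokenizer that belongs to this checkpoint
model = AutoModelForSequenceClassification.from_pretrained(model_name)
# from here: tokenize your own texts and fine-tune as shown in the video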
@shanukadulshan7154 3 years ago
Hey bro, how are you executing the import lines only once? (I noticed they turned grey.)
@testingemailstestingemails4245 3 years ago
How do I train a Hugging Face model on my own dataset? How can I start? I don't know the structure of the dataset. Help... How do I store the voice recordings, how do I link them with their text, and how do I organize all of that? I am looking for anyone on this planet to help me. Should I look for the answer on Mars?!
@AshishSingh-753 3 years ago
Patrick, what should I choose for projects: computer vision or NLP, or just one of them?
@patloeber 3 years ago
I recommend focusing on one in the beginning until you are comfortable. but do what interests you the most :)
@AshishSingh-753 3 years ago
Thanks. I am a biology and math student; if you have any ideas about how to use AI on these topics, let me know, Patrick.
@andrea-mj9ce 2 years ago
Is the code that you typed available?
@artemkeller2571 1 year ago
You show how to use your own tokenizer, but you don't explain what it is and why I would possibly want to use a different one :( The same goes for what a batch is, what logits are, what all those strange numbers mean and how they can be useful. Also what PyTorch actually is. And many other things. It's like following your steps without understanding what I'm actually doing. Still the best explanation I've found so far, though... Thanks!
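A hedged mini-example of what those terms refer to (my own illustration, not from the video):

import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# a "batch" is just several examples processed together
batch = tokenizer(["I love this", "I hate this"], padding=True, return_tensors="pt")
print(batch["input_ids"])        # each word piece replaced by its integer id in the vocabulary
print(batch["attention_mask"])   # 1 = real token, 0 = padding added to equalize lengths
print(tokenizer.convert_ids_to_tokens(batch["input_ids"][0]))  # ids mapped back to readable tokens
# the model's "logits" are its raw per-class scores; softmax(logits) turns them into probabilities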
@pathikghugare9918 3 years ago
which PyCharm theme are you using?
@patloeber 3 years ago
Dracula
@md.rafiuzzamanbhuiyanafrid2087 3 years ago
Good one. Please, can you share the GitHub link for the fine-tuning script?
@haralc 1 year ago
Trying to make this work with the text-generation pipeline but to no avail... feel so dumb...
from transformers import pipeline
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
generator(["the quick brown fox"])
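A hedged guess at a working variant: the text-generation pipeline needs a causal language model such as gpt2, not the sequence-classification model and tokenizer from earlier in the video:

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
print(generator("the quick brown fox", max_length=30))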
@trevormuchenje1553 3 years ago
Is there a specific reason for using PyTorch instead of TensorFlow for this task?
@patloeber 3 years ago
nope, both are fine. I just had to choose one here ;)
@trevormuchenje1553 3 years ago
@@patloeber okay great. Thanks for the wonderful tutorial
@736939 2 years ago
I didn't get your fine-tuning, because (as I understood it) fine-tuning means that you should freeze part of your neural network (by setting requires_grad=False) and train only some part of your model (usually the output layers), and unfreeze the layers afterwards.
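For comparison, a hedged sketch of that "freeze the base, train only the head" style (the video fine-tunes all weights, which is also a common approach):

import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

for param in model.base_model.parameters():   # base_model = the pretrained encoder
    param.requires_grad = False               # frozen: no gradients, weights stay fixed

# only the freshly initialized classification head stays trainable;
# give the optimizer just those parameters:
optimizer = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=5e-5)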
@andrea-mj9ce 2 years ago
Some links are broken
@benxneo 3 years ago
does it support R also?
@ubaidulkhan 10 months ago
Are you using a local GPU?
@mandilkarki5134 3 years ago
Yayyyy
@airepublic9864 2 years ago
You have the same voice as Kevin from Data School.
@soumilyade1057 1 year ago
Most blogs and videos contain the same information. An already-prepared dataset comes with certain benefits; merely going through a snippet of code doesn't help much. 😑
@enriquecarbo9096 2 years ago
500 likes :)
@smnt 2 years ago
Is anyone else triggered by the fact that he kept calling the "huggingface" a "smiley face"?
@salemsheikh254 7 months ago
Dull.
Fine-tuning Large Language Models (LLMs) | w/ Example Code
28:18
Shaw Talebi
363K views
Fine-Tuning BERT for Text Classification (Python Code)
23:24
Shaw Talebi
6K views
What is Hugging Face? - Machine Learning Hub Explained
10:05
NeuralNine
39K views
How I’d learn ML in 2024 (if I could start over)
7:05
Boris Meinardus
1.2M views
Fine-tuning LLMs with PEFT and LoRA
15:35
Sam Witteveen
134K views
Transformers (how LLMs work) explained visually | DL5
27:14
3Blue1Brown
3.8M views