The last 6-8 minutes of this video is exactly what I have been trying to hunt down as a tutorial. Thank you!
@just_ign · 1 year ago
There are so many videos out there that show how to use huggingface's models with a pipeline and making it seem so "easy" to do things, which it is. But unlike those videos, this one really shows how we can use models natively and train them with our own cycles. Instead of portraying things as "easy", you decided to show how to actually get things done and I absolutely loved that!! Thanks for the tutorial :D
@shubhamgattani5357 · 6 months ago
Almost 3 years since this video, and it's still so relevant today. Thank you sir.
@aidarfaizrakhmanov1901 · 2 years ago
Maaan! I liked how you started the tutorial: well explained and sweet for beginners. Starting from the PyTorch classification, you probably assumed "enough with beginners, let's level up 100x lol". Many of the lines of code you wrote have arguments that require some googling, so a quick high-level explanation of those could do magic. Nevertheless, thanks for making this video mate.
@netrahirani3147 · 2 years ago
I feel like I've hit a jackpot! It took me forever to find such an easy-to-learn video. Das war sehr gut! Danke! (That was very good! Thank you!)
@oliverguhr8746 · 2 years ago
Thanks for using my model :)
@haralc · 1 year ago
OMG! Thanks for this video! Don't have to deal with French accent anymore!
@CppExpedition · 2 years ago
I've seen lots of tutorials... this is the best of all!
@WalkAloneLive · 3 years ago
I was ready to subscribe to you for a second time :D
@patloeber · 3 years ago
yeah :)
@parttimelarry · 3 years ago
Excited about this one, thanks!
@caiyu538 · 1 year ago
Clear explanation for beginners. Great!
@Lakshraut · 1 year ago
Your presentation is excellent.
@philipp5636 · 1 year ago
Holy shit this just saved my and my thesis from a week of pain. Thank you very much!
@mairadebayser5383 · 2 years ago
Nice video. It seems that my work from 2015 while at IBM Research, which was exactly what is presented in this video, has been widely accepted in the machine-learning community. Cool. 🤗
@patloeber · 2 years ago
Thank you! Yeah the ML community has grown a lot :)
@shubhamgattani5357 · 6 months ago
This earth needs more researchers like you. (Instead, the number of politicians keeps growing 🤣)
@imdadood5705 · 3 years ago
I am a simple man! I see Patrick, I like the video!
@patloeber · 3 years ago
❤️
@HuevoFriteR · 2 years ago
Thanks for the tutorial buddy, it was amazing!
@robosergTV · 3 years ago
Please make a whole series on this :) There is also a very nice framework on top of this called "simple transformers"
@patloeber · 3 years ago
thanks for the suggestion
@kinwong6383 · 3 years ago
This is really powerful and efficient for real-world usage. I wonder if Kaggle has a rule banning people from doing this in competitions. We almost heard Patrick speak German. That was so close! Danke (thanks) for the video!
@prettiestthing · 2 years ago
Loving this ❤! Please do a series on this 🥳
@vijaypalmanit · 3 years ago
Very nice explanation; it cleared up many things I was confused about, e.g. tokenizers. I really liked the video and your way of teaching. Expecting more, like fine-tuning BERT on a custom dataset; please make a video on it.
@sanjaybhatikar · 1 year ago
How would you do neural transfer learning (retraining) by unfreezing only the fully connected layers? I was given to understand that this is the proper way to fine-tune a deep learning model, not retraining all model parameters.
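A minimal sketch of the freezing approach the question describes: freeze everything, then unfreeze only the final fully connected layer. A toy stand-in model is used here instead of a real Hugging Face checkpoint so the snippet runs anywhere; the layer split is illustrative only.

```python
import torch.nn as nn

# Toy stand-in for a pretrained model: the first layers play the role of the
# pretrained encoder, the last Linear plays the role of the classification head.
model = nn.Sequential(
    nn.Linear(16, 32),  # "encoder" part: to be frozen
    nn.ReLU(),
    nn.Linear(32, 2),   # "classifier" head: stays trainable
)

# Freeze everything first...
for p in model.parameters():
    p.requires_grad = False
# ...then unfreeze only the final fully connected layer.
for p in model[2].parameters():
    p.requires_grad = True

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # ['2.weight', '2.bias']
```

With a real transformers checkpoint the same two loops apply to the model's encoder and head submodules, and the optimizer can then be built from only the parameters that still require gradients.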
@SanataniAryavrat · 3 years ago
Thank you Patrick... this was a much-awaited course... can you please create a full-length tutorial including deploying a "dashboard app" on Docker?
@juliank7408 · 8 months ago
Thanks! Well explained!
@haralc · 1 year ago
Hi, would you please make a video on text generation and question answering, dissecting how the pipeline does it and then fine-tuning?
@annarocha9769 · 2 years ago
Thank you soooooooo much for this, subscribed :)
@haralc · 1 year ago
Would you please make another video with the latest version of the libraries?
@LuisMorales-bc7ro · 1 year ago
I love you, Patrick
@canernm · 3 years ago
Hello, thank you for the extremely valuable video. I do have one question, however. During the fine-tuning process, in the first case where we use Trainer(): as far as I can tell, the model and the data are not on the GPU by default, and we also do not move them there (as we do in the custom PyTorch training loop). I tried it in a notebook, and when I run the command "next(model.parameters()).is_cuda", where model is the from_pretrained() model, it returns False. Still, moving the model to the GPU would be the same even in this case (with the trainer), by doing from_pretrained('...').to('cuda'). However, when we only have a dataset and we don't create a dataloader, I am not sure how to move it to the GPU. Do you know how, perhaps? I would appreciate it a lot!
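On the dataset half of that question, the common pattern (and, as far as I know, what the Hugging Face Trainer does internally per step) is to leave the dataset on the CPU and move only each batch of tensors to the device inside the loop. A rough sketch with dummy tensors shaped like a tokenizer's output:

```python
import torch

# Pick the device the same way a manual training loop would.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A dummy batch shaped like tokenizer output (4 samples, 8 tokens each).
batch = {
    "input_ids": torch.randint(0, 1000, (4, 8)),
    "attention_mask": torch.ones(4, 8, dtype=torch.long),
    "labels": torch.tensor([0, 1, 0, 1]),
}

# The dataset itself stays on the CPU; only the current batch is moved.
batch = {k: v.to(device) for k, v in batch.items()}
print(batch["input_ids"].device.type)
```

So there is no need to move the whole dataset to the GPU up front; per-batch transfer is the usual (and memory-friendlier) approach.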
@abhishekriyer · 3 years ago
@Patrick: Could you please share the code link for the above? Or, if it's already there, I'm unable to find it.
@UsmanMalik57 · 3 years ago
Hello, for fine-tuning a multiclass text classification model, does the approach remain the same?
@jesusmtz29 · 2 years ago
I like your tutorials. However, just one small critique: sometimes I feel you're just reading the code to me. I can do that myself, but I think the value of YT tutorials is to explain why we do certain things; otherwise I'm just punching lines in. Sorry if this sounds harsh, I don't mean it that way.
@KamalSingh-zo1ol · 1 year ago
Great video! Can you make a video on how to change the cache from the default directory to another drive?
@mays7n · 1 year ago
very helpful, thanks aaaaaa loooot
@straightup7up · 1 year ago
I'm confused: if I'm using a model from Hugging Face on my desktop, does the model communicate with remote cloud services when it runs?
@mathsharking · 1 year ago
Good tutorial
@komalkukreja4441 · 2 years ago
While loading XLM-RoBERTa, which I saved as .bin or .pth on my machine, I am getting an incompatible-key error when loading the saved model from my local machine for evaluation.
@jaypie9092 · 1 year ago
I'm using Visual Studio and have done all the installs, and it is not working. I have the venv started and installed PyTorch and transformers. I have it in the project directory. Am I missing something?
@NickPark-n2x · 10 months ago
So for the German part, can you get the same result without the attention mask?
@ironF5 · 2 years ago
The fine-tuning is done with a supervised dataset; how do you do it in the self-supervised case, in which the data is not labeled but the model retrains on your data and makes judgments?
@xieen7976 · 1 year ago
Hi, where does the "train_test_split" function come from?
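If it's the helper used in most of these fine-tuning tutorials, it comes from scikit-learn rather than transformers. A quick sketch (the toy texts and labels here are made up):

```python
from sklearn.model_selection import train_test_split

# Made-up toy data standing in for review texts and sentiment labels.
texts = ["good", "bad", "great", "awful", "fine", "poor"]
labels = [1, 0, 1, 0, 1, 0]

# Splits texts and labels together, keeping the pairs aligned.
train_texts, val_texts, train_labels, val_labels = train_test_split(
    texts, labels, test_size=0.33, random_state=42
)
print(len(train_texts), len(val_texts))  # 4 2
```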
@yuandi9410 · 1 year ago
Hey, I can't find the model license to activate it, it doesn't show up????
@sumapriiya · 2 years ago
I tried to fine-tune the RobertaModel on a custom dataset using the Trainer object, and then saved the model and tokenizer to my Google Drive. But retrieving the model and predicting on a validation dataset gives me the same class prediction every time (all with negative values); do you have any idea why? Thanks for your help.
@nirash8018 · 2 years ago
36:02 How would you go on to make specific predictions?
@darraghcaffrey4082 · 3 years ago
Can someone explain what's going on with these two lines of code, as it's only explained with TensorFlow on Hugging Face? I understand it's a dictionary, but it's a little confusing:
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
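Those two lines build a single training example: for sample idx they pull row idx out of every list the tokenizer returned (input_ids, attention_mask, ...), wrap each in a tensor, then attach the target under the "labels" key that the Trainer expects. A self-contained sketch with dummy token IDs (the class and variable names are just illustrative):

```python
import torch
from torch.utils.data import Dataset

class SentimentDataset(Dataset):
    def __init__(self, encodings, labels):
        # encodings: dict of lists from the tokenizer, one list per output key
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        # Pick row idx out of every tokenizer output list and make it a tensor.
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        # Attach the target under the key the Trainer expects.
        item["labels"] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)

# Dummy tokenizer output for two already-padded samples.
encodings = {"input_ids": [[101, 7592, 102], [101, 2088, 102]],
             "attention_mask": [[1, 1, 1], [1, 1, 1]]}
dataset = SentimentDataset(encodings, labels=[0, 1])
sample = dataset[0]
print(sorted(sample.keys()))  # ['attention_mask', 'input_ids', 'labels']
```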
@aseemsrivastava3995 · 3 years ago
A series on this would be really great! Like the one you have on PyTorch. In that series you could go through some very complex architectures from NLP publications. Using a standard LSTM/GRU with BERT tokens plus linear layers + softmax is easy; if you could show how to implement other attention-tweaking strategies or the similarly complex architectures that people use these days in publications, it would really help us!
@Mike-jr7re · 2 years ago
Patrick, do you know how to remove the models from the hard drive? I see that each model is downloaded directly onto the Mac. Due to space problems, how can I remove them if I don't use them anymore? Thanks a lot!
@xingyubian5654 · 2 years ago
goated video
@philcui9268 · 2 years ago
Hi Patrick, this is a nice tutorial. Can we have access to the code?
@茂张-y4s · 2 years ago
where to get the source code?
@pandapanda5889 · 2 years ago
Hi, what should I do when I have a lot of comments and posts without labels? I'm a beginner, and what I see on the Internet so far is always text data with labels, such as movie reviews etc.
@v1hana350 · 2 years ago
What do fine-tuning and pre-training mean in the context of Transformers?
@soulwreckedyouth877 · 3 years ago
How do I fine-tune the German sentiment model by Oliver Guhr? Can I just follow your steps, or do I have to take care with a special tokenizer or anything? Cheers and thanks for your work
@shanukadulshan7154 · 3 years ago
Hey bro, how are you executing the import lines only once? (I noticed they turned grey.)
@testingemailstestingemails4245 · 3 years ago
How do I train a Hugging Face model on my own dataset? How can I start? I don't know what structure the dataset should have. Help, please: how do I store the voice recordings, how do I link each one with its text, and how do I organize all of that? I'm looking for anyone on this planet to help me. Should I look for the answer on Mars?!
@AshishSingh-753 · 3 years ago
Patrick, what should I choose for projects: computer vision or NLP, or just one of them?
@patloeber · 3 years ago
I recommend focusing on one in the beginning until you are comfortable. but do what interests you the most :)
@AshishSingh-753 · 3 years ago
Thanks. I am a biology and math student; if you have any ideas on how to use AI in those areas, let me know, Patrick.
@andrea-mj9ce · 2 years ago
Is the code that you typed available?
@artemkeller2571 · 1 year ago
You show how to use your own tokenizer, but you don't explain what it is and why I would want to use a different one :( Likewise what a batch is, what logits are, what all those strange numbers mean and how they can be useful. Also what PyTorch actually is. And many other things. It's like following your steps without understanding what I'm actually doing. Still the best explanation I've found so far, though... Thanks!
@pathikghugare9918 · 3 years ago
which PyCharm theme are you using?
@patloeber · 3 years ago
Dracula
@md.rafiuzzamanbhuiyanafrid2087 · 3 years ago
Good one. Please, can you share the GitHub link for the fine-tuning script?
@haralc · 1 year ago
Trying to make this work with the text-generation pipeline, but to no avail... feel so dumb...
from transformers import pipeline
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
generator(["the quick brown fox"])
@trevormuchenje1553 · 3 years ago
Is there a specific reason for using PyTorch instead of TensorFlow for this task?
@patloeber · 3 years ago
nope, both are fine. I just had to choose one here ;)
@trevormuchenje1553 · 3 years ago
@@patloeber Okay, great. Thanks for the wonderful tutorial
@736939 · 2 years ago
I didn't get your fine-tuning, because (as I understood it) fine-tuning means that you should freeze some part of your neural network (by setting requires_grad=False) and train only some part of your model (usually the output layers), and afterwards unfreeze the layers.
@andrea-mj9ce · 2 years ago
Some links are broken
@benxneo · 3 years ago
does it support R also?
@ubaidulkhan · 10 months ago
Are you using a local GPU?
@mandilkarki5134 · 3 years ago
Yayyyy
@airepublic9864 · 2 years ago
You have the same voice as Kevin from Data School.
@soumilyade1057 · 1 year ago
Most blogs and videos contain the same information. An already-prepared dataset comes with certain benefits; merely going through a snippet of code doesn't help much. 😑
@enriquecarbo9096 · 2 years ago
500 likes :)
@smnt · 2 years ago
Is anyone else triggered by the fact that he kept calling the "huggingface" a "smiley face"?