As a 26-year-old data scientist with three years of industry experience, I closely follow your course, Jeremy. I want to express my gratitude for your excellent teachings and the enthusiasm you bring to every class. Thank you very much.
@DJcatamount 1 year ago
This is a legendary video! Within a year of this upload, human society is being transformed by these general-purpose "Transformers" 🚀
@TheAero 1 year ago
55k people watched, 5k finished this lesson, 1k will apply what they learned, and 100 will excel in their knowledge. Work hard and you will be a legend. Small steps, huge goals!
@honedattraction4110 4 months ago
1 will write your comment
@jharkins 1 year ago
Going through the course - Jeremy, your teaching style is amazing. I *really* appreciate what you're doing. 41:42 was my mind-blown moment of this class. It's arbitrary - you just have to do something consistent for it to learn from. So amazing that we're at this point in the deep learning curve already. Thanks!
@ILikeAI1 1 year ago
It's great to see that the Hugging Face model hub is nearly 10X the size it was when this was recorded
@YashSinghal 4 months ago
20x now :)
@vikramsandu6054 1 year ago
Amazing video. The thing I like most is the small hacks and tricks Jeremy provides in between the topics.
@mizoru_ 2 years ago
Great to have an introduction to transformers from Jeremy!
@tanmeyrawal644 2 years ago
At 1:33:31, num_labels depends on the number of categories. So if we're treating this as a classification problem, shouldn't it have been num_labels=5?
@harshathammegowda1615 11 months ago
The "labels" in num_labels have a different meaning here. Think of a label as a feature/column being predicted, not as the category's possible values. Here the model predicts just the score, which is 1 column, hence num_labels=1, even though the score can take up to 5 values: 0, 0.25, 0.5, 0.75, 1.0. If the model were to also predict something like patent acceptance/rejection, then num_labels=2 (score + that).
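The distinction above can be sketched in plain Python. This is a toy illustration of the rule, not the actual Hugging Face code (though the library follows the same convention): with num_labels=1 the head outputs a single number and a regression loss (MSE) applies; with num_labels>1 it outputs one logit per class and cross-entropy applies.

```python
import math

def pick_loss(num_labels, outputs, target):
    """Toy sketch of how num_labels selects the problem type:
    one output column -> regression (mean squared error),
    several output columns -> classification (cross-entropy)."""
    if num_labels == 1:
        # outputs is a single predicted score, target a float label
        return (outputs[0] - target) ** 2
    # softmax cross-entropy over num_labels logits; target is a class index
    exps = [math.exp(o) for o in outputs]
    return -math.log(exps[target] / sum(exps))

# Patent similarity as regression: one "label" (the score column)
reg_loss = pick_loss(1, [0.75], 0.5)   # (0.75 - 0.5)^2 = 0.0625

# Treating the five possible scores as 5 classes instead
cls_loss = pick_loss(5, [0.1, 0.2, 0.3, 0.2, 0.1], 2)
```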
@DevashishJose 1 year ago
Thank you so much for this amazing lecture, Jeremy. As always, this was really insightful and a good learning experience.
@Xxpat 1 year ago
What an awesome lecture!
@mukhtarbimurat5106 1 year ago
Great lesson, thanks!
@watsomk 4 months ago
Holy crap I feel like I have so much to learn
@adamkonopka4942 1 year ago
Awesome lesson! Thanks for this series!
@erichlehmann3667 2 years ago
Favorite lesson so far 👌
@lechatsportif124 3 months ago
He uploaded this video in July 2022, and ChatGPT was released in November 2022. I wonder if Jeremy still thinks NLP has such big potential for the future.
@khaled7904 2 months ago
What do you mean? Would ChatGPT affect NLP's potential badly?
@stevesan 2 years ago
Great video. A question about validation and testing: at 58:44 he says you have to "go back to square one" if your test set result is bad. What does that mean in practice? Does it mean you have to delete your data, in the `rm *` sense? Or just re-shuffle train vs. test vs. validation (which may not be possible, as in the time-series case - in which case, get new data?). And even if your test result WAS good, there's still a chance THAT was a coincidence, right?
@NegatioNZor 1 year ago
If you got a decent result on the validation set and then end up with a bad result on your held-out test set, your solution (probably) has some flaw. "Going back to square one" in this sense just means you have to re-evaluate your solution. Often the best way of doing that is testing the most basic model, with the most basic data you have, just to see that it gives sensible answers in that case. It has nothing to do with deleting the data or re-shuffling train/test :)
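The split discipline being discussed can be sketched like this (function and fraction names are my own; the key point is that the test set is carved out once and only inspected after all modelling choices are frozen):

```python
import random

def three_way_split(items, valid_frac=0.2, test_frac=0.2, seed=42):
    """Sketch of a standard train/validation/test split.
    The validation set is used to tune the model; the test set
    is held out and looked at only once, at the very end."""
    rng = random.Random(seed)
    items = items[:]          # don't mutate the caller's list
    rng.shuffle(items)
    n_test = int(len(items) * test_frac)
    n_valid = int(len(items) * valid_frac)
    test = items[:n_test]                   # touched once, at the end
    valid = items[n_test:n_test + n_valid]  # used during development
    train = items[n_test + n_valid:]
    return train, valid, test

train, valid, test = three_way_split(list(range(100)))
assert not (set(train) & set(valid)) and not (set(train) & set(test))
```

For time-series data you would split by time rather than shuffling, so the validation and test sets come after the training period.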
@analyticsroot1898 2 years ago
Great tutorial!
@ucmaster2210 1 year ago
I have a list of keywords, thousands of rows long. Which deep learning model should I use to classify them into different topics? The topics are not known in advance. Thanks.
@juancruzalric6605 8 months ago
If you want to predict two classes, 1 and 0, from a dataset, how can you add the F1-score metric?
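For the question above, here is binary F1 from scratch (a sketch; in practice `sklearn.metrics.f1_score` would do this for you):

```python
def f1_score(preds, labels, positive=1):
    """Binary F1: harmonic mean of precision and recall
    for the positive class."""
    tp = sum(p == positive and l == positive for p, l in zip(preds, labels))
    fp = sum(p == positive and l != positive for p, l in zip(preds, labels))
    fn = sum(p != positive and l == positive for p, l in zip(preds, labels))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# TP=2, FP=1, FN=1 -> precision = recall = 2/3 -> F1 = 2/3
print(f1_score([1, 0, 1, 1], [1, 1, 0, 1]))
```

With the Hugging Face Trainer you would compute this inside a `compute_metrics` function, after taking an argmax over the logits to get class predictions.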
@bloody_albatross 1 year ago
Does it matter in what order the vocabulary is numbered? Say the vocabulary is just the English alphabet: does it matter for how the neural network works whether A B C is numbered 1 2 3 or e.g. 26 3 15? Given all the continuous mathematical operations in the network (floating-point math), does it matter which tokens are numerically next to each other and which have a bigger distance?
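As I understand it, no: a token id never enters the floating-point math directly; it only selects a row of a learned embedding matrix, so the numbering is arbitrary as long as it is used consistently. A toy sketch of that invariance (made-up vectors):

```python
# Token ids are just indices into a table of learned vectors.
emb_a = {1: [0.3, -1.2], 2: [0.9, 0.4], 3: [-0.5, 0.1]}   # A=1, B=2, C=3

# Relabel the ids (A=26, B=3, C=15) and permute the table the same way.
relabel = {1: 26, 2: 3, 3: 15}
emb_b = {relabel[i]: v for i, v in emb_a.items()}

sentence = [1, 3, 2]                       # "A C B" in the first scheme
vectors_a = [emb_a[i] for i in sentence]
vectors_b = [emb_b[relabel[i]] for i in sentence]
assert vectors_a == vectors_b              # the network sees identical inputs
```

Numeric closeness of two ids therefore implies nothing; closeness in the learned embedding space is what carries meaning.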
@SarathSp06 2 years ago
Great content. I was wondering why the AutoTokenizer has to be initialized with a pre-trained model if all it does is tokenization. How would it differ when different models are used?
@Slava705 2 years ago
A pretrained model has a vocabulary, and a tokenizer is based on that vocabulary. Also, I guess each model's tokenizer produces a slightly different data structure, which is why there is no single universal tokenizer.
@schrodingersspoon1705 2 years ago
Just my attempt to answer the question; I could be wrong. I believe it's because each pre-trained model has its own method of tokenization that it accepts, so each model has its own tokenizer. Given the model you are going to use, AutoTokenizer just fetches the corresponding tokenizer that works with that model.
@tharunnarannagari2148 2 years ago
I guess different tokenizers generate different tokens for the same sentence, and the pretrained model expects the incoming input tokens to match its embedding layer weights for best fine-tuning, since the model weights are frozen.
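A toy illustration of that pairing (these are made-up tokenizers, not real Hugging Face ones): the same sentence becomes entirely different id sequences under different schemes, and a model only understands ids from the vocabulary it was trained with.

```python
def word_tokenize(text, vocab):
    """Toy word-level tokenizer: assigns the next free id to each new word."""
    return [vocab.setdefault(w, len(vocab)) for w in text.lower().split()]

def char_tokenize(text, vocab):
    """Toy character-level tokenizer: one id per distinct character."""
    return [vocab.setdefault(c, len(vocab)) for c in text.lower()]

sentence = "Deep learning"
ids_words = word_tokenize(sentence, {})   # one id per word
ids_chars = char_tokenize(sentence, {})   # one id per character

# Same text, incompatible id sequences - feeding char ids to a
# word-vocab model would index the wrong embedding rows entirely.
assert ids_words != ids_chars
```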
@toromanow 1 year ago
Where can I find the notebook for this lesson? Chapter 4 of the book is about something different (an image classifier).
@chrgeo8342 1 year ago
Check chapter 10 of the book
@davidchen6087 1 year ago
A bit confused about the predicted value being a continuous number between 0 and 1. I thought we were training a classifier that would categorize the inputs as identical, similar, or different.
@florianvonstosch 9 months ago
From the Kaggle competition page:

Score meanings
The scores are in the 0-1 range with increments of 0.25 and the following meanings:
1.0 - Very close match. This is typically an exact match except possibly for differences in conjugation, quantity (e.g. singular vs. plural), and addition or removal of stopwords (e.g. "the", "and", "or").
0.75 - Close synonym, e.g. "mobile phone" vs. "cellphone". This also includes abbreviations, e.g. "TCP" -> "transmission control protocol".
0.5 - Synonyms which don't have the same meaning (same function, same properties). This includes broad-narrow (hyponym) and narrow-broad (hypernym) matches.
0.25 - Somewhat related, e.g. the two phrases are in the same high-level domain but are not synonyms. This also includes antonyms.
0.0 - Unrelated.
@florianvonstosch 9 months ago
Just noticed the "with increments of 0.25" part. I guess this makes the problem kind of a hybrid between classification and regression.
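One way to handle that hybrid is to train a plain regressor and, if desired, snap its outputs to the allowed increments at prediction time. This is a hypothetical post-processing step, not something the lesson does (as far as I recall it submits the raw regression output), and whether snapping helps the competition's correlation metric is an empirical question:

```python
def snap_to_quarters(pred):
    """Clamp a regression output to [0, 1] and round it to the
    nearest 0.25 increment, matching the competition's score scale."""
    pred = min(max(pred, 0.0), 1.0)
    return round(pred * 4) / 4

print(snap_to_quarters(0.31))   # 0.25
print(snap_to_quarters(0.88))   # 1.0
print(snap_to_quarters(1.70))   # 1.0 (clamped first)
```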
@DearGeorge3 2 years ago
I ran into a lot of weird warnings from the transformers block while executing the script. It's absolutely unclear which of them can be ignored and which are critical. I could report them, but where?
@eftilija 2 years ago
You can always post questions or issues on the fastai forums.