[Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality

  24,006 views

Yannic Kilcher

Day ago

Comments
@sg785 4 years ago
Classic papers are maybe the best addition to this kind of content. I find it really useful and important to come back to old papers sometimes and look at them from the perspective of the modern state of DL.
@kappadistributive 4 years ago
+1 As my history teacher in high school used to say: You must know where you came from to know where you are going.
@robertlucente657 2 days ago
It is refreshing to have all these classics worked through. They are helpful to mid-tier people: the experts don't need help, and for beginners it is too much, so mid-tier is where it helps.
@Scranny 4 years ago
Wow. Just wow. This was a fantastic overview of word2vec. Your explanations of the minute details and the vague, harder-to-grasp concepts of the paper were exceptional. Your comments on their unconventional authorship and writing-style issues were also on point. I felt like I learned and re-learned how word2vec really works. Yes, please cover more classic papers, because understanding the foundations is important. Way to go, Yannic!
@oostopitre 4 years ago
There is so much value in these videos just from the core content itself. However, anecdotes like how the hierarchical softmax was a distraction in the paper add much more context and hence understanding. Thank you for these videos :)
@MrjbushM 4 years ago
Thanks for this classic papers series. For those of us who are learning deep learning, it is important to cover the classic and main old ideas in the field.
@michaelfrost6437 3 years ago
My browser crashed along with my 50,000 tabs. I restored them and suddenly Yannic is telling me about 5 papers simultaneously.
@ShivaramKR 4 years ago
Thanks Yannic for the [Classic] videos! These videos are more useful than many of the papers that make only small incremental improvements.
@florianhonicke5448 4 years ago
Welcome to Yannic's paper museum :) Very nice to look at older papers as well!
@doyourealise 4 years ago
Wow, I have been learning word2vec since yesterday and was struggling to grasp the concept, and here you uploaded a video explaining the paper!
@DiegoJimenez-ic8by 4 years ago
Thanks for visiting such an important paper!!! Awesome content!!
@leapdaniel8058 4 years ago
I would definitely be into a playlist of "classical" data science videos like this. There is so much content to absorb, being able to focus on the ones that have been proven historically and vetted would be awesome. It also gives you a chance to reference how things have improved since then, which is nice to know.
@fotisj321 4 years ago
Great explanation of a paper, as usual. And this paper (or the three of them) changed so much. Even if token-based embeddings are usually preferable, for some applications type-based word embeddings are probably still the better choice, for example if you are interested in the history of concepts and want to track their semantic change.
@kappadistributive 4 years ago
To provide another argument for the case of classical papers: it is very difficult to anticipate, in the moment of their creation, which ideas will stand the test of time. But by visiting 'classical' papers we allow ourselves the benefit of hindsight, examining those ideas that time proved to be invaluable.
@ironic_bond 4 years ago
Really enjoying watching these videos. You did a great job explaining them!
@thearianrobben 4 years ago
Always good to look back at classic papers.
@adriandip8448 2 years ago
Thank you!!! So much better than the Stanford class.
@wizardOfRobots 3 years ago
Thank you. I couldn't understand word2vec from Prof. Andrew Ng's video, but you explained it clearly!
@harshpoddar2113 3 years ago
Really loved your explanation. Thank You.
@francoisdupont2108 4 years ago
Classic papers are a great idea. It's really helpful for those like me who are new to ML. I often try to read papers that are extensions of algorithms introduced in the classic ones, and I struggle to understand them since I don't have the prerequisites.
@joseiglesias330 4 years ago
Yes, more historical papers!!
@sonOfLiberty100 4 years ago
Love it, more of old papers :)
@zd676 4 years ago
Please keep going with the amazing content! Love it!
@spaceisawesome1 4 years ago
Wait you're supposed to be having a break! This is your second video in two days. 😅
@tech4028 4 years ago
The videos are pre-recorded! He's amazing, man.
@spaceisawesome1 4 years ago
Indeed what a guy. I think he's doing some good things with this channel!
@aflah7572 3 years ago
Love this series, looking forward to more such videos
@binjianxin7830 3 years ago
OMG, I'm revisiting this clip for negative sampling because I was confused by it while trying to understand random-walk node embeddings in GNNs.
@thepaulozip 4 years ago
Wow that's nice! Please do more about classical papers!
@TechVizTheDataScienceGuy 4 years ago
Classic series 🔥
@thntk 4 years ago
Can you please give a reference for your claim at 5:20? You said that Queen is just one of the closest words to King and the computation -man+woman is irrelevant; that makes sense in this case, but I don't see how it can explain more complicated analogies such as the plural-form analogy. I would like to read more about this.
@YannicKilcher 4 years ago
arxiv.org/abs/1905.09866
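For readers following this thread, here is a rough numpy sketch (not from the video or the linked paper; the toy vocabulary and all names are made up) of the evaluation detail under discussion: the analogy test computes king - man + woman and returns the nearest neighbour by cosine similarity, but the standard protocol excludes the three query words from the candidates, which is exactly the convention the linked paper scrutinises.

```python
import numpy as np

# Toy stand-ins: `vectors` is a (vocab_size, dim) embedding matrix, `vocab` maps
# words to row indices. Both are placeholders, not trained word2vec vectors.
rng = np.random.default_rng(0)
words = ["king", "queen", "man", "woman", "prince"]
vocab = {w: i for i, w in enumerate(words)}
vectors = rng.normal(size=(len(words), 50))
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)  # unit-normalise rows

def analogy(a, b, c, exclude_inputs=True):
    """Rank words by cosine similarity to vec(a) - vec(b) + vec(c)."""
    target = vectors[vocab[a]] - vectors[vocab[b]] + vectors[vocab[c]]
    target /= np.linalg.norm(target)
    sims = vectors @ target                       # cosine similarity (rows are unit length)
    ranked = [words[i] for i in np.argsort(-sims)]
    if exclude_inputs:                            # the usual evaluation convention
        ranked = [w for w in ranked if w not in {a, b, c}]
    return ranked

print(analogy("king", "man", "woman"))                        # query words excluded
print(analogy("king", "man", "woman", exclude_inputs=False))  # query words kept
```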
@carlossegura403 4 years ago
This is awesome!
@herp_derpingson 4 years ago
5:00 That's news to me. I remember trying it out myself: the king/queen thing worked while a lot of other analogies didn't. I didn't put much thought into it back then.
25:13 3/4 is 75%, which is very close to 80%, which makes me think it has something to do with the Pareto principle. Maybe 4/5 didn't do better because we truncated the tail of the distribution.
27:40 Heuristics = wild-ass guess. Computer Science 101 :D
30:30 I think they didn't do that because back in 2013 they didn't have the option :) TensorFlow was made public in late 2015. Back in 2013 there was no TensorFlow, no TPUs, and GPU clusters were super niche.
@kappadistributive 4 years ago
Regarding your second comment: 80% doesn't magically translate to an exponent here in the way you seem to suggest. To see this, consider the extreme case in which one contributor causes 19% of the effect. This contributor would receive the same exponent in its probability mass function that it would receive in a much less extreme power-law scenario. It would seem, however, that the 19% contributor should be sampled *way* less frequently than that.
@YannicKilcher 4 years ago
Yes, you're probably right about there not being GPUs, but they had their whole MapReduce infrastructure etc., so it would have been easy for them to just keep it at that scale.
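On the 3/4 exponent discussed in this thread: the paper defines the negative-sampling noise distribution as the unigram distribution raised to the 3/4 power and renormalised, i.e. $P_n(w) \propto U(w)^{3/4}$. A tiny sketch with made-up counts shows the effect, independent of any 80/20 reading: frequent words are damped and rare words are boosted relative to the raw unigram distribution.

```python
import numpy as np

# Made-up unigram counts for a toy vocabulary (illustrative only).
counts = np.array([1_000_000, 50_000, 5_000, 500, 50], dtype=float)

unigram = counts / counts.sum()   # U(w): raw unigram distribution
noise = counts ** 0.75            # U(w)^{3/4}, the exponent from the paper
noise /= noise.sum()              # renormalise to obtain P_n(w)

for u, n in zip(unigram, noise):
    print(f"unigram={u:.4f}  noise={n:.4f}")   # head is damped, tail is boosted
```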
@aa-xn5hc 4 years ago
Yes, I love historical papers.
@ativjoshi1049 4 years ago
More videos like this please....
@danberm1755 A year ago
Thanks! 👍
@saswatnanda3481 2 years ago
One video on "Efficient Estimation of Word Representations in Vector Space", please.
@aa-xn5hc 4 years ago
Really Great!
@scottmiller2591 4 years ago
Don't forget that Word2Vec is part of the encoding in the front end of a transformer, so w2v is still plenty relevant!
@YuenHsienTseng 4 years ago
As far as I know, Transformers and the like (especially BERT) use Byte Pair Encoding to tackle the out-of-vocabulary problem. The vocabulary size is often reduced to within 30,000, rather than 10^5 to 10^7. Therefore, there are no Word2Vec embeddings there (but an input embedding layer is still there, whose weights are learned when the Transformer is trained). Despite this change, the concept of Word2Vec really did influence how we apply deep learning in natural language processing.
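A minimal sketch of the distinction in this reply, with made-up sizes and token ids: a Transformer's input embedding is just a trainable lookup table over a subword vocabulary (around 30,000 entries), learned jointly with the rest of the model, rather than frozen, pretrained word2vec vectors.

```python
import numpy as np

vocab_size, d_model = 30_000, 512   # typical BPE vocabulary size and model width (illustrative)
rng = np.random.default_rng(0)

# Trainable lookup table: one d_model-dimensional row per subword token.
# In a real Transformer this matrix is a parameter updated by backprop,
# not a word2vec matrix loaded from disk and kept fixed.
embedding = rng.normal(scale=0.02, size=(vocab_size, d_model))

token_ids = np.array([101, 2023, 2003, 1037, 7953])  # example subword token ids (made up)
inputs = embedding[token_ids]                        # shape: (sequence_length, d_model)
print(inputs.shape)
```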
@vladimirradenkovic9119 A year ago
I love you man!
@Notshife 4 years ago
Yannic, are you also taking a break from regular paper reading in your personal time? And if not, do you think you could provide a "this is interesting" list in your Discord channel when you happen to come across interesting papers?
@simba2702 4 years ago
I love your videos. Just a side note: when you explain things with notes, make them readable so that if I jump to a random section I can understand what you are trying to explain.
@ikopysitsky A year ago
I may be mistaken here, but if you're maximizing the objective function for negative sampling, your negative and positive signs for w_O vs. w_I should be reversed, so it should be minimizing instead of maximizing.
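For reference, the negative-sampling objective as written in the paper (its Eq. 4) is the quantity to be maximised; the observed output word $w_O$ appears with a plus sign inside the sigmoid and the $k$ sampled noise words $w_i$ with a minus sign:

$$\log \sigma\!\left({v'_{w_O}}^{\top} v_{w_I}\right) \;+\; \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}\!\left[\log \sigma\!\left(-{v'_{w_i}}^{\top} v_{w_I}\right)\right]$$

Negating the whole expression turns it into a loss to be minimised, which is the sign/direction flip the comment is pointing at.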
@M0481 4 years ago
A comment: at 3:33 you mention that with PCA these are the first 2 dimensions that are portrayed. I don't think this is true, right? PCA allows you to map a certain percentage of the expressiveness of the data into a lower-dimensional space. This is not the same as simply taking the first two dimensions.
@YannicKilcher 4 years ago
Correct, I meant the first two PCA dimensions, not data dimensions.
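A small numpy sketch of the distinction in this exchange, using random placeholder embeddings: the first two raw coordinates of the vectors are not the same thing as the coordinates along the first two principal components, which is what such a plot actually shows.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))      # placeholder embeddings: 1000 words, 300 dimensions

first_two_raw = X[:, :2]              # literally the first two data dimensions

# PCA by hand: centre the data, then project onto the top-2 eigenvectors of the covariance.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
top2 = eigvecs[:, np.argsort(eigvals)[::-1][:2]]  # eigenvectors of the two largest eigenvalues
first_two_pca = Xc @ top2                         # coordinates in the first two PCA dimensions

print(first_two_raw.shape, first_two_pca.shape)   # same shape, generally different content
```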
@kurianbenoy1459 4 years ago
Obviously like this.
@GuilhermeOliveira-kx4mz A year ago
To all my students: let me know personally if you find my comment. Cheers!
@jeremykothe2847 4 years ago
0 0 0 0 0.05 0.95 st!
@maxdoner4528 3 years ago
Gj