L15.6 RNNs for Classification: A Many-to-One Word RNN

6,228 views

Sebastian Raschka

Comments: 18
@NoNonsense_01 2 years ago
The conceptual clarity in these videos is astonishing. I am surely going to purchase your book now. One little thing, though, as you might have noticed: at 7:22, the number of columns in the one-hot vector should have been eleven (for indices 0 to 10) instead of ten. Also, at 23:11 the rows chosen for "is" and "shining" should be interchanged (the lookup for "is" should be row no. 3 and for "shining" it should be row no. 5). Besides, I have rarely seen anyone explain steps 3 and 4 so beautifully. You have my gratitude for that.
@SebastianRaschka 2 years ago
Glad to hear the video is useful overall! And great catch, I totally miscounted here and there is one index missing! Haha, in practice, in a general one-hot encoding context, it is common to drop one column because of redundancy. In other words, you can deduce the 11th column from the other 10. But yeah, that's not what I did here; it was more like a typo! Thanks for mentioning it, I wish there was a way to edit videos on YT ;)
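A minimal sketch of that redundant-column idea, using scikit-learn's OneHotEncoder with drop="first" on a toy 11-index vocabulary (my own toy setup, not the exact encoding from the slides):
```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# Toy vocabulary indices 0..10 (11 "words"), one index per row
indices = np.arange(11).reshape(-1, 1)

# drop="first" keeps only 10 columns; the dropped index is still
# recoverable because it is the only row that encodes to all zeros
enc = OneHotEncoder(drop="first")
onehot = enc.fit_transform(indices).toarray()

print(onehot.shape)  # (11, 10)
print(onehot[0])     # all zeros -> the "dropped" first index
```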
@harishpl02 2 years ago
Thank you for all the effort you put into the slides and presentations. I have been following the deep learning path, and it's really helpful.
@mostinho7 10 months ago
5:40 Turning a sentence into vector input with a bag of words / vocabulary; each word maps to a number. Without a word embedding, the numbers don't capture semantic meaning. 11:40 After one-hot encoding, we use an embedding matrix to get the embedding.
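A minimal PyTorch sketch of that word-to-index-to-embedding pipeline (toy vocabulary and embedding size of my own choosing, not the lecture's exact setup):
```python
import torch
import torch.nn as nn

# Toy vocabulary: each word maps to an integer index (no semantics yet)
vocab = {"<pad>": 0, "the": 1, "sun": 2, "is": 3, "shining": 4}
sentence = ["the", "sun", "is", "shining"]
token_ids = torch.tensor([vocab[w] for w in sentence])  # tensor([1, 2, 3, 4])

# The embedding matrix maps each index to a dense, learnable vector
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)
embedded = embedding(token_ids)

print(embedded.shape)  # torch.Size([4, 8])
```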
@saadouch 3 years ago
Hi @SebastianRaschka, thank you for this amazing video. I just wanted to mention that steps 3 and 4 are actually quite important, because some problems, like the one that got me to find this video, are way too complex and need a deeper understanding of what happens behind the curtains. Anyway, I'm so grateful for your efforts, and keep up the good work!!
@rhythmofdata1969 2 years ago
Hi, great videos! Question: In the slide, shouldn't the one-hot vector have 11 positions for the vocabulary, which includes and ? I only see 10 slots in the one-hot encoding on the slides.
@SebastianRaschka 2 years ago
Good catch! I probably dropped one column (it is relatively common, because one of the columns will always be redundant; i.e., if all 10 columns are 0, it implies that the 11th column has the 1).
@LanaDominkovic 1 year ago
@SebastianRaschka That would mean the representation for 10/padding would be all 0s, because the 1 would be at index 11, which is now dropped?
@736939 2 years ago
22:16 Is the embedding matrix created randomly, or is there some rule for creating it? Thank you.
@SebastianRaschka 2 years ago
Good question! Usually it's initialized from random values. It's basically a fully-connected layer, but since the inputs are sparse, PyTorch implements a special Embedding layer to make the computations more efficient. Conceptually, though, you can think of it as a fully-connected layer that is randomly initialized and then learned.
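A minimal sketch of that equivalence in PyTorch (toy sizes of my own choosing): multiplying a one-hot vector by the embedding weight matrix selects the same row that the Embedding layer looks up directly.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(123)
vocab_size, embed_dim = 10, 4

# Randomly initialized embedding weights, learned during training
embedding = nn.Embedding(vocab_size, embed_dim)
token_ids = torch.tensor([2, 5, 7])

# (1) Efficient lookup via the Embedding layer
lookup = embedding(token_ids)

# (2) Conceptual equivalent: one-hot vectors times the weight matrix,
#     i.e. a fully-connected layer without bias applied to sparse inputs
onehot = F.one_hot(token_ids, num_classes=vocab_size).float()
matmul = onehot @ embedding.weight

print(torch.allclose(lookup, matmul))  # True
```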
@dankal444 8 months ago
It is not random. Its purpose is to represent words as vectors in such a way that similar words result in similar vectors in the vector space (and dissimilar words in dissimilar vectors). You can search Google for `word2vec` and how to train it; that's a very common way to "vectorize" words.
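A minimal sketch of training such word vectors with gensim's Word2Vec (toy corpus of my own; argument names follow gensim 4.x, where `size` was renamed to `vector_size`):
```python
from gensim.models import Word2Vec

# Toy corpus: each "document" is a list of tokens
corpus = [
    ["the", "sun", "is", "shining"],
    ["the", "weather", "is", "sweet"],
    ["the", "sun", "is", "bright"],
]

# vector_size = dimensionality of the learned word vectors
model = Word2Vec(sentences=corpus, vector_size=16, window=2, min_count=1, seed=1)

print(model.wv["sun"].shape)         # (16,)
print(model.wv.most_similar("sun"))  # nearest words by vector similarity
```
Such pretrained vectors could also be copied into an Embedding layer's weight matrix as a starting point, rather than training the embedding from random initialization as described in the reply above.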
@КонстантинДемьянов-л2п 2 years ago
Excellent explanation, my man.
@SaschaRobitzki 10 months ago
9:00 Why are the one-hot vectors for "the", "sun", and "shining" one-indexed, while the one for "the" is zero-indexed? Just a mistake?
@DanielTobi00 7 months ago
Definitely a mistake; he pointed them out.
@SaimonThapa 2 years ago
I see a Bob Marley reference there!
@SebastianRaschka 2 years ago
Haha, you are the first and only one who noticed :)
@SaimonThapa 2 years ago
Hehe, I was singing along when I saw that. And thank you very much for this informative video!
@SebastianRaschka 2 years ago
@SaimonThapa Glad you were having fun ^^