What are Embedding Layers in Keras (11.5)

35,125 views

Jeff Heaton


Comments: 42
@raulsaenz6177 5 years ago
Great video. After going through several explanations and videos, yours is the clearest and I finally understand the use of the Embedding layer. Thank you.
@giovannimeono8802 2 years ago
I agree with this comment. This video is the clearest explanation for embeddings I've been able to find.
@suryagaur7440 5 years ago
I don't have words to explain how great this series is!! Speechless!!
@WisamMechano 4 years ago
This was a very helpful video; most vids focus on the use case rather than what the embedding actually is. You nailed it with a very detailed explanation. Thank you.
@nitroflap 4 years ago
The best explanation of embeddings in TensorFlow that I've ever seen.
@himanshutanwani_ 4 years ago
At 12:00, instead of one_hot, can we use tf.keras.preprocessing.text.Tokenizer with its fit_on_texts method? Please correct me if I am wrong.
@drstoneftw6084 4 years ago
my exact same thought
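That should work; unlike one_hot, which hashes words (so collisions are possible), Tokenizer builds an explicit word-to-index vocabulary. A minimal sketch, with toy sentences made up for illustration:
```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

docs = ['nice work', 'great effort', 'poor job']  # toy corpus

tokenizer = Tokenizer(num_words=50)        # cap the vocabulary size
tokenizer.fit_on_texts(docs)               # build the word -> index mapping
seqs = tokenizer.texts_to_sequences(docs)  # -> [[1, 2], [3, 4], [5, 6]]
padded = pad_sequences(seqs, maxlen=4)     # fixed length for the Embedding layer
print(padded)
```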
@AlexeyMatushevsky 3 years ago
The discovery of the year! Thank you for your lectures!
@HeatonResearch 3 years ago
You're very welcome!
@FiveJungYetNoSmite 2 years ago
Good video. I would have liked to see a single sentence inputted into the model at the end, to show how to evaluate single inputs.
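Scoring one new sentence would look roughly like this (a sketch; `model`, `tokenizer`, and `max_length` are assumed to come from the training code, and the encoding must match whatever the model was trained with, e.g. the one_hot scheme from the video):
```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Hypothetical single-input prediction; `tokenizer` and `max_length`
# must match what the model was trained with.
seq = tokenizer.texts_to_sequences(['great effort'])  # words -> integer indices
padded = pad_sequences(seq, maxlen=max_length)        # shape (1, max_length)
print(model.predict(padded))                          # e.g. [[0.93]] for a binary head
```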
@ashishpatil1716 4 years ago
Best explanation of embedding layers ever!
@coobit 4 years ago
I can't get it. At 6:33 the input vector is [1, 2] and the output is two rows of the lookup table, but no row is multiplied by 2. How is this possible? And at 9:47, why is the input [[0, 1]] and the output two rows of the lookup table? I mean, why is the input like this? The dimensions of the input and the lookup matrix do not match, so the multiplication would be meaningless. Or am I missing something?
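To answer directly: the Embedding layer never multiplies anything. It treats each integer in the input as a row index into its weight table, so [1, 2] returns rows 1 and 2, and [[0, 1]] is a batch of one sample holding the indices 0 and 1 (not a one-hot vector), which returns rows 0 and 1. A small sketch to verify:
```python
import numpy as np
from tensorflow.keras.layers import Embedding
from tensorflow.keras.models import Sequential

# The Embedding layer is a lookup table, not a matrix multiplication.
model = Sequential([Embedding(input_dim=10, output_dim=4, input_length=2)])
table = model.layers[0].get_weights()[0]  # the 10 x 4 lookup table

out = model.predict(np.array([[1, 2]]))   # indices, not values to multiply
print(np.allclose(out[0][0], table[1]))   # True: row 1 of the table
print(np.allclose(out[0][1], table[2]))   # True: row 2 of the table
```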
@alexanderk5835 3 years ago
Really good video, very digestible. Thank you Jeff!
@HeatonResearch 3 years ago
Thanks! Glad it was helpful.
@sambitmukherjee1713 4 years ago
Great explanation, Jeff.
@SatyaBhambhani 2 years ago
This was awesome! I am hunting down videos for multinomial text classification, and this helped shed light on when to use an embedding, why, and how, and also on the production phase for a corpus. Exactly what I was looking for!
@amitraichowdhury8148 3 years ago
Amazing video, beautifully explained! This is exactly what I was looking for to understand the Embedding layer. Great work! Please keep uploading more videos :)
@HeatonResearch 3 years ago
Awesome, thank you! Subscribe so you do not miss any :-)
@beizhou2488 5 years ago
We already have the word2vec model that can map words to vectors. I am wondering why we need to build the word embedding layer ourselves, since an Embedding layer and a word2vec model do exactly the same thing, and word2vec models come well trained.
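Both are common in practice: a freshly trained Embedding layer learns task-specific vectors for your exact vocabulary, while word2vec vectors are general-purpose. You can also combine the two by initializing the layer from pretrained vectors; a sketch, where `pretrained_matrix` is an assumed (vocab_size, dim) array built from a word2vec model and aligned to your tokenizer's indices:
```python
import numpy as np
from tensorflow.keras.initializers import Constant
from tensorflow.keras.layers import Embedding

vocab_size, dim = 10000, 300
pretrained_matrix = np.random.rand(vocab_size, dim)  # placeholder; row i = word2vec vector for word i

layer = Embedding(vocab_size, dim,
                  embeddings_initializer=Constant(pretrained_matrix),
                  trainable=False)  # freeze the word2vec vectors, or True to fine-tune
```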
@RH-mk3rp 2 years ago
An explanation of gradient descent, and how the loss gradients are propagated back to the embedding layer, would be nice.
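In short, the lookup table is trained like any other weight matrix; the only twist is that a given batch only touches the rows it looked up, so the gradient is sparse. A quick sketch showing TensorFlow report exactly that:
```python
import tensorflow as tf

emb = tf.keras.layers.Embedding(10, 4)
x = tf.constant([[1, 2]])

with tf.GradientTape() as tape:
    y = emb(x)                    # look up rows 1 and 2
    loss = tf.reduce_sum(y ** 2)  # stand-in for a real loss

grads = tape.gradient(loss, emb.trainable_variables)[0]
print(type(grads).__name__)       # IndexedSlices: only rows 1 and 2 get updates
```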
@netfission 4 years ago
Professionally done! Good job!
@blasttrash 2 years ago
Now how do you do find_similar using those embedding layer weights?
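There is no built-in find_similar, but cosine similarity over the weight matrix gets you there. A sketch, assuming `model` has the Embedding as its first layer and `word_index` maps words to the training indices (e.g. tokenizer.word_index):
```python
import numpy as np

E = model.layers[0].get_weights()[0]  # (vocab_size, dim) weight matrix
E_norm = E / (np.linalg.norm(E, axis=1, keepdims=True) + 1e-9)
index_word = {i: w for w, i in word_index.items()}

def find_similar(word, k=5):
    sims = E_norm @ E_norm[word_index[word]]  # cosine similarity to every row
    best = np.argsort(-sims)[1:k + 1]         # skip the query word itself
    return [(index_word.get(i, '?'), float(sims[i])) for i in best]

print(find_similar('good'))                   # hypothetical query word
```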
@davidporterrealestate 2 years ago
This was great, especially the 2nd half.
@stackexchange7353 4 years ago
Question: how could you use model persistence for subtasks when using two different datasets? I created a copy of the original and substituted 3 labels in my target column with another label. For instance, I have an NLP multi-class problem where I need to classify x as one of 4 different labels: 1, 2, 3, or 4. Labels 1, 2, and 3 are related, so they can be substituted with 5, which makes it a binary classification problem. Now I only need to differentiate between 4 and 5, but I'm still left with the classification between 1, 2, and 3, and I'm not sure how to use the initial (4 vs. 5) binary classification to help the second model. I can't find any information on whether scikit-learn allows this like Keras does. Thanks for any suggestions.
@mukherjisandeep 2 years ago
Thank you for the great explanation! Further, I wanted to understand: is there a way to look up the embedding for each word in the corpus?
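For reference, each word's vector is just one row of the trained weight matrix. A sketch, assuming the Embedding is the model's first layer and a fitted Tokenizer supplies word_index:
```python
# Each word's learned embedding is one row of the weight matrix.
E = model.layers[0].get_weights()[0]    # shape (vocab_size, dim)
for word, i in tokenizer.word_index.items():
    if i < E.shape[0]:                  # skip indices past input_dim, which were never used
        print(word, E[i])
```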
@guzu672 3 years ago
Finally! My struggle ended 😁👍
@ankitmaheshwari7310 4 years ago
Expecting more information
@sebastian81ism 4 years ago
Awesome explanation!
@HeatonResearch 4 years ago
Thanks!
@mohajeramir 4 years ago
This was very helpful. Thank you
@HeatonResearch 4 years ago
Glad it was helpful!
@beizhou2488 5 years ago
Hi, will we learn the attention model in the near future? Like LSTM and attention.
@HeatonResearch 5 years ago
Attention, not currently, but I may do a related video on it outside the course.
@beizhou2488 5 years ago
@@HeatonResearch Great. Thank you so much. Look forward to that tutorial.
@tonycardinal413 3 years ago
Thank you sooo much. Washington U must be an awesome college. If you write model.add(Embedding(10, 4, input_length=2)), is the number of neurons in the embedding layer 10, 4, or 2? Also, is the embedding layer the same as the input layer? Thanks so much!
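In that call, none of the numbers is a conventional neuron count: 10 is input_dim (the vocabulary size), 4 is output_dim (the length of each word's vector), and 2 is input_length (words per input sample). The layer sits right after the input and just stores a 10 x 4 lookup table. A quick sketch:
```python
from tensorflow.keras.layers import Embedding
from tensorflow.keras.models import Sequential

# Embedding(10, 4, input_length=2):
#   10 = input_dim     -> vocabulary size (valid indices 0..9)
#    4 = output_dim    -> length of each word's vector
#    2 = input_length  -> number of words per sample
model = Sequential([Embedding(10, 4, input_length=2)])
model.summary()                                # 40 trainable weights: the 10 x 4 table
print(model.layers[0].get_weights()[0].shape)  # (10, 4)
```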
@suryagaur7440 5 years ago
While creating the Embedding layer, input_dim is the number of unique words in the vocabulary, which is 2 since input_data = np.array([1, 2]). So why do we set it to 10?
@sachink7955 4 years ago
10 is the number of unique words we have.
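More precisely, input_dim just has to be larger than the biggest index you will ever feed in (valid indices run 0 to input_dim - 1); 10 in the video simply leaves headroom for a bigger vocabulary. A tiny sketch:
```python
import numpy as np
from tensorflow.keras.layers import Embedding

emb = Embedding(input_dim=3, output_dim=4)  # enough for indices 0, 1, 2
print(emb(np.array([[1, 2]])).shape)        # (1, 2, 4)
```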
@apratimgholap2930 4 years ago
You mention it's dimensionality reduction, but then walk it back and say "not exactly". Can you elaborate?
@sanjaykrish8719 5 years ago
Awesome, love it!
@ramonolivier57 4 years ago
Good video, and your simple coding examples are excellent (because I can replicate them and try them out). However, your explanation (narration) in the last 4 or so minutes gets compressed: you speak very fast and scroll very fast, including some scrolling that basically happens off-screen. Thanks for the lesson!