Great video. After going through several explanations and videos, yours is the clearest and I finally understand the use of the Embedding layer. Thank you.
@giovannimeono8802 · 2 years ago
I agree with this comment. This video is the clearest explanation for embeddings I've been able to find.
@suryagaur7440 · 5 years ago
Don't have words to explain how great this series is! Speechless!
@WisamMechano · 4 years ago
This was a very helpful video; most vids focus on the use case rather than on what the embedding actually is. You nailed it with a very thorough explanation. Thank you
@nitroflap · 4 years ago
The best explanation of Embeddings in TensorFlow that I've ever seen.
@himanshutanwani_ · 4 years ago
At 12:00, instead of one_hot, can we use tf.keras.preprocessing.text.Tokenizer and its fit_on_texts method? Please correct me if I'm wrong.
@drstoneftw6084 · 4 years ago
My exact same thought.
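For what it's worth, a minimal sketch of that alternative, using a made-up corpus. Unlike one_hot, which hashes words (and can collide), Tokenizer builds an explicit, consistent word-to-index map:

```python
# Sketch: Tokenizer as an alternative to one_hot. The corpus is a toy example.
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

corpus = ["nice work", "great job", "poor effort", "not good"]

tokenizer = Tokenizer(num_words=50)               # cap the vocabulary size
tokenizer.fit_on_texts(corpus)                    # learn the word -> index map
sequences = tokenizer.texts_to_sequences(corpus)  # lists of integer indices
padded = pad_sequences(sequences, maxlen=4)       # uniform length for Embedding

print(tokenizer.word_index)
print(padded)
```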
@AlexeyMatushevsky · 3 years ago
The discovery of the year! Thank you for your lectures!
@HeatonResearch · 3 years ago
You're very welcome!
@FiveJungYetNoSmite · 2 years ago
Good video. I would have liked to see a single sentence fed into the model at the end, to show how to evaluate single inputs.
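A minimal sketch of that last step; `model`, `tokenizer`, and `MAX_LEN` are hypothetical names standing in for whatever the original notebook defined:

```python
# Sketch: evaluating a single sentence with a trained model.
from tensorflow.keras.preprocessing.sequence import pad_sequences

sentence = "this product works great"
seq = tokenizer.texts_to_sequences([sentence])  # note the list: a batch of one
seq = pad_sequences(seq, maxlen=MAX_LEN)        # pad to the training length
prediction = model.predict(seq)                 # shape (1, num_outputs)
print(prediction[0])
```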
@ashishpatil1716 · 4 years ago
Best explanation of embedding layers ever!
@coobit · 4 years ago
I can't get it... At 6:33 the input vector is [1,2] and the output is 2 rows of the lookup table, but no row is multiplied by 2. How is this possible? At 9:47, why is the input [[0,1]] and the output 2 rows of the lookup table? I mean, why is the input like this? The dimensions of the input and the lookup matrix don't match, so the multiplication is meaningless. Or am I missing something?
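The resolution is that an Embedding layer does no multiplication at all: the inputs are row indices, and the layer simply returns the corresponding rows. A small sketch demonstrating this:

```python
# Sketch: an Embedding layer is a row lookup, not a matrix multiplication.
# The inputs 1 and 2 are row *indices*, so nothing gets multiplied by 2.
import numpy as np
import tensorflow as tf

embedding = tf.keras.layers.Embedding(input_dim=10, output_dim=4)
output = embedding(np.array([[1, 2]]))       # shape (1, 2, 4); also builds the layer
table = embedding.get_weights()[0]           # the lookup table, shape (10, 4)

print(np.allclose(output[0, 0], table[1]))   # True: output is row 1, verbatim
print(np.allclose(output[0, 1], table[2]))   # True: output is row 2, verbatim
```

The lookup is mathematically equivalent to multiplying a one-hot row vector by the table, which is why the shapes never have to match the way they would in a dense layer; the [[0,1]] at 9:47 is likewise a batch of two indices selecting rows 0 and 1.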
@alexanderk5835 · 3 years ago
Really good video, very digestible. Thank you Jeff!
@HeatonResearch · 3 years ago
Thanks! Glad it was helpful.
@sambitmukherjee1713 · 4 years ago
Great explanation, Jeff.
@SatyaBhambhani · 2 years ago
This was awesome! I am hunting down videos for multinomial text classification, and this helped shed light on when to use embeddings, why, and how, and also on the production phase for corpora. Exactly what I was looking for!
@amitraichowdhury8148 · 3 years ago
Amazing video, beautifully explained! This is exactly what I was looking for to understand the Embedding layer. Great work! Please keep uploading more videos :)
@HeatonResearch · 3 years ago
Awesome, thank you! Subscribe so you do not miss any :-)
@beizhou2488 · 5 years ago
We already have the word2vec model that can map words to vectors. I am wondering why we need to build the word Embedding layer ourselves? The Embedding layer and the word2vec model do exactly the same thing, and word2vec models are already well trained.
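The two approaches can also be combined: a pretrained word2vec (or GloVe) matrix can seed a Keras Embedding layer, which is then frozen or fine-tuned on the task. A minimal sketch, where `pretrained` (a word-to-vector mapping) and `word_index` (from a fitted Tokenizer) are assumed to exist:

```python
# Sketch: seeding a Keras Embedding layer with pretrained vectors instead of
# training from scratch. Words missing from `pretrained` stay as zero vectors.
import numpy as np
import tensorflow as tf

EMBED_DIM = 100
vocab_size = len(word_index) + 1                  # +1: index 0 is reserved
matrix = np.zeros((vocab_size, EMBED_DIM))
for word, i in word_index.items():
    if word in pretrained:
        matrix[i] = pretrained[word]

layer = tf.keras.layers.Embedding(
    vocab_size, EMBED_DIM,
    embeddings_initializer=tf.keras.initializers.Constant(matrix),
    trainable=False)                              # freeze, or True to fine-tune
```

Training from scratch still matters when the task's vocabulary or word usage differs from the general-purpose corpus word2vec was trained on.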
@RH-mk3rp · 2 years ago
An explanation of gradient descent and how the loss gradients are propagated back to the embedding layer would be nice
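A small sketch of that, assuming a plain GradientTape is enough to see it: only the rows of the embedding table that were actually looked up receive a nonzero gradient.

```python
# Sketch: watching loss gradients reach the embedding table. TensorFlow
# returns the gradient as a sparse IndexedSlices; densified here for printing.
import numpy as np
import tensorflow as tf

emb = tf.keras.layers.Embedding(10, 4)
x = np.array([[1, 2]])

with tf.GradientTape() as tape:
    out = emb(x)
    loss = tf.reduce_sum(out ** 2)            # toy loss for illustration

grads = tape.gradient(loss, emb.trainable_variables)
dense = tf.convert_to_tensor(grads[0])        # (10, 4); rows 1 and 2 nonzero
print(dense.numpy().round(3))
```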
@netfission · 4 years ago
Professionally done! Good job!
@blasttrash · 2 years ago
Now, how would one do a find_similar lookup using those embedding layer weights?
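One hedged sketch of such a helper, using cosine similarity over the rows of the trained weight matrix; `embedding_layer` and `word_index` are assumed from earlier code, and the query word will come back as its own best match:

```python
# Sketch: a homemade find_similar over trained embedding weights.
import numpy as np

weights = embedding_layer.get_weights()[0]    # shape (vocab_size, dims)
index_word = {i: w for w, i in word_index.items()}

def find_similar(word, top_n=5):
    v = weights[word_index[word]]
    norms = np.linalg.norm(weights, axis=1) * np.linalg.norm(v) + 1e-9
    sims = (weights @ v) / norms              # cosine similarity per row
    best = np.argsort(-sims)[:top_n]
    return [(index_word.get(i, "?"), float(sims[i])) for i in best]

print(find_similar("good"))
```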
@davidporterrealestate · 2 years ago
This was great, esp. the 2nd half
@stackexchange7353 · 4 years ago
Question: how could you use model persistence for sub-tasks when using two different datasets? I created a copy of the original and substituted 3 labels in my target column for another label. For instance, I have an NLP multi-classification problem where I need to classify x as 4 different labels: 1, 2, 3, or 4. Labels 1, 2, and 3 are related, so they can be substituted with 5, making it a binary classification problem. Now I only need to differentiate between 4 and 5, but I'm still left with the classification between 1, 2, and 3, and I'm not sure how to use the initial (4 vs. 5) binary classification to help the second model. I can't find any information on whether SKLearn allows this like Keras does. Thanks for any suggestions.
@mukherjisandeep · 2 years ago
Thank you for the great explanation! Further, I wanted to understand: is there a way we can look up the embedding for each word in the corpus?
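Assuming the Embedding layer is the model's first layer and a `tokenizer` was fitted on the corpus, one possible way to dump every word's learned vector:

```python
# Sketch: mapping every vocabulary word to its learned embedding row.
weights = model.layers[0].get_weights()[0]    # shape (vocab_size, dims)
word_vectors = {word: weights[idx]
                for word, idx in tokenizer.word_index.items()
                if idx < weights.shape[0]}    # skip words beyond num_words
print(word_vectors["good"])                   # one row of the lookup table
```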
@guzu672 · 3 years ago
Finally! My struggle ended 😁👍
@ankitmaheshwari7310 · 4 years ago
Expecting more information
@sebastian81ism · 4 years ago
Awesome explanation!
@HeatonResearch · 4 years ago
Thanks!
@mohajeramir · 4 years ago
This was very helpful. Thank you
@HeatonResearch · 4 years ago
Glad it was helpful!
@beizhou2488 · 5 years ago
Hi, will we learn the attention model in the near future? Like LSTM and attention.
@HeatonResearch · 5 years ago
Attention, not currently, but I may do a related video on it outside the course.
@beizhou2488 · 5 years ago
@HeatonResearch Great. Thank you so much. Looking forward to that tutorial.
@tonycardinal413 · 3 years ago
Thank you sooo much. Washington U must be an awesome college. If you write model.add(Embedding(10, 4, input_length=2)), is the number of neurons in the embedding layer 10, 4, or 2? Also, is the embedding layer the same as the input layer? Thanks so much!
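A hedged sketch of what those arguments control, using the Keras 2-era API from the video (newer Keras versions drop input_length). None of the three numbers is a neuron count in the Dense-layer sense; the layer is a trainable lookup table of shape (10, 4):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding

model = Sequential()
model.add(Embedding(10, 4, input_length=2))
# 10 = input_dim:    vocabulary size; valid word indices are 0..9
# 4  = output_dim:   each word becomes a 4-number vector
# 2  = input_length: each sample is a sequence of 2 word indices
print(model.output_shape)                     # (None, 2, 4)
```

So it is not the input layer itself; it is the first trainable layer sitting right after the integer-encoded input.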
@suryagaur7440 · 5 years ago
While creating the Embedding layer, input_dim is the number of unique words in the vocabulary, which is 2 since input_data = np.array([1,2]). So why do we set it to 10?
@sachink7955 · 4 years ago
10 is the number of unique words we have.
@apratimgholap2930 · 4 years ago
You mention it's dimensionality reduction, but then walk that back and say "not exactly". Can you elaborate?
@sanjaykrish8719 · 5 years ago
Awesome, love it!
@ramonolivier57 · 4 years ago
Good video, and your simple coding examples are excellent (because I can replicate them and try them out). However, your explanation (narration) in the last four or so minutes gets compressed: you speak very, very fast and scroll very fast, including some scrolling that basically happens off-screen. Thanks for the lesson!