Steps to follow: 1. Sentences 2. One-hot representation (each word becomes an index from the dictionary) 3. One-hot representation → Keras Embedding layer, to form the embedding matrix 4. Embedding matrix
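A minimal sketch of these four steps (assuming TensorFlow 2.x with the legacy keras.preprocessing API used in the video; the sentences and sizes are illustrative):

```python
from tensorflow.keras.preprocessing.text import one_hot
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding

# Step 1: sentences
sentences = ["the glass of milk", "the cup of tea", "I am a good boy"]
voc_size = 10000  # assumed dictionary size
dim = 10          # embedding dimension (number of features per word)

# Step 2: one-hot representation -- each word becomes an index into the dictionary
encoded = [one_hot(s, voc_size) for s in sentences]

# Pad so every sentence has the same length before the Embedding layer
padded = pad_sequences(encoded, maxlen=8, padding="pre")

# Steps 3-4: the Embedding layer maps each index to a dim-sized dense vector
model = Sequential([Embedding(voc_size, dim)])
vectors = model.predict(padded)
print(vectors.shape)  # (3, 8, 10)
```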
@vinitkanse5703 · 4 years ago
How do you decide the vocab_size?
@BhupinderSingh-rg9be · 4 years ago
Sir, please include this part where you write the steps in Notepad in every video; it makes the code much easier to understand.
@mathavraj9378 · 4 years ago
Are the embedding-layer values here formed by word2vec?
@tararawat2955 · 4 years ago
How are dimensions decided for textual data? In your case, what features have we considered for each word? I mean, on what basis can we decide the dimension?
@DineshBabu-gn8cm · 3 years ago
@@vinitkanse5703 Please explain what a vocabulary is.
@ijeffking · 4 years ago
I cannot thank you enough for this particular video. The length to which you have gone to explain Word Embeddings is highly appreciated. A world of Thanks.
@BrandoMalqui · 9 months ago
Thank you so much. I was having trouble understanding embedding which I need to implement for a model in one of my classes but you have made it very clear and easy to understand.
@arpitbaranwal7236 · 4 years ago
Thanks Krish for the wonderful playlist.
@Gamezone-kq5sx · 2 years ago
Better than Applied AI... really the best video.
@googlecolab9141 · 4 years ago
Better explanation than Stanford's CS224N: NLP with Deep Learning (Winter 2019) course. Thank you, sir.
@spartacuspolok6624 · 3 years ago
Your video helped me a lot to understand it and to start working as a beginner.
@debashisghosh3133 · 3 years ago
Awesome explanation, Krish, hats off... thanks a ton.
@shaiksuleman3191 · 4 years ago
You and Codebasics are the two eyes of teaching. There are so many doctors, but only a few can give an injection without pain.
@gauravsahani2499 · 4 years ago
Thank you so much, Krish sir, for this wonderful playlist! Learned a lot!
@vivekbhat720 · 3 years ago
Thanks, bro, for putting such great effort into teaching.
@vibivij1948 · 4 years ago
Nice video, Krish...
@sir-lordwiafe9928 · 3 years ago
Thank you Sir. Very very helpful.
@affandibenardi548 · 4 years ago
It makes sense and is simple to understand, thanks bro.
@enricowijaya8668 · 1 year ago
AMAZING EXPLANATION, thank youu!!
@bhushanchaudhari378 · 4 years ago
Mind-blowing explanation 🤙
@louerleseigneur4532 · 3 years ago
Thanks Krish
@radhikapatil8003 · 4 years ago
Hi sir, please suggest a face-recognition CNN model that is comparable with mobile face recognition.
@maralazizi · 2 years ago
Thank you, it was a great explanation!
@praneethaluru2601 · 4 years ago
Literally my doubt got clarified...
@arjitdabral8449 · 4 years ago
Sir, please explain: 1) What were the features here according to which the vectors were created? 2) Where do the features come from? Can we use our own features?
@nobalg3482 · 4 years ago
@Krish Naik, would love to have an answer to this. Please can you explain this as well?
@tararawat2955 · 4 years ago
Even I have the same question. Kindly answer, @Krish.
@hang1445 · 2 years ago
@Krish Naik I would like to ask the same question
@RitikSingh-ub7kc · 4 years ago
Krish, can you explain some applications of NLP using LSTMs, like next-word prediction, translation, and image captioning?
@kmnm9463 · 3 years ago
Hi Krish, great video on word embedding. At 03:10, I think it is not one_hot that passes the values [2000, 4000, 5500] to the Embedding layer; this is done by the Tokenizer class from the same Keras. Tokenizer creates a dictionary of words and their integer values; the length of the dictionary equals the number of unique words in the corpus. Also, one_hot is not efficient, and since Tokenizer came on the scene, one_hot is seldom used. From user Krish (my name shortened is also Krish :)
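For anyone comparing the two approaches mentioned in this comment, a small sketch (TensorFlow 2.x legacy keras.preprocessing API; the sentences are made up):

```python
from tensorflow.keras.preprocessing.text import Tokenizer, one_hot

sentences = ["the glass of milk", "the glass of juice"]

# one_hot: hashes each word into [1, voc_size); hash collisions are possible
print([one_hot(s, 10000) for s in sentences])

# Tokenizer: builds an explicit word -> integer dictionary from the corpus
tok = Tokenizer()
tok.fit_on_texts(sentences)
print(tok.word_index)  # e.g. {'the': 1, 'glass': 2, 'of': 3, 'milk': 4, 'juice': 5}
print(tok.texts_to_sequences(sentences))  # [[1, 2, 3, 4], [1, 2, 3, 5]]
```

Because Tokenizer's dictionary is explicit and collision-free, its vocabulary length also tells you exactly what `voc_size` should be.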
@sampaths4932 · 2 years ago
Good
@shamussim137 · 4 years ago
This is so good!!! Thanks Krish.
@rishikeshthakur6497 · 4 years ago
Your videos are really great, sir. Hats off to you. Please also make a video on sentence-embedding techniques like InferSent.
@ramonolivier57 · 4 years ago
You speak a little too fast in some sections, but you explain everything very well. I am still missing an understanding of "global average pooling". Thank you!
@EliRahimkhani · 1 year ago
Great video! A couple of questions: how can we see these 10 features for each word? Are the features the same? If not, how are the features selected?
@sowmyakavali2670 · 3 years ago
Krish, how does the word-embedding layer convert a one-hot encoded vector into a fixed-dimensional embedding matrix? How are the 10k values converted into 300 values? Here we assume the one-hot vector is a sparse vector; is the embedding matrix also a sparse matrix, or a dense matrix? @KrishNaik
@User-nq9ee · 3 years ago
Hi, very nice explanation, but we did not mention any features, only the feature size. So how were all those words assigned dimension values, and on what basis?
@infinitum90 · 4 years ago
Krish, can you explain how index 6654 gets converted to a 10-dimensional vector? Exactly what algorithm does Keras use to convert an index into a vector?
@bhavinmoriya9216 · 3 years ago
Thanks for the awesome video. Don't we need to fit before predict? If not, why?
@fidaullah3775 · 4 years ago
Thanks for sharing the video, but please also make a video on LSTM.
@lemoniall6553 · 2 years ago
If we have a sentence "vishy eat bread" and then we vectorize the word "eaat" (a misspelled word), why does fastText see that "eaat" is more similar to "eat"? What is the architecture? Is it possible for fastText to classify words without using skip-gram? Thanks
@snagseeker · 8 months ago
Sir, how is it dividing words into rows and columns? Why don't both the rows and the columns have the same words?
@sandipansarkar9211 · 4 years ago
Thanks Krish once again
@theniyal · 4 years ago
Can't you do the one-hot representation with TensorFlow's tokenizer and sequence functions?
@ManelTIC · 3 years ago
What is the relation between embedding words and the context window?
@cool_videos6016 · 4 years ago
Hi Krish, it was a great video. I am a beginner, and I started looking at some Kaggle projects related to LSTMs. I always had one doubt: when to use a specific layer. In some projects they use two LSTMs, they use dropout with a certain value, and these things differ between projects, and I get confused about how they chose those layers. I would request you to make a video on how we can know when to use a certain layer and why.
@DineshBabu-gn8cm · 3 years ago
I don't understand what vocabulary and vocabulary size are; please, someone explain. Also, what if our word is not in our vocabulary of 10k? Please, someone explain.
@Ankitkumar-qh6tx · 1 year ago
very helpful
@ahmedosama4973 · 4 years ago
Thanks sir. I have one doubt about this: what is the benefit of this word representation? Is it that we can predict sentiment, or what are the advantages?
@kalppanwala6439 · 4 years ago
Me waiting daily for Krish's videos be like me waiting for Money Heist Season 5 (aage kya hoga iss video ke baad) :)
@barax9462 · 3 years ago
Hello, I'm doing a project in which I'm not allowed to use AI libraries, so can you please explain this: if I have an initial weight matrix, the embedding matrix, of size (300) × (|vocab_size| = 4k) filled with random values, and then a one-hotted sentence input of size, say, 3k, e.g. sen = [0, 0, 0, 0, 157, 8, 900, 100], my main question is: how do I multiply/dot-product the embedding matrix with the sentence vector? I'm really confused about this. Should I convert the sentence vector into a matrix of size |sentence_vector| × embedding, or should I just multiply the indices with the embedding matrix?
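On the question above: multiplying a one-hot row vector by the embedding matrix just selects one row of that matrix, so in practice you never materialise the product, you index directly. A library-free numpy sketch (the sizes and word indices are made up):

```python
import numpy as np

vocab_size, dim = 4000, 300
rng = np.random.default_rng(0)
E = rng.normal(size=(vocab_size, dim))   # embedding matrix: one row per word

sentence = [157, 8, 900, 100]            # sentence as word indices

# Explicit way: build a (len(sentence) x vocab_size) one-hot matrix and multiply
one_hot = np.zeros((len(sentence), vocab_size))
one_hot[np.arange(len(sentence)), sentence] = 1.0
via_matmul = one_hot @ E                 # shape (4, 300)

# Practical way: just pick the rows directly -- same result, no huge matrix
via_lookup = E[sentence]

print(np.allclose(via_matmul, via_lookup))  # True
```

This is why the Embedding layer is usually described as a lookup table rather than a matrix multiplication.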
@chandanbp · 4 years ago
How are we deciding the number of features, and what exactly are those features in the given problem?
@krishnaik06 · 4 years ago
It is difficult to interpret..
@chandanbp · 4 years ago
@@krishnaik06 Or can you just give an analogy of what those features could be?
@patrickadjei9676 · 3 years ago
Please explain why your dimension is 10 when your set of features is 8, and why you are using a vocab of 10,000 when your actual vocabulary is far less than 10,000. Please explain.
@pulunghendroprastyo5868 · 3 years ago
Up
@cCcs6 · 1 year ago
Hello Krish, I have one question. I followed your tutorial and created those word embeddings. However, how can I fit them to a model (for example an SVM)? Since they have a 3D shape, the model does not accept them. Thank you in advance, Christ
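One common fix for the 3D-shape problem raised here is to pool the per-word vectors into one fixed-length vector per sentence before handing them to a classical model like an SVM. A sketch using a mean over the word axis (all shapes are illustrative, not from the video):

```python
import numpy as np

# Suppose the Embedding layer produced (n_sentences, sent_length, dim)
embeddings = np.random.rand(500, 8, 10)

# Average over the word axis -> (n_sentences, dim), a 2D matrix an SVM accepts
features = embeddings.mean(axis=1)
print(features.shape)  # (500, 10)
```

The resulting 2D `features` matrix can then be passed to e.g. scikit-learn's `SVC().fit(features, labels)`.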
@muntazirmehdi503 · 3 years ago
How do we set the vocab_size to 10k or any other number?
@varunpusarla · 3 years ago
How do you decide the value for the vocab size?
@abdelhomi836 · 3 years ago
At the very last part of your video, can you do an inverse transform of your input to get semantic predictions instead of the matrix prediction output?
@infinitum90 · 4 years ago
Krish, thank you for the clear visualization, but 300 dimensions is just the parameter; how does the Keras Embedding layer calculate the 300 values of the vector? Please can you explain?
@suvarnadeore8810 · 3 years ago
Thank you sir
@tanmoybhowmick8230 · 4 years ago
Hey Krish, can you show some deployment of an ML model?
@mohamednajiaboo9817 · 3 years ago
Thanks for the video. When we add the embedding, we need to set the feature size; here we set it as 10. So how does Keras know which 10 features need to be selected?
@Trouble.drouble · 4 years ago
Krish, is the dictionary a "bag of words"?
@krishnaik06 · 4 years ago
The dictionary is all the possible words.
@Trouble.drouble · 4 years ago
Then what is this bag of words, ji?
@krishnaik06 · 4 years ago
Check my NLP playlist; I have explained it there.
@Trouble.drouble · 4 years ago
Sure sir 👍 I'll see it, thank you sir.
@siddharthpandit2117 · 4 years ago
Why is the padding not added at the end? What changes if the padding is at the end?
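On the padding question, both options are one keyword away in Keras (TensorFlow 2.x legacy preprocessing API; the index values are made up):

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

seqs = [[5, 7], [3, 9, 2, 6]]  # two sentences as word indices

pre = pad_sequences(seqs, maxlen=4, padding="pre")
post = pad_sequences(seqs, maxlen=4, padding="post")
print(pre.tolist())   # [[0, 0, 5, 7], [3, 9, 2, 6]]
print(post.tolist())  # [[5, 7, 0, 0], [3, 9, 2, 6]]
```

For a plain Embedding layer the choice barely matters, since each index is looked up independently; for RNN/LSTM layers, pre-padding is the common default so the real words sit nearest the final timestep.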
@SB-bu3xt · 4 years ago
Sir, make a video with a step-by-step guide for beginners who want to learn ML.
@RAZZKIRAN · 4 years ago
Actually, one-hot encoding is already a vector representation; does the embedding layer convert it to a vector representation again?
@RAZZKIRAN · 4 years ago
Confused: do we need to convert to a one-hot representation before the embedding layer?
@kmnm9463 · 3 years ago
Hi, I think no. Just use Tokenizer from Keras and pass its output into the Embedding layer; one-hot is not required. This is from a user (my name too is Krish, so don't confuse me with Krish, the presenter and great teacher). :)
@khangamminh512 · 3 years ago
Is the output of this different in nature from word2vec, guys?
@ameerhussain5405 · 4 years ago
Finally I understood the embedding layer!! I had gone through many tutorials but failed when I started implementing, but with this one I came through. Can't thank you enough, Krish!! I would like to add one point to your code if it helps anyone: if we add a Flatten() layer after the embedding layer, then we can add Dense() layers and make predictions, say if we are doing text classification. This model wouldn't do any good in terms of accuracy, but it helped me build an intuition about the shapes of the tensors and what happens to text when it is fed into a deep network, which is much more difficult to visualise than what happens to images fed into a CNN (at least for me 😅).
@SameerSk · 4 years ago
Instead of tf.keras.layers.Flatten(), use tf.keras.layers.GlobalAveragePooling1D().
@suryanarayanan5158 · 3 years ago
amazing
@mohdzaid6533 · 4 years ago
Sir, how can I install the TensorFlow and Keras packages in R?
@rishabhvarshney2234 · 2 years ago
Can you tell me about vocab = 10000? Vocab means the unique words in the whole corpus, right?
@wilman9206 · 4 years ago
Hi sir, I wanted to know how I can search for sentences using word embeddings. Thanks in advance.
@maYYidtS · 4 years ago
Can anyone suggest: what if the text size is more than 8 words? We lose information, right? Is there any other way to overcome this problem?
@kaziasifahmed2443 · 4 years ago
Please, can anyone tell me how the 100,000 parameters are formed? Meaning, what works behind the 100,000 parameters? 10 dims × 10,000 vocab size, but why?
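On the parameter-count question just above: the Embedding layer's only weights are the embedding matrix itself, one trainable dim-sized row per dictionary entry, so the count is voc_size × dim = 10,000 × 10 = 100,000. A quick check (TensorFlow 2.x; sizes as in the video):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding

voc_size, dim = 10000, 10
model = Sequential([Embedding(voc_size, dim)])
model.build(input_shape=(None, 8))

# One trainable dim-sized vector per dictionary entry: 10000 * 10 = 100000
print(model.count_params())  # 100000
```

Note the sentence length (8) does not appear in the count: the same rows are reused for every position.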
@sutopa8377 · 4 years ago
Hi Krish, can you explain the concept of the attention mechanism? Please explain it in general, not only in relation to the encoder-decoder machine-translation application.
@vinitkanse5703 · 4 years ago
How do you decide the value of vocab_size?
@krishnaik06 · 4 years ago
How many important words may be there in a dictionary?
@vinitkanse5703 · 4 years ago
@@krishnaik06 Okay, but how do we differentiate between important words and other words? And I have one more doubt: in the video on word embeddings you said "embeddings are nothing but a feature representation". So while implementing it (tf.keras.layers.Embedding), which features, what type of features, and on what basis are they generated? Like in the video there were gender, age, etc. (300 features). Please look into my doubt. Thanks and regards
@krishnaik06 · 4 years ago
The features are not things we can see as such, but based on the research papers, that is how it is represented.
@nobalg3482 · 4 years ago
@@krishnaik06 Can you please refer to some research papers or make a video? I think what you meant by "we can't see them as such" is that one should think of this as an abstract idea happening behind the scenes; the system is creating and mapping features on its own?
@aartiahlawat8228 · 4 years ago
@@krishnaik06 Yes sir, I am facing the same doubt.
@rog0079 · 4 years ago
So what's the difference between word2vec and this embedding layer provided by Keras? Do they perform the same job?
@noureddineghoggali9995 · 1 year ago
Can you please make some tutorials on how to integrate constraints into machine learning or deep learning? Thank you.
@Maryamkhan-lo1hq · 3 years ago
Great explanation. One question: what if we pass our dataframe, which consists of 500 posts with text labels, instead of sentences; does it work?
@ADESHKUMAR-yz2el · 3 years ago
How can a dictionary have an index? It is traversed by keys; here we don't care about the position of elements, we access elements through keys regardless of their position. Please correct me if I am wrong; the index of a dictionary sounds confusing 😩
@anagham2413 · 4 years ago
How can I convert more than 32 words?
@BharathKResu · 4 years ago
Hey Krish, can you guide us on NER, like resume parsing for example?
@MRaufTabassam · 4 years ago
Does it work the same for Urdu?
@krishnaprasad-un3hy · 4 years ago
Krish, this is a wonderful explanation. I just wanted to know: I have watched your previous three videos on NLP, and I want to learn this technique from scratch. Is that enough, or are there other topics to cover?
@fineescape1257 · 2 years ago
If I watched this video and I don't understand it, does anyone have a suggestion for what I should watch first?
@chaitanyakulkarni6012 · 4 years ago
Please do a video on the installation of TensorFlow for GPU. I have the same laptop as yours and have been facing issues for a long time.
@aaryannakhat1842 · 4 years ago
Which GPU do you have?
@biswajitroy-zp6lk · 4 years ago
How do you judge how many features to take?
@akhilyeduresi8145 · 4 years ago
Hi Krish, where can I find Word2Vec and GloVe implementations done the same way as above?
@pranayghosh1584 · 4 years ago
Why do we need to pad each sentence before embedding?
@VijayKumar-bk5be · 4 years ago
Hey Krish, can you help us with more videos using the system as well (practical)?
@mohsinimam2048 · 1 year ago
Thanks!
@bilalhameed248 · 3 years ago
voc_size=10000, dim=10: you did not explain these two variables clearly. Why do we consider dim=10 and voc_size=10000? Please explain with some logic.
@priyanshramnani1751 · 4 years ago
Thank you! :)
@ravishankar2180 · 3 years ago
Your vocab size is hardly 50, and you have taken it as 10,000?? Strange!