Check out our premium machine learning course with 2 Industry projects: codebasics.io/courses/machine-learning-for-data-science-beginners-to-advanced
@mehmetbakideniz 1 year ago
I started searching for word2vec videos after failing to understand it from Andrew Ng's lessons. This is the single video that actually tells you that word embeddings are a 'side effect' of the training process, and that is how it finally clicked for me. Thank you very much!
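The "side effect" framing can be sketched in a few lines of NumPy (my own illustrative toy, not code from the video): a tiny skip-gram-style network is trained only to predict a neighbouring word, yet the rows of its input-to-hidden weight matrix are exactly what we keep afterwards as the word embeddings. Corpus, sizes, and learning rate here are all made up for illustration.

```python
import numpy as np

# Toy corpus: the network is trained ONLY to predict a neighbouring word.
corpus = "emperor ordered his army to march north".split()
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 4                  # vocabulary size, embedding dimension

# (center, context) training pairs with one word of context on each side
pairs = [(w2i[corpus[i]], w2i[corpus[j]])
         for i in range(len(corpus))
         for j in (i - 1, i + 1) if 0 <= j < len(corpus)]

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (V, D))       # input->hidden weights (the future embeddings)
W2 = rng.normal(0, 0.1, (D, V))       # hidden->output weights

for _ in range(200):                  # a little plain SGD
    for c, ctx in pairs:
        h = W1[c]                                     # hidden layer = embedding lookup
        scores = h @ W2
        p = np.exp(scores - scores.max()); p /= p.sum()   # softmax over vocabulary
        p[ctx] -= 1.0                                 # gradient of cross-entropy wrt scores
        grad_W2 = np.outer(h, p)
        grad_h = W2 @ p
        W2 -= 0.05 * grad_W2
        W1[c] -= 0.05 * grad_h

# The "side effect": rows of W1 are the word vectors we actually wanted.
embeddings = {w: W1[w2i[w]] for w in vocab}
print(embeddings["emperor"].shape)    # (4,)
```

The prediction head (`W2`) is thrown away at the end; only the lookup table `W1` survives as the embedding matrix.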
@anubhavthakur2985 6 months ago
Then you didn't search YouTube enough.
@ahmedgamberli2250 2 years ago
I like how you love your homeland and use it in all examples. Greetings and Love from Azerbaijan.
@sunilgundrai6464 2 years ago
As part of my NLP dissertation, I was looking for some real-world use cases with a clear explanation. I found this super useful; thank you for the great demonstration with so many easy-to-understand examples. You rock with your teaching skills!!
@girmayohannis4659 7 months ago
Nice to meet you here. At which university are you studying for your PhD? Thanks.
@austinburcham4260 2 months ago
This was unbelievably helpful. This filled in the intuition gap behind word2vec that I was so desperately looking for.
@assafbotzer7952 1 year ago
So clear, so eloquent, and so concise. Your content is a gift to this world. Thank you for using your intelligence, diligence, and teaching skills to make a positive mark.
@fidaeharchli4590 8 months ago
I confirm.
@raom2127 3 years ago
Presenting complex subject matter in a simplified way: Dhaval Sir, you are patient, consistent, and organised in your presentation of a subject. Basics, theory, coding, practice, all with great explanation.
@sanjeebkumargouda1471 3 years ago
Great explanation 🙌🙌🙌 After watching many videos on this topic, my understanding is finally crystal clear. You are doing an awesome job, sir.
@codebasics 3 years ago
I appreciate you leaving a comment of appreciation
@mimansamaheshwari4664 2 years ago
One of the best videos on word2vec
@gayathrigirishnair7405 2 years ago
This is the video that finally helped me grasp this concept. Thank You!
@robertcormia7970 9 months ago
This was a useful introduction. I don't have the math chops to understand it fully, but it was useful to hear some of these definitions.
@sumit121285 3 years ago
You are a real teacher... what more can I say? Thank you, sir... thank you so much.
@anonymous-or9pw 10 months ago
He played it really well when he marked male = -1
@yonahcitron226 2 years ago
Incredible content. This guy is one of the best on YouTube.
@codebasics 2 years ago
I appreciate you leaving a comment of appreciation
@Sunilgayakawad 1 year ago
Crystal clear explanation!! Thank you so much, sir.
@vgreddysaragada 1 year ago
Super explanation. Thank you so much.
@SoftRelaxingAndCalmMusicNature 1 year ago
Well done. This is one of the best courses on word2vec so far. I have a master's degree in AI, and even though I never worked professionally in the field, your course brought back a lot of memories, haha. During my master's 15 years ago, I introduced an archaic method for question answering based on Link Grammar, WordNet, VerbNet, and SemNet. At the end of my syntactic analysis, I also discovered that just by using word context it was possible to come up with a vector representation of named entities. The innovation here is the use of a neural network to give a value to the word. This is just brilliant. In my thesis I was already arguing that language is just a code representing a subjective version of one's universe, and that humans and animals communicate using their own codes.
@PremKumar136 2 years ago
Awesome explanation. Crystal Clear.
@kanisrini01 5 months ago
Amazing Video 👏🌟. Thank you so much for the great explanation
@619vijay 4 months ago
Very useful and informative
@farnoushnazary7295 4 months ago
Awesome explanation. Thanks!
@overthrowenjoyer 2 months ago
What a great video and what a great explanation! ❤
@moni1122331 2 years ago
great teacher, great explanation, great presentation, great context
@fidaeharchli4590 8 months ago
Thank you.
@tagoreji2143 2 years ago
Good explanation, sir. Thank you.
@kimdaeeun6683 1 year ago
Easy explanation!! Thanks much 👍👍
@ledinhanhtan 9 months ago
Mind blowing 🤯🤯 Thank you!
@javierlopezcampoy5951 1 year ago
Great explanation! Thank you very much
@notknown9307 3 years ago
Thanks, we are learning a lot from you.
@codebasics 3 years ago
Glad it was helpful!
@notknown9307 3 years ago
@codebasics Waiting for your next upload; you are doing your work very well 👍👍
@dhirajkumarsahu999 3 years ago
Great Visual way of teaching! Thank you so much Sir ❤️
@shivav7379 2 years ago
A very good explanation, really very helpful.
@moutazalreesh1763 1 day ago
Thank you so much for this amazing explanation! I was wondering if it's possible to download the PowerPoint slides used in this course. Could you please let me know where I can access them?
@abir95571 3 years ago
There's a subtle mistake in your CBOW explanation at 8:34. In CBOW the target is always the central word, predicted from the context, i.e. the surrounding words. That means for the substring "Emperor ordered his" and a window size of 3, the target is "ordered" and the features are "Emperor" and "his".
@ashwinshetgaonkar6329 2 years ago
So he explained skip-gram.
@abir95571 2 years ago
@ashwinshetgaonkar6329 Yes.
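The correction in this thread is easy to make concrete with a small sketch (my own illustrative code, not from the video) that generates CBOW training pairs: the features are the surrounding words and the target is always the centre word. A "window size of 3" corresponds to one word of context on each side here, and the window simply shrinks at the sentence boundaries, which is also why the first and last words still get training pairs.

```python
def cbow_pairs(tokens, window=1):
    """Yield (context_words, target_word) pairs; the context shrinks at the edges."""
    pairs = []
    for i, target in enumerate(tokens):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        pairs.append((context, target))
    return pairs

tokens = "Emperor ordered his".split()
for context, target in cbow_pairs(tokens, window=1):
    print(context, "->", target)
# ['ordered'] -> Emperor
# ['Emperor', 'his'] -> ordered
# ['ordered'] -> his
```

Skip-gram is the same idea with the pair direction reversed: the centre word is the input and each context word is a prediction target.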
@BARaaz04 3 years ago
Very good explanation. Thanks.
@namansethi1767 3 years ago
Thank you Sir for this playlist
@neerajashish7042 3 years ago
Approximately how many more videos are going to come in this series beyond the existing ones? By the way, thanks a lot, sir; this is the only playlist on YouTube that has been this knowledgeable for machine learning and deep learning.
@codebasics 3 years ago
There will be at least 5 to 10 videos coming up, and then I will start the project series.
@ashwinivalmiki7636 3 years ago
Hello sir, please make a video on GRE and IELTS preparation; it would be very useful and helpful to students like me who are planning to study for a master's abroad, as your videos are clear and we get motivated. Thank you.
@vishaldas6346 3 years ago
Also, what will your next topic in deep learning be? Is it sequence-to-sequence models?
@ShahabShokouhi 9 months ago
I was watching Andrew Ng's course on sequence models, and his lecture on word2vec is just bullshit. Thank God I found your video; amazing explanation.
@ashokkonatham8857 3 years ago
Wow, very very clear . Thank you 🙏
@codebasics 3 years ago
Glad it was helpful!
@vinaykumardaivajna5260 1 year ago
Great explanation as always
@list10001 2 years ago
Thank you! The explanation was very clear.
@shubhamwaingade4144 2 years ago
Awesome explanation of the concept!
@pradeept328 10 months ago
Great explanation
@vishaldas6346 3 years ago
I think, Dhaval, there is no non-linear activation function between the input layer and the hidden layer. Correct me if I am wrong.
@darshangangurde7855 3 years ago
Thanks a lot... really great. Please complete the playlist ASAP.
@manikant1990 3 years ago
Superbly explained!!
@vikaspatildod 2 years ago
Beautiful video
@minruili4789 3 years ago
Fantastic explanation!
@pretomghosh6231 4 months ago
Are CBOW and skip-gram a kind of encoding + decoding architecture like autoencoders, if I am not wrong?
@notknown9307 3 years ago
Excited 😄
@madhu1987ful 3 years ago
Awesome, man... loved it. Can you please upload a code walkthrough of this concept, with some good projects?
@harshvardhanagrawal 4 months ago
Where do we get the predicted output from? How do we enter it for comparison?
@phil97n 11 months ago
Awesome, thank you.
@taufiqulhaque4987 3 years ago
would you please create a playlist on NLP?
@lohitsalavadhi6912 3 years ago
Finally, a great explanation.
@codebasics 3 years ago
🙏🙏
@trendyjewellery1987 6 months ago
Superb
@wenzhang5879 2 years ago
I think you mean 'side products' rather than 'side effect'?
@BeradinhoYilmaz 1 year ago
Is there a standard, real list of values for every object given here? For example, for cats, tail = 0.2?
@wp1300 11 months ago
7:20 The meaning of a word can be inferred from the surrounding words.
@bibhupadhy4155 2 years ago
Great explanation :) Crisp and to the point; better than a Hrithik Roshan superhero movie's explanation :P :P
@ChaitraC9191 2 years ago
Hello, I have a doubt about this explanation: aren't all the weights going to be the same once our neural network is trained? What I mean is, once we train the network, W^T X is what triggers an output node, so how do we get different weights for every output word?
@cherupawan3777 1 year ago
Did you get an answer to this?
@djelloulbouchiha-cunaamaal7848 2 years ago
We need a course on NLP Transformers.
@thurakyawnyein6113 7 months ago
superb..
@Cooldude5786 1 year ago
The statement "King - man + woman = Queen" is well-known in machine learning. However, when we examine the characteristics of a king, they often include being super rich, having authority, and possibly not having a tail. Yet, there is a contradiction: a lion is also referred to as a king, and it does have a tail. How can a computer differentiate between a human king and an animal king? Doesn't this introduce bias since the training corpus typically associates "king" with humans rather than animals? Just because something appears less frequently or is absent from the corpus doesn't mean it lacks value or significance.
@lisali6205 2 years ago
you are the best
@imanqoly 1 year ago
The deeper you dig into a subject, the greater this tutor gets.
@amanagrawal4198 4 months ago
I think there's a mistake, because in both CBOW and skip-gram the weights that make up the embeddings are the ones between the input and hidden layer, and here for CBOW you mentioned that the weights between the hidden and output layer are the ones considered.
@amanagrawal4198 4 months ago
CBOW Model Architecture Review
The CBOW architecture works by predicting a target word based on the context words around it. Here's a step-by-step explanation of the flow:
Input Layer: This consists of several one-hot encoded vectors corresponding to the context words.
Projection Layer (Hidden Layer): Each one-hot vector is used to retrieve a word embedding from the first weight matrix (input-to-hidden weights). Unlike typical neural networks, there is no activation function here; the embeddings are simply summed or averaged (depending on the implementation) to produce a single dense vector. This vector represents the combined semantic content of the context words.
Output Layer: The averaged embedding vector is then projected to the output layer using a second set of weights (hidden-to-output weights). The output layer is a softmax layer that predicts the probability distribution over the vocabulary for the target word.
Role of Weights in Embedding Formation
Input-to-Hidden Weights: This is essentially the embedding matrix. Each row in this matrix corresponds to the embedding of a word in the vocabulary. When context words are fed into the model, their embeddings are retrieved by indexing this matrix with the one-hot vectors. These embeddings are what you typically extract and use as pre-trained embeddings for other tasks.
Hidden-to-Output Weights: These weights are used to transform the combined embedding from the hidden layer into a prediction for the target word. Each column in this matrix (since it's typically the transpose of the embedding matrix in many implementations) can be seen as a "contextual embedding" of a word when it acts as a target.
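The flow described in this comment can be sketched in NumPy (an illustrative toy with made-up sizes and random weights, not the video's code): one-hot context vectors reduce to row indexing into the embedding matrix, the retrieved embeddings are averaged with no activation, and the hidden-to-output weights feed a softmax over the vocabulary.

```python
import numpy as np

rng = np.random.default_rng(1)
V, D = 10, 4                         # vocabulary size, embedding dimension
W_in = rng.normal(0, 0.1, (V, D))    # input->hidden: the embedding matrix
W_out = rng.normal(0, 0.1, (D, V))   # hidden->output weights

def cbow_forward(context_ids):
    # A one-hot lookup is just row indexing; average the rows, no activation.
    h = W_in[context_ids].mean(axis=0)
    scores = h @ W_out
    p = np.exp(scores - scores.max())
    return p / p.sum()               # softmax distribution over the target word

probs = cbow_forward([2, 5, 7, 9])   # four (hypothetical) context word ids
print(probs.shape)                   # (10,)
```

After training, `W_in[i]` would be kept as the embedding of word `i`, while `W_out` is normally discarded.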
@yasaswinigollapally7603 2 years ago
Sir, your video is awesome 🙌. I have one doubt: what is the main difference between the skip-gram and bag-of-words models?
@prasanth123cet 2 years ago
Will we get nearly identical word vectors from the CBOW and skip-gram methods for a particular word, say 'king'?
@thamizharasim5970 2 years ago
Thanks a lot 😌
@rahulsoni412 3 years ago
Thanks a lot for explaining this using a neural network diagram :)
@codebasics 3 years ago
🙂👍
@rahulsoni412 3 years ago
@codebasics Can you explain how the number of weights is calculated in word embedding? I mean the total number of weights; I was getting confused while calculating it.
@prasannan-robots 3 years ago
Thanks for this awesome tutorial; waiting for the coding part :)
@anpowersoftpowersoft 5 months ago
Amazing
@uwaisahamedimad556 2 years ago
Hi, this is the best explanation of word2vec I've ever seen. I have a question: I have my own corpus and I have built multiple word2vec models. How do I evaluate these models, and how am I going to choose the best one?
@codebasics 2 years ago
One approach is to take a classification (or some other NLP) problem in your domain and build a classification model using your embeddings. You can then check the performance of those models to evaluate how effective the embeddings are.
@uwaisahamedimad556 2 years ago
@codebasics Thanks a lot for the reply. Based on your answer, it seems there is no standard, or at least well-established, evaluation method for the performance of word embeddings.
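Besides the downstream-task check suggested in the reply, a common intrinsic check is to score each model by how well its cosine similarities agree with a small set of human-judged word-pair similarities from your domain. A minimal sketch with made-up toy vectors and judgement scores (any real use would load your trained models and a curated pair list):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_agreement(model, scored_pairs):
    """Fraction of pair-of-pairs whose cosine ordering matches the human ordering."""
    sims = [(cosine(model[a], model[b]), gold) for a, b, gold in scored_pairs]
    good = total = 0
    for i in range(len(sims)):
        for j in range(i + 1, len(sims)):
            (s1, g1), (s2, g2) = sims[i], sims[j]
            if g1 != g2:
                total += 1
                good += (s1 > s2) == (g1 > g2)
    return good / total if total else 0.0

# Toy "model" and illustrative human similarity judgements (numbers invented)
model = {"king":  np.array([1.0, 0.9]), "queen": np.array([0.9, 1.0]),
         "apple": np.array([-1.0, 0.2]), "fruit": np.array([-0.9, 0.3])}
pairs = [("king", "queen", 0.9), ("apple", "fruit", 0.8), ("king", "apple", 0.1)]
print(rank_agreement(model, pairs))   # 1.0
```

The model whose similarities best reproduce the human ranking wins; standard benchmark pair lists exist, but a small domain-specific list is often more informative for a custom corpus.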
@jongcheulkim7284 3 years ago
Thank you.
@lemoniall6553 2 years ago
Is word2vec using dimensionality reduction too?
@anandakhan4160 2 years ago
Sir, how to unzip the JSON file using Git Bash is not clear to me. Please help me. Thanks.
@arjunbali2079 2 years ago
Thanks sir
@kmnm9463 3 years ago
Hi Dhaval, great video on W2V. Could you share the link for the coding part of implementing Word2Vec in Python, please?
@codebasics 3 years ago
Yes, that video is coming up soon. I have not uploaded it yet.
@bii710 2 years ago
That was a great explanation, thanks. I have one question in mind: if all the words in the documents are unique, then how will word2vec find vectors for the last two words? Considering CBOW.
@vitocorleone1991 2 years ago
Brilliant
@akshansh_00 6 months ago
bam! life saver
@umerfarooque6373 8 months ago
How do you evaluate a word2vec model?
@ibrahemnasser2744 2 years ago
What would a mathematician do when they hear you say "a vector is nothing but a set of numbers"?
@sebinsaji9573 3 years ago
Can you talk about cybersecurity scope and skills?
@amanbajaj7591 1 year ago
Where is the neural network link?
@houchj0372 3 years ago
Doesn't CBOW mean Contextual Bag of Words?
@codebasics 3 years ago
Continuous Bag Of Words: analyticsindiamag.com/the-continuous-bag-of-words-cbow-model-in-nlp-hands-on-implementation-with-codes/
@houchj0372 3 years ago
@codebasics You are correct, thank you. By the way, this video is excellent.
@PavanKumar-bk1sz 3 years ago
Can I get admission into a BSc in Data Science after 12th commerce at St Xavier's College Mumbai? I have mathematics as an optional subject. Please tell me; I've been requesting you for 6 months 🙏🙏🙏
@humanardaki7911 2 years ago
working?
@johnnysaikia2439 11 months ago
The king of the jungle has a tail, though.
@amitmishra5474 1 year ago
The lion king has a tail 😅
@mmenjic 3 years ago
3:56 Why do horse and woman have the same gender value to start with????? Then king minus man is gender -2; adding a woman or a horse to that, you get gender -1, which is man or king!?????
@priyeshsrivastava8025 2 years ago
No, it's (-1) - (-1) + (+1) = +1, i.e. queen.
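The arithmetic in this reply can be checked directly with toy (gender, royalty) coordinates in the spirit of the video's example (the exact numbers here are reconstructed for illustration): king - man + woman lands on gender +1 and stays royal, so the nearest word is queen.

```python
import numpy as np

# Toy (gender, royalty) coordinates: male = -1, female = +1, royal = 1
vecs = {"king":  np.array([-1.0, 1.0]),
        "queen": np.array([ 1.0, 1.0]),
        "man":   np.array([-1.0, 0.0]),
        "woman": np.array([ 1.0, 0.0])}

result = vecs["king"] - vecs["man"] + vecs["woman"]   # gender: -1 - (-1) + 1 = +1

# Nearest word by Euclidean distance
closest = min(vecs, key=lambda w: np.linalg.norm(vecs[w] - result))
print(closest)   # queen
```

Real embeddings have hundreds of dimensions and the analogy only holds approximately (usually checked with cosine similarity), but the mechanics are exactly this component-wise arithmetic.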
@pawansinha8442 1 year ago
But in the case of the king of the jungle, that is the lion, he has a tail 😃 just saying...
@RePuLseHQKing 3 years ago
3:35 paygap lmao
@santoshsaklani5019 2 years ago
Kindly make a video on vulnerability prediction using word2vec.
@mubashiraqeel9332 9 months ago
The thing is, all your videos are connected to previous ones, so I'm unable to watch any video straight through; you always make me pause and go watch an earlier video, and that's a real problem. First I was watching the text classification video and you said to watch the BERT video first; then in that video you said to watch word2vec; then you said to watch part 1 first; and now in this video you say to watch the neural network video. Do you really want me to watch a whole video? Because I'm just opening new tabs repeatedly.
@danielbrockerttravel 3 months ago
This headline is a complete lie. That is not how embeddings work, and it's highly unethical.
@greenweed3253 3 months ago
Can you recommend other sources, then?
@danielbrockerttravel 3 months ago
@greenweed3253 StatQuest teaches it more responsibly. He specifically told me he avoided the king - man + woman = queen example because it simply isn't how word vectors function in practice.
@shivangiawasthi9388 1 year ago
Found a better explanation here: kzbin.info/www/bejne/gJ7Ik5SXpaaWgc0