The best... you elucidated this topic with charm!! Thanks, Sujan
@NormalizedNerd · 4 years ago
Glad it helped you... Keep supporting ❤️
@magelauditore333 · 4 years ago
Commenting after the first third of the video. It is really very clear. Please continue this; you will get lots of subs. Keep it up!
@NormalizedNerd · 4 years ago
Thank you so much... it means a lot :D
@TechResearch05 · 4 years ago
Very clear description. I was struggling to understand it, but your video was very simple and provided the required information.
@NormalizedNerd · 4 years ago
I'm glad to hear that. Keep supporting. ❤️
@ZohairAhmed007 · 2 years ago
Thanks, that is one of the best explanations; I understood a lot.
@shivanineeli5392 · 3 years ago
Please continue making NLP videos! We want more and more; if possible, cover all of AI. We would love to hear from you!
@NormalizedNerd · 3 years ago
Keep supporting!
@debjyotibanerjee7750 · 4 years ago
Really good explanation; now I understand the concept completely!
@NormalizedNerd · 4 years ago
Glad this video was helpful. Keep supporting man!
@manikant1990 · 2 years ago
Beautiful Explanation, I love it!! 👍👍
@NormalizedNerd · 2 years ago
Thank you! 😃
@swagatmishra9350 · 4 years ago
Really awesome video... such an easy and clear explanation. Loved it. Please make more videos. Thanks a lot!
@NormalizedNerd · 4 years ago
I'm glad that you loved it. More videos are coming :D
@pratibhagoudar6817 · 3 years ago
Thanks bruh 🤍... it's much clearer than regular classes. #nlp
@NormalizedNerd · 3 years ago
Great to hear that :D
@mastercomputersciencewitha5985 · 3 years ago
Very nice explanation, sir. Thank you so much.
@dipannita7436 · 4 years ago
One of the best explanations.
@NormalizedNerd · 4 years ago
Thanks a lot!
@jjtalks6797 · 2 years ago
Super explanation
@vaidehideshpande1489 · 3 years ago
great explanation!!
@NormalizedNerd · 3 years ago
Thank you!
@rajivs9287 · 3 years ago
awesome video
@NormalizedNerd · 3 years ago
:D
@spartacuspolok6624 · 3 years ago
It was really helpful.
@NormalizedNerd · 3 years ago
Thanks!
@meetsavsani9739 · 4 years ago
Great work buddy!
@NormalizedNerd · 4 years ago
Thanks a lot!
@aniketchhabra8912 · 3 years ago
This is amazing!!
@edwardrouth · 4 years ago
Hi, this might sound a bit naive, but I just want to know how you figured out the parameter you pass to api.load(), i.e. "word2vec-google-news-300". I mean, there must be a list of names you got this from, right? I googled it, but I only found links and it's a bit confusing. Thanks.
@NormalizedNerd · 4 years ago
We were all naive once, so don't worry. I'm using the gensim downloader API, so you can find the correct parameters in its documentation/repo. There you can find a file called list.json: github.com/RaRe-Technologies/gensim-data. You can also find the list of models in the GitHub readme.
@tobiascornille · 4 years ago
You said Skipgram predicts the context words from the target word, but then later you just compute the sigmoid (so not a softmax) to know if one pair of a target word and a context word is correct. I don't really see how this is "predicting" the context words. Is there something else going on? I'm very confused since it seems like every explanation is saying something different...
@NormalizedNerd · 4 years ago
Because we take one context (or random) word at a time and pair it with the target word. If it's a context–target pair, the class is 1; if it's a random–target pair, the class is 0.
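That pair-classification idea can be sketched with toy numbers (the 3-d vectors below are invented for illustration, not trained embeddings):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pair_score(u, v):
    # Probability that (u, v) is a genuine target-context pair:
    # sigmoid of the dot product of the two word vectors.
    return sigmoid(sum(a * b for a, b in zip(u, v)))

target  = [0.9, 0.1, 0.4]    # target word vector (toy values)
context = [0.8, 0.2, 0.5]    # a true context word  -> label 1
random_ = [-0.7, 0.9, -0.3]  # a negative sample    -> label 0

# Training pushes the first score towards 1 and the second towards 0.
print(pair_score(target, context) > pair_score(target, random_))  # True
```

So "predicting" here means scoring each candidate pair independently with a sigmoid, rather than producing a softmax distribution over the whole vocabulary.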
@intelligentinvesto9060 · 3 years ago
What is the loss function used?
@NormalizedNerd · 3 years ago
-log[P(w_context | w_target)], i.e. the negative log-likelihood of the context word given the target word.
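With negative sampling, that negative log-likelihood takes a binary cross-entropy form over the scored pairs. A minimal sketch with toy vectors and a single negative sample:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sgns_loss(v_target, v_context, negatives):
    """Skip-gram negative-sampling loss for one (target, context) pair:
    -log sigmoid(v_c . v_t) - sum over negatives of log sigmoid(-v_n . v_t)
    """
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    loss = -math.log(sigmoid(dot(v_context, v_target)))
    for v_neg in negatives:
        loss -= math.log(sigmoid(-dot(v_neg, v_target)))
    return loss

# Context aligned with the target, negative opposed to it -> small loss.
print(sgns_loss([1.0, 0.0], [1.0, 0.0], [[-1.0, 0.0]]))
```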
@coxixx · 4 years ago
Please make a video about how backpropagation works in skip-gram.
@NormalizedNerd · 4 years ago
Backpropagation in the word2vec model is really hard to explain in a single video; however, I found a great resource to learn about it: www.claudiobellei.com/2018/01/06/backprop-word2vec/ I hope one can understand this article after watching the video.
@rexwan561 · 3 years ago
The intro video reminds me of "it's Wednesday, my dude".
@hemangshrimali6308 · 3 years ago
Nice video
@ccuuttww · 3 years ago
I want to ask a question: are all the word vectors the same length? Because I have an idea: if we use DNA sequences (which of course are not all the same length) instead of words, can we train a model to get better classification results?
@NormalizedNerd · 3 years ago
The length of each word vector is the same, because the idea behind word2vec is to represent every word using a vector of fixed length.
@ccuuttww · 3 years ago
@NormalizedNerd Can we fit DNA sequences into it? I know we can fit images into it.
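One common workaround (an editor's note, not a reply from the thread): break each variable-length DNA sequence into overlapping fixed-length k-mers and treat those as the "words", so the word2vec machinery applies unchanged:

```python
def kmers(seq, k=3):
    # Overlapping substrings of length k: every "word" then has a
    # fixed length even though the sequences themselves do not.
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

print(kmers("ATGCGT"))    # ['ATG', 'TGC', 'GCG', 'CGT']
print(kmers("ATGCGTAA"))  # a longer sequence still yields 3-letter tokens
```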
@md.shafaatjamilrokon8587 · 2 years ago
Thanks man
@ThoTran-oi3xi · 3 years ago
Thank you so much for your video. Could you turn on subtitles for it? I'm not a native English speaker, so I can't hear you clearly, and the video has no subtitles.
@NormalizedNerd · 3 years ago
Currently, I don't have the resources to put subtitles on every video. However, I'll try to do it for some videos.
@r_pydatascience · 3 years ago
Nice video. Does word2vec represent medical vocabulary? I have a medical text corpus of about 100,000 tokens. What do you think I should do?
@magelauditore333 · 4 years ago
Just Awesome.
@NormalizedNerd · 4 years ago
Thanks again!
@krishcp7718 · 3 years ago
Hi, your videos on NLP are great. For most_similar(positive=['boy', 'queen'], negative='girl', topn=1) I am getting [('teenage_girl', 0.35459333658218384)]. What could be happening here? Krish
@NormalizedNerd · 3 years ago
Strange! That shouldn't happen. Please check your code. You can download my notebook and run it; the link is in the description. Edit: Ohh, I get it. You used "girl" instead of ["girl"]. Interesting... I didn't know it behaves like this when just a string is passed :o
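The analogy query itself is just vector arithmetic plus a nearest-neighbour search. A toy sketch with hand-made 2-d vectors (the axes are loosely "gender" and "royalty"; the numbers are invented, not trained):

```python
import math

# Hand-made 2-d embeddings: (gender, royalty). Invented numbers.
vecs = {
    "boy":      [1.0, 0.0],
    "girl":     [-1.0, 0.0],
    "king":     [1.0, 1.0],
    "queen":    [-1.0, 1.0],
    "prince":   [1.0, 0.6],
    "princess": [-1.0, 0.6],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def most_similar(positive, negative, topn=1):
    # query = sum(positive) - sum(negative); the query words themselves
    # are excluded from the candidates, mirroring gensim's behaviour.
    query = [0.0, 0.0]
    for w in positive:
        query = [q + x for q, x in zip(query, vecs[w])]
    for w in negative:
        query = [q - x for q, x in zip(query, vecs[w])]
    candidates = [w for w in vecs if w not in positive and w not in negative]
    candidates.sort(key=lambda w: cosine(query, vecs[w]), reverse=True)
    return candidates[:topn]

print(most_similar(positive=["boy", "queen"], negative=["girl"]))  # ['king']
```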
@debjyotibanerjee7750 · 4 years ago
Bro, just tell me one thing: while creating word vectors, do we need to remove stop words and lemmatize the text? I believe that if we do those preprocessing steps, the word2vec model may not be able to understand the context, and training won't happen properly. If you could say something, it would help me a lot in my project.
@NormalizedNerd · 4 years ago
Great question. TBH it depends on the project you are working on. Google's Word2Vec doesn't implement lemmatization (and removes very few stop words), so if you are planning to use that, don't lemmatize. But if you are going to train your own word2vec, you can do all sorts of preprocessing. A rule of thumb: if your data size is very large, don't lemmatize. For stop words, I'd say remove only the ones that don't change the context very much (like a, an, etc.).
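That selective stop-word removal might look like the sketch below (the drop list here is illustrative, not a standard set):

```python
# Drop only stop words that barely change the context; keep words
# like "not" or "no" that do. The list below is illustrative only.
CONTEXT_NEUTRAL = {"a", "an", "the"}

def preprocess(text):
    # Lowercase, split on whitespace, drop context-neutral stop words.
    # No lemmatization, matching the advice above for large corpora.
    return [tok for tok in text.lower().split() if tok not in CONTEXT_NEUTRAL]

print(preprocess("The model learns a vector for every word"))
# ['model', 'learns', 'vector', 'for', 'every', 'word']
```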
@debjyotibanerjee7750 · 4 years ago
@NormalizedNerd Okay, thank you for the information, bro. Are you on LinkedIn?
@NormalizedNerd · 4 years ago
yeah...here's my profile www.linkedin.com/in/sujandutta99/
@gulsanbor · 4 years ago
Excellent!
@NormalizedNerd · 4 years ago
Thank you...keep supporting
@gulsanbor · 4 years ago
@NormalizedNerd Sure. Connect with me on LinkedIn if possible: www.linkedin.com/in/gulsan19/
@coxixx · 4 years ago
awesome
@NormalizedNerd · 4 years ago
Thank you :)
@Hephasto · 3 years ago
Shouldn't it be "I love making videos" rather than "I love to make videos" to be grammatically correct?