Day 5 - Training Word2Vec From Scratch And AvgWord2Vec In-Depth Intuition | Krish Naik

51,765 views

Krish Naik

1 day ago

Comments: 45
@princejindal3618 • 2 years ago
For the people who are trying to implement the train_test_split and getting an error: when you apply lemmatization, some sentences in the corpus turn into blanks, so X ends up with fewer rows than y. Try running the code below after the lemmatizer code:
[[i, j, k] for i, j, k in zip(list(map(len, corpus)), corpus, messages['message']) if i < 1]  # inspect the blank entries
y = messages[list(map(lambda x: len(x) > 0, corpus))]
y = pd.get_dummies(y['label'])
y = y.iloc[:, 1].values
y.shape
@iamthejims • 2 years ago
Thank you so much. This helped.
@user-9bk • 1 year ago
After the train-test split, when I want to fit X_train, y_train it shows me an error:
from sklearn.naive_bayes import MultinomialNB
model_NB = MultinomialNB()
model_NB.fit(X_train, y_train)
Please help me solve this assignment, and provide the GitHub link.
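A likely cause of the error above (an assumption, since the traceback isn't shown): MultinomialNB only accepts non-negative feature values, while AvgWord2Vec features contain negative numbers. A minimal sketch that swaps in a classifier which handles real-valued embeddings, assuming X_train, X_test, y_train, y_test already exist:

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Tree-based models accept real-valued (including negative) embedding features,
# unlike MultinomialNB, which expects non-negative counts such as bag-of-words.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))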
@Pradoom45 • 5 months ago
Bro, how did you get this, and how did you know how to solve this problem?
@ajayrathore7045 • 3 months ago
@@Pradoom45 Maybe ChatGPT.
@devkumaracharyaiitbombay5341 • 12 days ago
16:27 Yes sir, you are doing a lot of good for people. Thank you, sir. God will bless you for helping others.
@rajujadhav1392 • 2 months ago
You have been doing excellent work helping thousands of students learn advanced technologies. Please keep doing it for the betterment of society.
@nikhilgupta4859 • 2 years ago
Hey Krish, I have been your subscriber for the past 1.5 years, and I feel honoured to tell you that after following you I finally made a job transition to senior data scientist at an MNC six months back. Now I understand the data science project ecosystem in my company. You are one of the contributors to my success. Thanks a ton! I would also like to help other learners, so feel free to tag me with any doubts. I would be more than happy to help.
@klaus_aj5895 • 2 years ago
Hi @nikhil, I have a query: I want to do address abbreviation expansion using this approach. For example, I have the address "123, silver lane St., Nr Mapple Cir." and the expected expanded output is "123, silver lane Street, Near Mapple Circle". Any help would be appreciated. Thanks.
@jayashreepaul3890 • 5 months ago
@@klaus_aj5895 You can use the contractions library. I'm not sure how it holds up with a huge amount of real-time data, but you can look into it.
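For the address case specifically, a plain lookup table may be simpler than any library. A minimal sketch with a hypothetical abbreviation map (the entries and the expand_address helper are illustrative, not from the video):

import re

ABBREVIATIONS = {"St.": "Street", "Nr": "Near", "Cir.": "Circle", "Ave.": "Avenue"}

def expand_address(text):
    # Replace each abbreviation at a word boundary, keeping surrounding punctuation.
    for abbr, full in ABBREVIATIONS.items():
        text = re.sub(rf"\b{re.escape(abbr)}", full, text)
    return text

print(expand_address("123, silver lane St., Nr Mapple Cir."))
# -> 123, silver lane Street, Near Mapple Circle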
@ravichoudhary2365 • 2 years ago
Thank you, Krish, for your amazing video. I have learned a lot from your videos and have been following you for the last 2 years. Thanks for everything.
@datasciencegyan5145 • 2 years ago
You can continue with the quiz; it's really fun, and it shows us how much knowledge we are gaining.
@saimanohar3363 • 2 years ago
Thanks, Krish for providing free sessions. Really appreciate your guidance. 👏
@alankarsharma4550 • 18 days ago
You are perfect!
@mihirparmar9441 • 2 months ago
Thank you so much sir :) !!
@progamer0256 • 2 years ago
Sir, I don't attend your live sessions because of my job, but later I watch every one of your videos to catch each and every word you say.
@bigbossdailydrama • 9 months ago
Thank you Sir 🎉
@kshitijnishant4968 • 6 months ago
There seems to be some issue with the self-trained Word2Vec model: I was not able to convert X to an array and store it in X_new as shown in the video. Any reasons?
@litonpaul6133 • 2 years ago
Hi Krish, please share interview questions on the topic you teach at the end of every session. It would be helpful; day by day, people would become interview-ready. That is the idea.
@pankajkumarbarman765 • 2 years ago
Thank you sir for this amazing session 👌👌👌👌👌👌
@ShubhamKumar-tj5jw • 6 months ago
Thanks Krish
@AnkitSharma-yh3nm • 2 years ago
Awesome Session😊
@WahranRai • 2 years ago
By taking the average, 2 different sentences (input) could have the same AvgWord2vec
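That is true: averaging discards word order, so two different sentences with the same bag of words collapse to one vector. A tiny sketch with made-up 2-dimensional vectors just to illustrate the point:

import numpy as np

vec = {"dog": np.array([1.0, 0.0]),
       "bites": np.array([0.0, 1.0]),
       "man": np.array([2.0, 3.0])}

s1 = ["dog", "bites", "man"]
s2 = ["man", "bites", "dog"]   # different meaning, same bag of words

avg1 = np.mean([vec[w] for w in s1], axis=0)
avg2 = np.mean([vec[w] for w in s2], axis=0)
print(np.allclose(avg1, avg2))  # True: AvgWord2Vec cannot tell them apart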
@kartiksood8105 • 8 months ago
I am getting this error while training my avgWord2Vec model. Any fixes? TypeError: only size-1 arrays can be converted to Python scalars
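One common cause of that TypeError (an assumption, since the full traceback isn't shown) is that some documents end up empty or fully out-of-vocabulary, so the list of per-document vectors becomes ragged and NumPy cannot convert it cleanly. A minimal sketch of an averaging helper that guards against this, assuming a trained gensim model named model and a tokenized corpus named words:

import numpy as np

def avg_word2vec(doc, model):
    # Average only the words the model knows; fall back to a zero vector for
    # documents with no in-vocabulary words so every row has the same shape.
    vecs = [model.wv[w] for w in doc if w in model.wv]
    if not vecs:
        return np.zeros(model.vector_size)
    return np.mean(vecs, axis=0)

X = np.array([avg_word2vec(doc, model) for doc in words])
print(X.shape)  # (n_documents, vector_size)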
@MM-vx8go • 1 year ago
Informative 🎉
@anirudhagrawal5044 • 2 years ago
Krish, I'll be as honest as possible: there is no better educator who teaches data science so well and gives such a profound understanding of the concepts, which isn't available even on paid platforms. Thank you so much for teaching; I am really grateful to you.
@sagarbp-2854 • 8 months ago
Hi Krish, the iNeuron link is showing a 404 error. I wanted to download the resources.
@ratnak1058 • 2 years ago
Thank you sir
@hargovind2776 • 2 years ago
Awesome stuff
@technicaljethya993 • 2 years ago
Thanks 🙏
@mallikamehta3928 • 2 years ago
How do we open an account on GitHub and post our projects?
@sandipansarkar9211 • 2 years ago
Finished watching.
@vijayalaxmimchatter6650 • 2 years ago
Hi Sir, I tried building the model that was part of the assignment, but I was getting an error while splitting the data into train and test. Can you please do it in the next class?
@nishanandal-e4f • 4 months ago
How can we get the data?
@aditya7042 • 5 months ago
TypeError: only length-1 arrays can be converted to Python scalars
The above exception was the direct cause of the following exception:
How do I solve this error when training the machine learning model?
@ratnak1058 • 2 years ago
Sir, please explain interview questions with answers.
@aakashpanda2412 • 2 months ago
Hi Krish, I have one doubt: in your previous NLP video explaining Word2Vec, you said the window size would be the dimension of the word vector, but here you are explicitly providing vector_size. Why?
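For what it's worth, in gensim the two parameters are independent: window is how many neighbouring words on each side serve as context during training, while vector_size is the dimensionality of the learned embeddings. A minimal sketch (parameter values are only illustrative):

from gensim.models import Word2Vec

sentences = [["the", "king", "rules", "the", "kingdom"],
             ["the", "queen", "rules", "the", "kingdom"]]

model = Word2Vec(sentences,
                 vector_size=100,  # length of each word's embedding vector
                 window=5,         # context words considered on each side of the centre word
                 min_count=1)
print(model.wv["king"].shape)  # (100,)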
@venkatasubbareddykachiredd9343 • 2 years ago
Hey Krish, I am not sure if you have clarified this, but I have a question. When we train our own Word2Vec model, do we have to 1) split the corpus even before lemmatizing and generating the Word2Vec model, or 2) include the entire corpus (because lemmatization requires the entire vocabulary) and then split only when training the model? If we follow the first method, will we have an out-of-vocabulary issue?
@saurabharbal2684 • 1 year ago
Hi sir, hats off to you. I am facing errors while implementing Word2Vec and avg_word2vec on the mails dataset. Please help me solve this error.
@user-kz4xe5to1g • 1 year ago
44:05 45:02 lol
@swamiranjit754 • 6 months ago
🤣🤣🤣🤣
@Vansh-v2k • 4 months ago
When we create our own Word2Vec model, after training it you have written "model.wv.index_to_key" or "model.wv['king'].similar" many times, but wv is the variable where we loaded the "word2vec-google-news-300" model. So why is that? Why are we writing wv? In the AvgWord2Vec part you are also using the "wv" variable.
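To clarify the gensim API (independent of the video's exact notebook): every Word2Vec model you train exposes its vectors through the .wv attribute, a KeyedVectors object; that attribute just happens to share a name with the separate wv variable holding the pretrained "word2vec-google-news-300" vectors. A small sketch:

from gensim.models import Word2Vec
import gensim.downloader as api

# Pretrained vectors loaded into a plain variable that is conventionally called wv:
wv = api.load("word2vec-google-news-300")        # a KeyedVectors object

# A self-trained model also exposes KeyedVectors, but via its .wv attribute:
model = Word2Vec([["king", "queen", "man", "woman"]], vector_size=50, min_count=1)
print(model.wv.index_to_key)        # vocabulary of the self-trained model
print(model.wv["king"][:5])         # embedding from the self-trained model
print(wv.most_similar("king")[:3])  # query the pretrained Google News vectors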
@adityanarendra5886 • 2 years ago
When doing the train-test split for AvgWord2Vec at the end of the day-5 notebook, it's showing: Found input variables with inconsistent numbers of samples: [5564, 5572]
My code:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
Please help. Btw, loved the session 👍🏽
@princejindal3618 • 2 years ago
The reason for this is that when you apply lemmatization, some sentences in the corpus turn into blanks. Try running the code below after the lemmatizer code:
[[i, j, k] for i, j, k in zip(list(map(len, corpus)), corpus, messages['message']) if i < 1]  # inspect the blank entries
y = messages[list(map(lambda x: len(x) > 0, corpus))]
y = pd.get_dummies(y['label'])
y = y.iloc[:, 1].values
y.shape
@shubhsharma4016 • 6 months ago
@@princejindal3618 I LOVE YOU BROTHER. I was stuck on this for some time; I knew the solution but couldn't figure out how to find those missing values and remove them. Thank you so much.
Stanford CS229 | Machine Learning | Building Large Language Models (LLMs)
1:44:31
Understanding Word2Vec
17:52
Jordan Boyd-Graber
78K views
Word Embeddings from Scratch | Word2Vec
19:55
Atif Adib
3K views