For the people who are trying to implement the train_test_split and getting an error: the reason is that when you apply lemmatization, some sentences in the corpus turn into blanks. Try running the code below after the lemmatizer code:
[[i,j,k] for i,j,k in zip(list(map(len,corpus)), corpus, messages['message']) if i==0]
y = messages[list(map(lambda x: len(x)>0, corpus))]
corpus = list(filter(lambda x: len(x)>0, corpus))
y = pd.get_dummies(y['label'])
y = y.iloc[:,1].values
y.shape
@iamthejims 2 years ago
Thank you so much. This helped.
@user-9bk a year ago
After train_test_split, when I want to fit X_train, y_train it shows me an error:
from sklearn.naive_bayes import MultinomialNB
model_NB = MultinomialNB()
model_NB.fit(X_train, y_train)
Please help me solve this assignment, and provide the GitHub link.
@Pradoom45 5 months ago
Bro, how did you get this, and how did you know how to solve this problem?
@ajayrathore7045 3 months ago
@Pradoom45 Maybe ChatGPT.
@devkumaracharyaiitbombay5341 12 days ago
16:27 Yes sir, you are doing a lot of good for people. Thank you, sir. God will bless you for helping others.
@rajujadhav1392 2 months ago
You have been doing excellent work by helping thousands of students learn advanced technologies. Please keep doing it for the betterment of society.
@nikhilgupta4859 2 years ago
Hey Krish, I have been your subscriber for the past 1.5 years, and I feel honoured to tell you that after following you I finally made a job transition to senior data scientist at an MNC six months back. Now I understand the data science project ecosystem in my company. You are one of the contributors to my success. Thanks a ton! I would also like to lend a hand to other learners, so learners, feel free to tag me with any doubts. I would be more than happy to help you.
@klaus_aj5895 2 years ago
Hi @nikhil, I have a query: I want to do address abbreviation expansion using this approach. For example, I have the address "123, silver lane St., Nr Mapple Cir." and the expected expanded output is "123, silver lane Street, Near Mapple Circle." Any help would be appreciated. Thanks!
@jayashreepaul3890 5 months ago
@klaus_aj5895 You can use the contractions library. I'm not sure how it holds up with huge amounts of real-time data, but you can search for that as well.
@ravichoudhary2365 2 years ago
Thank you, Krish, for your amazing video. I have learned a lot from your videos and have been following you for the last two years. Thanks for everything!
@datasciencegyan5145 2 years ago
You can continue with the quiz; it's really fun, and it helps us see how much knowledge we are gaining.
@saimanohar3363 2 years ago
Thanks, Krish, for providing free sessions. Really appreciate your guidance. 👏
@alankarsharma4550 18 days ago
you are perfect!
@mihirparmar9441 2 months ago
Thank you so much sir :) !!
@progamer0256 2 years ago
Sir, I can't attend your live sessions because of my job, but later I watch every one of your videos to catch each and every word you say.
@bigbossdailydrama 9 months ago
Thank you Sir 🎉
@kshitijnishant4968 6 months ago
There seems to be some flaw in the self-trained Word2Vec model: I was not able to convert and store X as an array in X_new as shown in the video. Any idea why?
@litonpaul6133 2 years ago
Hi Krish, please share interview questions on the topic you are teaching after completing each session. It would be helpful; day by day, people would become ready for interviews. That is the idea.
@pankajkumarbarman765 2 years ago
Thank you sir for this amazing session 👌👌👌👌👌👌
@ShubhamKumar-tj5jw 6 months ago
Thanks Krish
@AnkitSharma-yh3nm 2 years ago
Awesome Session😊
@WahranRai 2 years ago
By taking the average, two different sentences (inputs) could end up with the same AvgWord2Vec representation.
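This point is easy to demonstrate with numpy; the 2-d "word vectors" below are made up purely for illustration:

```python
import numpy as np

# Hypothetical tiny embedding table (stand-in for a real Word2Vec model)
wv = {
    "good": np.array([1.0, 0.0]),
    "bad":  np.array([0.0, 1.0]),
    "okay": np.array([0.5, 0.5]),
}

def avg_word2vec(words):
    # Average the word vectors of a sentence
    return np.mean([wv[w] for w in words], axis=0)

s1 = avg_word2vec(["good", "bad"])   # averages to (0.5, 0.5)
s2 = avg_word2vec(["okay"])          # also (0.5, 0.5) -- a different sentence
s3 = avg_word2vec(["bad", "good"])   # word order is lost as well
```

So averaging throws away both composition and word order; quite different inputs can collapse to the same representation.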
@kartiksood8105 8 months ago
I am getting this error while training my AvgWord2Vec model. Any fixes? TypeError: only size-1 arrays can be converted to Python scalars
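Without seeing the exact notebook it's hard to be sure, but in this kind of pipeline such TypeErrors often trace back to sentences that are left with no in-vocabulary words after preprocessing, so the per-sentence results are empty or ragged when later converted with np.array(). A defensive sketch (the dict wv is a hypothetical stand-in for a trained model's model.wv):

```python
import numpy as np

DIM = 5  # assumed vector_size used when training the model
# Hypothetical stand-in for a trained model's KeyedVectors (model.wv)
wv = {"free": np.full(DIM, 1.0), "prize": np.full(DIM, 3.0)}

def avg_word2vec(words, dim=DIM):
    # Keep only words the model knows; return a zero vector for
    # sentences with no known words instead of an empty/ragged result
    vecs = [wv[w] for w in words if w in wv]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

# Every row now has the same length, so stacking is safe
X = np.vstack([avg_word2vec(s) for s in
               [["free", "prize"], ["unknownword"]]])
```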
@MM-vx8go a year ago
Informative 🎉
@anirudhagrawal5044 2 years ago
Krish, I'll be as honest as possible: there is no better educator who teaches data science so well and also gives deeper knowledge of the concepts, knowledge that isn't available even on paid platforms. Thank you so much for teaching; I am really grateful to you.
@sagarbp-2854 8 months ago
Hi Krish, the iNeuron link is showing a 404 error. I wanted to download the resources.
@ratnak1058 2 years ago
Thank you sir
@hargovind2776 2 years ago
Awesome stuff
@technicaljethya993 2 years ago
Thanks 🙏
@mallikamehta3928 2 years ago
How do we open an account on GitHub and post our projects?
@sandipansarkar9211 2 years ago
finished watching
@vijayalaxmimchatter6650 2 years ago
Hi sir, I actually tried building the model that was part of the assignment, but I was getting an error while splitting the data into train and test. Can you please do it in the next class?
@nishanandal-e4f 4 months ago
How can we get the data?
@aditya7042 5 months ago
While training the machine learning model I get: TypeError: only length-1 arrays can be converted to Python scalars ("The above exception was the direct cause of the following exception"). How do I solve this error?
@ratnak1058 2 years ago
Sir, please explain interview questions with answers.
@aakashpanda2412 2 months ago
Hi Krish, I have one doubt: in your previous NLP video explaining Word2Vec, you said the window size would be the dimension of the word, but here you are explicitly providing vector_size. Why?
@venkatasubbareddykachiredd9343 2 years ago
Hey Krish, I am not sure if you have clarified this, but I have a question. When we train our own Word2Vec model, do we have to (1) split the corpus even before lemmatizing and generating the Word2Vec model, or (2) include the entire corpus (because lemmatization requires the entire vocabulary) and then split only when training the model? If we follow the first method, will we have an out-of-vocabulary issue?
@saurabharbal2684 a year ago
Hi sir, hats off to you. I am facing errors while implementing Word2Vec and AvgWord2Vec on the mails dataset. Please help me solve this error.
@user-kz4xe5to1g a year ago
44:05 45:02 lol
@swamiranjit754 6 months ago
🤣🤣🤣🤣
@Vansh-v2k 4 months ago
When we create our own Word2Vec model, after training it you have written "model.wv.index_to_key" or "model.wv['king'].similar" many times, but wv is also the variable where we loaded the "word2vec-google-news-300" model. So why is that? Why are we writing wv? In the AvgWord2Vec model you are also using the "wv" variable.
@adityanarendra5886 2 years ago
When doing the train-test split for AvgWord2Vec in the day-5 notebook (the one at the end), it's showing: Found input variables with inconsistent numbers of samples: [5564, 5572]. My code:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
Please help. Btw, loved the session 👍🏽.
@princejindal3618 2 years ago
The reason for this is that when you apply lemmatization, some sentences in the corpus turn into blanks. Try running the code below after the lemmatizer code:
[[i,j,k] for i,j,k in zip(list(map(len,corpus)), corpus, messages['message']) if i==0]
y = messages[list(map(lambda x: len(x)>0, corpus))]
corpus = list(filter(lambda x: len(x)>0, corpus))
y = pd.get_dummies(y['label'])
y = y.iloc[:,1].values
y.shape
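A self-contained sketch of that fix, with a tiny stand-in DataFrame (the names corpus and messages follow the video's notebook; the data here is made up purely for illustration):

```python
import pandas as pd

# Stand-in for the SMS spam data used in the video
messages = pd.DataFrame({
    "label":   ["ham", "spam", "ham", "spam"],
    "message": ["hello there", "win money", "ok", "free prize"],
})
# Pretend lemmatization emptied the third sentence
corpus = ["hello there", "win money", "", "free prize"]

# Inspect which rows became blank after preprocessing
blanks = [[i, j, k] for i, j, k in
          zip(map(len, corpus), corpus, messages["message"]) if i == 0]

# Build y from the rows whose sentence survived, THEN drop the blanks,
# so X and y have the same length for train_test_split
y = messages[list(map(lambda x: len(x) > 0, corpus))]
corpus = list(filter(lambda x: len(x) > 0, corpus))

y = pd.get_dummies(y["label"])
y = y.iloc[:, 1].values   # 1 = spam, 0 = ham
```

After this, len(corpus) and len(y) agree, and the train-test split no longer complains about inconsistent sample counts.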
@shubhsharma4016 6 months ago
@princejindal3618 I LOVE YOU, BROTHER. I was stuck on this for some time; I knew the solution but couldn't figure out how to find those missing values and remove them. Thank you so much!