Your videos just feel so friendly and inclusive, while being really educational. Your way of teaching is great. I thank you sincerely!
@dataschool6 жыл бұрын
Thanks very much for your kind words! You are very welcome!
@RaynerGS4 жыл бұрын
The method he uses to explain every concept is truly didactic. Some teachers use jargon to explain jargon, and in the end you understand nothing; Kevin Markham, however, explains each term precisely without leaning on other terms. I admire the way he teaches. Way to go, and greetings from Brazil.
@dataschool4 жыл бұрын
Thanks very much for your kind words! 🙏
@akshitdayal26893 жыл бұрын
I've followed this series and it has really given me great insight into machine learning, as I've just started learning about it. Thank you so much!
@dataschool3 жыл бұрын
Great to hear! 🙌
@syedasad30472 жыл бұрын
Your way of teaching is absolutely the best. Thanks a lot for your time and effort. May God Bless you.
@dataschool2 жыл бұрын
Thanks very much for your kind words!
@mmpcse5 жыл бұрын
I'm an SAP ABAP engineer trying to integrate Python with ABAP. I've seen a few videos on Python ML, but listening to Kevin reminds me of a Steve Jobs keynote: clear, concise, calm, with rich knowledge embedded throughout. I'll be watching this video multiple times because it has rich practical content, and more importantly, Kevin's way of speaking completely holds your attention 🙂. Keep guiding us 🙏.
@dataschool4 жыл бұрын
Thank you!
@lalithdupathi51748 жыл бұрын
I am an electronics student, but your vigor and teaching skill in ML have really drawn me toward it. Thank you for the great head start you've given me.
@dataschool8 жыл бұрын
You're very welcome! Good luck in your machine learning education.
@jgajul28 жыл бұрын
The best tutorial I have ever watched! Kevin, you have mastered both the art of machine learning and the art of teaching :)
@dataschool8 жыл бұрын
Wow! What a kind compliment... thanks so much!
@nureyna6296 жыл бұрын
This guy is gifted.
@thebanjoranger4 жыл бұрын
I could listen to this voice all day.
@dataschool4 жыл бұрын
Thank you!
@keepfeatherinitbrothaaaa7 жыл бұрын
Holy crap, he can talk at a normal speed! Anyway, this series was great. I can find my way around with Python but I'm a complete beginner to data science and machine learning and I've learned a ton. I will definitely be re-watching this entire series to really grasp the material. Thanks again, keep up the good work.
@dataschool7 жыл бұрын
HA! Yes, that's my normal talking speed :) Glad you liked the series - I appreciate your comment!
@tseringpaljor86798 жыл бұрын
Hands down the best machine learning presentation I've seen thus far. Definitely looking forward to enrolling in your course once I'm done with your other free intro material. I think what sold me is how you've focused ~3 hours on a specific ML approach (supervised learning) to a common domain (text analysis). Other ML intros try to fit classification/regression/clustering all into 3 hours, which becomes too superficial a treatment. Anyway, bravo and keep up the great work!
@dataschool8 жыл бұрын
Wow, thank you so much! What you're describing was exactly my goal with the tutorial, so I'm glad it met your needs! For others who are interested, here's a link to my online course: www.dataschool.io/learn/
@debanitadasgupta7905 жыл бұрын
The BEST ML tutorials I have come across... Thanks a lot... God bless you...
@dataschool5 жыл бұрын
Thanks so much for your kind words!
@payalbhatia52445 жыл бұрын
@Data School, again and again you are the best, Kevin. I was scared of text analytics and web scraping, but you teach in such an intuitive and lucid way. Thanks a ton!
@dataschool5 жыл бұрын
Thanks very much for your kind words!
@okao087 жыл бұрын
I couldn't find any relevant video on YouTube about doing text analysis with machine learning... wow, that was a great video and an eye-opener for machine learning. Thank you so much, Kevin!
@dataschool7 жыл бұрын
You're very welcome! Glad it was helpful to you!
@okao087 жыл бұрын
Hi Kevin... I have several tokenized text files. I want to compare each of these files with another text file and check their similarities and differences. How can I do that using scikit-learn or NLTK?
@taotaotan56715 жыл бұрын
Boy, you made the best tutorial. Talking slowly is magical!
@dataschool5 жыл бұрын
Thank you!
@ibtsamgujjar86977 жыл бұрын
Just want to thank you for the awesome series. I'm new to machine learning, and you are one of my first and favorite teachers on this journey :)
@dataschool7 жыл бұрын
You are very welcome! Good luck on your journey! :)
@lingobol8 жыл бұрын
Wonderful set of videos. I started my ML journey with them. Now I'm going to go deeper and practice more and more. Thanks, Kevin, for the best possible head start. Your fan, a beginner data scientist.
@dataschool8 жыл бұрын
You're very welcome! That's excellent to hear... good luck!
@nehagupta79048 жыл бұрын
You are indeed a "GURU" who can train and share knowledge in the true sense. I'm a non-technical person, but I'm learning Python and scikit-learn for my research, and this video has taken my understanding to a higher level in just 3 hours... THANK YOU VERY MUCH, Kevin!!! Can you please recommend some links where I can learn more about short-text sentiment analysis using machine learning in Python, especially the feature engineering aspect, like using POS tags or word embeddings as features? Thanks again...
@dataschool8 жыл бұрын
You are very welcome! Regarding recommended links, I think this notebook might be helpful to you: nbviewer.jupyter.org/github/skipgram/modern-nlp-in-python/blob/master/executable/Modern_NLP_in_Python.ipynb Good luck!
@zankbennett83408 жыл бұрын
Great video. The problem with the audio is that the channels are the inverse of each other, so on mono devices, where the L and R channels are summed together, they completely nullify the output signal. I don't know of a workaround except to listen on a 2-channel system.
@dataschool8 жыл бұрын
Wow! Thanks for the explanation. How did you figure that out? I spent probably an hour with the A/V people at the conference as they tried to figure out the problem, and they never came up with any clear explanation.
@tompara35 жыл бұрын
If you don't need or care about the stereo effect (which is obviously the case here, since the video is a monologue), "jack normalling" via an audio mixer is the solution. Input: plug either the L or R channel (say L, for example) into the "jack normalling" port of an audio mixer. The mixer's output then carries L on both sides, because the L signal (note: the signal itself) is copied to the R channel on the fly. Vice versa if you use the R channel as input, which puts R on both sides of the output. On playback, on either a mono or stereo device, the L and R channels will then have the same phase and always sound the same. PS: it's strange that the L and R channels are inverses of each other. The only explanation is that the A/V people somehow reversed the polarity of their L and R jacks (assuming professional XLR jacks in this case).
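The cancellation described in these comments is easy to reproduce numerically. A minimal NumPy sketch, using a synthetic 440 Hz tone rather than the actual recording:

```python
import numpy as np

# Synthetic stereo pair: the right channel is the polarity-inverted left channel,
# which is what the commenters diagnosed in this recording
t = np.linspace(0, 1, 8000)
left = np.sin(2 * np.pi * 440 * t)  # 440 Hz tone
right = -left                       # inverted polarity

# A mono device sums (or averages) the channels, so the signal cancels completely
mono = (left + right) / 2
print(np.max(np.abs(mono)))  # 0.0, i.e. silence

# The "jack normalling" fix: copy one channel to both sides
repaired = np.stack([left, left])
```

Summing x with -x cancels exactly, which is why mono listeners heard nothing at all rather than just a quieter signal.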
@tissues24416 жыл бұрын
I cant wait till I have watched enough of your content to start on your courses.
@dataschool6 жыл бұрын
Great! :)
@KurzedMetal6 жыл бұрын
Using the x1.5 speed YT feature is perfect for this video :) I'm halfway through the video so far and I'm enjoying it a lot. Kudos to the presenter!
@dataschool6 жыл бұрын
Glad you are enjoying it! :)
@nureyna6296 жыл бұрын
I did the same from video 1. It took me just 3 days to practice everything, and I really enjoyed the show :)
@AnkitSharma-hk8yq7 жыл бұрын
I am doing a college project on machine learning, and this was very helpful. Thank you!
@dataschool7 жыл бұрын
You're welcome!
@Torakashi6 жыл бұрын
I really enjoy your structured approach to teaching these classes :)
@dataschool6 жыл бұрын
Thanks! You should check out my online course: www.dataschool.io/learn/
@jasonxoc7 жыл бұрын
Anyone having audio issues: the right channel is completely out of phase with the left channel. If you use something like Audio Hijack Pro, you can insert an audio unit between the browser (Safari/Chrome/Firefox) and the output speakers to either duplicate the left channel or flip the right channel. Or use headphones: your brain will sum it just fine, though it will sound left-heavy because of the Haas effect. Using speakers is a sure way to make yourself uncomfortable, and if you don't hear anything at all, it's because your device is mono and summing the two signals leaves almost no wave. (To the venue engineer: don't record in stereo unless you know how to record in phase.)
@dataschool7 жыл бұрын
Thanks for the suggestions and the technical explanation! I talked with the audio engineers at the conference numerous times, and they were never able to explain the source of the problem!
@jasonxoc7 жыл бұрын
Right on, hopefully it helps someone else. It took me a while to figure out how to flip the channel. By the way, your videos are great, man. Thanks so much for them!
@dataschool7 жыл бұрын
You're very welcome! Thanks for your kind comments, and I'm glad you have enjoyed the videos!
@FULLCOUNSEL7 жыл бұрын
Sad that the audio doesn't work... I'm stuck too.
@gtalpc596 жыл бұрын
I have gone through a huge number of machine learning videos and materials, but this is the best: properly paced, easy to follow and learn from, and it takes you inside machine learning. I'm keen to know whether you'll start on deep learning and TensorFlow soon? It would be really helpful for those who are overwhelmed by the amount of material out there. Thanks a lot!!
@dataschool6 жыл бұрын
So glad to hear that my videos have been helpful to you! As far as deep learning, I don't have any upcoming videos or courses planned, but it is certainly under consideration.
@u0000-u2x8 жыл бұрын
This is a great resource. Thank you for sharing
@dataschool8 жыл бұрын
You're very welcome!
@7justfun7 жыл бұрын
Data School, can you point me to a demo or material for hierarchical clustering (preferably agglomerative)? Would CountVectorizer work in such a scenario before we apply k-NN or mean shift?
@anakwesleyan7 жыл бұрын
A great resource indeed. What I find extremely helpful is that it explains the small but critical aspects of the library, e.g. CountVectorizer only takes 1D, what sparse data in scipy looks like, etc.
@socialist_king7 жыл бұрын
THIS is some great stuff... really helpful. I am working on my final-year project on the classification of cattle, and I want to use machine learning (for facial recognition of both pets and livestock).
@dataschool7 жыл бұрын
Very cool project! So glad to hear that the video was helpful to you!
@bennineo63725 жыл бұрын
This is a great, great tutorial and an in-depth explanation of many related topics! Thanks so much!
@dataschool5 жыл бұрын
You're very welcome!
@royxss8 жыл бұрын
This channel is so helpful. It actually helped me a lot during my semesters. Thank you so much (y)
@dataschool8 жыл бұрын
Awesome! You're very welcome!
@sibinh7 жыл бұрын
Thanks, Kevin, for your great presentation as always. It would be great if the presentation included feature selection, e.g. the chi-squared test...
@dataschool7 жыл бұрын
Thanks for the suggestion! I'll consider that for future videos.
@juiguram71778 жыл бұрын
I just love your videos. They are a great help, especially for a non-programmer like me trying to learn data science. They've helped me understand all the concepts clearly in a short time, rather than reading through material. Your videos are my go-to resource for my college work. I'd like to see some content on grid search and pipelines. Also, could you please share your email? I have some more questions.
@dataschool8 жыл бұрын
Thanks for your kind words! I'm glad they have been helpful to you! Regarding grid search, I cover it in video 8 of my scikit-learn series: kzbin.info/www/bejne/faDPkKSFnLeknKM Regarding pipeline, I cover it in modules 4 and 5 of my online course: www.dataschool.io/learn/ (You can also find my email address on this page.) Hope that helps!
@juiguram71778 жыл бұрын
The contact information part doesn't load on my system. Can you please post your email here?
@dataschool8 жыл бұрын
kevin@dataschool.io
@rahulbhatia56575 жыл бұрын
Is it still relevant in 2019? Thanks for letting me know
@dataschool5 жыл бұрын
Absolutely still relevant! However, there are some changes to the scikit-learn API that are useful to know about: www.dataschool.io/how-to-update-your-scikit-learn-code-for-2018/
@donbasti7 жыл бұрын
Great video, and the information was very clearly presented. Good work!
@dataschool7 жыл бұрын
Thanks!
@ujwalsah23044 жыл бұрын
You are awesome Kevin
@anjangurung25386 жыл бұрын
Thank you so much for this video. It cleared up all the doubts I had. Thank you again!
@dataschool6 жыл бұрын
You're very welcome!
@stepheniezzi347 жыл бұрын
To fix the audio issue on iPhone, use headphones and turn off mono audio in Settings (General > Accessibility, then scroll down to Hearing).
@dataschool7 жыл бұрын
Thanks for sharing that solution!
@eddbiddle66045 жыл бұрын
Another fantastic video - thanks Kevin
@dataschool5 жыл бұрын
Thank you!
@gauravmitra36838 жыл бұрын
Another of your fantastic videos.
@dataschool8 жыл бұрын
Thanks for your kind words!
@saurabhsingh8266 жыл бұрын
Excellent video. Thank you so much, Kevin sir, it really helped me a lot.
@dataschool6 жыл бұрын
You're welcome!
@saurabhsingh8266 жыл бұрын
Data School sir, I sent an email a few days back from the ID saurabhs9913@gmail.com. Could you please go through it and let me know?
@donovankeating85777 жыл бұрын
Really good talk. Very easy to follow. Thank you for sharing! :)
@dataschool7 жыл бұрын
You're very welcome!
@christopherteoh30944 жыл бұрын
Hi Kevin, great video content! I just have a question. At 33:23, where you mention the 5 interesting things that were observed, stop words are dropped and not included in the token list. However, during vect.fit(simple_train), the stop_words argument is set to None. Can I presume that there is a standardized set of stop words that CountVectorizer drops, and that the stop_words argument takes in user-specified stop words?
@christopherteoh30944 жыл бұрын
I got the answer towards the end of the video: the word was removed because the default token pattern only matches tokens of 2 or more characters. Thanks!
@omparghale Жыл бұрын
Hey Kevin, firstly, thanks for all the pandas content you've put on your channel; it helped greatly!! I wanted to know whether this scikit-learn PyCon tutorial is still applicable in 2023, or is the syntax today wildly different from what it was back in 2016?
@dataschool Жыл бұрын
Glad to hear the pandas videos have been helpful! Yes, this tutorial is absolutely still relevant; actually, very little of the scikit-learn syntax used in the video has changed.
@md27045 жыл бұрын
Thank you for all your helpful videos. I have a question related to vectorization: at 1:07:36, if we use the words from the test set to fit our model, we could obtain a document-term matrix where some terms have only zero entries. Would that have negative effects on our classifier?
@dataschool4 жыл бұрын
Glad you like the videos! As for your question, I don't completely follow, sorry! I would just say that there is a right way to do it (fit_transform on training set and transform on testing set), and that will give you the most reliable prediction of how your model will perform on out of sample data. Hope that helps!
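The pattern Kevin describes (fit_transform on the training set, transform on the testing set) looks like this in code, with toy documents for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer

train = ["call you later", "free prize call now"]
test = ["call me about the prize"]

vect = CountVectorizer()
X_train = vect.fit_transform(train)  # learn the vocabulary from training data only
X_test = vect.transform(test)        # reuse that vocabulary; no refitting

# Both matrices share the same columns, so a model fit on X_train can score X_test
print(X_train.shape, X_test.shape)
```

Calling fit_transform on the test set instead would produce a matrix with different columns, and the trained model could not use it.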
@sudhiirreddy78687 жыл бұрын
Thanks a Lot for this resource...Hoping to see more videos like this
@dataschool7 жыл бұрын
You're welcome! Glad it was helpful to you.
@im18already7 жыл бұрын
Hi. It was mentioned at 1:06 that X should be 1-dimensional. What if I have 2 columns of text? The 2 columns have a certain relationship, so merging them into a single column is probably not the best way.
@dataschool6 жыл бұрын
Great question! Sometimes, merging the text columns into the same column is the best solution. Other times, you should build separate feature matrices and merge them, either using FeatureUnion or SciPy.
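Both options Kevin mentions can be sketched briefly; the two-column sample data below is made up. The SciPy route stacks two independently vectorized matrices side by side:

```python
import scipy.sparse as sp
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical two-column text dataset
titles = ["free prize inside", "meeting at noon"]
bodies = ["claim your prize now", "see you in room two"]

# Option 1: merge the text, one shared vocabulary
merged = [t + " " + b for t, b in zip(titles, bodies)]
X_merged = CountVectorizer().fit_transform(merged)

# Option 2: separate vocabularies, then stack the matrices side by side
X_both = sp.hstack([CountVectorizer().fit_transform(titles),
                    CountVectorizer().fit_transform(bodies)])

print(X_merged.shape, X_both.shape)  # same number of rows, different columns
```

Option 2 keeps "prize in the title" and "prize in the body" as separate features, which matters when the columns carry different signals.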
@vishwasgarg91868 жыл бұрын
great videos man...I have become your fan
@dataschool8 жыл бұрын
Thanks very much!
@rainerwahnsinn32627 жыл бұрын
I'd like to jump in on the questions around 55:00 and ask: why don't we keep track of the order of the words in a document? Two documents containing the same words can mean really different things, for example "Call me 'Tom'." and "Tom, call me!". Right now those two documents look exactly the same to us when vectorized as in the lecture. I thought maybe we could create a higher-dimensional matrix, represent those word combinations as vectors in space, and then fit a model on that. Would this work?
@dataschool7 жыл бұрын
Great question! We don't keep track of word order in order to simplify the problem, and because we don't believe that word order is useful enough to justify including it. (That would add more "noise" than "signal" to our model, reducing predictive accuracy.) That being said, you can include n-grams in the model, which preserves some amount of word order and can sometimes be helpful.
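A short sketch of the n-gram option Kevin mentions, reusing the commenter's "Tom" example:

```python
from sklearn.feature_extraction.text import CountVectorizer

# With unigrams only, "call me Tom" and "Tom call me" vectorize identically;
# adding bigrams preserves some word order
vect = CountVectorizer(ngram_range=(1, 2))
vect.fit(['call me Tom', 'Tom call me'])
X = vect.transform(['call me Tom', 'Tom call me'])

# The rows now differ in their bigram columns ("me tom" vs "tom call")
print((X[0] != X[1]).nnz > 0)  # True
```

The cost is a much larger vocabulary, which is why n-grams are worth cross-validating rather than adding by default.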
@AshokPatel-qc1hz5 жыл бұрын
To scale down features, should we prefer standardization or normalization, and why? And when should each be used?
@dataschool4 жыл бұрын
It depends on what you mean by those terms, because they are often used interchangeably.
@didierleprince61064 жыл бұрын
Merci 😊
@dataschool4 жыл бұрын
You're welcome!
@deepanshnagaria45796 жыл бұрын
Sir, the video series was a great learning experience. Can you suggest algorithms in descending order of their accuracy for a model that detects emotions in text data?
@dataschool6 жыл бұрын
It is impossible to know what algorithm will work best in advance of trying it out!
@dataschool6 жыл бұрын
I don't have any resources to recommend, I'm sorry!
@karthikudupa54755 жыл бұрын
Thanks a lot Kevin
@dataschool5 жыл бұрын
You're welcome!
@lprevost697 жыл бұрын
Very nice work, Kevin. I suspect I did what a lot of people do: jump into ML without a lot of fundamentals. After doing one of the "hello world" ML tutorials (the iris dataset), I immediately "wired up" my features, which were of course full of text, and crashed my model with string errors. After that crash, your video was my "back to the drawing board" trek to get some fundamentals in place, and I'm now refreshed and ready to try again! Question: my real-world problem is trouble tickets (documents) with a variety of features, including some long text fields (i.e. problem description or action taken, which carry sentiment) and some category fields that resolve to maybe 8 categories. I'm ultimately trying to categorize these tickets into about 5-6 categories (a multi-class classification problem). So, using your ham/spam email example, I have 2-3 long text fields that will need to be vectorized into document-term matrices (probably each with separate vocabularies), plus some categorical feature inputs to the model. And rather than ham/spam, the model needs to predict multiple classes (i.e. 5-6 ticket categories). I'm running into problems where the pandas DataFrame holds all of this but keeps some of it in object columns, which don't directly produce NumPy arrays. Can you make any suggestions on how to approach this? After spending my Saturday and Sunday with your exercise, I think this is how I should approach it: 1) read the data into a pandas DataFrame; 2) CountVectorize the two long text columns into separate document-term matrices (do I then need to join the arrays?); 3) you mentioned that scikit-learn is not clear on whether categorical features have to be binarized; I'll figure that out, and the same goes for the prediction classes; 4) train the model on that. Also, I recall that in your course you mentioned concepts called "feature unions" and "transformers" in response to a question I couldn't hear. You gave some recommendations on using ensemble methods and "transformer features next to one another." This sounds like a clue to my problem. Any recommendations on how to go deeper into that area? Of course, one of my very next steps is to sign up for your course!!
@dataschool7 жыл бұрын
Thanks for the detailed question! I think that for step 2, my default approach would be to combine the text fields together for each ticket before vectorizing, which would result in a single document-term matrix (DTM). In other words, you avoid combining multiple DTMs, which may not provide any additional value over a single DTM. Regarding feature unions, here are some public resources that might be helpful to you: zacstewart.com/2014/08/05/pipelines-of-featureunions-of-pipelines.html scikit-learn.org/stable/auto_examples/hetero_feature_union.html Regarding my course, I think you'd get a lot of value out of it given your goals. More information is here: www.dataschool.io/learn/ Hope that helps, and good luck!
@lprevost697 жыл бұрын
Wow! That is a good point, Kevin. One DTM makes a lot of sense. Would you agree even for the categorical features? In other words, would you just mix the two fields (the messy free-form text and the category field) into the same DTM and let the vectorization do its thing on two columns rather than one? I could see how that would "look" the same to the estimator, since a category is just an extension of the DTM. I have also since found Zac Stewart's good work on feature unions and pipelines and have even talked to him a bit about the approach. It seems he has moved on to using things like the sklearn-pandas library (github.com/paulgb/sklearn-pandas/tree/feature_union_pipe, the PR that uses feature unions and pipelines in the code), which better supports pandas DataFrames. In contemplating your elegantly simple approach of combining, I'm now thinking I over-engineered this. I did end up making it work by building parallel pipelines of features from pandas columns with multiple transformers (CountVectorizer, TfidfTransformer, and LabelBinarizer) and then feature-unioning them before feeding the estimator. This method simplifies the learning and transforming process, but the tradeoff is that it complicates figuring out which features drove the decision logic (i.e. it's hard to get features back out of a complex pipeline of steps). Your approach of combining into one DTM may give me the best of both worlds. Thanks for your help, and I'd appreciate confirmation on putting categorical features into the single DTM.
@dataschool7 жыл бұрын
Thanks for the follow-up! Yes, I would agree that adding the categorical features to the DTM makes sense. However, you may want to append some text to the category names before adding them to the column of free-form text. For example, if the category is color, and possible values are "red" and "blue", you may want to add them to the free-form text as "colorred" and "colorblue". Why do this? Well, it's possible that seeing the word "red" in the text is a good predictor of ticket type A, and seeing the category "red" is a good predictor of ticket type B, and you want the model to be able to learn that information separately. Does that make sense?
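A sketch of the prefixing idea Kevin describes; the column names and sample rows here are hypothetical:

```python
import pandas as pd

# Hypothetical ticket data: free text plus a categorical column
df = pd.DataFrame({'text': ['red light flashing', 'screen went blank'],
                   'color': ['red', 'blue']})

# Prefix the category value so "red the word" and "red the category"
# become distinct tokens in the document-term matrix
df['combined'] = df['text'] + ' color' + df['color']
print(df['combined'].tolist())
# ['red light flashing colorred', 'screen went blank colorblue']
```

The combined column can then be passed to a single CountVectorizer, and the model can weight the word "red" and the category "red" independently.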
@mohammadali88002 жыл бұрын
Good job, wishing you more success!
@dataschool2 жыл бұрын
Thank you!
@laurafernandezbecerra89784 жыл бұрын
Is there any tutorial on analyzing system logs with ML? Thanks in advance!
@prakharsahu94986 жыл бұрын
Great video. I would like to know if you'll be doing videos on tokenizing, stemming, lemmatizing, and other core NLP techniques.
@dataschool6 жыл бұрын
You might be interested in my course, Machine Learning with Text in Python: www.dataschool.io/learn/
@SlesaAdhikari7 жыл бұрын
So very helpful. Thanks Kev!
@dataschool7 жыл бұрын
You're welcome!
@mrfarhadahmadzadegan3 жыл бұрын
Your video was great! I learned a lot. I just have one question: how does the model determine the number of rows and columns of the sparse matrix?
@rahulsripuram81747 жыл бұрын
Awesome, I really liked it. I will do a POC. Please suggest a few datasets other than spam/ham.
@dataschool7 жыл бұрын
There are lots of great datasets here: archive.ics.uci.edu/ml/ www.kaggle.com/datasets Hope that helps!
@charlinhos08248 жыл бұрын
Thanks for sharing, Kevin. Apart from the obvious, I'm also curious about how you use Evernote for your daily lecture tasks; maybe that could be another great video to follow up with...
@dataschool8 жыл бұрын
My Evernote usage is pretty simple... just storing and organizing task lists and links! :)
@23232323rdurian7 жыл бұрын
Thanks for the great tutorial. However, several times I can't see the rightmost part of an instruction, so I can't type it, execute it, and follow along in Python. Very frustrating! For example, at 1:06:25: "from sklearn.cross_validation import train_test_split", but then I can't see the rest of the instruction, so I can't follow the next several minutes of the tutorial in Python. Anyhow, I appreciate your tutorial... thank you!
@dataschool7 жыл бұрын
Sorry to hear! However, all of the code is available in the GitHub repository: github.com/justmarkham/pycon-2016-tutorial Hope that helps!
@ShriSuperman5 жыл бұрын
This is an amazing video... you're really a great teacher. Can I get the whole course's videos, please?
@dataschool5 жыл бұрын
Thanks! The course is available here: www.dataschool.io/learn/
@sosscs5 жыл бұрын
The false positives are printed at 1:38:01.
@cartoonjerk7 жыл бұрын
Once again, thanks a lot for the video; I've been learning a lot from this. Quick question though: can you give the full URL for the one you provided around 1:00:00? I tried both methods and neither worked! Thanks!
@dataschool7 жыл бұрын
Here's the URL for the SMS dataset: raw.githubusercontent.com/justmarkham/pycon-2016-tutorial/master/data/sms.tsv And you can find all of the code shown in the video here: github.com/justmarkham/pycon-2016-tutorial Hope that helps!
@moutaincold22187 жыл бұрын
I like you and your videos very much. I hope you can develop a more detailed course on scikit-learn and deep learning (TensorFlow).
@dataschool7 жыл бұрын
Thanks for the suggestion! I'll definitely consider it for the future! Subscribing to my newsletter is a great way to hear when I release new courses: www.dataschool.io/subscribe/
@willbeasley14197 жыл бұрын
Are you working on a course about using TensorFlow for NLP?
@dataschool7 жыл бұрын
I'm not, but I appreciate the suggestion and will keep it in mind for the future!
@rock_feller8 жыл бұрын
Hi Kevin, I didn't quite catch why we should do the train/test split before vectorization. Could you help? Rockefeller from Cameroon
@dataschool7 жыл бұрын
It's a tricky concept! Basically, you want to simulate the real world, in which words will be seen during the testing phase that were not seen during the training phase. By splitting before vectorization, you accomplish this. Hope that helps!
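A tiny demonstration of that simulation, with made-up messages: a word that first appears in the testing data is simply ignored, exactly as a deployed model must ignore brand-new words.

```python
from sklearn.feature_extraction.text import CountVectorizer

vect = CountVectorizer()
vect.fit(["win a free prize"])             # vocabulary learned from training data
X_test = vect.transform(["free lottery"])  # "lottery" was never seen during training

# "lottery" is silently dropped; only "free" is counted
print(X_test.sum())  # 1
```

Splitting after vectorization would sneak "lottery" into the vocabulary and make the evaluation unrealistically optimistic.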
@chrisdemchalk34917 жыл бұрын
Any recommendation for a multi label classification example where there will be a high number >200 of potential classes.
@dataschool7 жыл бұрын
I recommend reducing the complexity of the problem by reducing the number of classes.
@NBAchampionshouston8 жыл бұрын
Hi, thanks for the video! Do you know if it's possible to supply each article to CountVectorizer as a list of features already created (for example noun phrases or verb-noun combinations) rather than the raw article which CountVectorizer would usually then extract n-grams from? Thanks!
@dataschool8 жыл бұрын
From the CountVectorizer documentation, it looks like you can define the vocabulary used by overriding the 'vocabulary' argument: scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html However, it's not clear to me if that will work when using a vocabulary containing phrases rather than single words. Try it out, and let me know if you are able to get it to work!
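One way to try it, sketched with a made-up phrase vocabulary: pass the phrases as the vocabulary and set ngram_range wide enough that the analyzer actually produces tokens of those lengths, otherwise multi-word entries can never match.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Fixed vocabulary mixing two-word phrases and a single word;
# ngram_range=(1, 2) makes the analyzer emit unigrams AND bigrams to match against
vect = CountVectorizer(vocabulary=['machine learning', 'noun phrase', 'data'],
                       ngram_range=(1, 2))
X = vect.fit_transform(['machine learning loves data'])

# "machine learning" and "data" are each counted once; "noun phrase" is absent
print(X.sum(), X.shape)
```

If the phrases were extracted by an external tool (noun-phrase chunking, etc.), another route is to pre-join each phrase with underscores and pass a custom analyzer, but the sketch above stays within stock CountVectorizer options.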
@VijayaragavanS6 жыл бұрын
Thanks for the detailed information. Is it possible to use multidimensional input?
@dataschool6 жыл бұрын
I'm sorry, I don't understand your question. Could you clarify? Thanks!
@amosmunezero99587 жыл бұрын
Hi, does anyone know how we can extract and store the words that are thrown out during the transformation? Is there an easier way (a built-in function) than writing Python regular expressions or string manipulation to compare the words against the feature names? Thanks.
@dataschool7 жыл бұрын
Great question! I don't know of a simple way to do this, but perhaps someone else here knows...
@sonalivv7 жыл бұрын
Can we use Naive Bayes to classify text into more than just 2 or 3 categories (potentially 10+ categories)?
@dataschool7 жыл бұрын
Great question! The scikit-learn documentation says that "All scikit-learn classifiers are capable of multiclass classification": scikit-learn.org/stable/modules/multiclass.html So yes, that should work!
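A minimal multiclass sketch with made-up documents and three categories; nothing in the code changes as the number of classes grows to 10 or more:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ['win a free prize', 'meeting moved to noon', 'your invoice is attached']
labels = ['spam', 'work', 'billing']  # any number of string classes works

X = CountVectorizer().fit_transform(docs)
clf = MultinomialNB().fit(X, labels)
print(clf.classes_)  # all three classes were learned
```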
@yuanxiang13697 жыл бұрын
That's a great tutorial. Just a quick question: if I were to apply an SVM, random forest, or latent Dirichlet allocation instead of Naive Bayes, would the input data still be in document-term matrix form?
@dataschool7 жыл бұрын
I'm not sure for LDA, but for SVM and Random Forests, yes, the input format would be the same.
@puneetja7 жыл бұрын
Hi Kevin, thanks for the wonderful tutorial. I just have a very basic question: we did image classification in the past using a neural network, where we used a few convolutional layers and activation functions. However, I see here that you did not use any convolutional layers or activation functions. Is this because you are using a Naive Bayes classifier rather than a neural network? Thanks in advance.
@dataschool7 жыл бұрын
That's correct! Naive Bayes does not involve any layers or an activation function.
@vivekathilkar58737 жыл бұрын
great learning experience
@dataschool7 жыл бұрын
Thanks!
@ash_engineering5 жыл бұрын
Hey Kevin, could you please make a video on machine learning pipelines?
@dataschool4 жыл бұрын
I cover pipeline in this video: kzbin.info/www/bejne/n6OrmXeDl9xmrtE
@anujasilampur92116 жыл бұрын
In my case, the shapes of X_train and X_train_dtm are different, and I'm getting "ValueError: Found input variables with inconsistent numbers of samples: [25, 153]" at fit... please help!
@dataschool6 жыл бұрын
It's hard for me to say what is going wrong... good luck!
@macpc46127 жыл бұрын
Is it possible to calculate spamminess and hamminess irrespective of the classifier used?
@dataschool7 жыл бұрын
Great question! You could use a similar approach with other classification models, though the code would be a bit more complicated because you wouldn't have access to the feature_count_ and class_count_ attributes of the Naive Bayes model.
@rayuduyarlagadda34736 жыл бұрын
Awesome video! Would you please make videos on performance metrics, featurization, and feature engineering?
@dataschool6 жыл бұрын
Thanks for your suggestions!
@dataschool6 жыл бұрын
I wrote a blog post about feature engineering: www.dataschool.io/introduction-to-feature-engineering/
@wowwwwwwwwwwwwwwwize7 жыл бұрын
Hi Kevin, that is a great video. I have one question: when dealing with a DataFrame with a large number of rows, each containing a lot of text, which vectorizer will work better: TfidfVectorizer, CountVectorizer, or HashingVectorizer? I applied tf-idf, but it generates so many features that it becomes difficult to append them to the original DataFrame because of the large array size.
@dataschool7 жыл бұрын
It's impossible to know in advance which vectorizer will work best, sometimes you just have to experiment! Once you have generated a document-term matrix, you should not put it back in pandas. It should remain a sparse array. Hope that helps!
@benben3417 жыл бұрын
Thank you very much; I've just viewed your whole online course. I'm not really that super-duper with machine learning, but your courses certainly got me thinking and able to get scikit-learn working at least. One thing I'll have to research is how, if the initial dataset uses classes like good/bad instead of numbers such as 1/0, to actually feed that into the "label.map" step from this video. This video shows how to do it briefly, but your "Machine learning in Python with scikit-learn" series doesn't cover it at all (unless I missed it somewhere). Also, near the end of that series the videos become longer, which means I have to stop them more often, so maybe more breaks would help. As I said, it's amazing what you have provided, and I'm just trying to offer some feedback instead of only taking.
@dataschool7 жыл бұрын
Thanks for your feedback! Regarding your question about numeric labels, I think this video might be helpful to you: kzbin.info/www/bejne/hpDUYaehjtapic0
@eugenydolgy10606 жыл бұрын
Great video!
@dataschool6 жыл бұрын
Thanks!
@generalzeedot7 жыл бұрын
Kev, has anyone ever told you that you remind them of Sheldon Cooper? Keep up the great work btw
@dataschool7 жыл бұрын
Ha! I have heard that a few times recently :) Glad you like the videos!
@andrewhintermeier96758 жыл бұрын
Is it possible to use k-fold cross-validation instead of train/test split with this method?
@dataschool8 жыл бұрын
Yes, you could use cross-validation instead. However, to do cross-validation properly, you have to also use a pipeline so that the vectorization takes place during cross-validation, rather than before it. Hope that helps!
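A minimal sketch of that pattern, assuming a tiny hypothetical labeled corpus: wrapping the vectorizer and model in a Pipeline means the vectorizer is refit on each training fold, so the test fold's vocabulary never leaks into training.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical labeled corpus: 1 = spam, 0 = ham
X = ["win cash now", "cheap meds offer", "lunch at noon",
     "see you at the meeting", "free prize claim", "project status update"]
y = [1, 1, 0, 0, 1, 0]

# The pipeline applies fit_transform to each training fold and transform
# to each test fold automatically during cross-validation
pipe = make_pipeline(CountVectorizer(), MultinomialNB())
scores = cross_val_score(pipe, X, y, cv=3, scoring="accuracy")
print(scores.mean())
```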
@andrewhintermeier96758 жыл бұрын
Thanks! I've never used pipelines before but I've seen pipelines used in some example code before, I'll have to look into it.
@dataschool8 жыл бұрын
Here's a nice example that includes a pipeline: radimrehurek.com/data_science_python/
@andrewhintermeier96758 жыл бұрын
Thank you so much. Your series is honestly the best I've found for learning ML, it's been so helpful for me :D
@dataschool8 жыл бұрын
You're very welcome, and thanks for your kind words! :)
@jjunior12836 жыл бұрын
Thanks a lot for the course. Very powerful indeed. Is there a way to create a DataFrame with, say, the top 20 features? Thanks again
@dataschool6 жыл бұрын
Glad you liked it! Regarding your question, is this what you are looking for? df = tokens.head(20).copy()
@jjunior12836 жыл бұрын
Thanks for the suggestion. I figured if I explained the problem better I'd get better help. I'm trying to predict whether an item will fail or not. I have a dataset with over 30 variables, one of which I'm trying to vectorize. Doing this blows that one variable up to over 7,000 features. Because of this, I run out of memory when merging them with the dataset containing the 30 other variables. Also, due to the dataset being unbalanced, the models don't train well using the two datasets independently (similar results, both as good as random). I recently created an account on AWS and bought a powerful instance; I was able to merge the two, and still it didn't train well. My goal is to use, say, the top 20 features and merge them with the 30 other variables to train. I used dtm = fit_transform() for that one variable. Is there a way to limit the number of features to an arbitrary number, say 20, that is, the ones with the highest tf-idf scores? Or can I get them manually? Sorry for the length, and thanks for the help.
@dataschool6 жыл бұрын
The vectorization is creating a sparse matrix, which is quite memory efficient. It sounds like the problem is that you are merging a sparse matrix with a dense matrix, which forces the sparse matrix to become dense, which would definitely create memory problems. One solution is to train models on the datasets separately and then ensemble them. It sounds like you might be doing this already, but aren't getting good results? If so, I don't think it's because of class imbalance. I think that using the max_features parameter of CountVectorizer will accomplish what you are trying to do, though I don't think it's necessarily a good strategy. You will lose too much valuable data. My recommended strategy is not super simple, so I can't describe it briefly, but it's covered in module 5 of my online course: www.dataschool.io/learn/ Hope that helps!
@jjunior12836 жыл бұрын
Data School thanks a lot. I will definitely watch that recommended video and keep playing with it
@skinheadworkingclass7 жыл бұрын
Hi Kevin, excellent presentation! I would like to ask you a question: how can "tokens_ratio" improve the accuracy score of the Naive Bayes model?
@dataschool7 жыл бұрын
Glad you liked it! tokens_ratio was just a way to understand the model - it won't actually help the model to become better.
@mohinik44735 жыл бұрын
I need to test a Pega system built along with Python for machine learning. I am an automation tester but need to do AI testing. Can you please guide me on how to go about it?
@dataschool5 жыл бұрын
I won't be able to help, I'm sorry!
@naveenv30978 жыл бұрын
You said 3 documents as the explanation for the 3x6 sparse matrix (around 35:10). Where did we give the 3 documents?
@dataschool8 жыл бұрын
The 3 documents are the 3 elements of the 'simple_train' list, which we passed to the vectorizer during the 'fit' step. Hope that helps!
@naveenv30978 жыл бұрын
Thank you
@deepikadavuluri84747 жыл бұрын
Hi Kevin, it is a great lecture. Even though I am new to machine learning, I understood the basics of machine learning and logistic regression. I have a doubt: can we classify into more than two groups (ham, spam, and some other)? Thank you.
@dataschool7 жыл бұрын
Great to hear! Regarding your question, you can classify into more than two categories - it's called multi-class classification. scikit-learn does support that. Hope that helps!
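A minimal sketch of that, with a hypothetical three-class corpus: MultinomialNB (like most scikit-learn classifiers) handles any number of classes with no extra configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical three-class corpus: ham (0), spam (1), newsletter (2)
X_train = ["lunch at noon", "win free cash", "newsletter issue 42",
           "see you at the meeting", "claim your prize", "monthly digest inside"]
y_train = [0, 1, 2, 0, 1, 2]

vect = CountVectorizer()
dtm = vect.fit_transform(X_train)

# The fitting code is identical to the two-class case
nb = MultinomialNB()
nb.fit(dtm, y_train)

# predict returns one of the three learned classes
pred = nb.predict(vect.transform(["free prize cash"]))
print(pred)
```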
@navkirankaur6717 жыл бұрын
I am getting this error when I run the AUC score: ValueError: multiclass format is not supported
@dataschool7 жыл бұрын
Are you using the same dataset as me, or your own dataset?
@galymzhankenesbekov29244 жыл бұрын
You make wonderful videos and courses; however, they are very expensive for international students like me.
@gianglt20088 жыл бұрын
Thank you for the resource. I have a question: in real life, the instantiation of the CountVectorizer class can fail if the volume of input text is BIG (e.g. I want to encode a large number of text files). Has that ever happened to you?
@dataschool8 жыл бұрын
I haven't had that happen, but if it did, it should happen during the 'fit' stage rather than during the instantiation of the class. In any case, HashingVectorizer is designed to deal with very large vocabularies: scikit-learn.org/stable/modules/feature_extraction.html#vectorizing-a-large-text-corpus-with-the-hashing-trick Hope that helps!
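A small sketch of that idea, with a hypothetical two-document corpus: HashingVectorizer never stores a vocabulary, so its memory use is bounded even for huge corpora, and it is stateless (transform works without a prior fit).

```python
from sklearn.feature_extraction.text import HashingVectorizer

docs = ["first large document text here", "second large document text here"]  # hypothetical

# Tokens are hashed straight to column indices; no vocabulary dictionary
# is ever built, so memory use does not grow with vocabulary size.
# alternate_sign=False keeps counts non-negative (useful for MultinomialNB).
vect = HashingVectorizer(n_features=2**18, alternate_sign=False)
dtm = vect.transform(docs)   # no fit step needed
print(dtm.shape)             # (2, 262144)
```

The trade-off is that there is no inverse mapping from column indices back to tokens, so you lose interpretability.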
@gianglt20088 жыл бұрын
Thank you very much. You are correct: the problem happens during the fitting stage. I will try with HashingVectorizer.
@_rsk_6 жыл бұрын
Hello Kevin, I have progressively watched your videos from pandas to scikit-learn to this video on ML with text. All have been brilliant videos, very nicely paced. Kudos on that, and I hope you continue with more videos (shout-out for Jupyter notebooks ;-) ). I have one question specific to the topic of this video. For text analytics, the recommendation is to create a vocabulary and document-term matrix from the training data using a vectorizer (i.e. instantiate a CountVectorizer and use fit_transform), then use the fitted vocabulary to build a document-term matrix from the testing data (i.e. with the vectorizer used during training, perform a transform). If I use TfidfVectorizer and then TruncatedSVD as shown below, is the commented step 3 the right way?
# Step 1: perform train/test split.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# Step 2: create a tf-idf matrix and perform SVD on it.
tfidf_vectorizer = TfidfVectorizer(sublinear_tf=True, stop_words='english')
tfidf_train = tfidf_vectorizer.fit_transform(X_train)
svd = TruncatedSVD(n_components=200, random_state=42)
X_train_svd = svd.fit_transform(tfidf_train)
# Step 3: transforming testing data??
# Is this the right way:
# tfidf_test = tfidf_vectorizer.transform(X_test)
# X_test_svd = svd.transform(tfidf_test)
Thanks in advance.
@dataschool6 жыл бұрын
Thanks for your very kind comments, I appreciate it! Regarding your question, I'm not really familiar with TruncatedSVD, so I'm not able to say. Good luck!
@takbirhossaintushar72907 жыл бұрын
Dear sir, please tell me how I can classify more than 2 classes, e.g. a 3- or 4-class prediction model, using the same approach.
@dataschool7 жыл бұрын
Most scikit-learn classification models inherently support multi-class prediction. So, the process is exactly the same!
@tulasijamun32346 жыл бұрын
Please read up on OnevsOne and OnevsAll classifiers to answer your question.
@itsbuzzz7 жыл бұрын
Hi Kevin! Thanks for that valuable presentation! Just a question: is the following the right way to apply k-fold cross-validation to text data?
X_train_dtm = vect.fit_transform(X_train)
scores = cross_val_score(, X_train_dtm, y, cv=5)
I am not totally sure if X_train_dtm and y are correct in the cross_val_score call above. Thanks again!
@itsbuzzz7 жыл бұрын
I just saw Andrew's comment... bit.ly/2mXdwZ9
@dataschool7 жыл бұрын
Glad you liked the tutorial! Regarding your question, I actually cover this in detail in my online course: www.dataschool.io/learn/
@FedericaLuciaVinella6 жыл бұрын
Watching this at 1.5x speed, and it's still understandable.
@dataschool6 жыл бұрын
Great!
@cartoonjerk7 жыл бұрын
Nevermind my previous comment, problem solved. But now I have a new one and would be very happy if you can help me answer it! When I calculate my ham and spam frequencies, my ham count is completely different than yours. It reads: 1.373624e-09 for very, 4.226535e-11 for nasty, 2.113267e-11 for villa, 4.226535e-11 for beloved, and 2.113267e-11 for textoperator. Any way to fix this or has the data changed since then?
@dataschool7 жыл бұрын
The dataset hasn't changed. Are you sure all the code you wrote was identical to my code? You can check your code here: github.com/justmarkham/pycon-2016-tutorial/blob/master/tutorial_with_output.ipynb
@ghanemimehdi10638 жыл бұрын
Hi, thanks for sharing, it's very useful! I have a little question: for the labeling I use preprocessing.LabelEncoder(). Is that OK?
@dataschool8 жыл бұрын
Sure, LabelEncoder is useful as long as you are encoding labels (also known as "response values" or "target values") or binary categorical features. If you are using it to encode categorical features with more than 2 levels, you'll want to think carefully about whether it's an appropriate encoding strategy.
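A quick sketch of both points (hypothetical labels): LabelEncoder is fine for response values, but for a multi-level categorical feature its integer codes impose an artificial ordering, which is why one-hot encoding is usually safer there.

```python
from sklearn.preprocessing import LabelEncoder

# Encoding response values: this is exactly what LabelEncoder is for
le = LabelEncoder()
y = le.fit_transform(["ham", "spam", "ham", "spam"])
print(y)            # [0 1 0 1]
print(le.classes_)  # ['ham' 'spam']

# Caution: applied to a feature with more than 2 levels, the integer codes
# 0/1/2 imply an ordering the categories may not have; consider
# OneHotEncoder or pandas.get_dummies for such features instead.
```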
@jundou78586 жыл бұрын
Two questions about the bag of words that have obsessed me for a while. First question: my source file has 2 columns; one is the email content (text format), the other is the country name (3 different countries) from which the email was sent, and I want to label whether the email is spam or not. The assumption here is that the country an email was sent from also matters for whether it is spam. So besides the bag of words, I want to add a country feature; the question is whether there is a way to implement this in sklearn. The other question: besides the bag of words, what if I also want to consider the position of the words? For instance, if a word appears in the first sentence I want to lower its weight, and if it appears in the last sentence I want to increase its weight. Is there a way to implement that in sklearn? Thanks.
@dataschool6 жыл бұрын
Great questions! 1. Use FeatureUnion, or combine the two columns together and use CountVectorizer on the combined column. 2. You would write custom code to do this.
@aykutcayir648 жыл бұрын
This video is excellent, thanks! But there is a problem with the mobile version of the video: after the opening talk, I cannot hear the audio. Had you noticed that before?
@dataschool8 жыл бұрын
Glad you liked it! Yes, that audio problem affects some devices and browsers, especially mobile devices. It's caused by the audio encoding of the original recording. I tried to fix it, but didn't come up with any solutions. I'm sorry!
@priyankap86276 жыл бұрын
Hey, you have used 2 classes for classification, right? What if I need more than 2 classes, e.g. contempt, depression, anger, joy, and other such emotions? Do I need to change any of the code here, or is providing a dataset with multiple classes enough? And I have one more doubt: once the model is built, how can I actually know which class a new text document supplied as input belongs to? E.g., is the new document ham or spam?
@dataschool6 жыл бұрын
1. Most of the time, you don't need to modify your scikit-learn code for multi-class classification. 2. Using the predict method Hope that helps! You might be interested in my course: www.dataschool.io/learn/
@priyankap86276 жыл бұрын
@@dataschool Thanks a lot. This lecture was very helpful for me. I love the way you teach. Great teacher :)
@ichtube7 жыл бұрын
This is just a minor point, but how come y is 150 by nothing when it's a vector?
@dataschool7 жыл бұрын
When I say that it's "150 by nothing", that really just means that it's a one-dimensional object in which the magnitude of the first dimension is 150, and there is no second dimension. That is distinct from a two-dimensional object of 150 by 1. Does that help? If I misunderstood your question, please let me know!
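The distinction is easy to see in NumPy (a small illustrative sketch):

```python
import numpy as np

y = np.zeros(150)        # one-dimensional: "150 by nothing"
print(y.shape)           # (150,)
print(y.ndim)            # 1

Y = y.reshape(150, 1)    # two-dimensional: 150 rows, 1 column
print(Y.shape)           # (150, 1)
print(Y.ndim)            # 2
```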