K Nearest Neighbour Easily Explained with Implementation

246,491 views

Krish Naik

Comments: 94
@sahilvlogs5848 2 years ago
Great explanation! My teacher took 2 days and I didn't understand a word; after watching this 18-minute video I'm done with KNN. Thank you!
@thomsondcruz 2 years ago
Excellent video! 3:41 Euclidean distance is nothing but the Pythagorean theorem's way of calculating the hypotenuse.
@naveendubey2815 3 years ago
Excellent, Krish... you are really giving a lot to society.
@VC-dm7jp 3 years ago
Thank you so much for explaining the concept and code in such a friendly manner.
@Neerajkumar-xl9kx 3 years ago
Great way of teaching, by putting code and implementation together.
@kamran_desu 4 years ago
Great explanation, just adding my thoughts here. @12:20 you've mentioned that K=1 is underfitting. I think it's the other way around: a low K means highly flexible, jagged boundaries (low bias, high variance), leading to overfitting.
@eduardomedina5081 3 years ago
Good point
@KeigoEdits 2 years ago
Hey Kamran, did you get why he used 23 as k and not 33, which gives the highest accuracy? I get the point about overfitting, and maybe that's why we didn't choose 33, but why 23 either?
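The bias/variance point in this thread can be checked directly. A minimal sketch, assuming synthetic data from sklearn's `make_classification` (the dataset and the k values are illustrative, not the ones from the video):

```python
# Illustrative sketch: compare train vs. test accuracy for k=1 and a larger k.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

scores = {}
for k in (1, 15):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    # (train accuracy, test accuracy) for this k
    scores[k] = (knn.score(X_train, y_train), knn.score(X_test, y_test))
    print(k, scores[k])
```

With k=1, each training point is its own nearest neighbour, so training accuracy is perfect while test accuracy lags behind: the signature of overfitting, not underfitting.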
@tagoreji2143 2 years ago
This is what was needed, thank you so much sir.
@87040256 2 years ago
Thank you so much! This is exactly what I needed.
@sharmakartikeya 3 years ago
Thank you sir, KNN is pretty clear to me now!! :)
@etn422 3 years ago
lmao
@vaibhavkhobragade9773 3 years ago
lmao
@omingole7304 2 years ago
@@vaibhavkhobragade9773 why are you two lmaoing?
@sandipansarkar9211 4 years ago
Superb explanation. Now I just need to get my hands dirty in the Jupyter notebook. Thanks.
@mallikharjunv6805 3 years ago
Thanks Krish. Good explanation!
@AdityaRaj-kl1be 4 years ago
In this video you said the model will underfit when k=1, but it's the other way around: the model tends to overfit when k is low, and as we increase k it moves toward underfitting.
@saisai-yo4nv 4 years ago
Yeah, I have the same doubt: at k=1 it will be overfitting and at k=n it will be underfitting.
@Yzyou11 2 years ago
Yes
@studio2038 3 years ago
🙏 Nice video, easy to understand.
@nileshkulkarni2845 3 years ago
Very well explained, sir. Thanks a lot for making the concept clear for me.
@sandipansarkar9211 4 years ago
Finished practicing in the Jupyter notebook. Thanks.
@sairohithpasham 3 years ago
Thanks for giving a lucid explanation.
@manikaransingh3234 4 years ago
I don't understand the idea of using KNN for a regression problem. For classification it's fine: there you know the location of the point (its x and y values) and you have to predict its category, so picking the five nearest points is understandable. But in a regression problem you only know the x value of a point and have to predict the y value, if I'm not wrong. In the video you first plot the point and then pick 5 or some nearest points, but if you already know the location (x, y) of the point, what is the problem here? The mean of the 5 neighbours' distances gives you what? I'm guessing the y value, but if that is so, then how will you pick the k neighbours? Please answer!
@krishnaik06 4 years ago
For a KNN regressor you take the average of the 5 nearest neighbours.
@manikaransingh3234 4 years ago
@@krishnaik06 I'm really sorry, sir, but that doesn't answer my question. I understand you're busy and maybe couldn't go through the whole question. Please try to look at it once more and reply whenever you have time. Thanks!
@sathishs1756 4 years ago
@@manikaransingh3234 I'm not sure exactly, but according to the lecture we should always select the k value as 5; the mean of the 5 nearest neighbours' values gives the y value.
@manikaransingh3234 4 years ago
@@sathishs1756 You didn't understand my question either. Okay, you say five neighbours: neighbours of which point?
@himabinduh7623 4 years ago
@manikaransingh3234 Same doubt here.
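On "neighbours of which point": in KNN regression the neighbours of a query point are found using its feature values only; its y value is never needed, because the prediction is the mean of the k neighbours' known y values. A minimal sketch with made-up numbers (not the video's dataset):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Toy training data: one feature x, continuous target y (values are arbitrary).
X_train = np.array([[1.0], [2.0], [3.0], [10.0], [11.0]])
y_train = np.array([1.1, 1.9, 3.2, 9.8, 11.1])

reg = KNeighborsRegressor(n_neighbors=3).fit(X_train, y_train)

# For the query x=2.5 the 3 nearest training x-values are 1.0, 2.0 and 3.0,
# so the prediction is the mean of their targets: (1.1 + 1.9 + 3.2) / 3.
pred = reg.predict([[2.5]])
print(pred)
```

So the query point never needs a y value of its own; distances are measured in feature space alone, and y enters only through the averaging step.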
@istech21 3 years ago
You did not mention which metric is applied at test time. Euclidean? Manhattan? The sklearn library seems to use Minkowski by default.
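For reference, sklearn's `KNeighborsClassifier` does default to `metric='minkowski'` with `p=2`, which is exactly the Euclidean distance; setting `p=1` gives Manhattan. A toy sketch (the data values are arbitrary):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

euclid = KNeighborsClassifier(n_neighbors=3)                       # minkowski, p=2 (Euclidean)
manhat = KNeighborsClassifier(n_neighbors=3, metric='minkowski', p=1)  # Manhattan
euclid.fit(X, y)
manhat.fit(X, y)

# The query [1, 1] sits among the class-0 cluster under both metrics.
print(euclid.predict([[1, 1]]), manhat.predict([[1, 1]]))
```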
@raghuram6382 3 years ago
@Krish Naik The dataset you explained here is a regression problem, right? Then why have you used "KNearestClassifier" in the code while importing from the sklearn library? Could you please tell me? Also, why is a classification report needed for a regression problem here?
@gauravkumar2602 3 years ago
Amazingly explained. Thanks a lot.
@chaitanyasrinevas8764 3 years ago
In the error rate vs. value of K plot, shouldn't the value of K be around 37? At k=37 we are getting the least error; at that point the error is less than 0.6?
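The error-rate-vs-k curve under discussion can be rebuilt with a short loop. A sketch on synthetic data (the notebook's real dataset and split will give different numbers); a common rule of thumb is to prefer a small k inside the flat low-error region rather than chase the single lowest point, which may just be noise:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Test error rate for each candidate k.
error_rate = []
for k in range(1, 40):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    error_rate.append(np.mean(knn.predict(X_te) != y_te))

best_k = int(np.argmin(error_rate)) + 1   # k with the lowest test error
print(best_k, error_rate[best_k - 1])
```

Plotting `error_rate` against `range(1, 40)` reproduces the elbow plot from the video; `np.argmin` only identifies the single minimum, so the flat-region judgement is still made by eye.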
@kushalhu7189 3 years ago
Perfectly explained.
@chetanmundhe8619 4 years ago
Very nice explanation, thank you for this video.
@subbareddyjangalapalli4708 4 years ago
Thank you Krish. Can we call all multiclass logit regressions non-linear? Please confirm or post a small video. Thank you.
@kvsaipratap7697 5 years ago
Hi sir, thank you very much for your transfer of knowledge. Can you please explain the concept of Weight of Evidence (WOE) and how it is used in classification algorithms?
@manjunath.c2944 5 years ago
Superb... good job, very much appreciated.
@karalworld 5 years ago
Excellent work. Done a good job.
@Subliminal001 2 years ago
I didn't get why you took k=23, as in the accuracy plot we can see that the accuracy is increasing after that point. We should take the k value that maximizes the accuracy, right?
@KeigoEdits 2 years ago
Same with me. Did you get the point now?
@shlokdoshi7162 2 years ago
If I give an input list for the KNN algorithm to predict the class of each element, how can I print out only the inputs belonging to a particular class?
@hasnainalibohra8232 2 years ago
Hello sir, for k=1 I'm getting overfitting, and as I increase the value of k the error rate increases. How do I choose the k value if the error graph is linear?
@pavankumargopidesu4730 5 years ago
Hi Krish, in what situations can we use KNN vs. logistic regression, and what is the difference between them?
@louerleseigneur4532 3 years ago
Thanks Krish
@srinathakarur9798 4 years ago
Thank you sir for your logic.
@RaviSharma-tg6yx 3 years ago
Is it also necessary to standardize categorical variables in KNN to find a better k value?
@SpiritedTravellerr 2 years ago
Sir, I have gone through the ML playlist and some videos are out of order after the 50th video; can you please check it again? Some videos are interchanged.
@roopagaur8834 5 years ago
Thank you so much! It's a really nice explanation.
@siddheshpawar1441 5 years ago
Thank you sir, great explanation. Can you make one video on the YOLO algorithm?
@howdontanalytics6158 3 years ago
Can you tell me how I can choose variables for KNN? I have 20+ variables and I'm not sure which ones to keep, or with what criteria.
@aashishdagar3307 3 years ago
Hi Krish, why not use k=33, which has the minimum error and maximum accuracy, instead of 23?
@mahindrarao4565 3 years ago
It leads you to overfitting. Too little training error is also not acceptable.
@aashishdagar3307 3 years ago
I think we need to plot the error rate for train vs. CV; then we have a better plot to look at when deciding between 23 and 33. If the train-CV gap is smaller at k=33 than at k=23, use k=33; otherwise k=23 is good.
@Charmingenby 4 years ago
Hi Krish, did you compute the error rate as (1 - mean) because you standardised the data points beforehand? That part confuses me.
@eugeneliu1212 3 years ago
If K is 4 and there is an equal 2-2 split, what would the classification be?
@codermafia3441 2 years ago
No, the k value should always be odd.
@shreyanshsahay 4 years ago
Hi Krish, why didn't we take the square root of the number of data points to calculate K?
@shreyanshdubey8530 4 years ago
Instead of StandardScaler, can't we use MinMaxScaler?
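MinMaxScaler is a valid alternative: the point of scaling before KNN is only that every feature ends up on a comparable range, so no single feature dominates the distance. A sketch comparing the two in a pipeline (synthetic data, illustrative only):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = make_classification(n_samples=300, random_state=1)

# Cross-validated accuracy with each scaler in front of the same KNN model.
scores = {}
for scaler in (StandardScaler(), MinMaxScaler()):
    pipe = make_pipeline(scaler, KNeighborsClassifier(n_neighbors=5))
    scores[type(scaler).__name__] = cross_val_score(pipe, X, y, cv=5).mean()
    print(type(scaler).__name__, round(scores[type(scaler).__name__], 3))
```

Which scaler scores better depends on the data; MinMaxScaler is more sensitive to outliers because the min and max set the range.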
@HARSHRAJ-2023 5 years ago
Hi Krish. Can you please share the link to the video on imbalanced datasets?
@helenhilamariam3149 4 years ago
Hello sir, would you please explain the article "Nearest Neighbour Algorithms for Forecasting Call Arrivals in Call Centers"?
@surendratadakaluru8900 5 years ago
Why do we take k=5? Can we take any other value or not?
@azharshaik21 5 years ago
Could you please briefly explain Euclidean and Manhattan distance?
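Briefly: Euclidean distance is the straight-line distance (the square root of the sum of squared per-axis differences), while Manhattan distance is the sum of the absolute per-axis differences. A worked example with two arbitrary points:

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([4.0, 6.0])

euclidean = np.sqrt(np.sum((a - b) ** 2))   # sqrt(3^2 + 4^2) = sqrt(25) = 5.0
manhattan = np.sum(np.abs(a - b))           # |3| + |4| = 7.0
print(euclidean, manhattan)
```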
@sindhumathi9209 3 years ago
I have started learning about data modelling and ML. My doubt: K-Nearest Neighbours comes under classification algorithms, which are a type of supervised learning, but here it is explained with regression also. Can anyone help me understand?
@codermafia3441 2 years ago
It works for both classification and regression problems, and it comes under supervised machine learning. But in real data scenarios KNN is mostly used for classification problems.
@social_media789 1 year ago
How do you find the radius in KNN (in a Jupyter notebook, with code)?
@ablearing4927 4 years ago
Hi Krish, I am trying to learn about algorithms which can be used for text-based analysis. Could you please advise?
@pabitrakumarghorai7623 4 years ago
I do not understand why you took the number of nearest neighbours as 23. Please reply, sir...
@lalitchaudhari7470 4 years ago
Refer to the following video, buddy: kzbin.info/www/bejne/paXSnYakl8ahh80
@manikantamaka7910 4 years ago
We choose k by looking at the graph, with k on the x-axis and error rate on the y-axis. That is how k came out as 23.
@iftikhar3609 3 years ago
Sir, do you have a Discord or Slack community? If yes, please share it here; I would like to join.
@ashishgarg5186 3 years ago
How do outliers affect KNN?
@abhiyujaiswal7579 5 years ago
Impressive! Nice clarification.
@unezkazi4349 3 years ago
How does it train itself on the data?
@anandprasadcc0967 4 years ago
Thank you sir :)
@awesomeak7083 4 years ago
Great
@xinyuanliu1959 4 years ago
I don't understand why k=5 is chosen at first, while later in the video 23 is chosen?
@manikaransingh3234 4 years ago
Choosing k=5 is just for reference; it's just another example. You have to pick the value of k for which the final error is minimum. The best value of k will basically depend on the dataset points.
@sajidurrehman89 4 years ago
Why do we need training if we just calculate distances from points at test time? What exactly is done in the training phase if we just classify points based on distance?
@adipurnomo5683 3 years ago
KNN does not have a training phase.
@QasimKhan-nd8og 3 years ago
Internally, KNN uses a tree data structure to sort feature vectors so that it does not have to search the entire training set to find the nearest neighbours. This data structure is generated during training.
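Both replies are broadly consistent with sklearn's behaviour: `fit()` essentially stores the training set, and depending on the `algorithm` parameter may arrange it into a KD-tree or ball tree so neighbour queries avoid scanning every point. A sketch (synthetic data, illustrative):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
X = rng.rand(200, 3)                    # 200 random points in 3 dimensions
y = (X[:, 0] > 0.5).astype(int)         # simple label rule for demonstration

# Same model, three different neighbour-lookup strategies.
results = {}
for algo in ("brute", "kd_tree", "ball_tree"):
    knn = KNeighborsClassifier(n_neighbors=5, algorithm=algo).fit(X, y)
    results[algo] = knn.score(X, y)
    print(algo, results[algo])
```

All three find the same neighbours and give the same predictions; they differ only in how the stored training set is searched at query time.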
@shubhamsahu943 4 years ago
Sir, what is the name of this dataset on Kaggle?
@kavyasharma5540 4 years ago
Where is the link to this Kaggle code?
@yijunshen9287 3 years ago
Best compared to other resources!!
@madhabipatra8973 3 years ago
Please help me to find ML Tutorial 44.
@shreeshanayak187 5 months ago
Can you please provide the PPT?
@arunkumarr6660 5 years ago
Could see the "Kadhal Vandhale" song in your bookmarks!!! Hah hah... nice song, though.
@dragolov 3 years ago
These are 2 musical (jazz) solos generated using a K Nearest Neighbour classifier: kzbin.info/www/bejne/sKWWoI1nipp0etE kzbin.info/www/bejne/iZnIpa2VaLCKodU
@sarthakbhatnagar961 4 years ago
go corona corona go!!
@atifroome 3 years ago
Hue = hoie 😀