Machine Learning Lecture 11 "Logistic Regression" -Cornell CS4780 SP17

  41,876 views

Kilian Weinberger

Days ago

Cornell class CS4780. (Online version: tinyurl.com/eC... )
Lecture Notes: www.cs.cornell....
If you want to take the course for credit and obtain an official certificate, there is now a revamped version (with much higher quality videos) offered through eCornell ( tinyurl.com/eC... ). Note, however, that eCornell does charge tuition for this version.

Comments: 47
@vatsan16 4 years ago
Am I the only one who raises their hand from home whenever he says raise your hands? :P
@varunjindal1520 3 years ago
Me too
@Enem_Verse 3 years ago
So many professors have knowledge, but only a few have enthusiasm while teaching.
@kodjigarpp 3 years ago
You literally saved my comprehension of Statistical Learning, thanks!
@sandeepreddy6295 3 years ago
Awesome lectures!! Glad to have bumped into one of them; after that, spending time on the entire series felt worthwhile.
@smallstone626 4 years ago
A fantastic lecture. Thank you professor.
@flaskapp9885 3 years ago
The best teacher ever
@vatsan16 4 years ago
I love the way he gets so excited when he says TADA! xD
@rolandheinze7182 5 years ago
Thanks for posting all these lectures, Dr. Weinberger. Should make Siraj Raval aware of their availability!
@dhrumilshah6957 4 years ago
These video lectures are great! Completed 12 in 2 days! I find them more intuitive than Andrew Ng's. Also, Prof, have you ever recorded lectures on unsupervised learning? Would love to watch those, since they are missing from this series.
@kilianweinberger698 4 years ago
Sorry, never recorded them. But I will try to the next time I teach that course.
@jiviteshsharma1021 4 years ago
@@kilianweinberger698 YES PLEASEEE
@YulinZhang777 5 years ago
This is great stuff. It's just funny that those are motorized chalkboards instead of dry-erase boards.
@doyourealise 3 years ago
9:48 this is amazing :)
@naifalkhunaizi4372 3 years ago
Amazing professor Kilian!!
@ugurkap 5 years ago
It would also be nice to see a dataset correctly classified by Naive Bayes, and whether Logistic Regression optimizes the hyperplane even further.
@insoucyant 3 years ago
Amazing lecture!!!! Thanks a lot, Prof.
@mhsnk905 2 years ago
@KilianWeinberger Unfortunately, online viewers don't have access to the course homework, but I think your claim at 20:31 is only valid if, across each dimension, data from classes +1 and -1 happen to come from Gaussian distributions with the same variance. Otherwise, you would need quadratic terms too.
@nrupatunga 4 years ago
Hi Kilian, the flow of your lectures is awesome. How you build upon the concepts is amazing. Do you have the Matlab code shared publicly? Really cool demos.
@taketaxisky 4 years ago
Interesting to learn the link between Naive Bayes and logistic regression. Thank you! For the spam email example with very high-dimensional features, logistic regression won't work, right?
@xiaoweidu4667 3 years ago
Weinberger is one of the best machine learning lecturers.
@jachawkvr 4 years ago
It was nice learning about the connection between Naive Bayes and logistic regression. However, at the moment, I am only able to see the connection between GaussianNB and logistic regression. Is there some way to get to logistic regression if the features are not real-valued?
@kilianweinberger698 4 years ago
Yes, typically you can derive the relationship if you use a member of the exponential family to model the class conditional feature distributions in NB. Hope this helps.
@30saransh 4 months ago
Amazing!!!!!!!!!!!!!!!!!!!!!!!!
@sudhanshuvashisht8960 4 years ago
I couldn't prove that Naive Bayes for continuous variables is a linear classifier, except for the case where I assumed the variance doesn't vary across the labels of y (spam/ham, for example) and only varies across the input variables x_alpha. Was anyone able to prove it?
@kilianweinberger698 4 years ago
I believe you do have to make that assumption. Sorry, wasn't clear in the lecture.
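For readers following along at home, the derivation under that shared-variance assumption can be sketched as follows (a sketch with assumed notation, not taken from the lecture notes): model each P(x_alpha | y) as a Gaussian with class-specific mean mu_{alpha,y} but a variance sigma_alpha^2 shared by both classes. The quadratic terms then cancel in the log-odds:

```latex
\log\frac{P(y=+1\mid\mathbf{x})}{P(y=-1\mid\mathbf{x})}
= \log\frac{P(y=+1)}{P(y=-1)}
+ \sum_{\alpha}\frac{\mu_{\alpha,+1}-\mu_{\alpha,-1}}{\sigma_{\alpha}^{2}}\,x_{\alpha}
- \sum_{\alpha}\frac{\mu_{\alpha,+1}^{2}-\mu_{\alpha,-1}^{2}}{2\sigma_{\alpha}^{2}}
```

Because the right-hand side is linear in x, P(y=+1 | x) takes the sigmoid-of-a-linear-function form of logistic regression. With class-dependent variances, the x_alpha^2 terms no longer cancel and the decision boundary becomes quadratic, as the comment above points out.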
@satyagv3670 4 years ago
Hi Kilian, as Naive Bayes comes up with a hyperplane that separates two distributions rather than two datasets, does the same statement hold even if the input dataset is highly imbalanced? I mean, without balancing, can we still proceed?
@kilianweinberger698 4 years ago
Yes, totally. The imbalance would then be reflected in the prior distribution over the class labels, P(Y), which is incorporated in Bayes' formula.
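Spelled out (a sketch of the standard formula, not additional lecture material), the prior enters the prediction as:

```latex
P(y\mid\mathbf{x}) \;=\; \frac{P(\mathbf{x}\mid y)\,P(y)}{\sum_{y'}P(\mathbf{x}\mid y')\,P(y')}
```

An imbalanced training set simply yields a larger estimate of P(y) for the majority class, which shifts the decision threshold accordingly; no resampling is needed for the model to remain well defined.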
@JoaoVitorBRgomes 3 years ago
So what's best for finding the parameters of logistic regression: MAP or MLE?
@sinhavaibhav 4 years ago
Since we are using the same form of distribution for P(Y|X) for NB and Logistic Regression, are we still making the same underlying assumption of conditional independence of Xi|Y in the case of Logistic Regression? Or does directly estimating the parameters of P(Y|X) mean that we are relaxing that assumption?
@JoaoVitorBRgomes 3 years ago
Is the distance of the Naive Bayes line from the points optimal? Or is the line placed equally distant from the +1 and -1 points? @kilian weinberger
@JoaoVitorBRgomes 3 years ago
At circa 42:30, what is lambda? A regularization constant?
@kilianweinberger698 3 years ago
Yes, exactly.
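For viewers without the whiteboard in front of them, here is a minimal sketch (not the course code; the dataset and function names are made up for illustration) of what that regularization constant does: MAP estimation for logistic regression with a Gaussian prior on w adds a lambda * ||w||^2 penalty to the negative log-likelihood, minimized here by plain gradient descent.

```python
import math

def sigmoid(z):
    # Numerically safe logistic function 1 / (1 + e^{-z}).
    return 1.0 / (1.0 + math.exp(-z)) if z >= 0 else math.exp(z) / (1.0 + math.exp(z))

def map_loss(w, data, lam):
    # Negative log-likelihood with labels y in {-1, +1}, plus L2 penalty lam * ||w||^2.
    nll = sum(math.log(1.0 + math.exp(-y * sum(wi * xi for wi, xi in zip(w, x))))
              for x, y in data)
    return nll + lam * sum(wi * wi for wi in w)

def map_gradient_step(w, data, lam, lr=0.1):
    # One gradient-descent step on the MAP objective.
    grad = [2.0 * lam * wi for wi in w]           # gradient of the penalty term
    for x, y in data:
        s = sum(wi * xi for wi, xi in zip(w, x))
        coeff = -y * (1.0 - sigmoid(y * s))       # d/ds log(1 + exp(-y*s))
        for i, xi in enumerate(x):
            grad[i] += coeff * xi
    return [wi - lr * gi for wi, gi in zip(w, grad)]

# Toy 2D data; the last feature is a constant 1 acting as the bias term.
data = [([1.0, 2.0, 1.0], 1), ([2.0, 1.0, 1.0], 1),
        ([-1.0, -1.5, 1.0], -1), ([-2.0, -0.5, 1.0], -1)]
w = [0.0, 0.0, 0.0]
for _ in range(200):
    w = map_gradient_step(w, data, lam=0.1)
```

Without the penalty (lambda = 0) this reduces to plain MLE; on separable data the MLE weights would grow without bound, which is exactly why the regularizer is useful here.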
@sekfook97 3 years ago
Can we say that Gaussian Naive Bayes is logistic regression in the case of continuous features?
@kilianweinberger698 3 years ago
You need to have two classes, and you need to have the same variance for both Gaussians. In the limit of infinite data (and if your modeling assumption is right) it will indeed become the same thing, but note that the two algorithms optimize the parameters differently. LR fits P(y|w,x) and NB fits P(x|y,theta). With limited data, these two approaches will “miss” the true distribution in different ways.
@sekfook97 3 years ago
@@kilianweinberger698 Thanks for the very detailed answer. I completely missed the optimisation part before. It all starts to make sense to me now.
@aajanquail4196 4 years ago
I think the product at 4:43 is missing the indicator variable?
@JoaoVitorBRgomes 3 years ago
But is logistic regression limited to linearly separable datasets?
@kilianweinberger698 3 years ago
wait for the kernel trick :-)
@maddai1764 5 years ago
Dear Professor, can you explain a little bit of what you said at 0:37, about why we can't find the extremum here by setting the derivative equal to zero? I mean, what does STUCK mean here?
@lordjagus 5 years ago
What it means is that you cannot find an analytical expression for the point where the derivative equals zero. It will be equal to zero somewhere, but there is no closed-form formula into which you can just plug your data to compute that point; you have to approximate it using some numerical method.
@maddai1764 5 years ago
lordjagus thanks. I got that, but my question is why it's not possible.
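Sketching the standard argument (not from the lecture itself): the MLE condition sets the gradient of the log-likelihood to zero, which for labels y_i in {-1, +1} reads

```latex
\nabla_{\mathbf{w}} \sum_{i}\log P(y_i\mid\mathbf{x}_i;\mathbf{w})
= \sum_{i}\Big(1-\frac{1}{1+e^{-y_i\,\mathbf{w}^{\top}\mathbf{x}_i}}\Big)\,y_i\,\mathbf{x}_i
= \mathbf{0}
```

Here w appears inside every exponential, so this sum of sigmoids cannot be inverted algebraically for w. Unlike ordinary least squares, where the optimality condition is linear in the parameters, there is no closed-form solution, and one resorts to gradient descent or Newton's method.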
@JoaoVitorBRgomes 3 years ago
@kilian Weinberger: Does logistic regression also have to respect the Gauss-Markov theorem assumptions?
@kilianweinberger698 3 years ago
Only if it is unbiased …
@JoaoVitorBRgomes 3 years ago
@@kilianweinberger698 Is that the same as saying the error term has normally distributed residuals? If so, does it have to respect the Gauss-Markov theorem? But a binary target would not have residuals, right? I can't seem to wrap my mind around this.
@shrishtrivedi2652 3 years ago
31:00 Logistic
@abunapha 5 years ago
Starts at 0:35