This whole set of videos on machine learning is so well done, and everything was explained in molecular detail. A great teacher with exceptional teaching ability! I feel truly blessed.
@tonimigliato2350 · 5 years ago
I feel bad for people trying to learn Machine Learning who weren't lucky enough to find this class as I was. Thanks Prof. Freitas!
@cakobe8 · 9 years ago
I truly appreciate these lectures. Thank you very much professor, great pacing, great structure, great content!
@saidalfaraby · 11 years ago
I wish I had watched this video earlier, before the midterm. Cool, your explanation is always amazing. Thank you.
@havalsadiq3655 · 11 years ago
Very, very clear explanation. I have spent a lot of time learning probability, and just now everything became clear. Really a very smart professor!
@crestz1 · 9 months ago
Beautifully linked the idea of maximising likelihood by illustrating the 'green line' at 51:41.
@jiongwang7645 · 5 years ago
God bless you, professor Freitas!
@joeleepee · 12 years ago
Smart professor!
@marcuswallenberg4492 · 10 years ago
Great stuff, although I wonder, should the normalisation constant for the multivariate normal pdf at 19:00 contain a factor (2*pi)^(-n/2) (since it's stated as a general multivariate Gaussian)? If it's still supposed to be the bivariate example, I missed that...
@jonpit4342 · 4 years ago
Exactly, as you pointed out, it should have the exponent negative n over 2, since it deals with n random variables.
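For readers who want the formula in front of them, here is a sketch of the general density this thread is discussing (standard form, assuming an n-dimensional Gaussian with mean vector μ and covariance matrix Σ):

```latex
% Density of an n-dimensional Gaussian N(mu, Sigma).
% The (2*pi)^(-n/2) factor mentioned above is part of the
% normalisation constant; for the bivariate case n = 2 it
% reduces to (2*pi)^(-1).
\[
  \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})
  = (2\pi)^{-n/2}\,\lvert\boldsymbol{\Sigma}\rvert^{-1/2}
    \exp\!\Bigl(-\tfrac{1}{2}
      (\mathbf{x}-\boldsymbol{\mu})^{\mathsf{T}}
      \boldsymbol{\Sigma}^{-1}
      (\mathbf{x}-\boldsymbol{\mu})\Bigr)
\]
```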
@gruppenzwangimweb20 · 8 years ago
great intuition for MLE
@yy8848 · 11 years ago
The lecture is great! It is really helpful. Thank you.
@DivakarHebbar · 7 years ago
+1 for your sense of humor! :) Great lecture.
@funfun_sci · 4 years ago
awesome lecture
@jhonathanpedroso7103 · 11 years ago
Great lesson!
@Gouda_travels · 2 years ago
This is when it got really interesting, 22:02: typically, I'm given points and I am trying to learn the mu's and the sigma's.
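For anyone following along, a minimal sketch of what "learning the mu's and the sigma's" means in the 1-D Gaussian case (the data values below are made up for illustration; the sample mean and the biased sample standard deviation are the maximum-likelihood estimates):

```python
import numpy as np

# Hypothetical observed points; in the lecture these are the given data.
x = np.array([2.1, 1.9, 2.4, 2.0, 1.6])

# Maximum-likelihood estimates of the Gaussian parameters:
mu_ml = x.mean()          # sample mean
sigma_ml = x.std(ddof=0)  # divide by N (not N - 1): the ML estimate

print(f"mu_ML = {mu_ml:.3f}, sigma_ML = {sigma_ml:.3f}")
```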
@AlqGo · 7 years ago
Thank you. This lecture alone has consolidated many fragments of knowledge that I have about linear regression! It's like almost everything clicked for me. I do still have a big question: why is the standard deviation also estimated by minimizing the negative log-likelihood? What makes it an appropriate estimate of the standard deviation of the same normal distribution that has mean (x^T)*theta_ML?
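In case it helps others with the same question, here is a sketch of the usual argument (assuming the lecture's model, y_i ~ N(x_i^T θ, σ²) with N independent points): σ is treated as just another parameter of the likelihood, so it is estimated by the same maximisation.

```latex
% Log-likelihood of linear regression with Gaussian noise:
\[
  \log p(\mathbf{y} \mid X, \theta, \sigma)
  = -\frac{N}{2}\log\bigl(2\pi\sigma^{2}\bigr)
    - \frac{1}{2\sigma^{2}}
      \sum_{i=1}^{N}\bigl(y_i - \mathbf{x}_i^{\mathsf{T}}\theta\bigr)^{2}
\]
% Setting the derivative with respect to sigma^2 to zero at
% theta = theta_ML gives the average squared residual:
\[
  \sigma^{2}_{\mathrm{ML}}
  = \frac{1}{N}\sum_{i=1}^{N}
    \bigl(y_i - \mathbf{x}_i^{\mathsf{T}}\theta_{\mathrm{ML}}\bigr)^{2}
\]
```

So the ML standard deviation is "appropriate" in the sense that it is the spread around the fitted mean that makes the observed points most probable under the model.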
@tdoge · 5 years ago
39:00 - Maximum likelihood
45:20 - Linear regression
@ahme0307 · 11 years ago
The part at 1:12:38 is a bit confusing. I think it should be that the information the unfair coin toss reveals to us is less than one heads-or-tails answer. Am I missing something?
@SiaHranova · 5 years ago
I'm not sure about this, but the way I understand entropy is as a measure of randomness: when you have a fair coin, you have the highest entropy, since all events in the state space are equally likely. If you have an unfair coin, you have more information about what the value will be the next time the coin is flipped. In the limiting cases you have maximum information gain and minimum entropy, since every throw will result in 0 or 1. In later lectures, when he talks about decision trees and information gain, he explains this.
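A quick numerical check of this intuition (a minimal sketch; the bias values are illustrative):

```python
import numpy as np

def coin_entropy(p_heads: float) -> float:
    """Shannon entropy in bits of a coin with P(heads) = p_heads."""
    p = np.array([p_heads, 1.0 - p_heads])
    p = p[p > 0]  # convention: 0 * log(0) = 0
    return float(-(p * np.log2(p)).sum())

print(coin_entropy(0.5))  # fair coin: 1.0 bit -- maximum uncertainty
print(coin_entropy(0.9))  # unfair coin: ~0.47 bits -- each toss reveals less than one bit
print(coin_entropy(1.0))  # fully biased coin: 0.0 bits -- nothing left to learn
```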
@SNPolka56 · 9 years ago
Excellent lecture ....
@mrf145 · 10 years ago
Superb!
@KrishnaDN · 9 years ago
Perfecto
@karimb. · 4 years ago
Machine learning... Linear regression
@Lets_MakeItSimple · 5 years ago
Thanks, Internet, for making this accessible in India.