Machine Learning Lecture 26 "Gaussian Processes" - Cornell CS4780 SP17

67,930 views

Kilian Weinberger


Cornell class CS4780. (Online version: tinyurl.com/eCornellML)
GPyTorch GP implementation: gpytorch.ai/
Lecture Notes:
www.cs.cornell.edu/courses/cs4...
Small corrections:
Minute 14: it should be P(y,w|x,D) and not P(y|x,w,D); sorry about that typo.
Also the variance term in 40:20 should be K** - K* K^-1 K*.
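Written out as equations (my transcription of the two corrections above, not a quote from the lecture; K is the train-train kernel matrix, K* the test-train kernel matrix, K** the test-test kernel matrix, y the training labels, and the transposes depend on how K* is oriented):
$$
P(y_* \mid x_*, D) = \int P(y_*, w \mid x_*, D)\, dw,
\qquad
y_* \mid x_*, D \;\sim\; \mathcal{N}\!\left(K_* K^{-1} \mathbf{y},\; K_{**} - K_* K^{-1} K_*^{\top}\right).
$$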

Comments: 102
@pandasstory (4 years ago)
I got my first data science internship after watching all the lectures, and now I'm revisiting them during the quarantine and still benefiting a lot. This whole series is a legend, thank you so much, Professor Kilian! Stay safe and healthy!
@kilianweinberger698 (4 years ago)
Awesome! I am happy they are useful to you!
@jiahao2709 (4 years ago)
He is the most interesting ML professor that I have ever seen on the Internet.
@rshukla64 (5 years ago)
That was a truly amazing lecture from an intuitive teaching perspective. I LOVE THE ENERGY!
@TeoChristopher (4 years ago)
Best prof that I've experienced so far. I love the way he tries to build sensible intuition behind the math. Also, love the sense of humour.
@horizon2reach561 (4 years ago)
There are no words to describe how brilliant this lecture is; thanks a lot for sharing it.
@karl-henridorleans5081 (4 years ago)
8 hours of scraping the internet, but the 9th was the successful one. You, sir, have explained and answered all the questions I had on the subject, and raised much more interesting ones. Thank you very much!
@miguelalfonsomendez2224 (3 years ago)
Amazing lecture in every possible aspect: bright, funny, full of energy... a true inspiration!
@gareebmanus2387 (3 years ago)
Thanks for sharing the excellent lecture. @27:00 About the house's price: the contour plot was drawn entirely in the first quadrant, but the Gaussian contours should extend over the whole plane. This actually is a drawback of the Gaussian: while we know that the house's price can't be negative, and we do not wish to consider the negative range in our model at all, we can't avoid it: the Gaussian assigns non-zero probability to the negative price intervals as well.
@jiageng1997 (2 years ago)
Exactly, I was so confused about why he drew it as a peak rather than a ridge.
@saikumartadi8494 (4 years ago)
The explanation was great! Thanks a lot. It would be great if you uploaded videos of the other courses you taught at Cornell, because not everyone is lucky enough to get a teacher like you :)
@yibinjiang9009 (3 years ago)
The best GP lecture I've found. Simple enough and makes sense.
@ikariama100 (2 years ago)
Currently writing my master's thesis on Bayesian optimization; thank god I found this video!
@rajm3496 (4 years ago)
Very intuitive and easy to follow. Loved it!
@George-lt6jy (3 years ago)
This is a great lecture, thanks for sharing it. I also appreciate that you took the time to add the lecture corrections.
@salmaabdelmonem7482 (4 years ago)
The best GP lecture ever, impressive work (Y)
@alvarorodriguez1592 (4 years ago)
Hooray! Gaussian processes for dummies! Exactly what I was looking for. Thank you very much.
@ylee5269 (5 years ago)
Thanks for such a good lecture and nice explanation. I was struggling to understand Gaussian processes for a while until I saw your video.
@kiliandervaux6675 (3 years ago)
The comparison with house prices to explain the covariance was very pertinent. I never heard it anywhere else. Thanks!
@kilianweinberger698 (3 years ago)
From one Kilian to another! :-)
@htetnaing007 (2 years ago)
People like this are truly a gift to mankind!
@mostofarafiduddin9361 (3 years ago)
Best lecture on GPs! Thanks.
@CibeSridharanK (4 years ago)
Awesome explanation. That house example explains it in very layman's terms.
@benoyeremita1359 (1 year ago)
Sir, your lectures are really amazing; you give so many insights I would never have thought of. Thank you.
@danielism8721 (4 years ago)
AMAZING LECTURER
@DJMixomnia (4 years ago)
Thanks Kilian, this was really insightful!
@massisenergy (5 years ago)
It might have only 112 likes & ~5,000 views at the moment I comment, but it will have a profound influence on the people who watch it & it will stick in their minds!
@erenyeager4452 (3 years ago)
I love you. Thank you for explaining why you can model it as a Gaussian.
@damian_smith (3 months ago)
Loved that "the answer will always be Gaussian, the whole lecture!" moment.
@tintin924 (4 years ago)
Best lecture on Gaussian Processes
@rossroessler5159 (8 months ago)
Thank you so much for the incredible lecture and for sharing the content on KZbin! I'm a first year Master's student and this is really helping me self study a lot of the content I didn't learn in undergrad. I hope I can be a professor like this one day.
@naifalkhunaizi4372 (2 years ago)
Professor Kilian, you are truly an amazing professor.
@prizmaweb (5 years ago)
This is a more intuitive explanation than the Sheffield summer school GP videos
@fierydino9402 (4 years ago)
Thank you so much for this clear lecture :D It helped me a lot!!
@laimeilin6708 (4 years ago)
Woo, these are Andrew Ng-level explanations!! Thank you for making these videos. :)
@gyeonghokim (2 years ago)
Such a wonderful lecture!
@parvanehkeyvani3852 (1 year ago)
Amazing, I really love the teacher's energy.
@siyuanma2323 (4 years ago)
Looooove this lecture!
@jaedongtang37 (5 years ago)
Really nice explanation.
@galexwong3368 (5 years ago)
Really awesome teaching.
@hamade7997 (1 year ago)
Insane lecture. This helped so much, thank you.
@clementpeng (3 years ago)
Amazing explanation!
@rohit2761 (2 years ago)
Kilian is an ML god. Why do crappy lectures get so many views while this gold playlist gets so few? I hope people don't find it and keep struggling, to decrease the competition. But still, Kilian is a god and this is a gold series. Please upload deep learning as well.
@Higgsinophysics (2 years ago)
Brilliant and interesting!
@Ankansworld (3 years ago)
What a teacher!!
@preetkhaturia7408 (3 years ago)
Thank you for an amazing lecture, sir!! :)
@iusyiftgkl7346u (4 years ago)
Thank you so much!
@rorschach3005 (3 years ago)
Really insightful lecture series, and I have to say I gained a lot from it. An important correction for the beginning: sums and products of normal random variables are not always normal. The sum of two Gaussians is Gaussian only if they are independent or jointly normal. No such rule exists for products, as far as I remember.
@kilianweinberger698 (3 years ago)
Yes, that came out wrong. What I wanted to say is the product of two normal PDFs is proportional to a normal PDF (which is something that comes up a lot in Bayesian statistics).
@rorschach3005 (3 years ago)
@@kilianweinberger698 Thanks for replying. I am not sure I understand what you mean by proportional to a normal. The product of two normals is generally a combination of chi-square variables: XY = ((X+Y)^2 - (X-Y)^2)/4. Please correct me if I am missing something.
@fowlerj111 (1 year ago)
@@rorschach3005 I had the same reaction and I think I've resolved it. "product of Gaussians" can be interpreted two different ways. You and I considered the distribution of z where z=x*y and x and y are Gaussian. By this definition, z is definitely not Gaussian. KW is saying that if you define the pdf of z to be the product of the pdfs of x and y, normalized, then z is Gaussian. This is the property exploited in the motivating integral - note that probability densities are multiplied, but actual random variables are never multiplied.
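For reference, the identity Kilian refers to can be written as follows (a standard result, stated here as a sketch rather than a quote from the lecture): the pointwise product of two Gaussian densities in the same variable is proportional to another Gaussian density,
$$
\mathcal{N}(x;\mu_1,\sigma_1^2)\,\mathcal{N}(x;\mu_2,\sigma_2^2)
\;\propto\;
\mathcal{N}\!\left(x;\;\frac{\mu_1/\sigma_1^2+\mu_2/\sigma_2^2}{1/\sigma_1^2+1/\sigma_2^2},\;\left(\frac{1}{\sigma_1^2}+\frac{1}{\sigma_2^2}\right)^{-1}\right),
$$
whereas the distribution of the product of two Gaussian random variables is a different object and is indeed not Gaussian.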
@sarvasvarora (3 years ago)
"What the bleep" HAHAH, it was genuinely interesting to look at regression from this perspective!
@logicboard7746 (3 years ago)
The last demo was great for understanding GPs.
@CalvinJKu (3 years ago)
Hypest GP lecture ever LOL
@jiahao2709 (5 years ago)
Your lecture is really, really good! I have a question: if the input also has noise, how can we use Bayesian linear regression? Most books only mention Gaussian noise in the label, but I think it is quite possible to also have some noise in the input X.
@dr.vinodkumarchauhan3454 (2 years ago)
Beautiful
@SubodhMishrasubEE (3 years ago)
The professor's throat is unable to keep up with his excitement!
@franciscos.2301 (3 years ago)
*Throat clearing sounds*
@yuanchia-hung8613 (4 years ago)
These lectures definitely have some problems... I have no idea why they are even more interesting than Netflix series lol
@sulaimanalmani (3 years ago)
Before starting the lecture, I thought this must be an exaggeration, but after watching it, this is actually true!
@yannickpezeu3419 (3 years ago)
Thanks
@vishaljain4915 (3 months ago)
What was the question at 14:30, does anyone know? Brilliant lecture; easily a new all-time favourite.
@CibeSridharanK (4 years ago)
18:08 I have a doubt: we are not constructing a single line; instead we are comparing with every possible line nearby. Does that mean we are indirectly accounting for w through the covariance matrix?
@DrEhrfurchtgebietend (4 years ago)
It is worth pointing out that while there is no specific model, there is an analytic model being assumed. In this case he assumed a linear model.
@dheerajbaby (3 years ago)
Thanks for a great lecture. I am a bit confused about the uncertainty estimates. How can we formally argue that the posterior variance at any point is telling us something really useful? For example, consider a simple setup where the training data is generated as y_i = f(x_i) + N(0, sigma^2), i = 1,...,n, and f is a sample path of the GP(0,k). Is it then possible to construct a high-probability confidence band that traps the ground truth f using the posterior covariance and mean functions? After all, if I understood correctly, the main advantage of GP regression over kernel ridge regression is the posterior covariance.
@dheerajbaby (3 years ago)
I actually found all my questions answered in this paper, arxiv.org/pdf/0912.3995.pdf, which won the Test of Time award at ICML 2020.
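For context, the flavor of guarantee that paper (Srinivas et al., GP-UCB) proves, stated loosely and under its assumptions (a sketch, not the exact theorem): with probability at least 1 − δ, for appropriately chosen confidence multipliers β_t,
$$
\lvert f(x) - \mu_{t-1}(x) \rvert \;\le\; \beta_t^{1/2}\,\sigma_{t-1}(x) \quad \text{for all } x \text{ and all } t,
$$
so the posterior mean and standard deviation do yield valid high-probability confidence bands around the ground truth.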
@kevinshao9148 (6 months ago)
Thanks for the brilliant lecture! One confusion, if I may: at 39:18 you change the conditional probability P(y1...yn | x1...xn) based on data D to P(y1...yn, y_test | x1...xn, x_test). My questions: 1) before the test point arrives, do we already have a joint distribution over (y1...yn) given (x1...xn) based on D? 2) once the test point comes in, do we need to form another Gaussian N(mean, variance) over (y1...yn, y_test) given (x1...xn, x_test)? If so, how do we get the covariance terms between the test point and each training point? Basically, for prediction with a new x_test, what exactly are the parameters (mean and variance) of the y_test distribution? Many thanks!
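A sketch of the joint Gaussian being formed (my notation, not a quote from the lecture; the noise term σ²I appears only if label noise is modeled): every covariance entry, including those between the test point and each training point, is filled in by the kernel function k, so nothing new has to be estimated when a test point arrives,
$$
\begin{pmatrix} \mathbf{y} \\ y_* \end{pmatrix}
\sim \mathcal{N}\!\left(\mathbf{0},\;
\begin{pmatrix} K + \sigma^2 I & K_*^{\top} \\ K_* & K_{**} \end{pmatrix}\right),
\qquad K_{ij}=k(x_i,x_j),\;\; (K_*)_{j}=k(x_*,x_j),\;\; K_{**}=k(x_*,x_*).
$$
Conditioning this joint Gaussian on the observed y then gives the mean and variance of y_*; see the reply to @pratyushkumar9037 further down.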
@arihantjha4746 (3 years ago)
Since p(xi,yi;w) = p(yi|xi;w) p(xi), and during MLE and MAP we ignore p(xi) as it is independent of w, we get the likelihood function (the product over i of p(yi|xi;w)). But here, why do we simply start with P(D;w) equal to the likelihood function? Shouldn't P(D;w) be equal to the product over i of p(yi|xi;w) p(xi), where p(xi) is some arbitrary distribution (it is independent of w and no assumptions are made about it), while p(yi|xi;w) is Gaussian? Since only multiplying a Gaussian with a Gaussian gives a Gaussian, how is the answer Gaussian when p(xi) is not Gaussian? Ignoring p(xi) during MLE and MAP makes a lot of sense as it is independent of w, but why wasn't it included when writing P(D;w) in the first place? Do we just assume that, since the xi are given to us and we don't model p(xi), p(xi) is a constant for all xi? Can anyone help? Also, thank you for the lectures, Prof.
@kilianweinberger698 (3 years ago)
The trick is that P(D;w) is inside a maximization with respect to the parameters w. Because P(x_i) is independent of w, it is just a constant we can drop:
max_w P(D;w) = max_w ∏_i P(x_i, y_i; w) = max_w [∏_i P(y_i | x_i; w)] · [∏_i P(x_i)].
The last factor is a multiplicative constant that you can pull out of the maximization and drop, as it won't affect your choice of w.
@ejomaumambala5984 (4 years ago)
Great lectures! Really enjoyable. There's an important mistake at 40:20, I think? The variance is not K** K^-1 K*, as Kilian wrote, but rather K** - K* K^-1 K*.
@kilianweinberger698 (4 years ago)
Yes, good catch! Thanks for pointing this out. Luckily it is correct in the notes: www.cs.cornell.edu/courses/cs4780/2018fa/lectures/lecturenote15.html
@zvxcvxcz (3 years ago)
Really making concrete what I've known about ML for some time. There is no such thing as ML, it is all just glorified correlation :P
@mutianzhu5128 (4 years ago)
I think there is a typo at 40:18 for the variance.
@ejomaumambala5984 (4 years ago)
Yes, I agree. The variance is not K** K^-1 K*, as Kilian wrote, but rather K** - K* K^-1 K*.
@vikramnanda2833 (1 year ago)
Which course should I take to learn data science or machine learning?
@pratyushkumar9037 (4 years ago)
Professor Kilian, I don't understand how you derived mean = K* K^-1 Y and variance = K** - K* K^-1 K* for the normal distribution.
@kilianweinberger698 (4 years ago)
It is just the conditional distribution of the Gaussian (see e.g. en.wikipedia.org/wiki/Multivariate_normal_distribution#Conditional_distributions; here Sigma is our K).
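A minimal NumPy sketch of that conditional-Gaussian computation (my own illustration with an assumed RBF kernel and made-up function names, not code from the course or from GPyTorch):

import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between the rows of A and the rows of B.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2, lengthscale=1.0):
    # Posterior mean and covariance of a zero-mean GP at the test inputs.
    K = rbf_kernel(X_train, X_train, lengthscale) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_test, X_train, lengthscale)           # test-by-train block
    K_star_star = rbf_kernel(X_test, X_test, lengthscale)       # test-by-test block
    mean = K_star @ np.linalg.solve(K, y_train)                 # K* K^-1 y
    cov = K_star_star - K_star @ np.linalg.solve(K, K_star.T)   # K** - K* K^-1 K*^T
    return mean, cov

# Toy usage: fit on 8 noisy-free samples of sin(x), predict on a dense grid.
X_train = np.linspace(0, 5, 8)[:, None]
y_train = np.sin(X_train).ravel()
X_test = np.linspace(0, 5, 50)[:, None]
mu, Sigma = gp_predict(X_train, y_train, X_test)

A library such as GPyTorch wraps this same conditional-Gaussian computation, just with more careful numerics for larger problems.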
@vatsan16 (4 years ago)
"One line of Julia... two lines of Python!!" What's with all the Python hate, professor? :P
@zvxcvxcz (3 years ago)
Oh come on, two isn't so bad, do you know how many it is in assembly? :P
@gregmakov2680 (2 years ago)
Hahahah, any student who can understand this lecture is a genius :D:D:D:D Everything is mixed together :D:D So confusing.
@namlehai2737 (9 months ago)
Lots of people do, actually.
@zhongyuanchen8424 (3 years ago)
Why is the integral over w of P(y|x,w)P(w|D) equal to P(y|x,D)? Is it because P(w|D) = P(w|D,x)?
@kilianweinberger698 (3 years ago)
P(y|x,w)P(w|D) = P(y,w|x,D). If you now integrate out w, you obtain P(y|x,D). (Here x is the test point and D is the training data.) If you want to make it clearer, you can also use the intermediate step P(y|x,w) = P(y|x,w,D): you can condition on D here because y is conditionally independent of D when x and w are given. For the same reason you can write P(w|D) = P(w|D,x), as w does not depend on the test point x (it is only fitted on the training data). Hope this helps.
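The same argument written as a chain of equations (my transcription of the reply above):
$$
P(y \mid x, D) \;=\; \int P(y, w \mid x, D)\, dw
\;=\; \int P(y \mid x, w, D)\, P(w \mid x, D)\, dw
\;=\; \int P(y \mid x, w)\, P(w \mid D)\, dw,
$$
where the last step uses the two conditional independences stated above (y is independent of D given x and w, and w is independent of the test point x given D).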
@christiansetzkorn6241 (2 years ago)
Sorry, but why a correlation of 10 for the POTUS example? Correlation can only be -1 ... 1?!
@sekfook97 (3 years ago)
Just learned that they used Gaussian processes to search for the airplane in the ocean. Btw, I am from Malaysia.
@sandipua8586 (5 years ago)
Thanks for the content but please calm down, I'm getting a heart attack
@nichenjie (5 years ago)
Learning GPs is so frustrating T.T
@jzinou1779 (5 years ago)
lol
@hossein_haeri (3 years ago)
What exactly is K**? Isn't it always ones(m,m)?
@kilianweinberger698 (3 years ago)
No, it depends on the kernel function. But it is the inner product of the test point(s) with themselves.
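To make that concrete (my own example, assuming an RBF kernel with lengthscale ℓ): for m test points x_1^*, ..., x_m^*,
$$
(K_{**})_{ij} \;=\; k(x_i^*, x_j^*) \;=\; \exp\!\left(-\frac{\lVert x_i^* - x_j^* \rVert^2}{2\ell^2}\right),
$$
so the diagonal is 1 for this kernel but the off-diagonal entries generally are not, and for other kernels (e.g. linear or polynomial) even the diagonal differs from 1; K** equals ones(m,m) only in degenerate cases.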
@zhuyixue4979 (4 years ago)
aha moment: 11:15 to 11:25
@maxfine3299 (2 months ago)
The Donald Trump bits were very funny!
@busTedOaS (4 years ago)
ERRM
@bnouadam (4 years ago)
This guy has absolutely no charisma and has a controlling attitude. His tone is not fluent.
@prathikshaav9461 (4 years ago)
Just binge watching your course, I love it... Is there a link to the homework, exams, and solutions for the course? It would be helpful.
@kilianweinberger698 (4 years ago)
Past 4780 exams are here: www.dropbox.com/s/zfr5w5bxxvizmnq/Kilian past Exams.zip?dl=0
Past 4780 homeworks are here: www.dropbox.com/s/tbxnjzk5w67u0sp/Homeworks.zip?dl=0