Introduction to Gaussian process regression. Slides available at: www.cs.ubc.ca/~... Course taught in 2013 at UBC by Nando de Freitas
Comments: 160
@maratkopytjuk3490 8 years ago
Thank you, I tried to understand GPs via papers, but only you could help me build up an understanding of the idea. It is great that you took the time to explain the Gaussian distribution and the important operations! You're the best!
@MrEdnz 2 years ago
Learning a new subject via papers isn't very helpful indeed :) They expect you to already understand the basic principles of GPs. However, lectures like these, or books, start with the basic principles 💪🏻
@augustasheimbirkeland4496 2 years ago
5 minutes in and it's already better than all 3 hours of class earlier today!
@daesoolee1083 2 years ago
The best tutorial for GP among all the materials I've checked.
@sourabmangrulkar9105 4 years ago
The way you started from the basics and built on them to explain Gaussian processes is very easy to understand. Thank you :)
@life99f 2 years ago
I feel so fortunate to have found this video. It's like walking in a fog and finally being able to see things clearly.
@fuat7775 1 year ago
This is absolutely the best explanation of the Gaussian!
@SijinSheung 5 years ago
This lecture is so amazing! The hand-drawing part is really helpful for building up intuition regarding GPs. This is a life-saving video for my finals. Many thanks!
@erlendlangseth4672 6 years ago
Thanks, this helped me a lot. By the time you got to the hour mark, you had covered sufficient ground for me to finally understand Gaussian processes!
@MattyHild 5 years ago
FYI, the notation @22:05 is wrong. Since he selected an x1 to condition on, he should be computing mu_2|1, but he is computing mu_1|2.
@sarnathk1946 6 years ago
This is indeed an awesome lecture! I liked the way the complexity is slowly built up over the lecture. Thank you very much!
@ziangxu7751 3 years ago
What an amazing lecture. It is much clearer than the lectures taught at my university.
@user-oc5gk7yn6o 4 years ago
I've found so many lectures for understanding Gaussian processes. Until now, you are the only one I think who could make me understand it. Thanks a lot, man!
@KhariSecario 2 years ago
Here I am in 2021, yet your explanation is the easiest one to understand of all the sources I gathered! Thank you very much 😍
@matej6418 1 year ago
Me in 2023, still the same.
@francescocanonaco5988 5 years ago
I tried to understand GPs via blog articles, papers, and a lot of videos. Best video ever on GPs! Thank you!
@marcyaudrey6608 10 months ago
This lecture is amazing, Professor. From the bottom of my heart, I say thank you.
@akshayc113 9 years ago
Thanks a lot, Prof. Just a minor correction for the people following the lectures: you made a mistake while writing out the formulae at 22:10. You wrote out the mean and variance of P(X1|X2), whereas the diagram was about finding P(X2|X1). Since this is symmetric, you can just get them by the appropriate replacements, but just letting slightly confused people know.
@charlsmartel 8 years ago
+akshayc113 I think all that should change is the formula for the given graphs. It should read: mu_2|1 = mu_2 + sigma_21 * sigma_11^(-1) * (x_1 - mu_1). Everything else can stay the same.
@tobiaspahlberg1506 8 years ago
I think he actually meant to draw x_1 where x_2 is in the diagram. This switch would agree with the KPM formulae on the next slide.
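For reference, these are the standard Gaussian conditioning identities the thread above is correcting, written out for conditioning on an observed x_1 (partition notation follows the slides):

```latex
% Joint Gaussian: (x_1, x_2) ~ N(mu, Sigma), partitioned as
%   mu = (mu_1, mu_2),  Sigma = [[Sigma_11, Sigma_12], [Sigma_21, Sigma_22]].
% Conditioning on an observed x_1 gives
\mu_{2|1} = \mu_2 + \Sigma_{21}\,\Sigma_{11}^{-1}\,(x_1 - \mu_1),
\qquad
\Sigma_{2|1} = \Sigma_{22} - \Sigma_{21}\,\Sigma_{11}^{-1}\,\Sigma_{12}.
% Swapping the indices 1 <-> 2 gives the formulas for p(x_1 | x_2),
% which is what was actually written on the slide.
```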
@huitanmao5267 7 years ago
Very clear lectures! Thanks for making them publicly available!
@Ricky-Noll 3 years ago
One of the best videos of all time on YouTube.
@HarpreetSingh-ke2zk 2 years ago
I started learning about multivariate Gaussian processes in 2011, but it's terrible that I only got to this video as 2021 is ending. He explained things in a way that even a layperson could grasp. He first explains the meaning of the concepts, followed by an example/data, and last the theoretical representation. Typically, mathematics presenters/writers avoid using data to provide examples. I'm always on the lookout for lectures like these, where the theoretical understanding is demonstrated through examples or data. Often the concepts are not difficult to grasp, but the presenter/writer makes us go deep to unpack complex notation without providing any examples.
@dennisdoerrich3743 6 years ago
Wow, you saved my life with this genius lecture! GPs are a pretty abstract idea, and it's nice that you can walk one through them from scratch!
@Gouda_travels 2 years ago
After one hour of smooth explanation, he says "and this brings us to Gaussian processes" :)
@MB-pt8hi 6 years ago
Very good lecture, full of intuitive examples which deepen the understanding. Thanks a lot!
@DistortedV12 5 years ago
Finally! This is gold for beginners like me! Thank you Nando!! Saw you on the committee at the MIT defense, great questions!
@LynN-he7he 4 years ago
Thank you, thank you, thank you!! I was stuck on a homework problem, still figuring out what it means to be a testing vs. training data set and how they play a role in the Gaussian kernel function. I was stuck for the last 3 days, and your video from about the 45 min to 1 hour mark made the lightbulb go off!
@DanielRodriguez-or7sk 4 years ago
Thank you so much, Professor de Freitas. What a clear explanation of GPs!
@jx4864 2 years ago
After 30 minutes, I am sure that he is a top-10 teacher in my life.
@bluestar2253 3 years ago
One of the best teachers in ML out there!
@pradeepprabakarravindran615 11 years ago
Thank you! Your videos are so much more awesome than any ML lecture series I have seen so far! -- Grad Student from CMU
@dwhdai 4 years ago
Wow, this is probably the best lecture I've ever watched. On any topic.
@richardbrown2565 4 years ago
Great explanation. I wish that the title mentioned that it was part one of two, so that I would have known it was going to take twice as long.
@bottomupengineering 6 months ago
Great explanation and pace. Very legit.
@oliverxie9559 3 years ago
Really great video for reading Gaussian Processes for Machine Learning!
@sanjanavijayshankar5508 4 years ago
Brilliant lecture. One could not have taught GPs better.
@xingtongliu1636 5 years ago
This becomes very easy to understand with your thorough explanation. Thank you very much!
@jingjingjiang6403 6 years ago
Thank you for sharing this wonderful lecture! Gaussian processes were so confusing when they were taught at my university. Now it is crystal clear!
@emrecck 3 years ago
That was a great lecture, Mr. de Freitas, thank you very, very much! I watched it to study for my Computational Biology course, and it really helped.
@woo-jinchokim6441 7 years ago
By far the best-structured lecture on Gaussian processes. Love it :D
@adrianaculebro9176 5 years ago
Finally understood how this idea is explained and applied using mathematical language.
@turkey343434 4 years ago
Gaussian processes start at 1:01:15
@hohinng8644 1 year ago
Pin this.
@taygunkekec9616 10 years ago
Very clearly explained. The dependencies for learning the framework are concisely and incrementally given, while the details that make the framework harder to understand are carefully avoided (you will understand what I mean if you try to dig through Rasmussen's book on GPs).
@jinghuizhong 9 years ago
The lecture is quite clear, and it gave me insight into the key ideas of Gaussian processes. Many thanks!
@sak02010 5 years ago
Thanks a lot, Prof. Very clean and easy-to-understand explanation.
@dieg3005 8 years ago
Thank you very much, Prof. de Freitas, excellent introduction.
@austenscruggs8726 2 years ago
This is an amazing video! Clear and digestible.
@malharjajoo7393 4 years ago
Basic summary of the lecture video:
1) Recap of the multivariate normal/Gaussian distribution (MVN), with some info on conditional probability.
2) Some information on how sampling can be done from a univariate/multivariate Gaussian distribution (see the sketch below).
3) 39:00 - Introduction to Gaussian processes (GPs).
It is important to note that a GP is considered a Bayesian non-parametric approach/model.
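As a minimal sketch of point 2: the standard trick covered in the lecture is to transform standard normal draws with a Cholesky factor of the covariance (NumPy is used here purely for illustration):

```python
import numpy as np

mu = np.array([0.0, 1.0])          # mean vector
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])     # covariance matrix
L = np.linalg.cholesky(Sigma)      # Sigma = L @ L.T
z = np.random.randn(2)             # z ~ N(0, I)
x = mu + L @ z                     # x ~ N(mu, Sigma)
```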
@quantum01010101 4 years ago
That is clear and flows naturally. Thank you very much.
@Jacob011 10 years ago
Absolutely superb lecture! Everything is clearly explained, even with source code.
@TheTacticalDood 10 hours ago
This is amazing. Thanks so much!
@saminebagheri4175 7 years ago
Amazing lecture.
@darthyzhu5767 8 years ago
Really clear and comprehensive. Thanks so much.
@kiliandervaux6675 3 years ago
Thank you so much for this amazing lecture. I wanted to applaud at the end, but I realised I was in front of my computer.
@pattiknuth4822 3 years ago
Extremely good lecture. Well done.
@maudentable 3 years ago
A master doing his work.
@malharjajoo7393 4 years ago
1:04:08 - It would be good to emphasize that the test set is actually used for generating the prior ... I had a hard time making sense of it because the test set is usually provided separately (but in this case we are generating it!!)
@niqodea 5 years ago
BEAST MODE teaching
@pankayarajpathmanathan7009 7 years ago
The best lecture on Gaussian processes.
@heyjianjing 3 years ago
Around 56:00, I don't think we should omit the conditioning sign on mu*; it is conditioned on f: E(f*|f), not E(f*). Otherwise, the expected value of f* alone would just be zero.
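For reference, with a zero-mean prior the quantities at that point of the lecture are the conditional (posterior) moments of the noise-free GP:

```latex
% Noise-free GP regression with a zero-mean prior:
\mathbb{E}[f_* \mid f] = K_*^{\top} K^{-1} f,
\qquad
\operatorname{cov}(f_* \mid f) = K_{**} - K_*^{\top} K^{-1} K_*.
% The unconditional prior mean E[f_*] is indeed 0.
```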
@dracleirbag5838 2 years ago
I like the way you teach.
@sumantamukherjee1952 9 years ago
Lucidly explained. Great video.
@crestz1 4 months ago
Amazing lecturer.
@Raven-bi3xn 3 years ago
Am I correct to think that the "f" notation at 30:30 is not the same "f" as at 1:01:30? In the latter case, each f consists of all 50 of the f distributions that are exemplified in the former case? If that understanding is correct, then in sampling from the GP, each sample is a 50-by-1 vector from the 50-D multivariate Gaussian distribution. This 50-by-1 vector is what Dr. Nando refers to as a "distribution over functions". In other words, given the definition of a stochastic process as "indexed random variables", each random variable of the GP is drawn from a multivariate Gaussian distribution. In that viewpoint, each "indexed" random variable is a function at 1:01:30. This lecture from 2013 is truly an amazing resource.
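That reading matches the lecture: each draw from the 50-dimensional Gaussian is one "function" evaluated at the 50 test inputs. A minimal sketch of this sampling, assuming the squared-exponential kernel used in the lecture's examples:

```python
import numpy as np

x = np.linspace(-5, 5, 50)                        # 50 test inputs
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2)   # squared-exponential kernel matrix
L = np.linalg.cholesky(K + 1e-8 * np.eye(50))     # small jitter for numerical stability
f = L @ np.random.randn(50, 3)                    # three draws; each column is one "function"
```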
@jhn-nt 2 years ago
Great lecture!
@bingtingwu8620 1 year ago
Thanks!!! Easy to understand 👍👍👍
@huuducdo143 7 months ago
Hello Nando, thank you for your excellent course. Following the bell example, the mu_1|2 and sigma_1|2 you wrote should be for the case where we are given X2 = x2 and try to find the distribution of X1 given X2 = x2. Am I correct? Other interpretations are welcome. Thanks a lot!
@terrynichols-noaafederal9537 6 months ago
For the noisy GP case, we assume the noise covariance is sigma^2 times the identity matrix, which assumes i.i.d. noise. What if the noise is correlated? Can we incorporate the true covariance matrix?
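For what it's worth, nothing in the standard derivation requires the noise covariance to be diagonal. Assuming a known noise covariance Σ_n (a generalization not covered in the lecture itself), σ²I is simply replaced by Σ_n in the predictive equations:

```latex
% Observations with a general Gaussian noise covariance:
y \sim \mathcal{N}(f, \Sigma_n),
\qquad
\mathbb{E}[f_* \mid y] = K_*^{\top}\,(K + \Sigma_n)^{-1}\, y.
```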
@chenqu773 1 year ago
It looks like the notation on the axis of the graph on the right side of the presentation, at around 20:39, is not correct. It should probably be x1 on the x-axis. I.e., it would make sense if μ_1|2 referred to the mean of variable x1, rather than x2, judging from the equation shown on the next slide.
@philwebb59 2 years ago
1:05:58 Analog computers existed way before the first digital circuits. A WWII-vintage electrical analog computer, for example, consisted of banks of op amps configured as integrators and differentiators.
@EbrahimLPatel 8 years ago
Excellent introduction to the subject! Thank you :)
@rsilveira79 6 years ago
Awesome lecture, very well explained!
@kevinzhang4692 2 years ago
Thank you! It is a wonderful lecture.
@JaysonSunshine 7 years ago
Correct me if I am wrong, but isn't the whole cluster of examples starting at 36:35 flawed? Nando shows three points in a single dimension: x1, x2, x3, and their corresponding f-values: f1, f2, f3. It seems these points are three samples from a univariate normal distribution with a scalar variance, rather than what he shows, i.e. a vector from R^3 with a 3x3 covariance matrix.
@JaysonSunshine 7 years ago
On further reflection, perhaps you're doing a non-parametric approach in which you assign a Gaussian per point... Since the distribution you're forming is empirical, it seems it would be more precise to say the mean vector of the f-distribution is [f1, f2, f3], yes?
@DESYAAR 6 years ago
I agree. That took me a while as well.
@afish3356 4 years ago
An extremely good lecture! Thank you for recording this :) :)
@user-ym7rp9pf6y 3 years ago
Awesome explanation. Thanks!
@swarnendusekharghosh9539 3 years ago
Thank you, sir, for a clear explanation.
@kambizrakhshan3248 3 years ago
Thank you!
@tospines 5 years ago
I think I got the essence of GPs, but what I cannot understand is why we take the mean to be 0 when clearly it is not 0. I mean, if we suppose that f* is distributed as a Gaussian with mean 0, the expected value of f* must be 0. Could anyone explain this to me?
@oskarkeurulainen6414 5 years ago
0 is only the mean of the prior for f*. When we know the values of other variables that are correlated with f*, we actually want to consider the mean of f* conditioned on those observed variables. Compare with the ellipse at the beginning with x1 and x2: both have mean 0, but if we observe one of them to be positive, the other one is also likely to be positive and thus has a positive conditional expectation.
@GiiWiiDii 4 years ago
23:56 That would be nice, thanks!
@TheSourav77 3 years ago
Lol
@ratfuk9340 5 months ago
Thank you for this.
@yunlongsong7618 4 years ago
Great lecture. Thanks.
@user-nr3ej2ud5j 3 months ago
Isn't the right-side formula at 22:19 for x1|x2, not for x2|x1?
@ho4040 2 years ago
Holy shit... what a good lecture.
@redberries8039 3 years ago
This was a good explanation.
@dhruv385 5 years ago
Wow! Great lecture!
@katerinapapadaki4810 5 years ago
Thanks for the helpful lecture! The only thing I want to point out is that if you put labels on the axes of your plots, it would be more helpful for the listener to understand from the beginning what you are describing.
@SimoneIovane 5 years ago
Great lesson! Thank you!
@xinking2644 2 years ago
Is there a mistake at 21:58? It should be conditioned on x1 instead of x2?
@deephazarika2259 6 years ago
When estimating 'f', why is each point treated as a separate dimension and not as different points in the same dimension?
@malekebadi9805 4 years ago
As far as I understood, Gaussian process regression serves two purposes: refining the prior (and posterior) and predicting the response for new points. If you collect new observations for the same points, you are refining the posterior, and if you extend your new point to a new dimension, you're predicting. In the former case, the confidence interval between two points remains relatively fat. Querying points in new dimensions (given that you can practically do that) squeezes the confidence interval. Theoretically, it doesn't matter, I guess. Think of an experiment in which you keep x the same in every iteration but read different y's. Think of another experiment in which your x values change from one iteration to another and you receive y's. From the GP point of view, both are the same.
@homtom2 9 years ago
This helped me so much! Thanks!
@brianstampe7056 4 years ago
Very helpful. Thanks!
@JadtheProdigy 5 years ago
Can someone explain why f is distributed with mean 0?
@yousufhussain9530 8 years ago
Amazing lecture!
@haunted2097 10 years ago
Well done! Very intuitive!
@itai19 4 years ago
Thanks for the lecture. I have a problem with the discussion around 11:00 - from my understanding, the spherical case does represent some correlation between X and Y, as X is a sub-component of the max-radius calculation, meaning a larger x leads to smaller possible values of y (or at least lower probability for higher values). In other words, the covariance can be approximated by something like E[x*sqrt(r^2-x^2)]. Are we saying that ends up being zero, i.e. that correlation is unable to express such a dependency? My intuition currently expects a square to express 0 correlation.
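A quick Monte Carlo check of this point: for a uniform distribution on a disk, that expectation is zero by symmetry, so X and Y are uncorrelated yet still dependent, and a covariance matrix alone cannot express that kind of dependency. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
# Rejection-sample points uniformly on the unit disk
pts = rng.uniform(-1, 1, size=(200000, 2))
pts = pts[(pts**2).sum(axis=1) <= 1.0]
x, y = pts[:, 0], pts[:, 1]

print(np.cov(x, y)[0, 1])        # ~0: X and Y are uncorrelated
print(np.cov(x**2, y**2)[0, 1])  # ~-0.02: large x^2 goes with small y^2, so they are dependent
```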