This is great content! Thanks for sharing this valuable information for those of us who cannot attend the GP summer school! I would really appreciate it if someone could reference some sources that elaborate on the different techniques for validating posterior GP models: for example, using mean squared error to measure the accuracy of the mean, counting how many new points fall inside the 95% confidence interval to check whether the posterior GP's confidence intervals make sense, or checking the predictive distribution by normalising the residuals.
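(In case it helps anyone, here is a minimal sketch of those three diagnostics, assuming you already have the posterior mean mu and predictive standard deviation sigma at held-out points; the function name validate_gp is my own, not from the lecture.)

import numpy as np
from scipy.stats import norm

def validate_gp(y_true, mu, sigma):
    # Diagnostics for a GP posterior evaluated at held-out points.
    rmse = np.sqrt(np.mean((y_true - mu) ** 2))       # accuracy of the posterior mean
    z = (y_true - mu) / sigma                         # normalized residuals; ~ N(0, 1) if calibrated
    coverage = np.mean(np.abs(z) <= norm.ppf(0.975))  # fraction inside the 95% interval (ideally ~0.95)
    nlpd = -np.mean(norm.logpdf(y_true, loc=mu, scale=sigma))  # negative log predictive density
    return rmse, coverage, nlpd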
@user-nq4ct9xf7y · 3 months ago
Would have been good to see a clear presentation of the posterior mean calculation and the NLL calculation *without* using inducing points.
@mekersemito · 7 months ago
Can anyone say something about the kernel k(x, x')? What do x and x' mean here? I thought they were two inputs of a random variable that gives a value, but then I saw something like k(x, x') = x^T x'. Does it mean that x is the observed points and x' is the point for prediction?
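(To illustrate: x and x' are just any two inputs to the kernel, and k(x, x') = x^T x' is the linear kernel; at prediction time the same function is evaluated between training and test inputs. A minimal sketch, with function names of my own choosing:)

import numpy as np

def linear_kernel(x, xp):
    # k(x, x') = x^T x': the linear kernel; x and x' are any two input vectors
    return x @ xp

def rbf_kernel(x, xp, lengthscale=1.0):
    # squared-exponential kernel: similarity decays with the distance between x and x'
    return np.exp(-np.sum((x - xp) ** 2) / (2 * lengthscale ** 2))

# Both arguments play the same role; for the posterior you evaluate the kernel
# between every pair of training and test inputs, e.g. K_star[i, j] = k(x_train[i], x_test[j]).
x, xp = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(linear_kernel(x, xp), rbf_kernel(x, xp))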
@charlesity · 7 months ago
Arguably the best presentation on this subject.
@mahdibahrampouri6627 · 7 months ago
Such a great presentation. I wish I could see the rest of it.
@bencavus · 9 months ago
Thank you!
@be1tube · 11 months ago
I loved learning that a diagonal noise term can help with ill-conditioned matrix inversions.
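(A minimal sketch of that trick, as I understand it from the lecture: add a small diagonal term, either the noise variance or a tiny "jitter", before factorizing the kernel matrix.)

import numpy as np

X = np.random.rand(50, 1)
K = np.exp(-0.5 * (X - X.T) ** 2 / 0.1 ** 2)   # RBF Gram matrix; near-duplicate rows for close inputs
# np.linalg.cholesky(K) can fail here because K is ill-conditioned
L = np.linalg.cholesky(K + 1e-6 * np.eye(50))  # diagonal jitter restores positive definiteness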
@joe_hoeller_chicago · 1 year ago
Great video on causality!
@WahranRai · 1 year ago
If you were targeting an international audience, you should articulate and reduce your speed to make it easier to understand!
@matej6418 · 1 year ago
Can your model deal with exogenous control variables? They're often denoted u(t) in the literature.
@bryanshi3774 · 1 year ago
Very good introduction to GPs
@pariseuselain1759 · 1 year ago
Where are the slides, please? 🥲
@franard4547 · 1 year ago
I really appreciate this. I've been studying GPs with a lot of confusion, and this was a light for me.
@GreenFlyter · 1 year ago
Thanks!!!
@jonathancangelosi2439 · 1 year ago
I appreciate how thorough this video was. Many tutorials on GPs tend to handwave a lot of the mathematical details of Gaussians and use sloppy notation (which is a huge problem with machine learning education in general, in my opinion).
@l.yans47 · 2 years ago
20:32
@khanwaqar7703 · 2 years ago
It's amazing. May I contact this professor with some questions?
@rudolfreiter5217 · 3 years ago
Great talk! Thank you
@hosseinrezaie7958 · 3 years ago
Very, very nice. Thanks, Dr!
@charilaosmylonas5046 · 3 years ago
Amazing and insightful presentation! Thanks for publicly sharing this!
@origamitraveler7425 · 3 years ago
Very important topic, thanks for the talk
@origamitraveler7425 · 3 years ago
Woah! Great introduction!
@origamitraveler7425 · 3 years ago
Great talk! The first 30 minutes really helped ease into the topic.
@eduardomedina5081 · 3 years ago
Nice explanation! Very useful for my thesis :)
@AAAE2013 · 3 years ago
Thanks for the nice explanation.
@rohannuttall2577 · 3 years ago
Starts at 4:54
@pattiknuth4822 · 3 years ago
Would be nice if they picked a speaker who could speak English properly. Very hard to understand (and I'm a native English speaker).
@prashantmdgl9 · 3 years ago
I don't find anything wrong with the diction of the speaker. It seems you haven't worked in an international environment.
@diegoacostacoden8704 · 3 years ago
Haha, he doesn't know what he's saying
@ceskale · 2 years ago
It's more about the quality of the audio; the English is pretty good.
@AntifachoOi · 4 years ago
A really good complement to Chapter 3 of Rasmussen & Williams' Gaussian Processes for Machine Learning, which is quite involved.
@microndiamondjenkins566 · 4 years ago
I don't see the slides on the website; the speaker says they are there...
@mohsenvazirizade6334 · 4 years ago
Thank you so much for this wonderful video. In most of your figures, you have about 10 different colors moving along the x-axis. On each slice (a vertical line at x = x_j, let's say j is 5) we have 10 points in 10 different colors that are normally distributed, while being correlated, through a kernel, with the 10 points at each x_i {i = 0, 1, ..., 5, ..., n}, let's say n is 20. My question is: how do you generate these points? In total we have 10 (colors) × 20 (n) = 200 points, which have to satisfy two conditions: 1) being normally distributed at each slice, and 2) following the correlation given by the kernel. Thank you
@shankyxyz · 5 months ago
The lines are sampled independently. Each line is generated by randomly sampling a multivariate normal distribution whose dimension equals the number of points on the line, so the lines have no relation to each other.
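(A minimal sketch of that recipe, in case it helps: draw z ~ N(0, I) and transform it with the Cholesky factor of the kernel matrix, once per line.)

import numpy as np

x = np.linspace(0, 5, 100)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)  # RBF kernel on the x-grid
L = np.linalg.cholesky(K + 1e-6 * np.eye(len(x)))  # jitter for numerical stability
samples = L @ np.random.randn(len(x), 10)          # 10 independent draws: one curve per color
# Each column is one line: marginally Gaussian at every slice, correlated across x via K.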
@yanhongzhao3141 · 4 years ago
The Boris Johnson comment is gold :))))
@hossanatwino · 4 years ago
Thank you for these classes, very helpful. And thanks, probably, to COVID for enabling them to go to YouTube :-)
@miguelbatista9493 · 4 years ago
Great talk. It's not easy to find structured material on this topic.
@MCPEStuff · 4 years ago
Cool!
@sourabmangrulkar9105 · 4 years ago
Great lecture. Thank you :)
@klingonxxxxxx · 5 years ago
My poor English + the teacher's speech defects + strict and fast UK English = I'm not able to enjoy the lesson :(
@nicolassoncini2266 · 5 years ago
A great introduction to GPs; it's concise and very visual. Kudos to you, Dr Wilkinson! Thanks for uploading these. I hope to attend one someday :D