Gibbs sampling

110,335 views

Jarad Niemi

Comments: 45
@jacobschultz7201 5 years ago
Seeing the algorithm "walk" around the plane really made it click for me. Also, this made clear why we need to find every single conditional first. Thank you for the great work!
@TheDebidatta 10 years ago
This is the clearest explanation of Gibbs sampling out there. Thanks.
@TheAIEpiphany 2 years ago
The example with the 2D Gaussian was invaluable to ground my understanding - thank you!
@jaradniemi 2 years ago
Glad it was helpful!
@tg6452 6 years ago
Clear explanation, clear voice, clear slides. Good Job! Thanks!
@jessicas2978 6 years ago
Amazing video! I was struggling with it and now I understand. Thank you so much!
@jiagengliu 7 years ago
Just wondering: would randomizing the order of sampling (i.e., instead of going \theta_1, ..., \theta_K in sequence, using a random permutation of 1..K) help here? Is there a particular reason why we sample in this order?
@atjebb 10 years ago
Thanks for the great and clear video! One question though: starting at 3:11, why do the normal conditional distributions have a mean of rho*theta and a variance of [1 - rho^2]? Where does this parameterization come from?
@hussein-m6x 10 years ago
An explanation can be found in the conditional-distributions section of the multivariate normal article on Wikipedia.
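For reference, the bivariate-normal conditional from that Wikipedia section (transcribed here for convenience; not part of the video) is:

```latex
\theta_1 \mid \theta_2 = t \;\sim\;
\mathcal{N}\!\left(
  \mu_1 + \rho\,\frac{\sigma_1}{\sigma_2}\,(t - \mu_2),\;
  (1 - \rho^2)\,\sigma_1^2
\right)
```

In the video's standard case (\mu_1 = \mu_2 = 0, \sigma_1 = \sigma_2 = 1) this reduces to N(\rho t, 1 - \rho^2), which matches the mean rho*theta and variance [1 - rho^2] on the slides.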
@atjebb 10 years ago
Thanks for the tip!
@bunnysm 9 years ago
Thank you very much! This is nicely and simply explained.
@ccuuttww 4 years ago
I think the vector theta contains theta_1 and theta_2, and each has a marginal distribution of the bivariate normal. When we plot a histogram of the theta_1 data, what does it tell us? Is the mean of theta_1 the value we want to estimate?
@jaradniemi 4 years ago
The histogram of theta1 draws (not data) is an approximation to the marginal distribution for theta1. The mean of the theta1 draws is an estimate of the mean of the distribution for theta1.
@ccuuttww 4 years ago
@@jaradniemi So the mean of (theta1, theta2) gives us the center? I'm still not really sure what the marginal distribution is trying to tell us. In the normal case we can compute the mean of the marginal density to estimate mu and the standard deviation, but in the beta-binomial case the parameter Y is absent; given only N=20, a=5, b=5, p=0.5, the mean of the marginal distribution of Y is 10 with p=0.5. Does that mean the beta-binomial posterior distribution equals Beta(10+5, 20-10+5)?
@jaradniemi 4 years ago
@@ccuuttww Gibbs sampling is a methodology to sample from arbitrary target distributions by sampling from conditional distributions. When the target distribution is a known distribution, e.g. beta, then you do not need Gibbs sampling. Similarly Gibbs sampling is not necessary for a bivariate normal since we know everything about the distribution given the mean vector and covariance matrix.
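As a concrete illustration of this reply (my own sketch, not code from the video; the function name and defaults are mine), here is a Gibbs sampler for the standard bivariate normal target discussed in the video, alternating draws from the two full conditionals:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=10_000, theta2_init=0.0, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Alternates draws from the full conditionals
        theta1 | theta2 ~ N(rho * theta2, 1 - rho^2)
        theta2 | theta1 ~ N(rho * theta1, 1 - rho^2)
    """
    rng = np.random.default_rng(seed)
    sd = np.sqrt(1.0 - rho**2)        # conditional standard deviation
    draws = np.empty((n_iter, 2))
    theta2 = theta2_init
    for i in range(n_iter):
        theta1 = rng.normal(rho * theta2, sd)   # draw theta1 | theta2
        theta2 = rng.normal(rho * theta1, sd)   # draw theta2 | theta1
        draws[i] = theta1, theta2
    return draws

draws = gibbs_bivariate_normal(rho=0.8)
print(draws.mean(axis=0))            # both sample means should be near 0
print(np.corrcoef(draws.T)[0, 1])    # should be near 0.8
```

As the reply notes, Gibbs is unnecessary here since the bivariate normal is fully known; the point is only that the draws recover the known marginal means and correlation.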
@jacobmoore8734 4 years ago
Btw, how did you derive the conditional distributions from the joint? When I wrote out the full analytic form of the joint PDF divided by one of the single variable PDFs, the equation did not simplify easily.
@jaradniemi 4 years ago
You certainly can do it, but it is a bit of work. Fortunately many people have already done it and the formula is one of those formulas you are expected to memorize in a STAT PhD program. The formulas can be found on wikipedia under "conditional distributions of a multivariate normal": en.wikipedia.org/wiki/Multivariate_normal_distribution#Bivariate_case_2
@jacobmoore8734 4 years ago
@@jaradniemi Oh I see, thanks! I've been trying to implement various MCMC samplers from scratch to get a better feel for their inner workings. I find Metropolis and MH a bit more accessible, since all you need are a proposal and a target distribution. Gibbs seems fairly simple once you have the conditional distributions, but automating that process seems like a serious challenge from an implementation perspective.
@jaradniemi 4 years ago
@@jacobmoore8734 A number of software packages have already done this, e.g. BUGS, JAGS. They basically have implemented all the known conditional distributions. For distributions that are not known, they generally use slice sampling. So that might be another MCMC sampler that you want to look into.
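For anyone curious about the slice sampling mentioned here, a minimal univariate sketch (my own illustration of Neal-style stepping-out and shrinkage; not code from BUGS or JAGS):

```python
import numpy as np

def slice_sample(logf, x0, w=1.0, n_iter=5000, seed=0):
    """Univariate slice sampler with stepping-out and shrinkage.

    logf: log of an (unnormalized) target density.
    w: initial width of the stepping-out interval.
    """
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(n_iter)
    for i in range(n_iter):
        # Draw a uniform "height" under the density at x (on the log scale).
        logy = logf(x) + np.log(rng.uniform())
        # Step out to find an interval [L, R] containing the slice.
        L = x - w * rng.uniform()
        R = L + w
        while logf(L) > logy:
            L -= w
        while logf(R) > logy:
            R += w
        # Shrink the interval until a point inside the slice is found.
        while True:
            x1 = rng.uniform(L, R)
            if logf(x1) > logy:
                x = x1
                break
            if x1 < x:
                L = x1
            else:
                R = x1
        out[i] = x
    return out

# Demo on a standard normal target (log density up to a constant).
samples = slice_sample(lambda t: -0.5 * t**2, x0=0.0)
```

The appeal in this context is exactly what the reply says: it needs only the (unnormalized) conditional density, not its known distributional form.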
@ZbiggySmall 5 years ago
Thanks for the video. I understood the concept, but I am not an expert in probability. I know what conditional probability is, but I am still struggling to figure out what it means to sample Theta_1 given Theta_2, ..., Theta_K, and I would not be able to explain why it works if someone asked me. In the example with p(Theta) ~ N(0, Sigma), Theta_1 ~ N(rho x Theta_2(0), [1 - rho^2]). Here the covariance matrix became the variance [1 - rho^2] and the mean became rho x Theta_2(0). Where does this come from?
@jaradniemi 5 years ago
This requires knowledge of the multivariate normal distribution and conditional distributions derived from it. See here: en.wikipedia.org/wiki/Multivariate_normal_distribution#Bivariate_case_2
@GooseGood2024 8 years ago
Gibbs sampling is just a special case of MH.
@UlrichArmel 5 years ago
Are all the samples accepted, in contrast to the Metropolis-Hastings algorithm? Cool video.
@jaradniemi 5 years ago
Yes. Although Gibbs sampling can be written as a Metropolis-Hastings algorithm with an acceptance probability of 1.
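To sketch why (my notation, not from the video): if the MH proposal for component j is its own full conditional, q(\theta_j' \mid \theta) = p(\theta_j' \mid \theta_{-j}), then the acceptance ratio is

```latex
\frac{p(\theta_j', \theta_{-j})\, q(\theta_j \mid \theta')}
     {p(\theta_j, \theta_{-j})\, q(\theta_j' \mid \theta)}
= \frac{p(\theta_j' \mid \theta_{-j})\, p(\theta_{-j})\, p(\theta_j \mid \theta_{-j})}
       {p(\theta_j \mid \theta_{-j})\, p(\theta_{-j})\, p(\theta_j' \mid \theta_{-j})}
= 1
```

so every proposal is accepted.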
@Xnaarkhoo 10 years ago
I have seen papers and code that use more complicated conditional distributions; e.g., sometimes parts of the conditionals involve a gamma distribution. Can you comment on how to build a more advanced Gibbs sampler?
@jaradniemi 10 years ago
In the context used here, the Gibbs sampler is determined from the target joint distribution. Here the joint distribution is a multivariate normal and thus its conditional distributions are normal. For another joint distribution, it is entirely possible that the conditional distributions are gamma distributions.
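As an illustration of conditionals that turn out to be gamma (a sketch with hypothetical priors I chose, not an example from the video): for data y_i ~ N(mu, 1/tau) with a normal prior on mu and a gamma prior on the precision tau, the full conditionals are normal (for mu) and gamma (for tau):

```python
import numpy as np

def gibbs_normal_model(y, n_iter=5000, seed=0):
    """Gibbs sampler for y_i ~ N(mu, 1/tau) with conjugate priors.

    Hypothetical priors: mu ~ N(0, 100^2), tau ~ Gamma(0.01, rate=0.01).
    The full conditionals are then normal (mu) and gamma (tau).
    """
    rng = np.random.default_rng(seed)
    n, ybar = len(y), float(np.mean(y))
    m0, v0 = 0.0, 100.0**2        # prior mean / variance for mu
    a0, b0 = 0.01, 0.01           # prior shape / rate for tau
    mu, tau = ybar, 1.0
    draws = np.empty((n_iter, 2))
    for i in range(n_iter):
        # mu | tau, y is normal: precision-weighted average of prior and data.
        v = 1.0 / (1.0 / v0 + n * tau)
        m = v * (m0 / v0 + tau * n * ybar)
        mu = rng.normal(m, np.sqrt(v))
        # tau | mu, y is gamma with updated shape and rate.
        a = a0 + n / 2.0
        b = b0 + 0.5 * float(np.sum((y - mu) ** 2))
        tau = rng.gamma(a, 1.0 / b)   # numpy's gamma takes scale = 1/rate
        draws[i] = mu, tau
    return draws

# Simulated data: true mu = 5, true sd = 2 (so true tau = 0.25).
y = np.random.default_rng(1).normal(5.0, 2.0, size=200)
draws = gibbs_normal_model(y)
```

The posterior means of the mu and tau draws should land near 5 and 0.25 for this simulated dataset.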
@philipruijten2639 10 years ago
Great explanation! Not to nitpick, but from 2:14 shouldn't the f(\theta) be p(\theta), or am I looking at it the wrong way? If not, what's the f? Nonetheless, a really awesome explanation! Thanks!
@jaradniemi 9 years ago
Philip Ruijten You are correct, it should be p(\theta).
@shubhamtoshniwal2221 8 years ago
Thanks for such a nice video!!
@mingyanchu6308 9 years ago
Very good explanation!! Really helpful, thanks :)
@SrikantGadicherla 9 years ago
If possible, can you please share a link to the slides? Thanks!
@shyamumich 10 years ago
Very useful video. Thanks a lot.
@weixuanli163 11 years ago
Nice lecture. Pretty clear explanation.
@Carutsu 10 years ago
I'm a bit too green on this, so sorry, but how did you construct the conditioning? I.e., how did you go from theta_1|theta_2 to N(rho*theta_2, 1 - rho^2)? EDIT: should've researched before asking; this seems like a solved problem, just plug in the numbers.
@somalichaterji 9 years ago
+Carutsu The more general derivation of Y|X=x is given in the following article: onlinecourses.science.psu.edu/stat414/node/118 Here \mu's are both 0 and \sigma's are both 1.
@boshranabaei8283 11 years ago
Thanks a lot. Very good explanation.
@jacobmoore8734 4 years ago
Anyone got a quick python function for this?
@jaradniemi 4 years ago
You might need to be more specific about what "this" is.
@jacobmoore8734 4 years ago
@@jaradniemi Sorry, I see how that could've been a bit too vague. Trying to recreate your work using python: stackoverflow.com/questions/62348946/gibbs-sampler-fails-to-converge
@supermarcio_ 9 years ago
Thank you for this (:
@Clover-ft4vx 10 years ago
awesome :)
@tadmeri 11 years ago
very helpful!
@sa1dana 11 years ago
Thanks!
@pooya97 10 years ago
Thanks!
@daryoushmehrtash7601 9 years ago
Thanks.