I'm paying thousands of dollars for a uni course and come here to actually learn what is going on. Thanks!
@NeverHadMakingsOfAVarsityAthle 8 months ago
Fantastic explanation, thank you so much! I failed in understanding so many other explanations, but yours really made it click for me:)
@tranle5614 a year ago
Awesome explanation. Thank you so much, Dr. Lambert.
@AshutoshRaj 9 months ago
🎯 Key Takeaways for quick navigation:
00:00 *Predicting new data*
01:21 *Sampling procedure steps*
10:51 *Dominant uncertainty source*
@engenhariaquimica6590 a year ago
Awesome!!! Thanks a lot for such valuable information, and for the clear explanation!
@kiranskamble 7 months ago
Excellent Ben! Thank you!
@mirotivo 3 years ago
It's a bit confusing. In the video you are trying to come up with an approximation of the posterior, given the sample data, using sampling methods. But you mentioned that the left plot is the beta distribution, which is already the posterior. What are we trying to approximate, then, and how exactly are the samples drawn?
@budaejjigae-o9v 9 months ago
Thanks for the content. I guess here we are implicitly assuming the predicted value $\tilde{x}_{i}$ does not depend on the data $x$?
@gregoryhall9276 5 years ago
I'm a little confused about how the sampling of the posterior distribution is done. Looking at the Mathematica simulation, I didn't see any samples taken from the right side of the Beta(3, 9). Is the sampling restricted somehow to only a portion of the posterior distribution? Or are those samples discarded because they have no effect on the marginal?
@jimip6c12 4 years ago
The chance of a particular theta being selected depends on the probability density of the posterior distribution. Because the right side of the Beta(3, 9) has a very low probability density, it's very unlikely to be selected (sampled).
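The two-step procedure described above can be sketched in a few lines of NumPy (a minimal illustration only: the Beta(3, 9) posterior matches the video, but the batch of 10 new binomial trials is a hypothetical choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: draw theta from the posterior. Draws naturally land where the
# density is high, so the right tail of Beta(3, 9) is hit almost never --
# no explicit restriction or discarding is needed.
n_draws = 10_000
theta = rng.beta(3, 9, size=n_draws)

# Step 2: for each theta, draw a new observation from the likelihood
# (here, hypothetically, a fresh batch of 10 Bernoulli trials).
x_new = rng.binomial(n=10, p=theta)

print(theta.mean())  # close to the posterior mean 3 / (3 + 9) = 0.25
```

The histogram of `x_new` is then the (approximate) posterior predictive distribution, which spreads wider than plugging in a single point estimate of theta would, because it carries the parameter uncertainty along.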
@GuruprakashAcademy 2 years ago
Thanks Ben, it is a nice video. I am trying to simulate the posterior predictive distribution for an NHPP. I have an expression for P(X tilde | alpha, beta) * P(alpha, beta | X). Can you please help with how I can simulate X tilde using MCMC in R or WinBUGS? Thanks
@abhinavtyagi7231 6 years ago
Really great work; thank you, sir, for all the videos. When will the solution manual for your book be available?
@SpartacanUsuals 6 years ago
Hi, thanks for your comment. It should be available ASAP on the book website (waiting on publisher). If you email me on Ben.c.lambert@gmail.com, however, I can share it with you. Best, Ben
@jacobschultz7201 4 years ago
Very cool video! So if our posterior were not conjugate and were instead approximated using a Gibbs sampler, could we do something similar? I'm imagining randomly selecting a Gibbs iteration (excluding burn-in) and recording that vector of parameters as a sample from the posterior. Plug these parameters into the likelihood, sample, repeat. It seems especially important to sample the entire vector at once, since the marginal posteriors might not be independent. Sound reasonable?
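The procedure described in that comment can be sketched as follows. Everything here is a hypothetical stand-in: the normal model and the simulated "Gibbs output" array simply play the role of real post-burn-in draws of a joint parameter vector:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for stored Gibbs output: joint draws of (mu, sigma) for a
# hypothetical normal model, already with burn-in discarded.
gibbs_draws = np.column_stack([
    rng.normal(5.0, 0.1, size=5_000),   # mu draws
    rng.gamma(2.0, 0.5, size=5_000),    # sigma draws
])

# Use each WHOLE parameter vector together (same row), which preserves
# any posterior correlation between mu and sigma, then sample from the
# likelihood: one predictive draw per retained iteration.
mu, sigma = gibbs_draws[:, 0], gibbs_draws[:, 1]
x_tilde = rng.normal(mu, sigma)
```

Drawing one predictive value per iteration (rather than randomly re-selecting iterations) gives the same target distribution and is the common convention; the key point, as the comment notes, is pairing parameters row-wise rather than mixing draws across iterations.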
@ZezaoCH 4 years ago
In practice, how is the posterior distribution related to AQLs and RQLs in real-life sampling?
@Gatitohomicida 4 years ago
Hi there, do you know if I can take the mean of each parameter in a Gaussian mixture and then obtain the posterior predictive from those means, or should I run each Gaussian mixture simulation and then obtain the predictive from the simulations? Is the result the same?
@abhijithv3047 2 years ago
Hi sir, could you please explain how Bayesian model averaging works, including how the parameters are estimated, in a simple way? And if possible, could you demonstrate it with a problem? Thanks in advance
@jacobmoore8734 5 years ago
In your simulation towards the end of the video, I'm having some difficulty keeping track of what each process represents:
- Left process output = sampled theta from the actual posterior
- Middle process output = sampled x (from some distribution?) using the output of the previous step
- Right process output = histogram of sampled x values from the previous step
Definitely missed something important here, yikes
@holloloh 5 years ago
I think the left process output is the parameter draws, the middle is the distribution based on that parameter, and the right is the sampled posterior predictive. If we knew the formula for the quantity being sampled, there would be no point in sampling it: we could compute all the parameters and fits we want from the formula itself. I could be wrong, and I agree the video was quite confusing, but at least intuitively it kind of makes sense.