Posterior for Gaussian Distribution with unknown Mean

  1,534 views

Machine Learning & Simulation

A day ago

Comments: 2
@user-or7ji5hv8y 3 years ago
The TensorFlow Probability example was really helpful again. I was wondering, though, whether there is any intuition as to why sigma_N is substantially smaller than sigma_true and sigma_0.
@MachineLearningSimulation 3 years ago
Thanks again for your feedback :) I really appreciate it. Regarding the small sigma_N: From a mathematical perspective it seems reasonable (it always reminds me of electrical circuits: if you have two resistors in parallel, the total resistance is smaller than the smallest of the two). A more intuitive approach: In the prior we have prior knowledge of our parameter mu, encoded by mu_0 and sigma_0. The mu_0 describes where we expect mu to be, and the sigma_0 encodes our uncertainty about that. If we now observe data and the data is in agreement with what we expected beforehand, our posterior sigma_N must be smaller. That is because a smaller standard deviation corresponds to a narrower Gaussian, so in the posterior we are more certain about where the parameter mu is. I hope this made sense. Let me know if it is still unclear. :)
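To make the "precisions add" intuition concrete, here is a minimal NumPy sketch of the closed-form conjugate update for a Gaussian with known variance and unknown mean; the prior and data-generating values below are illustrative and not taken from the video:

import numpy as np

# Illustrative values (hypothetical, not from the video)
rng = np.random.default_rng(0)
mu_true, sigma_true = 2.0, 1.5   # data-generating Gaussian
mu_0, sigma_0 = 0.0, 2.0         # prior on the unknown mean mu
N = 50
data = rng.normal(mu_true, sigma_true, size=N)

# Posterior precision = prior precision + N * likelihood precision
# (the "resistors in parallel" picture: the combined sigma_N is smaller
# than both sigma_0 and sigma_true).
precision_N = 1.0 / sigma_0**2 + N / sigma_true**2
sigma_N = np.sqrt(1.0 / precision_N)
mu_N = sigma_N**2 * (mu_0 / sigma_0**2 + data.sum() / sigma_true**2)

print(f"sigma_0 = {sigma_0}, sigma_true = {sigma_true}, sigma_N = {sigma_N:.3f}")
print(f"mu_N = {mu_N:.3f}")

Because the precisions add, sigma_N shrinks roughly like sigma_true / sqrt(N) as more data arrive, which is why it ends up well below both the prior standard deviation and the data standard deviation.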
Precision vs Variance/Standard Deviation for the Normal/Gaussian distribution
5:11
Machine Learning & Simulation
1K views
Posterior & MAP for Normal distribution with unknown precision
30:31
Machine Learning & Simulation
719 views
The Dirichlet Distribution : Data Science Basics
21:19
ritvikmath
7K views
(ML 4.3) MLE for univariate Gaussian mean
14:31
mathematicalmonk
43K views
What are Maximum Likelihood (ML) and Maximum a posteriori (MAP)? ("Best explanation on YouTube")
18:20
Iain Explains Signals, Systems, and Digital Comms
82K views
Bayesian posterior sampling
7:23
Ben Lambert
21K views
Maximum Likelihood For the Normal Distribution, step-by-step!!!
19:50
StatQuest with Josh Starmer
553K views
The Normal Distribution and the 68-95-99.7 Rule (5.2)
8:50
Simple Learning Pro
1.5M views
Generating correlated random variables
18:11
Adrian Liu
15K views
(ML 6.1) Maximum a posteriori (MAP) estimation
13:31
mathematicalmonk
177K views