(ML 11.3) Frequentist risk, Bayesian expected loss, and Bayes risk

30,190 views

mathematicalmonk

A simple way to visualize the relationships between the frequentist risk, Bayesian expected loss, and Bayes risk.
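
For reference, here are the three quantities in one place, written with a loss L(theta, f(D)) for a decision rule f; the exact symbols are assumptions, since the lecture and different texts vary in notation:

    % Frequentist risk: condition on theta, average over the data D.
    R(\theta, f) = \mathbb{E}\big[ L(\theta, f(D)) \mid \theta \big]

    % Bayesian (posterior) expected loss: condition on the observed D, average over theta.
    \rho(f, D) = \mathbb{E}\big[ L(\theta, f(D)) \mid D \big]

    % Bayes risk: average over both, i.e. over the joint distribution of (theta, D).
    r(f) = \mathbb{E}\big[ L(\theta, f(D)) \big] = \mathbb{E}\big[ R(\theta, f) \big] = \mathbb{E}\big[ \rho(f, D) \big]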

Comments: 12
@marvinpolo6106 3 years ago
him: "what makes it confusing is a bunch of bad notation" *starts explaining by introducing bunch of notation" ...interesting
@sairushikjasti660 1 year ago
"what makes it confusing is a bunch of bad notations" me watching this video and not understanding a single notation, welp
@frobeniusfg 5 years ago
This is your last chance. After this there is no turning back. You take the red pill: the story ends, you wake up in your bed and believe whatever frequentists want you to believe. You take the blue pill: you stay in Bayesland and I show you how deep the rabbit hole goes.
@bztube888 7 years ago
Great, intuitive explanation. Books like to attack with formulas before bothering to explain the ideas behind them.
@mojomagoojohnson 10 years ago
Great!
@auggiewilliams3565 5 years ago
poor explanation
@monsume123 5 years ago
Great video! Love the symmetrical drawing and perfect coloring! I was wondering whether it would be conceptually more complete to include the data D in the frequentist notation, since the frequentist perspective also conditions on the data. As I understand it, the frequentist perspective treats both the parameter and the data as deterministic; the parameter is of course ultimately the optimization variable, but since you wrote theta as a conditional dependence, shouldn't the data be one too? Or am I conceptually confusing something here?
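
For what it's worth, one way to see why D does not appear as a conditioning argument of the frequentist risk (a sketch in the notation above, which is itself an assumption):

    % theta is held fixed; D is the variable being averaged out, not conditioned on.
    R(\theta, f) = \mathbb{E}\big[ L(\theta, f(D)) \mid \theta \big] = \int L(\theta, f(D))\, p(D \mid \theta)\, dD

Since D is integrated out, the frequentist risk is a function of theta (and f) alone; the observed data never gets held fixed the way theta does.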
@jiatianxu757 3 years ago
Actually, we usually care about L(y-hat, y), which is L(f(theta-hat), y).
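
Spelled out, the distinction being drawn here (the symbols g and theta-hat below are illustrative assumptions, not the video's notation):

    % Loss measured on the parameter estimate itself:
    L\big( \theta, \hat{\theta}(D) \big)

    % Loss measured on a downstream prediction \hat{y} = g(\hat{\theta}(D)) of a target y:
    L\big( \hat{y}, y \big) = L\big( g(\hat{\theta}(D)),\, y \big)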
@jinghuayan5128 4 years ago
What do you mean by "average over theta"? I am a little clueless about this. Looking forward to hearing from you, thank you!
@yugiohgx114 4 years ago
"Average over theta" means that we weight the loss L(theta, f(D)) by the posterior probability p(theta | D) for each value of theta, and then sum (or integrate) over all of them.
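
In symbols, and then as a minimal numerical sketch (the grid, the Gaussian-shaped stand-in posterior, and the squared-error loss below are made-up illustrations, not anything from the video):

    \rho(f, D) = \int L(\theta, f(D))\, p(\theta \mid D)\, d\theta

    import numpy as np

    # Discretize theta on a grid and use an unnormalized stand-in posterior.
    theta = np.linspace(0.0, 1.0, 201)
    posterior = np.exp(-0.5 * ((theta - 0.3) / 0.1) ** 2)
    posterior /= posterior.sum()              # normalize so the weights sum to 1

    action = 0.35                             # f(D): the decision made for this data
    loss = (theta - action) ** 2              # L(theta, f(D)): squared-error loss

    # "Average over theta": weight each loss by p(theta | D), then sum.
    posterior_expected_loss = np.sum(loss * posterior)
    print(posterior_expected_loss)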
@danielmburu6936 8 years ago
Hello, how can I get the posterior distribution for the Poisson, given a prior of e raised to the power lambda?
@jiatianxu757 3 years ago
P(theta|D) is proportional to P(D|theta) * P(theta); in your case, that is the Poisson mass function given theta, times the prior distribution.
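
Worked out under the assumption that the prior meant is p(lambda) proportional to e^{-lambda} (an Exponential(1) prior), with data x_1, ..., x_n drawn i.i.d. from Poisson(lambda):

    p(\lambda \mid D) \propto p(D \mid \lambda)\, p(\lambda)
                      = \Big( \prod_{i=1}^{n} \frac{\lambda^{x_i} e^{-\lambda}}{x_i!} \Big) e^{-\lambda}
                      \propto \lambda^{\sum_i x_i}\, e^{-(n+1)\lambda}

which is the kernel of a Gamma distribution with shape \sum_i x_i + 1 and rate n + 1.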
(ML 11.4) Choosing a decision rule - Bayesian and frequentist
10:06
mathematicalmonk
10K views
The Unreasonable Effectiveness of Bayesian Prediction
15:03
ritvikmath
21K views
Bayes' classifier: losses and risks
17:15
Machine learning classroom
2.4K views
Bayes Estimation
13:52
statisticsmatt
11K views
Are you Bayesian or Frequentist?
7:03
Cassie Kozyrkov
253K views
(ML 10.1) Bayesian Linear Regression
11:45
mathematicalmonk
83K views
A visual guide to Bayesian thinking
11:25
Julia Galef
1.8M views
Bayesian Decision Theory: Loss functions Risk (N Class model) [E3]
26:21
Bayes Decision Theory
38:51
Ahmet Sacan
21K views
3. Bayes Estimation Example
9:10
Christina Knudson
45K views
The Bayesian Trap
10:37
Veritasium
4.1M views
L14.4 The Bayesian Inference Framework
9:48
MIT OpenCourseWare
61K views