(ML 6.1) Maximum a posteriori (MAP) estimation

173,520 views

mathematicalmonk

13 years ago

Definition of maximum a posteriori (MAP) estimates, and a discussion of pros/cons.
A playlist of these Machine Learning videos is available here:
kzbin.info_play_list...
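
In symbols (a one-line summary, in the notation used throughout the comments below): the MAP estimate is

theta_MAP = argmax_theta P(theta | D) = argmax_theta P(D | theta) P(theta),

since the evidence P(D) does not depend on theta and can be dropped from the argmax; the MLE is the same expression without the prior P(theta).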

Comments: 31
@Lunatic108 6 years ago
The problem with most math lectures, especially in stochastics and statistics, is that they are so abstract that the concepts are hard to grasp. Just put in a few concrete examples and it will be so much easier to understand.
@gauthamchandra2081 4 years ago
Exactly. That's the problem I find with most lectures. Most of the concepts are vague and abstract with nothing to solidify them.
@RealMcDudu 4 years ago
This is literally what he does in the next video. The problem with most commenters is that they are too impatient/lazy to even check the next video.
@shubhamjadhav1043 7 years ago
Who else is watching this the night before an exam?
@daviderickson8283 6 years ago
Doing a take-home exam now. Probably going to fail this portion...
@umamakamrulsaifa9675 3 years ago
30 minutes before my viva XD
@sebastianpolo6271 1 year ago
Excellent video in terms of the definition. It gives you a good overview of what exactly you are trying to compute. You should have given a good example of how MAP works, though.
@pelemanov 12 years ago
Also like it, but I often find it helpful to get a simple but concrete example. It can be very overwhelming to work only with abstract variables and functions, and in order not to drown in this abstractness, it's nice to relate things to the real world from time to time. But of course, this is just my own observation; probably a lot of people prefer to stay abstract.
@hongzuli8545 6 years ago
I have a question. So what is the purpose of the prior once we observe the data?
@mohammadhassantahery8121 2 years ago
As a practical example:
Theta: the input to a communication channel
D: the observed value, i.e. the output
MLE: the input value that maximizes the likelihood of the particular output
MAP: the most probable input value given the observation
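
A minimal sketch of that channel picture in Python (all numbers invented for illustration): a binary input theta passes through a noisy channel that flips it with probability 0.2, and the prior says that 0 is sent 90% of the time. Observing D = 1, the MLE and the MAP estimate disagree:

# Binary symmetric channel; all numbers are invented for illustration.
flip = 0.2                     # P(output != input), the channel noise
prior = {0: 0.9, 1: 0.1}       # P(theta): input 0 is sent far more often

D = 1                          # the observed channel output

# Likelihood P(D | theta) for each possible input value
lik = {t: (1 - flip) if t == D else flip for t in (0, 1)}

theta_mle = max(lik, key=lik.get)                      # argmax_t P(D | t)
theta_map = max(lik, key=lambda t: lik[t] * prior[t])  # argmax_t P(D | t) P(t)

print(theta_mle, theta_map)  # 1 0 -- the strong prior on 0 overrides the noisy observation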
@semiografo 11 years ago
I'm trying to find a single concrete example on the web of how to determine the MLE and MAP, but I haven't found anything really elucidating. It looks like most texts are made by and for math students.
@daniloamorim3012 4 years ago
This concept is used in the Naive Bayes model, which is used to classify emails as spam or not spam.
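
As a rough sketch of that idea (toy numbers, invented for illustration): Naive Bayes picks the class c that maximizes P(c) * prod_w P(w | c), i.e. it makes a MAP choice over classes:

import math

# Toy class priors and per-class word probabilities (invented numbers).
prior = {"spam": 0.3, "ham": 0.7}
p_word = {
    "spam": {"free": 0.20, "money": 0.15, "meeting": 0.01},
    "ham":  {"free": 0.02, "money": 0.03, "meeting": 0.10},
}

email = ["free", "money"]

def log_posterior(c):
    # log P(c) + sum_w log P(w | c): the unnormalized log-posterior of class c
    return math.log(prior[c]) + sum(math.log(p_word[c][w]) for w in email)

print(max(prior, key=log_posterior))  # spam, given these toy numbers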
@cfl9077 3 years ago
Great lectures for reviewing old concepts. The only thing I would say is that he constantly goes into nearly irrelevant caveats, like 'a' vs. 'the' MAP, edge cases so unlikely to occur in real optimization problems that you would rarely need to worry about them.
@rohitkumarsingh-iw7rf 3 years ago
Is the MAP estimator a recursive estimator or a batch estimator?
@thomashirtz 2 years ago
Where would the theta MLE be in the case of the graph at 11:15? (nice video :))
@BerkayCelik 11 years ago
I guess if you can understand the ideas delivered here, you can easily use Matlab or Python to implement all of these terms. Without reviewing the basics (probability, linear algebra), things may become abstract.
@LeoAdamszhang 7 years ago
At 5:38 it says that the MLE maximizes P(D|theta), but I learned elsewhere that, inversely, the MLE is defined as P(theta|D) = P(D|theta), so it tries to maximize P(theta|D). Am I mistaken? Please point me in the right direction. Thanks.
@LeoAdamszhang 7 years ago
I found this: www.probabilitycourse.com/chapter9/9_1_3_comparison_to_ML_estimation.php
@ThePositiev3x 7 years ago
Exactly. ??????
@xinjing3153 7 years ago
MLE tries to maximize P(D|theta). MAP tries to maximize P(theta|D). The relationship of these two probabilities is given by P(theta|D) = P(D|theta)*P(theta)/P(D), where P(D) is a constant. So maximizing P(theta|D) is equivalent to maximizing P(D|theta)*P(theta), the numerator.
@LeoAdamszhang 7 years ago
Xin Jing Thanks very much for your explanation. But what do you mean by "MAP"? And since P(theta) is not a constant, would it affect the relationship (equivalence) in the likelihood?
@gafferin 5 years ago
@LeoAdamszhang For ML you maximize the likelihood based on D: argmax L(theta|D), not P.
@yagneshm.bhadiyadra4359 2 years ago
Do you listen to what you are saying?
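
To make the equivalence discussed above concrete, here is a minimal sketch (toy numbers of my own, not from the video): a coin with a Beta(a, b) prior on theta. The MAP estimate has the closed form (h + a - 1) / (n + a + b - 2), while the MLE is h/n, and a brute-force grid search over log P(D | theta) + log P(theta) agrees:

import numpy as np

n, h = 10, 7      # toy data: 7 heads in 10 flips (invented numbers)
a, b = 5.0, 5.0   # Beta(a, b) prior on theta, concentrated around 0.5

theta_mle = h / n                          # argmax_theta P(D | theta)
theta_map = (h + a - 1) / (n + a + b - 2)  # closed-form mode of the Beta posterior

# Brute-force check: argmax of log-likelihood + log-prior on a fine grid.
grid = np.linspace(1e-6, 1 - 1e-6, 100001)
log_post = (h + a - 1) * np.log(grid) + (n - h + b - 1) * np.log(1 - grid)
print(theta_mle, theta_map, grid[np.argmax(log_post)])
# 0.7 0.6111... 0.6111... -- the prior pulls the estimate toward 0.5

Note that P(theta) is not constant in theta, so it stays inside the argmax; only P(D) drops out.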
@fabiononame5256 4 years ago
Sorry, but at time 6:00 the correct formula is p(D, theta) = p(theta, D)p(theta). Am I right?
@poodook 4 years ago
No, this is wrong. Consult Bayes' rule.
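
For reference (standard probability, not from the video): the product rule gives p(D, theta) = p(D | theta) p(theta) = p(theta | D) p(D), and dividing by p(D) yields Bayes' rule, p(theta | D) = p(D | theta) p(theta) / p(D).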
@khatharrmalkavian3306 7 years ago
You wouldn't happen to know if this is available in English anywhere, would you?
@Alley00Cat 7 years ago
booooooooh
@weissesfrettchen 6 years ago
I can't watch tutorials like that. Keep the cross still; it makes me feel so uncomfortable that it's moving, like, all the time.
@sharansabi4139 6 years ago
Who else fell asleep watching this?
@alicetang8009 3 months ago
Sorry, the way you pronounce theta and data really confuses me 😶‍🌫