Normalization Constant for the Normal/Gaussian | Full Derivation with visualizations

5,322 views

Machine Learning & Simulation

A day ago

Comments: 24
@Luck_x_Luck
@Luck_x_Luck A year ago
Thanks a lot for this explanation. A lot of the material you find online assumes "basics" or treats things as obvious, like where the "I" variable comes from (obvious in hindsight, but not stated explicitly in e.g. Bishop's PRML book), or the introduction of the y variable: since the given context describes a univariate Gaussian, one assumes the y refers to the y-axis (probability), which makes things super confusing.
@MachineLearningSimulation
@MachineLearningSimulation A year ago
Thanks for the kind feedback :). I totally felt the same when I first learned about it. The transition from the undergraduate math one usually learns (for me in a German Mechanical Engineering bachelor) to the math required for machine learning is a really tough one. Thanks for appreciating my way of teaching :).
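[Editor's note] For anyone tripped up by the same step: presumably the "I" in question is the unnormalized Gaussian integral, and the y appears when that integral is squared, as a renamed copy of the integration variable x, not as the probability axis. A minimal sketch of the trick:

```latex
% I is the (unnormalized) Gaussian integral we want to evaluate.
I = \int_{-\infty}^{\infty} e^{-\frac{x^2}{2\sigma^2}} \, dx
% Squaring I and renaming the dummy variable in the second copy to y
% turns the product of two integrals into a double integral over the plane:
I^2 = \int_{-\infty}^{\infty} e^{-\frac{x^2}{2\sigma^2}} \, dx
      \int_{-\infty}^{\infty} e^{-\frac{y^2}{2\sigma^2}} \, dy
    = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}
      e^{-\frac{x^2 + y^2}{2\sigma^2}} \, dx \, dy
```

The double integral is then evaluated in polar coordinates, which is where the r and the factor of 2π in the video come from.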
@timshrode
@timshrode 2 years ago
Thank you! I needed this for a class.
@MachineLearningSimulation
@MachineLearningSimulation 2 years ago
You're so welcome! Glad I could help
@octaveraffault8452
@octaveraffault8452 2 years ago
Good explanations, and I love the intro; it's similar to the one in Lost lol
@MachineLearningSimulation
@MachineLearningSimulation 2 years ago
Thanks a lot ☺️ Indeed, there is some similarity with the Lost intro, although it's not intended 😁
@Mr_Swan
@Mr_Swan 3 years ago
Amazing work!
@MachineLearningSimulation
@MachineLearningSimulation 3 years ago
Thanks a lot
@hengzhou4566
@hengzhou4566 2 years ago
It would be nice if you could talk about computing the normalization coefficient of the generalized Gaussian distribution (again using a change of variables), e.g. (3.56) in Bishop's book.
@MachineLearningSimulation
@MachineLearningSimulation 2 years ago
Hi, thanks for the suggestion. That's definitely a cool addition to the playlist. :) At the moment, I am focusing on some other content on the channel, but I absolutely want to come back to basic PMFs/PDFs in the future. There is still so much more to cover, also with respect to priors and posteriors for the multivariate normal. However, I think it will not be until next year. Stay tuned ;)
@user-or7ji5hv8y
@user-or7ji5hv8y 3 years ago
That's interesting, how the normalizing constant is derived to ensure that the density integrates to one. Curious, then: why did Gauss choose exp(-0.5 * ((x - u) / s)^2) as a potential expression to start looking for a density function?
@MachineLearningSimulation
@MachineLearningSimulation 3 years ago
Good point. I am not too familiar with the history of this distribution, but I tried to give some intuition on why it has to be this function in an earlier video (kzbin.info/www/bejne/Znq1n2idrKmjqcU). A better, but unfortunately more mathematical, way is to derive the Gaussian distribution as the function maximizing the differential entropy under a prescribed mean and standard deviation. The video on this topic is already on my to-do list; you can expect it to arrive in the next two weeks. Wikipedia has a small subsection on this (en.wikipedia.org/wiki/Normal_distribution#Maximum_entropy).
@MachineLearningSimulation
@MachineLearningSimulation 3 years ago
The video on the derivation by the maximum entropy principle is now live: kzbin.info/www/bejne/gGi4aaCImtxlnZI. It's a bit technical, but I find it extremely beautiful :)
@Luck_x_Luck
@Luck_x_Luck A year ago
@@MachineLearningSimulation mad props
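[Editor's note] For reference, a compressed sketch of the maximum-entropy derivation mentioned above (the linked video has the full details): maximize the differential entropy subject to normalization and to a prescribed mean and variance; stationarity of the Lagrangian then forces the log-density to be quadratic in x.

```latex
% Maximize  h[p] = -\int p(x) \ln p(x) \, dx  subject to
%   \int p \, dx = 1, \quad \int x \, p \, dx = \mu, \quad
%   \int (x - \mu)^2 \, p \, dx = \sigma^2.
% Setting the functional derivative of the Lagrangian to zero gives
\ln p(x) = \lambda_0 - 1 + \lambda_1 x + \lambda_2 (x - \mu)^2
\quad\Longrightarrow\quad
p(x) \propto e^{\lambda_1 x + \lambda_2 (x - \mu)^2}
% Enforcing the three constraints yields \lambda_1 = 0 and
% \lambda_2 = -\tfrac{1}{2\sigma^2}, i.e. exactly the Gaussian density.
```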
@danielgigliotti99dg
@danielgigliotti99dg 2 years ago
Shouldn't we have multiplied by -sigma^2/x at the end instead of -sigma^2?
@MachineLearningSimulation
@MachineLearningSimulation 2 years ago
Hi, do you have a timestamp in the video you are referring to? I have to check 😊
@danielgigliotti99dg
@danielgigliotti99dg 2 years ago
@@MachineLearningSimulation Yes, around 10:00, thank you
@MachineLearningSimulation
@MachineLearningSimulation 2 years ago
I think I see what you are referring to. It seems to me that my derivation is correct. You can verify it by taking the derivative of -sigma^2 * exp(-r^2 / (2 * sigma^2)) with respect to r, which gives back the previous term. Sorry for the short answer, I'm currently on vacation and answering from mobile 😅 I'm a little unsure what you mean by "x". Are you referring to "r"? I will try to give a more thorough answer next week.
@danielgigliotti99dg
@danielgigliotti99dg 2 years ago
@@MachineLearningSimulation Yes, sorry, I was referring to r 😅 The derivation is indeed correct, but in order to obtain the original exp(...), don't we also need to get rid of the r (the derivative being -1/sigma^2 * r * exp(...))? Or am I missing something? I'm very rusty with integrals lol, I might be wrong
@MachineLearningSimulation
@MachineLearningSimulation 2 years ago
Quick reply before a thorough one next week: the "r" being multiplied is exactly what we want. If you look at the previous form there is an additional r due to the factor we got by changing to polar coordinates. Hope that helped 😊
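[Editor's note] To spell this exchange out (assuming the term in question around 10:00 is the polar-coordinate integrand r * exp(-r^2 / (2 * sigma^2)), as the Jacobian discussion suggests): differentiating the proposed antiderivative by the chain rule returns the integrand with the extra r included, so nothing needs to be divided out.

```latex
% Chain rule: the inner derivative -r/\sigma^2 cancels the leading -\sigma^2.
\frac{d}{dr}\left[-\sigma^2 \, e^{-\frac{r^2}{2\sigma^2}}\right]
= -\sigma^2 \left(-\frac{r}{\sigma^2}\right) e^{-\frac{r^2}{2\sigma^2}}
= r \, e^{-\frac{r^2}{2\sigma^2}}
% Hence the radial integral evaluates to
\int_0^{\infty} r \, e^{-\frac{r^2}{2\sigma^2}} \, dr
= \left[-\sigma^2 \, e^{-\frac{r^2}{2\sigma^2}}\right]_0^{\infty}
= \sigma^2
```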
@EW-mb1ih
@EW-mb1ih A year ago
Video almost self-sufficient. Too bad you didn't talk about the "transformation factor" (at 08:46).
@MachineLearningSimulation
@MachineLearningSimulation A year ago
Thanks for the comment :) There is of course always the option to explain things in more detail.
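[Editor's note] For completeness, the "transformation factor" at 08:46 is presumably the Jacobian determinant of the change from Cartesian to polar coordinates, which is what introduces the extra factor of r into the integrand:

```latex
% Change of variables  x = r\cos\theta, \; y = r\sin\theta:
\left|\det \frac{\partial(x, y)}{\partial(r, \theta)}\right|
= \left|\det \begin{pmatrix} \cos\theta & -r\sin\theta \\
                             \sin\theta & r\cos\theta \end{pmatrix}\right|
= r\cos^2\theta + r\sin^2\theta = r
% so the area element transforms as  dx \, dy = r \, dr \, d\theta.
```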
Deriving the (univariate) Normal/Gaussian from a Maximum Entropy Principle
38:59
Machine Learning & Simulation
2.4K views
Why π is in the normal distribution (beyond integral tricks)
24:46
3Blue1Brown
1.6M views
Multivariate Normal | Intuition, Introduction & Visualization | TensorFlow Probability
26:33
How do you DERIVE the BELL CURVE?
35:21
Mathoma
110K views
Wavefunction Properties, Normalization, and Expectation Values
23:16
Professor Dave Explains
145K views
how Richard Feynman would integrate 1/(1+x^2)^2
8:53
blackpenredpen
526K views
Variational Inference | Evidence Lower Bound (ELBO) | Intuition & Visualization
25:06
Machine Learning & Simulation
71K views
Dirichlet Distribution | Intuition & Intro | w/ example in TensorFlow Probability
19:14
Machine Learning & Simulation
9K views
MLE for the Multivariate Normal distribution | with example in TensorFlow Probability
43:49
But what is the Central Limit Theorem?
31:15
3Blue1Brown
3.5M views
Maximum Likelihood For the Normal Distribution, step-by-step!!!
19:50
StatQuest with Josh Starmer
553K views
Gaussian Mixture Model | Intuition & Introduction | TensorFlow Probability
17:43
Machine Learning & Simulation
5K views