Thanks a lot for this explanation. A lot of the material you find online assumes "basics" that are treated as obvious, like where the "I" variable comes from (obvious in hindsight, but not stated explicitly in e.g. Bishop's PRML book), or the inclusion of the y variable: given that the context describes a univariate Gaussian, one assumes y lies on the y-axis (the probability), which makes things super confusing.
@MachineLearningSimulation 1 year ago
Thanks for the kind feedback :). I totally felt the same when I first learned about it. The transition from the undergraduate math one usually learns (for me, in a German Mechanical Engineering bachelor's) to the math required for machine learning is a really tough one. Thanks for appreciating my way of teaching :).
@timshrode 2 years ago
Thank you! I needed this for a class.
@MachineLearningSimulation 2 years ago
You're so welcome! Glad I could help
@octaveraffault8452 2 years ago
Good explanations, and I love the intro; it's similar to the one in Lost lol
@MachineLearningSimulation 2 years ago
Thanks a lot ☺️ Indeed, there is some similarity with the Lost intro, although it's not intentional 😁
@Mr_Swan 3 years ago
Amazing work!
@MachineLearningSimulation 3 years ago
Thanks a lot
@hengzhou4566 2 years ago
It would be nice if you could talk about the computation of the normalization coefficient of the generalized Gaussian distribution (using, again, a change of variables), e.g. (3.56) in Bishop's book.
@MachineLearningSimulation 2 years ago
Hi, thanks for the suggestion. That's definitely a cool addition to the playlist. :) At the moment, I am focusing on some other content on the channel, but I absolutely want to come back to basic PMF/PDF in the future. There is still so much more to cover, also with respect to priors and posteriors to the multivariate normal. However, I think it will not be until next year. Stay tuned ;)
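[Editor's note on the suggestion above: the generalized Gaussian has density proportional to exp(-(|x - mu| / alpha)^beta), and a change of variables gives the normalizer 2 * alpha * Gamma(1/beta) / beta. A minimal numerical sanity check; the parameterization and the particular values of mu, alpha, beta are illustrative assumptions, not Bishop's exact notation:]

```python
import math
from scipy.integrate import quad

def gen_gauss_unnorm(x, mu, alpha, beta):
    """Unnormalized generalized Gaussian: exp(-(|x - mu| / alpha)^beta)."""
    return math.exp(-(abs(x - mu) / alpha) ** beta)

mu, alpha, beta = 0.0, 1.5, 1.7  # illustrative values

# Analytic value of the integral of the unnormalized density,
# obtained via the substitution t = (x / alpha)^beta.
Z_analytic = 2 * alpha * math.gamma(1 / beta) / beta

# Numerical check: the tails decay fast, so a wide finite interval suffices.
Z_numeric, _ = quad(gen_gauss_unnorm, -60, 60, args=(mu, alpha, beta))

print(Z_analytic, Z_numeric)  # the two values should agree closely
```

[For beta = 2 and alpha = sqrt(2) * sigma this reduces to the familiar sqrt(2 * pi * sigma^2) of the normal distribution.]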
@user-or7ji5hv8y 3 years ago
That's interesting how the normalizing constant is derived to ensure that the density integrates to one. Curious, then: why did Gauss choose exp(-0.5 * ((x - u) / s)^2) as a potential expression to start looking for a density function?
@MachineLearningSimulation 3 years ago
Good point. I am not too familiar with the history of this distribution, but I tried to give some intuition on why it has to be this function in an earlier video (kzbin.info/www/bejne/Znq1n2idrKmjqcU). A better, but unfortunately more mathematical, way is to derive the Gaussian distribution as the function maximizing the differential entropy under a prescribed mean and standard deviation. The video on this topic is already on my to-do list; you can expect it to arrive in the next two weeks. Wikipedia has a small subsection on this (en.wikipedia.org/wiki/Normal_distribution#Maximum_entropy).
@MachineLearningSimulation 3 years ago
The video on the derivation by the maximum entropy principle is now live: kzbin.info/www/bejne/gGi4aaCImtxlnZI It's a bit technical, but I find it extremely beautiful :)
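[Editor's note: the maximum-entropy claim is easy to probe numerically. Among distributions with the same variance, the Gaussian should have the largest differential entropy. A small sketch using scipy's built-in entropy(); the comparison distributions (uniform and Laplace, both tuned to unit variance) are my own choice of examples, not taken from the video:]

```python
import math
from scipy import stats

# All three distributions below have mean 0 and variance 1.
gaussian = stats.norm(loc=0, scale=1)
uniform = stats.uniform(loc=-math.sqrt(3), scale=2 * math.sqrt(3))  # Var = width^2 / 12 = 1
laplace = stats.laplace(loc=0, scale=1 / math.sqrt(2))              # Var = 2 * scale^2 = 1

h_gauss = float(gaussian.entropy())  # analytically 0.5 * ln(2 * pi * e) ≈ 1.4189
h_unif = float(uniform.entropy())
h_lap = float(laplace.entropy())

# For a fixed variance, the Gaussian has the largest differential entropy.
print(h_gauss, h_unif, h_lap)
```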
@Luck_x_Luck 1 year ago
@@MachineLearningSimulation mad props
@danielgigliotti99dg 2 years ago
Shouldn't we have multiplied by -sigma^2/x at the end instead of -sigma^2?
@MachineLearningSimulation 2 years ago
Hi, do you have a timestamp in the video you are referring to? I have to check 😊
@danielgigliotti99dg 2 years ago
@@MachineLearningSimulation Yes, around 10:00, thank you
@MachineLearningSimulation 2 years ago
I think I see what you are referring to. It seems to me that my derivation is correct. You can verify it by taking the derivative of -sigma^2 * exp(-1 / (2 * sigma^2) * r^2) with respect to r, which gives back the previous term. Sorry for the short answer, I'm currently on vacation and answering from mobile 😅 I'm a little unsure what you mean by "x". Are you referring to "r"? I will try to give a more thorough answer next week.
@danielgigliotti99dg 2 years ago
@@MachineLearningSimulation Yes, sorry, I was referring to r 😅 The derivation is indeed correct, but in order to obtain the original exp(...), don't we also need to get rid of the r (the derivative being -1/sigma^2 * r * exp(...))? Or am I missing something? I'm very rusty with integrals lol, I might be wrong.
@MachineLearningSimulation 2 years ago
Quick reply before a thorough one next week: the "r" being multiplied is exactly what we want. If you look at the previous form, there is an additional r due to the Jacobian factor we picked up by changing to polar coordinates. Hope that helped 😊
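[Editor's note: the point settled in this thread can be checked symbolically. The extra r from the polar-coordinate Jacobian is exactly what makes -sigma^2 * exp(-r^2 / (2 * sigma^2)) an antiderivative of the polar integrand. A quick sketch with sympy; this is my own verification, not code from the video:]

```python
import sympy as sp

r, sigma = sp.symbols("r sigma", positive=True)

# Integrand after switching to polar coordinates:
# the leading r is the Jacobian factor discussed in the thread.
integrand = r * sp.exp(-r**2 / (2 * sigma**2))

# Candidate antiderivative: differentiating it must give the integrand back.
F = -sigma**2 * sp.exp(-r**2 / (2 * sigma**2))
assert sp.simplify(sp.diff(F, r) - integrand) == 0

# Integrating over theta in [0, 2*pi) and r in [0, oo) yields 2*pi*sigma^2,
# the square of the Gaussian integral, hence the constant 1/sqrt(2*pi*sigma^2).
I_squared = 2 * sp.pi * sp.integrate(integrand, (r, 0, sp.oo))
print(I_squared)
```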
@EW-mb1ih 1 year ago
The video is almost self-sufficient. Too bad you didn't talk about the "transformation factor" (at 08:46).
@MachineLearningSimulation 1 year ago
Thanks for the comment :) There is, of course, always the option to explain things in more detail.