A pretty reason why Gaussian + Gaussian = Gaussian

757,547 views

3Blue1Brown

1 day ago

A visual trick to compute the sum of two normally distributed variables.
3b1b mailing list: 3blue1brown.substack.com/
Help fund future projects: / 3blue1brown
Special thanks to these supporters: www.3blue1brown.com/lessons/g...
For the technically curious who want to go deeper, here's a proof of the central limit theorem using moment generating functions:
www.cs.toronto.edu/~yuvalf/CL...
And here's a nice discussion of methods using entropy:
mathoverflow.net/questions/18...
Relevant previous videos
Central limit theorem
• But what is the Centra...
Why π is there, and the Herschel-Maxwell derivation
• Why π is in the normal...
Convolutions and adding random variables
• Convolutions | Why X+Y...
Time stamps
0:00 - Recap on where we are
2:10 - What direct calculation would look like
3:38 - The visual trick
8:27 - How this fits into the Central Limit Theorem
12:30 - Mailing list
Thanks to these viewers for their contributions to translations
German: lprecord, qoheniac
Spanish: Pablo Asenjo Navas-Parejo
Vietnamese: Duy Tran
------------------
These animations are largely made using a custom Python library, manim. See the FAQ comments here:
www.3blue1brown.com/faq#manim
github.com/3b1b/manim
github.com/ManimCommunity/manim/
You can find code for specific videos and projects here:
github.com/3b1b/videos/
Music by Vincent Rubinetti.
www.vincentrubinetti.com/
Download the music on Bandcamp:
vincerubinetti.bandcamp.com/a...
Stream the music on Spotify:
open.spotify.com/album/1dVyjw...
------------------
3blue1brown is a channel about animating math, in all senses of the word animate. And you know the drill with YouTube, if you want to stay posted on new videos, subscribe: 3b1b.co/subscribe
Various social media stuffs:
Website: www.3blue1brown.com
Twitter: / 3blue1brown
Reddit: / 3blue1brown
Instagram: / 3blue1brown
Patreon: / 3blue1brown
Facebook: / 3blue1brown

Comments: 548
@3blue1brown 10 months ago
I made a video covering a proof of the central limit theorem, that is, answering why there is a "central limit" at all. It's currently posted for early viewing on Patreon: www.patreon.com/posts/draft-video-on-i-87894319 I think the video has room for improvement, and decided to put it on a shelf for a bit while working on other projects before turning back to it. In the meantime, though, if you are curious about why all finite variance distributions will tend towards some universal shape, it offers an answer.

Also, you may be interested to know that a Gaussian is not the only distribution with the property described in this video, where convolving it with itself gives a (rescaled) version of the original distribution. The relevant search term here is "stable distributions", though all others will have infinite variance, so don't fit the criteria of the CLT. Often when the CLT doesn't apply, it's because the independence assumption doesn't hold, but another way it can break is if you're starting with one of these infinite variance cases.
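The stable-distribution point in this comment is easy to poke at numerically. A minimal sketch (my own illustration, not from the video): the sample mean of normal draws concentrates as you average more terms, while the mean of standard Cauchy draws, an infinite-variance stable law, is again standard Cauchy and never concentrates.

```python
import numpy as np

rng = np.random.default_rng(0)

def iqr_of_sample_means(draw, n_terms, n_trials=20000):
    """Interquartile range of the mean of n_terms iid draws."""
    samples = draw((n_trials, n_terms)).mean(axis=1)
    q75, q25 = np.percentile(samples, [75, 25])
    return q75 - q25

# Normal: the spread of the sample mean shrinks like 1/sqrt(n)...
norm_1 = iqr_of_sample_means(rng.standard_normal, 1)
norm_100 = iqr_of_sample_means(rng.standard_normal, 100)

# ...but the mean of Cauchy draws is again Cauchy: no concentration at all.
cauchy_1 = iqr_of_sample_means(rng.standard_cauchy, 1)
cauchy_100 = iqr_of_sample_means(rng.standard_cauchy, 100)

print(norm_100 / norm_1)      # ≈ 0.1
print(cauchy_100 / cauchy_1)  # ≈ 1
```

This is exactly why infinite-variance stable laws escape the CLT: there is no narrowing toward a limit shape, because the shape is already fixed under averaging.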
@csehszlovakze 10 months ago
please make a post there about the complete software stack you're using to make your videos!
@official-obama 10 months ago
9:08 the second part has "transformatoin" at the top
@mskiptr 10 months ago
It's kinda hidden, but for people who prefer RSS to mailing lists, it's at /feed
@voidify3 10 months ago
Are the functions at 0:10 stable distributions? When you started talking about rotational symmetry I was expecting you to bring up a visual graph of one of those functions convolved with itself and explain why it doesn't have the special property, but instead 5:55 only shows trivial examples and my curiosity about this question remained unanswered. Is it because the functions from 0:10 are stable distributions? If not, why weren't they shown, when they would have been much more interesting demonstrations of the Gaussian's specialness than trivial examples?
@mydroid2791 10 months ago
Grant, could you please make a video on when the discrete can be approximated by the continuous? For example, in this series you showed that discrete random variables added together approach a continuous normal distribution, and you did both discrete and continuous convolutions. But what error would one incur by assuming, say, that d6 dice are continuous-valued, computing the continuous convolution answer, and then taking discrete samples of that answer to match the actual discrete nature of d6 dice? I find it much easier to integrate a 'nice' function than to simplify a discrete Σ sum.
@justinahole336 10 months ago
I have to laugh. "Why the normal distribution?" was one of the questions that motivated me to get my M.S. Stat a couple of decades ago. I'm loving this series - it adds so much clarity to what I recall learning.
@AlphaPhoenixChannel 10 months ago
That made for a great lunch 😁. In your last video you described the Gaussian as an “attractive point in the space of all functions” and I LOVED that phrasing - really made it make sense. However I don’t do enough real math to realize that could be the foundation of a proof. That’s pretty cool.
@KingDuken 10 months ago
Agreed! :) I'm at work eating my lunch and people around me sometimes ask, "Oh are you in school?" and I'm like, "Nope, just an engineer like you that likes learning the math that was never taught!"
@nisargbhavsar25 10 months ago
The legend is here! 🙏🏽🛐
@MeanSoybean 10 months ago
i also had lunch to this video
@jasonremy1627 10 months ago
After all the cliffhangers, it's nice to get this series all wrapped up so neatly.
@QuantumHistorian 10 months ago
Wrapped up? He didn't prove the central limit theorem at all, which is supposedly what this was all about. This video itself barely adds anything to the previous ones.

Moment generating functions are really not all that complicated - it's high school stuff, really. And they give a much clearer intuition for why a Gaussian is the limit in the central limit theorem: it's the unique probability distribution that has a mean and a standard deviation but no higher moments. In other words, it's the simplest* distribution: the one that can be described by the least information. Anything else, like skew or asymmetry, is "averaged out". Sadly, Grant is so obsessed with representing things visually that he brushes over alternatives that are at times far clearer and more powerful ways of understanding this.

* [Technically the simplest would be a point distribution, where a single outcome has probability 1 and everything else probability 0, but that hardly counts as a distribution. And anyway, it's just a special type of Gaussian with width 0.]

EDIT: I got mixed up; replace "moment" with "cumulant" above to correct it. The intuition is the same.
@Redingold 10 months ago
@@QuantumHistorian This series is an excellent demonstration of the idea of limits, not just in that the videos are all about the central limit theorem, but also in that he's tending towards the proof of the central limit theorem without ever actually reaching it.
@lelouch1722 10 months ago
@@QuantumHistorian This series of videos is clearly not meant to give a fully technical answer but rather an intuitive view of why it's true. I also agree that the visual "trick" here does not seem to simplify the work much, given that the integral is already easy to compute using a trigonometric change of variables that arises naturally, but maybe I'm biased by my own experience.
@KingDuken 10 months ago
You can probably argue that the purpose of the cliffhanger is to encourage the viewer to ponder a new solution. That's very much the format of his videos. 3Blue1Brown will never tell the viewer the answer but rather allow open-ended interpretation.
@QuantumHistorian 10 months ago
@@KingDuken That's not even remotely true. He starts with hints, but he almost always gives the full solution at the end. Look at the recent video on chords, or older ones on the chessboard puzzle or the Basel problem.
@johnchessant3012 10 months ago
I love that this series actually started with the Borwein integrals video. Like, here's a very curious sequence of integrals and here's an interesting concept to explain it, and then five videos later we've dug so deep into convolutions that we got an intuitive explanation for one of the most important theorems in all of math. It's all interrelated!
@capitaopacoca8454 10 months ago
I was studying statistics just now and saw this drop
@arvind-venkat 10 months ago
For real. Happened twice now. With binomial and this.
@baidurjyasarkar8854 10 months ago
They are watching..... There will come a time when they will order us....
@THEMATT222 10 months ago
I guess Grant calculated the time of day with the highest probability that the world population would study statistics and then release the video at that time, lol
@gaggy7448 10 months ago
Lucky you
@benbockelman6125 10 months ago
Same
@AzureLazuline 10 months ago
thank you for always making sure to show the "roadmap" before diving into the details! Knowing the broad outline beforehand really makes things easier to follow, and it's something that a lot of other explanatory videos/articles don't bother to do.
@novakonstant 9 months ago
Grant, this has been an absolute masterclass and I genuinely believe it has been your best work so far. Your visualisations have been top notch and it has brought concept space applied to mathematics to a level not seen before, all publicly accessible through YouTube. You are making mathematics a better field for the entire world. Thanks for your hard work!
@avip2u 10 months ago
One level of brilliance is simply to be brilliant. Another level is to be able to explain and teach. Yet another level of brilliance is to be able to clearly visualize & present the advanced concepts. Wow. No words.
@RyanODonnellTeaching 10 months ago
I like this related explanation: Let X and Y be independent normal random variables, and write S = X+Y for their sum. You can think of S as the dot product of the 2-d vectors (X,Y) and (1,1). As Grant said, the key aspect of normal random variables is that if you draw a pair of them, the result is rotationally symmetric. Now dot product is *also* rotationally symmetric (the dot product between two vectors only depends on their lengths and angle). So the distribution of S would be the same if we rotated (1,1) to any other vector with length sqrt2; in particular, to (sqrt2,0). But (X,Y) dotted with (sqrt2,0) is just sqrt2 X, so we see that S is distributed as (sqrt2 times) a normal random variable.
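This dot-product argument is easy to sanity-check with a quick simulation (a sketch of my own, assuming numpy): samples of X + Y, i.e. (X,Y)·(1,1), and of sqrt(2)·X, i.e. (X,Y)·(sqrt2,0), should be indistinguishable in distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
X, Y = rng.standard_normal((2, 100_000))

s_sum = X + Y           # (X, Y) · (1, 1)
s_rot = np.sqrt(2) * X  # (X, Y) · (sqrt(2), 0), the rotated version

# Same distribution: compare a range of quantiles of the two samples.
q = np.linspace(0.05, 0.95, 19)
gap = np.max(np.abs(np.quantile(s_sum, q) - np.quantile(s_rot, q)))
print(gap)  # small: only Monte Carlo noise
```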
@fatitankeris6327 10 months ago
There are now so many great explanations on this channel that together they really complete one's understanding of it.
@cyancoyote7366 10 months ago
Thank you for bringing us amazing math content Grant! The world needs it! Enjoying my afternoon coffee while watching this one! :)
@SquallEstel 10 months ago
Thank you very much for your hard work, the result is so pleasing. I’ve discovered your channel with the neural network series and I’ve been enjoying your videos ever since. You rekindled in me the taste for mathematics. Greetings and best regards from France
@Indecisiveness-1553 10 months ago
Congratulations on finally wrapping up this pseudo-series. They’re some of my favorite videos you’ve done!
@zakwhite5159 10 months ago
What incredible content. About once a year I revisit the same list of statistics-oriented content. Between Grant, Richard McElreath and Josh Starmer, you really get your bases covered on great stats content.
@diffusegd 10 months ago
I want to talk about a strange area of probability, where random variables no longer commute: random matrices. You can define the expectation of a random matrix to be the expectation of its trace, which essentially captures the distribution of its eigenvalues. It turns out there's a new kind of central limit theorem, known as the "free central limit theorem". This theorem says that if you have "freely independent" random matrices, then the mean's eigenvalue distribution tends towards not a normal distribution, but a semicircular distribution. In this probability theory (known as free probability theory), a free convolution exists, which essentially gives the distribution of eigenvalues of X+Y. It turns out the semicircle distribution convolved with itself is another semicircle, much like a normal distribution in classical probability.
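The semicircular limit mentioned here is easy to see numerically. A hedged sketch of my own, using a symmetric Gaussian (Wigner) matrix as the example ensemble: after scaling, the eigenvalues fill out the interval [-2, 2] with a semicircular density.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
A = rng.standard_normal((n, n))
W = (A + A.T) / np.sqrt(2 * n)  # symmetric; off-diagonal entries ~ N(0, 1/n)

eigs = np.linalg.eigvalsh(W)
print(eigs.min(), eigs.max())   # edges approach -2 and +2 as n grows
```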
@SluisaStoffelen-os5oc 9 months ago
Is this what's called the "Wigner semicircle law"?
@ahmedkamelkamelo7433 10 months ago
@3Blue1Brown could you please do a series of videos on time series analysis? I think we need a visual and intuitive explanation for a lot of things there! Thank you 😊
@stick-Iink 10 months ago
honestly one of the best series on youtube
@whitewalker608 10 months ago
Last time, just after I completed IFFT, you dropped a video on continuous convolution. Yesterday, I finished studying Bivariate Normal distribution and you dropped this. Perfect timing for me!
@thegreatsibro9569 9 months ago
After taking AP Stats in my high school senior year, I'm glad this series tied up some loose ends of that course. Thanks for all the amazing insight!

By the way, I was wondering if you could possibly do a video based on a problem I solved and want to confirm my answers on. It goes like this: you have a line segment of any arbitrary length (it doesn't matter). If you cut it in two random places, what is the probability that the three new segments form a triangle without any excess length left over? Again, I believe I know the answer, but I still feel the need to have my results confirmed. I'm also curious if there is any extra insight to be had from problems like this one. Again, thanks for making this series, and I can't wait to hear what more spicy knowledge you have in store for us!
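For what it's worth, the classic answer to this "broken stick" puzzle is 1/4: the three pieces form a triangle exactly when every piece is shorter than half the stick. A quick Monte Carlo sketch (my own, not from the video) supports that:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Cut a unit stick at two uniform random points.
cuts = np.sort(rng.random((n, 2)), axis=1)
a = cuts[:, 0]
b = cuts[:, 1] - cuts[:, 0]
c = 1 - cuts[:, 1]

# The triangle inequality holds iff no piece reaches length 1/2.
ok = (a < 0.5) & (b < 0.5) & (c < 0.5)
print(ok.mean())  # ≈ 0.25
```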
@xyzct 10 months ago
Please oh please do a video on the Kalman filter, given how indescribably important it is to our modern existence. The result that the convolution of two Gaussians is a Gaussian is at the heart of the Kalman filter's magic.
@geekswithfeet9137 10 months ago
Yes…. So much yes to this, would intersect so many core bits of interest perfectly
@user-gv3xt5we1j 10 months ago
It's been 7 years since I took calculus but this is a great way to revisit those concepts. Thank you!
@jeffmannlein9772 9 months ago
Always enjoy your videos. It's nice to watch them and make some connections I might've missed from my time in school.
@RobinHillyard 10 months ago
Thanks so much for this--it makes it really clear. And the 3-dimensional model is really a lot more like a bell! (although I know that actual bells have a somewhat different shape). I've been using the concept of combining Gaussian (and uniform) distributions for a while now in my (Scala) library called Number. It keeps track of the error bounds on variables. If keeping track of relative bounds, it's easy: for multiplication, just add the relative bounds together; for functions like e^x or x^p, only very slightly more complex. But, for addition, we need to convert to absolute bounds and use the convolution that you've been describing.
@mathemelo 7 months ago
Simply amazing. That is a very simple yet very rich explanation for the central role played by the normal distribution, and the visuals are amazing as usual! The much more technical way I've always envisioned this is to say that the normal distribution is in some sense "the fixed point of the Fourier transform", and to see the Central Limit Theorem as some kind of "convergence to the fixed point" result through the Fourier transforms. I wonder if the rotational symmetry, which is the key property you use here, can be linked to this "fixed point of Fourier" thing?
@asseenontv247 10 months ago
Awesome video as always! I don't think I've seen you do it yet, but I would love to see you tackle explaining how and why the RSA encryption algorithm works.
@XxRiseagainstfanxX 10 months ago
Binomials with the same p are stable under convolution, and Poisson distributions as well. The normal distribution is not unique in that regard. Even Cauchy distributions are stable, without having any moments or satisfying the CLT. If I had to pick an intuitive reason why the normal distribution shows up in the CLT, I enjoy the fact that the normal's cumulants are all zero from the third onward, and that a standardized iid sum's cumulants hence all tend to those of the standard normal distribution whenever they exist. Also, not all standardized sums converge in distribution to a normal distribution; the limit can, for example, be a non-normal stable law such as the Cauchy distribution.
@abhinandanangra 10 months ago
This is what I needed; I was working on my project on the central limit theorem in various scenarios.
@genuine8879 2 months ago
Wonderful video! The feeling I got (in high school) when I proved something by symmetry always made my day! These are usually the most elegant approaches and the simplest in intuition. Much respect ❤
@Truth4thetrue 10 months ago
Man, I just love your videos. Even though I'm way past the time of having the genuine will and ability to learn abstract mathematics (living in a war-torn hell doesn't really help), they still give me a sad and lovely nostalgia for the things I love. I'm just really glad I learned about your channel and watched it grow without losing any of the great things that made it simply extraordinary.
@SilasHaslam 10 months ago
I was wondering about this topic for a while because I didn’t quite get this concept intuitively. And then 3blue1brown dropped this !!
@Mavhawk64 10 months ago
After receiving my Bachelor's in Math this past December, I just now realized why we get that sqrt(2) when finding the convolution. The geometric visualization is extremely easy to understand! (I'm sure I derived this back in first year, but I must have forgotten lol)
@joaodirk 10 months ago
I would love to see you extend this series on Gaussian distributions and the CLT to cases where there is correlation and/or dependence.
@shaiguitar 9 months ago
"Love your videos" is an understatement. Speaking of distributions, any chance 3b1b fans can get a video on optimal transport?
@SaplingDatree 9 months ago
I've been watching for a while now, Idk why I haven't subscribed till now, but I love your videos. I've always found it fascinating that there is an awesome maths channel with a logo that has relatively the same shape as one of my eyes :) (the brown spot is even in the right place too)
@div.6763 10 months ago
Wow, it's already in the playlist... thank you. I've wanted to study this for so long.
@chinchao 10 months ago
You made me finally understand why the CLT works, thanks ❤
@lorenzoplaserrano8734 10 months ago
You are partly the reason I am in love with statistics. Thank you. ❤
@deltaeins1580 10 months ago
A mailing list! Awesome. I loved Tom Scott doing it and now you too? Amazing!
@MrMctastics 10 months ago
This question popped back into my head yesterday, so good timing.
@morgan0 10 months ago
after these videos on convolution, it would be cool to see you do a series on the convolution of filters, and also a video on the complex plane math used to design filters would be cool as well. i’m in that spot where i know the z plane math works but i don’t have a full intuition for why
@chiyosa7041 9 months ago
I love it sooooooo much!!! Can you please also do a video on Principal Component Analysis/Regression?
@steffanjansenvanvuuren3257 8 months ago
"But what is the Fourier Transform? A visual introduction" - in that video you showed that the "centre of weight" (hypotenuse max peak) reaches its peak on the right side, the x (real) axis, whenever the input sine wave frequency is the same as the rotating frequency. But that only happens if the input sine wave is in phase with the rotation frequency and the rotation starts exactly at x=0 and y=1 on the complex plane. ONLY then does the vector/hypotenuse max peak line up perfectly with the x axis. In reality we have to continuously plot the vector/hypotenuse on a separate graph to get the information we want, because on the complex plane the vector/hypotenuse max peak can point in any direction or fall in any quadrant, depending on the phase difference between the rotation and the input sine wave signal.
@estrheagen4160 10 months ago
My abstract brain would have loved showing that Gaussians are their own convolution via the Fourier transform, since a convolution in coordinate space is multiplication in momentum space (spot the physicist), and since the FT of a Gaussian is a Gaussian, and the product of two Gaussians is a Gaussian, the convolution of two Gaussians must also be a Gaussian. But this is an incredibly satisfying explanation. I'm not left wanting, and after being in the field for nearly a decade, I'm glad to see a frequent concept intuited so cleanly, without the need for arcane notation. ❤
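Either route predicts the same concrete fact: N(0, σ₁²) convolved with N(0, σ₂²) is N(0, σ₁² + σ₂²). A direct grid convolution confirms it (a sketch of my own, assuming numpy; the grid is odd-length so that `mode="same"` stays centered on zero):

```python
import numpy as np

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

dx = 0.01
x = np.arange(-10, 10 + dx / 2, dx)  # odd number of points, symmetric about 0
s1, s2 = 1.0, 1.5

# Riemann-sum convolution of the two densities.
conv = np.convolve(gaussian(x, s1), gaussian(x, s2), mode="same") * dx

expected = gaussian(x, np.sqrt(s1**2 + s2**2))
print(np.max(np.abs(conv - expected)))  # tiny: discretization error only
```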
@strehlow 7 months ago
I'd love to see a video on deconvolution, and its applications. One noteworthy one is basic processing of an image from a telescope. The aperture (typically circular) applies a convolution of a rectangle function to the incoming light. Convolving the resulting image with the inverse of the rect function will remove the distortions caused by the aperture. One strategy on smaller telescopes (especially using film instead of digital sensors) to avoid this is to put a filter on the aperture whose opacity follows a Gaussian, clearest in the center and darkest at the edge. This minimizes the distortions of the image coming through the telescope and avoids the need to process it afterward.
@sentinelaenow4576 9 months ago
Humanity will always be grateful for your superbly amazing, impactful, and meaningful work. I'm confident your viewers are the best candidates to improve our entire world. It's inspiring to see how your efforts can enhance our understanding of the world and empower people to engage with sophisticated ideas. With your powerful content, you hold the impressive potential to inspire and educate countless individuals, fostering a deeper appreciation for math and its importance in our lives. Such efforts unquestionably play a crucial role in advancing our society as a whole. Thanks a million, Sir 3Blue1Brown. You are genuinely enhancing our world with the most insightful visual content currently available. Please continue for good.
@kirilchi 10 months ago
Was waiting for the continuation of this series ❤
@EPMTUNES 10 months ago
This video is a joyous moment in maths communications, as all your videos are.
@dakshnarula8036 1 month ago
Hey Grant, I've been a big follower of your videos. Could you please make a detailed series covering all the topics in combinatorics, statistics, and probability?
@salchipapa5843 10 months ago
I've forgotten pretty much everything I learned in college, but one thing I kind of sort of remember is that one way to convolve two functions is to take their Laplace transform and then multiply them. Convolution in the time domain is multiplication in the frequency domain, basically.
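That correspondence (convolution in one domain, multiplication in the other) is exactly what FFT-based convolution exploits. A small sketch of my own with numpy's discrete Fourier transform, zero-padding so the circular convolution matches the linear one:

```python
import numpy as np

rng = np.random.default_rng(5)
f = rng.random(64)
g = rng.random(64)

# Direct (linear) convolution.
direct = np.convolve(f, g)       # length 64 + 64 - 1

# Same thing via the frequency domain: transform, multiply pointwise, invert.
n = len(f) + len(g) - 1          # zero-pad to avoid wrap-around
freq = np.fft.rfft(f, n) * np.fft.rfft(g, n)
via_fft = np.fft.irfft(freq, n)

print(np.max(np.abs(direct - via_fft)))  # floating-point noise only
```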
@majdwardeh3698 10 months ago
Great video as always. Thanks a lot! Could you please make a video on manifolds, or on Lie groups and Lie algebras?
@christopherli7463 10 months ago
The animation at 7:17, rotating your radius r to be perpendicularly aligned with the background x-y Cartesian grid, is super. Once again the animation provides an immediate, visual, physically informed intuition that if you rotate it to align with the grid you'll preserve the area and simplify your computation. Just a small detail, but these animations are great. Thank you very much!
@christopherli7463 10 months ago
It's almost like the feeling in linear algebra when you change to a natural (eigen) basis to decouple your vectors/directions, and then the computation just proceeds orthogonally along their individual axes, not interfering with each other and becoming much more literally straightforward. So, rotation for a better coordinate system. This was a cool video, thanks!
@nkkk6801 10 months ago
My friend... I can't thank you enough for the "Essence of linear algebra" videos.
@lucasf.v.n.4197 9 months ago
I love you sir, your animations are awesome ❤
@torkelholm6577 10 months ago
So great seeing this video finally come out just as I finished statistics
@mathanimation7563 10 months ago
When you upload a video I feel happy, because I learn a new concept.
@jacoblojewski8729 10 months ago
That the convolution of two Gaussians is a Gaussian makes me think of some sort of metric (or pseudo-metric) space of integrable probability density functions with finite variance, modded out by equivalence under linear transformations of the dependent/independent variables, and then a contraction mapping theorem on it. Then the CLT would be a sort of "global" contraction mapping theorem. Wonder if that's provable or even makes sense; gonna go tinker around!
@Systox25 10 months ago
I can't wait to see what you are up to on the new channel. Take care!
@567kkd 9 months ago
My statistics and calculus professors love your videos.
@JulianCrypto 9 months ago
Impressive work
@LiborTinka 10 months ago
Reminds me of the old days of programming digital image processing, where we used a speed-up trick of repeatedly applying a box function to approximate a Gaussian filter. It was really fast, and no floating-point math was required.
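That trick works because of the CLT itself: a box kernel convolved with itself a few times is already close to a Gaussian. A sketch of my own, with an arbitrary width-5 kernel (the width and pass count are illustrative, not from the comment):

```python
import numpy as np

box = np.ones(5) / 5      # uniform (box) kernel
kernel = box
for _ in range(3):        # three extra passes: the box convolved with itself 4 times
    kernel = np.convolve(kernel, box)

# Compare against a sampled Gaussian with the same variance.
# Variance of one discrete box of width w is (w**2 - 1) / 12; variances add.
var = 4 * (5**2 - 1) / 12
x = np.arange(len(kernel)) - (len(kernel) - 1) / 2
gauss = np.exp(-x**2 / (2 * var))
gauss /= gauss.sum()

print(np.max(np.abs(kernel - gauss)))  # already small after only 4 boxes
```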
@klam77 10 months ago
Very very cool. Never learnt convolutions that way!
@coreyyanofsky 10 months ago
At 5:44, being super clear and specific: the properties that imply a 2D Gaussian are (i) being a function of x and y only through r, and (ii) independence, expressed as the functional equation g(r) = f(x)f(y). You mention independence earlier and it's on the screen in the upper right, but I think it's worth emphasizing that it's essential to the derivation.
@thanosauce9128 10 months ago
My god, he drops a video relevant to the topic I'm taking literally right after I finish it.
@Reda-Ou 9 months ago
The entropy explanation is really interesting and makes a lot of sense. As far as I can tell, what it is saying is this: noticing that convolving many different distributions leads to a Gaussian is the same as noticing that repeatedly sampling the microstate of a system (which is the same as sampling N independent, or approximately independent, atomic distributions of an equilibrium, maximal-entropy system) will, for large N, always correspond to the same value of a macrostate variable.
@ezxalidosman 10 months ago
I have really loved math since the day I started watching your videos, not gonna lie!
@simplyshocked9908 3 months ago
Very nice! Have you thought about making a video on the concentration of measure phenomenon in higher dimensions?
@Vikrampratapmaurya 10 months ago
This channel is one of the most popular channels in the field of advanced maths ❤❤
@fuwadhasan7553 9 months ago
Most of your videos go over my head 😅 as I'm not a good student. But I come here and watch every video because of your presentation. Thank you 😊
@michalchik 10 months ago
Here's a little idea that I figured out while thinking about catalysts in my high school chemistry class. There is a mysterious fact that's taught just for rote memorization in chemistry: catalysts lower the activation energy, but they don't shift equilibria. This has broadly been explained as: if catalysts could shift equilibria, then it would be possible to add and remove catalysts from a reaction chamber, shift the equilibrium back and forth, and essentially build a perpetual motion machine from which you could generate power. This fact was mysterious to me until I realized that the distribution of energies of molecules bouncing around a reaction chamber approaches the normal distribution. The amounts of the reactants and products are determined only by the relative differences in energy and the temperature, not the ease of transition. This would not be true for any other distribution I can think of.
@TonyWangYQ 9 months ago
Please also make a video on logistic regression - specifically how the sigmoid function implies probability. I think this would be an interesting topic! Thanks!!
@mahadlodhi 10 months ago
Great vid as always
@paniczgodek 9 months ago
That's all great, but will you ever make a video about the Kalman filter?
@monku1521 10 months ago
Thank you for the shoutout at the end! -Daksha
@dliessmgg 10 months ago
I remember learning that there are two kinds of stable points when you apply a process like repeatedly convolving a function with itself: - those where a slight disturbance will always bring you back to where you started - those where a slight disturbance will always send you away from where you started. It's obvious that the normal distribution is the first kind of stable point, but is there a function that is the second kind of stable point under repeated convolution?
@satyakiguha415 10 months ago
No one believed that math could be soooooooo beautiful before your channel was created.
@musicarroll 10 months ago
More generally, linear transformations of Gaussian-distributed random vectors are also Gaussian random vectors. This is one of the main reasons why Kalman filtering works. BTW, convolution is also a bilinear transformation on L^p spaces.
@Brandon_Tyr 9 months ago
Thanks for this series. I finally understand convolution! I have a question though: if convolution answers how to sum random variables, how do you handle other operations, like multiplication or exponentiation?
@bhanushikha1 9 months ago
Hi! Thank you so much for these videos. Could you please make videos on optimisation?
@lovishnahar1807 10 months ago
Love from India, sir; you teach the actual math. But could you also please provide some notes on advanced maths topics, as interactive as your lectures?
@r4fa3l59 10 months ago
Wonderful! Thanks for inspiring an entire generation to grasp and understand the true beauty of mathematics.
@markgross9582 8 months ago
Could you do a video on Sturm-Liouville theory and generalized Fourier series?
@jeanw4287 10 months ago
astounding quality as always
@tanmayshukla7339 9 months ago
Please make a playlist of this topic!! I wasn't able to watch your videos for some time, so it's all jumbled up!!
@Me-0063 10 months ago
Love the videos! They have “re-sparked” my interest in math
@henriqnuchoa 8 months ago
Please make a video like this about the Student's t-distribution!
@berryesseen 10 months ago
Another important question is how fast the distribution of the normalized sum of iid random variables converges to that of the Gaussian. One way to quantify this is to ask max_A |P[S in A] - P[W in A]|, where S is the sum and W is the Gaussian. This maximum scales as constant/sqrt(N), and this is known as the Berry-Esseen theorem. The constant depends on the third moment and the variance.

If you need an intuition for why the scaling is 1/sqrt(N), the answer lies in the gaps between the cumulants of S and W. Their first 2 cumulants are the same by design (mean and variance). Cumulants of W beyond the 2nd degree are all zero, while the cumulants of S beyond the 2nd degree go as c/sqrt(N), d/N, ... If you relate this gap with the inverse Fourier transform, you get probability gaps, and that c/sqrt(N) gap in the third cumulant leads to the scaling in the Berry-Esseen theorem.

The order of scaling (1/sqrt(N)) is also quite universal. You don't necessarily need iid: for example, it works for independent sums and Markov-1 sums. The dimension can be more than 1. You can even pass the sum through a smooth function.
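The 1/sqrt(N) rate can be seen exactly for fair-coin sums by comparing the binomial CDF against the Gaussian CDF (a sketch of my own, using only the standard library): the sup-gap roughly halves when N quadruples.

```python
from math import comb, erf, sqrt

def binom_cdf(k, n):
    """P[S_n <= k] for S_n ~ Binomial(n, 1/2), computed exactly."""
    return sum(comb(n, i) for i in range(k + 1)) / 2**n

def normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def sup_gap(n):
    """max_k | P[S_n <= k] - Phi((k - n/2) / sqrt(n/4)) |."""
    return max(
        abs(binom_cdf(k, n) - normal_cdf((k - n / 2) / sqrt(n / 4)))
        for k in range(n + 1)
    )

print(sup_gap(25), sup_gap(100))  # ≈ 0.08 and ≈ 0.04: halved as n quadruples
```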
@alwayshere6956
@alwayshere6956 10 months ago
Golly, I'd love a little on entropy and its application here. Almost important, even.
@vigilantcosmicpenguin8721
@vigilantcosmicpenguin8721 10 months ago
This is a very elegant explanation of what makes the normal curve so special, but it still seems a little [puts on sunglasses] ...convoluted.
@otakultur5624
@otakultur5624 10 months ago
I haven't watched yet, but thank you for this video; none of my university teachers ever explained this when I was studying probability!
@raymondfrye5017
@raymondfrye5017 10 months ago
Because they never apply it.
@Kulsgam
@Kulsgam 9 months ago
Can you make a video on the shoelace formula and how it is able to find the area of concave polygons?
@hejhhopp
@hejhhopp 10 months ago
Love this! Just one thing: it might have been better to use more distinct colors for the area of the slice and the 2D Gaussian; both of them look blueish, and it took me a while to figure out they were different. Anyway, no complaints, this was great!
@ShivamSharma-ob8ix
@ShivamSharma-ob8ix 10 months ago
Grant was and is my source of inspiration to master mathematics, and to become linguistically accurate! One of my heroes ❤.
@AliAhmed-xv1wu
@AliAhmed-xv1wu 8 months ago
Mashallah, my teacher. May God reward you and bless you for your beautiful explanation. With every word you say, I feel you've added a new piece of information for me. A smooth, detailed, and deep explanation; I can't describe my admiration for you. Thank you so very much... I hope to be like you someday. I also love your use of Python. I wish you success... Although I wish I could watch all your videos, the internet in my country is expensive, so I only watch the important ones.
@adwaitpandey2526
@adwaitpandey2526 10 months ago
I'd been waiting for this video for a long time.
@protocol6
@protocol6 10 months ago
You made me curious what cosh(-x^2) and sinh(-x^2) look like, since they sum to e^(-x^2). I've seen those curves somewhere before, I think as solutions of differential equations in some kind of gravitational-field context.
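For what it's worth, that pairing is just the hyperbolic identity cosh(t) + sinh(t) = e^t evaluated at t = -x^2. A quick numerical sanity check (plain NumPy; the grid [-3, 3] is an arbitrary choice, kept small because the two terms grow like e^(x^2) and cancel catastrophically for large |x|):

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 601)
# cosh(-x^2) = cosh(x^2) grows like e^(x^2)/2, while sinh(-x^2) = -sinh(x^2)
# is its mirror image; their sum collapses to the Gaussian bump e^(-x^2).
total = np.cosh(-x**2) + np.sinh(-x**2)
assert np.allclose(total, np.exp(-x**2))
print("cosh(-x^2) + sinh(-x^2) matches exp(-x^2) on [-3, 3]")
```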
@lanog40
@lanog40 10 months ago
FYI, there is a small typo at 9:10 in the challenge problem: "The transformatoin of the line..." Thank you for visualizing this connection!
@nagrajgovekar6305
@nagrajgovekar6305 10 months ago
Can you please make a video on open problems in mathematics or theoretical physics? It would be awesome to learn about real open problems the way you present things.
@arjunshah7105
@arjunshah7105 8 months ago
Could you please make a video on modelling the surface area of an egg?
@rayhanlahdji
@rayhanlahdji 9 months ago
Freshman me would thank you a lot. "Why normal?" was the biggest unanswered question throughout my stats undergrad.