Logarithmic nature of the brain 💡

223,691 views

Artem Kirsanov

1 day ago

Shortform link:
shortform.com/artem
My name is Artem, I'm a computational neuroscience student and researcher.
In this video we will talk about the fundamental role of the lognormal distribution in neuroscience. First, we will derive it through the Central Limit Theorem, and then explore how it supports brain operations on many scales - from cells to perception.
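To make the derivation mentioned above concrete, here is a minimal sketch (assuming numpy and matplotlib; the factor count and ranges are illustrative, not taken from the video): summing many independent positive factors gives a roughly normal histogram, while multiplying them gives a skewed, lognormal-looking one whose logarithm is again bell-shaped.

```python
# Illustrative sketch (not from the video): additive vs multiplicative combination.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
factors = rng.uniform(0.5, 1.5, size=(100_000, 50))  # 50 independent positive factors per trial

sums = factors.sum(axis=1)       # additive interaction -> approximately normal
products = factors.prod(axis=1)  # multiplicative interaction -> approximately lognormal

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].hist(sums, bins=100)
axes[0].set_title("sum of factors (≈ normal)")
axes[1].hist(products, bins=100)
axes[1].set_title("product of factors (skewed)")
axes[2].hist(np.log(products), bins=100)
axes[2].set_title("log of product (≈ normal)")
plt.tight_layout()
plt.show()
```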
REFERENCES:
1. Buzsáki, G. & Mizuseki, K. The log-dynamic brain: how skewed distributions affect network operations. Nat Rev Neurosci 15, 264-278 (2014).
2. Ikegaya, Y. et al. Interpyramid Spike Transmission Stabilizes the Sparseness of Recurrent Network Activity. Cerebral Cortex 23, 293-304 (2013).
3. Loewenstein, Y., Kuras, A. & Rumpel, S. Multiplicative Dynamics Underlie the Emergence of the Log-Normal Distribution of Spine Sizes in the Neocortex In Vivo. Journal of Neuroscience 31, 9481-9488 (2011).
4. Morales-Gregorio, A., van Meegen, A. & van Albada, S. J. Ubiquitous lognormal distribution of neuron densities across mammalian cerebral cortex. biorxiv.org/lookup/doi/10.1101... (2022) doi:10.1101/2022.03.17.480842.
OUTLINE:
00:00 Introduction
01:15 What is Normal distribution
03:03 Central Limit Theorem
04:23 Normality in biology
05:46 Derivation of lognormal distribution
10:20 Division of labour in the brain
12:20 Generalizer and specialist neurons
13:37 How lognormality arises
15:19 Conclusion
16:00 Shortform: sponsor message
16:54 Outro
CREDITS:
Icons by biorender.com/
Mathematical animations were created using Manim CE python library - www.manim.community/

Comments: 309
@ArtemKirsanov 2 years ago
Join Shortform for awesome book guides and get 5 days of unlimited access! shortform.com/artem
@adityakulkarni4549 1 year ago
@Artem Kirsanov the text at 15:03 doesn't seem to correspond to the biorxiv paper you have linked in the description 😅
@defenestrated23 1 year ago
Log-normal distributions are closely related to pink noise (power is 1/freq), since d(log) = 1/x. This is said to be the hallmark of self-organization. It shows up everywhere you have fractal symmetry: brains, turbulence, finance, weather, even migration patterns.
@whannabi 1 year ago
The everything
@Maouww 1 year ago
yep I was totally thinking of Quantitative Linguistics the moment log-normal distribution cropped up
@Simonadas04 1 year ago
D(ln)=1/x
@luker.6967 1 year ago
@@Simonadas04 some people prefer log to denote ln, since log base e is more common in pure mathematics.
@Simonadas04 1 year ago
@@luker.6967 i see
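For reference, the derivative these replies are pinning down is the standard calculus fact (not specific to the video):

$$\frac{d}{dx}\ln x = \frac{1}{x}, \qquad \frac{d}{dx}\log_b x = \frac{1}{x \ln b},$$

so writing "log" for the natural log only changes the result by the constant factor $1/\ln b$ when another base is used.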
@aitor9185 1 year ago
Great video! Super happy to see my paper about neuron densities made it into this video 15:12 :)
@isaac10231 1 year ago
Wow, the YouTube algorithm is crazy
@elismith4040 1 year ago
As an electrical engineer who has also always been extremely interested in neuroscience, stumbling across this channel is pure gold.
@BiancaAguglia 2 years ago
Thank you for all the effort you put into your videos, Artem. You're doing a great job taking complex topics and making them easy to visualize and to understand. In case you're looking for topic suggestions for future videos, I have a few:
1. The curriculum you would follow if you had to start from scratch and wanted to teach yourself neuroscience (computational or, if you prefer, a different concentration)
2. Sources of information neuroscientists should follow in order to stay current with the research in the field (e.g. journals, labs, companies, people, etc.)
3. A list of open problems in neuroscience
Thank you again for your videos. Keep up the great work. 😊
@ArtemKirsanov 2 years ago
Thank you for wonderful suggestions! Right now, I'm actually preparing the script for a video about getting started with computational neuroscience! So stay tuned ;)
@BiancaAguglia 2 years ago
@@ArtemKirsanov Thank you. I look forward to it. 🙂
@leif1075 2 years ago
@@ArtemKirsanov Can you clarify how exactly normal distributions arise eventually, even when you have wildly extreme and different values? Is it basically just evening out?
@iwanttwoscoops 1 year ago
@@leif1075 pretty much! look at height; there's a wide variance, and in any town you can find a tiny person and a giant. But overall, most people are average height, and these outliers are rare. Hence normal
@fabiopakk 2 years ago
Excellent video, Artem! I enjoy a lot watching your videos, they are incredibly well done and explained. I particularly liked the ones involving topology.
@ImBalance 1 year ago
The best explanation of logarithms I've ever seen. How surprising that a neuroscience YouTube video managed to describe the concept and its application so much more completely than any of the math classes I've ever taken. Well done!
@lbsl7778 1 year ago
This channel is the most beautiful thing that has happened in my life this week, maybe even this month. Thank you for your effort, greetings from Mexico!
@threethrushes 1 year ago
I studied statistics for biologists at university some 25 years ago. Your explanations are logical and intuitive. Good job Artem.
@giacomogalli2448 1 year ago
Your videos are fantastic for anyone interested in neuroscience! I never studied it in depth but it's fascinating and I'm discovering it
@someone5781 1 year ago
This was one of the most mindblowing videos I've seen in a while. Such amazing content Artem!
@nedfurlong8675 1 year ago
Your videos are fantastic. What an excellent communicator!
@emmaodom7201 1 year ago
Wow you are such an effective communicator!!! Your insights were very clear and easy to understand
@omaryahia 1 year ago
I am happy I didn't skip this video, and now I know another great channel for math and science. Thank you, Artem. Great quality, and topics I am interested in.
@SudhirPratapYadav 1 year ago
one word, EXCELLENT!!! So happy to watch this.
@danin2013 2 years ago
i love your channel and the way you explain everything with such detail!
@Boringpenguin 2 years ago
On a completely unrelated note, the lognormal distribution also pops up in the field of mathematical finance! In particular, it is used to model the stock prices in the Black-Scholes model.
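A minimal numpy sketch of that connection; the drift, volatility and other parameters below are made-up illustrative values, and this is a toy Monte Carlo rather than anything quoted from the Black-Scholes literature:

```python
# Toy geometric Brownian motion: multiplicative returns give a lognormal terminal price.
import numpy as np

rng = np.random.default_rng(1)
s0, mu, sigma, T, n_steps, n_paths = 100.0, 0.05, 0.2, 1.0, 252, 100_000
dt = T / n_steps

# each step adds a small Gaussian increment to the log-price, i.e. multiplies the price
log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
price_T = s0 * np.exp(log_returns.sum(axis=1))

# log(price) is approximately normal with mean log(s0) + (mu - sigma^2/2)*T and std sigma*sqrt(T)
print(np.log(price_T).mean(), np.log(price_T).std())
```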
@ArtemKirsanov 2 years ago
Wow, cool info! Thanks for sharing
@BiancaAguglia 2 years ago
The Wikipedia page on the log-normal distribution has some examples too:
- city sizes
- number of citations of journal articles and patents
- surgery durations
- length of hair, nails, or teeth
- length of chess games
- length of comments in forums, etc.
It's an interesting read.
@KaliFissure 1 year ago
Most stimulating content in ages! 👍🖖🤘
@valor36az 1 year ago
Your videos are such high quality. Thanks for the effort.
@stevenschilizzi4104 1 year ago
Brilliant, Artem! And fascinating.
@TheBrokenFrog 1 year ago
This is exactly what I was looking for today!! How strange that I found this exact topic here. Thank you :)
@matveyshishov 2 years ago
Beautiful, thank you!
@95VideoMan 1 year ago
Thanks! This is fascinating and useful information. You presented it so clearly, and the visuals were top notch. Really appreciate this work.
@uquantum 1 month ago
Terrific video, Artem. Mind-blowing: not only the production values, but in particular the highly engaging content. Thank you for sharing it with us. Fantastic ❤
@neutrino9 1 year ago
Truly amazing topics, thank you !
@ruperterskin2117 1 year ago
Cool. Thanks for sharing.
@kapteinskruf 1 year ago
Outstanding!
@bofloa 1 year ago
This lecture is wow...thanks
@Jeffben24 1 year ago
Thank you Artem ❤
@khwlasemaan8135 2 years ago
Impressive ... neuroscience is a powerful topic
@justmewendy6461 1 year ago
Very good. Thank you.
@luwi8125 1 year ago
Thank you for a great video! Very interesting topic and very nice of you to show the article to make people more likely to actually look it up for themselves. 😀👍
@davispeixoto 12 days ago
Your videos are awesome!
@lakshminarayananraghavendr319 1 year ago
Thanks for the informative video
@gz6963 1 year ago
Thanks for the clear explanation, great video
@NoNTr1v1aL 2 years ago
Absolutely amazing video! Subscribed.
@lucascsrs2581 1 year ago
This channel is a hidden gem. +1 subscriber
@zwazwezwa 1 year ago
Excellent video, much appreciated!
@suhanovsergey 1 year ago
Thanks for high quality content! I love the use of palmistry as an example of a random process at 3:42 :)
@quentinmerritt 1 year ago
Dude that’s so cool! I’m a first year grad student at OSU looking to research Nuclear Theory! And I’ve been watching your videos since late high school, I’d love to see a series on QFT!
@ward_heimdal 7 months ago
This is definitely one of my favourite channels now. Up there with 3B1B. You explain things really well, and the topics you cover are just my cup of tea.
@enricoginelli3405 1 year ago
Super cool video Artem! Keep up!
@peterbenoit5886 1 year ago
Wonderful content on a most interesting topic.
@umerghaffar4686 9 months ago
I can’t believe this valuable information is available on YT for free!! I just finished my A-level studies and am keen on biology and neuroscience, so I loved getting to see a computational perspective on the brain. Makes me wonder where else log-normal distributions can be seen in the body, or what other mathematical models can be deduced in biological systems. Keep it up!
@accountname1047 1 year ago
This video is fantastic
@knaz7468 1 year ago
Really nice explanation, thanks!
@ASMM1981EGY 1 year ago
Awesome episode
@horizn9982 6 months ago
Wow man amazing videos, I wanna do research as a computational neuroscientist and your content is really what I was looking for!
@RanLevi 1 year ago
That was amazing! Great work, Artem - love your videos :-)
@SuperEbbandflow 2 years ago
Excellent video, keep the great content coming!
@ArtemKirsanov 2 years ago
Thanks!
@bovanshi6564 2 years ago
Great video, really interesting!
@mapnzap 1 year ago
That was very well done
@QasimAlKhuzaie 1 year ago
A very interesting video. Thank you very much
@alkeryn1700 1 year ago
I once wrote a spiking neural net with around a million neurons. Some neurons would fire almost every iteration, some every 10 iterations, and some would average once every few thousand. I didn't bother to plot the distribution, but that could have been fun.
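A toy version of the plot the commenter skipped, assuming (as the video argues) that the rates are lognormal; the parameters and units below are purely illustrative:

```python
# Illustrative firing-rate histogram: skewed on a linear axis, bell-shaped on a log axis.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
rates = rng.lognormal(mean=0.0, sigma=1.5, size=1_000_000)  # "spikes per iteration", toy units

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.hist(rates, bins=200)
ax1.set_title("firing rates, linear axis (heavily skewed)")
ax2.hist(np.log10(rates), bins=200)
ax2.set_title("log10 firing rates (roughly normal)")
plt.tight_layout()
plt.show()
```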
@user-mc2gm6fz9i 2 years ago
great video analysis
@whiteoutTM 1 year ago
fascinating and engaging!
@Luper1billion 1 year ago
It's interesting because I thought the video would be about how the brain perceives information logarithmically, but it actually shows that the brain is physically built logarithmically as well.
@EPMTUNES 1 year ago
Great video!
@CoolDudeClem 1 year ago
I just want to probe the parts of my brain where the picture and sounds form so I can record my dreams and then play them back like a movie.
@buckrogers5331 1 year ago
I've been interested in brain science since I was a kid. This is definitely understandable to a 10 year old kid. Well done! More content please!! And you should have more subscribers!!
@soymilkman 1 year ago
damn you must be hella smart for a 10 yr old
@isaacharton7851 1 year ago
Very productive vid. It inspires me to be productive as well.
@dalliravitejareddy3089 1 year ago
great effort
@PalCan 1 year ago
Awesome video
@sunkojusurya2864 1 year ago
Insightful video. 👍 Keep going.
@Marquesremix 1 year ago
Your videos have good dynamics and didactics, and the editing is very harmonious. It's really surprising that you don't have 1 million subscribers. One more subscriber from Brazil 🇧🇷
@jpcf 1 year ago
High quality content here!
@sytelus 1 year ago
Thanks!
@asdf56790 1 year ago
Amazing video!
@chistovmaxim 1 year ago
Really interesting video for someone researching NNs, thanks!
@4shotpastas 1 year ago
Me, knowing nothing about how the brain works: "Pfft, of COURSE it's logarithmic. Why wouldn't it be?"
@goid314 1 year ago
Interesting video!
@666shemhamforash93 2 years ago
Great video! I would love to see a follow-up video on neuronal avalanches and the critical brain hypothesis. A nice review on the topic that you might find useful is "Being critical of criticality in the brain" by Beggs and Timme (2012).
@ArtemKirsanov 2 years ago
Thank you! I will definitely look into it!
@leif1075 2 years ago
@@ArtemKirsanov Thank you for sharing Artem. I hope you can respond to my message about how to deal with scientific papers and dealing with math when you can. Thanks very much.
@a__f 1 year ago
Interestingly, I used to work in solar physics where avalanches are also a commonly used model for how solar flares occur
@lambdo 2 years ago
Wonderful explanation of the Gaussian distribution
@editvega803 1 year ago
Wow! An amazing video! Thank you very much Artem. You have a new subscriber from Argentina 🇦🇷
@mukul98s 1 year ago
I studied advanced mathematics in my last semester but never understood the concepts of random variables and distributions with this much clarity. Amazing video with a great explanation.
@chipsi21 2 years ago
Wow so awesome, thanks a lot 🤙🏻🤙🏻
@TheDRAGONFLITE 2 years ago
Nice video! Great pacing
@jeromewelch7409 1 year ago
Nice!
@Tabbywabby777 1 year ago
Great animations and explanations. However, as a fellow scientist and learner I wish that you had presented the central limit theorem and the derivation of the log-normal distribution in its full mathematical glory. I feel that half the power of Manim is in its ability to concisely represent both the graphical and textual aspects of mathematics; to avoid one of them is to kneecap the platform. As a learner it is essential that I build associations between the graphical and textual representations. I think you did this better in your video on wavelets! Anyway, thank you so much for taking the time to create these videos. I am sure that they will make a lasting contribution to the field of computational neuroscience and inspire students for years to come.
@AswanthCR7 1 year ago
Loved the video and the presentation :) Can biasing the weights of an artificial neural network toward such a log normal distribution provide any advantage?
@anonymous_4276 1 year ago
Exactly what I was wondering!
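One way to state the experiment the two commenters are asking about, with no claim that it actually helps training; this is a numpy-only sketch and every name and number in it is an assumption:

```python
# Draw weight magnitudes from a lognormal distribution (few strong, many weak) and
# attach random signs, next to a standard Gaussian initialization for comparison.
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 256, 128

gaussian_w = rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(fan_out, fan_in))

magnitudes = rng.lognormal(mean=-3.0, sigma=1.0, size=(fan_out, fan_in))
signs = rng.choice([-1.0, 1.0], size=(fan_out, fan_in))
lognormal_w = magnitudes * signs

# Compare overall scales; whether the skewed version helps is exactly the open question above.
print("gaussian std:", gaussian_w.std(), " lognormal |w| mean:", np.abs(lognormal_w).mean())
```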
@inversebrah 1 year ago
learned a lot, ty
@aaronsmith6632 1 year ago
Freaking fascinating. I imagine these properties would transfer to neural network design as well!
@abhishek101sim 1 year ago
Helpful content, with a good lowering of the entry barrier for someone uninitiated. I learned a lot. A small but important point: the sum of independent random variables is not normally distributed, but the mean of independent random variables is normally distributed.
@stipendi2511 1 year ago
Technically you're right, since the limit of the sum of the random variables diverges. However, I don't think stressing that point helps with conceptual understanding, since in practice all sums are finite, and then the sum approximately resembles the SHAPE of a normal distribution. Once you normalize it, which is what taking the mean does, you obtain a probability distribution.
@Abhishek-zb3dp 1 year ago
Technically it's not the mean but mean times sqrt(n) where n is the number of samples taken to get the mean and under the limit that n is large. Otherwise the mean would just be a point as n becomes very large.
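For reference, the standard statement this thread is converging on (the classical CLT, stated here for completeness rather than quoted from the video): if $X_1,\dots,X_n$ are i.i.d. with mean $\mu$ and finite variance $\sigma^2$, and $S_n = X_1 + \dots + X_n$, then

$$\frac{S_n - n\mu}{\sigma\sqrt{n}} \;\xrightarrow{d}\; \mathcal{N}(0,1), \qquad \text{equivalently} \qquad \frac{\sqrt{n}\,(\bar{X}_n - \mu)}{\sigma} \;\xrightarrow{d}\; \mathcal{N}(0,1).$$

Neither the raw sum nor the raw mean is normal in the limit; both need centering and scaling. Informally, for large finite $n$ the sum is approximately $\mathcal{N}(n\mu,\; n\sigma^2)$, which is the bell shape being discussed.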
@geodesic616 1 year ago
Why are guys like this so under-subscribed? Wish you success.
@carlotonydaristotile7420 2 years ago
Cool video.
@aksamitnaPiesc 1 year ago
good job
@Andres186000 1 year ago
3:56 The Cauchy distribution does not follow the central limit theorem and actually shows up relatively frequently, so that is a caveat to the Central Limit Theorem that is worth noting. This is actually a very, very big deal when talking about epidemiology and market behaviors.
@marcelo55869 1 year ago
The central limit theorem applies when many different independent variables each give a small contribution to the output of the bell curve. That's why, in the limit of N going to infinity, it holds true whichever distribution you choose for the variables. However, if one variable outweighs the others, or the variables are somehow correlated, or the number of variables N is not large enough, the theorem does not hold.
@gideonk123 1 year ago
The ordinary Central Limit Theorem is valid only for random variables which have a finite variance. Therefore it is NOT relevant for sums of variables where each is distributed as Cauchy, because Cauchy variables do not have finite variance (incidentally, they do not have a mean either…). But there is a different kind of limit theorem valid for ALL distributions: alpha-Stable distributions! Cauchy variables are a special case with alpha = 1.
@Evan490BC 1 year ago
@@gideonk123 Yes, exactly, this is the reason.
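A quick numerical check of this caveat, as a sketch (numpy only; trial counts are chosen just to keep the run small): the spread of the sample mean shrinks like 1/sqrt(n) for Gaussian data but does not shrink at all for Cauchy data.

```python
# Sample means of Gaussian vs Cauchy data: the CLT scaling works for one, not the other.
import numpy as np

rng = np.random.default_rng(7)
for n in (10, 1_000, 100_000):
    gauss_means = rng.normal(size=(200, n)).mean(axis=1)
    cauchy_means = rng.standard_cauchy(size=(200, n)).mean(axis=1)
    print(n,
          "gaussian 5-95%:", np.percentile(gauss_means, [5, 95]).round(3),
          "cauchy 5-95%:", np.percentile(cauchy_means, [5, 95]).round(3))
```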
@Treviisolion 1 year ago
The shape certainly makes some intuitive sense. Extremely short firing rates are more likely to be mistaken as random noise so a neuron wants to be above that limit. However, it doesn't want to be too far above it, because firing is energy-intensive and the brain is already a calorie-hungry organ. At the same time if information is encoded partially in the firing rate, then utilizing only a small subsection of possible firing rates is not information efficient, so neurons that need to be heard more often would be incentivized to use lower utilized firing rates as there is less noise in those channels. I don't know whether that explanation would necessarily result in a log-normal distribution as opposed to a low-median normal distribution, but it is interesting to see roughly the shape I was thinking emerge at the end.
@crimfan 1 year ago
Lognormal is the central limit theorem for RVs that combine in a multiplicative fashion (as long as the tails aren't too heavy).
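Spelled out as a short derivation (the standard argument; the finite-variance caveat is the one discussed in the thread above): if $Y = \prod_{i=1}^{n} X_i$ with independent positive factors $X_i$, then

$$\ln Y = \sum_{i=1}^{n} \ln X_i,$$

and applying the ordinary CLT to the terms $\ln X_i$ (assuming they have finite variance) makes $\ln Y$ approximately normal, i.e. $Y$ approximately lognormal.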
@isanewday 1 year ago
Interesting
@cynido 1 year ago
Brain is the most complex and fundamental part of our body - Brain
@anywallsocket 1 year ago
What it means is that for the things we measure to be lognormal, we are assuming an additive linearity, when there likely exists a more natural measure of the thing in a multiplicative non-linearity, e.g. one that doesn't ignore the fact that the thing is self-interacting, or grows from itself.
@dmitritomanencu8535 1 year ago
great
@reocam8918 1 year ago
Awesome! Can I ask how you create these fantastic animations? Thanks!
@YonatanLoewenstein 1 year ago
Very nice! An explanation of why the distribution of firing rates in the cortex is log-normal can be found in Roxin, Alex, et al. "On the distribution of firing rates in networks of cortical neurons." Journal of Neuroscience 31.45 (2011): 16217-16226.
@wilsonbohman3543 1 year ago
I have a heavy background in audio production, and I figured this made a lot of sense given the logarithmic nature of how we perceive sound. It's cool to see that this is just inherent to our brains in general.
@lucusekali5767 1 year ago
You deserve a subscribe.
@maxmyzer9172 1 year ago
3:21 this video so far is more helpful than the statistics course i took
@TheKemalozgur 1 year ago
Whenever there is log-normal behaviour, we can think of the connected and combined behaviour of things, namely an evolutionary step. The order of importance of things can only be stabilized enough in a logarithmic fashion.
@smotocel69 1 year ago
Subscribed.