Dendrites: Why Biological Neurons Are Deep Neural Networks

230,433 views

Artem Kirsanov

1 day ago

Comments: 564
@ArtemKirsanov · 1 year ago
Keep exploring at brilliant.org/ArtemKirsanov/ Get started for free, and hurry: the first 200 people get 20% off an annual premium subscription.
@tetrahexo5592 · 1 year ago
Hey, I love your unique work 😁👍. May I ask if you would someday like to explain the basics of synaptic plasticity in biological neural networks?
@JohnSmith-ut5th · 1 year ago
I developed a theory about 15 years ago that neurons actually function more like mathematical group permutations than anything else. Their goal is to organize and group action potentials from synapses (putting similar action potentials closer together in time prior to integration). I developed C code to show how they can be used to solve a 50-node 2D Hamiltonian path problem extremely fast (in C it was solved almost instantly). The dendritic growth of individual neurons was modeled by a genetic algorithm. I never published my research, so anyone is welcome to try to reproduce this.
@aissahoi7308 · 1 year ago
Love the video. ♥️♥️ Can anyone tell me what software was used for the animations? 🙏🏻🙏🏻
@mattaku9430 · 1 year ago
On an interesting note: we don't just have static weights, we also have fast weights; they're dynamic and decay fast.
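
For context, "fast weights" here likely refers to ideas like Ba et al.'s "Using Fast Weights to Attend to the Recent Past": a second set of weights that is updated Hebbian-style and decays within a few steps. A toy sketch, with all sizes and constants as illustrative assumptions:

```python
# Toy "fast weights": a rapidly decaying Hebbian trace added to slow weights.
import numpy as np

rng = np.random.default_rng(0)
n = 4
W_slow = rng.normal(size=(n, n)) * 0.1   # ordinary, slowly learned weights
W_fast = np.zeros((n, n))                # fast weights start at zero
decay, eta = 0.9, 0.5                    # fast weights decay quickly each step

def step(h):
    """One update: fast weights accumulate a decaying outer-product trace."""
    global W_fast
    W_fast = decay * W_fast + eta * np.outer(h, h)  # Hebbian update
    return np.tanh((W_slow + W_fast) @ h)           # both weight sets act together

h = rng.normal(size=n)
for _ in range(5):
    h = step(h)
print(h)
```
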
@agrymakismanolis7925 · 1 year ago
Neuroscience PhD student here. Having followed this topic, although from a distance (I am more into molecular neurobiology), I must say you did a very good job of explaining this very interesting topic. I have been in lectures by some of the researchers you included, and I understood more from you than I did then. Thank you!
@ArtemKirsanov · 1 year ago
Wow, thank you so much! It's really nice to hear that!
@zakiyo6109 · 1 year ago
Researchers are not teachers. I don't know why universities haven't understood that yet.
@Caius1930 · 1 year ago
@zakiyo6109 Especially so at top universities. They're mostly hired for the funding and citations they can attract, rather than for teaching ability...
@michelstronguin6974 · 1 year ago
Sparse distributed representation (the Thousand Brains theory) from Numenta is more accurate, since it's based on actual biological data.
@JiroMusik · 1 year ago
I saw another video here on YT where someone talked about a study showing consciousness is just a download. Pretty sure in
@mikip3242 · 1 year ago
This is crazy interesting stuff. The whole thing reminds me of the development of the concept of atoms. Atoms were supposed to be the indivisible building blocks of matter from which you could build anything. The Greek concept was adapted to what we now call... well... atoms. The problem is that atoms are not elementary building blocks: as it turns out, they are complex systems made of multiple more fundamental particles that can be arranged in many, many ways to build atoms in very different states and configurations. So the real Greek concept of "atom" should be applied to the fundamental particles of the Standard Model, and what we call atoms should be renamed. The same goes for neurons. In AI, a "neuron" is just the smallest conceivable operational unit that you can use to build more complicated logic systems. Historically we thought that biological neurons were just that, the smallest indivisible element of logic (a simple lightbulb, a binary switch), but then, like when we split the "atom", we discovered that a biological neuron is really equivalent to an ensemble of simpler logic gates. So the actual "atoms of logic", the neurons in neural networks, should not be confused with neurons in biology.
@ArtemKirsanov · 1 year ago
Wow, this is actually a fascinating insight!! I've never thought about it.
@mikip3242 · 1 year ago
@ArtemKirsanov By the way, your explanation of the perceptron was one of the best I've ever seen. Clear, straight to the point, and aesthetically pleasing. Congratulations on the awesome channel.
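
For readers who haven't seen the video: the classic perceptron under discussion is just a weighted sum passed through a hard threshold. A minimal sketch (the AND weights below are hand-picked for illustration):

```python
# The classic perceptron: weighted sum of inputs + hard threshold.
import numpy as np

def perceptron(x, w, b):
    """Fire (1) if the weighted input crosses the threshold, else 0."""
    return int(np.dot(w, x) + b > 0)

# Example: a perceptron computing logical AND (a linearly separable function).
w, b = np.array([1.0, 1.0]), -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), w, b))
```

No single perceptron can compute XOR, which is exactly why the XOR-capable dendrites discussed later in the thread are so striking.
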
@starxcrossed · 1 year ago
We’re computers made of computers
@andrewdunbar828 · 1 year ago
In general, all science and discovery is fractal-shaped. The picture is never complete, because the more you zoom in, the more details there are, and you realize everything you saw before zooming was just an approximation.
@jamescheddar4896 · 1 year ago
They're called elements when they're atoms, so a single... elementoid?
@corsaircaruso471 · 1 year ago
I'm a fine arts doctorate with only 6 or so credit hours of formal science education post-high school. You were able to make this topic at least comprehensible for me. Thanks for the engaging format!
@ArtemKirsanov · 1 year ago
Thank you! I'm glad you enjoyed it :)
@your_-_mom · 1 year ago
What job are you looking for?
@Afkmuds · 1 year ago
@your_-_mom Petri dish
@casey7882 · 1 year ago
As an undergraduate currently studying to go into computational neuroscience, I feel privileged that such a high-quality channel exists for a relatively niche subject like this. Keep up the good work!
@lucascsrs2581 · 1 year ago
Awesome content. It's always amazing to see someone who can explain advanced topics in terms that even those unfamiliar with the field can understand. Fascinating stuff.
@FriedChairs · 1 year ago
While I didn't understand most of it, I subscribed just because I can appreciate the value he is providing for free with this content.
@tedarcher9120 · 1 year ago
Depolarisation is decreasing voltage but increasing potential. A slight correction from a physicist ;)
@stafan102938 · 1 year ago
But if a neuron sits at a negative value in its resting state, surely it's the other way round? The voltage gets closer to 0, so it increases, while the potential decreases?
@tedarcher9120 · 1 year ago
@stafan102938 Voltage is just a potential difference, so it is a matter of labeling. It makes more sense to use it in absolute terms in this case.
@stafan102938 · 1 year ago
@tedarcher9120 But regardless of using negative or positive voltages to talk about it, a depolarisation always refers to a reduction in the potential difference across the membrane. Plus, the voltage value falls out of an equation that uses ion concentrations to calculate it, which is why it's consistently referred to as negative inside the cell and positive outside.
@stafan102938 · 1 year ago
@tedarcher9120 Maybe I've just got the terminology backwards in my understanding of it all, though.
@stafan102938 · 1 year ago
@tedarcher9120 Going to blame my school physics teacher for giving cop-out answers when explaining voltage and potential.
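
For reference, the "equation which uses ion concentrations" mentioned in this thread is the Nernst equation, E = (RT/zF) ln([ion]_out / [ion]_in). A quick calculation with typical textbook concentrations (values are illustrative):

```python
# Nernst equilibrium potentials for K+ and Na+ at body temperature.
import numpy as np

R, T, F = 8.314, 310.0, 96485.0   # gas constant, temperature (K), Faraday constant

def nernst(z, c_out, c_in):
    """Equilibrium potential (mV) for an ion of valence z, given concentrations."""
    return 1000 * (R * T / (z * F)) * np.log(c_out / c_in)

print(f"E_K  = {nernst(+1, 5.0, 140.0):.1f} mV")   # ~ -89 mV: sets the negative resting potential
print(f"E_Na = {nernst(+1, 145.0, 12.0):.1f} mV")  # ~ +66 mV: Na+ influx depolarizes toward this
```

This is why "depolarization" conventionally means the membrane potential moving from around -70 mV toward 0 and beyond, regardless of how one labels voltage versus potential.
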
@mgostIH · 1 year ago
I loved this! It gives a very optimistic outlook: artificial neural networks can at the very least simulate our own neurons if they need to in order to solve a problem. It's important to note that while a deep neural network can approximate a single biological neuron with a lot of layers and parameters, this only provides an upper bound on the total number of parameters you'd need to approximate multiple biological neurons! Not only may there be a more efficient architecture the authors haven't tried, but it may be that as artificial neural networks scale up they become more efficient at handling the equivalent of an individual biological neuron. This is because the overall behaviour of a system may be simpler to model by exploiting correlations and limiting behavior, just like a glass of water with respect to all of the H2O molecules composing it.
@revimfadli4666 · 1 year ago
Sounds like what Sandler, Kirsch, and Schmidhuber have tried to do, with promising results.
@moormanjean5636 · 1 year ago
Yes, it's very likely that your second explanation is at play. Grasping the function of a neuron with active dendrites would be much easier when the dimension of the computational manifold is scaled up.
@StanislavMudrets · 1 year ago
Neurons don't get put together like a "neural net". They start out as whole undifferentiated cells and then progressively specialize into neurons. Neural nets are put together like machines: from a combination of separate parts, components, functions, etc., according to a plan. In other words, although neural nets can simulate some abstract aspect of neurons that might interest humans, you can't really understand neurons using neural nets. For example, neural nets don't socialize. They don't generate projections and try to make it in neuron society, etc. It's kind of like studying humans using speech-recognition software that's incapable of going out and making friendships, looking for work on its own, trying to find a spouse, etc.
@revimfadli4666 · 1 year ago
@StanislavMudrets Would graph convolution or attention be enough socialisation?
@StanislavMudrets · 1 year ago
@revimfadli4666 Is this activity planned by a researcher, or something that a bunch of math functions just decided to do on their own? Math functions performing another's will don't do anything spontaneously for their own purposes. They are completely parasitic on human purpose.
@magnuswahlstrom766 · 1 year ago
Fascinating. Especially the idea that neurons are sensitive to the activation order of signals along their dendrites. I remember a talk long ago where the speaker discussed which aspects of the brain's (or the neurons') configuration are actually coded for in our genes, and argued that (a) almost none of the neuron-to-neuron connections or connection strengths are controlled (as you might expect if the brain were a giant network of threshold gates), but (b) great care is taken to control, across a large variety of different neuron cell types, precisely where along the dendrite each nerve cell type connects and in what manner (which makes no sense in a threshold-gate model). That seems rather more on-point with the model you're describing!
@ramotimeramotude6448 · 1 year ago
I have a PhD from Berkeley in neuroscience and engineering and currently work in ML, and you break this stuff down really well. I love the digested summary of the paper equating a neuron to a small CNN. Thanks, and keep it up, man.
@cdreid9999 · 1 year ago
I envy you so much. I never had your opportunities, but I wanted to do AI research as a programmer in the '80s. It was learning the most basic parts of neuroscience that showed me (and others) how we would achieve it. Unfortunately, it also helped me understand how we needed huge leaps in processing power to get there. It is amazing seeing real AI coming to fruition in my lifetime.
@ArtemKirsanov · 1 year ago
Thank you!
@mofo78536 · 1 year ago
I feel like the insight that each neuron is essentially a small CNN could actually be a useful intuition for better neural hardware acceleration, if we could keep the number of long-distance connections between neurons down. Much like how software designed to be multithreaded is better for multicore CPUs.
@13lacle · 1 year ago
Great video! It's cool that single neurons can act as their own mini deep neural networks. I think it's worth clarifying that even though biological neurons are not equivalent to perceptron neurons in function, all this does is move the perceptron to a sub-part of the neuron. Meaning that the fundamental principle is the same (i.e. a network of sub-networks is still a single network on the whole).
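
A toy sketch of that "network of sub-networks" point: each "neuron" below is itself a small MLP, and wiring several together still yields one ordinary network. All sizes and random weights are illustrative assumptions, not taken from the paper:

```python
# Each "biological neuron" modeled as a small MLP, then wired together.
import numpy as np

rng = np.random.default_rng(1)

def make_deep_neuron(n_in, hidden=8, layers=3):
    """One 'neuron' as a small multi-layer perceptron: R^n_in -> (0, 1)."""
    dims = [n_in] + [hidden] * layers + [1]
    return [rng.normal(size=(dims[i + 1], dims[i])) * 0.5 for i in range(len(dims) - 1)]

def run_deep_neuron(weights, x):
    for W in weights[:-1]:
        x = np.maximum(0, W @ x)                        # ReLU "dendritic" stages
    return 1.0 / (1.0 + np.exp(-(weights[-1] @ x)))     # somatic output

# A network of sub-networks: two deep neurons feeding a third.
n1, n2, n3 = make_deep_neuron(4), make_deep_neuron(4), make_deep_neuron(2)
x = rng.normal(size=4)
h = np.concatenate([run_deep_neuron(n1, x), run_deep_neuron(n2, x)])
print(run_deep_neuron(n3, h))   # the composite is still one big feedforward net
```
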
@matveyshishov · 1 year ago
Always looking forward to your videos, thank you for your hard work! A few questions (maybe answered in the papers, I haven't read them yet):
* One neuron is great, but how much do we know about the next level, a net of several neurons? IIRC, the number of different neurotransmitters is in the many thousands? How does that change the compute? Does it mean we need to create a 5-8 layer NN for every MODE a neuron finds itself in, or something?
* Dendritic spines. Neuroplasticity. Have they been incorporated into the simulation models yet?
* On a tangent, have you looked at Michael Levin's research? It's starting to look like neurons are but a special case of bioelectric signaling.
@moormanjean5636 · 1 year ago
1. I'm not sure the number of neurotransmitter types would be in the thousands. For context, serotonin, one of the most complex neurotransmitters (i.e. with a more diverse receptor class), has about 40 receptor types, with around 7 major subclasses. Acetylcholine, which I believe is the most complicated, has upwards of 200 receptor types IIRC. Importantly, a specific neuron doesn't have to express all receptor types for all neurotransmitters; neurons are likely specialized for their specific role in the circuit, i.e. for the specific neurotransmitters they receive. Thus, some neurons may be much more computationally complex than others, at least in terms of receptor subtypes / neurotransmitters received. Simplifying this to a 5-8 layer NN is not trivial, as having an 8-layer temporal CNN for each node in your network is not computationally feasible.
2. I am sure that dendritic spines have been incorporated into simulation models; you may want to look into the dendritic neuron model and active-dendrite research in general. As for neuroplasticity, I would argue that backprop implements a version of plasticity, i.e. plasticity = weight changes. If you are talking about spike-timing-dependent plasticity (STDP) or behavioral timescale synaptic plasticity (BTSP), I would assume there are also models that incorporate them.
@ArtemKirsanov · 1 year ago
Wow, thank you very much, Matvey!
1) The number of different neurotransmitters is actually much lower than that: fewer than a hundred. It is true that there are many subtypes of receptors for a single neurotransmitter, which differ in kinetics (how fast they open), sensitivity, ion selectivity, etc. It is not clear how to incorporate such diversity into network models, so usually people just interconnect neurons with different synapses, explicitly defined by maximum conductances (either excitatory or inhibitory) and time constants. It's worth saying, however, that certain compounds, such as serotonin, dopamine, and norepinephrine, can act more globally (as neuromodulators), affecting a large population of neurons simultaneously and modulating their excitability. There are certainly models that incorporate neuromodulation, but again, in a very simplified description (e.g. introducing an additional current, which reflects the activation of extrasynaptic receptors). This approach of describing a single neuron with a neural network is very new, and I don't know any papers that model a whole circuit of neurons in this way, so I'm not sure how this could be done. Great question! To quote the authors of the paper on this one: "If indeed one cortical neuron is equivalent to a multilayered DNN, then what are the implications for the cortical microcircuit? Is that circuit merely a deeper classical DNN composed of simple 'point neurons'? A key difference between the classical DNN and a cortical circuit composed of deep neurons is that, in the latter case, synaptic plasticity can take place mainly in the synaptic (input) layer of the analogous DNN for a single cortical neuron, whereas the weights of its hidden layers are fixed (and dedicated to represent the I/O function of that single cortical neuron)."
2) Dendritic spines and plasticity (e.g. STDP, BTSP) are definitely accounted for in many realistic network models. However, such simulations are done the "usual way", by representing neurons with capacitors and resistors and adding plasticity-specific equations to the synapses (e.g. increment the weight whenever the two neurons in a synaptic pair are co-activated, or something like that). Since Beniaguev et al. simulated the input-output transformations of a single neuron, I don't think they modeled any plasticity (the neuron just responded to random inputs). When people begin to use DNNs to model networks of neurons, synaptic plasticity will surely be accounted for (see the quote above).
3) I'm aware of Michael Levin's work, but haven't looked at it in detail just yet ;)
It's true that bioelectricity is really important, not just in neurons, but also in regulating embryonic development, IIRC.
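
For readers unfamiliar with the STDP rule mentioned above, here is a minimal pair-based sketch: the weight grows when the presynaptic spike precedes the postsynaptic one, and shrinks otherwise. Amplitudes and the time constant are typical illustrative values, not from any paper cited here:

```python
# Pair-based spike-timing-dependent plasticity (STDP).
import numpy as np

A_plus, A_minus = 0.01, 0.012   # potentiation / depression amplitudes
tau = 20.0                      # STDP time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair, given spike times in ms."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post: strengthen the synapse
        return A_plus * np.exp(-dt / tau)
    else:        # post before pre: weaken it
        return -A_minus * np.exp(dt / tau)

for dt in (5.0, 20.0, -5.0, -20.0):
    print(f"dt = {dt:+.0f} ms -> dw = {stdp_dw(0.0, dt):+.5f}")
```
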
@whitealpha2265 · 1 year ago
Your videos are incredibly well made and are always a special joy to watch. Thank you for your high-quality content; please keep it up :)
@ArtemKirsanov · 1 year ago
Thank you!
@DiegoGonzalez-gb3ct · 1 year ago
I just cannot explain the fascination, love, and inspiration that this video delivers.
@richtigmann1 · 1 year ago
This is so interesting! I love how it combines and compares neuroscience and computer science. The production quality is so high
@zacharyohare6029 · 1 year ago
I'm... NOT a scientist by profession or choice. But I AM fascinated by the digital/analog crossover... I started with audio projects and a basic understanding of PWM... then realized electricity/chemistry/physics/biology/neurology all merge at some level. This video was super helpful to me. It also has me contemplating how the paper mentioned essentially modeled a tiny brain microphone/recording booth, with digital playback and analog capture. It's like amplifier modeling: it's very, very difficult to capture all the possible inputs, but if you get enough of them, the capture and modeling is hard, yet it can be sufficiently accurate to be useful, and then it needs very little computational power after the model has been created. It's super intriguing, and it makes me wonder what plotting the analog inputs/outputs of the dendrite/soma as audio signals through some sort of modeler would look like. Probably worse: more translation and compression happening, etc. The fact that the neuron is basically a combination of an analog and a digital signal is amazing to me. It does pulse width, essentially a square wave when a specific voltage activates the dendritic spikes, and then duration and magnitude, and possibly time. I'm now interested in whether there is a sort of standard "period" for each input/output. It was mentioned there are some differences, but is there a universal cap on what that is? Does a specific input change the effective period of responses? I.e., would stress shorten the periods, and thus the perception of things happening faster? What is our internal "clock" speed? We know it isn't fixed at a top level, but on an individual cell, is it set? Are there a few that are defined based on the chemistry at work, depending on the channels/ions/resistance, etc.? So we have individual inputs for location, a comparator for location which almost acts as an input itself, as part of a summed input of what order/how long. Then on each input we have duration and magnitude (x/y), and additionally, on the dendrites themselves, we have the NMDA acting as an additional square-wave input? What about the periods of that input? Would different specific voltage inputs result in responses that differ consistently in period? Hey guys, I HATE school. But I might have to go back to school lol
@naimneman4216 · 1 year ago
That was a really great video! I was engaged every second of it. Please keep uploading on these topics!
@ranbenayoun1978 · 4 days ago
Amazing! Learned about the XOR neural function for the first time, very interesting.
@davidhand9721 · 1 year ago
I was having a hard time understanding how a neuron could be phase-sensitive from your video on the theta rhythm, but the "XOR" neuron kind of makes that make sense. How narrow can the window of sensitivity be?
@sadrien · 1 year ago
Arbitrarily so.
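
For intuition on the XOR neuron: the trick, as reported for dendritic calcium action potentials in human cortical neurons, is an activation whose amplitude peaks near threshold and falls off for stronger input. A toy sketch where a bump-shaped dendritic response yields the XOR truth table (the bump shape and constants are illustrative assumptions):

```python
# XOR from a single "neuron" with a bump-shaped dendritic activation.
import numpy as np

def dendritic_bump(drive, center=1.0, width=0.3):
    """Response is maximal near `center` and decays for weaker OR stronger drive."""
    return np.exp(-((drive - center) ** 2) / (2 * width ** 2))

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    drive = a + b                        # summed synaptic drive
    out = dendritic_bump(drive) > 0.5    # threshold the dendritic response
    print((a, b), "->", int(out))        # prints the XOR truth table
```

A monotonic activation (like a perceptron's threshold) can never do this with one unit, since XOR is not linearly separable.
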
@nixedgaming · 11 months ago
This channel is so incredibly underrated. Fantastic stuff. Really great.
@willyouwright · 1 year ago
Great video. Now we can encode neural nets using a few novel signal-processing nodes!! It should make neural nets much faster and less power-hungry.
@evennot · 1 year ago
There's one more important property: temporal sensitivity. I did my diploma on hardware spiking neural nets (on a state-of-the-art Xilinx programmable chip), and this realisation was a wow moment. Basically, any digital system has a clock-speed limitation. Natural neurons have much slower propagation speeds, but activation depends on a constant balance of race conditions + no two signals arriving at the exact same moment + basic quantum effects acting as a gate (if one ion-channel membrane changes its state from a pulse, concurrent pulse sources will be denied and "rerouted"/lost). In the process of digital learning, the resolution for these race conditions quickly becomes insufficient; signals start to clash when they arrive within a single clock step. Biological systems, on the other hand, have physical time resolution, which is effectively infinite. No GHz clock can compete with this biological mechanics (only a non-clocked ASIC implementation can utilize this process). BTW, these effects manifest themselves with some neurotoxins: when receptors of a particular kind slow down, the whole system can be knocked out or become very unstable.
@odettali1000 · 1 year ago
Thanks for the addition. Would you have any papers explaining these basic quantum effects and the way they act as a gate?
@evennot · 1 year ago
@odettali1000 I meant really basic discrete/quantized effects, such as in voltage-gated ion channels. Basically, you can't strip more electrons than the molecule has available for ionization, and you can't stick an additional electron into a non-ionized molecule. That electron will be "diverted" to go somewhere else.
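
A small numeric illustration of the clock-resolution point above: once spike times are quantized to a digital clock, distinct arrival orders can become indistinguishable. The times and clock rate here are arbitrary assumptions:

```python
# Two spikes 40 microseconds apart collapse into the same clock bin.
import numpy as np

clock_hz = 10_000                    # 10 kHz digital clock -> 0.1 ms resolution
spikes_ms = np.array([1.23, 1.27])   # two spikes, 0.04 ms apart

bins = np.floor(spikes_ms * clock_hz / 1000).astype(int)
print("clock bins:", bins)           # both land in bin 12: the ordering is lost
```
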
@NoNTr1v1aL · 1 year ago
Absolutely brilliant video!
@AffectiveApe · 1 year ago
Once again, stellar communication and animation work!
@denis7325 · 1 year ago
It's funny how the XOR gate is called the only "nonlinear" gate here because of linear non-separability, while it is the only one of the usual binary gates that is linear from an algebraic perspective (seeing bits as a vector space over F2). We can even use this to prove that with only XOR and NOT you cannot build a circuit computing AND: XOR and NOT can only build linear functions, but AND is not linear.
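
The commenter's algebra, written out over the two-element field $\mathbb{F}_2$ (where addition is XOR):

```latex
% XOR is addition in F_2, hence a linear map on F_2^2:
\[ \mathrm{XOR}(x,y) = x \oplus y = x + y \pmod{2}, \]
\[ \mathrm{XOR}(x_1 + x_2,\; y_1 + y_2) = \mathrm{XOR}(x_1,y_1) + \mathrm{XOR}(x_2,y_2). \]
% AND(x,y) = xy is a degree-2 monomial and fails additivity:
\[ \mathrm{AND}(1,1) = 1 \;\neq\; 0 = \mathrm{AND}(1,0) + \mathrm{AND}(0,1). \]
% Any circuit of XOR and NOT gates computes an affine map x -> Ax + b over F_2,
% so no such circuit can compute AND.
```

So the two notions of "linear" really are different: linear separability lives in the real plane (where XOR fails), while algebraic linearity lives over F2 (where XOR succeeds and AND fails).
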
@adrianmoisa2281 · 1 year ago
What the hell is wrong with the YT algo??? This video is amazing! It should have a minimum of 200k views... This is a slap in the face of all nerds dedicated to STEM fields... Great work, YT, promoting twerking videos! Great work!
@ArtemKirsanov · 1 year ago
Thank you so much!
@keyyyla · 1 year ago
Your animations are crazy. Keep up the great work!
@avashurov · 1 year ago
Even with this year's achievements in AI, we still have a long way to go to approximate our own neurons. Artificial networks are still built pretty linearly, with information propagating instantly in one direction. Not to mention that they still have to be pretrained for a long period of time. Real neurons, on the other hand, can be connected in loops of diverse lengths. Their signals can arrive in any order. They include complicated timing mechanisms and can learn as they go.
@MusingsFromTheJohn00 · 1 year ago
As far as I know, our scientists have not figured out how it works yet, but they have figured out that somehow there is information processing and storage internal to cells, probably centered around DNA and RNA, which interacts with the higher-level neurological system of our brain. Some long-term memory storage is thought to be kept like this, perhaps having something to do with DNA methylation. Consider the vital role astrocytes play in how neurons function. The point being: inside each cell there is some kind of super-computing process, seemingly centered on DNA and RNA, that is separate from the neurological system but capable of interacting with it to produce the combined synergistic result which is our human-level intelligence. I don't think we will be able to fully understand how the human brain works until we fully understand both the higher neurological layer and the lower DNA/RNA layer, as well as how the two work together.
@gileneusz · 1 year ago
Bro, those explanations are so cool. I will definitely watch more of your YT videos. Just curious, where are you located?
@marz.6102 · 1 year ago
The visuals of the nerves and all that stuff are hypnotizing. A good reason to subscribe!!!!
@PedramNG · 1 year ago
Useful stuff; I'm going to relate some of these concepts to the paper I'm writing. 😁 Thank you, beautiful video.
@ahmedhadwan9273 · 1 year ago
WAAAAAAAAAAAAAH! Your channel is literally top-notch!! Thanks a lot for the content!
@AryaPhoenix · 1 year ago
Funny story: I am about to wrap up two weeks of lectures with Matthew Larkum here at the BCCN in Berlin, and the moment he started speaking about his theories and work on the role of dendrites, I immediately remembered this excellent video!
@FA18_Driver · 5 months ago
Amazing. Best channel on neural networks on YT by FAR!
@yoverale · 4 months ago
18:51 Does it imply that a more accurate model of how neurons work would actually be a base-3 (ternary) numeral system? Not 1 or 0, but 0, 1, and 2 states, equivalent to below the current threshold, above the current threshold, and above the saturation threshold 🧠
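
A toy rendering of the commenter's ternary reading; the thresholds are arbitrary, and the video itself makes no base-3 claim:

```python
# Three-regime toy response: sub-threshold, spiking, over-saturated.
def ternary_state(current, theta=1.0, saturation=2.0):
    if current < theta:
        return 0   # sub-threshold: no output
    elif current < saturation:
        return 1   # normal spiking regime
    return 2       # over-saturated regime (e.g. a dendritic plateau)

for i in (0.5, 1.5, 2.5):
    print(i, "->", ternary_state(i))
```
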
@os2171 · 1 year ago
No doubt: you have created the best YouTube channel. Top quality in every regard. Cheers!
@sabriath · 1 year ago
Well, it's kind of obvious if you just think about it for 2 seconds. Our "perceptron" is actually a single timed event, which more closely resembles synapse-to-dendrite communication, using a master propagation chain as an offset, and the soma is another "perceptron" that details the chain into the axon... but everything is a timed sequence (as noted by the pulses, since channels need to recharge). All of this can be simulated using 2 layers across many "phases" (a phase is a deterioration of a signal as it fades across the channels, giving you the semblance of timing). When you have completed that, you realize that new neurons actually train themselves against already-existing neuron clusters in order to output similar signals and get "the answer" faster in time, allowing the replacement of the slower one (this is how your memory changes slightly as you get older, possibly even losing the memory altogether). Each section of the brain has a different "job" to do, and when you solve that, you create the singularity... which is actually pretty easy. Don't worry, mine doesn't want to kill humans or anything.
@dragenn · 1 year ago
Amazing how you broke it down so simply for someone like me to understand. I love this channel!!
@Hacktheplanet_ · 1 year ago
What a great subject you have chosen to work on!
@JasjotDhillonArticulateandSolv · 10 months ago
Great video, Artem! Loved it! One thing has been bothering me, though: at 18:25, corresponding to the third red spike (A) at the dendrite, shouldn't there be a spike in the soma as well? Kindly let me know if this isn't the case, because I'm having a hard time making sense of it otherwise. Thanks.
@aidenhastings6341 · 1 year ago
This is so cool! I am becoming very interested in artificial neural networks, and I always thought that the initial "simplified" model of the neuron was more or less correct.
@nutzeeer · 1 year ago
Wow, so not only the connections between cells but the proteins themselves assist in the process. Great!
@qcard76 · 1 year ago
It's interesting to see a more mathematician-centric POV on how neurons are like neural networks, when from a neurobiologist's perspective it's already obvious that real brain cells, individually and collectively, are immensely powerful in a computational sense. So much so that it's weird to see it phrased in this video as the neuronal cells being "exonerated" relative to computational neural networks, when in reality it should be the other way around.
@gytoser801 · 1 year ago
Notes from the video:
- Reduced function, resulting in summing of inputs, because of leaky gates
- Voltage amplification
- Backpropagation: adjusts the weights of inputs?
- NMDA gates (neurotransmitter + adequate depolarization) adjust the synapse connection
- Dendritic calcium action potentials: prediction and logical results
@Shy_guy9795 · 1 year ago
Awesome animations!!! Incredible visualization, it really helps understanding. What did you make them in?
@ArtemKirsanov · 1 year ago
Thanks! The basic animations are done in Adobe After Effects, and the neural activity is a combination of NEURON (for computing the simulations themselves) + Python + Blender.
@tomaspecl1082 · 1 year ago
This is so fascinating! Thank you for your awesome videos.
@ArtemKirsanov · 1 year ago
Thanks!
@RomaineRC · 1 year ago
You're so good, man! I've been trying to find a vid like this for a while.
@ufozxcv · 1 year ago
Thank you. This is the exact information I've been looking for, for so long.
@petpaltea · 1 year ago
Great stuff, keep these videos coming!
@OxwoodBr-io6id · 1 year ago
Interesting stuff, thanks for the wonderful landscape.
@yonadabjaredguzmanmendoza1576 · 1 year ago
Your channel is awesome; it's pure gold!! And not only for neuroscientists, but for everyone. Thanks for sharing all of your knowledge so accessibly, and with great animations. Cheers from Mexico :)
@scoopynoodle8418 · 1 year ago
Great video! Love the effort you put into the graphics. Keep it up!
@jamesscourtos3583 · 1 month ago
Can you do a video about dendritic clustering? I saw a talk about a model neuron called the "clusteron", where the dendritic branches cluster synapses along the branches. I'm curious how this works and what it does for the neuron.
@StratosFair · 1 year ago
Amazing video, Artem: concise and clear explanations with impressive animations. As someone doing research on deep learning, I got super inspired by this video, and I look forward to the future of our field!
@ArtemKirsanov · 1 year ago
Thank you!
@lostammo9026 · 1 year ago
The computer can't think; it will run in circles within what it's programmed to do. The human mind can break out of that, with hundreds of emotions.
@tommyhuffman7499 · 1 year ago
Your video is absolutely amazing!!!
@ArtemKirsanov · 1 year ago
Thank you!
@FobosLee · 1 year ago
4:36 - actually, ions and molecules do permeate the barrier spontaneously. It depends on lots of variables.
@qtxrs-st7xe · 1 year ago
Notes from the video:
- Reduced function, resulting in summing of inputs, because of leaky gates
- Voltage amplification
- Backpropagation: adjusts the weights of inputs?
- NMDA gates (neurotransmitter + adequate depolarization) adjust the synapse connection
- Voltage selectivity (originating between synapses), voltage amplification, complications from being connected, and depolarization affecting plasticity
- Dendritic calcium action potentials: prediction and logical results
@444haluk · 1 year ago
2:00 It goes way back: the 1907 Lapicque paper describes summation with weights and a threshold. Current NNs rest on a 118-year-old theory, updated thousands of times and still not right.
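
Lapicque's model survives today as the leaky integrate-and-fire (LIF) neuron: the membrane integrates input current, leaks toward rest, and fires when it crosses threshold. A minimal simulation with typical textbook constants (all values illustrative):

```python
# Leaky integrate-and-fire neuron under constant current injection.
import numpy as np

dt, tau = 0.1, 10.0                      # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0   # potentials (mV)
r_m = 10.0                               # membrane resistance (MOhm)

v, spikes = v_rest, []
for step in range(1000):                 # 100 ms of simulated time
    i_inj = 2.0                          # constant injected current (nA)
    v += dt * (-(v - v_rest) + r_m * i_inj) / tau   # leaky integration
    if v >= v_thresh:                    # threshold crossing: spike and reset
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes, first at t = {spikes[0]:.1f} ms")
```

The video's point is precisely that this century-old summation-and-threshold picture misses the nonlinear processing happening inside the dendritic tree.
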
@goid314 · 1 year ago
Very interesting video! It was a pleasure to listen to and to watch.
@cinemaipswich4636 · 1 year ago
No matter what the spin doctors say, we still do not have artificial intelligence. Even ChatGPT is nothing but a glorified search engine. It may partially mimic a human, but within a few minutes we find it cannot pass the Turing test.
@silvestersabathiel2589 · 1 year ago
Fantastic video as always, thank you for your great work! Question: you mentioned that in the experiments in the last paper, if the NMDA channels were neglected, a single-layer network was sufficient to reproduce the functionality of the neuron. How is this possible? Was the XOR functionality/capacity thrown away along with the NMDA channels?
@bpath60 · 1 year ago
Thank you. Amazing videos with great communication.
@444haluk · 1 year ago
20:40 You have work to do in the DNN literature. DNNs can fit any random data. And it is only a model; if, additionally, it doesn't learn like a neuron under new experience, that network is useless for ANY kind of neuronal theory.
@evanroderick91 · 1 year ago
I've been wondering how the brain adjusts its own weights when learning. The explanations I've heard just never cut it.
@absta1995 · 1 year ago
Long-term potentiation (LTP)
@rxphi5382 · 1 year ago
How cool do those RGB neurons look!!! Love it 😊
@ArtemKirsanov · 1 year ago
Thanks! :)
@thetransferaccount4586 · 1 year ago
This was just the video I was looking for.
@energyeve2152 · 1 year ago
Great work!!
@tedarcher9120 · 1 year ago
Also, biological neurons can be steered by other neurons, changing their activity completely.
@justanokapi · 1 year ago
multilayered multilayeredness
@technologist6102 · 1 year ago
Does mathematics make it possible to create a perfect computational model of the human brain, or only a model that mimics some aspects of it? Will we ever be able to recreate every single aspect of the human brain in a computer, so that the computer is truly intelligent and therefore capable of making scientific discoveries much faster than scientists? What I basically want to ask is whether mathematics will allow us to recreate a computational model identical to the human brain. Do you think doing this requires different kinds of hardware (quantum chips, etc.)? Also, with the mathematical models of the future, is it possible to artificially recreate what is still missing from artificial neural networks, namely the different types of neuronal cells in the human brain, chemical impulses, and other things like glial cells?
@In20xx · 1 year ago
Thank you for explaining this so well!
@nutzeeer · 1 year ago
Wow, this video is great. It answered all the questions I recently had, as I am not in AI research. Cool! Neurons are quite capable.
@moormanjean5636 · 1 year ago
Please make a video on "Context-sensitive neocortical neurons transform the effectiveness and efficiency of neural information processing"!
@petevenuti7355 · 1 year ago
What about so-called "pulse train" patterns? Or is what you're discussing just a better understanding of what leads to that behavior, making "train" more of an obsolete term?
@inteligenciaartificiuau · 1 year ago
Impressive content! Thank you very much!
@kurtisbrischke6995 · 1 year ago
Wow! Genius combination of ideas!
@frechlachs7205 · 1 year ago
Amazing video, thank you for your effort!
@JasonAxon · 10 days ago
What a great job 👍
@RoboticusMusic · 1 year ago
Why did it take a 5-8 layer DNN to simulate that a brain cell sends a signal only when it's being talked to by one neuron at a time, i.e. that neurons don't listen to multiple conversations at once or they freeze up? So neurons A and B sending a signal to the XOR neuron at the same time jams it, like a crash at an intersection? Is there more to this revelation?
@proveritate1205 · 1 year ago
I think this is true to a great extent: the collective functioning of neurons is broadly analogous to a digital deep neural network. But it's extremely likely that there is also a myriad of other fine-tuned characteristics and properties that make the brain something more than a mere deep neural network, and that truly confer its unmatched capabilities of problem-solving and integration.
@tomoki-v6o · 2 months ago
Can you talk about the attention mechanism?
@tiagotiagot · 1 year ago
02:59 Is there somewhere I can find those graphs at full resolution?
@ArtemKirsanov · 1 year ago
Sure! I believe the picture with activation functions is from here: medium.com/@shrutijadon/survey-on-activation-functions-for-deep-learning-9689331ba092 and the one with network architectures is by the Asimov Institute: www.asimovinstitute.org/neural-network-zoo/
@tiagotiagot · 1 year ago
@@ArtemKirsanov Thanx :)
@raoulduke7668 · 1 year ago
This also raises the question: what is consciousness? Can consciousness be artificially created? Are neural networks "self-aware"? I know this sounds stupid, but in the end, humans are just really complex biological machines which experience consciousness because they can observe their surroundings and act accordingly. So interesting!
@gpt-jcommentbot4759 · 1 year ago
Being able to truly experience things. Most likely, yes. Definitely not.
@claytonharting9899 · 1 year ago
This game is such a good idea, it’s fun to watch
@egor.okhterov · 1 year ago
What is there in neurons, distinct from ANNs, that allows them to do the following?
1. One-shot learning (no need for backpropagation on a GPU cluster with billions of examples).
2. No catastrophic forgetting. We don't forget old information after learning new knowledge. Old and new knowledge are incorporated together and complement each other; new knowledge doesn't override old knowledge.
3. Transfer learning. We can transfer skills and knowledge from one domain to another. If you can play CS:GO, then you will also be able to play Fortnite.
4. Planning / self goal-setting. An ANN cannot choose and set a goal for itself; it's just a function that computes some value (or transforms one value into another). There's nothing in a deep neural network that does planning: it doesn't enumerate possibilities while computing something, it doesn't allocate time for a certain computation, and it doesn't stop that computation itself if it runs too long. An ANN is not self-reflective, and therefore it cannot observe itself doing the computation.
How do we bridge the gap between what we have now with ANNs and all four properties that we observe in brains? It feels like the architecture of ANNs is wrong at a fundamental level, and there is no way to bridge the gap by simply increasing the number of layers. Even the Transformer architecture feels like a step in the wrong direction, and I don't see how it will bridge the gap.
@gpt-jcommentbot4759 · 1 year ago
For no. 2, I think catastrophic forgetting is inevitable when the network learns with plain backpropagation. If the network can automatically encode information into its memory, it could possibly avoid catastrophic forgetting, provided it is wired not to forget previous information.
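
One published line of attack on catastrophic forgetting is elastic weight consolidation (EWC, Kirkpatrick et al., 2017), which penalizes moving the weights that mattered for an old task. A schematic sketch; the Fisher values and losses below are random stand-ins, not a full implementation:

```python
# Schematic elastic weight consolidation: protect weights important to task A
# while training on task B.
import numpy as np

rng = np.random.default_rng(2)

w_old = rng.normal(size=5)            # weights after learning task A
fisher = rng.uniform(0.1, 1.0, 5)     # per-weight importance for task A (stand-in)
lam = 10.0                            # how strongly to protect old knowledge

def ewc_penalty(w):
    """Quadratic penalty keeping important weights near their task-A values."""
    return lam * np.sum(fisher * (w - w_old) ** 2)

def total_loss(w, task_b_loss):
    return task_b_loss(w) + ewc_penalty(w)

# Usage: weights with large fisher stay near w_old; unimportant ones move freely.
w_target_b = rng.normal(size=5)
print(total_loss(w_old, lambda w: np.sum((w - w_target_b) ** 2)))
```
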
@wangchakip8551 · 1 year ago
Like the recap.
@rumplstiltztinkerstein · 1 year ago
It is amazing how much more there is to improve about neural networks. Once we succeed at developing artificial neurons that are as effective as biological ones, we can go even further and start developing neurons and neural networks that are even more advanced than biological ones.
@numericalcode · 2 months ago
Good stuff!
@musicoscope · 10 months ago
Wonderful! Thanks!
@brubsby · 1 year ago
I wonder what can be said about transformers' ability to model dendritic nonlinearity.
@alexh8754 · 10 months ago
Thank you.
@paxtoncargill4661 · 1 year ago
11:39 Is that the fundamental reason we perceive time?
@brianhirt5027 · 1 year ago
So, not a scientist. But I've been deeply interested in whole-brain emulation for thirty years. If I'm understanding you right, we're getting very close to unlocking a path toward whole-brain emulation. There need to be serious gains (perhaps quantum computing) in the hardware technology; what we have in the near future is just too bulky, slow, and inefficient at translating an analog dial into a digital switch. Anything approaching the capacity to simulate a consciousness would have to dwell on fast-swap DRAM-like memory several orders of magnitude more precise and rapid than anything available today. But that's a matter for Moore's law to eventually overcome, I'd imagine. Look, I'll level with ya, Doc. I ain't sure how much longer I've got on this rock, but I'm all-in on this concept of digital immortality. There's a whole bunch of universe I've wanted to explore firsthand since I was six years old, and there's no other way that's gonna happen in what lifespan I've got left. The more we look at things, the more it seems possible at some future date. But to suss it out, y'all are gonna need some volunteers with no illusions and the capacity to ride out the shitstorm even if the sensory input and cognition syncs are all screwy initially. You have that available in me. I'm self-aware enough to dispassionately interrogate my own processes thoroughly. I've lived with pain half my life. I'm tough. I can endure what would break most people. HAVE endured what has broken most other people. Y'all will need people just like me to iron out the bugs, and I know it. I understand how soon post mortem the brain must be frozen to preserve the fine-detail neuron and dendrite map. I understand that said frozen brain will have to be dissected and scanned in ten-micron slices to GET that map. I already have my will set up to arrange all that preservation, barring some catastrophic accident putting me beyond reclamation. The proposition doesn't scare me or spook me at all. It feels like something I'm well disposed to help our species accomplish. I'm not a rich man, so my only way to bridge that initial cost divide is as a test subject. I'm willing to explore what it means to be a human navigating identity in a digital space. It's a low-odds proposition, but it's a damn sight better than NO probability. I just need you guys to know I exist, so my frozen brain doesn't just linger on a shelf somewhere. That I'm willing, more than capable, and exactly the type you're gonna need to figure this all out down the way. Fortune favors the bold, and I've got that in spades. If you, or whoever else relevant reads this, want to get in contact, do so. Let's talk.
@mpik97 · 1 year ago
I've figured out how to solve XOR with a single neuron. Source: hopefully a journal coming to you soon.
@6torthor · 1 year ago
Hey, great vid btw, very informative. How do you create the graphics for your videos?
@benjaminhinz2552 · 1 year ago
So this is why my last braincell and I are still smarter than ChatGPT?
@matushalak · 1 year ago
Wow, amazing video and topic!
@brycecounts3168 · 11 months ago
Does training a network to learn the input/output of a detailed neuron (the biophysics, apparently?) justify the claim that a single neuron is X times more complex than an artificial neuron in principle, since the smallest units of a biological NN will always outnumber those of an artificial NN? Is that the reasoning? I guess I wonder whether training a network to model the function of a biophysical mechanism is more complex than describing just the structure of a single neuron, with its inputs and outputs. Not sure if this is clear.
@JBJ555 · 1 year ago
What about the role of brainwaves in activating neurons, per Pribram?