Deep Learning with Ensembles of Neocortical Microcircuits - Dr. Blake Richards

18,307 views

The Artificial Intelligence Channel


1 day ago

Comments: 54
@kozmizm 4 years ago
Why hasn't this gone viral? I have been searching for this information for some time, and I'm only discovering it two years later. The whole world needs to hear this.
@christopherinman6833 6 years ago
Fascinating. And thank you for such a lucid presentation.
@31337flamer 6 years ago
this kind of research can really push the boundaries
@mpete0273 6 years ago
So my takeaways are that (1) backprop does actually approximate complex neuronal behavior, (2) neurons are organized and behave like a capsule network, and (3) third-party inhibitors are a motif worth investigating.

I think (1) follows from their use of averaged hidden-layer activity and simple loss differences. I can imagine that this converges to standard backprop, because averaging / differencing doesn't usually change behaviors in the limit.

The evidence for a capsule-network architecture didn't seem to flow from the previous topic. Yes, simultaneous forward/backward passing because of the voltage-threshold stuff, but why capsules? Did I miss it?

As for (3), this seems cool, especially with respect to capsule networks. However, I remember dropout and weight decay being framed as inhibition already. I wouldn't be surprised to see inhibitor capsules converge to weight decay in the limit.
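[Editor's note] The point above about averaged activity and loss differences approximating backprop connects to the random-feedback results this line of work builds on. As an illustrative toy sketch (my own code, not the speaker's model; all sizes and learning rates are assumed values): a two-layer network whose hidden layer receives error through a fixed random feedback matrix, rather than the transpose of the output weights, still reduces its loss. This is the "feedback alignment" effect of Lillicrap et al. that the segregated-dendrite model relies on.

```python
import numpy as np

# Toy "feedback alignment" demo: the hidden layer never sees W2.T,
# only a fixed random feedback matrix B, yet the loss still falls.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 16, 2

W1 = rng.normal(0, 0.5, (n_hid, n_in))   # input -> hidden weights
W2 = rng.normal(0, 0.5, (n_out, n_hid))  # hidden -> output weights
B = rng.normal(0, 0.5, (n_hid, n_out))   # fixed random feedback (not W2.T)

X = rng.normal(size=(64, n_in))               # random inputs
T = rng.normal(size=(n_out, n_in)) @ X.T      # random linear "teacher" targets

lr = 0.01
losses = []
for _ in range(200):
    H = np.tanh(W1 @ X.T)      # hidden activity, shape (n_hid, 64)
    Y = W2 @ H                 # output, shape (n_out, 64)
    E = Y - T                  # output error
    losses.append(float(np.mean(E ** 2)))
    # hidden "error" carried back by the random matrix B, gated by tanh'
    dH = (B @ E) * (1 - H ** 2)
    W2 -= lr * (E @ H.T) / X.shape[0]
    W1 -= lr * (dH @ X) / X.shape[0]

print(losses[0] > losses[-1])  # training reduced the loss
```

The design point is that the updates use only information plausibly local to each layer plus a fixed feedback pathway, which is the property the biologically-motivated models in the talk care about.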
@RickeyBowers 6 years ago
This gives some insight into a greatly simplified hardware implementation of learning networks: no backprop, linear activation, etc.
@cupajoesir 6 years ago
This is amazing. Not the same old yadda yadda... nice to see some novel thinking in this space. Multidisciplinary skills help too :)
@franh9833 5 years ago
Keep in mind that this is just one circuit within the neocortex, so it shows that the brain is capable of backpropagation alongside other forms of learning.
@ryguillian 6 years ago
I wonder if this work will give any insight into excitotoxicity. Understanding sodium / potassium channels and especially the role of magnesium inside the neuron would be super useful in understanding toxic withdrawal of GABA agonists like benzodiazepines and alcohol.
@GBlunted 1 year ago
So good! I wonder why he didn't implement this with neuromorphic computing, and what it means for that architecture that research like this doesn't use it. I'm going to go play with my Neuronify app for a bit lol! Anyone have a link to his latest presentation?
@albertwang5974 6 years ago
Wow, this is a fundamental change in how to build a neural network. For this kind of new neural network, we could call it a "Neurons Group Neural Network", NGNN for short :)
@asdobst 6 years ago
Embedded Neural Networks, ENN
@mathiasgehrig1286 6 years ago
Are the results at 31:10 published in a paper?
@zpluto12345 6 years ago
The mitochondria is the power house of the cell.
@jackpullen3820 6 years ago
At 46:50, are the neurons involved when rewards occur the emotional centers?
@clehaxze 6 years ago
Anyone here also from the HTM community?
@swapanjain892 6 years ago
Link to the paper?
@LouisAll-jr6rb 6 years ago
www.ncbi.nlm.nih.gov/pmc/articles/PMC5716677/
@darrendwyer9973 6 years ago
Interesting work, but where do I find some C/C++ source code?
@RickeyBowers 6 years ago
Most of the talk is present in this paper: arxiv.org/abs/1610.00161
@PhillipBiondo 6 years ago
A video game with characters that are real AI characters, making their own choices and saying whatever they want to say.
@diegoantoniorosariopalomin4977 6 years ago
So like capsule networks?
@yacinebenahmed_uqar 6 years ago
Same but different I guess...
@ttaylor3193 6 years ago
I spent hours across months and approximately 40 GB of data conducting my own personal "deep learning". My devices were attacked by gamers, thwarting my incredible AI experience. "Let the magic begin".
@tombombadillo1 6 years ago
Fascinating.
@debjyotibiswas3793 6 years ago
The guy who introduced him, does anybody know his name?
@debjyotibiswas3793 6 years ago
He is Yoshua Bengio. As always, remembered seconds after posting!
@vnimec6938 6 years ago
Yoshua Bengio
@jonatan01i 4 years ago
Wait, is this pornhub or something?
@ttaylor3193 6 years ago
Pretty pictures
@mfpears 6 years ago
32:32
@palfers1 6 years ago
Wow. "Backpropagation is not realistic in the context of the real biological brain". Well, that's because the brain does it better - as this vid describes. A blockbuster. Hinton is smiling.
@tobiasschafer9095 6 years ago
... so is Jeffrey Hawkins.
@budesmatpicu3992 6 years ago
Great lecture. And then you realize... that the saddest thing about all this AI neural-network business is that all of this could have been done DECADES AGO (many models devised, architectures designed, theoretically studied, and partially implemented even on the low-power hardware of the era)... but we had these "AI winters" (as many pioneers point out, at various points in time you were viewed almost as an astrology-level charlatan if you were STILL working on these "idiotic" neural networks... yep, good old inquisition, just as everyone expects in science). And we still do not know how many breakthroughs (or at least important new ideas) we need. Some say that our beloved NewEvil internet monopolists (the GOOLAGs of this world) are actively creating a new AI winter by sucking all the brains into their stupid cat-finding applications that use "whatever works" to generate tonZ of ad moneyZ (yep, not tech giants; they sell NO technology whatsoever, just ads and the souls of their sheeple users).
@upmuve1188 6 years ago
Image building 2018
@MrAndrew535 6 years ago
I would strongly suggest that the difference between learning and thinking is as profound and great as the difference between brain and mind. In both cases, studying the former provides no insight into the latter. From the very beginning of "studies into AI", mind has been erroneously conflated with brain. The longer this error persists within the industry, the greater the risks to the human species.
@cordlefhrichter1520 6 years ago
Lillicrap. Lol.
@stevechaszar2806 6 years ago
pineal v1
@willd1mindmind639 6 years ago
Your eyes are not "learning" how to see. This is the problem with trying to equate the function of deep neural networks in computers with human organs that have evolved over millions of years. The visual cortex is designed to capture the features of the real world in high fidelity: light, shadow, texture, shape, and other forms of 'ground truth', reassembled into a bio-organic chemical signal that gets passed on to other parts of the brain. "Feature extraction" and "inference" in a biological context are not the same as what we see in modern neural networks.

Your visual cortex extracts features such as light and shadow, surface texture, orientation, perspective, and dimensional data as "features" of the real world. All of those features are passed along together through multiple neural pipelines, almost the equivalent of millions of mini-GPU pipelines passing texture, color, lighting, and other information along parallel chains of neurons up to the higher organs of the brain. The key difference is that this data is not simply converted into a statistical result; the fidelity is maintained and stored together at all times. Hence the better your memory, the more "photographic" it is, in that you can recall more details from past events. The main benefit of all these massive pipelines is that multiple features of the visual world can be extracted, processed, and abstracted by the higher-order organs of the brain, where "learning" occurs.

And because of the fidelity and complexity of the features passed through the circuits of the visual cortex, you learn to recognize objects in a relatively short time, because these circuits operate in real time, capturing thousands of frames of visual data per second. Computer neural nets can in no way replicate this, because they are too resource-hungry and are based on statistical models that throw away all the important visual detail and fidelity in order to produce a statistical result. Human image recognition works quickly, without thousands upon thousands of passes, because of the biological and physical way human eyes capture visual light and convert it into chemical signals. Rods and cones in the eyes do different things, so each neural pathway inherently deals with different features of the visual data; it is not all lumped together in a "single bucket" like a single 2D image on a computer. Much of the feature extraction has already taken place simply as a function of the optic nerves transcoding visual data. And all this data is processed in parallel, which means you recognize things by their texture, color, shape, and other characteristics all at once, in real time, because that is how you experience the real world through your eyes. (Inference in humans is more like looking at how light and shadow fall across a surface and "inferring" what the material is; fur looks different from concrete, and so forth.) You cannot currently do any of this in neural networks (single-pass, high-fidelity, multi-component visual feature extraction and detection), and there is certainly no computer system that can store the amount of data captured by the human eye in a single minute, let alone a whole hour or a lifetime.
@cupofkoa 6 years ago
Maybe, in other words, you're saying that the cortex does passive learning while ANNs do active learning (backprop). Local backprop (STDP) is passive learning, which is what the cortex does. The funny thing is, the speaker just dismisses it because it does not fit into the ANN context.
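[Editor's note] For readers unfamiliar with the STDP mentioned above, here is a minimal sketch of the classic pair-based spike-timing-dependent plasticity rule (illustrative only, not from the talk; the amplitudes and time constant are assumed values): the synapse strengthens when the presynaptic spike precedes the postsynaptic one, and weakens when the order is reversed.

```python
import math

# Assumed constants for the illustration (not from the talk)
A_PLUS, A_MINUS = 0.05, 0.055   # potentiation / depression amplitudes
TAU = 20.0                      # decay time constant, in ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: causal pairing, potentiate
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:  # post fires before pre: anti-causal pairing, depress
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0    # simultaneous spikes: no change in this simple model

print(stdp_dw(10.0, 15.0) > 0)  # causal pair strengthens the synapse
print(stdp_dw(15.0, 10.0) < 0)  # anti-causal pair weakens it
```

The locality is the point: the update depends only on the timing of the two spikes at that synapse, with no global error signal.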
@willd1mindmind639 6 years ago
Something like that. If you can imagine how 3D scenes are sent through GPU pipelines in multiple passes, that gives you a somewhat better idea of how the human brain's neural circuits decompose light waves. All human visual abilities are based on being able to recall and compare all those various cues and inferences, but it isn't simply statistical analysis. When you see a flower for the first time (in real life, not a picture), you have captured the textures on its surface, the way the light falls on it and passes through it, the color patterns on its various surfaces, and so forth. These features give you cues and inferences about 3D shapes and about how color, light, and texture behave in 3D space for various objects. This data is then stored in memory. You can therefore recognize a flower afterwards, even if it is just a silhouette, because of the way your brain decodes and decomposes that visual information via the visual cortex. You can recognize the flower from a different angle. You can draw a stick-figure flower and most children will still recognize it as a flower, for the same reason. All of that is because of the way visual light is broken down and reassembled into a coherent "mental picture" in the brain, which extracts, maintains, and stores rich visual fidelity about the real world. The main difference is that the human brain absorbs this information as a function of the chemical conversion of sensory stimuli that we call memory. So that data is literally absorbed into your DNA, and this is what allows future evolution to occur in DNA, because of that 'experience' within the biological entity during its lifetime. That is a higher-order "meta" level of learning within biological species over time, fundamentally different from the machine learning we are talking about.

There is no computer system on earth that can store as much data in as small a space as the human brain. This is why neural networks work on statistical models where the rich feature data is tossed out: that data is too expensive to store in memory. The human brain operates over all the features and characteristics extracted by the visual cortex at all levels, not simply over a single statistical result based on predefined features fed into an algorithm. And none of this even touches on assigning labels or meaning to this visual information; the eyes of a newborn are doing all this before learning a word of language.
@justaguy7003 6 years ago
@willd1mindmind639 Very interesting theory. However, it would seem that case studies such as hemispherectomy survivors needing to (or, more acutely, being able to) retrain their visual cortex to perform edge detection and use the effects of shadowing in 3D space to interpret spatial relations undermine your central premise. Furthermore, the concept of "stored memory" with respect to neurobiology has long since been disproven: memory has been found to be reconstructed on the fly rather than stored and retrieved. Perhaps this is what you intended; it was not clear. I am also not convinced that seeing a single representation (or a small number of representations) of an object is sufficient even for humans to identify rotated representations. If you have only ever seen a car from the front, you would be hard pressed to identify it from the side. But I do agree that humans make far better use of the data we receive than our current algorithms do. This is certainly an open problem that a lot of people are putting effort into.
@TheBilly 6 years ago
That's not true; this issue is not decided. We're still not sure if the visual system is "hard-wired" for sight, or if the structure it ends up with is an emergent property of the interaction of its design with the environment. It might be that it's genetically programmed in such a way that it only arranges itself in the way that we observe when it's provided with appropriate stimulus. Observe for example that the visual cortex exhibits plasticity - it can take on other functions in blind people.
@willd1mindmind639 6 years ago
Seeing starts in the eyeballs and retina, where light is captured, converted into bio-electric pulses, and passed down the optic nerve. These bio-electric pulses are the first stage of feature detection in human vision. This is absolutely a result of evolutionary processes, so you don't "learn" how to capture light in your eyeballs and convert it into bio-electric pulses. The format of this visual information is not the same as raw pixels in an on-screen image; it has already been formatted and separated into various "passes", including light/shadow and color, before even hitting the visual cortex. The "features" of the visual world have therefore already been partially processed. This is in no way the same as looking at a set of 2D pixels stored in a file. The visual cortex is responsible for "reconstructing" or "projecting" this data from both eyeballs into the mental picture you see in your brain; that is purely an in-built function that comes "out of the box" when you are born. Learning and understanding what you are seeing projected in the brain is not the same as capturing visual information through the optic nerve and into the visual cortex at a basic level. For example, because this system works with two eyeballs, humans get a sense of depth perception that is impossible to simulate in an algorithm processing 2D images, let alone the innate visual perception of 3D space.
@kipling1957 6 years ago
He lost me, and I'm a neuroscientist.
@Guust_Flater 6 years ago
This talk made me dumber... I now know more things I know nothing about. 😬😀
@yacinebenahmed_uqar 6 years ago
Thumbs up for Gaston
@thorkrynu4551 5 years ago
How did you get through 50 minutes with no background? Impressive.