Backpropagation and the brain

17,166 views

Yannic Kilcher

A day ago

Comments: 40
@YannicKilcher 4 years ago
Note: This is a reupload. Sorry for the inconvenience.
@Stopinvadingmyhardware 1 year ago
The brain does this thing called axon regulation. In some parts where there are reuptake axons, they self-regulate to reduce the amount of feedback when overstimulated. Basically this means they close and leave the flooded neurotransmitter in the flow stream for the dendrites. This has the effect of down-regulating the signal. I saw another video where you covered the direct feedback mechanism and mentioned that the neurons didn't have a back-propagation mechanism, and wanted to share that with you.
@MikkoRantalainen 2 years ago
Great video! I think I've seen at least a summary of this algorithm before, and this video makes it clearer.
@stephanrasp3796 4 years ago
I think at 4:50, the perturbation should be added to w, not x, i.e. f(x, w+n). Awesome content btw!
@YannicKilcher 4 years ago
True, you want to jiggle the model itself. Thanks!
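For concreteness, here is a minimal NumPy sketch of the weight-perturbation idea discussed above: jiggle the weights (not the input), measure how the loss changes, and update. The model, step sizes, and names are illustrative assumptions, not anything from the video or paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, x, y):
    # Stand-in for f(x, w): a linear model with squared error.
    return ((x @ w - y) ** 2).mean()

def weight_perturbation_step(w, x, y, sigma=1e-3, lr=0.1):
    # Perturb the weights, not the input: compare f(x, w + n) to f(x, w).
    n = rng.normal(0.0, sigma, size=w.shape)
    delta = loss(w + n, x, y) - loss(w, x, y)
    # Since E[n n^T] = sigma^2 * I, the scaled update (delta / sigma^2) * n
    # follows the true gradient in expectation, with no backward pass.
    return w - lr * (delta / sigma**2) * n

# Toy run: recover a known weight vector from noise-free data.
x = rng.normal(size=(64, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = x @ w_true
w = np.zeros(4)
for _ in range(5000):
    w = weight_perturbation_step(w, x, y)
print(np.round(w, 2))  # should be close to w_true
```

The usual caveat: this single scalar feedback signal becomes very noisy as the number of parameters grows, which is why pure perturbation learning scales poorly.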
@dermitdembrot3091 4 years ago
Could it be that perturbation learning is just Hebbian learning where the updates are scaled by the "reward"? So if the "reward" is always 1 it would correspond to Hebbian learning. And for negative rewards the weights are changed to reduce the activations. In the r=-1 vs r=-2 case that would give a negative update for both but a stronger one for the second "action" (comparable to the REINFORCE algorithm).
@YannicKilcher 4 years ago
Yes, that's exactly what's happening. Basically every unit does RL by itself.
@dermitdembrot3091 4 years ago
@@YannicKilcher Thanks for the confirmation!
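To make that REINFORCE reading concrete, here is a toy sketch (all details are my own assumptions): each unit adds private noise to its activation, and the weight update is a Hebbian term scaled by a global scalar reward.

```python
import numpy as np

rng = np.random.default_rng(1)

def node_perturbation_step(W, x, target, sigma=0.01, lr=0.1):
    a = W @ x                                   # clean activations
    noise = rng.normal(0.0, sigma, size=a.shape)
    r_clean = -((a - target) ** 2).sum()        # "reward" = negative loss
    r_noisy = -((a + noise - target) ** 2).sum()
    reward = r_noisy - r_clean                  # did the units' jitter help?
    # Hebbian outer product (each unit's noise times presynaptic input),
    # scaled by one scalar reward: REINFORCE at the level of single units.
    return W + lr * (reward / sigma**2) * np.outer(noise, x)
```

Here the clean output acts as a baseline; a fixed positive reward would reduce this to a noise-driven Hebbian rule, while more negative rewards push activations down harder, matching the r=-1 vs. r=-2 intuition above.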
@jyotiswarupsamal1587 2 years ago
This is a good explanation; I could understand the basics. Thank you!
@Neural_Causality 4 years ago
Does anyone know of an implementation of the idea proposed in the paper? Also, thanks a lot for sharing this paper and your comments on different papers; I think it's quite useful!
@YannicKilcher 4 years ago
If you look in the comments here, you'll find a link to Bengio's paper about the algorithm; they might have something.
@Neural_Causality 4 years ago
@@YannicKilcher Thanks! Will check it out.
@bzqp2 3 years ago
I like how, the moment the paper is written by Hinton, you switched from drawing the layers horizontally to drawing them vertically xd
@terumiyuuki6488 4 years ago
It does sound suspiciously like Decoupled Neural Interfaces. Think you'd like to make a video on that? It would be great. Keep up the great work!
@YannicKilcher 4 years ago
Thanks for the suggestion!
@Murmur1131 4 years ago
Thanks so much! Super interesting! High-class content!
@redone9553 3 years ago
Thanks for the upload! But who says that we need negative voltage for a signed gradient? Why not assume high frequencies are positive and low are negative?
@lost4468yt 27 days ago
Why can't they just send the information back with neurotransmitters instead of action potentials? You could also easily encode signed values this way. We even know that certain neurotransmitters do travel backwards, as in the cannabinoid system.
@8chronos 2 years ago
Thanks for this nice video. One thing still seems unclear to me: does this only allow for possibly near-biological NN training, or are there other advantages? E.g., is it faster than backprop?
@moormanjean5636 2 years ago
This is what I would like to know as well. I would guess it's slower, but perhaps the only way to train networks in a comparable manner given certain assumptions.
@Zantorc 4 years ago
For perturbation learning, excitation and inhibition use completely different mechanisms in the brain - the neurotransmitter is even different, and different cell types are involved. So rather than dampening all weights when the result is wrong, it can selectively dampen the excitation and/or amplify the inhibition. So there is an extra degree of freedom: the degree to which the correction falls on the inhibitory neurons vs. the excitatory neurons, as well as the magnitude of the correction. This is at least a 2D correction vector - possibly more, given that individual neuron subtypes may be differently affected. Therefore my claim is that in the brain it's not so much 'scalar feedback' as 'vector feedback', at least for perturbation learning. I suspect it is the lack of distinction between neuron types in ML which leads to poor results for perturbation learning.
@iuhh 4 years ago
I think the different mechanisms in a single brain neuron could probably be represented by two or more artificial neurons, though - maybe in multiple layers that handle excitation and inhibition separately - so I'm not sure how that would relate to the quality of the results.
@Zantorc 4 years ago
@@iuhh The more you know about neurons, the less likely you are to think that. A point neuron can't do what a pyramidal neuron can do: it's predictive, and synapse strength isn't the equivalent of a weight - it's one bit at most on distal and apical dendrites and doesn't cause firing; it's part of the pattern-matching process.
@BuzzBizzYou 4 years ago
Won’t the proposed network create a massive IIR filter?
@joirnpettersen 4 years ago
If the brain uses back-propagation, and we can someday figure out a way to model it mathematically, would adversarial attacks become a thing we might need to worry about? If not, would it be for a lack of information, or is there some difference between the way the brain does it and the way we do it on computers?
@YannicKilcher 4 years ago
Very nice question. I think this is as yet unanswered, but definitely possible.
@BrtiRBaws 4 years ago
Maybe we can see optical illusions as a sort of adversarial attack :)
@maloxi1472 4 years ago
@@BrtiRBaws Yes, absolutely. I would argue that things like optical illusions, ideological belief structures, very elaborate lies, hallucinogens, unhealthy but tasty food... are all adversarial attacks on different substructures of the brain.
@priyamdey3298 3 years ago
Numenta shows that if the information flow (both the inputs and the weights of neurons) is quite sparse, then a network becomes quite robust to perturbations / random noise. And they say that the brain has very sparse information flow. So maybe yes - we have yet to include more meaningful priors (like sparseness) in the right way to make networks robust.
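As a rough illustration of that sparsity prior (loosely in the spirit of Numenta's k-winners-take-all layers; the specifics below are my own assumptions):

```python
import numpy as np

def k_winners_take_all(a, k):
    # Keep only the k largest activations and zero out the rest.
    # With few active units, a random perturbation of the input has
    # fewer active pathways through which it can reach the output.
    out = np.zeros_like(a)
    top = np.argsort(a)[-k:]
    out[top] = a[top]
    return out

print(k_winners_take_all(np.array([0.1, 0.9, -0.3, 0.7, 0.2]), 2))
# -> [0.  0.9 0.  0.7 0. ]
```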
@bzqp2 3 years ago
Hitting a guy in the head with a shovel can be an adversarial neural network attack.
@victorrielly4588 4 years ago
Here's a link to an arXiv paper on difference target propagation, for anyone like me who doesn't want to pay to read the biology paper. This also looks like the original work describing the machine learning side of this idea: arxiv.org/pdf/1412.7525.pdf
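For reference, the core rule of difference target propagation (Lee et al., 2015, the linked paper) computes the target for layer i-1 from the target for layer i; in simplified notation, with $f_i$ the forward map of layer $i$ and $g_i$ a learned approximate inverse:

$$\hat{h}_{i-1} = h_{i-1} + g_i(\hat{h}_i) - g_i(h_i)$$

The correction term $h_{i-1} - g_i(h_i)$ compensates for $g_i$ being only an approximate inverse of $f_i$, which is what distinguishes *difference* target propagation from plain target propagation.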
@sehbanomer8151 4 years ago
I thought this was a part 2 or something.
@YannicKilcher 4 years ago
No, sorry, I deleted it by accident.
@stefanogrillo6040 1 year ago
Duper
@ThinkTank255 2 years ago
How many times do I have to tell you guys: the brain doesn't "learn"??? The brain *memorizes* verbatim. For prediction, the brain asks, "What matches my memories best?" and chooses that as a prediction. It is as simple as that. Brains are generally *not* as good as backpropagation at generalization, but that feature of brains is actually very useful for nonlinear spatio-temporal patterns, such as doing mathematics and logic. This is why, to date, ML-based methods have not been able to solve extremely complex reasoning-based problems: they overgeneralize when it comes to nonlinear logical processes. It is actually extremely easy to prove the brain doesn't use backpropagation: how many times do you have to read a book to give a good summary? Once. The brain learns *instantly* by rote memorization, and instant learning brings many evolutionary benefits.
@DajesOfficial 1 year ago
How many times did you have to read books before it became possible for you to give a good summary on the first read? Let's test your hypothesis: give a book to an infant and ask them for a good summary on the first try.
@ThinkTank255 1 year ago
@@DajesOfficial You've actually proven my point. The problem is, most humans aren't particularly good at remembering factual information, because 99.99% of the information you receive at any given time isn't factual information - it's random sights, sounds, and smells that your brain deems important for your survival. The reason adults are better than infants is that they have practiced the skill of honing in on factual information.
@herp_derpingson 4 years ago
DEJA VU
@YannicKilcher 4 years ago
Yeah, sorry. I hope YT reinstates the old one.
@palfers1 9 months ago
2020 is quite dated.