#041

13,212 views

Machine Learning Street Talk

1 day ago

Dr. Simon Stringer obtained his Ph.D. in mathematical state space control theory and has been a Senior Research Fellow at Oxford University for over 27 years. Simon is the director of the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, which is based within the Oxford University Department of Experimental Psychology. His department covers vision, spatial processing, motor function, language and consciousness, and in particular how the primate visual system learns to make sense of complex natural scenes. Dr. Stringer's laboratory houses a team of theoreticians who are developing computer models of a range of different aspects of brain function, and is investigating the neural and synaptic dynamics that underpin brain function. An important matter here is the feature-binding problem, which concerns how the visual system represents the hierarchical relationships between features: the visual system must represent hierarchical binding relations across the entire visual field, at every spatial scale and level in the hierarchy of visual primitives.
We discuss the emergence of self-organised behaviour, complex information processing, invariant sensory representations and hierarchical feature binding, all of which emerge when you build biologically plausible neural networks with temporal spiking dynamics.
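For readers who haven't met spiking models before, here is a minimal leaky integrate-and-fire neuron in plain NumPy. This is an illustrative sketch with arbitrary constants, not the model from Simon's lab:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential v
# integrates input current and leaks toward rest; a spike fires when v
# crosses threshold, after which v resets. All constants are illustrative.
dt = 1.0           # time step (ms)
tau = 20.0         # membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

rng = np.random.default_rng(0)
T = 200                                # simulate 200 ms
current = 0.06 + 0.04 * rng.random(T)  # noisy input drive

v = v_rest
spike_times = []
for t in range(T):
    # Euler step of dv/dt = (-(v - v_rest) + I * tau) / tau
    v += dt * (-(v - v_rest) + current[t] * tau) / tau
    if v >= v_thresh:
        spike_times.append(t)
        v = v_reset

print("spike times (ms):", spike_times)
```

The point of such models is that information is carried by when spikes occur, not just by average firing rates, which is what the episode's discussion of feature binding via spike timing builds on.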
00:00:00 Tim Intro
00:09:31 Show kickoff
00:14:37 Hierarchical Feature binding and timing of action potentials
00:30:16 Hebb to Spike-timing-dependent plasticity (STDP)
00:35:27 Encoding of shape primitives
00:38:50 Is imagination working in the same place in the brain
00:41:12 Compare to supervised CNNs
00:45:59 Speech recognition, motor system, learning mazes
00:49:28 How practical are these spiking NNs
00:50:19 Why simulate the human brain
00:52:46 How much computational power do you gain from differential timings
00:55:08 Adversarial inputs
00:59:41 Generative / causal component needed?
01:01:46 Modalities of processing i.e. language
01:03:42 Understanding
01:04:37 Human hardware
01:06:19 Roadmap of NNs?
01:10:36 Interpretability methods for these new models
01:13:03 Won't GPT just scale and do this anyway?
01:15:51 What about trace learning and transformation learning
01:18:50 Categories of invariance
01:19:47 Biological plausibility
Pod version: anchor.fm/machinelearningstre...
www.neuroscience.ox.ac.uk/res...
en.wikipedia.org/wiki/Simon_S...
/ simon-stringer-a3b239b4
"A new approach to solving the feature-binding problem in primate vision"
royalsocietypublishing.org/do...
James B. Isbister, Akihiro Eguchi, Nasir Ahmad, Juan M. Galeazzi, Mark J. Buckley and Simon Stringer
Simon's department is looking for funding; please do get in touch with him if you can facilitate this.
#machinelearning #neuroscience

Comments: 53
@saanvisharma2081 3 years ago
*This was my master's thesis topic* Absolutely enjoyed watching this video. Thanks Tim.
@samdash3216 3 years ago
Content quality through the roof, video quality stuck back in the '90s. Keep up the good work!
@Self-Duality 10 months ago
I’d absolutely love to see part 2!
@DavenH 3 years ago
This is very inspiring. That viable substrates and training strategies vary this much shows the field has so much room for breakthroughs and invention. I'm so glad this show is keeping up with the bleeding edge of AI rather than the dreary "what AWS instance is best for ML" kind of stuff.
@freemind.d2714 3 years ago
The more we know, the more we know what we don't know, because there is always more we don't know that we don't know!
@billykotsos4642 2 years ago
To be fair, spinning up an AWS instance is no simple task!
@priyamdey3298 3 years ago
Sometimes I had to put the playback speed at 0.75 to understand what Dr. Stringer was saying 😂 His Broca's area must be more developed than the average human's 😀 Great video btw! Keep the neuroscience stuff coming! We need a lot more of this for a more targeted approach towards AGI.
@DelandaBaudLacanian 1 year ago
I'm listening at 0.75 and I still have to rewind and listen over and over again; it's truly inspiring and thought-provoking.
@sabawalid 3 years ago
Great episode, as usual, guys! Thumbs up. I really like a lot of what Dr. Simon Stringer said, and your feedback/comments were also great.
@sedenions 3 years ago
I will have to watch this again. As a neuroscience student, I was expecting more talk of LTP, LTD, STDP, silicon neuronal networks, Izhikevich neurons, liquid networks, and the math behind all of this. Also, if someone could tell me this: how much overlap is there between theoretical neuroscience and theoretical machine learning?
@marc-andrepiche1809 3 years ago
This was a really fascinating talk
@oncedidactic 3 years ago
I always like the intro, but this Tim intro was especially captivating. :D I think this is mostly due to just the right briskness and level of detail, which makes it an awesome hook and also a simple-but-not-simpler running table of contents for the episode. 👍
@JamesTromans 3 years ago
I enjoyed this Simon, takes me back!
@dyvel 3 years ago
The temporal encoding bit was very interesting. I wonder if you can adjust the filtering to cut off low-frequency responses when there is a high amount of information to be processed, and use a buffer to classify the discarded information, so that you get a two-layered temporal filtering system that is both short-term and long(er)-term at the same time. However, the classification of that information may be too high-order to assess at that level. Can the importance of specific information be determined before the end result has been reached? Like preprocessing the input to redirect common sensory inputs to a low-powered specialized network, so that the main network can be used for the yet-unknown complex classifications? Like classifying a specific type of input as a known pattern that doesn't need to go through full evaluation again, but only has to go through classification within a limited scope, leaving more general neurons available for yet-unclassified input patterns.
@abby5493 3 years ago
A very fascinating video 😍
@shivapundir7105 3 years ago
What is that software you are using in the beginning (intro part) to create those directional definition diagrams? It looks clean and sleek.
@Checkedbox 3 years ago
Hi Tim, great episode, but could I ask what you use for your mindmaps?
@LiaAnggraini1 3 years ago
The titles are always intriguing. I am a new subscriber; did you already post a topic about causal inference?
@dr.mikeybee 2 years ago
An interesting question is whether mere scale in an ANN can account for any functionality that can be created by architectural features. For example, can a for loop be unwound and represented by left-to-right paths through a large enough NN? In other words, can computational equivalence be achieved in all cases simply by increasing scale? My intuition says yes, but I don't completely trust intuition. I wonder if anyone has made a mathematical proof of this.
@dr.mikeybee 2 years ago
A corollary question is whether one can compress linear logic into a looping structure. It would be an interesting algorithmic challenge.
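On the loop-unrolling question above: for any fixed number of iterations T, applying one recurrent step T times is exactly a T-layer feedforward stack with tied weights, so the loop can be unwound into left-to-right paths. A toy NumPy sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.5   # shared recurrent weight matrix
x = rng.standard_normal(4)              # initial state
T = 3

# Looping form: apply the same step T times.
h = x
for _ in range(T):
    h = np.tanh(W @ h)

# Unrolled form: a fixed left-to-right stack of T identical layers.
h_unrolled = np.tanh(W @ np.tanh(W @ np.tanh(W @ x)))

assert np.allclose(h, h_unrolled)  # identical for a fixed horizon T
```

The catch is that the equivalence only holds for a fixed horizon T; making it hold "in all cases" for unbounded loops would require unbounded depth, which is where scale alone stops being a proof.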
@drdca8263 3 years ago
So, with recognizing the hand and pen being near each other across the different positions of the two, and across the different saccade directions, is the idea that the "fire together, wire together" thing is happening fast enough that some neuron ends up becoming associated with that, just based on a short experience? I had been under the impression that the strength of connections between neurons was something that changed much more slowly and was mostly about long-term learning, and that very short-term mental structures were just patterns of firings going around in a complicated cycle or something. Was I under the wrong impression there? (Sounds likely to me that I was misunderstanding something, just not sure what.) Unless there were some neurons that are always there for things which move together at a given pair of positions? This is confusing to me: it seems like there would have to be combinatorially many different neurons to describe all the combinations of some stuff, which, I guess, brains do have lots of neurons, so maybe that's right. But to have that, while also having the physically nearby neurons be more connected in a way that matches positions on the retina (which makes sense by itself), makes me wonder how there can possibly be enough room.
@JLongTom 1 year ago
I think the idea is that neuronal ensembles undergo modifications to become hand-detecting neurons, and that this process of becoming occurs over the course of many thousands of saccades, say during early development as a young child. Then the same scheme of spatial binding occurs with neuronal ensembles detecting the grabbed object. Regarding your question about the time course of synaptic plasticity: this occurs at all spatial scales and is implemented at different levels of synaptic and neuronal structure. Fast changes occur at the level of protein memories and become instantiated into receptor-level and then synaptic (bouton- and spine-level) structural changes.
@intontsang 3 years ago
Just found your podcast, great episode. What is the program used in the intro? I loved the way you presented that.
@MachineLearningStreetTalk 3 years ago
Whimsical
@intontsang 3 years ago
@@MachineLearningStreetTalk Great, thanks a lot.
@quebono100 3 years ago
The background is so hypnotic
@willcowan7678 11 months ago
"The way our brains work, we don't see labels over everything in the world." I am curious to what extent our genetics (maybe even epigenetics) hold labels. A simple example: we have tastes and smells that seem like they might be labels. What more complex or abstract labelling facilitates brain development?
@kimchi_taco 3 years ago
5:47 Hebbian theory: all neurons are the same but have different weights. The "strength" of each neuron has a limit; all neurons compete, "winner-take-all", like softmax. 24:40 Top-down connection is critical. 26:35 The binding neuron summarizes low activations and high activations, which reminds me of "Feedback Transformers". 30:50 With STDP, repeated presynaptic spike arrival a few milliseconds before postsynaptic action potentials leads in many synapse types to long-term potentiation (LTP) of the synapses, whereas repeated spike arrival after postsynaptic spikes leads to long-term depression (LTD) of the same synapse. www.scholarpedia.org/article/Spike-timing_dependent_plasticity 31:15 "Cells that fire together wire together." en.wikipedia.org/wiki/Hebbian_theory
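A minimal sketch of the pairwise STDP window described at 30:50 (illustrative constants; real models add weight bounds and eligibility traces):

```python
import numpy as np

# Pairwise STDP window: pre-before-post strengthens the synapse (LTP),
# post-before-pre weakens it (LTD), decaying exponentially in |Δt|.
A_plus, A_minus = 0.01, 0.012      # learning rates for LTP / LTD
tau_plus, tau_minus = 20.0, 20.0   # decay time constants (ms)

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre  # positive: pre fired before post
    if dt > 0:
        return A_plus * np.exp(-dt / tau_plus)    # LTP
    return -A_minus * np.exp(dt / tau_minus)      # LTD

print(stdp_dw(10.0, 15.0))  # pre leads post by 5 ms -> positive (LTP)
print(stdp_dw(15.0, 10.0))  # post leads pre by 5 ms -> negative (LTD)
```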
@DelandaBaudLacanian 1 year ago
bless you 💙
@charlesfoster6326 3 years ago
What's the intuition for why we should think "the feature-binding problem" should be hard for ANNs to solve? Work like OpenAI's CLIP (openai.com/blog/clip/) seems to provide evidence that mere co-occurrence alone can provide a strong enough signal to learn how to bind together robust, useful representations, even from disparate modalities. Should we expect this to stop sometime soon?
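For reference, probing CLIP's co-occurrence-learned binding takes only a few lines, following the usage in the openai/CLIP repository (assumes the clip package is installed; cat.png is a hypothetical local image file):

```python
import torch
import clip
from PIL import Image

# Load the pretrained vision-language model (per the openai/CLIP README).
device = "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("cat.png")).unsqueeze(0).to(device)  # hypothetical file
text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

with torch.no_grad():
    # Similarity between the image and each caption, learned purely
    # from image-text co-occurrence at training time.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(probs)  # higher probability on the matching caption
```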
@charlesfoster6326 3 years ago
Oh also nice video :)
@paxdriver 3 years ago
Love the channel, but I would love it even more without the green screens and flicker. The background isn't so fancy that it's worth the distracting AV. Maybe if you transposed those faces onto 3D models and rendered movies it'd be cool, but that's not feasible, I'm guessing, lol. 1 hr 21 min, for example: can't even use hands and a pen to speak and gesture because of the green-screen attempt. It's truly awful, but I'll stop ranting lol. Love the show, thanks much.
@bigbangind 3 years ago
I don't like this new setting; the previous one was better, with 4 splits of the screen.
@bigbangind 3 years ago
Seriously, you should at least not change the background of individual cam videos. Quality decreases, it just flickers. Simpler is better.
@bigbangind 3 years ago
What did you do, upload it at 2x speed? He talks fast :D
@machinelearningdojowithtim2898 3 years ago
First! ✌❤😎
@3nthamornin 3 years ago
😎
@AlanShore4god 3 years ago
"The brain is unsupervised." This perspective always confuses me, because the world labels itself timestep to timestep. You use features now to predict features next, which is as supervised as an RNN. I disagree that GPT-3 is too basic to support sophisticated emergent behaviors. The recurrence in an RNN generalizes well enough to facilitate "cyclicality", allowing cycles to form at any level of abstraction. This also follows from the fact that RNNs are Turing complete. Any argument against the deficiencies of the "engineering" approach in this domain will have to be an argument against backprop/SGD, not against the architecture.
@machinelearningdojowithtim2898 3 years ago
"RNNs are Turing complete" is pretty meaningless in practice; we need to change the record and stop making this point every time this discussion comes up.
@AlanShore4god 3 years ago
@@machinelearningdojowithtim2898 Yes, this is always the response when someone brings it up, but it's important in this context because a claim is being made that simple RNN architectures *can't* support the emergence of sophisticated behavior. It's just not true. I used to feel the same way, but after GPT-3 I've come around to the opposite perspective: I think people are way too quick to dismiss the importance of this characteristic. It's become a meme to act like it's unimportant, when in reality I haven't seen any work demonstrating how large the gap is between the ideal RNN for a sequence-learning task and the best possible RNN practically converged upon via standard backprop/SGD at really high dimensionality.
@machinelearningdojowithtim2898 3 years ago
@@AlanShore4god In the show Simon is talking about the emergence of very complex _temporal_ spiking dynamics, and circuits forming at many levels of abstraction; this is a behaviour of spiking neural networks which appears rapidly (en.wikipedia.org/wiki/Spiking_neural_network). GPT-3 has a fixed objective, is (effectively) supervised, and has no concept of time. I am not saying RNNs can't theoretically learn sophisticated behaviour, but they are limited by data and training objective. Also, watch my video on GPT-3 if you haven't already; I didn't see any evidence of general intelligence.
@AlanShore4god 3 years ago
@@machinelearningdojowithtim2898 Very excited to watch your video. I will confess that I don't have a very rigorous definition of what constitutes sufficiently "sophisticated emergent behavior". I am leaning on an assumption that I would be able to pull something out of my ass if such a description were offered.
@AlanShore4god 3 years ago
@@machinelearningdojowithtim2898 My interpretation of the temporal spiking dynamics observed in the brain is that they are a consequence of the brain having to solve a fundamentally different problem than the one neural networks attempt to solve, before it can attempt to learn in the way that neural networks learn. The problem I'm referring to is establishing stable representations of current and historical features across the layers of abstraction. This is taken for granted in neural networks, because the states of all parameters of the network (and inputs) are completely stable in computer memory over time, so neural networks enter the learning arena with a significant advantage. It really sucks for the brain, because in order to build higher-level features out of lower-level features, it must work to maintain state between neurons involved in lower- and upper-level features to build connections between them. It takes a lot of time for signal to flow from lower layers to higher layers in a biological brain, so a lot of effort must be invested in stabilizing activation patterns both between layers and across layers, over durations long enough to enable learning. I suspect this is what is being witnessed in observations of self-organizing circuits in the brain, because that kind of problem can be solved autoregressively, or by optimizing for stability and synchronicity, without having to parameterize on an explicit biological goal. Learning on an explicit goal is what happens after this feature-stability problem has been solved, and that is exactly what neural networks do. From this perspective, a lot of the complexity observed in biological brains can be thought of as achieving an objective which is a given for neural networks trained on computers. That's why I don't find temporal spiking dynamics to be particularly important for thinking about general AI.
@marekglowacki2607 3 years ago
Planes don't have feathers. The problem is that we don't know what the feather is in the brain.
@jantuitman 3 years ago
"If an image were stabilized on the retina, humans would go blind." Compare that to current neural networks, which use many, many training iterations where the image is the same. 🤣
@shabamee9809 3 years ago
2nd
@Chr0nalis 3 years ago
4th
@siyn007 3 years ago
Great podcasts, but they tend to feel overly edited at times.
@quebono100 3 years ago
3rd
@bigbangind 3 years ago
13th
@atriantafy 3 years ago
No one wants to be that GOFAI guy