Invariance and equivariance in brains and machines

7,737 views

MITCBMM

A day ago

Bruno Olshausen, UC Berkeley
Abstract: The goal of building machines that can perceive and act in the world as humans and other animals do has been a focus of AI research efforts for over half a century. Over this same period, neuroscience has sought to achieve a mechanistic understanding of the brain processes underlying perception and action. It stands to reason that these parallel efforts could inform one another. However recent advances in deep learning and transformers have, for the most part, not translated into new neuroscientific insights; and other than deriving loose inspiration from neuroscience, AI has mostly pursued its own course which now deviates strongly from the brain. Here I propose an approach to building both invariant and equivariant representations in vision that is rooted in observations of animal behavior and informed by both neurobiological mechanisms (recurrence, dendritic nonlinearities, phase coding) and mathematical principles (group theory, residue numbers). What emerges from this approach is a neural circuit for factorization that can learn about shapes and their transformations from image data, and a model of the grid-cell system based on high-dimensional encodings of residue numbers. These models provide efficient solutions to long-studied problems that are well-suited for implementation in neuromorphic hardware or as a basis for forming hypotheses about visual cortex and entorhinal cortex.
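As background for the abstract's mention of "high-dimensional encodings of residue numbers": in a residue number system, an integer is represented by its remainders modulo a set of pairwise-coprime moduli, and can be recovered via the Chinese Remainder Theorem. This mirrors how grid cells encode position with multiple spatial periods. The sketch below is a minimal illustration of residue numbers in general, not of the speaker's actual grid-cell model; the function names and moduli are chosen for illustration only.

```python
from math import prod

def to_residues(x, moduli):
    """Encode integer x as its remainders modulo pairwise-coprime moduli."""
    return [x % m for m in moduli]

def from_residues(residues, moduli):
    """Reconstruct x (mod the product of the moduli) via the
    Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi mod m;
        # it exists because the moduli are pairwise coprime.
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# Three small "grid-cell-like" periods jointly cover 3*5*7 = 105 positions,
# far more than any single period alone.
moduli = [3, 5, 7]
encoded = to_residues(38, moduli)   # [2, 3, 3]
assert from_residues(encoded, moduli) == 38
```

Note the combinatorial efficiency: each modulus only needs to distinguish a handful of states, yet together the residues uniquely identify any position up to the product of the moduli.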
Bio: Bruno Olshausen is a Professor in the Helen Wills Neuroscience Institute and the School of Optometry at UC Berkeley, with a below-the-line affiliated appointment in EECS. He holds B.S. and M.S. degrees in Electrical Engineering from Stanford University, and a Ph.D. in Computation and Neural Systems from the California Institute of Technology. He did his postdoctoral work in the Department of Psychology at Cornell University and at the Center for Biological and Computational Learning at the Massachusetts Institute of Technology. From 1996 to 2005 he was on the faculty in the Center for Neuroscience at UC Davis, and in 2005 he moved to UC Berkeley. He also directs the Redwood Center for Theoretical Neuroscience, a multidisciplinary research group focused on building mathematical and computational models of brain function (see redwood.berkele...).
Olshausen's research focuses on understanding the information processing strategies employed by the visual system for tasks such as object recognition and scene analysis. Computer scientists have long sought to emulate the abilities of the visual system in digital computers, but achieving performance anywhere close to that exhibited by biological vision systems has proven elusive. Dr. Olshausen's approach is based on studying the response properties of neurons in the brain and attempting to construct mathematical models that can describe what neurons are doing in terms of a functional theory of vision. The aim of this work is not only to advance our understanding of the brain but also to devise new algorithms for image analysis and recognition based on how brains work.
cbmm.mit.edu/n...

Comments: 11
@rockapedra1130 5 months ago
Wow! So many cool observations and super clever tricks! Plus Bruno is very good at explaining enough of the background succinctly so that it is easy to follow. His ability to make things intuitive makes a huge difference for me. The lecture is sufficiently self-contained that you don't go off the rails over some small thing you don't know, which would make the rest of the lecture incomprehensible. Kudos!
@seasnowcai 5 months ago
Such a wonderful talk! Thank you for sharing! This talk helped me understand a puzzle I have had for a long time: how is human visual perception more efficient than machine learning, expressed in math? It makes sense to decompose the mechanism into key factors, such as equivariance and invariance, and use combinatorics to cover large numbers of possibilities. Bruno has done a great job explaining math models in such an intuitive way that I can understand the basic ideas without getting into too many technical details. My original naive idea was that learning from motion must have played a substantial role in human vision, so maybe we should use videos instead of static pictures in machine learning. But that would further worsen the computational cost. These lines of research seem to open a promising new path! Looking forward to more exciting findings!
@WaveOfDestiny 5 months ago
One of the best lectures I've ever seen
@zartajmajeed 5 months ago
52:05 Main points - 1. Animal behavior tells us what problems the brain is solving, 2. Biological structure gives us clues about the mechanisms involved, 3. Mathematical structure provides the computational foundations
@paulilorenz3039 5 months ago
Theoretical neuroscience sounds like a lovely field. Is it popular in Europe? Amazing video, thank you for publishing.
@oceanwang2652 5 months ago
Great presentation! It brought me much inspiration for my graph neural network research.
@yairreyes9288 5 months ago
Amazing content
@mausplunder5313 5 months ago
Very interesting presentation... hope some day I can contribute to research like this.
@jordia.2970 5 months ago
Bruh, amazing stuff
@themultiverse5447 5 months ago
Don't let anyone tell you; your elaborately pontificated assertion is anti-egregious. Someday soon, I hope to cogitate scientific nomenclature such as Bruv. However lamentably, I too use the penultimate, pejorative; Bruh.
@AlgoNudger 5 months ago
Thank you. 😊