A Fruitful Reciprocity: The Neuroscience-AI Connection

8,015 views

MITCBMM

11 months ago

Dan Yamins, Stanford University
Abstract: The emerging field of NeuroAI has leveraged techniques from artificial intelligence to model brain data. In this talk, I will show that the connection between neuroscience and AI can be fruitful in both directions. Towards "AI driving neuroscience", I will discuss a new candidate universal principal for functional organization in the brain, based on recent advances in self-supervised learning, that explains both fine details as well as large-scale organizational structure in the vision system, and perhaps beyond. In the direction of "neuroscience guiding AI", I will present a novel cognitively-grounded computational theory of perception that generates robust new learning algorithms for real-world scene understanding. Taken together, these ideas illustrate how neural networks optimized to solve cognitively-informed tasks provide a unified framework for both understanding the brain and improving AI.
Bio: Dr. Yamins is a cognitive computational neuroscientist at Stanford University, an assistant professor of Psychology and Computer Science, a faculty scholar at the Wu Tsai Neurosciences Institute, and an affiliate of the Stanford Artificial Intelligence Laboratory. His research group focuses on reverse engineering the algorithms of the human brain, both to learn how our minds work and to build more effective artificial intelligence systems. He is especially interested in how brain circuits for sensory information processing and decision-making arise by optimizing high-performing cortical algorithms for key behavioral tasks. He received his AB and PhD degrees from Harvard University, was a postdoctoral researcher at MIT, and has been a visiting researcher at Princeton University and Los Alamos National Laboratory. He is a recipient of an NSF CAREER Award, the James S. McDonnell Foundation award in Understanding Human Cognition, and the Sloan Research Fellowship. Additionally, he is a Simons Foundation Investigator.

Comments: 9
@willd1mindmind639
@willd1mindmind639 10 months ago
I think this way of looking at the brain to model computer neural networks omits a key difference between brains and computers. Brains have discreteness built in, which makes learning to identify patterns and shapes, along with the relationships between them, much easier. Computers have no intrinsic means of generating discrete elements to distinguish one element from another, such as in a collection of pixels. So a computer can never match the way the brain learns, because it lacks the discrete data encoding that biology gets from molecular values. (To see this best, look at the cells in the skin of a camouflaging octopus.)

The fundamental behavior of a computer neural network is to build a model that approximates the base classifier or set of classifiers (dog, cat, human) you want to use for identification; without that base classifier, there is no way to identify anything in a computer imaging pipeline. That is why unsupervised learning doesn't work: there are no base models to compare against. The contrastive approach seems to work, but even there it doesn't have the fidelity and flexibility of the human brain. Local aggregation is a mathematical approximation that is totally different from how biological neural networks operate. A child can still distinguish two dogs by the type of fur, the color of fur, and other discrete characteristics that a computer neural network has no way of understanding innately, because these unsupervised models are still generalizing a high-level classifier such as "dog" rather than really understanding all the characteristics and elements that make up a dog: legs, tail, fur, ears, snout, tongue, etc.

Ultimately, all computer neural networks operate on a mathematical model that tries to generate discreteness through classifiers via computational processing. That imposes a cost that doesn't exist in biology, at a far lower degree of fidelity and detail. Brains don't have built-in, previously trained classifiers for things.
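For readers unfamiliar with the "contrastive approach" and "local aggregation" this comment refers to: the rough idea is to pull an embedding toward its close neighbors in feature space and away from a larger background set, with no labels involved. The sketch below is a loose PyTorch illustration of that idea under assumed inputs (a memory bank and precomputed neighbor indices), not the published Local Aggregation algorithm.

```python
import torch
import torch.nn.functional as F

def local_aggregation_style_loss(z, bank, close_idx, background_idx,
                                 temperature=0.07):
    """Loose sketch of a local-aggregation-style objective.

    z:              [D] embedding of the current image
    bank:           [M, D] memory bank of embeddings for the dataset
    close_idx:      indices of z's close neighbors in the bank (assumed given)
    background_idx: indices of a larger background neighborhood (a superset)
    """
    z = F.normalize(z, dim=0)
    bank = F.normalize(bank, dim=1)
    sims = torch.exp(bank @ z / temperature)  # [M] similarity weights
    # Concentrate probability mass on close neighbors relative to background.
    return -torch.log(sims[close_idx].sum() / sims[background_idx].sum())
```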
@doudouban
@doudouban 7 months ago
A child can see, touch, and smell with live feedback, while AI faces cold image data. I think that if we gave machines fully functioning bodies and improved algorithms, they might learn much faster and could be improved much faster.
@willd1mindmind639
@willd1mindmind639 7 months ago
@doudouban It is a difference in how data is encoded. In the brain, for example, each color captured by the retina has a specific, discrete molecular encoding separating it from other colors, which means the visual image in the brain is a collection of multiple networks of these discrete low-level molecular values. No "work" is required to distinguish one color, or one feature, from another based on these values. In a computer neural network, by contrast, everything is a number, so you have to do work to convert those collections of numeric values into distinct features. Most of the reason current neural network frameworks still use pre-existing encoding formats for imagery is that they are designed to be portable and to operate on existing data formats; the other reason is that algorithms like convolutions are based on pixels in order to work.
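To make the "everything is a number" point concrete, here is a minimal PyTorch illustration (with made-up values) of a convolution operating directly on raw pixel intensities; nothing in the data format marks one value as categorically distinct from another, so any "discreteness" has to be learned as filter weights.

```python
import torch
import torch.nn.functional as F

# A 1x1x4x4 "image": just a grid of intensities with no intrinsic
# labels distinguishing one region from another.
img = torch.tensor([[[[0., 0., 1., 1.],
                      [0., 0., 1., 1.],
                      [1., 1., 0., 0.],
                      [1., 1., 0., 0.]]]])

# A 2x2 vertical-edge kernel; the notion of an "edge" lives entirely
# in these weights, not in the pixel encoding itself.
kernel = torch.tensor([[[[1., -1.],
                         [1., -1.]]]])

edges = F.conv2d(img, kernel)  # [1, 1, 3, 3] response map
print(edges)
```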
@narenmanikandan4261
@narenmanikandan4261 2 months ago
@willd1mindmind639 I see what @doudouban is saying in relation to your earlier comment. If we gave the model the capability to sense things physically, that would greatly increase its learning speed, since a large share of AI's use cases rely on physical interaction (such as the five senses). Then, I guess, the real challenge lies in data that isn't inherently physical (such as code and text).
@hyphenpointhyphen
@hyphenpointhyphen 11 months ago
I like the parsimony approach. Not sure if I get this right, but couldn't a working type of memory then selectively grant access to lower-level *-topic maps in parallel, as feedback for so-called higher brain functions? The foundational model would deliver the mappings and basic functionality for higher brain functions to access and optimize (learn) target functions, whichever are useful in a social context, and thus, in light of evolution, stabilize genetics. A few more months and CAPTCHAs won't work anymore. If those evolutionary parameters are hard-coded, shouldn't there be genes, markable or knockable during development, that determine connection strength?
@AlgoNudger
@AlgoNudger 11 months ago
Thanks.
@jerryzhang3506
@jerryzhang3506 11 months ago
👏👏👏
@richardnunziata3221
@richardnunziata3221 11 months ago
Not unlike the evolution of eye saccades.
@hyphenpointhyphen
@hyphenpointhyphen 11 months ago
Care to explain? You mean as error correction of flow?