Biologically motivated learning dynamics: parallel architectures and nonlinear Hebbian plasticity | Michael Buice, Allen Institute


The Theoretical Neuroscience Channel


Van Vreeswijk Theoretical Neuroscience Seminar
www.wwtns.online; on Twitter: WWTNS, @TheoreticalWide
Wednesday, April 10, 2024, at 11:00 am ET
Michael Buice
Allen Institute
Title: Biologically motivated learning dynamics: parallel architectures and nonlinear Hebbian plasticity
Abstract: Learning in biological systems takes place in contexts and with dynamics not often accounted for by simple models. I will describe the learning dynamics of two model systems that incorporate either architectural or dynamic constraints from biological observations. In the first case, inspired by the observed mesoscopic structure of the mouse brain as revealed by the Allen Mouse Brain Connectivity Atlas, as well as multiple examples of parallel pathways in mammalian brains, I present a mathematical analysis of learning dynamics in networks that have parallel computational pathways driven by the same cost function. We use the approximation of deep linear networks with large hidden layer sizes to show that, as the depth of the parallel pathways increases, different features of the training set (defined by the singular values of the input-output correlation) will typically concentrate in one of the pathways. This result is derived analytically and demonstrated with numerical simulations of both linear and nonlinear networks. Thus, rather than sharing stimulus and task features across multiple pathways, parallel network architectures learn to produce sharply diversified representations with specialized and specific pathways, a mechanism that may hold important consequences for codes in both biological and artificial systems. In the second case, I discuss learning dynamics in a generalization of Hebbian rules and show that these rules allow a neuron to learn tensor decompositions of higher-order input correlations. Unlike the case of the Oja rule and PCA, the resulting learned representation is not unique but selects amongst the tensor eigenvectors according to initial conditions.
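To make the first result concrete, here is a minimal numerical sketch (not the speaker's code) of the parallel-pathway setting: two deep linear pathways whose summed outputs are trained by gradient descent on a shared mean-squared-error cost against a teacher map with a few distinct singular values. All specifics (depth, hidden width, learning rate, the projection onto the teacher's singular directions) are illustrative assumptions; the sketch only shows how one could probe whether each singular mode of the input-output map ends up concentrated in a single pathway.

```python
# Illustrative sketch only, not the speaker's code. Two parallel deep
# *linear* pathways are trained on a shared MSE cost; afterwards we check
# how each singular mode of the teacher map is split between the pathways.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out, n_hidden, depth = 8, 8, 32, 3        # depth = layers per pathway
n_samples, lr, n_steps = 500, 1e-2, 10000

# Teacher map with a few well-separated singular values (the "features");
# with whitened Gaussian inputs these are the singular values of the
# input-output correlation.
U, _ = np.linalg.qr(rng.normal(size=(n_out, n_out)))
V, _ = np.linalg.qr(rng.normal(size=(n_in, n_in)))
s = np.array([5.0, 3.0, 1.5] + [0.0] * (n_in - 3))
W_teacher = U @ np.diag(s) @ V.T

X = rng.normal(size=(n_in, n_samples))
Y = W_teacher @ X

sizes = [n_in] + [n_hidden] * (depth - 1) + [n_out]

def init_pathway():
    # Small random layers so the pathways start nearly "blank".
    return [rng.normal(size=(sizes[i + 1], sizes[i])) / np.sqrt(2 * sizes[i])
            for i in range(depth)]

def end_to_end(Ws):
    # Product of the pathway's layers: a single effective linear map.
    M = np.eye(n_in)
    for W in Ws:
        M = W @ M
    return M

def grads(Ws, E):
    # Gradients of 0.5 * ||E||^2 / n_samples w.r.t. each layer of one pathway.
    prefix = [np.eye(n_in)]                       # layers below layer i
    for W in Ws[:-1]:
        prefix.append(W @ prefix[-1])
    suffix = [np.eye(n_out)]                      # layers above layer i
    for W in reversed(Ws[1:]):
        suffix.insert(0, suffix[0] @ W)
    return [suffix[i].T @ E @ (prefix[i] @ X).T / n_samples
            for i in range(depth)]

pathways = [init_pathway(), init_pathway()]

for step in range(n_steps):
    # Shared error signal: both pathways contribute to the same output.
    E = sum(end_to_end(Ws) for Ws in pathways) @ X - Y
    for Ws in pathways:
        for W, g in zip(Ws, grads(Ws, E)):
            W -= lr * g

# Strength of each teacher singular mode carried by each pathway.
for p, Ws in enumerate(pathways):
    M = end_to_end(Ws)
    modes = [U[:, k] @ M @ V[:, k] for k in range(3)]
    print(f"pathway {p}: mode strengths {np.round(modes, 2)}")
```

If the specialization described in the abstract holds in this toy setting, each singular mode's strength should be carried almost entirely by one of the two pathways rather than being shared between them.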

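For the second part, here is a correspondingly minimal sketch of a higher-order, Oja-like Hebbian rule (an illustrative stand-in, not necessarily the exact rule analyzed in the talk). The update dw ∝ ⟨(w·x)² x⟩ − ⟨(w·x)³⟩ w performs a power-iteration-like ascent on the third-order input moment tensor, so with independent skewed sources mixed through orthonormal directions the weight vector converges to one of the mixing directions (a tensor eigenvector). Unlike the Oja rule, which always recovers the top principal component, which eigenvector is selected depends on the initial condition. The source statistics and hyperparameters below are assumptions chosen for the demonstration.

```python
# Illustrative sketch of a higher-order Oja-like Hebbian rule that performs
# power iteration on the third-order input moment tensor. Which tensor
# eigenvector it converges to depends on the initial weight vector.
import numpy as np

rng = np.random.default_rng(1)

n, k = 10, 3                                    # input dim, number of sources
A, _ = np.linalg.qr(rng.normal(size=(n, k)))    # orthonormal mixing directions

def sample_x(batch):
    # Inputs x = A z with independent, zero-mean, positively skewed sources,
    # so the third-order moment tensor has components along the columns of A.
    z = rng.exponential(scale=1.0, size=(k, batch)) - 1.0
    return A @ z

def run(w0, eta=5e-3, steps=20000, batch=64):
    w = w0 / np.linalg.norm(w0)
    for _ in range(steps):
        x = sample_x(batch)
        y = w @ x                               # postsynaptic activity, shape (batch,)
        hebb = (x * y**2).mean(axis=1)          # third-order Hebbian term <y^2 x>
        decay = (y**3).mean() * w               # Oja-style normalization term <y^3> w
        w = w + eta * (hebb - decay)
        w /= np.linalg.norm(w)                  # keep the weight on the unit sphere
    return w

# Different initial conditions select different tensor eigenvectors.
for trial in range(4):
    w = run(rng.normal(size=n))
    overlaps = np.abs(A.T @ w)                  # alignment with each mixing direction
    print(f"trial {trial}: overlaps with mixing directions {np.round(overlaps, 2)}")
```

Across trials the learned weight should align strongly with one mixing direction at a time, with different trials landing on different directions, in contrast to PCA-like rules, which would always converge to the same (top-variance) direction.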