Using Fast Weights to Attend to the Recent Past, NIPS 2016 | Jimmy Ba, University of Toronto

2,243 views

Preserve Knowledge

6 years ago

Until recently, research on artificial neural networks was largely restricted to systems with only two types of variable: neural activities that represent the current or recent input, and weights that learn to capture regularities among inputs, outputs and payoffs. There is no good reason for this restriction. Synapses have dynamics at many different time-scales, and this suggests that artificial neural networks might benefit from variables that change more slowly than activities but much faster than the standard weights. These "fast weights" can be used to store temporary memories of the recent past, and they provide a neurally plausible way of implementing the type of attention to the past that has recently proven helpful in sequence-to-sequence models. By using fast weights we can avoid the need to store copies of neural activity patterns.
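
The abstract compresses the whole mechanism, so here is a minimal NumPy sketch of the two pieces it describes: the decaying Hebbian write into the fast weight matrix, and the inner "settling" loop that lets the network attend to recently stored patterns (see arxiv.org/abs/1610.06258). The dimensions, decay rate, learning rate, and the toy retrieval demo are illustrative assumptions, and the paper's layer normalization is omitted; this is a sketch of the idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                 # hidden-state size (illustrative choice)
lam, eta = 0.95, 0.5   # decay rate and fast learning rate (illustrative choices)

# Fast weight matrix A(t): a temporary, outer-product memory of recent states.
A = np.zeros((d, d))

def store(A, h):
    """Hebbian write with decay: A(t+1) = lam * A(t) + eta * h h^T."""
    return lam * A + eta * np.outer(h, h)

def settle(A, boundary, steps=3):
    """Inner loop: h_{s+1} = tanh(boundary + A h_s), where `boundary`
    stands in for the sustained input W h(t) + C x(t) from the paper."""
    h = np.tanh(boundary)
    for _ in range(steps):
        h = np.tanh(boundary + A @ h)
    return h

# Write three recent hidden states into the fast memory.
patterns = [np.tanh(rng.standard_normal(d)) for _ in range(3)]
for p in patterns:
    A = store(A, p)

# Probe with a noisy copy of the most recent pattern; the inner loop
# should pull the state back toward the patterns stored in A, with the
# most recent ones weighted most heavily (they have decayed least).
probe = patterns[-1] + 0.3 * rng.standard_normal(d)
h = settle(A, probe)

for i, p in enumerate(patterns):
    cos = float(p @ h / (np.linalg.norm(p) * np.linalg.norm(h)))
    print(f"cosine similarity to stored pattern {i}: {cos:.3f}")
```

Because the contribution of each stored pattern is scaled by lam raised to its age, iterating the inner loop behaves like soft attention over the recent past without storing explicit copies of the activity patterns, which is the point the abstract makes.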

Comments: 3
@johannesgh90 · 6 years ago
Here's the paper, everyone: arxiv.org/abs/1610.06258. This is really interesting, and I hope to see more about different implementations of fast weights soon. Just the elaboration in the Q&A here is really cool. Also, I really like that in the paper you say "remember gates" rather than "forget gates" in reference to LSTMs, because the inaccuracy of that name has always bugged me.
@revimfadli4666 · 6 months ago
It looks like it greatly outperforms LSTMs, so I wonder what's keeping it from becoming the next gold standard. It's also a bit of a shame it only blew up after Transformers had replaced RNNs for mainstream purposes. With the recent surge of graph nets and massively multi-agent learning, I hope it gets another chance to be used.