Mixture-of-Depths: Dynamically allocating compute in transformer-based language models

2,172 views

Gabriel Mongaras

1 day ago

Comments: 20
@raraismature 8 months ago
awesome content, seriously
@toatoa10 8 months ago
Great video! This is much easier to understand than just reading the paper. What app are you using for annotating the paper and making notes?
@gabrielmongaras 8 months ago
Thanks! Glad you found my video helpful! I'm using the default Samsung Notes app to make all the annotations and notes.
@gauravsrivastava9428 8 months ago
Thanks for the video tutorial, it is really helpful! At 24:08, when you mention softmax, do you mean that a softmax is used to compute the routing scalars? If so, as per my understanding they don't compute the routing scalars with a softmax; each scalar is just the inner product of the token with the routing weight vector.
@gabrielmongaras 8 months ago
Oh yes, I see what you're talking about. On page 6, right above equation (1), they mention that the routing weight is computed as the inner product between the router's weight vector and the token, which is different from normal MoE. I suppose this fixes the gradient problem I was talking about. Thanks for the clarification!
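For anyone following along, here is a rough sketch of how that routing could look in PyTorch (hypothetical code, not the authors' implementation; the layer interface and the capacity default are assumptions): the routing scalar is a plain inner product with a learned weight vector (no softmax), the top-k tokens per sequence pass through the full transformer layer, and the block output is scaled by the routing scalar so the router still receives a gradient, while every other token skips the layer through the residual.

```python
import torch
import torch.nn as nn

class MoDBlock(nn.Module):
    """Sketch of a Mixture-of-Depths wrapper around a single transformer layer.
    Illustrative only; names and defaults are not from the paper."""
    def __init__(self, d_model, layer, capacity=0.125):
        super().__init__()
        self.layer = layer                                  # full transformer layer (attention + MLP), no outer residual
        self.router = nn.Linear(d_model, 1, bias=False)     # learned routing weight vector w
        self.capacity = capacity                            # fraction of tokens that get processed

    def forward(self, x):                                   # x: (batch, seq, d_model)
        b, s, d = x.shape
        k = max(1, int(self.capacity * s))
        r = self.router(x).squeeze(-1)                      # r_i = <w, x_i>: inner product, no softmax
        idx = torch.topk(r, k, dim=-1).indices.sort(dim=-1).values  # keep sequence order for causal attention
        gather_idx = idx.unsqueeze(-1).expand(-1, -1, d)
        selected = torch.gather(x, 1, gather_idx)           # (batch, k, d_model)
        gate = torch.gather(r, 1, idx).unsqueeze(-1)        # routing scalars of the selected tokens
        processed = gate * self.layer(selected)             # scale by r_i so the router receives a gradient
        return x.scatter(1, gather_idx, selected + processed)  # x_i + r_i * f(.)_i; unrouted tokens pass through
```

For a quick shape check you could run, say, `MoDBlock(512, nn.Identity())(torch.randn(2, 128, 512))`.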
@ml-ok3xq 8 months ago
I thought people theorised that transformers still use the 'slack' tokens for other purposes, so the compute is not wasted. I guess this shows that maybe those theories needed to be rigorously tested. Although, since they only sandwich the routed layers, maybe it is fully used; this method effectively gives some tokens up to double the mixing time.
@DiogoNeves 8 months ago
I'm not sure I understand: even though the sigmoids are independent, why would this allow causal sampling if it was trained to mimic a distribution that isn't causal? It still carries information from the future, albeit indirectly, no? For example, if we were training on the distribution of a biased lottery, we would still be predicting the future from just some of the tokens?
@DiogoNeves 8 months ago
Ah, I think you mention exactly that afterwards 😅 thanks
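For context on the fix discussed in the video: as I understand the paper, one option is a small auxiliary predictor trained alongside the model to guess, from the current token alone, whether that token would have made the top-k, so routing decisions stay causal at sampling time. A toy sketch of that idea (hypothetical code, with an assumed hidden size and a stop-gradient so it doesn't disturb the main model):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKPredictor(nn.Module):
    """Tiny per-token classifier: predicts whether a token will be routed,
    using only that token's activation (so the decision is causal at sampling time)."""
    def __init__(self, d_model, d_hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, 1))

    def forward(self, x):                                  # x: (batch, seq, d_model)
        return self.mlp(x).squeeze(-1)                     # one logit per token

def aux_loss(predictor, x, topk_idx):
    """Binary cross-entropy against the non-causal top-k decision made during training."""
    target = torch.zeros(x.shape[:2], device=x.device)
    target.scatter_(1, topk_idx, 1.0)                      # 1 where the token was actually selected
    logits = predictor(x.detach())                         # stop-gradient: the main model is unaffected
    return F.binary_cross_entropy_with_logits(logits, target)
```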
@DiogoNeves 8 months ago
One more question: can these be added to existing models and trained separately? From the description it sounds like that's possible.
@gabrielmongaras 8 months ago
I don't think they talked about doing that in the paper. My intuition says it may be hard and probably wouldn't work as well as we might hope. The activations going into attention are whatever the attention mechanism needs, but in this paper the same activations are also used for ranking. My first thought is that these two activation distributions are quite different, so the model would start from a poor state. I wonder if Google tried something like this, found it didn't work that well, and decided not to include it in the paper? Would be totally worth trying if you have the compute though! Maybe you could start off by initializing the routing to select all tokens and slowly decrease the capacity during fine-tuning.
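If anyone wants to try that last idea, here is roughly what the schedule could look like, building on the MoDBlock sketch above (entirely hypothetical; the paper doesn't report such an experiment): start fine-tuning with capacity 1.0, so the wrapped layers behave like the original dense layers, then anneal the capacity down to the target fraction.

```python
def capacity_schedule(step, total_steps, start=1.0, end=0.125):
    """Linearly anneal routing capacity from 'route every token' down to the target
    fraction over fine-tuning. A made-up schedule for illustration."""
    frac = min(step / max(total_steps, 1), 1.0)
    return start + frac * (end - start)

def set_capacity(model, capacity):
    # Update every routed block (e.g. the MoDBlock sketch above) inside the model.
    for module in model.modules():
        if hasattr(module, "capacity"):
            module.capacity = capacity

# Per optimizer step in an otherwise standard fine-tuning loop:
#   set_capacity(model, capacity_schedule(step, total_steps))
#   ... forward pass, loss, backward, optimizer.step() ...
```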
@ckpioo 8 months ago
Awesome, btw maybe try using excalibur
@Stan-san 8 months ago
Why use lot words when few words do trick?
@gabrielmongaras 8 months ago
Yeah, definitely a problem I have 😅 I've been trying to get better at it, and realized after uploading that I could've explained the extra loss part in much fewer words. In general, it's sometimes hard to know whether an explanation is satisfying when trying to balance conciseness and completeness.
@rykim4626 4 months ago
@gabrielmongaras They might be referring to mixture of depths using only a few of the words. Personally, I thought your explanations were great.
@theatheistpaladin 8 months ago
What field of math do you need to understand this?
@gabrielmongaras 8 months ago
Just an understanding of machine learning models at a high level and of how transformers work. In MoE the experts themselves are just linear or feed-forward layers, and the single expert in this paper is an entire transformer layer.
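To make that contrast concrete, here is a toy top-1 MoE feed-forward layer (illustrative code, not from the paper): the "experts" are small feed-forward networks and a softmax router picks one per token, whereas in Mixture-of-Depths the single "expert" is the whole transformer layer and the router instead decides which tokens enter it at all.

```python
import torch
import torch.nn as nn

class MoEFeedForward(nn.Module):
    """Toy top-1 mixture-of-experts layer: each token is sent to one small FFN expert."""
    def __init__(self, d_model, d_hidden, n_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                                  # x: (batch, seq, d_model)
        scores = torch.softmax(self.router(x), dim=-1)     # MoE routers typically softmax over experts
        gate, choice = scores.max(dim=-1)                  # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = choice == e
            if mask.any():
                out[mask] = gate[mask].unsqueeze(-1) * expert(x[mask])
        return x + out                                     # in MoD, the single "expert" would be the whole layer
```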
@MrNathanShow 8 months ago
I'd add that a basic understanding of statistics helps, along with some introductory calculus. But for the most part there is more trial and error behind these discoveries than you might believe. The understanding sometimes comes after ;)
@jaredtweed7826 8 months ago
@gabrielmongaras What do you think helped you best understand neural networks? I have a shallow understanding of how transformers work: I know how the encoder works, but I don't really understand the decoder fully. I also know PyTorch only well enough to build simple convolutional neural networks, though I have a really strong understanding of calculus and linear algebra.
@tgugdevil 8 months ago
Calculus and Linear Algebra.
@jaredtweed7826 8 months ago
@tgugdevil Thank you! Sorry, I forgot to mention that I already have a strong understanding of those concepts.