Great video! This is much easier to understand than just reading the paper. What app are you using for annotating the paper and making notes?
@gabrielmongaras · 8 months ago
Thanks! Glad you found my video helpful! I'm using the default Samsung Notes app to make all the annotations and notes.
@gauravsrivastava9428 · 8 months ago
Thanks for the video tutorial, it is really helpful! At 24:08, when you mention softmax, do you mean a softmax is done to compute the routing scalars? If so, as per my understanding they don't compute the routing scalars using a softmax. The scalars are computed just by taking an inner product of each token with the routing weight vector.
@gabrielmongaras · 8 months ago
Oh yes, I see what you're talking about. On page 6, right above equation (1), they mention that each token's router weight is computed as the inner product between the routing weight vector and the token embedding, which is different from normal MoE. I suppose this fixes the gradient problem I was talking about. Thanks for the clarification!
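(For anyone following along, here is a minimal sketch of that routing step, assuming PyTorch; the names `w_theta` and `capacity` are illustrative, not taken from the paper's code.)

```python
import torch

def route_tokens(x, w_theta, capacity):
    """Mixture-of-Depths style routing, per equation (1) of the paper.

    x:       (batch, seq_len, d_model) token embeddings
    w_theta: (d_model,) routing weight vector
    """
    # Router weight per token: r_i = w_theta^T x_i.
    # A plain inner product -- no softmax over experts as in standard MoE.
    r = x @ w_theta                         # (batch, seq_len)
    # Only the top-`capacity` tokens per sequence go through the block.
    topk = torch.topk(r, k=capacity, dim=-1)
    return r, topk.indices
```

As I understand the paper, the selected tokens' block outputs are then also multiplied by their scalar r_i, which is what lets gradients reach the router despite the hard top-k selection.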
@ml-ok3xq · 8 months ago
I thought people theorised that transformers still use the 'slack' tokens for other purposes, so the compute is not wasted. I guess this shows that maybe those theories needed to be rigorously tested. Although, since they only sandwich the routed layers between normal ones, maybe the compute is fully used; this method effectively gives some tokens up to double the mixing time.
@DiogoNeves · 8 months ago
I'm not sure I understand: even though the sigmoids are independent, why would that allow causal sampling if the router was trained to mimic a distribution that isn't causal? It carries information from the future, albeit indirectly, no? For example, if we were training on a distribution of a biased lottery, we would still be predicting the future from just some of the tokens?
@DiogoNeves · 8 months ago
Ah, I think you mention exactly that afterwards 😅 thanks
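(For context: the top-k routing decision depends on the whole sequence, so it isn't causal at inference time. The paper's workaround, which the video gets to, is a small auxiliary predictor trained with a sigmoid/binary-cross-entropy loss to guess, from each token alone, whether it would have made the top-k. A rough sketch, assuming PyTorch; the class and variable names here are mine, not the paper's.)

```python
import torch.nn as nn
import torch.nn.functional as F

class CausalRouterPredictor(nn.Module):
    """Tiny per-token classifier: predicts from the current token alone
    whether it would fall inside the sequence-wide top-k."""
    def __init__(self, d_model):
        super().__init__()
        self.proj = nn.Linear(d_model, 1)

    def forward(self, x):                  # x: (batch, seq_len, d_model)
        return self.proj(x).squeeze(-1)    # one logit per token

# Training: supervise against the (non-causal) top-k decisions.
# logits  = predictor(x)
# targets = in_topk.float()                # 1.0 if the token was selected
# loss    = F.binary_cross_entropy_with_logits(logits, targets)
```

At sampling time each logit is thresholded independently, so the decision for token i never looks ahead: the training targets came from a non-causal computation, but the predictor itself only ever sees the current token, which is what makes autoregressive sampling work.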
@DiogoNeves · 8 months ago
One more question: can these routers be added to existing models and trained separately? From the description it sounds like that's possible.
@gabrielmongaras · 8 months ago
I don't think they talked about doing that in the paper. My intuition says it may be hard and probably wouldn't work as well as we might hope. The activations for attention are whatever the model needs for the attention mechanism; in this paper, however, the activations are also used for ranking. My first thought is that these two activation distributions are quite different, so the model would start from a poor state. I wonder if Google tried something like this but found it didn't work well and decided not to put it in the paper? Would be totally worth trying if you have the compute, though! Maybe you could start by initializing the routing to keep all tokens and slowly decrease the capacity during fine-tuning.
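(If anyone wants to try that last idea, one possible shape for it is below. This is purely a sketch of the suggestion above; the function and its parameters are hypothetical, not from the paper.)

```python
def capacity_at_step(step, total_steps, seq_len, final_capacity):
    """Linearly anneal the router capacity from 'keep every token'
    down to the target top-k over the course of fine-tuning."""
    frac = min(step / total_steps, 1.0)
    capacity = seq_len - frac * (seq_len - final_capacity)
    return max(int(capacity), final_capacity)

# e.g. with seq_len=1024 and final_capacity=128:
# step 0           -> 1024 (behaves like the original dense model)
# step total_steps -> 128  (full Mixture-of-Depths routing)
```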
@ckpioo · 8 months ago
Awesome! By the way, maybe try using excalibur.
@Stan-san · 8 months ago
Why use lot words when few words do trick?
@gabrielmongaras · 8 months ago
Yeah, definitely a problem I have 😅 I've been trying to get better at it, and realized after uploading that I could've explained the extra loss part in far fewer words. In general, it's hard to know whether an explanation is satisfying when trying to balance conciseness against completeness.
@rykim4626 · 4 months ago
@gabrielmongaras They might be referring to mixture of depths using only a few of the words. Personally, I thought your explanations were great.
@theatheistpaladin · 8 months ago
What field of math do you need to understand this?
@gabrielmongaras · 8 months ago
Just an understanding of machine learning models at a high level and of how transformers work. In MoE, the experts themselves are just linear or feed-forward layers; the single "expert" in this paper is an entire transformer layer.
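(To make that concrete, here is a rough sketch of how a Mixture-of-Depths block could apply one transformer layer to only the routed tokens, assuming PyTorch; all names are hypothetical, and `block` stands for the layer's function without its own residual connection.)

```python
import torch

def mod_block(x, block, w_theta, capacity):
    """One Mixture-of-Depths layer: only the top-`capacity` tokens pass
    through `block` (the single transformer-layer 'expert'); everything
    else takes the residual path unchanged.

    x: (batch, seq_len, d_model); w_theta: (d_model,) routing vector.
    """
    r = x @ w_theta                                   # router weights, eq. (1)
    idx = torch.topk(r, k=capacity, dim=-1).indices   # (batch, capacity)
    gather_idx = idx.unsqueeze(-1).expand(-1, -1, x.size(-1))
    selected = torch.gather(x, 1, gather_idx)         # routed tokens
    r_sel = torch.gather(r, 1, idx).unsqueeze(-1)     # (batch, capacity, 1)
    out = x.clone()                                   # unrouted tokens skip the block
    # Routed tokens get x + r * f(x); scaling by r is what lets gradients
    # flow back to the router through the hard top-k selection.
    out.scatter_(1, gather_idx, selected + r_sel * block(selected))
    return out
```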
@MrNathanShow · 8 months ago
I'd add that a basic understanding of statistics, along with some introductory calculus, can help. But for the most part, there's more trial and error behind these discoveries than you might believe. The understanding sometimes comes after ;)
@jaredtweed7826 · 8 months ago
@gabrielmongaras What do you think helped you best understand neural networks? I have a shallow understanding of how transformers work: I know how the encoder works, but I don't fully understand the decoder. I also know PyTorch only well enough to build simple convolutional neural networks, and I have a really strong understanding of calculus and linear algebra.
@tgugdevil · 8 months ago
Calculus and Linear Algebra.
@jaredtweed7826 · 8 months ago
@tgugdevil Thank you! Sorry, I forgot to mention that I already have a strong understanding of those concepts.