MAMBA from Scratch: Neural Nets Better and Faster than Transformers

  212,166 views

Algorithmic Simplicity

1 day ago

Comments: 291
@jamescamacho3403 7 months ago
As someone actively working on this stuff, this channel has the best explanations on the internet, and the 'tuber actually understands what is going on.
@Quarky_ 7 months ago
3blue1brown of deep learning?
@Sumpydumpert 6 months ago
I'd love feedback on Reddit if you're working on this as well, and also on the Cosmo Knowledge YouTube channel where I threw up some concepts.
@alwaysonsumbullshi 1 month ago
@@Quarky_ I believe that's 3blue1brown again
@andreasbeschorner1215 3 months ago
During my Ph.D. years, a paper of mine got rejected at ICASSP for not having cited a certain paper (I guess the reviewer was one of its authors) which had absolutely NOTHING to do with what my paper was about... So yes, a lot in the reviewing process seems to be a) personal and b) "must do this and that" even when it is not related to your paper at all. And it has been like this for years...
@ptrdmr 7 months ago
Brutal. I'm going to have to watch this about 30 times. Love it.
@Levy1111 6 months ago
I do hope you'll soon reach at least a six-figure subscriber count. The quality of your videos (both in terms of education and presentation) is top notch; people need you to become popular (at least within our small tech bubble).
@jawadmansoor6064 8 months ago
Wow, you've made some difficult, I mean extremely difficult, algorithms look easy. Thank you.
@thaRealShady1 6 months ago
It's not as difficult as one might think. I'm currently doing my PhD, and I quickly realized that most of the difficulty comes from people trying to look smart instead of trying to properly explain things. It is very hard to come up with a good solution to a problem, while it is significantly easier to explain the solution once it is understood. Hence, if you are of average or slightly above average intelligence, you should be able to learn almost anything if you have someone who is willing to actually provide a good explanation.
@LinkSF1 17 days ago
I agree with @tharealshady1. Much of this stuff isn’t that hard to understand and ppl make it more complicated than it needs to be, likely cause they wanna come off as smart.
@shirenlu5260 4 months ago
Wow this is a great video. I've been having a lot of trouble understanding and getting an intuition of how Mamba works, and this video just made it make sense. The visuals were a massive help and the explanations are super simple and easy to understand.
@Paraxite7 6 months ago
I finally understand MAMBA! I've been trying to get my head around it for months, but now I see that approaching it the way the original paper presents it wasn't the best way. Thank you.
@gyahoo 3 months ago
Underrated ML channel ❤
@tomashonzik1758 6 months ago
Thanks!
@algorithmicsimplicity 6 months ago
Thanks for your support!
@davidespinosa1910 3 months ago
A+++ for OpenReview. Transparency is so valuable! Also, many thanks for the excellent video!
@RexPilger 7 months ago
About peer review: As one comment noted, there could be many more candidate papers submitted than the venue can accommodate. However, as this video argues, the rejection justification for this paper is inadequate at best. Some comments ask whether the rejection matters; for academics, the answer is yes, because presentations and publications count toward tenure, promotions, and raises, plus continued funding of the research. Since several comments plus the video indicate that the algorithm had already received a lot of publicity, for the sake of the project it may not matter, especially if commercial implementations are successful.

What is interesting in any case is that the paper exists; in effect it has been published. The authors may not get the desired credit for formal publication, but their work and the reviewer comments are out there now. A couple of decades ago that would not have been the case; most people in the field would be unaware of the algorithm.

As for peer review in general (outside of AI): in my field, one of the natural sciences, a paper I submitted for publication encountered an editor plus two reviewers who were well qualified in the field; after asking for two revisions to the manuscript, they rejected the third version. Interestingly, all three scientists had published research which my paper undermined; they might well have lost funding for their research or even their positions had that manuscript of mine been published (I speculate here). Peer review cuts both ways.

While iterating with the editor and reviewers I continued to expand my research project and made some additional discoveries. Following the rejection I wrote a completely different paper which incorporated my initial work supplemented by the new discoveries; happily it was published a few months ago (in a different journal). I'm formally retired now, but continue to do research.

To young researchers -- never give up. Learn from rejection, refine your work, be humble, exercise integrity and honesty, and take pride in your accomplishments, even if only a few know about them. Peer review (by humans) is a necessity and will continue to be. There is no such thing as a perfect filter, but science and technology would be overwhelmed by irrelevancy, dishonesty, and duplication of effort without it. AI may become a useful filtering tool, but science is a human endeavor.
@goliathstark9142 7 months ago
nice one rex
@TheParkitny 4 months ago
Very good explanation, and kudos for exposing the broken peer review system. Subscribed
@anrilombard1121 8 months ago
Currently testing it on molecular generation, so excited to see where these strengths hold and where they falter :)
@rikkathemejo 7 months ago
Nice video! I just wanted to point out that the parallel scan algorithm can also be implemented in O(n) time (instead of the O(n log(n)) version presented in the video), and this is the version that Mamba uses.
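The scan being discussed can be sketched in a few lines. Below is a hypothetical Hillis-Steele-style version for a scalar linear recurrence h_t = a_t*h_{t-1} + b_t (the function and variable names are mine); it does O(n log n) work, but each inner loop is embarrassingly parallel, which is the point. The O(n)-work Blelloch variant mentioned in this comment follows the same combine rule.

```python
def combine(e1, e2):
    # Each element (a, b) represents the affine map h -> a*h + b.
    # Composing e2 after e1 gives h -> a2*(a1*h + b1) + b2, which is
    # again an affine map -- this associativity is what enables the scan.
    a1, b1 = e1
    a2, b2 = e2
    return (a2 * a1, a2 * b1 + b2)

def linear_recurrence_scan(a, b):
    # Inclusive scan over (a_t, b_t) pairs via repeated doubling.
    # On parallel hardware every iteration of the inner loop runs
    # concurrently, so the depth is O(log n).
    n = len(a)
    elems = list(zip(a, b))
    shift = 1
    while shift < n:
        new = list(elems)
        for i in range(shift, n):  # independent iterations -> parallel
            new[i] = combine(elems[i - shift], elems[i])
        elems = new
        shift *= 2
    # With h_0 = 0, the b component of the prefix composition is h_t.
    return [b_ for (_, b_) in elems]
```

Running it on a short sequence and comparing against the sequential loop is a quick sanity check that the combine rule really reproduces the recurrence.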
@tulgatbolderdene7493 8 months ago
This just shows how RNNs are way too natural an architecture to ignore. Maybe the solution to the gradient descent problem is to not use gradient descent at all. There has to be a different way to update parameters than this bizarre hack-and-slash "let ||x_0|| = 1" for RNNs.
@BooleanDisorder 8 months ago
Meta-learning could potentially be one way. Like a neural "module" in the model that looks how changes in the first layers affect the representation space deeper and vice versa. It would have to have some goal and reward itself
@tempname8263 8 months ago
But gradient descent is too natural of an algorithm to ignore >.
@ckpioo 8 months ago
@@tempname8263 It's actually not natural at all; gradient descent itself is the one big difference between a human brain and any neural network.
@egor.okhterov 8 months ago
@@tempname8263 no
@ultrasound1459 8 months ago
@BooleanDisorder you have 10 missed calls from Juergen Schmidhuber 🧏‍♂️
@EkShunya 8 months ago
Please open your community tab, your content is incredible.
@danverzhao9912 6 months ago
Actually the best explanation channel on YouTube, rivaling 3B1B!
@AndrewAnderson-h4d 2 months ago
This was really concise and easy to understand.
@jarib3858 8 months ago
One small note on RNNs: reservoir computing is a very high-dimensional random RNN with a linear regression readout, so there is no exploding or vanishing gradient. Reservoir computing is currently the standard for non-linear dynamic time series prediction.
@zzador 5 months ago
Yes, but does it support backpropagation? Remember, you have to propagate an error from the output layer through every RNN step back to the inputs. Reservoirs/echo state machines don't support this: only the delta layer (linear regression layer) gets trained while the reservoir stays fixed. So you could get the error up to the first delta layer but no further.
@CHRISTICAUTION 5 months ago
Hi, can you recommend a paper about that?
@terrortinus 4 months ago
@@zzador The wonder of it is that you don't need it to go further...
@cambrawal 4 months ago
@@terrortinus paper please
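The point made in this thread (random fixed reservoir, trained readout, no backprop through the recurrence) fits in a short sketch. This is a minimal echo state network on a made-up toy task (predicting the next value of a sine wave); all sizes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: one-step-ahead prediction of a sine wave.
T, d = 500, 100
u = np.sin(np.linspace(0, 20 * np.pi, T + 1))

# Fixed random reservoir, rescaled so its spectral radius is < 1
# (stable dynamics); these weights are never trained.
W = rng.normal(size=(d, d))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=d)

# Run the reservoir and collect its states.
states = np.zeros((T, d))
h = np.zeros(d)
for t in range(T):
    h = np.tanh(W @ h + W_in * u[t])
    states[t] = h

# Only the linear readout is trained (ridge regression), so there is no
# backpropagation through time and hence no vanishing/exploding gradient.
ridge = 1e-6
target = u[1:T + 1]
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(d), states.T @ target)
pred = states @ W_out
```

After a short washout period the readout tracks the sine closely, which is exactly the trade the thread describes: you give up training the recurrence in exchange for sidestepping its gradient pathologies.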
@kalkhasse 7 months ago
I love how you nail the level of detail in the explanations. Perfect for me at least.
@timseguine2 7 months ago
Thanks for the clear explanation. This gives me enough understanding to not only implement it myself, but to also have some ideas for sensible architecture modifications.
@tianlechen 4 months ago
Peer reviews are often driven by reviewers protecting their existing work, which extends previously state-of-the-art methodologies. If you have a genuinely new innovation that goes against the grain, you need to publish regardless of whether the venue is highly regarded or not.
@wargreymon2024 7 months ago
The level of detail and intuition you dig into is excellent 💯🔥
@InfiniteQuest86 7 months ago
I like how we now call 1 billion parameters small.
@Nasser-bp6qf 5 months ago
Will we ever scale up and reach a point where 1 trillion is small?
@lylong-i2z 3 months ago
i hope so
@honglu679 8 months ago
Wow, excellent explanation. It covers all the essence of the paper with just enough math/algorithms. Thank you so much! If you don't mind, please make a video on RWKV (v6 has some new modifications), which is another strong linear RNN model. I am curious how it compares to Mamba.
@pi5549 8 months ago
Another beautiful exposition. Further points: (1) HiPPO itself comes from attempting to approximate a spiking net with a SSM (Voelker 2017/8), (2) we do have O(NlogN) transformer hacks now, (3) RWKV is a promising arch that deserves a place in this arena.
@algorithmicsimplicity 8 months ago
I haven't heard of any O(NlogN) transformer hacks that preserve performance, got any links? And yeah RWKV is promising, I would've loved to talk about it as well but the video was getting long lol.
@marloelefant7500 3 months ago
I honestly found the "boring technical details" the most interesting part of the video.
@ithaca2076 8 months ago
Absolutely love the quality and information of this video!!! Please keep up the good work, this is amazing.
@kamdynshaeffer9491 8 months ago
Absolutely amazing vid. Just subbed after getting recommended this channel. Never stop making videos dude.
@mehnot8193 7 months ago
Extremely noob question, but at 13:52 why aren't the input vectors x multiplied by P^-1 instead of P? Don't you need to convert them to the eigenbasis before applying the D transformation (or, equivalently, taking the Hadamard product with the diag(D) vector)?
@algorithmicsimplicity 7 months ago
Yes, I should have applied P^-1 first to be consistent with my earlier notation W=PDP^-1. Of course, the naming is just a matter of preference, you can equivalently call the first matrix which is applied P or P^-1, so long as the two matrices are inverse of each other it doesn't matter which is called which.
@mehnot8193 7 months ago
@@algorithmicsimplicity Oh ok, that makes sense now! Thanks a lot for your answer and this amazing video ^^
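The W = P D P^-1 bookkeeping in this thread is easy to check numerically. A small sketch (variable names are mine, not from the video) showing that the n-th power of W reduces to elementwise powers of the eigenvalues, with P^-1 mapping into the eigenbasis first:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))

# Diagonalize: W = P diag(D) P^{-1}. Even for a real W the eigenvalues
# (and P) are generally complex, which is why the recurrent weights in a
# linear RNN end up complex-valued.
D, P = np.linalg.eig(W)
P_inv = np.linalg.inv(P)

# Power trick: W^n = P diag(D**n) P^{-1}, because the inner P^{-1} P
# pairs cancel. (P * D**n) scales the columns of P, i.e. P @ diag(D**n).
n = 5
Wn = (P * D**n) @ P_inv
```

Comparing `Wn` against `np.linalg.matrix_power(W, n)` confirms the decomposition, up to tiny imaginary round-off.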
@harshvardhanv3873 7 months ago
We need more videos from you, especially ones covering the basics.
@algorithmicsimplicity 7 months ago
Any topics in particular you'd like to see?
@harshvardhanv3873 7 months ago
@@algorithmicsimplicity We need a video series on math for ML: linear algebra, calculus, probability, and statistics, each from an ML perspective. After that we would like to learn more about basic concepts like regression, classification, clustering, etc. We would also like to learn more about the types of learning: unsupervised, semi-supervised, and self-supervised. And some basic architectures: RNN variants (LSTM, GRU, hybrids), basic ANNs, MLPs, and even the recent KAN and NTK.
@algorithmicsimplicity 7 months ago
@@harshvardhanv3873 Got it. I am definitely planning to do videos on calculus and probability for ML soon. After that I can do videos on the types of ML.
@harshvardhanv3873 7 months ago
@@algorithmicsimplicity Sure, waiting for your videos ✌
@sichengmao4038 7 months ago
Well, maybe 3b1b's videos already fulfill what you need for ML prerequisites.
@nothreeshoes1200 6 months ago
Please make more videos. They’re fantastic!
@ThéoUscidda 4 months ago
At 31:02, I agree that Mamba has linear O(n) memory requirements. However, why don't transformers have quadratic O(n^2) memory requirements? They need to store the attention matrices that are n x n. I'm surely missing something.
@algorithmicsimplicity 4 months ago
You don't need to materialize the full nxn matrix in memory at the same time. You can instead materialize only a chunk of it, sum over that chunk, and then materialize the next chunk in the same memory slot. This is how, for example, FlashAttention and FlashAttention2 work. When you do this the memory requirement is O(n).
@ThéoUscidda 3 months ago
@@algorithmicsimplicity very clear, thanks a lot!
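The chunk-at-a-time summation described in this reply can be sketched with a streaming ("online") softmax. This is a simplified, single-threaded illustration of the FlashAttention idea, not the actual kernel; the function name and chunk size are mine:

```python
import numpy as np

def chunked_attention(Q, K, V, chunk=64):
    # Processes K/V in chunks so that only an (n x chunk) block of scores
    # is ever materialized: O(n) extra memory instead of the full n x n
    # attention matrix (compute is still O(n^2)).
    n, d = Q.shape
    acc = np.zeros_like(V, dtype=float)   # unnormalized weighted sum of values
    m = np.full(n, -np.inf)               # running row-wise max (for stability)
    s = np.zeros(n)                       # running softmax denominator
    for start in range(0, K.shape[0], chunk):
        Kc, Vc = K[start:start + chunk], V[start:start + chunk]
        scores = Q @ Kc.T / np.sqrt(d)            # (n, chunk) block only
        m_new = np.maximum(m, scores.max(axis=1))
        scale = np.exp(m - m_new)                 # rescale earlier partial sums
        p = np.exp(scores - m_new[:, None])
        acc = acc * scale[:, None] + p @ Vc
        s = s * scale + p.sum(axis=1)
        m = m_new
    return acc / s[:, None]
```

The running max and denominator are what let each chunk's contribution be folded into the accumulator and then discarded, which is the "sum over that chunk, then materialize the next chunk in the same memory slot" step from the reply.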
@jhonyiigp 3 months ago
Incredible explanation
@tellu5493 7 months ago
This was very good, and I hope you make more videos like this!
@francescorossi7582 6 months ago
Thanks for the video. Why do you use matrix diagonalization instead of SVD at 13:00? SVD can decompose any matrix and you do not need to introduce complex numbers. The power trick also works with SVD wrt the singular values.
@algorithmicsimplicity 6 months ago
With SVD you get W=USV for a diagonal matrix S, but the U and V are not necessarily inverse of each other, so when you take W^2=USVUSV you can't cancel out the inner VU.
@francescorossi7582 6 months ago
@@algorithmicsimplicity You are right, in my mind I was assuming W to be symmetric.
@IllIl 7 months ago
Thank you! Your channel is an invaluable resource on here. Hope you keep making these videos!
@novantha1 6 months ago
Fascinating video. I've always found state space model papers a little bit dense and self-referential to understand coming from other areas of ML but this video is a really great reparameterization of the issue. I'm not sure if it would be in line with previous videos (covering generally useful industry standard models with wide applications), but is there any possibility of getting a video on liquid neural networks or spiking neural networks?
@algorithmicsimplicity 6 months ago
Thanks for the feedback. I probably won't get around to making videos on spiking and liquid neural networks for a while, I have lots of other stuff I'm planning to cover, but they are definitely on my todo list!
@TTminh-wh8me 5 months ago
Just watched the lecture by Mohit, then watched your video. I feel like this made me understand the architecture better than reading those papers for months 😂
@markdatton1348 8 months ago
Awesome video. I love the speed and the depth of this, it's perfect
@boogati9221 6 months ago
Crazy how two separate ideas ended up converging into one nearly identical solution.
@andrewy2957 6 months ago
Totally agree. I feel like that's pretty common in math, robotics, and computer science, but it just shows how every field in STEM is interconnected.
@kacemabdelaziz4940 4 months ago
tmw you realize humanity is just being trained with gradient descent and we always converge to these local minima
@NostraDavid2 2 months ago
Kind of like how biology always optimizes toward a crab (or crab-like) entity.
@BooleanDisorder 8 months ago
You have such a pleasant voice 😊 Thanks for helping me understand better. Please keep making videos. ❤
@luke.perkin.online 7 months ago
in para-lllelll :-D
@nialv7985 7 months ago
Thanks for this explanation! Phrasing Mamba in terms of a linear RNN makes it much easier to understand. You've done a lot already with this video, but I just want to ask for a little bit more. Since the original Mamba paper presented the model in terms of SSMs, many, many implementations of Mamba also use that language, and I have difficulty wrapping my head around mapping their code back to the concepts in this video. I wish you could explain how concepts in the Mamba paper (Δ, A, B, C, D, discretization, etc.) map back to the parameters of a linear RNN; that would help a lot.
@algorithmicsimplicity 7 months ago
Sure, for the state space terminology A in ℂ^d is the learnable parameter that is used to make the recurrent weight vector, the equivalent in my video is a+bi, with a, b in R^d as learnable parameters, i is the imaginary unit. B, C in ℂ^{d x d } are the complex matrices applied before and after the recurrence respectively, equivalent to P and Q matrices in my video, also learnable parameters. SSM performs discretization of the parameters, which creates A^bar = e^{ΔA} and B^bar = (ΔA^-1)(exp(ΔA)-I)ΔB. Note A^bar and B^bar are what are actually used in the computation. This discretization is equivalent to the stable reparameterization outlined in my video. In the SSM formulation, they phrase the discretization as modifying B into B^bar, but note that B is the matrix which is applied to the input, so multiplying B with Δ is equivalent to multiplying the input x with Δ and leaving B unchanged, which is how it is described in my video. One last thing to be aware of is that in the state space literature, the models are often described as having another "state dimension" N in addition to the model dimension d. This state dimension is equivalent to the factor by which the output vector's dimension is expanded, so for example Mamba uses N=16, i.e. expands outputs by a factor of 16. Let me know if you still have any questions!
@nialv7985 7 months ago
@@algorithmicsimplicity Thank you so much!
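As a rough numerical companion to the mapping in this thread: in the diagonal case everything is elementwise, so the zero-order-hold discretization is a couple of lines. Shapes are made up for illustration, and the B factor is omitted for brevity (it multiplies B_bar in the full form):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Diagonal (elementwise) SSM parameters.
A = -np.exp(rng.normal(size=d)) + 1j * rng.normal(size=d)  # Re(A) < 0
delta = np.exp(rng.normal(size=d))                         # learned step sizes

# Zero-order-hold discretization, elementwise in the diagonal case:
#   A_bar = exp(delta * A)
#   B_bar = (1/A) * (exp(delta * A) - 1)   (then applied to delta-scaled input)
A_bar = np.exp(delta * A)
B_bar = (A_bar - 1.0) / A

# |A_bar| = exp(delta * Re(A)) < 1 whenever Re(A) < 0, so the discretized
# recurrent weights land in the stable regime automatically -- this is the
# stable reparameterization the reply refers to.
```

Note how the stability constraint falls out of the parameterization rather than needing to be enforced separately.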
@PliniusSecundus 7 months ago
Great job! Your channel is a treasure.
@diabolo19x 7 months ago
Incredible work. I mean REALLY incredible
@anthonybernstein1626 8 months ago
Amazing explanation, thank you!
@marschrr 7 months ago
Subscribed! That's some 3Blue1Brown-level stuff! Amazing!
@danverzhao9912 5 months ago
Just wondering if you could make a video on how GNNs work? There aren't really many videos about GNNs on YouTube.
@algorithmicsimplicity 5 months ago
Thanks for the suggestion, I will put it on the list!
@ThéoUscidda 4 months ago
At 27:30, why do we get O(n*log(n)) time complexity? Shouldn't it be linear O(n)? I'm surely missing something.
@algorithmicsimplicity 4 months ago
It depends on the algorithm used for the parallel scan, in this video I described an O(nlog(n)) algorithm, in practice there are actually O(n) parallel scan algorithms and Mamba uses one of them.
@ThéoUscidda 3 months ago
@@algorithmicsimplicity I see, thanks a lot!
@MrStevemur 7 months ago
I appreciate the soothing piano music. Currently the words are only slightly better than Charlie Brown listening to adults talk, but I hope to dive in.
@Adityak1997 3 months ago
Could you mention the sources for the table and graph you gave at 23:46?
@algorithmicsimplicity 3 months ago
The graph is Figure 4b from the Mamba paper ( openreview.net/pdf?id=AL1fq05o7H ). The table I made by combining the numbers from the linear RNN paper ( openreview.net/pdf?id=M3Yd3QyRG4 ) with the transformer numbers provided in the S4 paper ( arxiv.org/pdf/2111.00396 ).
@nikilragav 7 months ago
I really wish that when you're talking about things happening in parallel, your animations happened in parallel. Like 8:30. I think it would really improve the comprehensibility of your explanation
@2255. 8 months ago
underrated channel
@luke.perkin.online 7 months ago
Great video. That trick around the 26-minute mark of doing 16x the compute almost for free (in terms of time) because of memory bottlenecks is really neat. I wonder how many other architectures would benefit from that kind of design optimisation?
@algorithmicsimplicity 7 months ago
It appears that it is only useful for linear recurrent layers, because the main computation is just performing elementwise multiplication between the previous output vector and the recurrent weight vector, which means you have O(d) parameters and you do O(d) compute, and transferring one parameter takes longer than doing one operation. For other kinds of layers, such as fully connected layers, you are doing at least a matrix-vector multiplication, which means you are doing O(d^2) compute, and that usually takes much longer than transferring O(d) parameters.
@Nerdimo 7 months ago
Would you mind explaining the associativity at 10:37? My assumption is that f is the linear recurrence function, but how is it equal to a pair of the matmul between W2 and W1 and the second term? Wouldn't f output a vector, so how could it be equal to the right-hand-side pair of vectors?
@saiipranay995 4 months ago
Very well explained.
@blutwurst9000 7 months ago
Love the video, but I have a question: shouldn't the approximation at 17:00 be something like n*w^(n-1)*0.001*x? Isn't there an n missing? Or how was the approximation done?
@algorithmicsimplicity 7 months ago
Ahh yes you're right, there should be an n out the front, the gradient is proportional to nw^(n-1)x. The vanishing/exploding gradient arguments are still the same though, the linear scaling factor doesn't matter compared to the exponential scaling for large n.
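The corrected gradient from this exchange is tiny to verify numerically; a sketch (names are mine) showing why the linear factor n doesn't rescue the exponential scaling:

```python
def grad_wrt_w(w, x, n):
    # Gradient of the final state h_n = w**n * x with respect to the
    # recurrent weight w: d/dw (w^n * x) = n * w^(n-1) * x.
    return n * w ** (n - 1) * x

# For a sequence of length 100, the linear factor n is dwarfed by the
# exponential factor w^(n-1): |w| < 1 vanishes, |w| > 1 explodes.
vanishing = grad_wrt_w(0.9, 1.0, 100)   # roughly 3e-3
exploding = grad_wrt_w(1.1, 1.0, 100)   # roughly 1.3e6
```

This is the core of the vanishing/exploding gradient argument: only weights with magnitude very close to 1 carry usable gradient signal over long distances.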
@drdca8263 8 months ago
Here's an idea that probably wouldn't work: what if, instead of algebraically guaranteeing that some operation is a monoid so that one can use the parallelizing trick that combines n inputs in O(log(n)) steps on n processors, you just had some operation, learned by a NN, which has "how much it deviates from being a monoid operation" as part of the loss? Like, suppose you randomly selected some pair of consecutive applications of the operation, also computed it in the opposite order, took the L^2 norm of the difference between the results, multiplied that by some weighting, and made that a term in the loss? Within the family of continuous and piecewise-smooth monoidal operations, perhaps some would be better at selective remembering?
@algorithmicsimplicity 8 months ago
That sounds really interesting, you should try it out!
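For what it's worth, the penalty proposed above could be prototyped along these lines. Everything here is made up for illustration (the operation, shapes, and names); it only measures the deviation from associativity, it doesn't train anything:

```python
import numpy as np

rng = np.random.default_rng(0)

def op(u, v, W):
    # A hypothetical learned binary operation on pairs of vectors.
    return np.tanh(W @ np.concatenate([u, v]))

def associativity_penalty(W, xs):
    # Sample consecutive triples and penalize the squared difference
    # between the two evaluation orders, (a∘b)∘c vs a∘(b∘c) -- zero
    # iff the operation behaves associatively on this data.
    total = 0.0
    for a, b, c in zip(xs, xs[1:], xs[2:]):
        left = op(op(a, b, W), c, W)
        right = op(a, op(b, c, W), W)
        total += np.sum((left - right) ** 2)
    return total / (len(xs) - 2)

d = 4
W = rng.normal(size=(d, 2 * d)) * 0.1
xs = [rng.normal(size=d) for _ in range(10)]
penalty = associativity_penalty(W, xs)
```

In a real experiment this term would be added (weighted) to the task loss; the open question from the comment is whether a softly-associative operation still parallelizes acceptably with the scan trick.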
@drdca8263 8 months ago
@@algorithmicsimplicity Thanks! Unfortunately I am lazy... And there's already another "what if I did [X]?" machine learning project I barely started ("what if I tried to add a simple approximation of what copying heads do to an n-gram model?"), which seems like it should be much easier, but I've barely written the n-gram model part of it (and ChatGPT honestly wrote most of that). Haven't even started on the "compute statistics about whether copying a word from earlier in the current text, or going off the corpus as a whole, is more accurate in this context" part...
@CyrusEstavillo 7 months ago
@@drdca8263 That's a lame response. Try it. Make something in this world.
@TheDoomerBlox 7 months ago
It's only yet another silly experiment to do the seemingly impossible in the hottest meme area; picking your nose seems like a more productive waste of time. But imagine if you found something really cool and nobody would listen. That would be funny, that would be cool.
@gnaarW 6 months ago
@@TheDoomerBlox If you were able to build a RecNN that outperforms current state-of-the-art models and put it on Hugging Face, people would care about that 🤷🏼‍♂️
@yqisq6966 7 months ago
Peer review is broken nowadays because people have little time to actually read through a manuscript with attention to detail, given the pressure to publish their own papers. When you have more papers out there than the time people can spend reviewing them, you get low-quality peer review.
@Gunth0r 18 days ago
It doesn't help that there are bots writing articles.
@blacklistnr1 8 months ago
Nice video! What I didn't understand is what happens to the stable weights during training. In particular:
- How are they kept stable?
- How can the model learn while being so restricted?
What I'm guessing is that some form of the delta is also used in training to keep the weights in those ranges, plus relying a lot more on numerical precision to carry the information. Is this correct? Does it imply that using double instead of float gives the model a better ability to learn?
@algorithmicsimplicity 8 months ago
Great question. The answer is it's really complicated and no-one knows for sure. There is nothing explicitly keeping the weights stable during training. They can (and probably do) become unstable. The thing is, there are actually thousands of different weights in the vector. At initialization, all of the weights are essentially one, so information from anywhere in the input can influence the gradient, but the model is incredibly restricted (cannot perform meaningful transformations in the recurrence). Then SOME of those weights change and enter the un-stable regime, so they can no longer carry information long distance but can do more interesting computations, while others remain stable. And in the fully-connected layers between recurrences, all weights can communicate information with each-other. So you have this complicated system where weights are changing at different rates, some remain stable, some become unstable, and that allows for interesting computation to be done and information to be propagated long distances.
@blacklistnr1 8 months ago
@@algorithmicsimplicity Thanks for the reply! That's quite interesting, different propagation lengths didn't even cross my mind. It'd be really funny if after all this work the model learned unstable weights and became forgetful :))
@f14-werto 8 months ago
I believe that the transformer does have a quadratic cost in memory (specifically self-attention (SA)). The attention matrix in SA is n by n, thus n^2 (n being the number of tokens). Probably the reviewer is referring to that bit. Anyway, rejecting Mamba was hecking stupid. Great video!
@algorithmicsimplicity 8 months ago
The matrix is indeed n^2, but you never need to materialize the full matrix at the same time. You can materialize one column at a time, which is exactly what FlashAttention does, resulting in O(n) memory (still O(n^2) compute though).
@f14-werto 8 months ago
I have no idea how FlashAttention manages to be faster and more memory friendly. Are you sure that the attention matrix is never fully in memory (regardless of the type of memory)? In any case, the classical implementation didn't use FlashAttention, so I believe that is what the reviewer is referring to.
@f14-werto 8 months ago
I have rechecked the paper and it appears that FlashAttention is linear wrt memory. The work of Tri Dao is magic to me.
@Alulapower 8 months ago
Good video to explain Mamba: I understood something!
@harrysvensson2610 8 months ago
You see, it's O(n log(n)) instead of O(n^2) without any penalties. Okay? 100% crystal clear, right? //end of joke
@BooleanDisorder 8 months ago
@@harrysvensson2610 That means that, basically, transformers scale x² in compute needed for prompting. Also called square or quadratic, since x² is a square if you were to make a geometric figure. So if you write a prompt of 5 words, that's 25 compute since 5*5=25. You can see how this gets really crazy at high token counts. Mamba scales differently, so you need much less compute per prompt.
@londonl.5892 7 months ago
Why did the "W_2 W_1" on the right at 10:48 change into a "W_1 W_2" by 12:34?
@algorithmicsimplicity 7 months ago
Ahh that's a mistake, it should be W_2 W_1 throughout.
@hunter13971 5 months ago
Great explanation, do one for Mamba 2 as well if possible.
@phmfthacim 7 months ago
This is amazing!
@gameboyplayer217 6 months ago
Nicely explained
@julioalmeida4645 6 months ago
Damn. Amazing piece
@itsyaro1297 6 months ago
Hey man! Really appreciate the technical detail in your videos
@algorithmicsimplicity 6 months ago
Thanks for the suggestion, I will add them to the TODO list.
@ollyfoxcam 7 months ago
Woah big claim! I’m excited
@augmentos 7 months ago
Great video; I would prefer no music, but that's just me.
@alexmomot6268 8 months ago
Thx a lot for the interesting video! 💛💙
@RomanTreutlein 6 months ago
That was very well explained. Could you please also do a video on RWKV?
@Mohammed-rx6ok 7 months ago
Good job 👏
@koka3243 8 months ago
Great video! Thanks!
@hackerborabora7212 8 months ago
This algo is new and you made a video about it. I love you, I will subscribe to your channel, keep going!
@nyyotam4057 8 months ago
So how close is the weight estimator to the MMSE (minimal mean square error) estimator? Can the MAMBA arch be improved even more, using a sparse covariance matrix and an application of a 'true' Kalman filter? Or is it already as close as it can get?
@nias2631 6 months ago
I have no particular opinion on transformers or MAMBA since, for my work, I never use these. But as for peer review, I think that OpenReview itself is a great "filter for the filter". The research community can actively review the reasoning for accept/reject, as you did in this video. For most journals not using OpenReview the process is fairly opaque.
@algorithmicsimplicity 6 months ago
Absolutely agree, the transparent review process is definitely a net benefit for the community as a whole.
@tannergilliland6105 7 months ago
If you ever get the time I would love to see another video on the Mamba implementation, but dumbed down even more, like to the level of StatQuest videos. They make you feel special while also showing the math step by step like it's 9th grade.
@algorithmicsimplicity 7 months ago
Thanks for the suggestion, there will probably be improved versions of Mamba coming out soon, I will make a more basic explanation video for them when they do.
@Kavukamari 6 months ago
"i can do eleventy kajillion computations every second" "okay, what's your memory throughput"
@jhonny1682 7 months ago
Can you make an explanation video like this one on Liquid Time-Constant Networks? 🙏
@nixonmanuel6459 4 months ago
Thank you!
@MarcosPedroLeal 8 months ago
Loved your videos. Which software or library do you use to make these animations? Is it manim?
@algorithmicsimplicity 8 months ago
It is a combination of Manim (for rendering latex) and my own renderer written in Pytorch (for 3d stuff).
@maximilianchrzon4545 7 months ago
Your videos are so good man, keep it up, seriously. Although this is probably beneath you, could you maybe make a video on how neural networks are computed on machines in general, or maybe on GPUs? As someone who did not study computer science at uni, this would be an interesting topic for me to learn and maybe fundamentally understand NNs better.
@algorithmicsimplicity 7 months ago
That's an interesting topic, I was planning on making videos about how CPUs and GPUs work at the physical level (e.g. logic gates are built out of transistors, addition and multiplication are built out of logic gates). Neural nets are just implemented as a bunch of matrix multiplications (you put all the neuron weights in one matrix and multiply it with the input). Is that what you are asking about?
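A minimal sketch of that last point — one dense layer really is just a matrix-vector multiply plus a nonlinearity (pure Python, with illustrative weights, not code from the video):

```python
def dense_layer(weights, bias, x):
    """One fully-connected layer: y = relu(W x + b).
    weights is a list of rows, one row per output neuron."""
    pre = [sum(w * xi for w, xi in zip(row, x)) + b
           for row, b in zip(weights, bias)]
    return [max(0.0, v) for v in pre]  # ReLU nonlinearity

# a tiny 2-neuron layer applied to a 3-dimensional input
W = [[0.5, -1.0, 2.0],
     [1.0,  0.0, -0.5]]
b = [0.1, -0.2]
x = [1.0, 2.0, 3.0]
y = dense_layer(W, b, x)   # approximately [4.6, 0.0]
```

A real network stacks many such layers and runs them as batched matrix-matrix multiplies, which is exactly the workload GPUs are built for.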
@maximilianchrzon4545 7 months ago
@algorithmicsimplicity Yeah that sounds about right, thank you. Maybe you could use matrix multiplication as a case example of those inner workings :) anyways, thanks for making awesome videos
@ArtOfTheProblem 7 months ago
@maximilianchrzon4545 3b1b has this covered pretty well already
@gpjedy7379 5 months ago
Since this is also used to make long-range connections in the state space, might Mamba also be applied not just to language models but to gradient-optimising reinforcement learning models?
@algorithmicsimplicity 5 months ago
Yes, absolutely. Mamba has been applied to some other areas now, such as protein sequence modelling. I haven't heard of anyone applying it to reinforcement learning, but I imagine it would work very well.
@luiscedillo9321 7 months ago
You are a golden channel
@drjenschn 4 months ago
Quick question: I guess if you want a true linear recurrence from real-valued to real-valued, you could use the Hermitian of P for P^-1? That would also eliminate optimizing for Q...
@algorithmicsimplicity 4 months ago
You could, but there isn't really any need to. The complex version performs the same as strictly real recurrences (actually, in some cases better). And optimizing for Q doesn't really have much cost, even if you used the Hermitian of P in place of Q you would still need to back-prop through it.
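To make the P / P^-1 discussion concrete, here is a hand-built 2x2 example (my own illustrative numbers, not from the video): a real rotation-scaling matrix has a conjugate pair of complex eigenvalues, and applying it through P diag(λ) P^-1 returns a real vector, with the recurrence reduced to elementwise multiplication in the eigenbasis:

```python
# the real matrix A = [[a, -b], [b, a]] has eigenvalues a ± ib
a, b = 0.8, 0.3
lam  = [complex(a, b), complex(a, -b)]
P    = [[1 + 0j, 1 + 0j], [-1j, 1j]]          # columns are the eigenvectors
Pinv = [[0.5 + 0j, 0.5j], [0.5 + 0j, -0.5j]]  # closed-form inverse of P

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

x = [2.0, -1.0]
direct = [a * x[0] - b * x[1], b * x[0] + a * x[1]]  # A @ x the ordinary way

h = matvec(Pinv, [complex(v) for v in x])  # move into the eigenbasis
h = [l * hi for l, hi in zip(lam, h)]      # recurrence is elementwise here
via_diag = matvec(P, h)                    # move back: imaginary parts cancel
```

Incidentally, for this particular A the inverse happens to equal the Hermitian of P up to a scale factor (P/√2 is unitary), so the trick from the question would work here; a general real matrix gives no such guarantee.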
@drjenschn 4 months ago
@algorithmicsimplicity Although I still don't get the backprop argument... If you backpropagate through P, computing the Hermitian has a closed-form solution... It's the complex version of a matrix transpose.
@algorithmicsimplicity 4 months ago
@drjenschn Sure, say we compute the output of a layer as y = P^T D P x. When we are backpropagating we need to compute the gradient of y w.r.t. x, which means computing (P^T D P)^T y`, where y` is the incoming gradient. If you use a completely separate Q instead of P^T, computing this gradient still has the same cost. The only advantage of reusing P is that you don't have to update the Q matrix as well, but updating weights is a relatively small computation compared to calculating (Q D P)^T y`.
@drjenschn 4 months ago
@algorithmicsimplicity Got it now. I was originally talking about "optimizing" for P^-1 (learning the matrix weights). Back-prop is still necessary, correct. Thx!
@downloadableram2666 6 months ago
State-space models are not necessarily from ML, they're used a lot in control systems actually. Not surprised by their relationship considering both are strongly based on linear algebra.
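For reference, the control-systems formulation the comment refers to is h' = A h + B u, y = C h + D u. A scalar sketch with made-up coefficients (a leaky integrator, not any particular model from the video):

```python
# one discrete-time state-space step: h' = A h + B u ;  y = C h + D u
def ssm_step(h, u, A=0.5, B=1.0, C=1.0, D=0.0):
    return A * h + B * u, C * h + D * u   # (next state, output)

# impulse response: A < 1 means the state decays, i.e. old inputs are forgotten
h, outputs = 0.0, []
for u in [1.0, 0.0, 0.0, 0.0]:
    h, y = ssm_step(h, u)
    outputs.append(y)
# outputs == [0.0, 1.0, 0.5, 0.25]
```

The deep-learning versions stack many such recurrences with vector-valued states, but the A/B/C/D structure is the same one control theory has used for decades.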
@SolathPrime 8 months ago
[6:28]: While that sounds somewhat good, in practice it doesn't work like that. Alternating between linear recurrent and non-linear dense layers doesn't give that much of a context advantage :( The gradients vanish or explode after a while and require some sort of sigmoid transformation + some value. Say, for example, an architecture like this:
```plaintext
Dense -> Sigmoid -> Recurrent -> Dense -> Sigmoid -> Recurrent -> Dense -> Softmax
```
By the time the gradients reach the first Recurrent layer, they have lost most of their value :(
@vibertthio 4 months ago
Great video! It's not critical, but at 13:05 the calculation has an error (?). It should be ((1,-1),(2,3)) on the left-hand side.
@algorithmicsimplicity 4 months ago
Yes! Well spotted, I think you're the first person to notice.
@dntbther9298 7 months ago
How about RWKV ?
@karius85 8 months ago
Appreciate the breakdown. I think there are a few more things at play here for the reject that is somewhat overlooked in the discussion at the end. Specifically, there are issues with anonymity and using "hype" to push a paper through an academic conference. I speculate that this was the underlying reason for rejecting the paper.
@algorithmicsimplicity 8 months ago
Cool, if that was the reason for the reject they should have said so in the rationale. Instead they made up a bunch of criticisms which are either 1) irrelevant or 2) blatantly untrue. That's a bad look for the conference, as it makes it seem like their reviewers are unqualified to judge academic works.
@karius85 8 months ago
@algorithmicsimplicity Absolutely agree. In my experience, the quality of conference reviewers is extremely variable. Almost all researchers I know have horror stories about how incompetent and outright adversarial reviewers can be. Many great papers are rejected without sufficient basis, and mediocre papers are included for seemingly no good reason. Many experienced researchers don't want to review anymore. Just a comment on the reject; it might have been a conscious decision not to bring the anonymity issues up in the rebuttal to avoid further disputation. But I am just speculating here with little to no factual basis.
@algorithmicsimplicity 8 months ago
It could very well have been a conscious decision, but I think it was the wrong decision. From an outside perspective, it looks like a fantastic paper was rejected because of clueless reviewers. That's far more damaging to the conference's integrity than whatever conflicts might arise from anonymity violation disputes.
@karius85 8 months ago
@@algorithmicsimplicity Independently of what one may think of the paper, I agree that the justification for the reject was weak. Unfortunately, I don't think it matters much for the integrity of the conference in the long run, as this has happened in all the other big conferences in the past. Authors generally adapt and move on. What makes this unique is the hype around Mamba. Previously, no single member of the general public would have been interested in the review decision of a single paper in AI / ML. Now, the community extends far beyond academics, for better or worse. All in all, I hope it serves to incentivise stronger review processes for the future.
@karius85 8 months ago
On a side note, I really enjoy your content, keep up the good work 👏
@YA-yr8tq 7 months ago
The channel is great and the material is awesome! The only catch: the piano in the background makes it hard to focus.
@karigucio 6 months ago
So the transformation applied to the weights is not purely about initialization? Instead, in the expression w=exp(-exp(a)*exp(ib)), the numbers a and b are the learned parameters, not w, right?
@algorithmicsimplicity 6 months ago
Yes a and b are the learned parameters.
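Reading the expression above as w = exp(-exp(a)) · exp(ib) (magnitude times phase — my factorization of the comment's formula), a quick sketch of why learning a and b instead of w keeps the recurrence stable: the magnitude exp(-exp(a)) is trapped in (0, 1) for any real a, so repeated multiplication can never explode.

```python
import cmath
import math

def recurrent_weight(a, b):
    # learned reals a, b  ->  w = exp(-exp(a)) * exp(i*b)
    # |w| = exp(-exp(a)), which lies strictly between 0 and 1
    return cmath.exp(complex(-math.exp(a), b))

# no matter what values the optimizer pushes a towards...
for a in (-5.0, 0.0, 3.0):
    w = recurrent_weight(a, 1.2)
    assert 0.0 < abs(w) < 1.0   # ...the recurrence stays stable
```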
@agsystems8220 8 months ago
RNNs are constrained by having to hold all their information in a single embedding space, so this space needs to be extremely large. It needs to hold every piece of information in the context that might come in useful at some point. Transformers can distribute information between many tokens, so they can operate with a much smaller embedding space, at least in theory. The memory complexity of an RNN with a given structure is quadratic in the size of the embedding space, meaning we really pay big time for that increased embedding size. I wonder if that is what the reviewer was getting at. The results were impressive, but they haven't been followed up by success at larger model sizes, which I would have expected to have already happened if it was going to. It is a cool mathematical trick to make it work, and demonstrates that language is surprisingly linear, but once you start to hit truly non-linear questions I would expect it to stop improving. Overhyped IMO.
@howuhh8960 8 months ago
If you stack multiple linear RNN layers they can handle non-linear dependencies across time, so "demonstrates that language is surprisingly linear, but once you start to hit truly non linear questions" is not true, as the Mamba model as a whole (multiple layers) is a nonlinear RNN.
@algorithmicsimplicity 8 months ago
The really cool thing about linear RNNs is that increasing the size of the embedding space only has linear cost, not quadratic. The recurrence operator only performs elementwise multiplication with the embedding vector. This is why Mamba is able to increase the size of the embedding vector by a factor of 16 at essentially no cost. If you were willing to incur some additional cost, you could easily make the embedding vectors even larger. When you expand the embedding vector by a factor of a few thousand, now you're talking about as much memory as a transformer with a few thousand tokens of the original size. Work is currently in progress to train larger model sizes; it takes about a year from start to finish to train a full-sized model. Mamba already achieves state-of-the-art performance for ~3b-parameter language modelling, which is HIGHLY HIGHLY non-linear. And finally, while there are some aspects in which transformers are still superior to dynamic linear RNNs, hybrid architectures such as Griffin (arxiv.org/abs/2402.19427) appear to give the best of both worlds, handily outperforming both.
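A sketch of the elementwise recurrence described in this reply (illustrative decay rates, not Mamba's actual kernel) — one step costs O(d) in the state size d, which is why widening the state is cheap:

```python
# one step of a *diagonal* linear recurrence: elementwise multiply-add,
# so doubling the state size only doubles the work (linear, not quadratic)
def diag_step(h, x, w):
    return [wi * hi + xi for wi, hi, xi in zip(w, h, x)]

w = [0.9, 0.5, 0.99]          # per-channel decay rates
h = [0.0, 0.0, 0.0]           # initial state
for x in ([1.0, 1.0, 1.0], [0.0, 0.0, 0.0]):
    h = diag_step(h, x, w)
# h == [0.9, 0.5, 0.99]: each channel remembers at its own rate
```

Contrast this with a full recurrence h' = A h + x, where A is a d x d matrix and one step costs O(d^2).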
@LinkSF1 17 days ago
Honest question: how is the transformer's attention mechanism not quadratic memory? You'd need to store it so that you can run the softmax operation.
@algorithmicsimplicity 16 days ago
You don't need to store the full matrix to compute the softmax, you just need to keep a running total of the sum of elements seen so far as you iterate over them, e.g.:

    sum = 0
    for element in column:
        sum += exp(element)
    for element in column:
        element = exp(element) / sum

This will compute the softmax of a single column, and you can apply it to each column one at a time, never materializing more than n elements. In practice, to run on a GPU you would iterate over blocks of the matrix; check out the FlashAttention paper for a real-world implementation of O(n)-memory self-attention.
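The same two-pass idea as runnable Python (without the max-subtraction trick that real implementations add for numerical stability):

```python
import math

def column_softmax(column):
    # pass 1: accumulate the normalizer; pass 2: normalize.
    # O(n) memory for a length-n column — the full matrix never materializes.
    total = 0.0
    for element in column:
        total += math.exp(element)
    return [math.exp(element) / total for element in column]

probs = column_softmax([0.5, 2.0, -1.0, 0.1])
# probs sums to 1; apply this column by column across the attention matrix
```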
@marloelefant7500 3 months ago
What about LSTMs? You briefly showed the paper but didn't mention them, even though they were supposed to be the solution to the vanishing and exploding gradients problem.
@algorithmicsimplicity 3 months ago
LSTMs do better than regular RNNs at remembering. A regular RNN will forget what it saw 20 tokens ago; LSTMs can remember for a few hundred tokens, maybe up to 1000, but after that they forget as well. This is because LSTMs don't completely fix vanishing and exploding gradients, they just make them vanish more slowly (basically because the sigmoid gates they use saturate and can't output values extremely close to 0 or 1). When people say LSTMs fix vanishing and exploding gradients, they mean LSTMs have less vanishing and exploding gradients compared to regular RNNs. Mamba, on the other hand, can remember for at least hundreds of thousands of tokens. Also, LSTMs aren't parallelizable, so it isn't practical to train large-scale LSTMs on modern hardware. Recently the author of LSTMs put out a new paper with new versions of LSTMs to fix these issues (called xLSTM), but from what I can tell xLSTM just performs worse than Mamba in every way.
@OscarTheStrategist 8 months ago
Amazing video, insta-sub!
@tempname8263 8 months ago
21:48 33%? Dude, it's a 3.4x improvement. Measuring improvement relative to accuracy instead of error rate is dumb, since that'd mean the difference between 100% accuracy and 99% is just 1%, which is not representative of anything.
@harrysvensson2610 8 months ago
Everyone has issues when it comes to calculating with percentages. Here's an example: imagine a game character with armor. The character has 98% damage reduction, then puts on some more armor and reaches 99% damage reduction. How much less damage do they take compared to before putting on the extra armor? 100%? 50%? 1%? If you math it out it's obviously 50% less damage taken, since there's 2% between 98% and 100%, and one of those 2% is now removed, hence 1/2 -> 50% less damage taken than before. But you know what? Not everyone agrees that it is 50%. Understanding percentages is difficult.
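The armor arithmetic above, spelled out (a hypothetical helper that just restates the comment's numbers):

```python
def damage_taken(reduction):
    # fraction of damage that gets through the armor
    return 1.0 - reduction

# going from 98% to 99% reduction halves the damage that gets through:
ratio = damage_taken(0.99) / damage_taken(0.98)
# ratio is approximately 0.5, i.e. 50% less damage taken
```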
@BooleanDisorder 8 months ago
@harrysvensson2610 Yeah, the armor thing is a great example. The higher the damage and the more important a tank is, the more important that single percent becomes. It could literally mean the difference between surviving a blow from a boss or dying.
@ScorpioneOrzion 8 months ago
@harrysvensson2610 It depends; in the armor example it's 1% absolute and 50% relative.
@harrysvensson2610 8 months ago
@ScorpioneOrzion Exactly.
@tempname8263 8 months ago
@harrysvensson2610 It's not like it's difficult, it's just that most people make leaps in logic, where they don't even think about relative to *what* they are measuring the percentage.
@tantzer6113 8 months ago
Enjoyed this. Given that its performance is comparable to or better than transformers as verified independently in several papers, is Mamba gaining a foothold among practitioners?
@mimotron 8 months ago
It does: kzbin.info/www/bejne/b6SQapSJpMeer5o
@algorithmicsimplicity 8 months ago
Definitely, lots of open source language models are switching to Mamba. Mamba is also being used for other tasks as well, e.g. arxiv.org/abs/2401.09417 Also, recently google deepmind released this paper ( arxiv.org/abs/2402.19427 ) on hybrid dynamic linear RNN and transformers which achieves really good results. Dynamic linear RNNs are definitely going to become mainstream.
@jondo7680 5 months ago
Do you have a video comparing Mamba to RWKV with the benefits of each over the other?
@algorithmicsimplicity 5 months ago
I do not, I'd recommend checking out the latest papers for each (Mamba: arxiv.org/pdf/2405.21060 , RWKV: arxiv.org/pdf/2404.05892 ) and seeing which performs better on tasks that are similar to your use case.