Next paper: were NAND gates and registers all we needed?
@ickorling7328 (2 months ago)
Wait, literally...
@yurona5155 (2 months ago)
Shame on you, NERF (NOR-exclusionary reductive functionalist)!
@achunaryan3418 (2 months ago)
Was Newton all we needed?
@Cereal.interface (2 months ago)
are organic molecules and nucleotides all we needed?
@ickorling7328 (2 months ago)
@Cereal.interface Was DNA as the central dogma all we needed?
@Bikameral (2 months ago)
It's great having you back!! Thank you, and please don't leave us again.
@FranksWorldTV (2 months ago)
💯
@wolpumba4099 (2 months ago)
*Were RNNs All We Needed? Revisiting the Power of Minimal Recurrent Networks*
* 0:00 Introduction: The video explores a paper questioning the necessity of complex recurrent neural network (RNN) architectures like S4 and Mamba, suggesting that simpler RNNs might achieve comparable performance.
* 0:16 RNNs vs. Transformers: RNNs handle sequences with constant memory requirements, compared to Transformers' quadratic memory needs, but their training suffers from backpropagation through time (BPTT).
* 3:52 BPTT limitations: BPTT requires backpropagating gradients through all intermediate steps, limiting the length of sequences RNNs can effectively handle.
* 5:30 State space models: Newer models like S4 and Mamba avoid BPTT by removing the hidden-state dependence from the input-driven computations, allowing parallel processing during training.
* 9:06 Minimal RNNs (minGRU, minLSTM): The paper introduces minimal versions of GRUs and LSTMs that eliminate hidden-state dependencies in the gating mechanisms, further simplifying computation.
* 12:54 Parallel scan: These minimal RNNs can be trained efficiently using a parallel scan algorithm, similar to S4 and Mamba.
* 14:56 Trade-offs: While simpler, minimal RNNs are less powerful than traditional RNNs in a single layer. However, this can be mitigated by stacking multiple layers.
* 19:55 Experimental results:
  * 19:57 Selective copying task: Minimal RNNs struggle with long-range dependencies in a single layer, but improve significantly with multiple layers.
  * 21:02 Reinforcement learning benchmarks: Minimal RNNs perform well, but the benchmarks are considered too simple to draw strong conclusions.
  * 23:59 Language modeling (Shakespeare): Minimal RNNs perform comparably to Mamba on this small character-level dataset, where Transformers struggle due to the task's local nature.
* 26:45 Conclusion: The paper's hypothesis that minimal RNNs can achieve performance comparable to complex state-space models is plausible, but needs stronger experimental evidence. Still, their potential scalability and efficiency make them promising candidates for future research.

I used gemini-1.5-pro-exp-0827 on rocketrecap dot com to summarize the transcript.
Cost (if I didn't use the free tier): $0.03
Input tokens: 21161
Output tokens: 467
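To make the 9:06 and 12:54 bullets concrete, here is a minimal NumPy sketch of the minGRU-style update, assuming the gate and candidate are plain linear maps of the current input, with biases and the paper's log-space tricks omitted (my own illustration, not the authors' code). Because neither the gate nor the candidate depends on h_{t-1}, the recurrence is linear in the hidden state and can be trained with a parallel scan instead of BPTT.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def min_gru_sequential(x, Wz, Wh, h0):
    """Sequential reference for the minGRU-style update:
    h_t = (1 - z_t) * h_{t-1} + z_t * h_tilde_t,
    where the gate z_t and candidate h_tilde_t depend only on the input x_t."""
    h = h0
    states = []
    for x_t in x:                        # x: (T, d_in)
        z_t = sigmoid(Wz @ x_t)          # gate computed from the input only
        h_tilde = Wh @ x_t               # candidate state from the input only
        h = (1.0 - z_t) * h + z_t * h_tilde
        states.append(h)
    return np.stack(states)              # (T, d_hidden)

# Toy usage
rng = np.random.default_rng(0)
T, d_in, d_h = 6, 4, 3
x = rng.normal(size=(T, d_in))
Wz, Wh = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_in))
print(min_gru_sequential(x, Wz, Wh, np.zeros(d_h)).shape)   # (6, 3)
```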
@onlyms4693 (2 months ago)
Is their solution to the attention head just more layers? If I'm not wrong, even Mamba has the limitation that it uses Transformer multi-head attention to mitigate it. What we need to find is a replacement formula for the attention head, because I feel it's the biggest compute cost: the bigger the context, the more has to be processed in the attention head, since every token is compared against every other token once there is more than one word.
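To put rough shapes on the cost described above (an illustrative sketch, not tied to any particular model): attention materializes a T-by-T score matrix per head, while a recurrent or SSM layer carries only a fixed-size state regardless of context length.

```python
import numpy as np

T, d = 1024, 64                               # context length, head dimension
rng = np.random.default_rng(0)
Q, K = rng.normal(size=(T, d)), rng.normal(size=(T, d))

scores = Q @ K.T / np.sqrt(d)                 # attention scores: (T, T), quadratic in context
state = np.zeros(d)                           # recurrent/SSM state: fixed size, independent of T

print(scores.shape, state.shape)              # (1024, 1024) vs (64,)
```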
@novantha1 (2 months ago)
Imagine how influential this paper could have been if it had been released in 2014, lol. It would have been revolutionary.
@HoriaCristescu (2 months ago)
Great explanation of the distinction between SSM and RNN at 5:30
@maccloud8526 (2 months ago)
Use a dark theme, then you won't have to wear sunglasses.
@achunaryan3418 (2 months ago)
Even then, how is he going to enter the Matrix, Neo?
@ikartikthakur (2 months ago)
dang
@apncahere137 (2 months ago)
Lmao
@Neomadra (2 months ago)
Excellent analysis of the benchmarks. Especially the analysis of character level tasks makes so much sense.
@lizardy2867 (2 months ago)
TLDR: It would have been more experimentally interesting to see results on an ensemble of minGRUs. It is hard for me to say there is much takeaway here besides confirmation of the Mamba architecture's success. Perhaps they were so excited with the release of the paper that they decided not to focus on its stronger aspect: the minGRU and the concept of ensembling that Mamba also relies on.
@GNARGNARHEAD (2 months ago)
I was looking at doing something similar last week, but compressing the layers of a transformer into the weights of the RNN to get around the training inefficiencies.
@MohamedMagdy-u2k (2 months ago)
24:00 Correct me if I am wrong, but what I see is that the Transformer is a more general architecture that requires more training time, while on the other side there is an inductive bias in Mamba, minLSTM, and minGRU that makes these architectures converge very quickly on that dataset.
@PaganPegasus (2 months ago)
To me this paper highlights that RNNs actually aren't all we need and how powerful the transformer really is. A two-layer transformer alone is capable of solving a bunch of tasks such as copying, sorting, or other sorts of linear classification and reasoning, thanks to the QK/OV circuits.
@elpepemandioca (2 months ago)
In spite of not getting good results right now, I'd like more research to go this way, attempting to synthesize the plethora of models
@r.alexander9075 (2 months ago)
Why were the benchmarks chosen to be RL tasks instead of sequence modelling tasks? And why would we then compare them to Decision Transformers?
@creepi. (2 months ago)
Why do most papers concerning SSMs and RNNs not include RWKV in their benchmarks? It would've been interesting to see how it fares against Mamba S4/S6 and minGRU/minLSTM.
@the_primal_instinct (2 months ago)
Next paper: "Can multiplications be replaced with multiple additions?"
@dougrattmann1 (2 months ago)
*cough* AutoML-Zero moment
@quasimodo1914 (2 months ago)
I don't know, KAN they?
@xelaxander (2 months ago)
25:55 Constant gate decay might actually be interesting for surrogate models of physical systems. Ignoring damage accumulation, a system's response is independent of its history.
@xelaxander (2 months ago)
You need knowledge of the past though, since you can't include the entire phase space in your input, making you lose higher-order information.
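A small worked unrolling of what a constant gate would imply, assuming the minGRU-style update h_t = (1 - z) h_{t-1} + z h̃_t with a fixed gate z in (0, 1) (my own sketch, not from the paper):

```latex
h_t = (1 - z)\, h_{t-1} + z\, \tilde h_t
    \;=\; (1 - z)^{t}\, h_0 \;+\; z \sum_{k=1}^{t} (1 - z)^{\,t-k}\, \tilde h_k
```

So the state is an exponentially weighted average of the input-driven candidates: past inputs are not discarded, only down-weighted geometrically, which is essentially the ring-down behaviour the Echo State Network reply below describes.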
@nias2631 (2 months ago)
This kind of happens in Echo State Networks, since the previous signals ring down due to the eigenvalues of the reservoir matrix.
@Mordenor (2 months ago)
Thank you Mr Yannic for discussing whether RNNs are all we needed.
@DanFrederiksen (2 months ago)
Wouldn't it be straightforward to try it on the GPT-2 training set and compare? Or is that inconvenient?
@Timotheeee1 (2 months ago)
can you review the nGPT paper?
@Metalhead121396 (2 months ago)
Comment on Table 2 -- I was under the impression that the S6 configs were generally the "best" for S4/H3/Mamba? Or does someone know of cases where the S4 or Hyena layers are better-suited?
@lifeofcode (2 months ago)
Wonderful overview, thanks!
@shikhars4816 (2 months ago)
IIUC, selective copying of a token depends on the current input token alone(?). In that case, why does a single layer perform so badly on the task?
@achunaryan3418 (2 months ago)
Single-layer selection cannot be used optimally in an RNN for selective token copying based on the current input token. Recurrence requires more than one layer to create outputs with less error, even when the gating is input-dependent. Maybe a CNN or MNN can produce a better result.
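For context, here is a toy sketch of how the selective copying task asked about above is usually set up (my own illustration of the setup discussed around 19:57, loosely following the Mamba paper's description): deciding whether a single token is noise is indeed local, but emitting the kept tokens in order still requires carrying them in the state across the sequence.

```python
import numpy as np

# Toy instance of selective copying: data tokens (1..vocab-1) are scattered among
# noise/blank tokens (0); the target is the data tokens in their original order.
rng = np.random.default_rng(0)
seq_len, n_data, vocab = 16, 4, 8

data = rng.integers(1, vocab, size=n_data)                 # tokens that must be copied
positions = np.sort(rng.choice(seq_len, n_data, replace=False))
inputs = np.zeros(seq_len, dtype=int)
inputs[positions] = data
targets = data                                             # output: data tokens only, in order

print(inputs)    # mostly zeros, with the data tokens scattered at random positions
print(targets)   # the same data tokens, in order, with the noise removed
```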
@danielsautot4521 (2 months ago)
Welcome back. Can you make a video on the architecture of the Liquid Foundation Model?
@mostaphabenhenda4539 (2 months ago)
Where is the architecture released?
@yorth8154 (2 months ago)
Hey Yan, I was wondering if you've seen the new Google paper Selective Attention. It looks good.
@2dapoint424 (2 months ago)
Next paper: Electricity Is All We Need.
@box-mt3xv (2 months ago)
Missed your videos
@ensabinha (2 months ago)
The fact that one of the experiments is run on a simple benchmark is not the issue. As long as all architectures were run on it, its simplicity is not an argument against using it as a benchmark. Good architectures should perform well on simple problems as well. However, they should be run on hard problems too.
@testboga5991 (2 months ago)
I think they're onto something, but I also think that, in the strict sense, it's impossible to demonstrate if it can't be mathematically proven (likely not possible for a human anyway, if at all). They're basically trying to prove a negative, which strictly doesn't work.
@black-snow (2 months ago)
5th! Finally able to leave a high-quality comment.
@marcotito9873 (2 months ago)
Really great content
@davidlearnforus (2 months ago)
But I do not get who decided that better performance of a compositional bare-bones element means anything. An amoeba has far more capabilities than a single human neuron, but there is a big "BUT".
@andytroo (2 months ago)
How many layers are in minGRU? Current transformers have >20 complex layers...
@mrpocock (2 months ago)
RNNs are effectively map-reduce.
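One way to make that point concrete (my own sketch, using the gated linear recurrence that minGRU-style models reduce to): each step is an affine map h -> a_t * h + b_t, affine maps compose associatively, so the final state is a single reduce and all prefix states form a scan that can be parallelized, which is the 12:54 parallel-scan point in the video.

```python
import numpy as np
from functools import reduce

def compose(f, g):
    """Compose two affine maps h -> a*h + b; apply f first, then g."""
    a1, b1 = f
    a2, b2 = g
    return (a2 * a1, a2 * b1 + b2)

def rnn_as_reduce(a, b, h0):
    """h_t = a_t * h_{t-1} + b_t computed as one associative reduce.
    Because `compose` is associative, the same result can be obtained with a
    parallel (prefix-scan) schedule instead of this left-to-right fold."""
    a_total, b_total = reduce(compose, zip(a, b))
    return a_total * h0 + b_total

rng = np.random.default_rng(0)
T, d = 8, 3
a = rng.uniform(0.0, 1.0, size=(T, d))      # e.g. (1 - z_t) gates
b = rng.normal(size=(T, d))                  # e.g. z_t * h_tilde_t
h0 = np.zeros(d)

# Sequential reference loop for comparison
h = h0
for t in range(T):
    h = a[t] * h + b[t]
assert np.allclose(h, rnn_as_reduce(a, b, h0))
```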
@andrewaverbah4809 (2 months ago)
Please review REPA paper
@MarceloTeixeiraa (2 months ago)
What's the next ALL WE NEEDED?
@tresuvesdobles (2 months ago)
I doubt it (answering the question in the title)
@makhalid1999 (2 months ago)
GPRNN when?
@alekseyburrovets4747 (2 months ago)
Randomly stumbled. Subscribed.
@4thpdespanolo (2 months ago)
Were Transistors All We Needed?
@crassflam8830 (2 months ago)
yes
@-E42- (2 months ago)
damn I wish I was at flying altitude to fly with you through these papers ahah :D
@tallwaters9708 (2 months ago)
What do people use RNNs for these days? I thought they went the way of GANs.
@chickenp7038 (2 months ago)
GANs are definitely still widely used. All of the VAEs in LDMs use a GAN loss.
@novantha1 (2 months ago)
Well, the problem with deep learning seems to be that you can do most tasks with most architectures given enough scale, data, and training compute. RNNs are kind of nice in that paradigm because they have stable memory allocation with large sequences as compared to Transformers, but they're also a lot easier to optimize because you effectively just need efficient kernels for the linear transformations, activation functions, and parallel scan algorithms, which is quite a bit simpler than, for instance, a full Transformer. As for what you'd use them for? Presumably the same things you could use a Transformer for, essentially. It appears that for a lot of the things you would use a smaller LLM for (i.e. 1.3B and below) it actually really doesn't matter which architecture you have. I've also thought about extending the context length of a Transformer LLM with some sort of RNN adapter for the ultra-long-range dependencies, but I'm not even sure what that would look like exactly.
@AM-yk5yd (2 months ago)
I think Translatotron still uses LSTMs. It's mentioned in the Translatotron 2 paper; IIRC the Translatotron 3 paper doesn't explicitly go into what the decoder is, only vaguely, and says that its backbone is Translatotron 2. There was also xLSTM; I think Yannic covered it. RWKV is still alive and being developed. Still weak, but one day... I would not be surprised if it gets used in time-series prediction. Mamba would fit perfectly, and an RNN is generally the first thing I'd try for modeling time series from scratch.
@AM-yk5yd (2 months ago)
The simplest form of adapter would probably just insert extra RNN layers. Memory Transformers (or RETRO, I don't remember which) found that inserting a KV lookup near the end, like around layer 10 in a 12-layer network, gives very good results. We could replace the KV lookup with RNN layers and either add their output as prefix tokens, RMT-style, or just add the values.
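Purely as a hypothetical illustration of the proposal above (every name and shape here is made up, and no real Transformer is involved): a small RNN compresses a long history into a handful of summary states that are prepended, RMT-style, as prefix tokens to the local attention window.

```python
import numpy as np

def rnn_summary_tokens(history, W, U, n_prefix):
    """Hypothetical adapter: run a simple RNN over a long history of token
    embeddings and keep the last `n_prefix` hidden states as 'prefix tokens'."""
    h = np.zeros(W.shape[0])
    states = []
    for x_t in history:                      # history: (T_long, d)
        h = np.tanh(W @ x_t + U @ h)         # vanilla RNN step, illustrative only
        states.append(h)
    return np.stack(states[-n_prefix:])      # (n_prefix, d) summary of the far past

d, T_long, T_local, n_prefix = 16, 500, 64, 4
rng = np.random.default_rng(0)
W, U = rng.normal(size=(d, d)) * 0.1, rng.normal(size=(d, d)) * 0.1
long_history = rng.normal(size=(T_long, d))
local_window = rng.normal(size=(T_local, d))

prefix = rnn_summary_tokens(long_history, W, U, n_prefix)
transformer_input = np.concatenate([prefix, local_window], axis=0)   # prefix tokens + recent tokens
print(transformer_input.shape)               # (68, 16)
```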
@zrmsraggot (2 months ago)
I just saw the title and i laughed
@MartinDxt (2 months ago)
Say whaaaat?
@lanessarosel (2 months ago)
Right - I'm still wondering what Borealis AI is - sounds like a reset machine.