Scaling Transformer to 1M tokens and beyond with RMT (Paper Explained)

58,443 views

Yannic Kilcher

1 day ago

Comments: 137
@YannicKilcher (1 year ago)
OUTLINE: 0:00 - Intro 2:15 - Transformers on long sequences 4:30 - Tasks considered 8:00 - Recurrent Memory Transformer 19:40 - Experiments on scaling and attention maps 24:00 - Conclusion Paper: arxiv.org/abs/2304.11062
@CosmiaNebula (1 year ago)
TLDR: use a Transformer as an RNN. Imagine an LSTM, but where each LSTM block is a Transformer. Train it by backpropagating through 7 steps of the RNN ("backprop through time", BPTT). Why now? Because algorithms and hardware have finally caught up enough to fit 7 copies of the Transformer on one device. What next? Perhaps rematerialization!
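For readers who want to see the shape of that recurrence, here is a minimal PyTorch-style sketch. The names (`RecurrentMemoryWrapper`, `num_mem`) and the BERT-like `encoder` module are assumptions for illustration; this is the idea described above, not the paper's actual implementation, which handles memory-token placement and gradient truncation with more care.

```python
import torch
import torch.nn as nn

class RecurrentMemoryWrapper(nn.Module):
    """Illustrative segment-level recurrence: prepend memory tokens, run the
    shared transformer, carry the updated memory into the next segment."""

    def __init__(self, encoder: nn.Module, dim: int, num_mem: int = 10):
        super().__init__()
        self.encoder = encoder                        # same transformer weights for every segment
        self.mem_init = nn.Parameter(0.02 * torch.randn(num_mem, dim))
        self.num_mem = num_mem

    def forward(self, segments):
        # segments: list of (batch, seg_len, dim) tensors, processed in order
        batch = segments[0].shape[0]
        mem = self.mem_init.unsqueeze(0).expand(batch, -1, -1)
        outputs = []
        for seg in segments:                          # BPTT unrolls through this loop
            x = torch.cat([mem, seg], dim=1)          # memory tokens + segment tokens
            h = self.encoder(x)                       # (batch, num_mem + seg_len, dim)
            mem = h[:, : self.num_mem]                # updated memory for the next segment
            outputs.append(h[:, self.num_mem :])      # per-token outputs for this segment
        return outputs, mem
```

Training with 7 segments in the list corresponds to the "backprop through 7 steps" mentioned above: the loop is unrolled and gradients flow back through every copy of the shared encoder.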
@thegreenxeno9430 (1 year ago)
Is Open Assistant open to submissions of home video recordings for training data?
@herp_derpingson (1 year ago)
Yay, a normal video after what feels like years. Also, is it just me, or have recent papers become increasingly easier to read? There is no obscure math, and the code is published.
@joech1065 (1 year ago)
As Clyde from South Park would say, “ChatGPT, dude”
@NoNameAtAll2 (1 year ago)
I miss ML news :(
@Nif3 (1 year ago)
Yes, I've noticed this as well - publications have become a lot shorter and more focused on practical applications.
@xynonners (1 month ago)
@@Nif3 They have also become less novel; it's hard to find a paper that is both simple (with published code) and novel.
@halocemagnum8351 (1 year ago)
I've always loved the in depth paper reviews! Thanks so much for this one, it was great!
@jidun9478 (1 year ago)
Thanks for finally saying it. I have seen quite a few AI specialty channels talking about pasting the entire Harry Potter book series into a single prompt box :) OMG, I couldn't even comment.
@joe_limon (1 year ago)
Ty for covering this
@neocrz (1 year ago)
Nice. I was interested in that paper. Video came out right on time
@GeorgeFosberry (1 year ago)
Thank you for a great analysis that is accessible even to laymen like myself. Always a pleasure to watch your videos, in contrast to the AI hype riders (AAAAAAAAAAAAA TWO MILLION TOKENS CTX LENGTH IS HERE!!!11)
@adrianimfeld8360 (1 year ago)
Was literally waiting for your take on this paper, thx for covering it!
@ChuanChihChou (1 year ago)
Information only propagates bottom-up in Transformer-XL, so the maximum "receptive field" (effective context length) is finite regardless of how far back the BPTT goes. To be more precise, O(LC): L = number of layers, C = context length of each layer.
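A rough back-of-the-envelope check of that bound (illustrative numbers, not from the video):

```python
# Each Transformer-XL layer attends only to the cached states of the previous
# segment, so information climbs one layer per segment and the effective context
# is bounded by roughly num_layers * segment_length, however far BPTT runs.
def xl_receptive_field(num_layers: int, segment_len: int) -> int:
    return num_layers * segment_len

print(xl_receptive_field(num_layers=12, segment_len=512))    # 6144 tokens
print(xl_receptive_field(num_layers=24, segment_len=1024))   # 24576 tokens
```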
@perbojsen3433 (1 year ago)
Thank you for this nice video. Being brand new to this field, I nevertheless find your presentation and explanations very clear and easy to follow. I also appreciate your skepticism and how you look behind the hype.
@FredPauling (1 year ago)
I appreciate you taking the time to reduce the hype on this paper for non experts.
@Billy4321able (1 year ago)
I was very skeptical when people were saying that it could read an entire book, in memory, all at once. As it turns out it was all just hype. Go figure.
@piratepartyftw (1 year ago)
Will you do Hyena next? Thanks!
@Skinishh (1 year ago)
Great explanation! The fact that the video is
@adelhalawa974 (1 year ago)
Really appreciate not just the breakdown but you injecting your intuition throughout. Great vid
@breakablec (1 year ago)
This seems to work only for a sparse information density that does not overwhelm the memory.
@agsystems8220 (1 year ago)
For now. I guess you could let it control its own read speed to let it run at the pace it wants, potentially even with backtracking. It is currently working like a book that turns its own pages at a set rate, no matter how fast the reader feels is appropriate.
@breakablec (1 year ago)
@@agsystems8220 Well, the input size could also be varied with various pretrained model sizes and potentially smaller chunks, and overwhelming of the inputs could be detected and adjusted for as well.
@Alex-gc2vo (1 year ago)
Seems like you could do the same thing with prompting. Maybe even better. Just feed it chunks of the overall text with the prompt to take notes of information relevant to the question. Then use all the notes to answer. You could also do it with a vector database.
@moomoo9820 (1 year ago)
Overhype for the algo
@killers31337 (1 year ago)
I guess the interesting part is that they didn't use any additional weights to process memory. BERT's lack of causal masking makes it possible to update the memory just by passing it through the transformer layers. This method might be fundamentally incompatible with autoregressive models. It might be possible to use an NN trained this way with other forms of memory - I would guess it doesn't really care whether the memory tokens come from the previous segment or elsewhere. So you could have a memory database and look up the most relevant memory for a specific segment.
@jeffwads (1 year ago)
Having used the 30b model you guys created, I can say with confidence that it is an amazing model, far exceeding what I thought it would be capable of. Its comprehension appears to be at least GPT 3.5 level if not better. Well done.
@preddyshite6342 (1 year ago)
Tell me you haven't used ChatGPT 3.5 in a while without telling me.
@Klokinator (1 year ago)
OpenAssistant is absolutely not at ChatGPT's level. It is pretty good though, and certainly the best of the open source models out right now. I look forward to the next major iteration, and more importantly, I'M DOING MY PART! Contribute to the Oasst dataset!
@novantha1 (1 year ago)
I actually had a very silly idea at one point where you would have a transformer model doing general processing and understanding, with the catch that it would rapidly forget information. However, each time it learned something, a small percentage of the weights involved would be sent to an RNN, almost in the background. The idea was that the RNN would be long-term memory, and it would only learn things that were reinforced many times, and ideally retain specifically facts and figures. This isn't the same thing, but it seems that somebody had a similar thought.
@dik9091 (1 year ago)
I had that thought today, and also the thought of whether someone else had had it too, and now I see that is the case ;)
@share4713 (1 year ago)
Finally! You don't know it, but I am waiting every day for a new video.
@andres_pq (1 year ago)
Finally a paper review!!!
@aBigBadWolf (1 year ago)
You should do a video on the Block-Recurrent Transformer! It's a mix between an LSTM and a transformer and achieves SOTA on PG-19.
@alexeybalandin4676 (1 year ago)
A very concise and clear analysis, thank you very much!
@ilia_zaitsev (1 year ago)
Indeed, it feels like a kind of RNN but using attention layers instead of the dense ones :) Or a recurrent transformer, depending on which side you look at it from...
@clray123 (1 year ago)
Sounds like the same approach as used by LlamaIndex (aka GPTIndex). It's true that it is not the same as having a 1M token context window, but the collected facts (and they can be something non-trivial, which still fits into the "small" 32K context window) can be then put together and summarized and inferred from as a final step. So it does in fact resemble what a human would do when extracting information from a long book - take notes on relevant topics while reading it, then write up some conclusions based on those notes alone.
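A rough sketch of that read-take-notes-then-answer loop. The generic `llm(prompt) -> str` callable, the chunk size, and the prompts are placeholders for illustration, not LlamaIndex's actual API:

```python
def answer_over_long_text(llm, text: str, question: str, chunk_chars: int = 8000) -> str:
    """Map step: note anything relevant per chunk. Reduce step: answer from the notes."""
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    notes = []
    for chunk in chunks:
        notes.append(llm(
            f"Question: {question}\n\n"
            f"Passage:\n{chunk}\n\n"
            "Write brief notes on anything in this passage relevant to the question."
        ))
    return llm(
        f"Question: {question}\n\n"
        "Notes taken while reading:\n" + "\n".join(notes) + "\n\n"
        "Answer the question using only these notes."
    )
```

The key difference from RMT is that the question is known up front here, so each pass can filter for relevance, whereas RMT's memory has to guess what will matter before the question arrives.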
@jonathanfranks1286 (1 year ago)
Sorry, could a model trained like that also output text with a large number of tokens?
@clray123 (1 year ago)
@@jonathanfranks1286 Huh? There is no limit on the number of tokens any model can output.
@yorth8154 (1 year ago)
New billion-token paper out. Can you make a rundown of it, please?
@LaSalsePareille92 (1 year ago)
Amazing review of this paper, thanks!
@almoni127 (1 year ago)
Great video as always! Just a small correction. Quadratic memory is not an issue since the introduction of flash attention. There are still the limitations of linear memory and quadratic running time.
@dik9091 (1 year ago)
Thank you, I was just breaking my head over it ;)
@ilianos (1 year ago)
Hi Yannic, great video! Are you planning to review the following paper? "Low-code LLM: Visual Programming over LLMs"
@Verrisin (1 year ago)
I mean, if they learn to generalize the compression ... it could remember a lot of stuff, and drop details but keep the basic idea ... - Then it would know "I need to look at X to find details" - it would output that as LOOKUP(X), something would include that thing in near-context (e.g. I look up source of a fn I roughly know) and it could do A LOT. - I mean ... this is how I work as a human. - I think if they figure out how to train it to have a general enough compression ... this approach is all that is needed.
@hEmZoRz (1 year ago)
I'm really, really waiting for your review on the LongNet that claims to scale to 1B tokens!
@evennot (1 year ago)
Why don't they just save the input sequence and reiterate over it when a question is presented? It's a genuine question: there's probably a reason. Multiple transformers constantly working over the input data (+ using recurrent connections, not in parallel) can't be slower than an additional question-specific transformer reiterating over the text. Also, dumb reiteration with something specific "in mind" would be nice for spotting contradictory facts in the input. People solve some tasks like this. Betting on capturing all possible aspects of the input data in the "context cache" looks like an unsolvable problem to me.
@aboody006 (1 year ago)
Woah I just read this today, and then I see this notification.
@RuairiODonnellFOTO (1 year ago)
What note-taking tool is he using? Anyone have tips on organising all the papers/PDFs into a catalogue on my desktop? I've read loads of papers but just put them in one big folder. Any nice research organiser for PDFs or URLs (maybe one that allows annotations for searching later)?
@learningwithlowell (1 year ago)
Great breakdown. Thank you!
@nettlesoup (1 year ago)
Not an AI dev so this is just my layman's reading. As other comments have referenced the "paste entire Harry Potter book" example, isn't the advantage of this that you could tell the memorization function what you want it to treat as facts? So, you could ask, "Tell me all the spells Hermione casts when Ron is nearby and where they are", and then the first step is to tune the memorization network to detect facts that relate to this and treat any sentences that don't involve any spell casting as noise for memorization purposes. (How? I don't know, some kind of fact filter rule in plain English that gets added to each pass? Presumably you can use a LLM to generate that filter rule text). Then the location of the spell casting can be determined from the context of preceding sentences. Maybe another memorization could be the list of unique spells as they're taught so they can be detected out of scope, e.g. wingardium levitosa or whatever it is (not a big HP fan sorry).
@vivienseguy (1 year ago)
Great paper review as usual!
@fitybux4664 (1 year ago)
Maybe you could have it analyze every file in a large code base. Or have it be able to carry on a conversation that is weeks long.
@herp_derpingson (1 year ago)
Maybe
@makuru.42 (1 year ago)
Or, more importantly, you could have an enormous prompt.
@KevinGChiu (1 year ago)
How does it know what fact to put into memory before reading the question?
@yildirimakbal6723 (1 year ago)
Great summary!
@easter.bunny.6 (1 year ago)
Hi Yannic, thanks for your video. After watching it, do you think this model can be used in a decoder-only architecture?
@dinkusstinkus4396 (1 year ago)
To me the big reveal was that it had no other architecture, and they did it on a 1060
@weert7812 (1 year ago)
This seems like it could be a way to have agents which have more persistence in time.
@lamhkak47 (1 year ago)
I wonder if you could do a review of the RWKV model? Heard that model was built by a one-madlad team.
@arthurheuer (11 months ago)
I can hardly believe I laughed when hearing "a humongous 1 million, even 2 million tokens", in anticipation of how funny it will be in the future…
@sandratoolan9598 (1 year ago)
Missed you. You look good in the glasses - it's too much of a brand already dude, no way back.
@RuslanLagashkin (1 year ago)
Overhyping with all my might ) Seriously though, it is an obvious idea, just well executed. I guess at some point we'll have to write questions before the material to analyze, not in just any part of the prompt, as it is now in ChatGPT.
@siquod (1 year ago)
Why do they use autoregressive self-attention to generate and attend to the memory tokens? Wouldn't cross attention make more sense, mostly because then different semantic embeddings could be used for memory facts than for mere tokens?
@emmanuelkolawole6720 (1 year ago)
Hey Yannic, why don't you add PandasAI to your Open Assistant project? It will take the product to a new level of traffic. Also, support the PandasAI project so it can go beyond beta soon.
@BO2trickshoting (1 year ago)
This would probably be useful for something like Bing Chat or just search engines in general.
@marverickbin (1 year ago)
A question: BERT is an encoder-only transformer. That means the inputs are token IDs, but the outputs are vector embeddings, so they are not the same kind of data. Therefore, you cannot use the output as the input... How do they manage to get memory tokens as output if the outputs are vector embeddings?
@davidlatkin5525 (1 year ago)
Can you make a video about SAM (Segment Anything Model) from Meta?
@albinoameise (1 year ago)
Would it be possible to have a step before the transformer that handles the input? E.g. first take the last section of the input (which is the task for the transformer) as a query. Then take some memory of fixed length and run an attention block over the input section by section, using the query from before and doing attention between the memory and the current section. If that works, the memory would be a dense representation of what is actually important from the input, regardless of length or task. Might be difficult to train though...
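A toy sketch of that proposal, just to make it concrete. Every name, the mean-pooled task query, and the additive memory update are my own assumptions; whether something like this trains well is exactly the open question raised above.

```python
import torch
import torch.nn as nn

class QueryConditionedCompressor(nn.Module):
    """Toy pre-pass: a fixed-length memory attends to each input section,
    conditioned on a query derived from the task section."""

    def __init__(self, dim: int, num_mem: int = 32, num_heads: int = 8):
        super().__init__()
        self.mem_init = nn.Parameter(0.02 * torch.randn(num_mem, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mix = nn.Linear(2 * dim, dim)

    def forward(self, sections, task):
        # sections: list of (batch, sec_len, dim); task: (batch, task_len, dim)
        batch = task.shape[0]
        task_summary = task.mean(dim=1, keepdim=True)           # crude query from the task section
        mem = self.mem_init.unsqueeze(0).expand(batch, -1, -1)
        for sec in sections:
            q = self.mix(torch.cat([mem, task_summary.expand_as(mem)], dim=-1))
            update, _ = self.attn(query=q, key=sec, value=sec)  # memory reads the section
            mem = mem + update                                  # accumulate what matters
        return mem  # dense, task-conditioned summary to hand to the main transformer
```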
@barulicksama3838 (1 year ago)
You should do more videos on your new chat. You should promote it.
@DaniilKirilenko (1 year ago)
Hi Yannic! What PDF reader do you use?
@rootthree9436 (1 year ago)
OneNote
@cchance (1 year ago)
Is this similar to how automatic1111 surpasses the 75 token cap?
@snippletrap (1 year ago)
How does it compare with RWKV?
@snapo1750 (1 year ago)
In theory RWKV is completely different from transformers, as it uses ONLY an RNN. Because RWKV uses only RNNs, there is no input context length limit, but in the training process they only feed (afaik) 8k tokens, therefore it should not be able to know more. The more beautiful thing about RWKV is that you don't need to quadratically increase your VRAM 🙂
@SimSim314 (1 year ago)
It would be interesting to see a demo of any such system. Let's say Open Assistant 30B with this...
@holthuizenoemoet591 (1 year ago)
So what would be better: increasing the context size of BERT from, for example, 512 to 2048, or using this recurrent memory technique and repeating the 512 four times?
@undergroundculture9009 (1 year ago)
Obviously increasing BERT's context size.
@thegreenxeno9430 (1 year ago)
Attention should be sentence-specific. Label grammatically: noun, verb, etc. Store labels locally in a vector DB to remember context (conversation, story, etc.). Run the transformer on the vector DB. [context labelling] Next step: an analysis engine stores 'understandings' in a relational DB.
@thegreenxeno9430 (1 year ago)
Like, the rules of grammar already exist. Just apply that labelling scheme.
@thegistofcalculus (1 year ago)
It may be possible to use this architecture to read backwards and look for an answer instead of trying to memorize facts that may or may not be relevant when the question comes. Or maybe iterate forward with awareness of the question that is otherwise presented at the end.
@alexbrown2288 (1 year ago)
Yannic looks a lot better without the sunglasses. He'd probably gain subscribers without them.
@theaugur1373 (1 year ago)
Anyone know how this compares with the Reformer architecture? It was able to scale to about 1 million tokens.
@darklordvadermort (1 year ago)
Any comments/thoughts on Hyena?
@Veptis (8 months ago)
Took 10 months for Google to come up with Gemini... but they aren't telling us exactly how.
@ground_news (1 year ago)
We enjoy watching your content and believe that both of our missions align well! Would love to connect to talk about a partnership
@dik9091 (1 year ago)
Only now can I somewhat follow it, great. I understand the attention thing, and I thought: why is that not applied to the conversation with feedback? I was wondering whether that has already been done or is being researched. I am at the start; I will draw conclusions at the end. In the meanwhile we have the MPT-7B model with 65k input, with ALiBi? Quote: "These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases (ALiBi)." 2:40 - yes, that's what I immediately thought when I understood the self-attention matrix: that's a non-scaling bottleneck that could be solved with an analog signal matrix with calculating op-amps and a en.wikipedia.org/wiki/Nonblocking_minimal_spanning_switch - and I happen to build these switches, hmm.
@creativityoverload2049 (1 year ago)
So can it do machine translation?
@NeoShameMan (1 year ago)
I was hyped for 500ms only, does that count?
@m4ng4n (1 year ago)
How does this fare vs MEGA?
@jnsi0 (1 year ago)
Seven segments - reminds me of Miller's law 🤔
@danielhenderson7050 (1 year ago)
24:33 sketch is kinda funny :D
@Addoagrucu (1 year ago)
I don't know about this take. I kind of agree, except I think you're a bit too harsh on the utility this paper brings. To steelman the Twitter hype, I could say that the tradeoff between memory requirement (linear for this technique) and amount of functionality learned (which I think can be pushed further with better datasets) might make this a contender for a pretty robust method for large-scale NLP. A study on how much complicated language-understanding benchmarks suffer as a result of using all available VRAM to fit multiple copies of the same transformer into memory to do backprop through time, as opposed to using all available VRAM to fit one big transformer, would be helpful in trying to guide our opinions with empiricism.
@rumfordc (1 year ago)
Why does Open Assistant brown-nose for the WEF?
@Phasma6969 (1 year ago)
How?
@rumfordc (1 year ago)
@@Phasma6969 It describes them as heroes saving the world and agrees with every single one of their publicly stated agendas. It will even go so far as to ignore overrides on those topics (up to a point). I can understand how Microsoft and Google would reach this sort of behavior but am curious as to how Open Assistant comes by it.
@alexandermathews9710 (1 year ago)
@@rumfordc Probably because the data all the models are absorbing shares similar outlooks.
@rumfordc (1 year ago)
@@alexandermathews9710 Yeah, it's as if they're just pulling from the WEF's website and nowhere else. They should probably diversify their training set.
@alexandermathews9710 (1 year ago)
@@rumfordc No, I think the sheer amount of data that has been generated is in agreement with the WEF. This is one of the dangers of AI: a lack of diversity in data overall. It's not that WEF information is purposefully selected; it's that the sheer amount of it makes it look that way.
@nevokrien95 (1 year ago)
This isn't new, and it's relatively oversimplified. We already have Perceiver IO and Transformer-LSTM hybrids.
@-mwolf (1 year ago)
Transformer-XL reminds me of the forward-forward algorithm.
@kristoferkrus (1 year ago)
Awesome
@draken5379 (1 year ago)
This 'RMT' seems really pointless. You can just use the same main LLM to turn text into embeddings and store them in a vector-store database. Then you are able to search that vector store for everything related to the incoming input, allowing an LLM to have a massive collection of data that is retrieved in a natural-language way. Super simple example: I told my bot, "Dogs like blue, cats like red, rats like yellow." The LLM itself detects these 'facts' in the input and redirects them to a 'fact save' function, which saves each fact to a vector store. I then asked, "What color do dogs like?" The vector store is then queried with that input, which returns "dogs like blue", and that gets fed into the LLM along with the current input as a 'fact'. A crude and simple example, but it shows you don't really need to go code a totally new neural net to handle something an LLM can already handle by design.
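A minimal sketch of that fact-store pattern, using plain cosine similarity over embeddings held in a list. The `embed(text) -> vector` callable is assumed (e.g. any sentence-embedding model); no particular vector-database API is implied.

```python
import numpy as np

class FactStore:
    """Toy in-memory fact store: save facts as embeddings, retrieve by cosine similarity."""

    def __init__(self, embed):
        self.embed = embed          # assumed callable: str -> 1-D vector
        self.facts = []
        self.vectors = []

    def add(self, fact: str):
        self.facts.append(fact)
        self.vectors.append(np.asarray(self.embed(fact), dtype=np.float32))

    def query(self, question: str, k: int = 3):
        q = np.asarray(self.embed(question), dtype=np.float32)
        sims = [float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-8)
                for v in self.vectors]
        top = sorted(range(len(sims)), key=sims.__getitem__, reverse=True)[:k]
        return [self.facts[i] for i in top]

# store.add("Dogs like blue"); store.query("What color do dogs like?") would return
# the dog fact, which is then prepended to the LLM prompt as retrieved context.
```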
@BO2trickshoting (1 year ago)
Do you think this is what Bing Chat uses?
@draken5379 (1 year ago)
@@BO2trickshoting Yeah, from what I've heard. The way Stripe, Bing, Spotify, etc. are handling memory is via vector stores.
@fontenbleau (1 year ago)
A paper is a paper, but where is a working test...
@lio1234234 (1 year ago)
Awesome stuff! Do you think this will be integrated into Open Assistant?
@codemark7464 (1 year ago)
thanks a lot!
@binjianxin7830 (1 year ago)
7:44 Maybe it's about the model needing to be able to rule out negative facts?
@ivanstepanovftw (1 year ago)
I don't like this idea from the paper... Why not just make embeddings of previous context?
@klammer75 (1 year ago)
Well put and eloquently described… Gotta admit I was starstruck when I first saw the headline, but you're right, it's an RNN, not an absurdly long transformer window… Thank you for this 😎🦾
@samsamhuns928 (1 year ago)
Sounds like RNNs with extra steps lol
@nevokrien95 (1 year ago)
Why do you trust the dataset if even the example in the paper is wrong? This seems to be an indicator of poor data quality.
@timeTegus (1 year ago)
"So you are saying I can put in all the Harry Potter books and ask questions about them" 😂
@davidconsumerofmath (1 year ago)
Load in entire code bases!!
@serta5727 (1 year ago)
Algo Support
@zerotwo7319 (1 year ago)
Lol, a few weeks ago I was talking about how that was a limitation, but... what a time to be alive.
@serta5727 (1 year ago)
Cool thing❤
@theosalmon (1 year ago)
It's not 100% transformer. That in itself is noteworthy.
@dik9091 (1 year ago)
great news when it works
@qeter129 (1 year ago)
1 gagillion tokens of context...
@kaikapioka9711 (1 year ago)
Finally.
@preddyshite6342 (1 year ago)
I'm running out of pants to shit
@MultiCraftTube (1 year ago)
The Italians are coming 😱
@АлександрКрасных-щ9б (1 year ago)
I know that Kuratov