Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention (AI Paper Explained)

16,827 views

Yannic Kilcher

Comments: 70
@andreassyren329 · 3 years ago
If nothing else, the contribution to model naming is a clear increment to SOTA.
@jonatan01i · 3 years ago
Nyströmer clearly is.
@andreassyren329 · 3 years ago
@@jonatan01i I will agree with that.
@VikasSingh-jv7fn · 3 years ago
Hello Yannic, Your comment about the order of operations is correct. It is one of those things where you set out to check how poorly it performs and find out that it could work empirically (at least in limited settings). The lemma is not practically useful but merely evaluates the setting that if/when everything is idealized, the results/procedure does not lead to nonsensical conclusions. The choice of F early on in the paper was to avoid a conflict with D (D and d were both used) and E (ones matrix).
@tchlux · 3 years ago
What about the A^+ comment? Was that actually a typo in the paper? kzbin.info/www/bejne/o17do5ajh8lqe5Y
@zhanpengzeng8592 · 3 years ago
@@tchlux Yes, that is a typo. We somehow left out the pseudo inverse sign.
@xiongyunyang9643 · 3 years ago
@@tchlux This is a typo. We will update it. Thanks for the catch.
@JoelHough · 3 years ago
I have seen this exact type of lemma in many discussions about approximations. It did not seem out of place to me. It's nice to know that in the limit your approximation will agree with the ground truth which is certainly not the case in all approximation methods.
@herp_derpingson · 3 years ago
0:42 Nyan-storm-former!
3:30 Time for my weekly Transformer explanation :)
27:00 That was a really sweet and easy to understand explanation.
35:00 I wonder if we can have a DNN just predict a landmark tensor..
@xiongyunyang9643 · 3 years ago
Thanks for making this great video. Nice catch for the typo. We will update the draft soon.
@mdmishfaqahmed8356 · 3 years ago
The pronunciation of those author names was clutch :D
@RobEnglebright · 3 years ago
top work pronouncing the authors
@jamiekawabata7101 · 3 years ago
If I lift box 1 onto shelf 1 and box 1 onto shelf 2 and box 2 onto shelf 1, then I can predict the effort in lifting box 2 onto shelf 2. Great analogy, thank you.
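To make the analogy concrete, here is a minimal NumPy sketch of the basic Nyström idea the analogy points at: reconstruct a full similarity matrix from a few "landmark" rows and columns. This is an illustration only (the toy matrix, landmark choice, and variable names are invented here), not the paper's code.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))                        # 100 points in 2-D
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.exp(-sq / 4.0)                                # toy similarity ("attention-like") matrix

    landmarks = rng.choice(100, size=10, replace=False)  # the few "box/shelf" pairs we actually measure
    C = S[:, landmarks]                                  # every row vs. landmark columns
    W = S[np.ix_(landmarks, landmarks)]                  # landmark rows vs. landmark columns

    S_hat = C @ np.linalg.pinv(W) @ C.T                  # Nyström reconstruction of the full matrix
    err = np.linalg.norm(S - S_hat) / np.linalg.norm(S)
    print(err)                                           # relative error; shrinks as landmarks are added

The Nyströmformer applies this reconstruction trick to the softmax attention matrix, with the landmarks derived from the queries and keys.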
@lucidraisin · 3 years ago
Lol, unexpectedly mentioned 😅 thanks for the video!
@mizupof · 3 years ago
By the power of Yannic, I rename you!
@poesaste · 3 years ago
Incredible breakdown, subbed!
@benjaminho351 · 3 years ago
Nobody:
Yannic: uöu (1:07)
@Anirudh-cf3oc · 2 years ago
Very nice explanation sir, thank you!!
@mathematicalninja2756 · 3 years ago
What I hear: nice transformer
@АлексейТучак-м4ч · 3 years ago
that's a nice trömformer
@MausamJain · 3 years ago
How did you import the PDF into OneNote with such good quality? The printout option generally inserts very poor quality images of the pages.
@YannicKilcher · 3 years ago
It's definitely poor for me too; it's right on the edge of being useful.
@G12GilbertProduction · 3 years ago
Sweet Thursday with another sweet kind of paper. Buon appetito, Yannic! :)
@KennedDansker · 3 years ago
It is F because it is forward attention, right? (Then it would fit with B being backward.) It is not entirely right (A contains part of the forward attention), but I think that is the intention.
@osiris42 · 3 years ago
Does it even matter that the softmax doesn't commute, if the softmax is just a heuristic / hack in the first place? Or is there something inherently special about softmax in the transformer architecture?
@tchlux · 3 years ago
I don't know if I'd call it "special", but I like to think of it geometrically. When you use a softmax, you make it so that the layer immediately after the softmax only has to model a "surface" that lives on the inner wedge of the unit cube (points with 1-norm equal to 1).
@mathematicalninja2756 · 3 years ago
@@tchlux that is a good perspective
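The geometric point is easy to verify numerically: whatever the inputs, a softmax row is non-negative and sums to 1, so each attention row lies on the probability simplex. A tiny self-contained sketch (an illustration, not from the paper):

    import numpy as np

    def softmax(z):
        z = z - z.max()          # subtract the max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    row = softmax(np.array([2.0, -1.0, 0.5, 3.0]))
    print(row, row.sum())        # non-negative entries whose 1-norm is exactly 1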
@JamesAwokeKnowing · 3 years ago
So is that like a softmax over time, where it's valid kind of because over many iterations it's pulling random samples? Well, hope a better way is found.
@NilabhraRoyChowdhury · 3 years ago
I bet you wish you could: from previous_videos import SelfAttention every time you make a video related to transformers
@otaviodzb1 · 3 years ago
One thing that I still couldn't understand is how backprop works in a transformer. Does someone have a good reference or video that explains it?
@pg1337ful · 3 years ago
seems like you have fundamental gaps in ML.
@YannicKilcher · 3 years ago
It works like in any other neural network: by applying the chain rule to all involved operations.
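As a concrete illustration of that answer, here is a minimal PyTorch sketch (not tied to any particular transformer implementation): the attention computation is an ordinary chain of differentiable operations, so autograd applies the chain rule through the softmax and matrix products and produces gradients for the query, key, and value tensors.

    import torch

    q = torch.randn(4, 8, requires_grad=True)   # 4 tokens, head dimension 8
    k = torch.randn(4, 8, requires_grad=True)
    v = torch.randn(4, 8, requires_grad=True)

    attn = torch.softmax(q @ k.T / 8 ** 0.5, dim=-1)   # scaled dot-product attention
    out = attn @ v
    loss = out.sum()                                   # any scalar objective
    loss.backward()                                    # chain rule through softmax and matmuls
    print(q.grad.shape, k.grad.shape, v.grad.shape)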
@jonatan01i · 3 years ago
I struggle to believe that it is actually named Nyströmformer. I'll call it Nyströmer, as suggested, and as it should be.
@scarcommander5517 · 3 years ago
We like this transformer!
@chaitanyaparmar888 · 2 years ago
Love this video!
@BorrWick · 3 years ago
Didn't this come out like yesterday??
@ingusmant · 3 years ago
And?
@BorrWick · 3 years ago
@@ingusmant Just amazed by the speed at which Yannic can read, understand, and produce these videos :o
@YannicKilcher · 3 years ago
You're right, it's already old now... ;)
@muhammadsaadmansoor7777 · 3 years ago
I was not expecting this until a month from now. But where do the keys, queries, and values come from?
@IRWBRW964 · 3 years ago
They are learned.
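Concretely, the queries, keys, and values are produced by learned linear projections of the same input sequence. A minimal sketch (the class and layer names here are invented for illustration):

    import torch
    import torch.nn as nn

    class TinySelfAttention(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.to_q = nn.Linear(dim, dim)   # learned projection for queries
            self.to_k = nn.Linear(dim, dim)   # learned projection for keys
            self.to_v = nn.Linear(dim, dim)   # learned projection for values

        def forward(self, x):                 # x: (batch, seq_len, dim)
            q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
            attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
            return attn @ v

    x = torch.randn(2, 10, 32)
    print(TinySelfAttention(32)(x).shape)     # torch.Size([2, 10, 32])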
@kicckicc · 3 years ago
Just FYI, I tried to implement this the day before yesterday but got NaN. I checked the code and realized that formula (14) isn't accurate, and also that Z_0 = A_S / (||A_S||_1 ||A_S||_∞) should be Z_0 = A_S^T / (||A_S||_1 ||A_S||_∞).
@xiongyunyang9643 · 3 years ago
You mean the NaN is from your own implementation or our implementation? The accuracy of the pseudoinverse approximation using formula (14) depends on how many iterations are used. Z_0 is A_S^T / (||A_S||_1 ||A_S||_∞). We will fix the typo in our update.
@kicckicc · 3 years ago
@@xiongyunyang9643 Thanks for the reply. After I used the correct (14) and the correct Z_0, the NaN is gone. Just FYI, formula (16) is also inaccurate, but that is easy to notice.
@xiongyunyang9643 · 3 years ago
@@kicckicc Cool. Formula (16), similar to average local pooling, is for computing the landmarks efficiently.
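For readers following this thread, here is a rough NumPy sketch of the two ingredients being discussed: landmarks computed as segment means ("average local pooling", in the spirit of formula (16)) and the pseudoinverse approximated iteratively from Z_0 = A^T / (||A||_1 ||A||_∞). The higher-order update below is one reading of formula (14); treat the whole thing as an illustrative reimplementation rather than the authors' code, and note (as the thread says) that accuracy depends on the number of iterations.

    import numpy as np

    def iterative_pinv(A, n_iter=20):
        # Initialization discussed above: Z_0 = A^T / (||A||_1 * ||A||_inf)
        Z = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        I = np.eye(A.shape[0])
        for _ in range(n_iter):
            AZ = A @ Z
            # Higher-order Newton-Schulz-style update (one reading of formula (14))
            Z = 0.25 * Z @ (13 * I - AZ @ (15 * I - AZ @ (7 * I - AZ)))
        return Z

    def landmarks_by_pooling(x, n_landmarks):
        # Segment-mean landmarks ("average local pooling"); assumes seq_len divides evenly
        seq_len, dim = x.shape
        return x.reshape(n_landmarks, seq_len // n_landmarks, dim).mean(axis=1)

    rng = np.random.default_rng(0)
    A = rng.normal(size=(8, 8))
    A = np.exp(A)
    A = A / A.sum(axis=-1, keepdims=True)          # toy softmax-like stand-in for A_S
    print(np.abs(iterative_pinv(A) - np.linalg.pinv(A)).max())

    q = rng.normal(size=(64, 16))                  # 64 queries, head dimension 16
    print(landmarks_by_pooling(q, 8).shape)        # (8, 16)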
@JamesAwokeKnowing · 3 years ago
The name was designed to sound like 'the nice transformer'. So leave the name as is.
@pratik245 · 2 years ago
Have you heard of Michelle Srivastav? More probably you would hear of Peter Chakraborty. If you can tell me the reason, you would know a lot about caste- and region-based targeting in India.
@pratik245 · 2 years ago
So, nobody hates India when they are born, but as you keep growing you see these divisions between people, majoritarianism, government repression, targeting of the intellectual class, poverty, corruption, and then you start seeing trends in these concepts, all in the name of highly preached American democracy and capitalism... But surely everything is a joke, even misery.. Right, guys?
@ZedaZ80 · 3 years ago
I have no idea what most of this means, but the lemma was funny
@Xrey56Cheyz · 3 years ago
To be honest, I expected the Performer to be the ImageNet moment for transformers, but it seems there is still a long way to go, and random Fourier features are not the best way to do it. Somewhat sad, because the Performer's idea looked so cool and well grounded :(
@redjammie8342 · 3 years ago
Big leaps come through simple ideas like ReLU, convolution, dropout, residual connections, self-attention... The moment an idea becomes too convoluted, it is less likely to be game-changing.
@charlesfoster6326 · 3 years ago
What are you waiting for? If anything, the transformer revolution seems like it's come with even more force and speed than ImageNet.
@ahmadmoussa3771 · 3 years ago
*The NICEtrömer*
@NextFuckingLevel · 3 years ago
Indeed
@visionscaper · 3 years ago
Hi there!
@YannicKilcher · 3 years ago
hi!
@weizhu2230 · 3 years ago
OK, I vote down for this work, and I think "Asymmetric Non-Local Neural Networks for Semantic Segmentation" would be a better one.
@yaaank6725 · 3 years ago
In the last Twitter chart, it's quite surprising that the Performer has the worst performance among the efficient transformers. Is this also verified on other tasks?
@yaaank6725 · 3 years ago
Or other people maybe..
@xiongyunyang9643 · 3 years ago
We have released the scores on the individual LRA tasks. It will be interesting to see how the Performer works on tasks beyond LRA.
@lennartvandergoten6592 · 3 years ago
Greetings to my old ETH buddy Yannic, and give Jonas my best regards :-)
@Ronnypetson · 3 years ago
Noice
@CandidDate · 3 years ago
I'd bet a million dollars that AGI, when discovered, uses frequencies of waves rather than any matrices.
@kimchi_taco · 3 years ago
Mathematically ugly, but somehow it works well. I don't feel good that both the Nyströmformer and the Performer rely on random sampling.
@xiongyunyang9643 · 3 years ago
No, Nyströmformer does not rely on random sampling.