The math behind Attention: Keys, Queries, and Values matrices

  183,732 views

Serrano.Academy

1 day ago

This is the second of a series of 3 videos where we demystify Transformer models and explain them with visuals and friendly examples.
Video 1: The attention mechanism at a high level • The Attention Mechanis...
Video 2: The attention mechanism with math (this one)
Video 3: Transformer models • What are Transformer M...
If you like this material, check out LLM University from Cohere!
llm.university
00:00 Introduction
01:18 Recap: Embeddings and Context
04:46 Similarity
11:09 Attention
20:46 The Keys and Queries Matrices
25:02 The Values Matrix
28:41 Self and Multi-head attention
33:54 Conclusion

Comments: 276
@SerranoAcademy
@SerranoAcademy 7 months ago
Hello all! In the video I made a comment about how the Key and Query matrices capture low and high level properties of the text. After reading some of your comments, I've realized that this is not true (or at least there's no clear reason for it to be true), and probably something I misunderstood while reading in different places in the literature and threads. Apologies for the error, and thank you to all who pointed it out! I've removed that part of the video.
@tantzer6113
@tantzer6113 7 months ago
No worries. It might help to pin this comment to the top. Thanks a lot for the video.
@chrisw4562
@chrisw4562 2 months ago
Thanks for the note. That comment actually sounds very reasonable to me. If I understand this right, keys and queries help to determine the context.
@JTedam
@JTedam 4 months ago
I have watched more than 10 videos trying to wrap my head around the paper "Attention Is All You Need". This video is by far the best. I have been trying to assess why it is so effective at explaining such a complex concept, and why the concept is hard to understand in the first place. Serrano explains the concepts step by step, without making any assumptions. It helps a great deal. He also uses diagrams, showing animations along the way as he explains. As for the architecture, there are so many layers condensed into it. It has obviously evolved over the years, with multiple concepts interlaced into the attention mechanism, so it is important to break it down into the various components and take each one at a time: positional encoding, tokenization, embedding, feed-forward, normalization, neural networks, the math behind it, vectors, query-key-values, etc. Each of these is a component that needs explaining, or perhaps a video of its own, before putting them together. I am not quite there yet, but this has improved my understanding a great deal. Serrano, keep up your approach. I would like to see you cover other areas such as Transformers with human feedback, the new Qstar architecture, etc. You break it down so well.
@SerranoAcademy
@SerranoAcademy 4 months ago
Thank you for such a thorough analysis! I do enjoy making the videos a lot, so I'm glad you find them useful. And thank you for the suggestions! Definitely RLHF and QStar are topics I'm interested in, so hopefully there will soon be videos on those!
@blahblahsaurus2458
@blahblahsaurus2458 1 month ago
Did you also try reading the original "Attention Is All You Need" paper, and if so, what was your experience? Was there too much jargon and math to understand?
@visahonkanen7291
@visahonkanen7291 25 days ago
Agree, an excellent video.
@JTedam
@JTedam 21 days ago
@@blahblahsaurus2458 Too much jargon, obviously intended for those already familiar with the concepts. The diagram appears upside down and is not intuitive at all. Nobody has attempted to redraw the architecture diagram in the paper. It follows no particular convention at all.
@Rish__01
@Rish__01 8 months ago
This might be the best video on attention mechanisms on YouTube right now. I really liked the fact that you explained matrix multiplications with linear transformations. It brings a whole new level of understanding with respect to embedding space. Thanks a lot!!
@SerranoAcademy
@SerranoAcademy 8 months ago
Thank you so much! I enjoy seeing things pictorially, especially matrices, and I'm glad that you do too!
@maethu
@maethu 4 months ago
This is really great, thanks a lot!
@JosueHuaman-oz4fk
@JosueHuaman-oz4fk 1 month ago
That is what many disseminators lack: explaining things with the mathematical foundations. I understand that it is difficult to do so. However, you did it, and in an amazing way. The way you explained the linear transformation was epic. Thank you.
@fcx1439
@fcx1439 2 months ago
This is definitely the best-explained video on the attention model. The original paper sucks because there is no intuition at all, just plain words and crazy math equations that leave me wondering what they're doing.
@user-tl3ix3xf3j
@user-tl3ix3xf3j 7 months ago
This is unequivocally the best introduction to Transformers and Attention Mechanisms on the entire internet. Luis Serrano has guided me all the way from Machine Learning to Deep Learning and onto Large Language Models, maximizing the entropy of my AI thinking, allowing for limitless possibilities.
@JonMasters
@JonMasters 1 month ago
💯 agree. Everything else is utter BS by comparison. I’ve never tipped someone $10 for a video before this one ❤
@computersciencelearningina7382
@computersciencelearningina7382 2 months ago
This is the best description of Keys, Queries, and Values I have ever seen across the internet. Thank you.
@__redacted__
@__redacted__ 5 months ago
I really like how you're using these concrete examples and combining them with visuals. These really help build an intuition on what's actually happening. It's definitely a lot easier for people to consume than struggling with reading academic papers, constantly looking things up, and feeling frustrated and unsure. Please keep creating content like this!
@23232323rdurian
@23232323rdurian 8 months ago
You explain very well, Luis. Thank you. It's HARD to explain complicated topics in a way people can easily understand. You do it very well.
@SerranoAcademy
@SerranoAcademy 8 months ago
Thank you! :)
@rohitchan007
@rohitchan007 6 months ago
Please continue making videos. You're the best teacher on this planet.
@channel8048
@channel8048 8 months ago
Just the Keys and Queries section is worth the watch! I have been scratching my head on this for an entire month!
@SerranoAcademy
@SerranoAcademy 8 months ago
Thank you! :)
@joelegger2570
@joelegger2570 5 months ago
These are the best videos I've seen so far for understanding how Transformers / LLMs work. Thank you. I really like math, but it is good that you keep the math simple so that one doesn't lose the overview. You really have a talent for explaining complex things in a simple way. Greets from Switzerland
@WhatsAI
@WhatsAI 8 months ago
The best explanation I've seen so far! Really cool to see how much closer the field is getting to understanding those models instead of being so abstract, thanks to people like you, Luis! :)
@ganapathysubramaniam
@ganapathysubramaniam 5 months ago
Absolutely the best set of videos explaining the most discussed topic. Thank you!!
@aravind_selvam
@aravind_selvam 7 months ago
This video is, without a doubt, the best video on transformers and attention that I have ever seen.
@ChujiOlinze
@ChujiOlinze 8 months ago
Thanks for sharing your knowledge freely. I have been waiting patiently. You add a different perspective that we appreciate. Looking forward to the 3rd video. Thank you!
@SerranoAcademy
@SerranoAcademy 8 months ago
Thank you! So glad you like the videos!
@guitarcrax127
@guitarcrax127 8 months ago
Amazing video. Pushed forward my understanding of attention by quite a few steps and helped me build an intuition for what's happening under the hood. Eagerly waiting for the next one
@dekasthiti
@dekasthiti 1 month ago
This really is one of the best videos explaining the purpose of K, Q, V. The illustrations provide a window into the math behind the concepts.
@Chill_Magma
@Chill_Magma 7 months ago
Honestly, you are the best content creator for learning machine learning and deep learning in a visual and intuitive way
@alexrypun
@alexrypun 6 months ago
Finally! This is the best of the tons of videos/articles I've watched/read. Thank you for your work!
@snehotoshbanerjee1938
@snehotoshbanerjee1938 5 months ago
One of the best videos on attention. Such a complex subject taught in a simple manner. Thank you!
@johnschut164
@johnschut164 4 months ago
Your explanations are truly great! You have even understood that you sometimes have to ‘lie’ first to be able to explain things better. My sincere compliments! 👊
@MrMacaroonable
@MrMacaroonable 4 months ago
This is absolutely the best video that clearly illustrates and explains why we need V, K, Q in attention. Bravo!
@chiboreache
@chiboreache 8 months ago
Very nice and easy explanation, thanks!
@SeyyedMohammadLoghmanDastgheyb
@SeyyedMohammadLoghmanDastgheyb 7 months ago
This is the best video that I have seen about the concept of attention! (I have seen more than 10 videos but none of them was like this.) Thank you so much! I am waiting for the next videos that you have promised! You are doing a great job!
@lijunzhang2788
@lijunzhang2788 7 months ago
Great explanation. I was waiting for this after your first video on the attention mechanism! You are so talented at explaining things in easily understandable ways! Thank you for the effort put into this and keep up the great work!
@kranthikumar4397
@kranthikumar4397 1 month ago
This is one of the best videos on attention and Q, K, V so far. Thank you for a detailed explanation
@lengooi6125
@lengooi6125 3 months ago
Simply the best explanation on this subject. Crystal clear. Thank you
@TheMotorJokers
@TheMotorJokers 7 months ago
Thank you, really good job on the visualizations! They make the process really understandable.
@user-zq8bd7iz4e
@user-zq8bd7iz4e 7 months ago
The best explanation I've ever seen about the attention mechanism, amazing
@leilanifrost771
@leilanifrost771 1 month ago
Math is not my strong suit, but you made these mathematical concepts so clear with all the visual animations and your concise descriptions. Thank you so much for the hard work and making this content freely accessible to us!
@brainxyz
@brainxyz 8 months ago
Amazing explanation. Thanks a lot for your efforts.
@etienneboutet7193
@etienneboutet7193 8 months ago
Great video as always! Thank you so much for this quality content.
@shannawallace7855
@shannawallace7855 5 months ago
I had to read this research paper for my Intro to AI class, and it's obviously written for people who already have a lot of background knowledge in this field. So being a newbie I was so lost lol. Thanks for breaking it down and making it easy to understand!
@redmond2582
@redmond2582 4 months ago
Amazing explanation of very difficult concepts. The best explanation I have found on the topic so far.
@joshuaohara7704
@joshuaohara7704 7 months ago
Amazing video! Took my intuition to the next level.
@MrSikesben
@MrSikesben 3 months ago
This is truly the best video explaining each stage of a transformer, thanks man
@antraprakash2562
@antraprakash2562 3 months ago
This is one of the best videos I've come across for understanding embeddings and attention. Looking forward to more such explanations which can simplify such complex mechanisms in the AI world. Thanks for your efforts
@bzaruk
@bzaruk 5 months ago
MAN! I have no words! Your channel is priceless! Thank you for everything!!!
@chrisw4562
@chrisw4562 2 months ago
Thank you for the great tutorial. This is the clearest explanation I have found so far.
@awinashjha
@awinashjha 7 months ago
This probably is "the best video" on this topic
@deniz517
@deniz517 7 months ago
The best video I have ever watched about this!
@danielmoore4311
@danielmoore4311 5 months ago
Excellent job! Please continue making videos that break down the math.
@0xSingletOnly
@0xSingletOnly 3 months ago
I'm going to try to implement self-attention and multi-head attention myself. Thanks so much for making this guide!
@alnouralharin
@alnouralharin 1 month ago
One of the best explanations I have ever watched
@EkShunya
@EkShunya 8 months ago
Thank you, it was a superb explanation 🤩
@vasanthakumarg4538
@vasanthakumarg4538 4 months ago
This is the best video I have seen explaining the attention mechanism. Keep up the good work!
@Chill_Magma
@Chill_Magma 7 months ago
Excellent explanation
@MarkusEicher70
@MarkusEicher70 5 months ago
Hi Luis. Thank you for this video. I'm sure this is a very good way to explain this complex topic, but I just can't get it into my brain. I'm currently doing the Math for Machine Learning specialization on Coursera and brushing up my algebra and calculus skills, which are way too low. In any case, you got me involved in this, and now I will grind through it till I make it. I'm sure the pain will become less and the fog will lighten up. 😊
@BABA-oi2cl
@BABA-oi2cl 4 months ago
Thanks a lot for this. I always got terrified of the maths that might be there but the way you explained it all made it seem really easy ❤
@sheiphanshaijan1249
@sheiphanshaijan1249 8 months ago
Brilliant Explanation.
@SerranoAcademy
@SerranoAcademy 8 months ago
Thank you! :)
@kylelau1329
@kylelau1329 4 months ago
I've watched over 10 Transformer-architecture tutorial videos, and this one is so far the most intuitive way to understand it! Really good work! Yeah, natural language processing is a hard topic; this tutorial kind of reveals the black box of large language models.
@devmum2008
@devmum2008 1 month ago
This is a great video, with clarity on Keys, Queries, and Values! Thank you
@joehannes23
@joehannes23 4 months ago
Great video, finally understood all the concepts in their context
@brandonheaton6197
@brandonheaton6197 8 months ago
Amazing explanation. I am a professional pedagogue and this is stellar work
@PeterGodek2
@PeterGodek2 5 months ago
Best video so far on this topic
@januaymagori4642
@januaymagori4642 7 months ago
Today I have understood the attention mechanism better than ever before
@user-ff7fu3ky1v
@user-ff7fu3ky1v 6 months ago
Great explanation. I just really needed the third video. Hope you will post it soon.
@saintcodded2918
@saintcodded2918 3 months ago
This is powerful yet so simple. Thanks
@sreelakshminarayanan.m6609
@sreelakshminarayanan.m6609 17 days ago
Best video to get a clear understanding of transformers
@knobbytrails577
@knobbytrails577 4 months ago
Best video on this topic so far!
@panagiotiskyriakis795
@panagiotiskyriakis795 2 months ago
Great and intuitive explanations! Well done!
@_ncduy_
@_ncduy_ 1 month ago
This is the best video for people trying to understand basic knowledge about transformers, thank you so much ^^
@sadiaafrinpurba9179
@sadiaafrinpurba9179 8 months ago
Thank you for the explanation.
@EkShunya
@EkShunya 8 months ago
The scaling factor in scaled dot-product attention can be understood as roughly the distance between points: in higher dimensions, the expected distance between two random points is roughly sqrt(dimensions).
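For concreteness, here is a minimal NumPy sketch of that scaled dot-product step (toy shapes and random data, purely illustrative — not the video's code):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal single-head attention sketch (no masking, no batching)."""
    d_k = Q.shape[-1]                               # dimension of keys/queries
    scores = Q @ K.T / np.sqrt(d_k)                 # the sqrt(d_k) scaling discussed above
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability before exponentiating
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V                              # each output is a weighted mix of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 tokens, 4 dimensions
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```

Without the division by sqrt(d_k), the dot products grow with the dimension and the softmax saturates toward one-hot weights.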
@pavangupta6112
@pavangupta6112 5 months ago
Very well explained. Got a bit closer to understanding attention models.
@deveshnandan323
@deveshnandan323 2 months ago
Sir, you are a blessing to new learners like me. Thank you, big respect. ❤
@user-eg8mt4im1i
@user-eg8mt4im1i 5 months ago
Amazing video and explanations, thank you!!
@OpenAITutor
@OpenAITutor 7 months ago
You are a master!
@celilylmaz4426
@celilylmaz4426 4 months ago
This video has the best explanations of QKV matrices and linear layers among the resources I've come across. I don't know why, but people seem uninterested in explaining what's really happening with each action we take, which results in loads of vague points. Yet the video could've been further improved with more concrete examples and numbers. Thank you.
@aldotanca9430
@aldotanca9430 5 months ago
Thanks, very useful. I love the way you explain things here and on Coursera.
@davidking545
@davidking545 5 months ago
Thank you so much! the image at 24:29 made this whole concept click immediately.
@tankado_ndakota
@tankado_ndakota 1 day ago
Amazing video, that's what I was looking for. I need to know the mathematical background to understand what is happening behind the scenes. Thank you, sir!
@SulkyRain
@SulkyRain 3 months ago
Love the simplification you brought!!! Super
@Wise_Man_on_YouTube
@Wise_Man_on_YouTube 2 months ago
"This step is called softmax" . 😮😮😮 Today I understood why softmax is used. Such a beautiful function. And such a great way to demonstrate it.
@cooperwu38
@cooperwu38 2 months ago
Super clear! Great video!!
@user-jz8hr5fo9e
@user-jz8hr5fo9e 1 day ago
Great Explanation. Thank you so much
@MrMehrd
@MrMehrd 8 months ago
Watched it on fast-forward, seems to be good. Thx, will watch properly
@alieskandarian5258
@alieskandarian5258 3 months ago
It was fascinating to me; I searched a lot for a math-focused explanation and didn't find one. Thanks for this. Please do more 😅 with more complex ones
@danherman212nyc
@danherman212nyc 1 month ago
I study linear algebra during the day on Coursera and watch YouTube videos at night on state-of-the-art machine learning. I'm amazed by how fast you learn with Luis. I've learned everything I was curious about. Thank you!
@SerranoAcademy
@SerranoAcademy 1 month ago
Thank you, it’s an honor to be part of your learning journey! :)
@ThinAirElon
@ThinAirElon 7 months ago
This is great! In the next video, can you please include why we need sine and cosine functions for positional encoding? What's the intuition behind it? What happens if we add this vector to the embedding vector?
@wiktormigaszewski8684
@wiktormigaszewski8684 3 months ago
Yep, a truly terrific video. Congrats!
@bonadio60
@bonadio60 2 months ago
As always, great content! Thanks
@DanteNoguez
@DanteNoguez 4 months ago
Amazing. Thanks a lot for this!
@Hiyori___
@Hiyori___ 2 months ago
God-sent video. So incredibly well put
@healthyhappy7487
@healthyhappy7487 4 days ago
Best video. Great explanation
@BrikeshKumar987
@BrikeshKumar987 4 months ago
Thank you so much!! I watched several videos and none could explain the concept so well
@SerranoAcademy
@SerranoAcademy 4 months ago
Thanks, I'm so glad you enjoyed it! Lemme know if you have suggestions for more topics to cover!
@rollingstone1784
@rollingstone1784 10 days ago
@SerranoAcademy At 13:23, you show a matrix-vector multiplication with a column vector (rows of the table times columns of the vector), i.e., right-multiplication. On the right side, in addition to "is sent to", you could use the icon orange' (orange prime). This would show the multiplication in a clearer way. Remark: you use a matrix-vector multiplication here (using a row of the matrix and the word as a column vector on the right of the matrix). If you use row vectors, then the word vector should be placed horizontally on the left of the matrix, and in the explanation a column of the matrix has to be used. The result is then a row vector again (maybe a bit hard to sketch).
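In symbols, the two conventions this comment contrasts (with W the transformation matrix and x an embedding; the notation is an editorial illustration, not the video's):

```latex
% Column-vector convention (as sketched in the video): right-multiplication
x' = W x
% Row-vector convention (as in the paper): the same map, written as left-multiplication
(x')^{\top} = x^{\top} W^{\top}
```

Both lines describe the same linear transformation; only the orientation of the vectors changes.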
@cool12345687
@cool12345687 4 months ago
This is awesome. Thanks a ton for this video. May God bless you.
@mattmurdock3868
@mattmurdock3868 2 months ago
Best video on this topic🙌🏻
@user-hf3fu2xt2j
@user-hf3fu2xt2j 2 months ago
Best explanation I've seen
@ayoubelmhamdi7920
@ayoubelmhamdi7920 7 months ago
Such a great video
@rollingstone1784
@rollingstone1784 10 days ago
@SerranoAcademy If you want to arrive at the same notation as in the mentioned paper, Q times K_transpose, then the orange is the query and the phone is the key here. Then you calculate q times Q times K_transpose times key_transpose (as mentioned in the paper). Remark: the paper uses "sequences", described as "row vectors". However, usually one uses column vectors. Using row vectors, the linear transformation is a left-multiplication, a times A, and the dot product is written as a times b_transpose. Using column vectors, the linear transformation is A times a and the dot product is written as a_transpose times b. This, in my opinion, is the standard notation, e.g., writing Ax = b and not xA = b.
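A sketch of how that expression lines up with the paper's formula, assuming the comment's Q and K refer to the learned query and key weight matrices (written W_Q, W_K below) and x_i, x_j are row-vector embeddings:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V,
\qquad
\mathrm{score}(i, j) = (x_i W_Q)(x_j W_K)^{\top} = x_i \, W_Q W_K^{\top} x_j^{\top}
```

so each pairwise score is indeed "query embedding, times the query matrix, times the key matrix transposed, times the key embedding transposed", as the comment describes.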
@manojkalyan94
@manojkalyan94 1 month ago
Loved it, want to go through it again and again ❤
@naveensubramanian4876
@naveensubramanian4876 7 months ago
Are these slides available somewhere for reference? It would be a great help. Thanks
@Ludwighaffen1
@Ludwighaffen1 5 months ago
Great video series! Thank you! That helped a ton 🙂 One small remark: the concept of the "length" of a vector that you use here confused me. Here, I guess you take the point of view of a programmer: len(vector) outputs the number of dimensions of the vector. However, for a mathematician, the length of a vector is its norm, also called magnitude (square root of x^2 + y^2).
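A two-line illustration of those two meanings of "length" (NumPy; the example values are arbitrary):

```python
import numpy as np

v = np.array([3.0, 4.0])
print(len(v))             # 2   -> number of dimensions (the programmer's "length")
print(np.linalg.norm(v))  # 5.0 -> Euclidean norm sqrt(3^2 + 4^2) (the mathematician's "length")
```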
@MSGMSUSA
@MSGMSUSA 4 months ago
Wow!!! Now I understand the attention mechanism. I did not understand a bit of it when learning about this in an expensive AI course
@gemini_537
@gemini_537 2 months ago
Summary by Gemini: This video is about the math behind attention mechanisms in large language models. The speaker first gives a brief overview of what attention mechanisms are and how they are used in large language models. Then, he dives into the details of the math behind attention mechanisms, including the concepts of keys, queries, and values matrices. Here are the key points from the video:
* Attention mechanisms are a way for large language models to focus on the most relevant parts of an input sentence when generating text.
* Keys, queries, and values matrices are all used to calculate the attention weights, which determine how much weight to give to each word in the input sentence.
* The keys and queries matrices are used to find the similarity between words in the input sentence.
* The values matrix is used to combine the information from the relevant words to generate the output text.
The speaker also mentions that he will go into more detail about how attention mechanisms are used in Transformer models in the next video in this series.
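A compact sketch tying those bullets together, from embeddings to output (random matrices stand in for trained weights, and the shapes are toy choices, not the video's):

```python
import numpy as np

rng = np.random.default_rng(42)
seq_len, d_model = 4, 8                       # 4 tokens, 8-dimensional embeddings

X = rng.normal(size=(seq_len, d_model))       # input word embeddings
W_Q = rng.normal(size=(d_model, d_model))     # learned Queries matrix
W_K = rng.normal(size=(d_model, d_model))     # learned Keys matrix
W_V = rng.normal(size=(d_model, d_model))     # learned Values matrix

Q, K, V = X @ W_Q, X @ W_K, X @ W_V           # project each embedding three ways
scores = Q @ K.T / np.sqrt(d_model)           # keys/queries: similarity between words
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # attention weights (softmax rows)
output = weights @ V                          # values: combine info from relevant words
print(output.shape)                           # (4, 8): one updated vector per token
```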
@o.k.4599
@o.k.4599 2 months ago
I haven't blinked my eyes for a sec. 👏🏼🙏🏼
@samirelzein1095
@samirelzein1095 7 months ago
Amazing job! That's Serrano.Academy level!
@tariqkhan1518
@tariqkhan1518 9 days ago
Thank you so much for the video.