Transformers explained | The architecture behind LLMs

24,953 views

AI Coffee Break with Letitia

1 day ago

Comments: 107
@YuraCCC 8 months ago
Thanks for the explanation. At 9:19: shouldn't the order of multiplication be the opposite here? E.g. x1 (vector) * Wq (matrix) = q1 (vector). Otherwise I don't understand how we get the 1x3 dimensionality at the end.
@AICoffeeBreak 8 months ago
Oh, shoot, messed up the order in the animations there. You are right. Sorry, pinning your comment.
@YuraCCC 8 months ago
No problem, thanks for clarifying that, and thanks again for the great video @AICoffeeBreak
@scifaipy9301 3 months ago
The vectors should be column vectors.
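For readers following along with this thread: a minimal numpy sketch of the shape check being discussed. The 4- and 3-dimensional sizes here are made up for illustration and are not the exact numbers used in the video.
```python
import numpy as np

x1 = np.random.randn(1, 4)   # token embedding as a row vector (1x4)
Wq = np.random.randn(4, 3)   # query projection matrix (4x3)

# Row-vector convention: vector times matrix, (1x4)(4x3) -> (1x3)
q1 = x1 @ Wq
print(q1.shape)              # (1, 3)

# Equivalent column-vector convention, as suggested above: (3x4)(4x1) -> (3x1)
q1_col = Wq.T @ x1.T
print(q1_col.shape)          # (3, 1)
```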
@tildarusso 8 months ago
As far as I am aware, word embedding has changed from legacy static embeddings like Word2Vec/GloVe (with the famous queen = woman + king - man metaphor) to BPE & unigram subword tokenization. This change gave me quite a headache, as most papers do not mention any detail of their "word embedding". Perhaps, Letitia, you can make a video to clarify this a bit for us.
@AICoffeeBreak 8 months ago
Great suggestion, thanks!
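In the meantime, a quick sketch of the distinction raised above, assuming the Hugging Face transformers library is installed and can download the GPT-2 tokenizer files; the exact subword split depends on the tokenizer.
```python
from transformers import AutoTokenizer

# GPT-2 uses byte-level BPE: words are split into learned subword pieces,
# and each piece indexes a trained embedding matrix. There is no fixed
# one-word-one-vector table as in Word2Vec/GloVe.
tok = AutoTokenizer.from_pretrained("gpt2")

print(tok.tokenize("unbelievably"))       # a list of subword pieces, not the whole word
print(tok("unbelievably")["input_ids"])   # ids used to look up rows of the model's embedding matrix
```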
@M4ciekP 8 months ago
How about a video explaining SSMs?
@AICoffeeBreak 8 months ago
✍️
@AICoffeeBreak 7 months ago
Psst: this will be the video coming up in a few days. It's in editing right now.
@M4ciekP 7 months ago
Yaay! @AICoffeeBreak
@MachineLearningStreetTalk 8 months ago
Epic as always 🤌
@AICoffeeBreak 8 months ago
Thanks, Tim!
@DaveJ6515 8 months ago
You know how to explain things. This one is not easy: I can see the amount of work that went into this video, and it was a lot. I hope that your career takes you where you deserve.
@AICoffeeBreak 8 months ago
Thanks for watching and thanks for the kind words. All the best to you as well!
@Thomas-gk42 8 months ago
Understood about 10%, but I like these videos and intuitively feel their usefulness.
@AICoffeeBreak 8 months ago
@heejuneAhn 3 months ago
BEST of BEST explanation: 1) visually, 2) intuitively, 3) with numerical examples. And your English is easier for foreigners to follow than a native speaker's.
@xyphos915 8 months ago
Wow, this explanation on the difference between RNNs and Transformers at the end is what I was missing! I've always heard that Transformers are great because of parallelization but never really saw why until today, thank you! Great video!
@AICoffeeBreak 8 months ago
Oh, this makes me happy!
@manuelafernandesblancorodr6366 7 months ago
What a wonderful video! Thank you so much for sharing it!
@AICoffeeBreak 7 months ago
Thank you too for this wonderful comment!
@l.suurmeijer1382 8 months ago
Absolute banger of a video. Wish I had seen this when I was learning about transformers in uni last year :-)
@AICoffeeBreak 8 months ago
Haha, glad I could help. Even if a bit late.
@Clammer999 4 months ago
Thanks so much for this video. I’ve gone through a number of videos on transformers and this is much easier to grasp and understand for a non-data scientist like myself.
@AICoffeeBreak 4 months ago
You're very welcome!
@SamehSyedAjmal 8 months ago
Thank you for the video! Maybe an explanation on the Mamba Architecture next?
@AICoffeeBreak 7 months ago
The Mamba and SSM beans are roasting as we speak.
@abhishek-tandon 8 months ago
One of the best videos on transformers that I have ever watched. Views 📈
@AICoffeeBreak 8 months ago
Do you have examples of others you liked?
@connor-shorten 8 months ago
Awesome! Epic Visuals!
@AICoffeeBreak 8 months ago
Thanks, Connor!
@phiphi3025 8 months ago
Thanks, you helped so much in explaining Transformers to my PhD advisors.
@AICoffeeBreak 8 months ago
This is really funny. In what field are you doing your PhD? 😅
@420_gunna 8 months ago
Awesome video, thank you! I love the idea of you revisiting older topics -- either as a 201 or as a re-introduction. "Attention combines the representations of the input vectors' value vectors, weighted by the importance score (computed by the query and key vectors)."
@AICoffeeBreak 8 months ago
Thanks for your appreciation!
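The sentence quoted above maps almost line-for-line onto code. A minimal single-head numpy sketch (toy sizes, no masking or multiple heads, not the exact numbers from the video):
```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: mix value vectors, weighted by
    importance scores computed from queries and keys."""
    d = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # (n, n) importance scores
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted sum of value vectors

# Toy example: 5 tokens, 3-dimensional queries/keys/values.
n, d = 5, 3
Q, K, V = (np.random.randn(n, d) for _ in range(3))
print(attention(Q, K, V).shape)          # (5, 3)
```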
@jcneto25 8 months ago
Best didactic explanation of Transformers so far. Thank you for sharing it.
@AICoffeeBreak 7 months ago
Wow, thanks! Glad it's helpful.
@darylallen2485 5 months ago
Letitia, you're awesome and I look forward to learning more from you.
@mccartym86 7 months ago
I think I had at least 10 aha moments watching this, and I've watched many videos on these topics. Incredible job, thank you!
@AICoffeeBreak 7 months ago
Wow, thank You for this wonderful comment!
@16876 8 months ago
What a thorough and much-anticipated overview, laid out so coherently. Thank you!
@AICoffeeBreak 8 months ago
Our pleasure! We should have done this video much earlier, considering that our old Transformer Explained is our most watched video to date. 😅
@xxlvulkann6743 5 months ago
This is a very well-made explanation. I hadn't known that the feedforward layers only received one token at a time. Thanks for clearing that up for me! 😁
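To make that point concrete, here is a tiny numpy sketch (made-up sizes): the feed-forward block gives the same result whether it is fed one token at a time or the whole sequence, because it mixes nothing across positions.
```python
import numpy as np

d_model, d_ff, n_tokens = 4, 16, 5
W1, b1 = np.random.randn(d_model, d_ff), np.zeros(d_ff)
W2, b2 = np.random.randn(d_ff, d_model), np.zeros(d_model)

def ffn(x):
    """Position-wise feed-forward layer: sees one token vector at a time."""
    return np.maximum(0, x @ W1 + b1) @ W2 + b2   # ReLU activation

tokens = np.random.randn(n_tokens, d_model)

# Applying it row by row (token by token) ...
per_token = np.stack([ffn(t) for t in tokens])
# ... matches applying it to the whole sequence at once,
# because no information flows between positions inside the FFN.
batched = ffn(tokens)
print(np.allclose(per_token, batched))            # True
```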
@cosmic_reef_17 8 months ago
Thank you very much for the very clear explanations and detailed analysis of the transformer architecture. You're truly the 3Blue1Brown of machine learning!
@AICoffeeBreak 8 months ago
@rahulrajpvr7d 8 months ago
Tomorrow I have my thesis evaluation and I was thinking about watching that video again, but the YouTube algorithm suggested it without my searching for anything. Thank you, YouTube algo. 😅❤🔥
@AICoffeeBreak 8 months ago
It read your mind.
@mumcarpet109 8 months ago
Your videos have helped visual learners like me so much, thank you.
@AICoffeeBreak 8 months ago
Happy to hear that!
@DerPylz 8 months ago
Wow, you've come a long way since your first transformer explained video!
@uw10isplaya 3 months ago
Had to go back and rewatch a section after I realized I'd been spacing out staring at the coffee bean's reactions.
@AICoffeeBreak 2 months ago
@dannown 8 months ago
Really appreciate this video.
@AICoffeeBreak 8 months ago
So glad!
@muhammedaneesk.a4848 8 months ago
Thanks for the explanation 😊
@AICoffeeBreak 8 months ago
Thanks for watching!
@tomoki-v6o 8 months ago
Well explained, as you promised.
@AICoffeeBreak 8 months ago
@GarySuffield-w9p 8 months ago
Really well done and easy to follow, thank you
@AICoffeeBreak 8 months ago
Glad you enjoy it!
@DatNgo-uk4ft 8 months ago
Great Video!! Nice improvement over the original
@AICoffeeBreak 8 months ago
Glad you think so!
@volpir4672 8 months ago
That's great. I'm a little stuck on the special mask token... I'll keep digging. Good info; the video is a good explanation, and it allows for more experimentation instead of relying on open-source models whose components can look like a black box to noobs like me :)
@paprikar 8 months ago
here we go! TY for content
@davidespinosa1910 7 days ago
Time is quadratic, but memory is linear -- see the FlashAttention paper. But the number of parameters is constant -- that's the magic! Thanks for the excellent videos! 👍
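A quick illustration of that comment (toy sizes, plain attention without FlashAttention-style tiling): the score matrix grows quadratically with the number of tokens n, while the learned projection matrices keep the same size regardless of sequence length.
```python
import numpy as np

d = 8
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))  # parameter count independent of n

for n in (16, 64, 256):
    X = np.random.randn(n, d)            # n token embeddings
    scores = (X @ Wq) @ (X @ Wk).T       # n x n attention scores: quadratic in n
    print(n, scores.shape, "parameters:", Wq.size + Wk.size + Wv.size)
```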
@bartlomiejkubica1781 7 months ago
Thank you! Finally, I'm starting to get it...
@ArthasDKR 8 months ago
Excellent explanation. Thank you!
@AICoffeeBreak 8 months ago
@ai-interview-questions 8 months ago
Thank you, Letitia!
@AICoffeeBreak 7 months ago
Our pleasure!
@jonas4223 8 months ago
Today I needed to understand how Transformers work. I searched on YouTube and found your video 20 minutes after its release. What perfect timing!
@AICoffeeBreak 8 months ago
What timing!
@HarishAkula-df8gs 5 months ago
Amazing explanation, thank you! Just discovered your channel and I really like how difficult topics are demystified.
@AICoffeeBreak 5 months ago
Thanks a lot!
@zbynekba 8 months ago
❤ Letitia, thank you for the great visualization and intuition. For inspiration: in the original paper, the decoder utilizes the output of the encoder by running a cross-attention process. Why does GPT not use an encoder? As you've mentioned, the encoder is typically used for classification, while the decoder is for text generation. They are never used in combination. Why is this the case? Missing intuition: why does the cross-attention layer inside the decoder take the values from the ENCODER's output to create the enhanced embeddings (as a weighted mix)? Intuitively, I would use the values from the DECODER.
@AICoffeeBreak 8 months ago
Thanks for your thoughts! Encoders are sometimes used in combination with decoders, right? The most famous example is the T5 architecture.
@zbynekba 8 months ago
Thanks for your prompt reply. Hence, understanding the concept and intuition behind feeding the encoder output into the decoder is essential. I found only this one video on encoder-decoder cross-attention: kzbin.info/www/bejne/eqLNomd9rcmbpMksi=gtLzNxAU0pUGyLvk In it, Lennart emphasizes the observation that, based on the original equations, the enhanced embeddings are calculated as a weighted sum of ENCODER values. Inside a DECODER, I would rather expect the DECODER values to pass through. Letitia, I am sure you will resolve this mystery. 🍀
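For readers puzzling over the same question, here is a minimal numpy sketch of how cross-attention is usually written (toy sizes): the decoder supplies the queries, so each decoder position decides what to look at, while the content that actually gets mixed in (keys and values) comes from the encoder output.
```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

d, n_enc, n_dec = 4, 6, 3
enc_out = np.random.randn(n_enc, d)   # encoder output (source sentence)
dec_h   = np.random.randn(n_dec, d)   # decoder hidden states (target so far)
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))

# Cross-attention: queries come from the DECODER, keys and values from the ENCODER.
Q = dec_h   @ Wq
K = enc_out @ Wk
V = enc_out @ Wv
out = softmax(Q @ K.T / np.sqrt(d)) @ V
print(out.shape)   # (3, 4): one enhanced embedding per decoder position,
                   # built as a weighted mix of encoder value vectors
```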
@supanutsookkho2749 2 months ago
Great video and a good explanation. Thanks for your hard work on this amazing video!!
@AICoffeeBreak 2 months ago
Glad you liked it!
@partywen 3 months ago
Super informative and helpful! Thanks a lot!
@AICoffeeBreak 3 months ago
Oh wow, thanks!
@zahrashah6567 5 months ago
What a wonderful explanation😍 Just discovered your channel and absolutely loving the explanations as well as visuals😘
@AICoffeeBreak 5 months ago
Thank you! Welcome!
@Ben_D. 6 months ago
...ok. After binging some of your vids, I now need to go make coffee. 😆
@AICoffeeBreak 6 months ago
Please do!
@pfever 7 months ago
Just discovered your channel and this is great! Thank you! :D
@AICoffeeBreak 7 months ago
Thank you! Hope to see you again soon in the comments.
@Jeshhhhhh 1 month ago
Oh my goddess in disguise, I thank you for saving me from the depths of hell. Lots of love.
@AICoffeeBreak 1 month ago
Glad to help. 😆
@ehudamitai 7 months ago
At 11:14, the weighted sum is the sum of 3 vectors of 3 elements each, but the result is a vector of 4 elements, which, conveniently, is the same size as the input vector. Could there be a missing step there?
@AICoffeeBreak 7 months ago
Yes, there is a missing back-transformation to 4 dimensions that I skipped. :) Well spotted!
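For anyone else who stumbled over the same step, a rough numpy sketch of the skipped back-transformation (sizes chosen to mirror the 3- and 4-dimensional vectors discussed above; the attention weights here are placeholders): after mixing the 3-dimensional value vectors, an output projection maps the result back to the 4-dimensional model space.
```python
import numpy as np

d_model, d_head, n = 4, 3, 5
Wv = np.random.randn(d_model, d_head)   # values live in 3 dimensions, as in the video
Wo = np.random.randn(d_head, d_model)   # output projection back to the model dimension

values  = np.random.randn(n, d_model) @ Wv   # (5, 3) value vectors
weights = np.full((n, n), 1.0 / n)           # placeholder attention weights
mixed   = weights @ values                   # weighted sum: still (5, 3)
out     = mixed @ Wo                         # back to (5, 4), matching the input size
print(mixed.shape, out.shape)
```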
@MuruganR-tg9yt 7 months ago
Thank you. Nice explanation 😊
@AICoffeeBreak 7 months ago
Thank You for your visit!
@heejuneAhn 2 months ago
Thanks for your video. I have a question about the inference process. For example, when I have an input prompt of 2 tokens = {t1, t2}, we get the output {o1, o2, o3}. We take only o3 and make a new input sequence {t1, t2, o3}. Then we get another output {o'1, o'2, o'3, o'4}. My questions: when we use causal masking for attention, is o1 = o'1 and o2 = o'2, and so on? Another question: even though the mask guarantees causal attention, the matrix calculation is still performed, so the computation is spent anyway. How can we reduce the computational cost in this case?
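Not something covered in the video itself, but the usual answer to the second question is key-value caching: with a causal mask, the earlier positions' outputs do not change when a new token is appended, so their keys and values can be computed once and reused. A rough numpy sketch of the idea (hypothetical sizes, a single head, no output projection):
```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

d = 4
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
K_cache, V_cache = [], []   # keys/values of all tokens seen so far

def decode_step(x_new):
    """Process only the newest token; reuse cached keys/values for the past."""
    K_cache.append(x_new @ Wk)
    V_cache.append(x_new @ Wv)
    q = x_new @ Wq
    K, V = np.stack(K_cache), np.stack(V_cache)
    return softmax(q @ K.T / np.sqrt(d)) @ V   # attends only to past + current tokens

for t in range(3):                              # e.g. t1, t2, then one generated token
    out = decode_step(np.random.randn(d))
print(out.shape)                                # (4,)
```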
@l3nn13 8 months ago
great video
@AICoffeeBreak 8 months ago
Thanks for the visit and for leaving the comment!
@LEQN 6 months ago
Awesome video :) thanks!
@AICoffeeBreak 6 months ago
Thank you for watching and for your wonderful comment!
@DaeOh 8 months ago
Everything makes sense except multiple attention heads. Each layer has only one set of Q, K, V, O matrices. But 8 attention heads per layer? I want to understand that.
@AICoffeeBreak 8 months ago
Think about it this way: in one layer, instead of having one head telling you how to pay attention to things, you have 8. In other words, instead of having one person shout at you the things they want you to pay attention to, you have 8 people shouting simultaneously. This is beneficial because it has an ensembling effect (the effect of a voting parliament; think of Random Forests, which are an ensemble of Decision Trees). I don't know if this helps, but I thought I'd give explaining it another shot.
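One implementation detail that often resolves this confusion (a sketch of the usual implementation, not specific to this video): the single Q/K/V/O matrices per layer are just the per-head projections stored side by side, and the result is reshaped into heads before attention is computed.
```python
import numpy as np

d_model, n_heads, n = 8, 2, 5
d_head = d_model // n_heads
X = np.random.randn(n, d_model)

# One big Wq per layer ...
Wq = np.random.randn(d_model, d_model)
Q = X @ Wq                                 # (5, 8)

# ... viewed as n_heads smaller projections side by side:
Q_heads = Q.reshape(n, n_heads, d_head)    # (5, 2, 4)
Q_heads = Q_heads.transpose(1, 0, 2)       # (2, 5, 4): one (5, 4) query matrix per head
print(Q_heads.shape)
```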
@kallamamran 8 months ago
Phew 😳
@nmfhlbj 6 months ago
Hi! Can I ask how you get the dimension (d)? All I know is that dimension can be read off square matrices, and the attention formula uses the dot product Q•K^T. If we're using 1x3 matrices, we get a 1x1 matrix, i.e. 1 dimension, so how do you get 3? Unless it's a 3x1 matrix beforehand, so that we get a 3x3, i.e. 3-dimensional, matrix. Thank you!
@AICoffeeBreak 6 months ago
Hi, if you mean the mistake at 10:00, then the problem is that I have written matrix times vector when I should have written vector times matrix! (or I could have used column vectors instead of row vectors). Is this what you mean?
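In case it helps other readers with the same question: in the attention formula, d is the dimensionality of the individual query/key vectors (3 in the video's example), used only to scale the scores; it is not the size of the resulting dot-product matrix. A small numpy illustration (the number of tokens here is made up):
```python
import numpy as np

q = np.random.randn(1, 3)        # one query as a row vector
K = np.random.randn(5, 3)        # keys for 5 tokens, each 3-dimensional
d = K.shape[-1]                  # d = 3: the length of each key/query vector,
                                 # not the shape of the dot-product result
scores = q @ K.T / np.sqrt(d)    # (1, 5) scores, scaled by sqrt(3)
print(d, scores.shape)
```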
@benjamindilorenzo 7 months ago
What a great video. It could still expand more and really sum up every sub-part, connecting it to a clear visualization or a clear step of what happens with the information at each time step and how its "transformation" progresses over time. So I think you could redo this video and really make it monkey-proof for folks like me. But beware: if you look, for example, at the StatQuest version, it's too slow and too repetitive, and it also does not really capture what actually goes on inside the Transformer once all the steps are stacked together. Great work!
@josephvanname3377 8 months ago
I want to train a transformer that eats a row of matrices instead of just a row of vectors.