Thanks for the explanation. At 9:19: shouldn't the order of multiplication be the opposite here? E.g. x1 (vector) * Wq (matrix) = q1 (vector). Otherwise I don't understand how we get the 1x3 dimensionality at the end.
@AICoffeeBreak · 8 months ago
Oh, shoot, messed up the order in the animations there. You are right. Sorry, pinning your comment.
@YuraCCC · 8 months ago
No problem, thanks for clarifying that, and thanks again for the great video @AICoffeeBreak
@scifaipy9301 · 3 months ago
The vectors should be column vectors.
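To make the dimensionality concrete, here is a minimal numpy sketch of the corrected row-vector convention (the numbers and the 4-to-3 projection are made up for illustration, not taken from the video):

```python
import numpy as np

# Hypothetical sizes: embedding dim 4, query dim 3 (illustrative only).
x1 = np.array([[1.0, 0.5, -1.0, 2.0]])   # token embedding as a 1x4 row vector
W_q = np.random.randn(4, 3)              # learned query projection, 4x3

q1 = x1 @ W_q                            # (1x4) @ (4x3) -> 1x3 row vector
print(q1.shape)                          # (1, 3)

# Equivalently, with column vectors the matrix goes on the left:
q1_col = W_q.T @ x1.T                    # (3x4) @ (4x1) -> 3x1 column vector
print(q1_col.shape)                      # (3, 1)
```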
@tildarusso · 8 months ago
As far as I am aware, word embedding has changed from legacy static embeddings like Word2Vec/GloVe (with the famous queen = woman + king - man metaphor) to BPE & unigram, and this change gave me quite a headache, as most papers do not mention any details of their "word embedding". Perhaps, Letitia, you can make a video to clarify this a bit for us.
@AICoffeeBreak · 8 months ago
Great suggestion, thanks!
@M4ciekP · 8 months ago
How about a video explaining SSMs?
@AICoffeeBreak · 8 months ago
✍️
@AICoffeeBreak · 7 months ago
Psst: this will be the video coming up in a few days. It's in editing right now.
@M4ciekP · 7 months ago
Yaay! @AICoffeeBreak
@MachineLearningStreetTalk · 8 months ago
Epic as always 🤌
@AICoffeeBreak · 8 months ago
Thanks, Tim!
@DaveJ6515 · 8 months ago
You know how to explain things. This one is not easy: I can see the amount of work that went into this video, and it was a lot. I hope that your career takes you where you deserve.
@AICoffeeBreak · 8 months ago
Thanks for watching and thanks for the kind words. All the best to you as well!
@Thomas-gk42 · 8 months ago
Understood about 10%, but I like these videos and intuitively feel their usefulness.
@AICoffeeBreak · 8 months ago
@heejuneAhn · 3 months ago
BEST of BEST explanation: 1) visually, 2) intuitively, 3) by numerical examples. And your English is easier for foreigners to listen to than a native speaker's.
@xyphos915 · 8 months ago
Wow, this explanation on the difference between RNNs and Transformers at the end is what I was missing! I've always heard that Transformers are great because of parallelization but never really saw why until today, thank you! Great video!
@AICoffeeBreak · 8 months ago
Oh, this makes me happy!
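For readers wondering what the parallelization point looks like in practice, here is a toy sketch (hypothetical shapes; the attention part is simplified to use the inputs directly as queries, keys, and values):

```python
import numpy as np

n, d = 5, 8                     # hypothetical sequence length and hidden size
X = np.random.randn(n, d)       # token representations

# RNN-style: an inherently sequential loop over time steps
W_h, W_x = np.random.randn(d, d), np.random.randn(d, d)
h = np.zeros(d)
for t in range(n):              # step t depends on step t-1, so no parallelism here
    h = np.tanh(h @ W_h + X[t] @ W_x)

# Attention-style: every position attends to every other in one batched matrix product
scores = X @ X.T / np.sqrt(d)   # n x n similarity matrix, computed in one shot
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
out = weights @ X               # n x d, all positions updated at once
```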
@manuelafernandesblancorodr6366 · 7 months ago
What a wonderful video! Thank you so much for sharing it!
@AICoffeeBreak · 7 months ago
Thank you too for this wonderful comment!
@l.suurmeijer1382 · 8 months ago
Absolute banger of a video. Wish I had seen this when I was learning about transformers in uni last year :-)
@AICoffeeBreak · 8 months ago
Haha, glad I could help. Even if a bit late.
@Clammer999 · 4 months ago
Thanks so much for this video. I’ve gone through a number of videos on transformers and this is much easier to grasp and understand for a non-data scientist like myself.
@AICoffeeBreak · 4 months ago
You're very welcome!
@SamehSyedAjmal · 8 months ago
Thank you for the video! Maybe an explanation on the Mamba Architecture next?
@AICoffeeBreak · 7 months ago
The Mamba and SSM beans are roasting as we speak.
@abhishek-tandon · 8 months ago
One of the best videos on transformers that I have ever watched. Views 📈
@AICoffeeBreak · 8 months ago
Do you have examples of others you liked?
@connor-shorten · 8 months ago
Awesome! Epic Visuals!
@AICoffeeBreak · 8 months ago
Thanks, Connor!
@phiphi3025 · 8 months ago
Thanks, you helped me so much in explaining Transformers to my PhD advisors
@AICoffeeBreak · 8 months ago
This is really funny. In what field are you doing your PhD? 😅
@420_gunna · 8 months ago
Awesome video, thank you! I love the idea of you revisiting older topics -- either as a 201 or as a re-introduction. "Attention combines the representation of input vector's value vectors, weighted by the importance score (computed by the query and key vectors)."
@AICoffeeBreak · 8 months ago
Thanks for your appreciation!
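That summary maps almost one-to-one onto code. A rough sketch of single-head attention with row-vector inputs (shapes and weights are placeholders, not the video's numbers):

```python
import numpy as np

def attention(X, W_q, W_k, W_v):
    """X: (n, d_model) row-vector tokens; returns (n, d_v) mixed value vectors."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v          # project inputs to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # importance scores from queries and keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                           # weighted combination of the value vectors

X = np.random.randn(5, 8)                        # 5 hypothetical tokens of width 8
out = attention(X, *(np.random.randn(8, 8) for _ in range(3)))
print(out.shape)                                 # (5, 8)
```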
@jcneto25 · 8 months ago
Best didactic explanation of Transformers so far. Thank you for sharing it.
@AICoffeeBreak · 7 months ago
Wow, thanks! Glad it's helpful.
@darylallen2485 · 5 months ago
Letitia, you're awesome and I look forward to learning more from you.
@mccartym86 · 7 months ago
I think I had at least 10 aha moments watching this, and I've watched many videos on these topics. Incredible job, thank you!
@AICoffeeBreak · 7 months ago
Wow, thank You for this wonderful comment!
@16876 · 8 months ago
What a thorough and much anticipated overview, laid out so coherently. Thank you!
@AICoffeeBreak · 8 months ago
Our pleasure! We should have done this video much earlier, considering that our old Transformer Explained is our most watched video to date. 😅
@xxlvulkann6743 · 5 months ago
This is a very well-made explanation. I hadn't known that the feedforward layers only receive one token at a time. Thanks for clearing that up for me! 😁
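"One token at a time" here means the feed-forward block is applied to every position independently with shared weights. A small numpy illustration with hypothetical dimensions:

```python
import numpy as np

n, d_model, d_ff = 4, 8, 32                  # hypothetical sizes
X = np.random.randn(n, d_model)              # one row per token, after attention
W1, W2 = np.random.randn(d_model, d_ff), np.random.randn(d_ff, d_model)

def ffn(x):                                  # the same weights for every position
    return np.maximum(0, x @ W1) @ W2        # Linear -> ReLU -> Linear

batched = ffn(X)                             # applied to all rows at once
one_by_one = np.stack([ffn(X[i]) for i in range(n)])
print(np.allclose(batched, one_by_one))      # True: positions never mix inside the FFN
```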
@cosmic_reef_17 · 8 months ago
Thank you very much for the very clear explanations and detailed analysis of the transformer architecture. You're truly the 3blue1brown of machine learning!
@AICoffeeBreak · 8 months ago
@rahulrajpvr7d · 8 months ago
Tomorrow I have my thesis evaluation and I was thinking about watching that video again, but the YouTube algorithm suggested it without me searching for anything. Thank you, YouTube algo. 😅❤🔥
@AICoffeeBreak · 8 months ago
It read your mind.
@mumcarpet109 · 8 months ago
Your videos have helped visual learners like me so much, thank you
@AICoffeeBreak · 8 months ago
Happy to hear that!
@DerPylz · 8 months ago
Wow, you've come a long way since your first transformer explained video!
@uw10isplaya · 3 months ago
Had to go back and rewatch a section after I realized I'd been spacing out staring at the coffee bean's reactions.
@AICoffeeBreak · 2 months ago
@dannown · 8 months ago
Really appreciate this video.
@AICoffeeBreak · 8 months ago
So glad!
@muhammedaneesk.a4848 · 8 months ago
Thanks for the explanation 😊
@AICoffeeBreak · 8 months ago
Thanks for watching!
@tomoki-v6o · 8 months ago
Well explained, as you promised.
@AICoffeeBreak · 8 months ago
@GarySuffield-w9p · 8 months ago
Really well done and easy to follow, thank you
@AICoffeeBreak · 8 months ago
Glad you enjoy it!
@DatNgo-uk4ft · 8 months ago
Great Video!! Nice improvement over the original
@AICoffeeBreak · 8 months ago
Glad you think so!
@volpir4672 · 8 months ago
That's great. I'm a little stuck on the special mask token... I'll keep digging. Good info; the video is a good explanation, and it allows for more experimentation instead of relying on open-source models whose components can look like a black box to noobs like me :)
@paprikar · 8 months ago
here we go! TY for content
@davidespinosa1910 · 7 days ago
Time is quadratic, but memory is linear -- see the FlashAttention paper. But the number of parameters is constant -- that's the magic! Thanks for the excellent videos! 👍
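A quick way to see the "constant parameters" point (a naive sketch with hypothetical sizes; FlashAttention avoids materializing the full score matrix, which this toy version does not):

```python
import numpy as np

d_model, d_head = 64, 64                         # hypothetical widths
W_q = np.random.randn(d_model, d_head)           # parameter count: d_model * d_head ...
W_k = np.random.randn(d_model, d_head)           # ... independent of sequence length

for n in (10, 1000):
    X = np.random.randn(n, d_model)
    scores = (X @ W_q) @ (X @ W_k).T             # n x n: naive time/memory grow with n^2
    print(n, W_q.size + W_k.size, scores.shape)  # parameters stay the same, scores don't
```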
@bartlomiejkubica1781 · 7 months ago
Thank you! Finally, I start to get it...
@ArthasDKR · 8 months ago
Excellent explanation. Thank you!
@AICoffeeBreak · 8 months ago
@ai-interview-questions · 8 months ago
Thank you, Letitia!
@AICoffeeBreak · 7 months ago
Our pleasure!
@jonas4223 · 8 months ago
Today I had the problem that I needed to understand how Transformers work. I searched on YouTube and found your video 20 minutes after release. What perfect timing!
@AICoffeeBreak · 8 months ago
What a timing!
@HarishAkula-df8gs · 5 months ago
Amazing explanation, Thank you! Just discovered your channel and I really like how the difficult topics are demystified.
@AICoffeeBreak · 5 months ago
Thanks a lot!
@zbynekba · 8 months ago
❤ Letitia, thank you for the great visualization and intuition. For inspiration: in the original paper, the decoder utilizes the output of the encoder by running a cross-attention process. Why does GPT not use an encoder? As you've mentioned, the encoder is typically used for classification, while the decoder is for text generation. They are never used in combination. Why is this the case? Missing intuition: why does the cross-attention layer inside the decoder take the values from the ENCODER's output to create the enhanced embeddings (as a weighted mix)? Intuitively, I would use the values from the DECODER.
@AICoffeeBreak · 8 months ago
Thanks for your thoughts! Encoders are sometimes used in combination with decoders, right? The most famous example is the T5 architecture.
@zbynekba · 8 months ago
Thanks for your prompt reply. Hence, understanding the concept and intuition behind feeding the encoder output into the decoder is essential. I found only this one video on encoder-decoder cross-attention: kzbin.info/www/bejne/eqLNomd9rcmbpMksi=gtLzNxAU0pUGyLvk In it, Lennart emphasizes the observation that, based on the original equations, we have the enhanced embeddings calculated as a weighted sum of ENCODER values. Inside of a DECODER, I would rather expect to have the DECODER values pass through. Letitia, I am sure, you will resolve this mystery. 🍀
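For reference, a rough sketch of how cross-attention is commonly written (an illustration, not taken from the video or the linked one): the queries come from the decoder states, while the keys and values are projections of the encoder output, so the decoder decides what to look for, but the content that gets mixed in comes from the source side.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(dec_X, enc_out, W_q, W_k, W_v):
    Q = dec_X @ W_q             # queries from the DECODER states
    K = enc_out @ W_k           # keys from the ENCODER output
    V = enc_out @ W_v           # values from the ENCODER output
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V  # each decoder position becomes a weighted mix of encoder values
```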
@supanutsookkho2749 · 2 months ago
Great video and a good explanation. Thanks for your hard work on this amazing video!!
@AICoffeeBreak · 2 months ago
Glad you liked it!
@partywen · 3 months ago
Super informative and helpful! Thanks a lot!
@AICoffeeBreak · 3 months ago
Oh wow, thanks!
@zahrashah6567 · 5 months ago
What a wonderful explanation😍 Just discovered your channel and absolutely loving the explanations as well as visuals😘
@AICoffeeBreak · 5 months ago
Thank you! Welcome!
@Ben_D. · 6 months ago
...ok. After binging some of your vids, I now need to go make coffee. 😆
@AICoffeeBreak · 6 months ago
Please do!
@pfever · 7 months ago
Just discovered your channel and this is great! Thank you! :D
@AICoffeeBreak · 7 months ago
Thank you! Hope to see you again soon in the comments.
@Jeshhhhhh · 1 month ago
Oh my goddess in disguise, I thank you for saving me from depths of hell. Lots of love
@AICoffeeBreak · 1 month ago
Glad to help. 😆
@ehudamitai · 7 months ago
In 11:14, the weighted sum is the sum of 3 vectors of 3 elements each, but the result is a vector of 4 elements. Which, conveniently, is the same size as the input vector. Could there be a missing step there?
@AICoffeeBreak · 7 months ago
Yes, there is a missing back transformation to 4 dimensions I skipped. :) Well spotted!
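The skipped step is one more learned projection back to the model width. A sketch using the dimensions mentioned in the comment (the matrix values themselves are hypothetical):

```python
import numpy as np

weighted_sum = np.array([[0.2, -0.7, 1.1]])   # 1x3: the mixed value vectors
W_o = np.random.randn(3, 4)                   # output projection, d_v = 3 -> d_model = 4

out = weighted_sum @ W_o                      # 1x4 again, same size as the input embedding
print(out.shape)                              # (1, 4)
```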
@MuruganR-tg9yt · 7 months ago
Thank you. Nice explanation 😊
@AICoffeeBreak · 7 months ago
Thank You for your visit!
@heejuneAhn · 2 months ago
Thanks for your video. I have a question about the inference process. For example, when I have an input prompt of 2 tokens = {t1, t2}, we will get the output {o1, o2, o3}. We will take only o3 and make a new input sequence {t1, t2, o3}. Then we will get another output {o'1, o'2, o'3, o'4}. Here are my questions: when we use causal masking for attention, is o1 = o'1 and o2 = o'2 and so on? Another question: even though the mask guarantees causal attention, the matrix calculation is still performed, which means the computation is used anyway. How can we reduce the computation resources in this case?
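To make the first part of the question concrete, here is roughly what a causal mask does to the score matrix (a toy sketch with random numbers; it illustrates the masking itself, not the efficiency question):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.random.randn(4, 4)                      # raw attention scores for 4 tokens
mask = np.triu(np.ones((4, 4), dtype=bool), k=1)    # True above the diagonal = "future"
masked = np.where(mask, -np.inf, scores)            # future positions get -inf ...
weights = softmax(masked)                           # ... so they get weight 0 after softmax
print(np.round(weights, 2))                         # row i only attends to positions <= i
```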
@l3nn13 · 8 months ago
great video
@AICoffeeBreak · 8 months ago
Thanks for the visit and for leaving the comment!
@LEQN · 6 months ago
Awesome video :) thanks!
@AICoffeeBreak · 6 months ago
Thank you for watching and for your wonderful comment!
@DaeOh · 8 months ago
Everything makes sense except multiple attention heads. Each layer has only one set of Q, K, V, O matrices. But 8 attention heads per layer? I want to understand that.
@AICoffeeBreak · 8 months ago
Think about it this way: in one layer, instead of having one head telling you how to pay attention to things, you have 8. In other words, instead of having one person shout at you the things they want you to pay attention to, you have 8 people shouting simultaneously. This is beneficial because it has an ensembling effect (the effect of a voting parliament; think of Random Forests, which are an ensemble of Decision Trees). I don't know if this helps, but I thought I'd give it another shot at explaining this.
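A shape-level sketch may help as well (hypothetical sizes, the standard multi-head formulation rather than anything specific to this video): the large Q/K/V projections are split into 8 chunks, each chunk attends on its own, and the results are concatenated and mixed by the output matrix O.

```python
import numpy as np

n, d_model, h = 6, 64, 8                      # hypothetical: 6 tokens, width 64, 8 heads
d_head = d_model // h                         # each head works in an 8-dim subspace
X = np.random.randn(n, d_model)
W_q, W_k, W_v, W_o = (np.random.randn(d_model, d_model) for _ in range(4))

Q = (X @ W_q).reshape(n, h, d_head)           # one big projection, logically split into h heads
K = (X @ W_k).reshape(n, h, d_head)
V = (X @ W_v).reshape(n, h, d_head)

heads = []
for i in range(h):                            # each head attends independently
    s = Q[:, i] @ K[:, i].T / np.sqrt(d_head)
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    heads.append(w @ V[:, i])

out = np.concatenate(heads, axis=-1) @ W_o    # concatenate and mix: back to n x d_model
print(out.shape)                              # (6, 64)
```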
@kallamamran · 8 months ago
Phew 😳
@nmfhlbj · 6 months ago
Hi! Can I ask how you got the dimension (d)? Because all I know is that a dimension can be found in square matrices, and the dot product in the attention formula says Q•K^T. If we're using 1x3 matrices, we'll get a 1x1 matrix, or 1 dimension, so how do you get 3? Unless it's a 3x1 matrix beforehand, so we'll get a 3x3, or 3-dimensional, matrix. Thank you!
@AICoffeeBreak · 6 months ago
Hi, if you mean the mistake at 10:00, then the problem is that I have written matrix times vector when I should have written vector times matrix! (or I could have used column vectors instead of row vectors). Is this what you mean?
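One more concrete shape check, using the row-vector convention from the correction above (numbers are hypothetical): with several tokens stacked into matrices, Q·K^T is (number of queries) x (number of keys), and the d under the square root is the key dimension, not the size of that product.

```python
import numpy as np

Q = np.random.randn(4, 3)          # 4 query vectors, each of dimension d_k = 3
K = np.random.randn(4, 3)          # 4 key vectors, same dimension
d_k = K.shape[-1]

scores = Q @ K.T / np.sqrt(d_k)    # (4x3) @ (3x4) -> 4x4 score matrix, scaled by sqrt(3)
print(scores.shape)                # (4, 4)

# With a single query (1x3), the scores collapse to a 1x4 row: one score per key.
q = Q[:1]
print((q @ K.T / np.sqrt(d_k)).shape)   # (1, 4)
```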
@benjamindilorenzo · 7 months ago
What a great video. It could still expand more and really sum up every sub-part and connect it to a clear visualization or a clear step of what happens with the information at each time step and how its "transformation" progresses over time. So I think you could redo this video and really make it monkey-proof for folks like me. But beware: if you look, for example, at the StatQuest version, it's too slow and too repetitive and also does not really capture what really goes on inside the Transformer once all the steps are stacked together. Great work!
@josephvanname3377 · 8 months ago
I want to train a transformer that eats a row of matrices instead of just a row of vectors.