You make great videos with great explanations! I can listen to you all day
@isaiasprestes9759 1 year ago
This paper is amazing! Thank you very much for making this video. Quite good!
@davefaulkner6302 9 months ago
Regarding multi-headed attention: it wasn't until you listed the dimensions of the output heads that it became clear that you were splitting the input by the embedding dimension, d, across the different heads. This should have been made more explicit in your explanation. Regardless, I was looking for the answer to exactly this question of how the input is split across the heads, so thank you for this detailed explanation of how the multi-headed mechanism works.
@jacobyoung2045 3 months ago
Thanks, your comment made it clearer for me.
@chrisogonas 9 months ago
Very well illustrated! Thanks
@PyMLstudio 9 months ago
Glad you liked it!
@maryammohseni4507 5 months ago
Great video! Thanks!
@gusbakker 1 year ago
Thanks! Really nice explanation
@MrDelord39 1 year ago
Thank you for this video :)
@xxxiu13 11 months ago
This is a great explanation.
@milanvasilic4510 2 months ago
Great explanation, thanks :). So the number of heads just tells me how many weight matrices I have for K, Q, and V?
@angelineamber 1 month ago
Hi, thanks for the amazing video. One question: I get d, q, k, v, but I didn't get the notation W.
@Lesoleil370 1 month ago
Thanks, so W is a learnable matrix used to obtain q, k, and v. To get q, we use q = W_q x, and similarly for k and v: k = W_k x, v = W_v x.
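A minimal PyTorch sketch of these projections, assuming an illustrative embedding size of 512 and a 10-token input (neither is taken from the video):

import torch
import torch.nn as nn

d_model = 512  # illustrative embedding size
W_q = nn.Linear(d_model, d_model, bias=False)  # learnable W_q
W_k = nn.Linear(d_model, d_model, bias=False)  # learnable W_k
W_v = nn.Linear(d_model, d_model, bias=False)  # learnable W_v

x = torch.randn(10, d_model)  # 10 token embeddings
q = W_q(x)  # q = W_q x
k = W_k(x)  # k = W_k x
v = W_v(x)  # v = W_v x
print(q.shape, k.shape, v.shape)  # each is torch.Size([10, 512])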
@angelineamber 1 month ago
@@Lesoleil370 thank you!
@mehdimohsenimahani4150 1 month ago
great great 👍💯
@PyMLstudio 1 month ago
Thanks 🙏🏻
@gigglygeekgal 6 months ago
Great Explanation :)
@quentinparrenin2484 1 year ago
Great video! Really enjoyed the explanation in a simple way, especially for the cross-attention. Any plans to explain some other concepts alongside some Python? Would love it.
@PyMLstudio 1 year ago
Thanks for the comment. Yes, my plan is to first cover transformers; then I’ll also cover more general machine learning and deep learning concepts, some of them with Python implementations.
@me-ou8rf 3 months ago
Can you suggest some materials that deal with how transformers can be applied to time-series data like EEG?
@NaveenKumar-vn7vx 10 months ago
thank you
@paktv858 3 months ago
What is the difference between self-attention and multi-head self-attention? Are they the same, except that multi-head attention uses multiple heads instead of a single attention?
@temanangka3820 2 months ago
6:00 Does each attention head only process part of the embedded tokens? Example: say there are 100 tokens and 2 attention heads. Does each head only process 50 tokens? If yes, then how can we make sure each head understands the whole context of the sentence while it only consumes half of it?
@PyMLstudio 2 months ago
That’s a great question. Multi-head attention splits the feature dimension, not the sequence dimension. That way, each head is able to see the entire sequence, but works on a smaller feature size. Example: the input is 100 tokens and each embedding vector is 256-dimensional. Then with 8 heads, each head will process tensors of size 100x32.
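A shape-only sketch of that example (in a real multi-head layer the split is applied to the projected Q/K/V tensors, and the per-head outputs are concatenated back to 256 dimensions at the end):

import torch

seq_len, d_model, num_heads = 100, 256, 8
head_dim = d_model // num_heads  # 256 / 8 = 32 features per head

x = torch.randn(seq_len, d_model)                            # 100 x 256
heads = x.view(seq_len, num_heads, head_dim).permute(1, 0, 2)
print(heads.shape)  # torch.Size([8, 100, 32]): every head sees all 100 tokens,
                    # but only a 32-dimensional slice of the features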
@temanangka3820 2 months ago
@@PyMLstudio understood... Great explanation.. Thank you, Bro...
@just4visit 9 months ago
Where do the W matrices come from?
@PyMLstudio 9 months ago
Good question! These W matrices are learnable parameters of the model. They can be initialized randomly if we build a transformer from scratch, and are gradually updated and learned through back-propagation.
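A tiny sketch of that idea (the 64-dimensional size is arbitrary): a W built as an nn.Linear starts out random, is registered as a trainable parameter, and receives gradients from back-propagation.

import torch
import torch.nn as nn

W_q = nn.Linear(64, 64, bias=False)  # W_q starts with random weights
print(W_q.weight.requires_grad)      # True: it is a learnable parameter

x = torch.randn(5, 64)
loss = W_q(x).sum()                  # stand-in for a real training loss
loss.backward()
print(W_q.weight.grad.shape)         # torch.Size([64, 64]): gradient used to update W_q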
@harshaldharpure9921 6 months ago
How do we do the cross-attention mechanism if we have three features x, y, rag with sizes x.shape = torch.Size([8, 768]), y.shape = torch.Size([8, 512]), rag.shape = torch.Size([8, 768])?
@PyMLstudio 6 months ago
So, let's assume query = y, key = x, and value = rag for our explanation, but remember, you can adjust this configuration depending on your specific needs.

Given these tensors, our first step is to ensure that the dimensions of the query, key, and value match for the attention mechanism to work properly. Since y has a different dimension (512) compared to x and rag (768), we need to project y to match the 768-dimensional space of x and rag:

query_projector = Linear(512, 768)
query_projected = query_projector(query)  ## --> 8x768

With this projection, all three tensors (query_projected, key = x, value = rag) now share the same dimensionality (8x768), making them compatible for multi-head attention, where each head involves a dot product between query_projected and the keys, followed by a softmax and multiplication by the values.

Remember that the assignment of x/y/rag to query/key/value can change depending on your use case and where these tensors come from. I hope this answers your question.
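A single-head sketch of the recipe above, treating the leading dimension of 8 as the sequence length (an assumption; the thread does not say whether it is a batch or sequence dimension). A multi-head layer would add the head split on top of this.

import torch
import torch.nn as nn
import torch.nn.functional as F

x   = torch.randn(8, 768)  # keys
y   = torch.randn(8, 512)  # queries, smaller feature size
rag = torch.randn(8, 768)  # values

query_projector = nn.Linear(512, 768)  # project queries into the shared 768-dim space
q = query_projector(y)                 # 8 x 768
k, v = x, rag

scores = q @ k.T / (768 ** 0.5)        # 8 x 8 scaled dot-product scores
attn = F.softmax(scores, dim=-1)       # attention weights over the keys
out = attn @ v                         # 8 x 768 attended values
print(out.shape)                       # torch.Size([8, 768])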
@harshaldharpure9921 6 months ago
Thanks a lot sir @@PyMLstudio
@Yaz71023 6 months ago
When you find people who make science easy and enjoyable 🫡