This is the clearest explanation of the RoPE embedding.
@theunconventionalenglishman 1 year ago
I've watched a few videos trying to wrap my head around this concept and yours is by far the best. Thanks!
@wolpumba4099 1 year ago
*Video Summary: Rotary Positional Embeddings: Combining Absolute and Relative*
- *Introduction*
  - Discusses the importance of positional embeddings in Transformer models.
- *Absolute Positional Embeddings*
  - Explains how absolute positional embeddings work.
  - Highlights limitations like fixed sequence length and lack of relative context.
- *Relative Positional Embeddings*
  - Introduces the concept of relative positional embeddings.
  - Discusses the computational challenges and inefficiencies.
- *Rotary Positional Embeddings (RoPE)*
  - Combines the advantages of both absolute and relative embeddings.
  - Uses rotation to encode position, preserving relative distances.
- *Matrix Formulation*
  - Explains the mathematical formulation behind RoPE.
- *Implementation*
  - Shows how RoPE can be implemented efficiently in PyTorch.
- *Experiments and Conclusion*
  - Shares results of experiments showing RoPE's effectiveness and efficiency compared to other methods.

The video provides a comprehensive overview of Rotary Positional Embeddings, a new method that combines the strengths of both absolute and relative positional embeddings. It delves into the mathematical details and practical implementation, concluding with experimental results that validate its effectiveness.
@cmbbqrpb9737 1 year ago
Thanks for creating and sharing this vid! Still confused by the math, though, so I read through the paper and wrote down some notes: the rotation matrix R_m rotates a query vector q of the m-th token by mθ, while R_n rotates a key vector k of the n-th token by nθ. For any rotation (or orthogonal) matrix R, R^T = R^-1 holds, so R_m^T is R_m's inverse, which rotates in the opposite direction, by -mθ. This means (R_m q)^T (R_n k) = q^T R_m^T R_n k in total rotates the interaction q^T k by (n-m)θ. This ultimately associates the knowledge extracted from the m-th query and the n-th key with their relative distance n - m, naturally and interpretably.
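Here is a minimal NumPy sketch of that property (a toy example of my own, not from the paper): rotating q by mθ and k by nθ leaves their dot product depending only on the offset n - m.

```python
import numpy as np

def rot(angle):
    # 2D rotation matrix R(angle)
    return np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]])

theta = 0.3
q = np.array([1.0, 2.0])
k = np.array([0.5, -1.0])

# (m, n) = (2, 5) and (10, 13) share the same offset n - m = 3
score_a = (rot(2 * theta) @ q) @ (rot(5 * theta) @ k)
score_b = (rot(10 * theta) @ q) @ (rot(13 * theta) @ k)
print(np.isclose(score_a, score_b))  # True: the score depends only on n - m
```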
@dy8576 9 months ago
Genius
@duzx454 12 days ago
Jeez, computer science (master's) student here who just recently dug into AI, especially transformers. But I'm really struggling to understand the math behind this... How do you learn that stuff D:
@ItsRyanStudios 1 year ago
This is amazing, thank you! I just wrapped my mind around sinusoidal embeddings, came across RoPE, and was really struggling to grasp it. Definitely going to refer back to this video. I love in-depth NLP content like this.
@kindness_mushroom 5 months ago
Thank you for such an intuitive explanation of a pretty complex paper.
@laurentiupetrea3726 6 months ago
Finally! My 4th video and I was lost but this one did the trick!
@gemini_537 9 months ago
Gemini: The video is about a new method for positional embeddings in transformers called rotary positional embeddings. The Transformer architecture is a neural network architecture commonly used for various natural language processing tasks. A key challenge for Transformer models is that they are invariant to the order of words by default. This means that the model would not be able to distinguish between a sentence and its scrambled version. To address this challenge, positional embeddings are added to the Transformer model.

There are two main types of positional embeddings: absolute positional embeddings and relative positional embeddings. Absolute positional embeddings assign a unique vector to each position in a sentence. This approach, however, cannot handle sentences longer than the training data. Relative positional embeddings, on the other hand, represent the relationship between two words. While this method can handle sentences of any length, it requires additional computations in the self-attention layer, making it less efficient.

Rotary positional embeddings address the limitations of both absolute and relative positional embeddings. The core idea is to rotate the word vector instead of adding a separate positional embedding vector. The amount of rotation is determined by the position of the word in the sentence. This way, rotary positional embeddings capture the absolute position of a word while also preserving the relative positions between words. The video also mentions that rotary positional embeddings have been shown to improve the training speed of language models.
@hw5622 11 months ago
Thank you so much. Your explanation is very clear and succinct.
@varunsaagars 1 year ago
🎯 Key Takeaways for quick navigation:
- 00:14 🆕 *In 2022, a new architectural improvement called "Rotary Positional Embeddings" (RoPE) was proposed and adopted by various language models.*
- 03:27 🔄 *Relative positional embeddings represent token pairs' distances but face engineering challenges like slower processing for longer sequences.*
- 06:01 🔄 *Rotary positional embeddings propose rotating word vectors based on positions, combining advantages of both absolute and relative positional embeddings.*
- 08:04 🔢 *Rotary embeddings are implemented using rotation matrices for 2D cases and a more general approach for higher-dimensional vectors.*
- 10:48 ⚙️ *Experiments show that models using rotary positional embeddings train faster than those using sinusoidal embeddings and are relatively robust across various model architectures and training setups.*
@MrOnlineCoder 1 year ago
Amazing video, intuitive explanations with examples.
@vixguy 1 year ago
You make it easy to learn even for a high school student
@marshallmcluhan33 1 year ago
Good work, I look 'forward' to the ReRoPE video. 😎
@muyanfeng2082 9 months ago
Really good introduction, thanks
@weekendwarrior7933 8 months ago
Absolutely amazing explanation! Keep it up man
@roomo7time 7 months ago
your explanation is amazing. thank you for your work
@snehotoshbanerjee1938 5 months ago
Excellent explanation!! Thank you!
@sammcj2000 6 months ago
Great explanation. Thank you for making this.
@ChungSo_AC 10 months ago
OMG!! Very good teaching!!!
@SahilDua 1 year ago
Thanks for the in-depth explanation of RoPE. A couple of questions:
1. How is the KV cache used/built in the RoPE case? RoPE is applied to q and k. Does this change anything in how K and V are cached?
2. Where can I find the intuition behind why RoPE works? I usually find it harder to jump into the mathematical equations directly to find the proof.
@EfficientNLP 1 year ago
Yes, the KV cache can be used normally with RoPE, because the rotation is applied to a token depending on its position from the start of the sequence, and this does not change as more tokens are generated. I hope this video provides a good intuition of why this works!
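To make that concrete, here is a rough PyTorch sketch (purely illustrative, with made-up shapes and a made-up helper, not the video's or any library's code) of why cached keys stay valid under RoPE: each key is rotated once by its own absolute position and never needs to be touched again.

```python
import torch

d = 4                                                   # head dimension (must be even)
theta = 10000 ** (-torch.arange(0, d, 2).float() / d)   # assumed per-pair frequencies

def apply_rope(x, pos):
    # rotate each (even, odd) pair of dimensions of vector x by pos * theta
    angles = pos * theta
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[0::2] = x[0::2] * cos - x[1::2] * sin
    out[1::2] = x[0::2] * sin + x[1::2] * cos
    return out

k_cache = []
for pos in range(6):                        # autoregressive decoding steps
    k_new = torch.randn(d)                  # key of the token generated at this step
    k_cache.append(apply_rope(k_new, pos))  # rotated once, by its own position
# Entries already in k_cache never change as the sequence grows, yet the
# attention scores computed from them still depend only on relative offsets.
```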
@pierreenel1516 10 months ago
Excellent video, thanks!
@manikantabandla3923 4 months ago
Thanks for the crisp explanation. But I'm curious about the source of the claim at 7:36; I couldn't find it in the paper. Can you share a reference for more information?
@EfficientNLP 4 months ago
I'm not sure if this is what you're asking, but a property of rotations is that they preserve the dot product between vectors. The dot product remains the same if you apply the same rotation to both vectors, so it only depends on the relative position difference between the two tokens, and not their absolute positions.
@manikantabandla3923 4 months ago
@@EfficientNLP If I'm not wrong, RoPE preserves this only at the first layer of the transformer, because after the first layer, the angle between the representations for the words "pig" and "dog" will be different for the two prompts.
@EfficientNLP 4 months ago
@@manikantabandla3923 That is correct - the angle between 'pig' and 'dog' is only the same in the first layer, as in later layers the embedding incorporates information from the entire sentence. In those later layers, the angle-preserving property of RoPE lets it capture relative positional information better than absolute positional embeddings can.
@HosseinKhosravipour 3 months ago
Great explanation
@akshaydevkarama3277 7 months ago
great explanation,really helped me!
@mineword2771 4 months ago
@naklecha 💪 and a legend was born ‼️
@garylai5174 6 months ago
Nice video, thanks for this. I could be wrong, but one potential error I see: in this video, you said that "you can't do KV cache because you change the embeddings with every token you add." I don't think this is necessarily true, at least not for decoder architectures like GPTs. The previous tokens don't attend to the new tokens -- they only attend to tokens to their left (there's a causal mask). When you add a new token, the relative positions between the previous tokens don't change. For example, if you add a 6th token to a sequence, the distance between token 1 and token 4 hasn't changed at all; therefore, the KV cache is still valid. It seems to me that yes, relative positional embedding is inefficient, but not because it invalidates the KV cache; rather, it's because every time we add a new token, it needs to attend to all previous tokens twice: once for the regular attention calculation, and once for the relative positional embedding.
@EfficientNLP 6 months ago
Yes, that is correct. The KV cache can still be used with T5 relative positional embeddings, but it is less efficient because the relative positions need to be recalculated - this is an extra step that cannot be cached, making the KV cache not as effective as it is with absolute positional embeddings.
@kevon217 1 year ago
Thanks for this overview!
@abdelrahmanhammad1020 1 year ago
Thanks @Bai for the great explanation. I still have a question: mathematically, why does the positional embedding in other positional embedding techniques (maybe absolute?) change when more tokens are added to the sentence? (Around 7:00 in this video.) Thanks!
@EfficientNLP 1 year ago
Staying fixed as more tokens are generated is a property of most absolute positional embeddings, but generally not of relative positional embeddings. For example, T5's relative embeddings change at every step, as different bias values need to be added to the attention matrix. Thus, rotary embeddings are the first to combine the benefits of both absolute and relative embeddings.
1 year ago
very good explanation.
@ml.9106 9 months ago
Very clear~~thanks!
@einsteinsapples2909 1 year ago
I just smashed the like button.
@rubncarmona 2 months ago
I'm having a hard time with this. Aren't the dimensions supposed to encode key information about the sentence? The examples at 6:30 have a vector that is much higher in X than the other which is higher in Y. How can the model understand that both mean the same and the only change is the position? What if there's a word whose embedding is equal to the rotated dog vector?
@EfficientNLP 2 months ago
Yes, that is correct - positional information is encoded in the same vector space as other semantic information; this is the case for all positional embeddings, not unique to rotary embeddings. This works because when there are enough dimensions, the network is able to learn a suitable representation so that the positional information is distinct from the semantic.
@rubncarmona 2 months ago
@@EfficientNLP I think this is the cost I was looking for... so there's a soft requirement of high dimensionality... and then the network reliably learns the task of separating the additional encoded information. I think I get it!
@naubull2 1 year ago
Thanks for a great explanation! By the way, I was curious about one thing: from the initial explanation and the rotation equations, consecutive pairs of coordinates seem to be rotated, i.e. (x_1, x_2), (x_3, x_4), ... are each rotated. However, in most implementations, as suggested in the video, the code pairs up dimensions not by adjacent indices but with an offset of half the dimension, i.e. (x_1, x_{d/2+1}), (x_2, x_{d/2+2}), ..., since the code splits the hidden dim in half and swaps the order of the halves. Did I understand correctly, or am I missing something?
@EfficientNLP 1 year ago
You are correct. In many implementations, rather than rotating each pair of adjacent dimensions, they choose to split the entire vector in half and rotate the two halves against each other. Ultimately, this does not matter, because the dimensions of the vector are interchangeable: as long as queries and keys are paired up with the same convention, the resulting dot products are unchanged. This is likely to be more efficient from an implementation standpoint and is equivalent to the original formula.
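A small PyTorch check of this equivalence (my own sketch, not the EleutherAI or JAX code): the split-in-half variant is the adjacent-pair variant applied to a de-interleaved ordering of the dimensions, so dot products, and hence attention scores, are unchanged as long as q and k use the same convention.

```python
import torch

d = 8
pos, base = 3, 10000.0
freqs = base ** (-torch.arange(0, d, 2).float() / d)   # assumed theta_i values
cos, sin = (pos * freqs).cos(), (pos * freqs).sin()

def rope_interleaved(x):
    # rotate adjacent pairs (x1, x2), (x3, x4), ...
    x1, x2 = x[0::2], x[1::2]
    out = torch.empty_like(x)
    out[0::2], out[1::2] = x1 * cos - x2 * sin, x1 * sin + x2 * cos
    return out

def rope_half(x):
    # rotate pairs (x1, x_{d/2+1}), (x2, x_{d/2+2}), ...
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)

perm = torch.cat([torch.arange(0, d, 2), torch.arange(1, d, 2)])  # de-interleave
x = torch.randn(d)
print(torch.allclose(rope_interleaved(x)[perm], rope_half(x[perm])))  # True
```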
@buh357 10 months ago
Thank you for such a clear explanation; it helped me understand this concept. Rotary positional embedding is such an elegant way to do positional embedding, and it intuitively makes sense to me. Curious how this embedding technique works for vision transformers? Anyone have experience?
@EfficientNLP 10 months ago
Rotary embeddings may be applied to a vision transformer, just as they can be for any other transformer; I'm not aware of any reports that it improves performance in this case. It would be an interesting experiment, though!
@jasonjones4236 1 year ago
Why is the KV cache difficult to implement in the case of relative embeddings?
@EfficientNLP 1 year ago
The KV cache saves the K and V matrices during autoregressive decoding to avoid recomputing them for every token. But for relative embeddings, when a new token is generated, the relative distance between the new token and previous tokens changes. So there is an extra step (adding the relative biases) that cannot be cached, making the KV cache not as effective.
@jasonjones4236 1 year ago
@@EfficientNLP Ah so to be precise, the cache can work but we need to fully compute the attention matrix and add the relative embedding matrix to it. But isn't the attention matrix computed when we torch.matmul q and k in other cases too?
@EfficientNLP 1 year ago
That is correct. In summary: there are several steps that are required in relative positional embeddings that aren't needed for absolute & rotary embeddings, which make them slower. Determining precisely which step causes the slowdown is an interesting question and would require some benchmarking experiments.
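For concreteness, here is a simplified sketch of the step that cannot be cached (a loose stand-in for T5's bucketed relative bias, with made-up shapes, not its actual implementation):

```python
import torch

seq_len, num_buckets, d = 8, 32, 64
rel_bias = torch.randn(num_buckets)            # stand-in for a learned bias per relative distance
K = torch.randn(seq_len, d)                    # keys, reusable from the KV cache
q_new = torch.randn(1, d)                      # query of the token generated at this step

scores = q_new @ K.T                           # (1, seq_len): this matmul benefits from the cache
dist = (seq_len - 1) - torch.arange(seq_len)   # distance of each cached key to the new token
scores = scores + rel_bias[dist]               # this lookup/add is redone at every step,
                                               # because every distance shifts by one each time
```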
@hussainshaik4390 1 year ago
Great video, but I have one question: you are referring to the EleutherAI blog, right? In that PyTorch implementation, instead of rotating every 2 elements of the dim vector, they rotate the half vector like this:

```python
def rotate_half(x):
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)
```

But in the JAX implementation they rotate every two elements. Any idea on this?
@EfficientNLP 1 year ago
Yes, that's possible; there are multiple ways to implement this, but they should be logically equivalent.
@hazemessamm 1 year ago
@@EfficientNLP Hi, thank you for this great video, but I wanted to ask how they can be logically equivalent - the values that get negated are not the same, so how are they logically equivalent?
@ziqichen5902 1 year ago
@@hazemessamm Same question... Have you figured out the reason yet? 😅
@guanxi99 1 year ago
Thanks for the good explanation! How do we actually make sure that the result of applying a positional embedding algorithm does not coincidentally represent another token? E.g., how do we avoid the positional embedding of "dog" at position i meaning "cat" at position j?
@EfficientNLP 1 year ago
Indeed, it is possible for a word at position i to have the same embedding as a different word at position j, since both positional information and non-positional semantic information are represented in the same embedding space. The model learns to use them appropriately during training.
@amortalbeing 1 year ago
Thanks a lot
@dylstuart 1 year ago
Great video! What value is used for Theta?
@EfficientNLP 1 year ago
Theta_i = 10000^(-2(i-1)/d) for i = 1, ..., d/2. I didn't cover this in the video, but it is mentioned in the RoFormer paper.
@gemini_537 11 months ago
@@EfficientNLP That seems to be the same as the one used in the paper "Attention Is All You Need".
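For reference, here is a minimal sketch of how these theta values are typically computed in RoPE implementations (zero-based i, illustrative variable names):

```python
import torch

d = 64                                                    # head dimension
inv_freq = 10000 ** (-torch.arange(0, d, 2).float() / d)  # theta_i = 10000^(-2i/d), i = 0..d/2-1
positions = torch.arange(128).float()                     # token positions m
angles = positions[:, None] * inv_freq[None, :]           # m * theta_i for every position/pair
cos, sin = angles.cos(), angles.sin()                     # cached tables, applied to q and k
```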
@pratik6447 10 months ago
What are the W_q and W_k matrices, and how are they calculated?
@EfficientNLP 10 months ago
These are the W_q and W_k matrices in self-attention, which are used to generate the Q and K matrices.
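A bare-bones sketch of where those projections sit relative to RoPE (assumed sizes, not any particular model's code):

```python
import torch

d_model, seq_len = 512, 10
W_q = torch.nn.Linear(d_model, d_model, bias=False)   # learned query projection
W_k = torch.nn.Linear(d_model, d_model, bias=False)   # learned key projection
x = torch.randn(seq_len, d_model)                     # token embeddings entering the layer

q, k = W_q(x), W_k(x)   # W_q and W_k are trained along with the rest of the model
# RoPE is then applied to q and k (not to v) before the attention scores q @ k.T are computed
```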
@qwerty_and_azerty 1 year ago
Great vid! Nice explanation! Question: why is it termed “rotary” and not “rotational” position embeddings?
@EfficientNLP 1 year ago
It’s the name given in the paper. I think it’s quite catchy!
@csbarathi 1 year ago
Why not positionally embed based on the sentence and paragraph rather than just the position of the word in the overall prompt? I understand that it adds more computation, but it would yield better results, wouldn't it?
@EfficientNLP 1 year ago
The transformer doesn't distinguish between sentences and paragraphs; they are treated like any other token, so the position encoding doesn't refer to them specifically.
@csbarathi 1 year ago
@@EfficientNLP I guess I have something in mind that I'm unable to express in words now. Will try it out and let you know what I ran into.