Layer Normalization - EXPLAINED (in Transformer Neural Networks)

32,664 views

CodeEmporium

1 day ago

Comments: 60
@minorinxx · 1 year ago
the diagram is GOLD
@jbca · 1 year ago
I really like your voice and delivery. It’s quite reassuring, which is nice when the subject of the videos can be pretty complicated.
@CodeEmporium · 1 year ago
Thanks so much! Very glad you liked this. There will be more to come
@vib2810 · 10 months ago
As per my understanding, and from the LayerNorm code in PyTorch: in NLP, for an input of size [N, T, Embed], statistics are computed using only the Embed dimension, and layer norm is applied to each token in each batch. But for vision, with an input of size [N, C, H, W], statistics are computed using the [C, H, W] dimensions.
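A quick sketch to check this behaviour with torch.nn.LayerNorm (assuming PyTorch is installed; the tensor sizes below are arbitrary examples, not the video's):

import torch
import torch.nn as nn

# NLP-style input: [N, T, Embed]; statistics over Embed only
x = torch.randn(4, 7, 16)
ln = nn.LayerNorm(16)                    # normalized_shape = Embed
y = ln(x)
print(y.mean(dim=-1)[0, 0], y.std(dim=-1, unbiased=False)[0, 0])   # ~0 and ~1 per token

# Vision-style input: [N, C, H, W]; statistics over [C, H, W]
img = torch.randn(4, 3, 8, 8)
ln_img = nn.LayerNorm([3, 8, 8])         # normalized_shape = [C, H, W]
z = ln_img(img)
print(z.mean(dim=(1, 2, 3))[0], z.std(dim=(1, 2, 3), unbiased=False)[0])   # ~0 and ~1 per image
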
@superghettoindian01 · 1 year ago
Another great video - I like the structure you use of summarising the concept and then diving into the implementation. The code really helps bring it together as some others have commented. I look forward to seeing more of this series and would love to see a longer video of you deploying a transformer on some dummy data (perhaps you already have one - still going through the content)!
@CodeEmporium · 1 year ago
Thanks so much for commenting on all the videos! I really appreciate it. And yeah, I'm going to be introducing the code behind the encoder and decoder in the coming sessions!
@superghettoindian01 · 1 year ago
@@CodeEmporium I love your content, so I will do what I can to comment, like, and spread!
@Hangglide · 6 months ago
Thank you so much for providing solid examples and calculations to explain these concepts. I have seen these concepts elsewhere but couldn't make sense of them until I saw how you computed the values. Great video!
@yangkewen · 1 year ago
Very clear and sound explanation of a complex concept. Thumbs up for the hard work!
@velvet_husky · 1 month ago
6:56 Actually, you would use the variance instead of the standard deviation, so in the LayerNorm formula it should be sigma^2 in the divisor. The epsilon that prevents division by zero is also missing.
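For reference, the commonly used formula normalizes as (x - mu) / sqrt(sigma^2 + eps); a minimal sketch of that computation (eps = 1e-5 is an assumed value, matching PyTorch's default):

import torch

x = torch.randn(5, 8)                                   # arbitrary example: 5 tokens, 8 features
eps = 1e-5                                              # assumed; PyTorch's default
mu = x.mean(dim=-1, keepdim=True)
var = x.var(dim=-1, unbiased=False, keepdim=True)
y = (x - mu) / torch.sqrt(var + eps)                    # divide by sqrt(sigma^2 + eps)
print(y.mean(dim=-1), y.var(dim=-1, unbiased=False))    # roughly 0 and 1 per row
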
@saahilnayyer6865 · 1 year ago
Nice series on transformers. Really liked it. Btw, interesting design choice to use a landscape layout of the transformer architecture during the intro :D
@tdk99-i8n · 1 year ago
10:30 When layer normalizations are "computed across the layer and also the batch", are their means and stds computed as if the batch boundaries aren't there? And does that mean there's a different learnable gamma and beta parameter for each word?
@CodeEmporium · 1 year ago
"Means and std connected as if the batch boundary isn't there" ~ yes. But we have the same gammas and betas for the same layer across ALL words in the dataset.
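In PyTorch terms, this shows up as one gamma (weight) and one beta (bias) vector per LayerNorm, shaped like the normalized dimension and reused for every word in every sentence; a small sketch (embedding size 512 is just an example):

import torch.nn as nn

ln = nn.LayerNorm(512)      # one LayerNorm for a layer with embedding size 512
print(ln.weight.shape)      # torch.Size([512]) -> one gamma per feature
print(ln.bias.shape)        # torch.Size([512]) -> one beta per feature
# There is no token or batch dimension here: the same gamma/beta are applied to every word.
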
@_.hello._.world_ · 1 year ago
Great! Since you’re covering transformer components, I would love to see TransformerXL and RelativePositionalEmbedding concepts explained in the upcoming videos! ☺️
@MsFearco · 1 year ago
I can help with relpos: it's the same as the usual positional encoding, but instead of a fixed positional encoding, positions are encoded in relation to each other, pairwise.
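A rough sketch of that pairwise idea (not TransformerXL's exact scheme; the sizes and the clamping distance are assumptions for illustration):

import torch
import torch.nn as nn

T, d, max_dist = 6, 8, 16                        # assumed sizes for illustration
pos = torch.arange(T)
rel = pos[None, :] - pos[:, None]                # rel[i, j] = j - i, shape (T, T)
rel = rel.clamp(-max_dist, max_dist) + max_dist  # shift distances into [0, 2*max_dist]
rel_emb = nn.Embedding(2 * max_dist + 1, d)
bias = rel_emb(rel)                              # (T, T, d): one embedding per position pair
print(bias.shape)
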
@user-wr4yl7tx3w · 1 year ago
The Python example really helped to solidify my understanding.
@CodeEmporium · 1 year ago
Super glad it did
@Ibrahimnada1995 · 1 year ago
Thanks man, you deserve more than a like.
@CodeEmporium · 1 year ago
Thanks a ton for the kind words :)
@ashutoshtripathi5699 · 11 months ago
best explanation ever!
@anishkrishnan3979 · 3 months ago
A bit late, but can you tell me if my understanding is correct? The output from the attention head might have skewed means because of the large spread of the values, which early on might lead to wrong predictions. The gradients might then constantly change to correct these predictions, which might lead to incorrect changes in the weights and eventually to exploding or vanishing gradients. To correct this, we sort of reset the mean and then fine-tune the mean according to the matrix?
@shreejanshrestha1931 · 1 year ago
EXCITED!!!!!
@CodeEmporium · 1 year ago
Me too :)
@luvsuneja · 1 year ago
If our batch has 2 vectors of 2 words x 3 embedding size, say [[1,2,3],[4,5,6]] and [[1,3,5],[2,4,6]]: for layer normalization, is mu_1 = mean(1,2,3,1,3,5) and mu_2 = mean(4,5,6,2,4,6)? Just wanted to clarify. Keep up the great work, brother. Like the small bite-sized videos.
@Guhankarthik-w4i · 6 months ago
Meaning, the statistical calculation for the first vectors of all batch elements is done together? Like, how are you calculating the mean using the first vectors of both batch elements?
@twincivet9668 · 1 month ago
I don't think that is correct. Layer normalization is not applied across a batch. It's applied to a single input observation. The mean and std values are computed for each embedding.
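A quick way to check which reading PyTorch follows, reusing the toy batch from the question above (a sketch, assuming torch.nn.LayerNorm reflects the intended behaviour):

import torch
import torch.nn as nn

batch = torch.tensor([[[1., 2., 3.], [4., 5., 6.]],
                      [[1., 3., 5.], [2., 4., 6.]]])   # shape (2, 2, 3): batch, words, embedding

ln = nn.LayerNorm(3, elementwise_affine=False)
out = ln(batch)

# The mean is taken per word vector, not pooled across the batch:
print(batch.mean(dim=-1))   # tensor([[2., 5.], [3., 4.]])
print(out[0, 0])            # [1, 2, 3] normalized on its own
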
@caiyu538 · 1 year ago
Great lectures.
@yanlu914 · 1 year ago
Where can I find the reason why we need to calculate the mean and standard deviation over the parameter shapes? In PyTorch, they just calculate over the last dimension, the hidden size.
@yanlu914 · 1 year ago
I'm sorry, I just checked the PyTorch code; there is a normalized_shape argument, like parameter_shape in your code.
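For what it's worth, normalized_shape can cover more than just the last dimension, and the statistics are then taken over all of those trailing dimensions together; a small sketch with made-up sizes:

import torch
import torch.nn as nn

x = torch.randn(3, 5, 8)                  # (batch, seq, embed); sizes are arbitrary
ln_last = nn.LayerNorm(8)                 # stats over embed only (the common NLP case)
ln_two = nn.LayerNorm([5, 8])             # stats over seq AND embed together
print(ln_last(x).shape, ln_two(x).shape)  # output shapes are unchanged; only the stats differ
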
@fenglema36 · 1 year ago
Amazing, your explanations are so clear. Can this help with exploding gradients?
@CodeEmporium · 1 year ago
Thanks so much! And yeah, layer normalization should help with exploding and vanishing gradients, so training is stable.
@fenglema36 · 1 year ago
@@CodeEmporium Perfect thanks.
@shubhamgattani5357 · 1 month ago
The diagram is great, but you should also explain the code corresponding to the diagram.
@duzx4541 · 5 days ago
Isn't it actually norm and add? I've mostly seen stuff like x = x + self.attention(self.norm1(x), mask)
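Both orderings are used in practice: the original paper applies the sublayer first and then Add & Norm (post-norm), while a lot of newer code normalizes first, as in the line above (pre-norm). A hedged sketch of the two variants (the class names and the use of nn.MultiheadAttention are placeholders, not the video's code):

import torch.nn as nn

class PostNormBlock(nn.Module):
    # Add & Norm, as in "Attention Is All You Need"
    def __init__(self, d_model, nhead):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return self.norm(x + self.attn(x, x, x)[0])

class PreNormBlock(nn.Module):
    # Norm first, then the residual add (the ordering in the comment above)
    def __init__(self, d_model, nhead):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        h = self.norm(x)
        return x + self.attn(h, h, h)[0]
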
@TD-vi1jx · 1 year ago
Great video! I find these very informative, so please keep them going! Question on the output dimensions, though. In your transformer overview video, the big diagram shows that after the layer normalization you have a matrix of shape [batch_size, sequence_len, d_model] (30 x 50 x 512 in the video, I believe). However, here you end up with an output matrix (out) of shape [sequence_len, batch_size, d_model] (5 x 3 x 8). Do we need to reshape these output matrices back to [batch_size, sequence_len, d_model], or am I missing something? Thanks again for all the informative content!
@dhawajpunamia1000 · 1 year ago
I wonder the same. Do you know the reason for it?
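If the rest of the pipeline expects batch-first tensors, a single permute is enough, and no values change, only the indexing; a sketch using the 5 x 3 x 8 shape from the question (which may not match the actual code):

import torch

out = torch.randn(5, 3, 8)            # (sequence_len, batch_size, d_model)
out_bf = out.permute(1, 0, 2)         # -> (batch_size, sequence_len, d_model) = (3, 5, 8)
print(out_bf.shape)
print(torch.equal(out[2, 1], out_bf[1, 2]))   # same vectors, just reindexed
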
@mohammadyahya78 · 1 year ago
Wow. Amazing channel
@CodeEmporium · 1 year ago
Thanks so much!
@DivyanshuSinghania01 · 3 months ago
What is the tool you use to write?
@MatheusHenrique-jz1dc · 1 year ago
Thank you very much friend, very good!!
@CodeEmporium · 1 year ago
Thanks so much for watching !
@chargeca3573 · 1 year ago
Thanks! This video helps me a lot.
@CodeEmporium · 1 year ago
Glad!
@superghettoindian01 · 1 year ago
Have an actual question this time! While trying to understand the differences between layer and batch normalization, I was wondering whether it's also accurate to say you are normalizing across the features of a vector when normalizing the activations - since each layer is a matrix multiply across all features of a row, would normalizing across the activations be similar to normalizing across the features? On the same thread, can/should layer and batch normalization be run concurrently? If not, are there reasons to choose one over the other?
@CodeEmporium · 1 year ago
Good questions. From what I understand, we normalize the activation values across the layer (i.e., make sure the values across the layer follow a bell curve of sorts). In batch normalization, we do the same exact thing but across the batch dimension instead. The only issue I see with batch normalization is that it is dependent on the batch size (which is typically an order of magnitude smaller than the size of the layers). If we apply normalization to a small number of items, we might get erratic results (that is, the mean and standard deviation of a small number of values simply don't lead to outputs that are truly "normalized"). One remedy would be to increase your batch size to a sizable number (like in the hundreds). I have heard this is an issue with NLP problems specifically, but I would need to do my own experimentation to see why.
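One way to see the difference on a simple (batch, features) activation matrix: layer norm normalizes each row (one example across its features), while batch norm normalizes each column (one feature across the batch), which is why its statistics get noisy when the batch is small. A small sketch with arbitrary sizes:

import torch
import torch.nn as nn

x = torch.randn(4, 16)                               # (batch, features); sizes are arbitrary
ln = nn.LayerNorm(16, elementwise_affine=False)
bn = nn.BatchNorm1d(16, affine=False)                # uses batch statistics in training mode

print(ln(x).mean(dim=1))   # ~0 per example (row-wise statistics)
print(bn(x).mean(dim=0))   # ~0 per feature (column-wise statistics, shared across the batch)
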
@misterx3321 · 1 year ago
Beautifully done video, but isn't layer normalization essentially a batch normalization layer?
@yagneshbhadiyadra7938 · 10 months ago
The Value matrix should be on the right side of the multiplication with the attention weights matrix.
@anshumansinha5874 · 8 months ago
I think your code is not correct; LayerNorm should not be over the batch. Try to think about what it means to take the normalization over the batches. For layer norm, each entity in the batch is unique, and each should be normalized across the layer of the MLP (essentially everything is an MLP).
@madhukarmukkamula1515 · 1 year ago
Great video!! Just a question: why do we need to swap the dimensions and all that other stuff? Why can't we do something like this?

# assuming inputs is of format (batch_size, sentence_length, embedding_dim)
mean = inputs.mean(dim=-1, keepdim=True)
var = ((inputs - mean) ** 2).mean(dim=-1, keepdim=True)
epsilon = 1e-5
std = (var + epsilon).sqrt()
y = (inputs - mean) / std
y
@alexjolly1689 · 1 year ago
I have the same doubt. Did you get it?
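As far as I can tell, normalizing over the last dimension like that matches what nn.LayerNorm(embedding_dim) computes on a batch-first tensor, so the dimension swapping looks like an implementation choice rather than a requirement; a sketch under that assumption:

import torch
import torch.nn as nn

inputs = torch.randn(2, 4, 8)     # (batch_size, sentence_length, embedding_dim); arbitrary sizes
epsilon = 1e-5

mean = inputs.mean(dim=-1, keepdim=True)
var = ((inputs - mean) ** 2).mean(dim=-1, keepdim=True)
y_manual = (inputs - mean) / (var + epsilon).sqrt()

y_torch = nn.LayerNorm(8, eps=epsilon, elementwise_affine=False)(inputs)
print(torch.allclose(y_manual, y_torch, atol=1e-6))   # True
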
@user-wr4yl7tx3w · 1 year ago
Given that we have 8 heads, is it 512 / 8, which is 64? Are we actually going to split the 512 into 8 equal 64-length parts?
@CodeEmporium · 1 year ago
In theory, that's what's happening. In code, we make "8 parallel heads" by introducing another dimension. For example, vectors of shape 30 (max sequence length) x 10 (batch size) x 512 (embedding size) would form query, key, and value tensors of shape 30 x 10 x 8 x 64. So there is a new dimension for heads that essentially acts like another batch dimension for more parallel computation.
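A sketch of that reshape, using the 30 x 10 x 512 example from the reply above (8 heads assumed):

import torch

seq_len, batch, d_model, num_heads = 30, 10, 512, 8
head_dim = d_model // num_heads                              # 512 / 8 = 64

q = torch.randn(seq_len, batch, d_model)
q_heads = q.reshape(seq_len, batch, num_heads, head_dim)     # 30 x 10 x 8 x 64
print(q_heads.shape)
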
@saurabhnirwan549 · 1 year ago
Is PyTorch better for NLP tasks than TensorFlow?
@CodeEmporium · 1 year ago
I wouldn't necessarily say that's the case. TensorFlow and PyTorch are frameworks we can use for building these complex models. PyTorch might be easier to use since you don't need to code out the tensors themselves. But TensorFlow (or even going as low-level as NumPy) can be used to train models too.
@saurabhnirwan549 · 1 year ago
@@CodeEmporium Thanks for clarifying
@ProdbyKreeper · 4 months ago
appreciate!
@darshantank554 · 1 year ago
🔥🔥🔥
@CodeEmporium · 1 year ago
:) thank you
@himanshusingh2980 · 7 months ago
Really want to hear the Indian accent of this guy 😅😂