Great lecture, and free. Thank you, University of Michigan and Professor Justin.
@temurochilov 2 years ago
Thank you. I found answers to questions I had been looking for for a long time.
@hasan0770816268 3 years ago
33:10 stride, 53:00 batch normalization
@alokoraon1475 10 months ago
This is a great package for my university course. ❤
@faranakkarimpour3794 2 years ago
Thank you for the great course.
@tatianabellagio3107 3 years ago
Amazing! PS: Although I feel sorry for the guy with the coughing attack...
@kobic8 1 year ago
Yeah, it kinda disturbed my concentration. 2019, right before COVID struck the world, haha 😷
@rajivb9493 3 years ago
At 35:09, the expression for the output size with strided convolution is (W − K + 2P)/S + 1. For W=7, K=3, P=(K−1)/2=1 and S=2, we get (7 − 3 + 2·1)/2 + 1 = 3 + 1 = 4. However, the slide at the right-hand corner shows the output as 3×3 instead of 4×4... is that correct?
@DED_Search 3 years ago
I have the same question.
@krishnatibrewal5546 3 years ago
These are different situations: the slide's calculation is done without padding, whereas the formula is written assuming padding.
@rajivb9493 3 years ago
@@krishnatibrewal5546 Thanks a lot, yes, you're right.
@DED_Search 3 years ago
@@krishnatibrewal5546 thanks.
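A quick sketch of that formula in code (a hypothetical helper, not from the lecture), showing how padding accounts for the difference discussed above:

```python
def conv_output_size(w, k, s=1, p=0):
    """Spatial output size of a convolution: floor((W - K + 2P) / S) + 1."""
    return (w - k + 2 * p) // s + 1

# The slide's 3x3 output assumes no padding:
print(conv_output_size(7, 3, s=2, p=0))  # 3
# With padding P = 1, the same formula gives the 4x4 from the question:
print(conv_output_size(7, 3, s=2, p=1))  # 4
```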
@eurekad2070 3 years ago
Thank you for the excellent video! But I have a question: at 1:05:42, after layer normalization, every sample in x has shape 1×D, while μ has shape N×1. How do you perform the subtraction x − μ?
@useForwardMax 3 years ago
I wonder if gamma and beta having shape 1×D is a typo and it should be N×1? If it is not a typo, the subtraction just uses the broadcasting mechanism, as in numpy.
@eurekad2070 3 years ago
@@useForwardMax Broadcasting mechanism makes sense. Thank you.
@vaibhavdixit4377 4 months ago
Just finished watching the lecture. As I understand it, X (1×C×H×W) is the shape of the input consumed at once by the algorithm, and the means and standard deviations are given in terms of the batch size (N×1×1×1) because each value uniquely corresponds to one input (1×C×H×W). A late reply, but leaving it here in case someone else scrolls through with a similar question!
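A minimal numpy sketch of that broadcasting (shapes chosen to match the 2D N×D case discussed above; an illustration, not the lecture's code):

```python
import numpy as np

N, D = 4, 8
x = np.random.randn(N, D)              # batch of N samples, D features each

# Layer norm: statistics are computed per sample, across the D features.
mu = x.mean(axis=1, keepdims=True)     # shape (N, 1)
sigma = x.std(axis=1, keepdims=True)   # shape (N, 1)

# Broadcasting stretches the (N, 1) statistics across the D columns,
# so the (N, D) - (N, 1) subtraction is well defined.
x_hat = (x - mu) / (sigma + 1e-5)

gamma = np.ones((1, D))                # learnable scale, broadcast over the batch
beta = np.zeros((1, D))                # learnable shift
out = gamma * x_hat + beta
print(out.shape)                       # (4, 8)
```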
@intoeleven 3 years ago
Why don't they use batch norm and layer norm together?
@jijie133 4 years ago
Great.
@DED_Search 3 years ago
1:01:30 What did he mean by "fusing BN with the FC layer or Conv layer"?
@krishnatibrewal5546 3 years ago
You can have conv-pool-batchnorm-relu or fc-bn-relu; batch norm can be inserted between any layers of the network.
@DED_Search 3 years ago
@@krishnatibrewal5546 thanks a lot!
@yahavx 1 year ago
Because both are linear operators, you can simply compose them after training (think of them as matrices A and B; at test time you multiply C = A·B and use that in place of both).
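A sketch of that fusion for a BN layer following a fully connected layer (my own illustration; the helper name and variables are assumptions, not the lecture's code):

```python
import numpy as np

# FC: y = W x + b, then test-time BN: z = gamma * (y - mean) / sqrt(var + eps) + beta.
# Both are affine, so they fold into a single layer z = W_f x + b_f.
def fuse_fc_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    scale = gamma / np.sqrt(var + eps)     # per-output-channel scale
    W_f = W * scale[:, None]               # scale each row of W
    b_f = (b - mean) * scale + beta        # fold the BN shift into the bias
    return W_f, b_f

# Check that the fused layer matches FC followed by BN:
rng = np.random.default_rng(0)
D_in, D_out = 5, 3
W, b = rng.standard_normal((D_out, D_in)), rng.standard_normal(D_out)
gamma, beta = rng.standard_normal(D_out), rng.standard_normal(D_out)
mean, var = rng.standard_normal(D_out), rng.random(D_out)

x = rng.standard_normal(D_in)
z = gamma * (W @ x + b - mean) / np.sqrt(var + 1e-5) + beta
W_f, b_f = fuse_fc_bn(W, b, gamma, beta, mean, var)
print(np.allclose(W_f @ x + b_f, z))       # True
```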
@puranjitsingh1782 3 years ago
Thanks for an excellent video, Justin!! A quick question: how do the conv filters change the 3D input into a 2D output?
@sharath_9246 3 years ago
When you take the dot product of a 3D image (e.g. 3×32×32) with a filter (3×5×5), you get a 2D feature map (28×28): the filter spans all 3 input channels, so each dot product collapses the depth dimension to a single number.
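A naive numpy sketch of that collapse (loops kept for clarity; an illustration, not the lecture's code):

```python
import numpy as np

image = np.random.randn(3, 32, 32)     # C x H x W input
filt = np.random.randn(3, 5, 5)        # one filter, spanning all 3 channels

H_out = 32 - 5 + 1                     # 28 (no padding, stride 1)
out = np.zeros((H_out, H_out))         # one filter -> one 2D feature map
for i in range(H_out):
    for j in range(H_out):
        # The dot product covers the channels AND the 5x5 window,
        # so the depth dimension collapses to a single scalar.
        out[i, j] = np.sum(image[:, i:i+5, j:j+5] * filt)

print(out.shape)                       # (28, 28)
```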
@rajivb9493 3 years ago
For batch normalization at test time, at 59:52, what averaging equations are used for the mean and the standard deviation sigma? During the lecture some mention is made of an exponential moving average of the mean and sigma vectors... please suggest.
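A common form of that exponential moving average (a sketch; the momentum value is an assumption, matching PyTorch's default of 0.1):

```python
import numpy as np

D = 4
running_mean, running_var = np.zeros(D), np.ones(D)
momentum = 0.1                            # assumed; PyTorch's default

for _ in range(100):                      # training loop over dummy batches
    batch = np.random.randn(32, D) * 2 + 5
    # Exponential moving average of the per-batch statistics:
    running_mean = (1 - momentum) * running_mean + momentum * batch.mean(axis=0)
    running_var = (1 - momentum) * running_var + momentum * batch.var(axis=0)

# At test time, normalize with the frozen running statistics:
x = np.random.randn(1, D) * 2 + 5
x_hat = (x - running_mean) / np.sqrt(running_var + 1e-5)
```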
@ibrexg 1 year ago
Well done! Here is more explanation of normalization: kzbin.info/www/bejne/qamooqeggahjl68&ab_channel=NormalizedNerd
@magic4266 1 year ago
Sounds like someone was building Duplo the entire lecture.