Lecture 7: Convolutional Networks

57,066 views

Michigan Online

Comments: 29
@jh97jjjj (1 year ago)
Great lecture for free. Thank you, University of Michigan and Professor Justin.
@temurochilov (2 years ago)
Thank you. I found answers to questions I had been looking for for a long time.
@hasan0770816268 (3 years ago)
33:10 stride, 53:00 batch normalization
@alokoraon1475 (10 months ago)
I have this great package for my university course. ❤
@faranakkarimpour3794 (2 years ago)
Thank you for the great course.
@tatianabellagio3107 (3 years ago)
Amazing! PS: Although I am sorry for the guy with the coughing attack...
@kobic8 (1 year ago)
Yeah, it kind of made it hard to concentrate. That was 2019, right before COVID struck the world, haha 😷
@rajivb9493 (3 years ago)
At 35:09, the expression for the output size of a strided convolution is (W - K + 2P)/S + 1. For W=7, K=3, P=(K-1)/2=1, and S=2, we get (7 - 3 + 2*1)/2 + 1 = 3 + 1 = 4. However, the slide shows the output as 3x3 instead of 4x4 in the right-hand corner. Is that correct?
@DED_Search (3 years ago)
I have the same question.
@krishnatibrewal5546 (3 years ago)
They are two different situations: the slide's calculation is done without padding, whereas the formula is written assuming padding.
@rajivb9493 (3 years ago)
@krishnatibrewal5546 Thanks a lot, yes, you're right.
@DED_Search (3 years ago)
@krishnatibrewal5546 Thanks.
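The formula debated in this thread is easy to check with a small helper. A minimal sketch (the function name `conv_out` is illustrative, not from the lecture):

```python
def conv_out(W, K, S=1, P=0):
    """Output spatial size of a convolution: floor((W - K + 2P) / S) + 1."""
    return (W - K + 2 * P) // S + 1

# Slide's example: 7x7 input, 3x3 kernel, stride 2, no padding -> 3x3
print(conv_out(7, 3, S=2, P=0))  # 3
# Same settings with "same"-style padding P = (K - 1) // 2 = 1 -> 4x4
print(conv_out(7, 3, S=2, P=1))  # 4
```

This confirms the resolution above: both answers are correct, they just assume different padding.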
@eurekad2070 (3 years ago)
Thank you for the excellent video! But I have a question: at 1:05:42, after layer normalization, every sample in x has shape 1xD, while μ has shape Nx1. How do you perform the subtraction x-μ?
@useForwardMax (3 years ago)
I wonder if gamma and beta having shape 1 x D is a typo and should be N x 1. If it is not a typo, the subtraction just uses a broadcasting mechanism like NumPy's.
@eurekad2070 (3 years ago)
@useForwardMax The broadcasting mechanism makes sense. Thank you.
@vaibhavdixit4377 (4 months ago)
Just finished watching the lecture. As I understand it, X (1 x C x H x W) is the shape of the input consumed at once by the algorithm, and the calculated means and standard deviations are given in terms of batch size (N x 1 x 1 x 1), since each value uniquely corresponds to one input (1 x C x H x W). It is a late reply, but I am posting it in case someone else scrolls through with a similar question!
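The broadcasting question in this thread can be made concrete in NumPy. A small sketch (shapes follow the lecture's N x D setting; the variable names are illustrative):

```python
import numpy as np

N, D = 4, 3
x = np.random.randn(N, D)

# Batch norm: statistics per feature, shape (1, D); broadcasts across rows.
mu_bn = x.mean(axis=0, keepdims=True)   # (1, D)
x_bn = x - mu_bn                        # (N, D) via broadcasting

# Layer norm: statistics per sample, shape (N, 1); broadcasts across columns.
mu_ln = x.mean(axis=1, keepdims=True)   # (N, 1)
x_ln = x - mu_ln                        # (N, D) via broadcasting

print(x_bn.shape, x_ln.shape)  # (4, 3) (4, 3)
```

Either shape, (1, D) or (N, 1), subtracts cleanly from an (N, D) array; broadcasting stretches the size-1 axis to match.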
@intoeleven (3 years ago)
Why don't they use batch norm and layer norm together?
@jijie133 (4 years ago)
Great.
@DED_Search (3 years ago)
1:01:30 What did he mean by "fusing BN with the FC layer or Conv layer"?
@krishnatibrewal5546 (3 years ago)
You can have conv-pool-batchnorm-relu or fc-bn-relu; batch norm can be inserted between any layers of the network.
@DED_Search (3 years ago)
@krishnatibrewal5546 Thanks a lot!
@yahavx (1 year ago)
Because both are linear operators, you can simply compose them after training (think of them as matrices A and B; at test time you multiply C = A*B and use that in place of both).
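The fusion described above can be verified numerically: at test time batch norm is an affine map, so it folds into the preceding FC layer's weights and bias. A minimal NumPy sketch (all names and values are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
D_in, D_out = 5, 4
x = rng.standard_normal((2, D_in))

# FC layer: y = x @ W + b
W = rng.standard_normal((D_in, D_out))
b = rng.standard_normal(D_out)

# Batch norm at test time with fixed (running) statistics:
# z = gamma * (y - mean) / sqrt(var + eps) + beta
gamma = rng.standard_normal(D_out)
beta = rng.standard_normal(D_out)
mean = rng.standard_normal(D_out)
var = rng.random(D_out) + 0.1
eps = 1e-5

# Fold the BN affine transform into the FC weights and bias.
scale = gamma / np.sqrt(var + eps)
W_fused = W * scale                    # scales each output column
b_fused = (b - mean) * scale + beta

two_step = gamma * ((x @ W + b) - mean) / np.sqrt(var + eps) + beta
one_step = x @ W_fused + b_fused
print(np.allclose(two_step, one_step))  # True
```

After fusion, the network pays for one matrix multiply instead of a matrix multiply plus a normalization, which is why BN is free at inference time.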
@puranjitsingh1782 (3 years ago)
Thanks for an excellent video, Justin!! A quick question: how do the conv filters change the 3D input into a 2D output?
@sharath_9246 (3 years ago)
When you take the dot product of a 3D image (3x32x32) with a filter (3x5x5), you get a 2D feature map (28x28), because the dot product between each image patch and the filter collapses the channel dimension.
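The answer above can be spelled out with an explicit sliding window in NumPy. A sketch assuming stride 1 and no padding (loop-based for clarity, not efficiency):

```python
import numpy as np

img = np.random.randn(3, 32, 32)   # C x H x W input
filt = np.random.randn(3, 5, 5)    # one filter spans all 3 input channels

H_out = 32 - 5 + 1                 # 28, from (W - K)/S + 1 with S=1, P=0
out = np.zeros((H_out, H_out))     # 2D activation map for this single filter
for i in range(H_out):
    for j in range(H_out):
        # The sum runs over channels AND the 5x5 window, collapsing depth.
        out[i, j] = np.sum(img[:, i:i+5, j:j+5] * filt)

print(out.shape)  # (28, 28)
```

Each of the 28x28 positions produces one scalar, so one filter yields one 2D map; a bank of F filters stacks F such maps into an F x 28 x 28 output.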
@rajivb9493 (3 years ago)
For batch normalization at test time at 59:52, what averaging equations are used to average the mean and standard deviation sigma? During the lecture some mention is made of an exponential mean of the mean and sigma vectors. Please advise.
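The exponential averaging the question refers to is typically an exponential moving average (EMA) of the per-batch statistics, maintained during training and then frozen for test time. A sketch under that assumption (the momentum value 0.9 is a common choice, not specified in the lecture):

```python
import numpy as np

momentum = 0.9                       # fraction of the old estimate to keep
running_mean = np.zeros(4)           # D = 4 features
running_var = np.ones(4)

for step in range(100):              # simulated training iterations
    batch = np.random.randn(32, 4)   # N x D minibatch
    mu = batch.mean(axis=0)
    var = batch.var(axis=0)
    # EMA update: blend the old running estimate with the new batch stats.
    running_mean = momentum * running_mean + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var

# At test time these fixed estimates replace the per-batch mean and variance,
# so the output for one example no longer depends on the rest of the batch.
print(running_mean.shape, running_var.shape)  # (4,) (4,)
```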
@ibrexg (1 year ago)
Well done! Here is more explanation of normalization: kzbin.info/www/bejne/qamooqeggahjl68&ab_channel=NormalizedNerd
@magic4266 (1 year ago)
Sounds like someone was building Duplo the entire lecture.
@brendawilliams8062 (1 year ago)
Thomas the Tank Engine?
@park5605 (7 months ago)
ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem ahem . ahem ahem. ahe ahe he he HUUUJUMMMMMMMMMMMM