Thank you very, very much, Mr. Andrew Ng. You made all my tasks super easy; now it's a piece of cake. You can't believe how much I struggled to demystify all the issues around deep learning, especially conv nets.
@sandyz1000 · 5 years ago
AlexNet was remarkable when it first came out. Setting up two GPUs for training was very difficult, and communication between GPUs using MPI required a great deal of effort. Those guys were real geeks to figure out such a solution.
@SuperSmitty9999 · 2 years ago
Setting up one GPU with TensorFlow today is a feat of engineering.
@yinghong3543 · 5 years ago
At 8:40, in AlexNet, why did the channel count suddenly shrink from 384 to 256?
@CyborgGaming99 · 5 years ago
It's just the number of filters they used. They decided to increase the filter count from 96 to 256 to 384 at first, and probably when they realized their results weren't changing much, they brought the filter count back down. The number of channels is just the number of filters they chose; there is no "explanation" or math formula for why they picked those jumps (they probably explain it in the paper).
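To make the filter-count progression concrete, here is a minimal Keras sketch of AlexNet's conv stack as drawn in the lecture (96 → 256 → 384 → 384 → 256 filters). The strides and pool sizes follow the original paper; treat this as illustrative, since the exact counts are design choices rather than anything derived.

```python
import tensorflow as tf

# Sketch of AlexNet's conv stack; filter counts 96 -> 256 -> 384 -> 384 -> 256.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(96, 11, strides=4, activation="relu",
                           input_shape=(227, 227, 3)),                  # -> 55x55x96
    tf.keras.layers.MaxPooling2D(3, strides=2),                         # -> 27x27x96
    tf.keras.layers.Conv2D(256, 5, padding="same", activation="relu"),  # -> 27x27x256
    tf.keras.layers.MaxPooling2D(3, strides=2),                         # -> 13x13x256
    tf.keras.layers.Conv2D(384, 3, padding="same", activation="relu"),  # -> 13x13x384
    tf.keras.layers.Conv2D(384, 3, padding="same", activation="relu"),  # -> 13x13x384
    tf.keras.layers.Conv2D(256, 3, padding="same", activation="relu"),  # -> 13x13x256
    tf.keras.layers.MaxPooling2D(3, strides=2),                         # -> 6x6x256
])
model.summary()  # prints the shape progression noted in the comments above
```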
@shwethasubbu3385 · 6 years ago
At 13:37, why do we have [CONV 64] ×2? Why do we perform the CONV operation twice (when we get the same dimensions each time)? Also, what is the advantage of increasing the number of channels while decreasing the height and width?
@CyborgGaming99 · 5 years ago
Well, you don't have to change the dimensions every time in order to get results. It was just their way of trying to detect patterns in images; it just looks unusual.
@MuhannadGhazal · 4 years ago
p = 1 here, so the dimensions stayed the same.
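A minimal sketch of that first VGG block, assuming TensorFlow/Keras: two stacked 3×3 "same"-padded convolutions keep the 224×224 spatial size while raising the channel count to 64. Stacking two convs also adds a second nonlinearity, and the pair has the receptive field of a single 5×5 filter with fewer parameters.

```python
import tensorflow as tf

# Two stacked 3x3 "same"-padded convolutions, as in VGG-16's first block:
# height and width are preserved, only the channel count changes.
x = tf.random.normal((1, 224, 224, 3))
conv1 = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")
conv2 = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")
y = conv2(conv1(x))
print(y.shape)  # (1, 224, 224, 64)
```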
@sau002 · 6 years ago
Excellent video, thank you very much. I have a question: when we apply the second set of 16 convolution filters, do we not apply each filter to each of the 6 channels we produced in the previous layer? Shouldn't the final output after the second pooling therefore be 5×5 × 16 filters × 6 filters = 400 × 6 = 2400?
@sau002 · 6 years ago
I think I understand why. Every filter should be visualized as a 3-D matrix, i.e., a volume, and each slice of the volume operates on one of the channels. E.g., for an R,G,B picture, each first-layer filter has 3 matrices: one for R, one for G, one for B. The 3 matrices in a single filter operate on the R,G,B image to produce a single 2-D matrix. The 6 filters in the first layer then produce 6 two-dimensional matrices; think of that as a picture with 6 channels. So in the subsequent filter layer, your input picture is made up of 6 two-dimensional matrices, and each of the 16 filters in the second layer has depth 6, i.e., is a stack of 6 two-dimensional matrices. Therefore 16 of these 6-channel filters operate on the input (which can be thought of as a 6-channel image produced by the first layer's convolution).
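A hand-rolled NumPy sketch of that logic (illustrative, with random values): each of the 16 filters is a 5×5×6 volume that spans all 6 input channels and collapses them into a single 2-D map, so the output depth is 16, not 16 × 6.

```python
import numpy as np

x = np.random.randn(14, 14, 6)           # input: 6 channels from the previous layer
filters = np.random.randn(16, 5, 5, 6)   # 16 filters, each a 5x5x6 volume

out = np.zeros((10, 10, 16))             # spatial size: 14 - 5 + 1 = 10
for k in range(16):                      # one output channel per filter
    for i in range(10):
        for j in range(10):
            # element-wise product over the full 5x5x6 window, then a single sum
            out[i, j, k] = np.sum(x[i:i+5, j:j+5, :] * filters[k])

print(out.shape)  # (10, 10, 16) -- pooling this to 5x5x16 gives 400 values, not 2400
```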
@sau002 · 6 years ago
The previous videos from ANG have the answer to my question; I have summarized it above.
@pengsun1355 · 4 years ago
@@sau002 Good job :)
@nikilragav · 5 months ago
At 3:12, why is the last-layer ŷ drawn as a single node? Shouldn't it be drawn as a 1×10, similar to the outputs of the FC layers? And what's the nonlinearity for the FC layers, ReLU?
@oktayvosoughi6199 · 1 year ago
Do you have the papers the professor mentioned in the lecture?
@sheethalgowda6616 · 4 years ago
How does 14×14×6 turn into 10×10×16? I mean, we have six 14×14 filtered output images; how do we apply 16 filters to those six 14×14 outputs?
@anhphan8643 · 3 years ago
@@awest11000 So how do you know how many filters fit with the next layer?
@LogicalFootball · 2 years ago
The critical part is that you SUM over the depth dimension (6). So when a 5×5×6 filter is applied to a 14×14×6 tensor, it yields 10×10×1. (If you also slid the filter along the depth axis, a 5×5×5 filter on a 14×14×6 tensor would yield 10×10×2, a 5×5×4 would yield 10×10×3, and so on; but in a standard conv layer the filter depth always matches the input depth, so each filter contributes exactly one output channel.)
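The same shape arithmetic can be checked with a framework. In this illustrative Keras snippet, the layer builds each of the 16 kernels with depth 6 automatically, to match the input channels:

```python
import tensorflow as tf

x = tf.random.normal((1, 14, 14, 6))
conv = tf.keras.layers.Conv2D(16, 5, padding="valid")
print(conv(x).shape)      # (1, 10, 10, 16)
print(conv.kernel.shape)  # (5, 5, 6, 16) -- height, width, in_channels, filters
```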
@navneetchaudhary4842 · 4 years ago
As we see in LeNet, or in our own conv network, the dimensions decrease each time we apply a filter: in LeNet, convolving 32×32×1 with six 5×5 filters gives 28×28×6. But in VGG-16 the spatial size stays the same every time, e.g., 224×224×3 results in 224×224×64, and only the number of filters changes. Can someone help me with this or explain it?
@legacies9041 · 4 years ago
The block sizes do not change in VGG because the authors use zero padding throughout. I hope this helps.
@ayushyarao9693 · 4 years ago
I think Joe meant that suitable padding is used to make sure they are both the same size, which must be 2.
@computing_T · 1 year ago
@@ayushyarao9693 p = 1: (n + 2p − f)/s + 1 = (224 + 2(1) − 3)/1 + 1 = 224. Answering 3 years after the comment; I wrote it in case it helps whoever is learning from this now and came here with the same doubt.
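That formula is easy to keep around as a helper. A small pure-Python sketch, applied to the numbers from this thread and from LeNet:

```python
def conv_output_size(n, f, p=0, s=1):
    """floor((n + 2p - f) / s) + 1 -- the formula quoted above."""
    return (n + 2 * p - f) // s + 1

print(conv_output_size(224, f=3, p=1))  # 224: "same" padding in VGG
print(conv_output_size(32, f=5))        # 28:  LeNet's first conv
print(conv_output_size(14, f=5))        # 10:  LeNet's second conv
```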
@kiranarun1868 · 4 years ago
After same padding, how did 27×27×96 become 27×27×256?
@alexiafairy · 4 years ago
It's a conv layer: since it uses same padding, the height and width remained 27×27, but they used 256 filters (i.e., output channels), so the dimensions became 27×27×256.
@rahul25iit · 1 year ago
@@alexiafairy Andrew doesn't explicitly mention using 256 filters.
@devanshgoel3433 · 1 year ago
@@rahul25iit That's because, if you watch the playlist in order, you learn that such things are assumed by default when he doesn't mention them explicitly.
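For what it's worth, the same check in Keras (illustrative): 256 "same"-padded 5×5 filters keep the spatial size, and the channel count becomes the filter count.

```python
import tensorflow as tf

x = tf.random.normal((1, 27, 27, 96))
y = tf.keras.layers.Conv2D(256, 5, padding="same")(x)
print(y.shape)  # (1, 27, 27, 256)
```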
@jacobjonm0511 · 2 years ago
It is confusing: is the kernel 3×3×3 or 3×3? I assume that for RGB images it is 3×3×3.
@gerrardandeminem · 1 year ago
It is 3×3×(the number of filters in the previous layer).
@jacobjonm0511 · 1 year ago
@@gerrardandeminem Are you sure? Based on this video it is 3×3×3: kzbin.info/www/bejne/gpLOq2WDpK2sbNE
@gerrardandeminem · 1 year ago
@@jacobjonm0511 I think Andrew Ng explains this in previous videos of this series. It is an arbitrary choice.
@jacobjonm0511 · 1 year ago
@@gerrardandeminem It is not arbitrary. Here is another video, at 7:23: kzbin.info/www/bejne/pnXHgWOKe9-mpbM
@gerrardandeminem · 1 year ago
@@jacobjonm0511 If you are asking about the first input, then yes, it is 3×3×3, because the kernel depth must match the input's 3 channels. After that it is "arbitrary" only in the sense that the depth follows whatever filter count you chose for the previous layer.
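A small Keras sketch (illustrative) that settles the thread: the 3×3 size and the filter count are chosen, but the kernel depth is always determined by the input channels.

```python
import tensorflow as tf

# Kernel depth is NOT a free choice: it always equals the number of
# input channels. Only the 3x3 size and the filter count are chosen.
rgb = tf.random.normal((1, 224, 224, 3))
conv1 = tf.keras.layers.Conv2D(64, 3, padding="same")
h = conv1(rgb)
print(conv1.kernel.shape)  # (3, 3, 3, 64)  -- depth 3 matches RGB

conv2 = tf.keras.layers.Conv2D(64, 3, padding="same")
conv2(h)
print(conv2.kernel.shape)  # (3, 3, 64, 64) -- depth 64 matches conv1's output
```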
@uchungnguyen1474 · 6 years ago
I have a question: how come we go from 9216 parameters to 4096 parameters? And how do I know how many layers I need?
@adamajinugroho830 · 6 years ago
I haven't followed this video yet; did you mean layers or parameters? The number of layers came from experimenting with the given architecture.
@muhammadharris4470 · 6 years ago
9216 results from flattening the last conv layer (6×6×256). 4096 is not a parameter count but the number of hidden units in that layer. Lastly, the number of layers is a hyperparameter, meaning you have to experiment to find what works best for your problem.
@ThePaypay88 · 4 years ago
The number is just the multiplication width × height × channels. As for how many you need, they just test (or the PhD students test) and report to the advising professor, kek.
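To separate units from parameters with the lecture's AlexNet numbers, a back-of-the-envelope sketch:

```python
# Units vs. parameters for AlexNet's first FC layer (lecture numbers):
flattened = 6 * 6 * 256               # 9216 units from flattening the last conv volume
hidden = 4096                         # a chosen layer width, not a computed one
params = flattened * hidden + hidden  # weight matrix + biases
print(flattened, params)              # 9216 37752832  (~37.8M parameters)
```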
@roshnisingh8342 · 6 years ago
How do we get from 400 to 120 to 84 in the fully connected layers?
@janvonschreibe3447 · 6 years ago
The next layer need not have the same number of nodes as the previous one.
@pallawirajendra · 6 years ago
Each of the 400 nodes is connected to each of the 120 nodes, and each of the 120 nodes is connected to each of the 84 nodes. There is no math here, only experience, to help you decide the number of nodes.
@ritapravadutta7939 · 5 years ago
120 and 84 are just the chosen numbers of nodes for LeNet-5.
@Joshua-dl3ns · 4 years ago
They chose those numbers because they worked best for the model; you have to find out what number of neurons works well for you.
@roshnisingh8342 · 4 years ago
Thank you all for helping out.
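As the replies say, the widths 120 and 84 are design choices; only the parameter counts follow from them. A tiny sketch with the lecture's LeNet-5 sizes (400 → 120 → 84 → 10):

```python
# Parameter count of each fully connected step: weights (n_in * n_out) + biases (n_out)
sizes = [400, 120, 84, 10]
for n_in, n_out in zip(sizes, sizes[1:]):
    print(f"{n_in} -> {n_out}: {n_in * n_out + n_out} parameters")
# 400 -> 120: 48120
# 120 -> 84:  10164
# 84  -> 10:  850
```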
@aayushpaudel2379 · 5 years ago
224 convolved twice with a 3×3 filter should give 220. Help me with this!!
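As the VGG thread above notes, 220 is what you would get without padding; VGG uses "same" padding (p = 1 for a 3×3 filter), so the size is preserved. A quick check:

```python
# Without padding, each 3x3 "valid" conv shrinks the map: 224 -> 222 -> 220.
n = 224
for _ in range(2):
    n = n - 3 + 1          # valid convolution
print(n)                   # 220

# With "same" padding (p = 1 for a 3x3 filter), the size is preserved.
n = 224
for _ in range(2):
    n = n + 2 * 1 - 3 + 1  # (n + 2p - f)/s + 1 with p=1, s=1
print(n)                   # 224
```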