CNN Forward Method - PyTorch Deep Learning Implementation

27,135 views

deeplizard

Comments: 58
@haroldsu 2 years ago
I went over the blog and videos from time to time, and it is really impressive.
@iposipos9342 4 years ago
You guys are really doing a great job. As a beginner, I was struggling to understand this code in the PyTorch documentation, but you've taken all the trouble to break everything down step by step. Great job and highly appreciated. Please make more videos. Thank you!
@Nissearne12 4 years ago
This series is awesome.
@vaibhavkhobragade9773 3 years ago
You have my word! Thanks, @deeplizard team.
@Kevin.Kawchak 1 year ago
Thank you for the discussion
@ellisalva5494 5 years ago
This series is truly amazing! Thank you so much for the wonderful work! Looking forward to the next one!
@deeplizard 5 years ago
Hey Ellis - Forward is now 😉: kzbin.info/www/bejne/bKfaloSgpNp_e6c
@coltenlarsen5134 5 years ago
These videos are amazing. The only wish I have is for more of them.
@deeplizard 5 years ago
Wish granted - kzbin.info/www/bejne/bKfaloSgpNp_e6c
@donfeto7636 2 years ago
Thank you guys for all this good work
@soumyajitsarkar2372 4 years ago
This is just amazing !!!
@ahmedelsayedabdelnabyrefae1365 3 years ago
Your videos have 4 dislikes, and I want to find these 4 people to ask them what the hell they didn't like in this series of explanations :D I started to really enjoy working with PyTorch because of this series of videos. Very, very well done, guys, you are the best.
@LiaRistiana95 5 years ago
Thank you so much for making this content. Learning deep learning is definitely not an easy ride, but your videos and blogs help me a ton. It's really wonderful.
@deeplizard 5 years ago
Hey L R - Thank you. I appreciate that. Glad these videos are helping!
@deeplizard 5 years ago
Check out the corresponding blog and other resources for this video at: deeplizard.com/learn/video/MasG7tZj-hw
@zurmad3487 5 years ago
Thank you. Greetings from Perú 🇵🇪.
@lukaradulovic339 3 years ago
Amazing!
@FarooqComputerVision 5 years ago
Thank you for sharing this video.
@muizahmed3802 1 year ago
Amazing series, glued to it. Why did I find this so late? Feeling sad :(
@raymondlee9956 4 years ago
Hey deeplizard, thanks so much for making these videos! For the line of code in the 4th hidden layer, what is the reason behind the -1 in t.reshape(-1, 12 * 4 * 4)?
@deeplizard 4 years ago
The -1 accounts for the batch size. It tells the reshape method to infer that dimension's length from the total number of elements, which here works out to the number of samples in the batch.
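A minimal sketch of that behavior, assuming a made-up batch of 10 samples with the 12 x 4 x 4 shape produced by the conv/pool layers in this episode:

import torch

t = torch.rand(10, 12, 4, 4)       # pretend output of the second conv/pool block
flat = t.reshape(-1, 12 * 4 * 4)   # -1 is inferred from the remaining elements
print(flat.shape)                  # torch.Size([10, 192]), one row per sample in the batch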
@Beybladelouser 5 years ago
This channel is still alive? Holy Crossentropy!
@deeplizard 5 years ago
You bet your forward pass it is!
@jay-rathod-01 5 years ago
What kind of CHLUEBI is that . Deep lizard is the best. You s.o.b.
@ericrudolph6615 4 years ago
Your tutorial is amazing!! I would love to have a tutorial on speech recognition using MFCCs and a neural network (for example, a TDNN).
@deeplizard 4 years ago
Noted!
@tingnews7273 5 years ago
Before: I wondered how to implement forward propagation. What I learned: 1. There is an input layer, f(x) = x, which most networks don't implement explicitly. 2. Because of the loss function, we don't need softmax during training; but at inference time we may apply it, since having softmax probabilities can be more convenient.
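For readers piecing the forward pass together from these comments, here is a minimal sketch of a network in the style this episode builds. The conv and fc1 sizes are taken from the code quoted later in these comments; the 60-unit fc2 and 10-class output layer are assumptions and may differ from the blog post:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
        self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)
        self.fc1 = nn.Linear(in_features=12 * 4 * 4, out_features=120)
        self.fc2 = nn.Linear(in_features=120, out_features=60)
        self.out = nn.Linear(in_features=60, out_features=10)

    def forward(self, t):
        # (1) input layer: identity, usually left implicit
        t = t
        # (2) and (3) hidden conv layers: conv -> relu -> max pool
        t = F.max_pool2d(F.relu(self.conv1(t)), kernel_size=2, stride=2)
        t = F.max_pool2d(F.relu(self.conv2(t)), kernel_size=2, stride=2)
        # (4) hidden linear layer: flatten, then fully connected
        t = F.relu(self.fc1(t.reshape(-1, 12 * 4 * 4)))
        # (5) hidden linear layer
        t = F.relu(self.fc2(t))
        # (6) output layer: raw logits; softmax is left to the loss function during training
        return self.out(t)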
@tejaswilakshmi4217 4 years ago
What is the use of specifying the class name in super()?
@deeplizard 4 years ago
This was a requirement of Python 2. It's no longer required in Python 3. The blog post for this video has been updated: deeplizard.com/learn/video/MasG7tZj-hw
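A small illustration of the two call styles, using a hypothetical Network class:

import torch.nn as nn

class Network(nn.Module):
    def __init__(self):
        # Python 2 style (still works in Python 3):
        # super(Network, self).__init__()
        # Python 3 style, no class name needed:
        super().__init__()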
@muizzy 4 years ago
First off, I love your videos, they're way underappreciated looking at the amount of views! I did have a question though: It's not entirely clear to me why exactly we're adding activation functions and pooling after every layer. I understand that a ReLU operation removes negative outputs and makes the network non-linear, which allows for more complexity. But what's the intuition behind applying this to every layer? And would you always add a non-linear activation function after a layer to achieve this purpose? Similarly, the max_pooling is usually used as a way to force the network to abstractify. But what makes that necessary after both convolutional layers in this case?
@fardinsaboori8770 3 years ago
Hello, why did you choose 120 as out_features?
@deeplizard 3 years ago
It is an arbitrary choice. Usually, we output less than we input since we are looking to extract features.
@min-youngchoi3833 4 years ago
I have questions! Before flattening t (using reshape), is t a rank-4 tensor (batch, channel, height, width)? And if we have RGB color channels, is it still t.reshape(-1, 12*4*4) when in_channels=3 in the input layer (conv1)?
@deeplizard 4 years ago
Hey Min-Young - Great work going through the course! Yes. The conv and pooling operations do not change the rank of the tensor. The length of the axes can change though. For example, the height and width axes decrease and the channel axis increases. If we have in_channels=3, we still have t.reshape(-1, 12*4*4). This is because we are still instructing the first layer to transform the input (1 channel or 3 channels) into 6 channels. Also, I'd like to mention that you can try this yourself with the code below:

import torch
import torch.nn as nn
import torch.nn.functional as F

t = torch.ones([3, 28, 28])  # RGB image
t.shape

conv1 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5)
conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)

o = F.max_pool2d(conv2(F.max_pool2d(conv1(t.unsqueeze(dim=0)), 2, 2)), 2, 2)
o.shape

Hope this helps! Chris
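For reference, o.shape from that snippet should come out as torch.Size([1, 12, 4, 4]): the three input channels become 6 and then 12, while the two conv/pool stages shrink 28 x 28 down to 4 x 4.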
@magelauditore333 4 years ago
Can anyone please explain why softmax is not used at the output layer?
@urospocek4668 3 years ago
Because we use CrossEntropyLoss(), which applies a (log) softmax to the output internally.
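A minimal sketch of that relationship, using made-up logits and labels just for illustration:

import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)                # raw network outputs for a batch of 4
labels = torch.tensor([3, 7, 0, 1])

loss_a = F.cross_entropy(logits, labels)                   # applies log softmax internally
loss_b = F.nll_loss(F.log_softmax(logits, dim=1), labels)  # the same thing written out
print(torch.isclose(loss_a, loss_b))       # tensor(True)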
@XorAlex 5 years ago
Does he explain anywhere what the relu and max_pool2d operations do?
@deeplizard 5 years ago
These concepts are explained in our Deep Learning Fundamentals series.
Full series: deeplizard.com/learn/playlist/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU
Max pooling: deeplizard.com/learn/video/ZjM_XQa5s6s
Activation functions (relu): deeplizard.com/learn/video/m0pIlLfpXWE
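For a quick feel of the two operations, here is a tiny hand-made example (the values are arbitrary, not from the video):

import torch
import torch.nn.functional as F

t = torch.tensor([[[[ 1., -2.,  3., -4.],
                    [-1.,  2., -3.,  4.],
                    [ 5., -6.,  7., -8.],
                    [-5.,  6., -7.,  8.]]]])        # shape (1, 1, 4, 4)

print(F.relu(t))                                    # negatives clamped to zero
print(F.max_pool2d(t, kernel_size=2, stride=2))     # max of each 2x2 window, shape (1, 1, 2, 2)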
@cenkkol8759 5 years ago
At the fourth layer, shouldn't that be t.reshape(12*4*4, -1)? There must be 12*4*4 rows in the matrix.
@deeplizard 5 years ago
This is not quite right. Here are some details. Our task is to flatten the tensor. This operation is discussed in this episode: deeplizard.com/learn/video/mFAIBMbACMA To flatten each sample, the 12 channels of height 4 and width 4 must be turned into a rank-1 tensor (an array) of length 12 * 4 * 4. This gives us 1 row of 12*4*4 columns per sample. The minus 1 tells the reshape function to figure out the length of the first axis (the rows), which depends on the batch size. Hope this helps. Let me know.
@cenkkol8759 5 years ago
@@deeplizard First of all, thank you for the high-quality content and for replying, I appreciate it. Doesn't the line self.fc1 = nn.Linear(in_features=12*4*4, out_features=120) produce a 120 by 192 matrix? In the 20th video of this series you say that, and I saw the same in every other tutorial. But I do not understand how we are able to multiply a 120 by 192 matrix with a 1 by 192 matrix. It works perfectly when I run the code, but I don't get it. Actually, in TensorFlow that line would produce a 192 by 120 matrix, so it is not a problem there, yet I couldn't understand how PyTorch works in this case.
@cenkkol8759 5 years ago
Now I checked the source code and saw that in the forward method of the Linear class there is a transpose operation before the multiplication.
@deeplizard 5 years ago
Thank you. You are welcome! Let's consider this line: self.fc1 = nn.Linear(in_features=12*4*4, out_features=120) This line creates a layer (called fc1) that accepts an array (rank-1 tensor) containing 192 elements (also called features). The output from this layer is an array (rank-1 tensor) containing 120 elements (also called features). Inside this linear layer (fc1), a 120 x 192 weight matrix is created. This weight matrix stays inside the linear layer and is used to perform the linear transformation. When we call the linear layer (fc1), we pass an array of 192 elements as input. The output is an array of 120 elements. How is this achieved? Well, it's achieved using the 120 x 192 weight matrix. This is what was described in video 20 of the series: deeplizard.com/learn/video/rcc86nXKwkw
@deeplizard 5 years ago
That transpose is happening because the input tensor is on the left side of the operation. In the video #20 demo, the weight matrix was on the left side.
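A small check of what is described above; the sizes match the fc1 layer discussed in this thread, and the manual line just writes the transpose out explicitly:

import torch
import torch.nn as nn

fc1 = nn.Linear(in_features=12 * 4 * 4, out_features=120)
print(fc1.weight.shape)                        # torch.Size([120, 192])

x = torch.rand(1, 192)                         # one flattened sample
out_layer = fc1(x)                             # what the layer computes
out_manual = x @ fc1.weight.t() + fc1.bias     # input on the left, so the weight is transposed
print(torch.allclose(out_layer, out_manual))   # True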
@jay-rathod-01 5 years ago
Cool
@shabnampathan4515 5 years ago
Can you please make a video on how to deploy a pretrained PyTorch GAN model on Android, or how to export a pretrained PyTorch model to Android Studio?
5 years ago
But the PyTorch website uses logsoftmax.
@deeplizard 5 years ago
Yes. The cross_entropy loss function applies a logsoftmax.
@urospocek4668 3 years ago
@@deeplizard Is it okay if we still perform softmax at the end (it is just not necessary), or is it a mistake that will give a wrong answer (and a bad model)? Thank you in advance.
@min-youngchoi3833 4 years ago
((Weight - Filter + 2*Padding) / Stride) + 1, don't forget the maxpooling then, we can get 12*4*4
@donfeto7636 2 years ago
Not weight, it's the input width or height.
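Working the formula through with the settings used in this series (28 x 28 input, 5 x 5 kernels, zero padding, stride 1, 2 x 2 max pooling); treat these settings as assumptions if your network differs:

conv1: (28 - 5 + 2*0) / 1 + 1 = 24, then 2x2 max pool: 24 / 2 = 12
conv2: (12 - 5 + 2*0) / 1 + 1 = 8,  then 2x2 max pool:  8 / 2 = 4

With 12 output channels, the flattened length is 12 * 4 * 4 = 192.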
@aryan_kode 4 years ago
For t = F.softmax(t, dim=1), shouldn't the dim be 0 instead of 1?
@deeplizard 4 years ago
Try it both ways and see which way the output makes more sense. Hint: the first dimension represents the batch. We want a softmax for each image, not for the entire batch.
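A quick way to see the difference, using a made-up batch of 3 outputs over 10 classes:

import torch
import torch.nn.functional as F

t = torch.randn(3, 10)              # (batch, classes)
per_image = F.softmax(t, dim=1)     # each row sums to 1: one distribution per image
per_batch = F.softmax(t, dim=0)     # each column sums to 1: mixes the images together

print(per_image.sum(dim=1))         # tensor([1., 1., 1.])
print(per_batch.sum(dim=0))         # ten 1s, one per class, summed across the batch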
@idristlili3138 4 years ago
this is better than porn.
@deeplizard 4 years ago
😂
@janmichaelbesinga3867 4 years ago
"welcoooooooome"
@xavierhenschel55 2 years ago
FOR ANY NOOBS: import torch.nn.functional as F. I could not figure out why F was not defined; hope this helps someone.
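For reference, a minimal version of the imports this episode's code relies on:

import torch
import torch.nn as nn            # layer classes such as nn.Conv2d and nn.Linear
import torch.nn.functional as F  # stateless ops such as F.relu, F.max_pool2d, F.softmax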