I went over the blog and videos from time to time, and it is really impressive.
@iposipos9342 4 years ago
You guys are really doing a great job. As a beginner, I was struggling to understand this code in the PyTorch documentation, but you've taken the trouble to break everything down step by step. Great job, highly appreciated. Please make more videos. Thank you.
@Nissearne12 4 years ago
This series is awesome.
@vaibhavkhobragade9773 3 years ago
You have my words! Thanks @deeplizard team.
@Kevin.Kawchak 1 year ago
Thank you for the discussion
@ellisalva5494 5 years ago
This series is truly amazing! Thank you so much for the wonderful work! Looking forward to the next one!
@deeplizard 5 years ago
Hey Ellis - Forward is now 😉: kzbin.info/www/bejne/bKfaloSgpNp_e6c
@coltenlarsen5134 5 years ago
These videos are amazing. The only wish I have is for more of them.
Your videos have 4 dislikes, and I want to find those 4 people and ask them what the hell they didn't like about this series of explanations :D I started to really enjoy working with PyTorch thanks to this series of videos. Very, very well done, guys, you are the best.
@LiaRistiana95 5 years ago
Thank you so much for making this content. Learning deep learning is definitely not an easy ride, but your videos and blogs help me a ton. It's really wonderful.
@deeplizard 5 years ago
Hey L R - Thank you. I appreciate that. Glad these videos are helping!
@deeplizard 5 years ago
Check out the corresponding blog and other resources for this video at: deeplizard.com/learn/video/MasG7tZj-hw
@zurmad3487 5 years ago
Thank you. Greetings from Perú 🇵🇪.
@lukaradulovic339 3 years ago
Amazing!
@FarooqComputerVision 5 years ago
Thank you for sharing this video.
@muizahmed3802 1 year ago
Amazing series, I'm glued to it. Why did I find this so late? Feeling sad :(
@raymondlee9956 4 years ago
Hey deeplizard, thanks so much for making these videos! For the line of code in the 4th hidden layer, what is the reason behind the -1 in t.reshape(-1, 12 * 4 * 4)?
@deeplizard 4 years ago
The -1 accounts for the batch size. It tells the reshape method to work out the length of that axis from the number of elements, which corresponds to the number of samples in the batch.
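A quick sketch of what that looks like (the batch size of 10 below is just an illustrative assumption):
import torch

# Simulated output of the second conv/pool block: 10 samples, 12 channels, 4x4 feature maps.
t = torch.randn(10, 12, 4, 4)

# -1 lets reshape infer the length of the first axis (the batch size) on its own.
flat = t.reshape(-1, 12 * 4 * 4)
print(flat.shape)  # torch.Size([10, 192])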
@Beybladelouser 5 years ago
This channel is still alive? Holy Crossentropy!
@deeplizard 5 years ago
You bet your forward pass it is!
@jay-rathod-01 5 years ago
What kind of CHLUEBI is that? deeplizard is the best. You s.o.b.
@ericrudolph6615 4 years ago
Your tutorial is amazing!! I would love to see a tutorial on speech recognition using MFCCs and a neural network (for example, a TDNN).
@deeplizard 4 years ago
Noted!
@tingnews7273 5 years ago
Before: 1. I wondered how to implement the forward propagation. What I learned: 1. There is an input layer, f(x) = x; most networks don't implement it explicitly. 2. Because of the loss function, we don't need softmax during training, but when we run inference we apply it. Maybe having softmax makes the output more convenient.
@tejaswilakshmi4217 4 years ago
What is the use of specifying the class name in super()?
@deeplizard 4 years ago
This was a requirement of Python 2. It's no longer required. The blog post for this video has been updated: deeplizard.com/learn/video/MasG7tZj-hw
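For reference, a minimal sketch of the two super() styles (the fc1 layer below just reuses the sizes discussed elsewhere in this thread):
import torch.nn as nn

class Network(nn.Module):
    def __init__(self):
        # Python 2 style (still valid in Python 3):
        # super(Network, self).__init__()

        # Python 3 style - the class name and self can be omitted:
        super().__init__()
        self.fc1 = nn.Linear(in_features=12 * 4 * 4, out_features=120)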
@muizzy 4 years ago
First off, I love your videos; they're way underappreciated given the number of views! I did have a question, though: it's not entirely clear to me why exactly we're adding activation functions and pooling after every layer. I understand that a ReLU operation removes negative outputs and makes the network non-linear, which allows for more complexity. But what's the intuition behind applying this to every layer? And would you always add a non-linear activation function after a layer to achieve this purpose? Similarly, max pooling is usually used as a way to force the network to abstract. But what makes that necessary after both convolutional layers in this case?
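For context, a minimal sketch of the conv -> relu -> pool pattern the question refers to, using the layer settings quoted elsewhere in this thread (the single 28x28 grayscale input is an assumption):
import torch
import torch.nn as nn
import torch.nn.functional as F

conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)

t = torch.randn(1, 1, 28, 28)  # one grayscale 28x28 image

# Each conv block repeats the same pattern: convolution, then relu, then max pooling.
t = F.max_pool2d(F.relu(conv1(t)), kernel_size=2, stride=2)  # -> (1, 6, 12, 12)
t = F.max_pool2d(F.relu(conv2(t)), kernel_size=2, stride=2)  # -> (1, 12, 4, 4)

t = t.reshape(-1, 12 * 4 * 4)  # flatten before the linear layers
print(t.shape)  # torch.Size([1, 192])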
@fardinsaboori8770 3 years ago
Hello, why did you choose 120 as out_features?
@deeplizard 3 years ago
It is an arbitrary choice. Usually, we output fewer features than we input since we are looking to extract features.
@min-youngchoi3833 4 years ago
I have questions! Before flattening t (using reshape), is t a rank-4 tensor (batch, channel, height, width)? And if we have RGB color channels, is it still t.reshape(-1, 12*4*4) when in_channels=3 in the input layer (conv1)?
@deeplizard 4 years ago
Hey Min-Young - Great work going through the course! Yes. The conv and pooling operations do not change the rank of the tensor. The lengths of the axes can change, though. For example, the height and width axes decrease and the channel axis increases. If we have in_channels=3, we still have t.reshape(-1, 12*4*4). This is because we are still instructing the first layer to transform the input (1 channel or 3 channels) into 6 channels. Also, I'd like to mention that you can try this yourself with the code below:
import torch
import torch.nn as nn
import torch.nn.functional as F

t = torch.ones([3, 28, 28])  # RGB image
t.shape

conv1 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5)
conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)

o = F.max_pool2d(conv2(F.max_pool2d(conv1(t.unsqueeze(dim=0)), 2, 2)), 2, 2)
o.shape

Hope this helps! Chris
@magelauditore333 4 years ago
Can anyone please explain why softmax is not used at the output layer?
@urospocek4668 3 years ago
Because we use CrossEntropyLoss(), which automatically performs softmax on the output.
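A small sketch (with made-up logits) showing that CrossEntropyLoss works on raw outputs by applying log-softmax internally:
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, -1.0]])  # raw network output, no softmax applied
target = torch.tensor([0])                 # correct class index

loss = nn.CrossEntropyLoss()(logits, target)

# Equivalent computation done by hand: log-softmax followed by negative log-likelihood.
manual = F.nll_loss(F.log_softmax(logits, dim=1), target)
print(loss.item(), manual.item())  # the two values match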
@XorAlex 5 years ago
Does he explain anywhere what the relu and max_pool2d operations do?
@deeplizard 5 years ago
These concepts are explained in our Deep Learning Fundamentals series. Full series: deeplizard.com/learn/playlist/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU Max pooling: deeplizard.com/learn/video/ZjM_XQa5s6s Activation functions (relu): deeplizard.com/learn/video/m0pIlLfpXWE
@cenkkol8759 5 years ago
At the fourth layer, shouldn't that be t.reshape(12*4*4, -1)? There must be 12*4*4 rows in the matrix.
@deeplizard 5 years ago
This is not quite right. Here are some details. Our task is to flatten the tensor. This operation is discussed in this episode: deeplizard.com/learn/video/mFAIBMbACMA To flatten the tensor, the 12 channels of height 4 and width 4 must be turned into a rank-1 tensor (an array) of length 12 * 4 * 4. This gives us 1 row of 12*4*4 columns. The minus one tells the reshape function to figure out the length of the first axis (the rows), which depends on the batch size. Hope this helps. Let me know.
@cenkkol8759 5 years ago
@@deeplizard First of all, thank you for the high-quality content and for replying. I appreciate it. Doesn't the line self.fc1 = nn.Linear(in_features=12*4*4, out_features=120) produce a 120 by 192 matrix? In the 20th video of this series you say that, and I've seen it in every other tutorial. But I don't understand how we are able to multiply a 120 by 192 matrix with a 1 by 192 matrix. It works perfectly when I run the code, but I don't get it. Actually, in TensorFlow that line would produce a 192 by 120 matrix, so it wouldn't be a problem there, yet I couldn't understand how PyTorch works in this case.
@cenkkol8759 5 years ago
Now I checked the source code, and I saw that in the forward method of the Linear class there is a transpose operation before the multiplication.
@deeplizard 5 years ago
Thank you. You are welcome! Let's consider this line: self.fc1 = nn.Linear(in_features=12*4*4, out_features=120) This line creates a layer (called fc1) that accepts an array (rank-1 tensor) containing 192 elements (also called features). The output from this layer is an array (rank-1 tensor) containing 120 elements (also called features). Inside this linear layer (fc1), a 120 x 192 weight matrix is created. This weight matrix stays inside the linear layer and is used to perform the linear transformation. When we call the linear layer (fc1), we pass an array of 192 elements as input. The output is an array of 120 elements. How is this achieved? Well, it's achieved using the 120 x 192 weight matrix. This is what was described in video 20 of the series: deeplizard.com/learn/video/rcc86nXKwkw
@deeplizard 5 years ago
That transpose is happening because the input tensor is on the left side of the operation. In the video #20 demo, the weight matrix was on the left side.
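A minimal sketch of what those two replies describe (a single flattened sample is assumed):
import torch
import torch.nn as nn

fc1 = nn.Linear(in_features=12 * 4 * 4, out_features=120)

# The weight matrix lives inside the layer with shape (out_features, in_features).
print(fc1.weight.shape)  # torch.Size([120, 192])

x = torch.randn(1, 192)  # one flattened sample
print(fc1(x).shape)      # torch.Size([1, 120])

# Internally the layer computes x @ weight.t() + bias, which is why the transpose
# shows up in the source even though the weight is stored as 120 x 192.
manual = x @ fc1.weight.t() + fc1.bias
print(torch.allclose(fc1(x), manual))  # True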
@jay-rathod-01 5 years ago
Cool
@shabnampathan4515 5 years ago
Can you please make a video on how to deploy a pretrained PyTorch GAN model on Android, or how to export a pretrained PyTorch model to Android Studio?
5 years ago
But the PyTorch website uses logsoftmax.
@deeplizard 5 years ago
Yes. The cross_entropy loss function applies a logsoftmax.
@urospocek4668 3 years ago
@@deeplizard Is it OK if we still apply softmax at the end (it's just not necessary), or is that a mistake that would give wrong answers (and a bad model)? Thanks in advance.
@min-youngchoi3833 4 years ago
((Weight - Filter + 2*Padding) / Stride) + 1, don't forget the maxpooling then, we can get 12*4*4
@donfeto7636 2 years ago
Not Weight; it's the input width or height.
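Worked through for this network (28x28 input, 5x5 kernels, no padding, stride 1, with 2x2 max pooling after each conv): conv1 gives (28 - 5 + 0)/1 + 1 = 24, and pooling halves it to 12; conv2 gives (12 - 5 + 0)/1 + 1 = 8, and pooling halves it to 4. With 12 output channels, the flattened length is 12 * 4 * 4 = 192.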
@aryan_kode 4 years ago
t = F.softmax(t, dim=1) - shouldn't the dim be 0 instead of 1?
@deeplizard 4 years ago
Try it both ways and see which way the output makes more sense. Hint: the first dimension represents the batch. We want softmax for each image, not the entire batch.
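A quick sketch with made-up logits for a batch of 2 images and 3 classes:
import torch
import torch.nn.functional as F

t = torch.tensor([[1.0, 2.0, 3.0],
                  [1.0, 1.0, 1.0]])

# dim=1: softmax across the classes of each image - every row sums to 1.
print(F.softmax(t, dim=1).sum(dim=1))  # tensor([1., 1.])

# dim=0: softmax across the batch for each class - every column sums to 1,
# mixing probabilities between different images, which is not what we want.
print(F.softmax(t, dim=0).sum(dim=0))  # tensor([1., 1., 1.])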
@idristlili3138 4 years ago
this is better than porn.
@deeplizard 4 years ago
😂
@janmichaelbesinga3867 4 years ago
"welcoooooooome"
@xavierhenschel55 2 years ago
FOR ANY NOOBS: import torch.nn.functional as F - I could not figure out why F was not defined. Hope this helps someone.