Thank you very much for this! I am currently doing my undergrad thesis in PyTorch and freaking out. Your explanation is quite clear and helpful. Keep going ^^
@tudoronrec A year ago
same :D
@observor-ds3ro 6 months ago
That was excellent! A great help for me; you described it as clearly and cleanly as possible.
@tudoronrec A year ago
Thank you for the information!
@kvdiatpune8753 A year ago
Thanks, nicely explained.
@shinchannohara3927 9 months ago
Will the same code, with num-out set to 200, work for 200-class classification with the same great accuracy?
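For what it's worth, switching the head to 200 outputs is mechanically simple; the accuracy you get depends on your data, not just the code. A minimal sketch with torchvision's AlexNet (the 200-class figure mirrors the question; index 6 is AlexNet's final Linear layer in torchvision):

    import torch.nn as nn
    from torchvision import models

    model = models.alexnet(pretrained=True)
    # AlexNet's classifier ends in Linear(4096, 1000); swap it for a
    # 200-way head so the network emits one logit per class.
    model.classifier[6] = nn.Linear(4096, 200)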
@user-ot6yk6ie2f A year ago
I can save this model as usual (using AlexNet) and use it with another model in OpenCV, right?
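In case it helps: a model saved with torch.save loads back into PyTorch, but OpenCV's dnn module cannot read it directly; the usual route is an ONNX export. A rough sketch, with a placeholder file name:

    import torch
    from torchvision import models
    import cv2

    model = models.alexnet(pretrained=True)
    model.eval()

    # Export to ONNX so OpenCV's dnn module can load the network.
    dummy = torch.randn(1, 3, 224, 224)  # AlexNet expects 3x224x224 input
    torch.onnx.export(model, dummy, "alexnet.onnx")

    # Load the exported file on the OpenCV side.
    net = cv2.dnn.readNetFromONNX("alexnet.onnx")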
@mightylearning A year ago
When I train a VGG model with 38 classes, I get this error when I use the summary tool: RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[2, 38, 224, 224] to have 3 channels, but got 38 channels instead. How do I solve it?
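The message means the first conv layer received a 38-channel image: a pretrained VGG expects input shaped (batch, 3, height, width), and it looks like the class count (38) ended up in the channel slot of the summary call. A guess at the fix, assuming the summary tool is torchsummary and model is the VGG from the tutorial:

    from torchsummary import summary

    # The 38 classes belong in the final layer, not in the input shape.
    # The conv stack of a pretrained VGG expects 3-channel RGB images:
    summary(model, input_size=(3, 224, 224))  # not (38, 224, 224)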
@vikramrs4191 2 years ago
Is there an example of how we can use our own trained models for transfer learning on other images with the Keras library?
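A rough Keras sketch of that idea, reusing a previously saved model of your own as a frozen base (the file name and class count are placeholders, not something from the video):

    from tensorflow import keras

    # Load your own previously trained model and freeze its weights.
    base = keras.models.load_model("my_model.h5")
    base.trainable = False

    # Stack a fresh head on top for the new image task.
    model = keras.Sequential([
        base,
        keras.layers.Dense(5, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")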
@nagamadhubabuvikkurthi5695 2 years ago
Please tell me how I can build a confusion matrix from this.
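One common approach: run the trained model over the test set, collect predictions, and hand both lists to scikit-learn (model, test_loader, and device are assumed to exist from the tutorial's setup):

    import torch
    from sklearn.metrics import confusion_matrix

    model.eval()
    all_preds, all_labels = [], []
    with torch.no_grad():
        for images, labels in test_loader:
            outputs = model(images.to(device))
            preds = outputs.argmax(dim=1)  # predicted class per sample
            all_preds.extend(preds.cpu().tolist())
            all_labels.extend(labels.tolist())

    print(confusion_matrix(all_labels, all_preds))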
@murtazajabalpurwala8124 3 years ago
Hi, thanks for the video, appreciate it. But I believe this tutorial was more suited to a moderate-to-advanced level; I still had many concepts to dig into, and I thought you were skipping over many things that were still new to me. Maybe you can make another video where you guide through the data loading and training process in more detail. Thanks again.
@DennisMadsen 3 years ago
Noted. Thanks for the input Murtaza :)
@danielac520 3 years ago
Hi! What is the difference between freezing the layers and using model.eval()?
@DennisMadsen 3 years ago
Hi. In eval mode you notify all the layers that you are not in training mode, which affects e.g. dropout and batchnorm layers. Freezing (or wrapping code in no_grad) is often used during training to avoid computing the gradient for a number of layers. So the goals can look similar, but eval changes layer behaviour while freezing controls whether gradients are computed.
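A minimal sketch of the two mechanisms side by side (AlexNet is just an example model):

    import torch
    from torchvision import models

    model = models.alexnet(pretrained=True)

    # eval mode: switches dropout/batchnorm to inference behaviour;
    # gradients are still computed if you call backward().
    model.eval()

    # Freezing: stop gradient computation for chosen parameters,
    # e.g. the whole convolutional feature extractor.
    for param in model.features.parameters():
        param.requires_grad = False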
@danielac520 3 years ago
@@DennisMadsen Thanks for your answer! I understand. Anyway, computing the gradient does not imply updating the parameters, right?
@DennisMadsen 3 years ago
@@danielac520 Glad it was helpful. And true: the parameters are not updated when the gradients are computed. You would then need to update them with something like optimizer.step().
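A tiny self-contained example of that separation (toy model and data, just to show which call does what):

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 2)  # toy model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    inputs = torch.randn(4, 10)
    targets = torch.randint(0, 2, (4,))

    optimizer.zero_grad()                   # clear old gradients
    loss = loss_fn(model(inputs), targets)
    loss.backward()                         # computes gradients only
    optimizer.step()                        # this call updates the parameters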
@danielac520 3 years ago
@@DennisMadsen Got it :)
@tycstahX 3 years ago
Great stuff!
@muhammadzubairbaloch3224 4 years ago
Sir, please make a lecture on GANs.
@DennisMadsen 4 years ago
Hereby put on my video list, Muhammad. Thanks a lot for the suggestion!