This tutorial is one of the best out there. Thank you so much for making this. It is really appreciated.
@patloeber 3 years ago
Thanks! Glad it's helpful.
@СергейЯкушев-ъ3д A year ago
@@patloeber Thanks a lot! It is brilliant.
@bpmoran89 3 years ago
Nice video. One nit: it's important to remember that confidence scores != probabilities. They may correlate HIGHLY with probabilities, and they may LOOK like probabilistic outputs, but they are NOT empirically derived probabilities in the strict sense. A model can have a confidence of 0.99, but that is not the probability that the label is correct. If a research scientist would like, they can calibrate confidence scores against empirical probabilities using test sets.
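For instance, here is a rough sketch of such an empirical check (reliability binning; the function, variable names, and the 10-bin choice are my own assumptions, not anything from the video):

import numpy as np

def reliability_bins(confidences, correct, n_bins=10):
    # Bin predictions by confidence, then compare the mean confidence
    # in each bin against the empirical accuracy in that bin.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences >= lo) & (confidences < hi)
        if mask.any():
            print(f"conf [{lo:.1f}, {hi:.1f}): mean conf {confidences[mask].mean():.3f}, "
                  f"accuracy {correct[mask].mean():.3f}")

# Hypothetical test-set results: max softmax score and whether the prediction was right
confidences = np.array([0.99, 0.75, 0.62, 0.95, 0.55, 0.98])
correct = np.array([1.0, 1.0, 0.0, 1.0, 0.0, 1.0])
reliability_bins(confidences, correct)

If the two numbers diverge in a bin, the raw confidences are miscalibrated there.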
@Yoyo-sy5kl A year ago
Awesome tutorial. I do find some of the concepts a bit tough to grasp, but reviewing them helps a ton, and I like to do that by rewatching your videos. Keep up the great work; I'm looking forward to checking out other tutorials on your channel.
@aminaleali7161 A year ago
Your videos are not only educational and informative, but also very enjoyable. Thank you!
@tcidude 4 years ago
Very nice, clear, and detailed PyTorch tutorial!!! I haven't been able to find anything as good so far! Please keep up the good work and continue to make more tutorials!
@patloeber 4 years ago
Thanks a lot :) Glad you like it!
@gordonlim2322 4 years ago
The slide at 6:50 was very helpful. Thank you.
@patloeber 4 years ago
You are welcome :)
@MinhLe-xk5rm 4 years ago
Sir, your video was amazing! Thank you for showing how softmax and cross-entropy are implemented in Python!
@patloeber 4 years ago
Glad you like it :)
@annalavrenova5631 4 years ago
Thank you very much for the valuable content! Very helpful PyTorch tutorials!
@patloeber 4 years ago
Glad you like it :)
@Ftur-57-fetr 3 years ago
Lovely approach, clear, structured
@patloeber 3 years ago
Glad you liked it!
@xingfenyizhen A year ago
It's very friendly to beginners like me. Very awesome video and author!
@donfeto7636 A year ago
This should be the softmax for handling multiple examples in a batch:

import numpy as np

def softmax(x):
    # axis=1 normalizes across the classes for every row (sample) in the batch
    return np.exp(x) / np.sum(np.exp(x), axis=1, keepdims=True)

x = np.array([[2, 1, 0.1]], dtype=np.float32)
print(softmax(x))
@grigorijdudnik575 3 years ago
Great video series, man!
@patloeber 3 years ago
Glad you like them!
@gabbebelz7522 8 months ago
RuntimeError: size mismatch (got input: [3], target: [1]). If you get something like this, you haven't put double brackets when declaring the good/bad prediction array: [[2.0, 1.0, 0.1]]. From my understanding it has to do with the outer/inner dimensions.
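To illustrate (the tensor values are the ones from the video; the comments are my own understanding): nn.CrossEntropyLoss expects logits of shape [batch_size, num_classes] and targets of shape [batch_size].

import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss()
Y = torch.tensor([0])  # shape [1]: one sample whose correct class index is 0

# Wrong: shape [3] has no batch dimension, hence the size mismatch
# Y_pred = torch.tensor([2.0, 1.0, 0.1])

# Right: double brackets give shape [1, 3] = 1 sample x 3 class logits
Y_pred = torch.tensor([[2.0, 1.0, 0.1]])
print(loss(Y_pred, Y))  # tensor(0.4170)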
@heathenfire 2 years ago
very nice, clear explanation
@vatsal_gamit 3 years ago
Can anyone please help me understand why there's Y = torch.tensor([0])? Shouldn't there be 3 values inside?
@rubenguerrero3207 9 days ago
simply awesome!
@haroldsu1696 2 years ago
Thank you, Sir. You are doing a great job.
@koraykara6270 3 years ago
Is the cross-entropy loss that you mentioned the negative log-likelihood (NLL)?
@mysteriousXsecret 2 years ago
Does it make sense to apply the softmax function in tandem with the sum-of-squares loss function instead of cross-entropy?
@samas69420 2 years ago
No, you would always get one, because softmax takes a vector while MSE is a scalar.
@asheeshmathur 4 years ago
Very good explanation of multi-class vs binary classification.
@patloeber 4 years ago
Thanks :)
@egeerdem8272 3 years ago
3:31 what does N represent? Also, at 7:49, how do we actually represent and classify the data (as Y = torch.tensor([0]))? I am confused. Changing 0 to 1 and 2 produces results, so I thought they represented positions as in a list (0: [1 0 0], 1: [0 1 0], 2: [0 0 1]); however, that doesn't appear to be the case, since they yield different answers compared to the numpy method (I used the same Y'^).
@albertma1 4 years ago
Thanks for the tutorial!
@waqasahmed8973 4 years ago
Thank you for the video. May I ask where you learned PyTorch? Did you study it in your bachelor's/master's studies?
@patloeber 4 years ago
I have a master's degree where I learned programming and the concepts of ML/DL. I learned PyTorch on my own in my day job. It's not that hard when you understand the underlying concepts and know how to code.
@erfanshayegani3693 A year ago
Great brother! Greaaaat!
@mtkoli A month ago
Very good video, thank you
@andrewliao1140 3 years ago
Hi, great video! Could you explain why you passed in the name of the class and "self" into the super() function?
@patloeber 3 years ago
that was the old syntax, but it's no longer required in the latest Python versions...
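For illustration, a minimal sketch of the two spellings (the class is my own made-up example, not the one from the video):

import torch.nn as nn

class NeuralNet(nn.Module):
    def __init__(self):
        # Old style: name the class and the instance explicitly
        # super(NeuralNet, self).__init__()
        # Python 3 style: the zero-argument form does the same thing
        super().__init__()
        self.linear = nn.Linear(3, 3)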
@fullychen300 4 years ago
Thanks very much! VERY CLEAR explanation!
@patloeber 4 years ago
thanks!
@saifulislamsajol9377 4 years ago
The last 4 minutes of this video are very important. Could you please explain what to do when I am using MSELoss for autoencoder-based networks? For cross-entropy loss it's working (although that's incorrect), but for MSELoss it's not working.
@SuperOnlyP 4 years ago
At 13:00, could you please explain more about the numbers you chose for the multi-output good prediction [[0.1, 1.0, 2.1], [2.0, 1.0, 0.1], [0.1, 3.0, 0.1]], where the outputs correspond to ([2, 0, 1]):
label 2 is predicted: [0.1, 1.0, 2.1]
label 0 is predicted: [2.0, 1.0, 0.1]
label 1 is predicted: [0.1, 3.0, 0.1]
How can this turn out to be good? Why is it not something like "label 2 is predicted: [2.1, 1.0, 0.1]"? As I understand it, 2 will be one-hot encoded as [1, 0, 0], so I guess index [0] of the prediction [2.1, 1.0, 0.1] being the highest number is what should be considered good. Thanks!!
@Небудьбараном-к1м 4 years ago
I also got confused at first. If you index into the values of Y_pred_good, the index of the highest value will match the ground-truth label.
@patloeber 4 years ago
Exactly this! Also, class 2 would be [0 0 1] as one-hot.
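Spelled out for the batch case (the tensors are the ones from the video; the argmax check is my own addition):

import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss()
Y = torch.tensor([2, 0, 1])  # ground-truth class index for each of the 3 samples

Y_pred_good = torch.tensor([[0.1, 1.0, 2.1],   # argmax = 2
                            [2.0, 1.0, 0.1],   # argmax = 0
                            [0.1, 3.0, 0.1]])  # argmax = 1

# In every row the highest logit sits at the ground-truth index,
# so this prediction is "good" and the loss is small.
print(torch.argmax(Y_pred_good, dim=1))  # tensor([2, 0, 1])
print(loss(Y_pred_good, Y))  # roughly 0.30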
@goelnikhils A year ago
Amazing Content
@adityashrivastava5018 A year ago
Hey Patrick, great stuff! Can you upload the PPT for this part to your GitHub repository? I checked and it's not there. It would be extremely helpful.
@ahmedchaoukichami9345 A year ago
Thank you so much, it is a very good explanation. Thanks a lot!
@tz8904 4 years ago
Great tutorial, but why do the 3 samples look like [2, 0, 1] and not [[2], [0], [1]]? Thanks!!
@lamnguyentrong275 3 years ago
A little bit confused: I thought the Y true value must be the same size as the Y predicted value. As far as I know, cross-entropy multiplies the list of true-value probabilities with the predicted ones element-wise and then sums them up. But in your case the y_pred value consists of three values; does that mean the y_true has to be the same as well? Hope you can clear that up for me.
@keroldjoumessi 3 years ago
The CrossEntropyLoss function provided by PyTorch applies LogSoftmax plus NLLLoss internally, so it takes raw logits and class indices; no explicit softmax or one-hot encoding is needed...
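A quick sketch of that equivalence (the tensor values are reused from the video; the comparison itself is my own check):

import torch
import torch.nn as nn
import torch.nn.functional as F

Y = torch.tensor([0])
logits = torch.tensor([[2.0, 1.0, 0.1]])  # raw scores, no softmax applied

ce = nn.CrossEntropyLoss()(logits, Y)
nll = F.nll_loss(F.log_softmax(logits, dim=1), Y)
print(torch.allclose(ce, nll))  # True: CrossEntropyLoss = LogSoftmax + NLLLoss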
@piotr780 2 years ago
Does it work for multi-label classification problems?
@n45a_ A year ago
Well, the first error I figured out before watching this video was using softmax at the end of my model, and the second was using logits as the Y target; I'm learning that just now after playing with my model for weeks... Thanks for pointing that out for others.
@LearningWorldChatGPT 2 years ago
Hi Patrick! One question, please! I trained a model and I'm making some predictions. I'm also trying to use the PyTorch library "from torchmetrics.functional import auroc" to find out how well my model is performing. But when applying that function I receive a tensor with a value of zero --> tensor(0.), and I expected a higher value. My research is a classification task with 48 speakers (multiclass speaker identification). If you could give me a suggestion, I'd be grateful!

# We perform a classification:
audio_file = '../CC_Sara.WAV'
signal, fs = torchaudio.load(audio_file)  # test_speaker: 1
output_probs, score, index, text_lab = classifier.classify_batch(signal)
# print('Target: , Predicted: ' + text_lab[0])

preds = output_probs.softmax(dim=-1)
print(preds.sum())  # --> tensor(1.0000)
print(preds)
# --> tensor([[9.9889e-01, 1.2311e-05, 6.8829e-07, 9.4551e-06, 1.1589e-05, 8.6421e-07,
#              1.0865e-05, 3.1053e-06, 3.6530e-06, 1.7232e-06, 1.4130e-05, 4.6458e-06,
#              3.5194e-07, 2.0069e-05, 4.8732e-06, 1.1485e-06, 6.5748e-04, 3.0096e-05,
#              2.5761e-06, 1.6100e-06, 6.5871e-07, 2.6747e-06, 8.3195e-07, 3.0935e-06,
#              1.1512e-06, 2.0719e-05, 1.1157e-05, 3.2084e-07, 3.7289e-05, 3.2228e-07,
#              2.2807e-06, 1.1052e-05, 9.6865e-05, 1.0193e-05, 2.5367e-06, 6.8669e-06,
#              6.2059e-07, 7.3362e-05, 1.1659e-05, 1.7374e-06, 1.2059e-06, 4.7137e-06,
#              1.2199e-07, 9.8617e-07, 3.7428e-06, 2.1497e-06, 7.7288e-06, 4.6681e-07]])

target = index  # --> tensor([0])
print(target)

import torch
from torchmetrics.functional import auroc
auroc(preds, target, num_classes=48).item()  # --> 0.0
@penugondasaichand692 2 years ago
Regarding super(car1, self).__init__(): can you explain this with an example? What is the difference between a class and a superclass?
@DanielWeikert 4 years ago
If Y is [0] and we have a prediction of shape [1, 3], then is Y automatically one-hot encoded?
@patloeber 4 years ago
Not sure what you mean. Do you mean you have two predictions, 1 and 3, and 1 is the correct class? Then [0] is NOT one-hot encoded; however, it is the right shape for the CrossEntropyLoss. [1, 0] would be one-hot encoded...
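To make the two target formats concrete (this snippet is my own illustration, not from the video):

import torch
import torch.nn as nn

# nn.CrossEntropyLoss wants class indices, not one-hot vectors
Y_index = torch.tensor([0])            # "sample 0 belongs to class 0"
Y_onehot = torch.tensor([[1.0, 0.0]])  # one-hot version of the same label

logits = torch.tensor([[1.5, 0.2]])
print(nn.CrossEntropyLoss()(logits, Y_index))  # works
# Older PyTorch rejects Y_onehot here; versions >= 1.10 also accept
# class-probability targets of that shape.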
@shadowzabyss 4 years ago
@@patloeber So I don't understand what the prediction values represent if the correct class is not one-hot encoded...
@sunnybwoy4547 3 years ago
Would multiple samples mean a batch size greater than 1? That one confused me a bit.
@_jiwi2674 4 years ago
Don't you have to divide by N (number of classes)? It seems that you just summed the true Y times the log of the predicted Ys.
@keroldjoumessi 3 years ago
I think N is the number of samples. Thus, he didn't take the mean because there was just one sample.
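A small sketch of the normalized version (modeled on the numpy cross-entropy from the video; the division by N is the line in question):

import numpy as np

def cross_entropy(actual, predicted):
    # actual: one-hot labels, shape [N, num_classes]
    # predicted: softmax probabilities, same shape
    loss = -np.sum(actual * np.log(predicted))
    return loss / float(actual.shape[0])  # divide by N = number of samples

Y = np.array([[1, 0, 0]])            # one sample, class 0
Y_pred = np.array([[0.7, 0.2, 0.1]])
print(cross_entropy(Y, Y_pred))      # -log(0.7), about 0.357; with N = 1 the mean changes nothing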
@turboytytyt 3 years ago
thanks!
@patloeber 3 years ago
thanks for watching!
@TheOraware 3 years ago
I got it now after viewing different videos on cross-entropy. One confusion: in the numpy implementation at 5:38, the predictions Y_pred_good and Y_pred_bad are an output of softmax/sigmoid, right?
@patloeber 3 years ago
yep!
@manalihiremath2805 3 years ago
I am getting this error while using the softmax function: Dimension out of range (expected to be in range of [-2, 1], but got 3)
@patloeber 3 years ago
Your tensors do not have the correct shape
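For context, a small sketch of what triggers that message (my own illustration): the dim argument has to index an existing dimension of the tensor.

import torch

x = torch.tensor([[2.0, 1.0, 0.1]])  # 2-D tensor, so valid dims are 0, 1 (or -1, -2)
print(torch.softmax(x, dim=1))       # normalizes across the 3 class scores
# torch.softmax(x, dim=3)            # -> Dimension out of range ... but got 3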
@manalihiremath2805 3 years ago
Just a general question: even after adding np.random.seed(1), my accuracy value changes after every run. Why?
@joaobarreira5221 3 years ago
Hi, could y_pred_good have negative values?
@islambennani6782 14 days ago
In case it's not working:
Y_pred_good = torch.tensor([2.0, 1.0, 0.1]).unsqueeze(0)  # unsqueeze(0) adds the missing batch dimension