PyTorch Tutorial 08 - Logistic Regression

64,846 views

Patrick Loeber

4 years ago

New Tutorial series about Deep Learning with PyTorch!
⭐ Check out Tabnine, the FREE AI-powered code completion tool I use to help me code faster: www.tabnine.com/?... *
In this part we implement a logistic regression algorithm and apply all the concepts that we have learned so far:
- Training Pipeline in PyTorch
- Model Design
- Loss and Optimizer
- Automatic Training steps with forward pass, backward pass, and weight updates
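The pipeline above can be sketched as follows. This is a minimal sketch, not the video's exact notebook code: it assumes PyTorch and scikit-learn are installed, and variable names are my own.

```python
import torch
import torch.nn as nn
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# 0) prepare data: binary classification on the breast cancer dataset
X, y = datasets.load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1234)

sc = StandardScaler()
X_train = sc.fit_transform(X_train)  # learn mean/std on train, then scale
X_test = sc.transform(X_test)        # reuse the train statistics

X_train = torch.from_numpy(X_train.astype('float32'))
X_test = torch.from_numpy(X_test.astype('float32'))
y_train = torch.from_numpy(y_train.astype('float32')).view(-1, 1)
y_test = torch.from_numpy(y_test.astype('float32')).view(-1, 1)

# 1) model design: one linear layer followed by a sigmoid
class LogisticRegression(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.linear = nn.Linear(n_features, 1)

    def forward(self, x):
        return torch.sigmoid(self.linear(x))

model = LogisticRegression(X_train.shape[1])

# 2) loss and optimizer
criterion = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# 3) training loop: forward pass, backward pass, weight update
for epoch in range(100):
    y_pred = model(X_train)
    loss = criterion(y_pred, y_train)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# evaluation without gradient tracking
with torch.no_grad():
    y_pred = model(X_test)
    acc = y_pred.round().eq(y_test).float().mean()
    print(f'accuracy = {acc.item():.4f}')
```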
Part 08: Logistic Regression
📚 Get my FREE NumPy Handbook:
www.python-engineer.com/numpy...
📓 Notebooks available on Patreon:
/ patrickloeber
⭐ Join Our Discord : / discord
If you enjoyed this video, please subscribe to the channel!
Official website:
pytorch.org/
Part 01:
• PyTorch Tutorial 01 - ...
Logistic Regression from scratch:
• Logistic Regression in...
Code for this tutorial series:
github.com/patrickloeber/pyto...
You can find me here:
Website: www.python-engineer.com
Twitter: / patloeber
GitHub: github.com/patrickloeber
#Python #DeepLearning #Pytorch
----------------------------------------------------------------------------------------------------------
* This is a sponsored link. By clicking on it you will not have any additional costs, instead you will support me and my project. Thank you so much for the support! 🙏

Comments: 96

@anirvinvaddiyar7671 · 3 years ago
Straight to the point and simple. Finally!

@qorbanimaq · 3 years ago
I've been looking for an excellent PyTorch tutorial for some time, and I finally found one. Thank you, Patrick!

@patloeber · 3 years ago
Glad you like it!

@MrDeyzel · 4 years ago
So far I find your tutorials very underrated. You should have more views given the quality.

@patloeber · 4 years ago
Thank you :)
@nigelbess5168 · 3 months ago
Hey, I really love this series. It has been helping me a ton with getting used to PyTorch. Thank you!

@jasurbekgopirjonov · 1 year ago
I am watching every single one of them. Just amazing.

@ryanzeng7729 · 1 year ago
This is definitely one of the best PyTorch tutorial videos. Thank you, Patrick!

@ripsirwin1 · 3 years ago
This tutorial series is quite helpful. After completing all of these videos I should be ready to jump into the textbooks!

@patloeber · 3 years ago
Great! Glad you like it.

@nirmalbaishnab4910 · 1 year ago
Incredibly helpful. Many thanks. Already subscribed.

@henrygory · 11 months ago
Thank you. This is a great training video.
@alexandredamiao1365 · 3 years ago
What an amazing tutorial! Thanks again!

@patloeber · 3 years ago
Glad you like it!

@hassanhayat2831 · 3 years ago
Very well explained, thanks!

@patloeber · 3 years ago
Glad you liked it :)

@ShreyanshChordia · 4 years ago
You are doing an amazing job.

@patloeber · 4 years ago
Thank you so much!!

@taukirchowdhury3295 · 3 years ago
Great content! Thanks a lot!

@patloeber · 3 years ago
Glad you like it!

@Sunny.khanzada81 · 3 years ago
Given the programming code for Logistic Regression and Neural Networks, do the following: use the cat dataset to perform classification with a neural network. Try different combinations of the number of neurons in the hidden layer and the learning rate, and compare the efficiency of the trained models. In short, fill in a table with all combinations of number of neurons = [5,10,15,20] and learning rate = [.5,.05,.005,.005].
@aimennaeem8849 · 1 year ago
Very helpful tutorial!

@IluhaBratan · 2 years ago
Hello friend, first of all: great tutorial! Loved it. One question: why do you need the scaler this time? What's the point of it, and why didn't we use it before? Thanks!

@piyushkumar-wg8cv · 1 year ago
1. What is the difference between fit_transform and transform, and why are you using them differently? 2. Why are you not standardizing the y values?

@vilmantas_gegzna · 5 months ago
The .fit() method learns the required parameters from the data (in this case, it calculates the means and standard deviations of the variables in the training data). The .transform() method applies the transformation after those parameters have been learned, and .fit_transform() does both steps. Your model should only learn from the training data, to prevent so-called "data leakage" (and to get predictions on the test set that are as realistic as possible). So different methods should be applied to the training and test datasets.

@coldbrewed8308 · 1 month ago
fit_transform obtains the mean and standard deviation used for scaling. To keep the test data on the same scale, the test set uses transform only. And there is no point in scaling labels that are just 0 and 1.
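To illustrate the replies above, here is a small scikit-learn sketch (the tiny arrays are made up for illustration): fit_transform learns the statistics from the training data, and transform reuses those same statistics on the test data without refitting.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0], [2.0], [3.0]])  # mean 2.0, population std ~0.816
X_test = np.array([[2.0], [4.0]])

sc = StandardScaler()
Xtr = sc.fit_transform(X_train)  # learns mean/std from the training data
Xte = sc.transform(X_test)       # reuses those statistics -- no refitting

print(sc.mean_)    # the learned training mean: [2.]
print(Xte[0, 0])   # 0.0: the test value 2.0 maps onto the *train* mean
```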
@howardhabtom8823 · 1 year ago
Hi Patrick, thank you for your invaluable tutorials. A question for you though: would you please show a version with multiple hidden layers (the sequential method) applied to this cancer dataset? I believe more people would be interested. Would you please also add an example of deploying the trained model to predict new, unseen data? Thank you!

@achrafbouchouik6469 · 4 years ago
Thank you!

@marfblah33 · 2 years ago
Cool video series, thanks! I wonder: is there no PyTorch functionality to abstract the training loop? Pass num_epochs, learning_rate, criterion, optimizer, loss, and maybe a loss-improvement threshold, and then, tada... Also, shouldn't accuracy be a built-in function?

@valinlol · 4 years ago
Thanks, great tutorials! What IDE do you use in these videos?

@patloeber · 4 years ago
Visual Studio Code

@TheOraware · 3 years ago
Hi Patrick, instead of using with torch.no_grad(), if I call predict(X_test) and move the result into some other local variable, which seems to work for me, is it OK to do it that way?

@zenpunk8698 · 1 year ago
If I wanted to use the trained model to predict a new value it has never seen before, what would that look like, code-wise?

@JibranAbbasi_1 · 2 years ago
I see that you're using VS Code. How do you get the code output to appear in the Output tab? Mine appears in the Terminal tab along with a lot of other junk. Your outputs are so much cleaner.

@DanielWeikert · 4 years ago
Any trick to figure out when to use .view to reshape the data into the right format? Wrong tensor shapes are most often the issue. Thanks, great video!

@patloeber · 4 years ago
For y, it depends on the loss function you choose, so have a look at the docs: pytorch.org/docs/stable/nn.html. Most of the time the input is (N, C) and the target is (N), or input and target are both (N, 1). For x, it depends on how you designed your model, or more precisely the input layer. Most of the time it's (N_samples, N_features), or a flattened tensor for images.
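A short sketch of the shape issue discussed above (the data is made up for illustration): BCELoss expects input and target of matching shapes, so a (N,) target is reshaped to (N, 1) with view.

```python
import torch
import torch.nn as nn

# hypothetical data: 4 samples with 3 features each
x = torch.randn(4, 3)
y = torch.tensor([1.0, 0.0, 1.0, 0.0])   # shape (4,)

model = nn.Sequential(nn.Linear(3, 1), nn.Sigmoid())
out = model(x)                            # shape (4, 1)

# BCELoss wants input and target with matching shapes,
# so reshape the target from (4,) to (4, 1) with view
y = y.view(-1, 1)
loss = nn.BCELoss()(out, y)
print(out.shape, y.shape, loss.item())
```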
@ekaterinaivanova3816 · 2 years ago
17:08 Why do we get the number of samples using float() rather than int()?

@popamaji · 4 years ago
4:40 What's the difference between transform and fit_transform? I searched some forums but didn't understand the transform part, and why did you use only transform for the test data?

@patloeber · 4 years ago
fit_transform(X) will fit the transformer to X and then also transform the data. That means it will calculate e.g. the mean and variance based on X, and then apply those stored statistics during the transform. When you call only transform, you must have called the fit method at some point before. For example, you can transform your X_test based on the X_train fitting.

@skymanaditya · 3 years ago
Just to add to what the author wrote: when you call fit_transform, it computes the mean, variance, etc. of the input training distribution X, and then applies the transformation to that data. For the test data, you want to apply the same transformation that was fitted on the training data. If you fit again on the test data, the transformation can push the test data's domain out of sync with the training data's domain. That's why you fit on the training data and apply the same learned transformation to both the test and the train data.

@egeerdem8272 · 1 year ago
@skymanaditya Why not apply the whole transformation to X before splitting?

@bhawnahanda4277 · 3 years ago
Hey, great lecture! I have a doubt though: I always get an acc value of 0, even after using acc.item(). However, if I convert the sum tensor to NumPy using detach, I get the correct answer, while in the video you get the correct value without any conversion. Any idea where I'm going wrong?

@mehmetaliozer2403 · 2 years ago
Same issue for me.

@mehmetaliozer2403 · 2 years ago
This worked: acc = y_predicted_cls.eq(y_test).sum() / float(y_test.shape[0])
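The one-liner above can be checked in isolation (the tensors here are made-up examples): eq().sum() yields a 0-dim tensor, dividing by a plain float keeps it a tensor, and .item() extracts the Python number.

```python
import torch

y_predicted_cls = torch.tensor([1.0, 0.0, 1.0, 1.0])
y_test = torch.tensor([1.0, 0.0, 0.0, 1.0])

# eq().sum() counts matches as a 0-dim tensor;
# dividing by a plain float still returns a tensor
acc = y_predicted_cls.eq(y_test).sum() / float(y_test.shape[0])
print(acc)         # prints a tensor, e.g. tensor(0.7500)
print(acc.item())  # .item() extracts the plain Python float
```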
@taaaaaaay · 3 years ago
Why did you not add the sigmoid computation layer in __init__? Great video, btw!

@patloeber · 3 years ago
Both ways are fine; it's just a matter of taste.
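To make the "both ways" concrete, here is a sketch of the two equivalent styles: calling the functional torch.sigmoid inside forward versus storing an nn.Sigmoid module in __init__ (the class and variable names are mine, for illustration).

```python
import torch
import torch.nn as nn

# variant A: call the functional sigmoid inside forward
class ModelA(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.linear = nn.Linear(n_features, 1)

    def forward(self, x):
        return torch.sigmoid(self.linear(x))

# variant B: store an nn.Sigmoid module in __init__
class ModelB(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.linear = nn.Linear(n_features, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        return self.sigmoid(self.linear(x))

# with identical weights the two produce identical outputs
a = ModelA(3)
b = ModelB(3)
b.load_state_dict(a.state_dict())  # nn.Sigmoid has no parameters
x = torch.randn(5, 3)
print(torch.allclose(a(x), b(x)))  # True
```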
@Nicho_dive · 4 years ago
Thanks for the video. It would be nice if you had used DataLoaders to train with mini-batches instead of the whole dataset at once, and it would also be better to use model.train() and model.eval() (rather than no_grad()).

@patloeber · 4 years ago
Thanks for the feedback. I introduce DataLoader in tutorial #9 and use it from there on ;)

@ambroisecoste2944 · 3 years ago
Hi, correct me if I'm wrong, I'm not sure I understand: we want to approximate the true function f = w*x + b, so we use nn.Linear for the w part, and the sigmoid is there for the b part? Or can nn.Linear approximate both w and b, and the sigmoid just rounds the result?

@patloeber · 3 years ago
No, the linear layer applies w*x + b, so it tries to find both a good w and a good b. The sigmoid function then applies its own formula to map the values into the range [0, 1].

@ambroisecoste2944 · 3 years ago
@patloeber Thanks a lot! I didn't understand it at all the first time then ;)

@lamho411 · 2 years ago
I have a question: you wrote a method called forward in the LogisticRegression class but never call model.forward(...). How come?

@elise8619 · 1 year ago
Hey! Shouldn't y_predict = model(xtest).detach() ... acc = predictClass.eq(ytest.detach()).sum() produce the same result as with torch.no_grad(): y_predict = model(xtest) ...? I tried it, but the final accuracy is different. I don't really understand the difference between no_grad() and detach(), and the internet isn't helping much :/

@huynguyenuc4063 · 4 years ago
I'm quite confused: why not y_predicted = model.forward(X_test), but y_predicted = model(X_test)? I made a mistake there. But I really appreciate it, your videos are awesome!

@patloeber · 4 years ago
We have to implement the forward method inside our model class. Then during training we call model(X), and this automatically applies the forward pass for us!

@peregudovoleg · 3 years ago
@patloeber I was wondering the same thing. We never call "forward" explicitly. You say it is called automatically, but what if there are multiple methods inside the class? How does the model know which one to call? What triggers the call? Is it that during the model call, since the class inherits from nn.Module, it looks for a "forward" method and uses our custom one? Very good explanations and a "bottom-up" approach. Loved it!

@430matthew6 · 3 years ago
Hi, I want to confirm my approach: at test time, can I use y_predicted = model(X_test).detach() instead of with torch.no_grad():? I'm not sure whether this affects the gradient computation or not. Looking forward to your reply, thanks a lot!

@patloeber · 3 years ago
detach() prevents gradient computation only for the tensor it is called on; no_grad() prevents it for all calculations inside the with statement. I recommend no_grad for the test step.
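The distinction in the reply above can be demonstrated with a tiny sketch (scalar tensors made up for illustration): detach() cuts one tensor out of the autograd graph, while no_grad() suspends tracking for everything inside the with block.

```python
import torch

w = torch.tensor(2.0, requires_grad=True)
x = torch.tensor(3.0)

# detach(): cuts this one result out of the graph
y1 = (w * x).detach()
print(y1.requires_grad)   # False

# no_grad(): nothing computed inside the block is tracked
with torch.no_grad():
    y2 = w * x
print(y2.requires_grad)   # False

# outside both, the computation is tracked as usual
y3 = w * x
print(y3.requires_grad)   # True
```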
@430matthew6 · 3 years ago
@patloeber Thank you!

@Bximbo · 1 month ago
"n_samples, n_features = X.shape" - please, what is the use of this line?

@akhilanand298 · 3 years ago
Why have you converted y to float type? I can get the result without converting it. Correct me if I am wrong: as far as I know, we convert the independent variables to float so that we can calculate the gradient, because gradients can only be calculated for float and complex numbers.

@RUFY211098 · 8 months ago
Sorry guys, maybe it is a dumb question, but do you know why he hasn't used the forward() method defined in the LogisticRegression class, neither during the loop nor in the testing phase? I really can't understand this.

@prateekshaw6443 · 3 years ago
What is the purpose of the torch.nn.LogSigmoid module? Instead of creating a layer using Linear, why do we not use that module directly?

@patloeber · 3 years ago
Because we also apply the sigmoid function in this model. You could probably use a Linear layer + sigmoid directly here, but I wanted to demonstrate how to write your own model class. We need that for more advanced models later.

@prateekshaw6443 · 3 years ago
@patloeber Thank you for your reply.

@mazen.ibrahim99 · 3 years ago
Why do we not call the forward function?

@abdelrahmanhammad1020 · 2 years ago
The forward method is automatically called in line 58: y_predicted = model(X_train). This is true for any nn.Module subclass.

@dhanushsuryavamshi4279 · 3 years ago
Just one question, hope you see this: we defined a class method called forward for predicting, but we never used it in the code. Could you please tell me why?

@patloeber · 3 years ago
The forward function is automatically called for us when we use model(x), so we have to implement it in order for this to work.

@dhanushsuryavamshi4279 · 3 years ago
@patloeber Ahh, got it. Thanks for the reply and this amazing tutorial series. 🙏🏼

@chandank5266 · 1 year ago
@dhanushsuryavamshi4279 model(x) is equivalent to model.forward(x). nn.Module is written so that the model instance is callable, and calling it dispatches to the forward method ;)
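A minimal sketch of the dispatch discussed in this thread (the model class here is an illustrative stand-in, not the video's exact code): model(x) goes through nn.Module.__call__, which runs any registered hooks and then calls our forward() implementation.

```python
import torch
import torch.nn as nn

class LogisticRegression(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.linear = nn.Linear(n_features, 1)

    def forward(self, x):
        return torch.sigmoid(self.linear(x))

model = LogisticRegression(3)
x = torch.randn(5, 3)

# model(x) invokes nn.Module.__call__, which runs any registered hooks
# and then dispatches to our forward() implementation
out1 = model(x)
out2 = model.forward(x)  # same result, but bypasses the hook machinery
print(torch.allclose(out1, out2))  # True
```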
@anshanshtiwari9725 · 2 years ago
For some reason, the loss comes out as NaN after the first iteration. What could be going wrong? EDIT: Never mind, I just figured out that criterion(y_pred, y_train) and criterion(y_train, y_pred) are not the same. lol

@ammarhaider1530 · 4 years ago
How come there is a backward pass in logistic regression? We do forward and backward passes in neural networks, not in logistic regression. Logistic regression is similar to linear regression, but we use the logistic cost function instead of the mean squared loss. It's a little confusing.

@patloeber · 4 years ago
I'm not exactly sure what you mean. In all PyTorch training pipelines we use the backward pass to calculate the gradients. For linear regression the forward pass is already implemented for us, because we use only a single nn.Linear() layer. For logistic regression we use nn.Linear() PLUS the sigmoid, because here we want probabilities between 0 and 1, so we have to implement the forward pass ourselves. Mean squared error no longer makes sense for probabilities, so we have to use a different loss function (BCELoss).

@ammarhaider1530 · 4 years ago
@patloeber OK, got it. Thanks.

@weili866 · 4 years ago
Thanks for the nice tutorial. But I found that with torch.no_grad(): (or detach()) doesn't seem to work if I print(acc) directly. The only thing that works is print(f'{acc:.4f}')! I don't know why.

@patloeber · 4 years ago
acc is still a tensor, so you want to call acc.item() to get the actual value. Also notice that I'm using an f-string to print. For me, print(acc) works, but it also prints e.g. "tensor(0.8947)".

@weili866 · 4 years ago
@patloeber Got it! Thanks for your reply.

@ponkavinthangavel7974 · 3 years ago
Can you tell me what bc.data and bc.target are?

@patloeber · 3 years ago
These are our training samples (features) and the class labels that we use for model training.

@ponkavinthangavel7974 · 3 years ago
@patloeber Okay.

@navinbondade5365 · 3 years ago
We created the forward method in the class, but we never invoke it. I mean, you never write model.forward. I'm a little confused by this.

@patloeber · 3 years ago
When we call model(x), it internally calls the forward function.

@navinbondade5365 · 3 years ago
@patloeber Thank you.

@navinbondade5365 · 3 years ago
@patloeber If we rename the forward function, will it still be called internally?

@patloeber · 3 years ago
@navinbondade5365 No, you always have to implement the forward function in your model class so that PyTorch can call it. You should not rename it.

@navinbondade5365 · 3 years ago
@patloeber Can you please make an in-depth video on how the nn.Module class works? It feels like a mystery; I don't know what other methods it has or how it works internally.

@dehaoqin2365 · 1 year ago
Too nice to forget to comment.