Probability Calibration : Data Science Concepts

32,694 views

ritvikmath

1 day ago

Comments: 59
@a00954926 3 years ago
This is super amazing!! It's such an important concept that, like you said, doesn't get all the credit it deserves. And sometimes we forget this step.
@zijiali8349 2 years ago
I got asked about this in an interview. Thank you so much for posting this!!!
@danielwiczew 3 years ago
Great video. It's a very interesting concept that I had never heard about, but mathematically speaking it makes sense. It's also interesting that a linear model was able to correct the error so profoundly. Nevertheless, isn't this a kind of meta-learning? Also, I think you shouldn't use the name "testing" dataset for training the "calibration model", but rather call it e.g. a meta-dataset. The test dataset is reserved only for the final, cross-validated model.
@accountname1047 3 years ago
Does it generalize, or is it just overfitting with more steps?
@buramjakajil5232 1 year ago
I also have some trouble with the second phase, the calibration; I'm curious what happens to the out-of-sample performance after calibration. I don't claim to understand the background here, but I easily get the feeling that: "the model fit did not produce predictions that match the observed distribution, so let's wrap the random forest in a logistic function and fit it to the empirical distribution". Naturally this would perform better, but does the out-of-sample performance also improve? Sorry for my confusion; this concept is pretty new to me as well.
@chineloemeiacoding 3 years ago
Awesome video!! I was trying to figure out how this concept works using the scikit-learn documentation, but I found the material too theoretical. In your video you put things in a much friendlier way!! Many thanks :)
@MuammarElKhatib 4 months ago
I would have called your "test" set the "calibration" set. Nice video.
@tradewithkev 1 month ago
Hey Ritvik, thank you so much for making all these videos. I'm pursuing my Financial Math Masters, and this channel has been a game changer for me in terms of understanding ML properly. Although I have applied it many times, this level of in-depth understanding is, to say the least, very satisfying. Thank you so much for what you're doing for all of us. Also, is it possible to connect with you somewhere?
@aparnamahalingam1595 1 year ago
This was FABULOUS, thank you.
@ritvikmath 1 year ago
Glad you enjoyed it!
@tusharmadaan5480 1 year ago
This is such an important concept. I feel guilty about having deployed models without a calibration layer.
@nishadseeraj7034 3 years ago
Great material as usual!! I always look forward to learning from you. Question: are you planning on doing any material covering XGBoost in the future?
@AnonymPlatypus 1 year ago
I'm late to the party, but surely, since the random forest is not performing optimally in your example, you need to tweak its hyperparameters (tweak data, tune model) to fit a better curve. What if you create a badly performing model and try to calibrate it further with logistic regression when you could have gotten a better-performing model just using the random forest?
@IgorKuts 5 months ago
Thank you! Brilliant video on such an important applied-ML topic. Though I haven't seen, in the top section of the comments, any mention of isotonic regression (which can also be found in the scikit-learn package). More often than not, it performs much better on such a task compared to logistic regression, due to its inherent monotonicity constraint and piecewise nature. Personally, I found the part about using different sets (test / val) for calibration and calibration validation the most useful. Right now I am in the process of developing a production classification ML model, and I think I have made the mistake of performing calibration on the training set. Oops.
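For illustration, a minimal sketch of the comparison the comment describes, assuming scikit-learn's CalibratedClassifierCV on synthetic data (the dataset and split sizes are made up; cv="prefit" calibrates an already-fitted model on a held-out set and is deprecated in very recent scikit-learn releases):
```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

# Synthetic data; three disjoint sets: fit the forest, fit the calibrator, judge the calibration.
X, y = make_classification(n_samples=6000, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_eval, y_cal, y_eval = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("uncalibrated Brier score:", brier_score_loss(y_eval, rf.predict_proba(X_eval)[:, 1]))

for method in ("sigmoid", "isotonic"):  # sigmoid = Platt scaling; isotonic = monotone piecewise fit
    calibrated = CalibratedClassifierCV(rf, method=method, cv="prefit").fit(X_cal, y_cal)
    print(method, "Brier score:", brier_score_loss(y_eval, calibrated.predict_proba(X_eval)[:, 1]))
```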
@MuhammadAlfiansyah 2 years ago
If I already use log loss as the loss function, do I still need to calibrate? Thank you.
@yangwang9688 3 years ago
I thought we don't touch the test dataset until we have decided which model we are going to use?
@rohanchess8332 1 year ago
Wow, that is an amazing video. I might be wrong, but don't we generally use the validation set for calibration first, and keep the test set for unseen data? That's how it's done in hyperparameter tuning, so I assumed it should be the same here. Correct me if I'm wrong.
@FahadAlkarshmi 2 years ago
I like the explanation; it is very clear. But one thing I've noticed is data snooping. In the training setup you proposed, why not train both the classifier and the calibrator on the training set and optimise them using a validation set, since we may not (and should not) have access to the testing set? Thanks.
@ramanadeepsingh 4 months ago
Shouldn't we first do min-max scaling on the original probabilities you get from the models? Let's say I have three models and I run them on the same training data to get the following distributions of probabilities:
1) Naive Bayes: all predicted values between 0.1 and 0.8
2) Random Forest: all predicted values between 0.2 and 0.7
3) XGBoost: all predicted values between 0.1 and 0.9
If I want to take an average prediction, I am giving an undue advantage to XGBoost, so we should scale all of them to be between 0 and 1. The second step is then to feed these scaled probabilities into the logistic regression model to get the calibrated probabilities.
@user-or7ji5hv8y 3 years ago
I think I know how you computed the empirical probability. For me, it would have helped to see an explicit calculation, just to be sure.
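My guess at the explicit calculation, since the video doesn't spell it out: bin the predicted probabilities and take the fraction of positive labels in each bin as the empirical probability (function name and bin count are illustrative):
```python
import numpy as np

def empirical_probability(y_true, y_prob, n_bins=10):
    """Bin the predicted probabilities; in each bin the empirical probability
    is just the fraction of actual positives (the mean of the 0/1 labels)."""
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.digitize(y_prob, edges[1:-1])      # bin index 0 .. n_bins-1 for each prediction
    mean_pred, emp_prob = [], []
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            mean_pred.append(y_prob[mask].mean())   # average predicted probability in the bin
            emp_prob.append(y_true[mask].mean())    # observed frequency of the positive class
    return np.array(mean_pred), np.array(emp_prob)

# e.g. if the model says ~0.7 for a group of students but only 40% of them actually
# dropped out, that bin appears as the point (0.7, 0.4) on the calibration plot.
```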
@felixmorales3713 1 year ago
You could solve the calibration issue more easily by tuning hyperparameters. Specifically, you choose to tune hyperparameters to optimize a cost function that is considered a "proper scoring rule", such as logistic loss/cross entropy (the cost function of logistic regression, actually). At least in my RF implementations, that has resulted in calibrated probabilities right off the bat, without any post-processing. That being said, you'll probably notice that scikit-learn's LogisticRegression() class doesn't return calibrated probabilities all of the time. You can blame that on the class using regularization by default. Just turn it off, and you'll likely get calibrated probabilities again :)
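A rough sketch of both of those points, assuming scikit-learn (the parameter grid is made up, and penalty=None requires a recent scikit-learn; older versions use penalty="none"):
```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=2000, random_state=0)

# 1) Tune the forest against a proper scoring rule (log loss) instead of accuracy.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_depth": [3, 5, None], "min_samples_leaf": [1, 5, 20]},
    scoring="neg_log_loss",
    cv=5,
).fit(X, y)
print("best params:", grid.best_params_)

# 2) Turn off LogisticRegression's default L2 regularization.
lr = LogisticRegression(penalty=None, max_iter=1000).fit(X, y)  # penalty="none" on older scikit-learn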
@duynguyen4154 3 years ago
Very good tutorial. I have one question: is this concept based on any background theory/algorithm? If so, could you please give the specific name? Thanks.
@jackleerson 2 years ago
It is called Platt scaling.
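A minimal sketch of Platt scaling as I understand it, using scikit-learn's LogisticRegression as the calibrator: it is fit against the raw 0/1 labels of a held-out set, with the uncalibrated score as the only feature (names are illustrative, and sklearn's default L2 regularization differs slightly from Platt's original formulation):
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def platt_scale(uncal_probs_cal, y_cal, uncal_probs_new):
    """One-feature logistic regression mapping uncalibrated scores to calibrated probabilities."""
    calibrator = LogisticRegression()
    # Inputs: raw model scores on a held-out set; targets: the 0/1 labels themselves.
    calibrator.fit(np.asarray(uncal_probs_cal).reshape(-1, 1), y_cal)
    return calibrator.predict_proba(np.asarray(uncal_probs_new).reshape(-1, 1))[:, 1]

# Illustrative usage:
# cal_probs = platt_scale(rf.predict_proba(X_cal)[:, 1], y_cal, rf.predict_proba(X_test)[:, 1])
```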
@martinkunev9911 1 year ago
Isn't it weird that the empirical probability is not monotonically increasing as a function of the uncalibrated probability? This would mean that the calibration model needs to learn to transform, e.g. 0.4 to 0.3 but 0.5 to 0.2.
@Ad_bazinga 6 months ago
Can you do a video on calibrating scorecards? Like doubling of odds?
@tompease95 10 months ago
The notebook section of this video is quite misleading - it is basically just plotting a line of best fit on a calibration curve. To actually calibrate the predictions, the trained logistic regression model should make predictions on a set of model outputs, and those 'calibrated' outputs can then be used to plot a new calibration curve.
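A sketch of what that would look like with scikit-learn's calibration_curve (the data, model, and split are placeholders, not the video's notebook):
```python
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=6000, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_val, y_cal, y_val = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Fit the calibrator on one held-out set...
calibrator = LogisticRegression().fit(rf.predict_proba(X_cal)[:, [1]], y_cal)

# ...then apply it to a different held-out set and compare the two calibration curves.
raw = rf.predict_proba(X_val)[:, 1]
cal = calibrator.predict_proba(raw.reshape(-1, 1))[:, 1]

for probs, label in [(raw, "uncalibrated"), (cal, "calibrated")]:
    frac_pos, mean_pred = calibration_curve(y_val, probs, n_bins=10)
    plt.plot(mean_pred, frac_pos, marker="o", label=label)
plt.plot([0, 1], [0, 1], "--", label="perfect calibration")
plt.xlabel("predicted probability"); plt.ylabel("empirical probability")
plt.legend(); plt.show()
```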
@mattsamelson4975 3 years ago
You linked the code but not the data. Please add that link.
@payam-bagheri 1 year ago
Some people are wondering whether the initial calibration shouldn't be done on the calibration set rather than the test set. I'd say the presenter in this video has the right concepts, but he's calling what's usually called the validation set the test set, and vice versa. Usually, the set that's kept out for the final testing of the model's performance is called the test set, and the validation set is used before that to do whatever adjustments and tuning we want to do.
@mohsenvazirizade6334 10 months ago
Thank you very much for such an amazing video. I like that in your videos you explain the reasons behind something and then show the math. Could you please do the same for probability calibration? It is not clear to me why this happens and whether changing the loss function in the classifier would change anything.
@laxmanbisht2638 2 years ago
Thanks. So calibration is basically done to reduce error, right?
@thechen6985 8 months ago
If you calibrate it on the test set, wouldn't that introduce bias? Shouldn't it be the validation set?
@illuminati9478 1 month ago
Thanks Man !!
@ritvikmath 1 month ago
No problem!
@nihirpriram69 1 year ago
I get that it works, but ultimately I can't help but feel that this is a band-aid fix for a deeper underlying issue, namely that something is fundamentally wrong with the model (in this case the random forest). It feels like throwing in a fudge factor and hoping for the best.
@yogevsharaby45 1 year ago
Hey, thanks for the great video! I have a question regarding the predicted probability versus empirical probability plot. I'm a bit confused because, if I understand correctly, the empirical observations are either 0 or 1 (or, in this plot, are you grouping multiple observations together to obtain empirical observations that represent a probability?). Could you clarify this to help me understand it better? Thanks very much again :)
@abhishek50393 3 years ago
Great video, keep it up!
@aparnamahalingam1595 1 year ago
Is this the same way we implement calibration for a multi-class problem?
@Corpsecreate 2 years ago
Why do you assume the blue line is not correct?
@illuminati9478 1 month ago
If you listen carefully, he did say the y-axis is the actual value and the x-axis is the predicted value. So the blue line is telling us that the predictions are way off, but by calibrating on the blue line we can get better results.
@simkort5799 21 days ago
@@illuminati9478 How do you get the actual value of a student's chance of dropping out of school? Isn't it either 1 or 0?
@user-or7ji5hv8y 3 years ago
On the surface, it looks like you are using ML twice, with the second iteration correcting the error from the first run. I can't see why that second iteration is a legitimate step. It's like you made a bad prediction, and now we give you another chance and coach you to adjust your prediction to arrive at a more accurate one. I know you used test data, but I still can't see how you won't be overfitting.
@buramjakajil5232 1 year ago
Exactly my thoughts.
@The_Jarico1 10 months ago
You're right, I've seen this exact phenomenon happen in the wild, and the model needed adjustment accordingly. Does anyone know why this happens?
@hameddadgour 2 years ago
Great video!
@MsgrTeves 1 year ago
I am confused why you train the logistic regression with the predicted probabilities as input and the targets themselves as output. It seems you would train it with the predicted probabilities as input and the empirical probabilities as output. The probabilities should have nothing to do with the actual targets, only with how likely the prediction is to match the actual target, which is what we compute when we calculate the empirical probabilities. What am I missing?
@junhanouyang6593 2 years ago
How do you calculate the empirical probability if all the data in the dataset is unique? If every data point is unique, the empirical probability will be either 0 or 1.
@houyao2147 3 years ago
It looks to me like it's already calibrated during the training phase, because we minimize the error between the predicted and empirical probabilities. I don't quite understand why calibration is necessary.
@mohammadrahmaty521 2 years ago
Thank you!
@davidwang8971 1 year ago
Awesome!
@petroskoulouris3225 3 years ago
Great vid. I can't find the data on your GitHub account.
@jasdeepsinghgrover2470 3 years ago
But I find it difficult to understand why non-probabilistic models aren't calibrated by default... The probability is derived from the dataset itself... So if the dataset is large enough, it should already be calibrated.
@Ziemecki 2 years ago
Thank you for this video! I didn't understand why we introduce bias if we train the calibration on the training set rather than on the test set. Could you give us an example please? +Subscribe
@Ziemecki 2 years ago
I know you gave an example later in the notebook, but what if the data were the other way around? I mean, if the training set were the testing set and vice versa, would we still see this behavior?
@raise7935 2 years ago
Thanks.
@EdiPrifti 8 months ago
Thank you. This makes sense in a regression task. How about a binary classification task? What would be the real empirical probability to fit in the calibration step?
@bonnyphilip8022 2 years ago
Despite the looks, you simply are a great teacher... (By looks I mean your attitude and appearance are more like a free-spirited artist than a studious person) :D:D
Recurrent Neural Networks : Data Science Concepts
27:17
ritvikmath
30K views
Gradient Boosting : Data Science's Silver Bullet
15:48
ritvikmath
67K views
Why Logistic Regression DOESN'T return probabilities?!
15:20
CodeEmporium
12K views
Metropolis - Hastings : Data Science Concepts
18:15
ritvikmath
106K views
Probability Calibration For Machine Learning in Python
11:52
NeuralNine
3.9K views
Gaussian Processes : Data Science Concepts
24:47
ritvikmath
14K views
Probability Calibration Workshop - Lesson 1
28:45
numeristical
8K views
Conditional Random Fields : Data Science Concepts
20:11
ritvikmath
35K views
Entropy (for data science) Clearly Explained!!!
16:35
StatQuest with Josh Starmer
628K views
Maximum Likelihood : Data Science Concepts
20:45
ritvikmath
37K views
Probability Calibration Workshop - Lesson 2
22:09
numeristical
3.9K views