Creating ROC curves and ensembling models in R with "caret" | R Tutorial (2021)

4,602 views

RichardOnData

Published a day ago

Comments: 7
@fullsurr3465 3 years ago
The part of the video with ensembles was really helpful to me. Thanks!
@RichardOnData 3 years ago
Yes, it's very powerful - and also a great way to land yourself knee deep in some code that takes 4 hours to run!
@djangoworldwide7925 2 years ago
I watched the whole series. Fantastisch!
@shaoru 1 year ago
very helpful series. Thanks!
@HaiLeQuang 3 years ago
It's a real pity your channel doesn't get more views. But keep up the good work!
@spencerantoniomarlen-starr3069 2 years ago
I think most people either dramatically overthink (or perhaps underthink) the tradeoffs involved in choosing among TPR, TNR, PPV, NPV, and accuracy (setting aside the more advanced/synthetic metrics like Kappa, AUC, or F1). It is actually relatively simple: is the potential upside of getting your positive predictions right larger than the potential downside of getting them wrong? In other words, compare the payoffs of the TPR vs. the TNR. For instance, if you are trying to predict future stock prices to decide which stocks to take naked short positions on, then the TNR (aka specificity) is WAY more important than the TPR (aka sensitivity), because if you are wrong, the potential losses from the stock price going up instead of down are unbounded; there is no upper limit to the price a stock can reach. Conversely, if the potential downside or cost is known and limited, and the potential upside is unbounded (or simply has a much wider range), then the TPR is more important.

How to choose between PPV and TPR is even easier, by the way! It just depends on what you want to do with your output and in what form it comes. If you take some sort of disease screening test, get a positive reading, and want to calculate the probability that you actually have the disease (assuming you can find the population base rate and somehow adjust for how much the fact that you have symptoms should amplify it), always go with the Positive Predictive Value (aka precision). That is literally its common-language definition!
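(The tradeoffs in this comment are easier to see with concrete numbers. A minimal sketch using caret's `confusionMatrix()`, which reports all four metrics; the labels and predictions below are made-up toy data for illustration:)

```r
library(caret)

# Hypothetical toy data: actual vs. predicted class labels,
# with "pos" declared as the positive class
actual    <- factor(c("pos", "pos", "pos", "neg", "neg", "neg", "neg", "neg"),
                    levels = c("pos", "neg"))
predicted <- factor(c("pos", "pos", "neg", "pos", "neg", "neg", "neg", "neg"),
                    levels = c("pos", "neg"))

cm <- confusionMatrix(predicted, actual, positive = "pos")
cm$byClass[c("Sensitivity",      # TPR
             "Specificity",      # TNR
             "Pos Pred Value",   # PPV / precision
             "Neg Pred Value")]  # NPV
```

(Which of these to optimize then follows the payoff logic above: unbounded downside on false positives favors specificity; bounded downside with a much larger upside favors sensitivity.)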
@minhaoling3056 3 years ago
Hi, is it possible to combine three feature extraction layers and one model layer into a single model object?
ROC and AUC in R
15:13
StatQuest with Josh Starmer
274K views
Training and Tuning ML Models in R with "caret" | R Tutorial (2021)
17:03
ROC Curve & Area Under Curve (AUC) with R - Application Example
19:40
Dr. Bharatendra Rai
104K views
Tuning random forest hyperparameters with tidymodels
1:04:32
Julia Silge
18K views
R or Python: Which Should You Learn in 2024?
14:42
RichardOnData
9K views
Preprocessing Data in R for ML with "caret" (2021)
19:24
RichardOnData
12K views
Follow THESE 5 Tips to Get a Data Job
14:57
RichardOnData
1.2K views
20 R Packages You Should Know
30:42
RichardOnData
41K views
When Should You Use Random Forests?
13:26
RichardOnData
19K views
k-Fold Cross-Validation in R
1:03:12
David Caughlin
31K views