In this video, we'll learn about K-fold cross-validation and how it can be used for selecting optimal tuning parameters, choosing between models, and selecting features. We'll compare cross-validation with the train/test split procedure, and we'll also discuss some variations of cross-validation that can result in more accurate estimates of model performance.
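As a minimal sketch of the ideas above (not the exact code from the notebook), here is how K-fold cross-validation might look in scikit-learn, using the built-in iris dataset and KNN as stand-in choices: `cross_val_score` runs the K folds, and looping over candidate values of `n_neighbors` illustrates parameter tuning.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Iris and KNN are illustrative assumptions, not the video's required data/model
X, y = load_iris(return_X_y=True)

# 10-fold cross-validation: each fold serves exactly once as the test set,
# and the mean score estimates out-of-sample accuracy
knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
print(scores.mean())

# Tuning: evaluate a range of K values and keep the one with the
# highest mean cross-validated accuracy
k_scores = [
    cross_val_score(KNeighborsClassifier(n_neighbors=k),
                    X, y, cv=10, scoring='accuracy').mean()
    for k in range(1, 31)
]
best_k = 1 + max(range(len(k_scores)), key=lambda i: k_scores[i])
print(best_k)
```

The same `cross_val_score` call works for comparing different models (swap the estimator) or different feature subsets (swap the columns of `X`), which is what makes it a general-purpose model selection tool.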
Download the notebook: github.com/justmarkham/scikit...
Documentation on cross-validation: scikit-learn.org/stable/module...
Documentation on model evaluation: scikit-learn.org/stable/module...
GitHub issue on negative mean squared error: github.com/scikit-learn/sciki...
An Introduction to Statistical Learning: www-bcf.usc.edu/~gareth/ISL/
K-fold and leave-one-out cross-validation: • Video
Cross-validation the right and wrong ways: • Video
Accurately Measuring Model Prediction Error: scott.fortmann-roe.com/docs/Me...
An Introduction to Feature Selection: machinelearningmastery.com/an-...
Harvard CS109: github.com/cs109/content/blob...
Cross-validation pitfalls: www.jcheminf.com/content/pdf/1...
WANT TO GET BETTER AT MACHINE LEARNING? HERE ARE YOUR NEXT STEPS:
1) WATCH my scikit-learn video series:
• Machine learning in Py...
2) SUBSCRIBE for more videos:
kzbin.info?su...
3) JOIN "Data School Insiders" to access bonus content:
/ dataschool
4) ENROLL in my Machine Learning course:
www.dataschool.io/learn/
5) LET'S CONNECT!
- Newsletter: www.dataschool.io/subscribe/
- Twitter: / justmarkham
- Facebook: / datascienceschool
- LinkedIn: / justmarkham