LASSO (Shrinkage/Regularization)

17,738 views

Leslie Myint

1 day ago

Comments: 23
@issa_coder 2 years ago
I really enjoyed listening and learning from your explanation of this topic: it's very informative and easy to understand. I can now do my research paper with one doubt cleared. Thank you for your help!
@Zumerjud 1 year ago
Very easy to understand. Thank you!
@jandoedel1390 2 years ago
This is a really good video
@fabianaltendorfer11 1 year ago
Very cool, thank you!
@firstkaransingh 1 year ago
Brilliant explanation of a complex topic 👍
@vulong9763 1 year ago
Great video!
@patrikatkinson4694 2 years ago
Excellent video
@fanwinwin 1 year ago
great
@davekimmerle9453 1 year ago
Do you know a good scientific paper that describes how LASSO regression works which I could cite for my papers?
@lesliemyint1865 1 year ago
Not a paper but I highly recommend the ISLR textbook: hastie.su.domains/ISLR2/ISLRv2_website.pdf
@baxoutthebox5682 1 year ago
@@lesliemyint1865 dang this is awesome thank you!
@nishah4058 8 months ago
Hi, superb lecture! But I have one doubt I hope you will clear up. All of the videos on lasso use a regression model to explain it. Since lasso also performs feature selection, can we use it for feature selection in a classification problem, the way PCA is used for both regression and classification? Classification involves categorical outcomes, which by default get converted into numerical values. So can lasso also be used as a feature selector for a classification problem, and if yes, why isn't there any example of it? Thanks in advance.
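Lasso-style L1 penalties do extend to classification: L1-penalized logistic regression shrinks the coefficients of uninformative features to exactly zero, so it can serve as a feature selector. A minimal sketch with scikit-learn on simulated data (the dataset, penalty strength, and seed are all illustrative assumptions):

```python
# Sketch: L1-penalized (lasso-style) logistic regression used for
# feature selection in a classification problem.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# The outcome depends only on the first two features; the rest are noise.
logits = 2.0 * X[:, 0] - 1.5 * X[:, 1]
y = (logits + rng.normal(scale=0.5, size=200) > 0).astype(int)

# penalty="l1" applies the same absolute-value penalty lasso uses;
# C is the inverse of the penalty strength (smaller C = more shrinkage).
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)

selected = np.flatnonzero(model.coef_[0])
print("features with nonzero coefficients:", selected)
```

With a suitably chosen penalty, the two informative features survive while most noise features are zeroed out.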
@aminaahmedalibelal5676 2 years ago
I really enjoyed listening. But is it possible to give a real example?
@omegajoctan2938 2 years ago
That is the real question. I can't figure out where to start calculating this or what to expect from the formula.
@lesliemyint1865 2 years ago
Could you clarify what you mean when you say "real example"? Around 8:44 in the video, I go through an example of how results from LASSO modeling could be interpreted in the context of an applied problem: predicting credit card balance.
@lesliemyint1865 2 years ago
@@omegajoctan2938 LASSO models aren't meant to be fit by hand (the level of computation involved requires a computer), so there really aren't hand calculations that would be useful to show. For this reason, this video focuses on the concepts underlying this tool.
@omegajoctan2938 2 years ago
@@lesliemyint1865 Thanks for the reply. I'm looking for a way to code this logic. If you could show a coding example, that would be great: not using the scikit-learn library, but coding it from scratch. I understand that the computations are too hard to do manually.
@lesliemyint1865 2 years ago
@@omegajoctan2938 An example of coding up LASSO from scratch is available here at the top of the page: lmyint.github.io/253_spring_2019/shrinkageregularization.html. This isn't what appears in actual LASSO implementations but I hope it helps get the ideas across.
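For a flavor of what such from-scratch code can look like, here is a generic coordinate-descent sketch with soft-thresholding (this is an illustrative reconstruction, not the code from the linked page; the simulated data and lambda value are assumptions):

```python
# Sketch of lasso fit by coordinate descent with soft-thresholding.
import numpy as np

def soft_threshold(z, gamma):
    """Shrink z toward zero by gamma; exactly zero inside [-gamma, gamma]."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iters=100):
    """Minimize (1/2n)*||y - X b||^2 + lam * ||b||_1 (no intercept)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iters):
        for j in range(p):
            # Partial residual: remove feature j's contribution from the fit.
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j / n
            # One-variable lasso solution for coordinate j.
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X @ np.array([3.0, 0.0, -2.0, 0.0, 0.0]) + rng.normal(scale=0.1, size=100)
beta_hat = lasso_coordinate_descent(X, y, lam=0.1)
print(np.round(beta_hat, 2))  # uninformative features are shrunk toward zero
```

Production implementations add tricks (warm starts, active sets, convergence checks), but the per-coordinate soft-threshold update is the core idea.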
@jasonbourne8628 1 year ago
Thank you❤
@sp8871 2 years ago
How does a positive beta multiplied by a large positive alpha become zero? I can't get it.
@lesliemyint1865 2 years ago
Did you mean when a beta coefficient is multiplied by lambda (the penalty parameter)? With a large lambda, even a modest coefficient makes a very large contribution to the penalized sum of squared residuals. That is exactly what mathematically incentivizes the beta coefficient to move toward zero: the smaller the coefficient's magnitude, the smaller the penalty incurred, and when the coefficient is zero, no penalty is incurred at all.
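That incentive can be seen numerically: for a single-predictor model, the beta minimizing RSS + lambda*|beta| moves toward zero as lambda grows, hitting exactly zero once the penalty outweighs the fit improvement. A toy illustration (all numbers here are made-up assumptions):

```python
# Grid search over beta for the penalized criterion RSS + lambda*|beta|,
# showing the minimizer shrinking toward zero as lambda increases.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=50)
y = 1.5 * x + rng.normal(scale=0.5, size=50)  # true slope is 1.5

def penalized_rss(beta, lam):
    return np.sum((y - beta * x) ** 2) + lam * abs(beta)

grid = np.linspace(-1, 3, 4001)
results = {}
for lam in [0, 10, 50, 400]:
    results[lam] = grid[np.argmin([penalized_rss(b, lam) for b in grid])]
    print(f"lambda={lam:>3}: minimizing beta = {results[lam]:.2f}")
```

At lambda = 0 the minimizer is the ordinary least squares slope; as lambda grows it shrinks, and for a large enough lambda it is exactly zero.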
@kan3259 1 year ago
Why do we need the penalty term? Can't we just have the RSS without it?
@lesliemyint1865 1 year ago
Yes, we can just use the RSS, but RSS alone can lead to an overfit model when we have lots of predictors (some of which are likely uninformative for predicting the outcome). The penalty term encourages predictors that help little or not at all to be eliminated from the model.
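A quick simulated sketch of that contrast (the data and penalty strength are illustrative assumptions): minimizing RSS alone keeps every predictor in the model, while the lasso penalty pushes the uninformative ones to exactly zero.

```python
# Compare plain least squares (RSS only) with lasso (RSS + L1 penalty)
# when only 1 of 30 predictors is actually informative.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 30))             # 30 predictors...
y = 4.0 * X[:, 0] + rng.normal(size=80)   # ...only the first is informative

ols = LinearRegression().fit(X, y)        # minimizes RSS only
lasso = Lasso(alpha=0.5).fit(X, y)        # minimizes RSS + penalty

print("OLS nonzero coefficients:  ", np.sum(ols.coef_ != 0))    # all 30 kept
print("Lasso nonzero coefficients:", np.sum(lasso.coef_ != 0))  # most zeroed
```

The lasso fit keeps the informative predictor (with a shrunken coefficient) and eliminates nearly all of the noise predictors.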
KNN Regression and the Bias-Variance Tradeoff
11:17
Leslie Myint
4.1K views
Regularization Part 1: Ridge (L2) Regression
20:27
StatQuest with Josh Starmer
1.1M views
Lasso regression - explained
18:35
TileStats
19K views
Statistical Learning: 6.6 Shrinkage methods and ridge regression
12:38
Regularization - Explained!
12:44
CodeEmporium
17K views
Lasso Regression with Scikit-Learn (Beginner Friendly)
17:47
Ryan & Matt Data Science
4.1K views
LASSO Regression
27:49
David Caughlin
12K views
The weirdest paradox in statistics (and machine learning)
21:44
Mathemaniac
1M views
Local Regression and Generalized Additive Models
13:56
Leslie Myint
15K views
Ridge vs Lasso Regression, Visualized!!!
9:06
StatQuest with Josh Starmer
265K views
Ridge Regression
16:54
ritvikmath
129K views