Robust, Interpretable Statistical Models: Sparse Regression with the LASSO

45,888 views

Steve Brunton


Comments: 66
@juliogodel · 4 years ago
Prof Steve.... Just keep publishing these videos forever :)
@aayushpatel5777 · 4 years ago
If you apply LASSO to lectures on this topic, only Steve's videos will survive.
@user-hk3ej4hk7m · 4 years ago
Thanks for publishing these videos. I'm more of a programmer than a maths person, but it's really nice to have an idea of what algorithms are out there to interpret datasets.
@naimanaheed6594 · 3 years ago
A wonderful book! I have never seen such a combination of book, videos, and code from one author. Everything is clearly explained. I don't know how to express my gratitude in words!
@Alexander-ye5hv · 4 years ago
Fantastic lecture, Steve! Probably my favourite one to date...
@HA-vh3ti · 3 years ago
Wow - the best visualization of this topic I have seen so far. It's just amazing how the world learns today, virtually from anywhere - online.
@EuroPerRad · 3 years ago
These videos are so much better than any lecture that I had at university!
@The_Tauri · 4 years ago
Since the Covid crisis confined me to home, you have become one of my favorite YouTubers. Great succinct explanations with real applicability to problems both abstract and practical. THANK YOU!!
@obusama6321 · 1 year ago
Loved this. So sad I discovered this channel so late! Finally, a channel that doesn't dumb things down and really improves mathematical as well as conceptual rigor, without being daunted by research-paper notation and lingo. I request a video series on optimization: how it works in different algorithms across supervised, semi-supervised, unsupervised, and reinforcement learning.
@francistembo650 · 4 years ago
My favourite channel of all time. I hope we're going to get videos on interpretability for machine learning.
@damiandk1able · 4 years ago
Thank you for the crystal clear lecture. And the topic is fantastic because of: a) the linear model (simplicity), and b) interpretability (for the reasons you clearly explained yourself). I am looking forward to more content, and I am ready to buy yet another of your books, professor.
@Nick-ux5vr · 4 years ago
I used LASSO & Elastic Net for a sports betting prediction model this year in college basketball. The LASSO model did better than EN. Thanks for the explanation! It was very timely for me. :)
@SRIMANTASANTRA · 4 years ago
Hi Professor Steve, thank you so much ❤️.
@jackdoodle7202 · 4 years ago
Thanks for the clear explanation and ample good examples.
@TURALOWEN · 4 years ago
I have learned a lot from your videos, Prof. Brunton. Thank you!
@MikeAirforce111 · 4 years ago
GREAT lecture. Knew most of the content, but had to watch it to the end anyway.
@mar-a-lagofbibug8833 · 3 years ago
You make these topics engaging. Thanks.
@doodadsyt · 3 years ago
Hi Prof Brunton, please correct me if I'm wrong: at 25:43, the least-squares solution is at lambda = 1, not 0, right? Since 1/0 would throw an error.
@Eigensteve · 3 years ago
Thanks for the comment. Yes, I see the confusion. The x-axis label "1/lambda" is not technically correct. It is just a trend that this increases as lambda decreases, but we shouldn't read this literally as 1/lambda. What I mean is that when lambda->0 in the upper right optimization problem, then there is no sparsity penalization and the optimization will return the least squares solution.
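A minimal sketch of that point, assuming scikit-learn (whose Lasso calls the penalty weight `alpha`) rather than the code from the book: as lambda shrinks toward zero, the LASSO coefficients approach the ordinary least-squares solution.

```python
# A minimal sketch (scikit-learn, where the penalty weight is called `alpha`):
# as lambda shrinks toward zero, LASSO approaches ordinary least squares.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))                     # overdetermined Ax = b
x_true = np.array([1.5, 0.0, -2.0, 0.0, 0.7])
b = A @ x_true + 0.01 * rng.normal(size=100)

ols = LinearRegression(fit_intercept=False).fit(A, b)
for lam in [1.0, 0.1, 1e-4]:
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(A, b)
    print(f"lambda={lam:g}:", np.round(lasso.coef_, 3))
print("least squares:", np.round(ols.coef_, 3))   # LASSO(lambda -> 0) tends to this
```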
@lilmoesk899 · 4 years ago
Nice job! Great visuals. Looking forward to seeing more topics! Thanks for putting your content online.
@ddddyliu · 4 years ago
Such a great lecture! Deep but enjoyable on a Saturday morning :) Thank you, professor.
@AliMBaba-do2sl · 4 years ago
Excellent presentation, Steve.
@abbddos · 2 years ago
This is pure gold...
@danielcohen8187 · 3 years ago
Thank you for always publishing amazing videos!
@JoshtMoody · 3 years ago
Excellent, as always. Extremely good content.
@mattkafker8400 · 4 years ago
Very interesting video, Professor. As you mentioned, the Elastic Net algorithm combines the benefits of the Ridge Regression and LASSO algorithms. Is there a circumstance in which one would specifically use LASSO, rather than simply always going with Elastic Net? Does Elastic Net require significantly more computation to implement? Are there issues that come with the greater generality of Elastic Net that LASSO doesn't suffer from?
@philspaghet · 3 years ago
I want to know this as well!
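One practical difference is easy to see numerically; the sketch below (scikit-learn, not code from the video) shows that with strongly correlated features LASSO tends to keep one and drop the others, while Elastic Net tends to spread the weight across the group. Computationally the two cost roughly the same with coordinate descent.

```python
# A hedged sketch (scikit-learn; `alpha` is the overall penalty, `l1_ratio`
# blends the l1 and l2 terms) of one practical difference between the two.
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + 0.01 * rng.normal(size=200)        # nearly a copy of x1
x3 = rng.normal(size=200)                    # irrelevant feature
X = np.column_stack([x1, x2, x3])
y = 3.0 * x1 + 3.0 * x2 + 0.1 * rng.normal(size=200)

lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print("LASSO:      ", np.round(lasso.coef_, 2))  # often puts most weight on one of x1/x2
print("Elastic Net:", np.round(enet.coef_, 2))   # tends to share weight between x1 and x2
```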
@mouadmouhsin3024 · 3 years ago
My favorite one, just keep publishing.
@Eigensteve · 3 years ago
Thanks!
@zhanzo · 4 years ago
What is the reference paper that connects the SVM and the elastic net/LASSO?
@Eigensteve · 4 years ago
Here is the paper: arxiv.org/abs/1409.1976
@usefulengineer2999 · 4 years ago
Thank you for the great contribution.
@MaksymCzech · 3 years ago
Please make a video explaining the ARMAX model estimation method. Thank you.
@JosephRivera517 · 4 years ago
Thanks for this great lecture.
@oncedidactic · 2 years ago
Is there a talk on SR3? Sounds really cool! Will check out the paper.
@clazo37 · 5 months ago
Thank you so much for your clear presentations. Have you been working with causal inference? I have been reading the work of Judea Pearl, and I find it not very accessible. If you have experience with causal inference, it would be great to know your insights.
@raviprakash5987 · 4 years ago
Thank you very much, Dr. Steve.
@pierregravel5941 · 1 year ago
Why is the SINDy spot not located at the minimum of the test curve? You put it instead at the knee of the Pareto curve. In ML, we usually use cross-validation to locate the minimum of the loss function on the test dataset.
@TymoteuszCejrowski93 · 4 years ago
I love how easy these videos are to watch and understand, even over morning coffee ☕
@daveneumann8899 · 2 years ago
Amazing math visualizations!!! In particular, what software/programming language did you use to create the 3D versions of the Tibshirani plots (around minute 20:00)? I think the intuition behind the sparsity induced by the L1 norm is much clearer in higher dimensions. It's a shame that we have to stop at 3 dimensions. Still, many thanks for the visualization!
@msinanozeren6733 · 2 months ago
Most of the stuff (not all, but most) this guy is talking about is very cool, and his presentation is very good and constructive, so a big thank you. But what he is talking about has actually been known for a long time (adding a non-quadratic term to the minimization was discussed by mathematicians even in the 19th century, and Tibshirani was not the first to discover its consequences; Americans always think that when they find something they are the first people to have discovered it), and it has little to do with why learning algorithms and data-driven methods are powerful. What he is talking about is classical linear algebra put into some nice algorithmic iterations. That is not the center of gravity of data-driven science. I mean, you have to know this stuff, of course, and if you studied science in Europe (not the USA, but Europe) you knew this linear algebra and much more by the time you finished your undergraduate degree (in Italy you have to finish whole books on quadratic forms to pass an undergrad linear algebra exam). The power lies in the probabilistic theory developed by the Soviet mathematicians Vapnik and Chervonenkis, which really made the distinction between classical statistical and probabilistic decision theory and what people nowadays call AI.
@Chrisratata · 3 years ago
Everything's great here; the only thing is that the side-by-side images at 15:00 aren't selling it for me. I get that l1 would be pointy while l2 would be spherical. But you say, and the consensus says, that l2 can intersect at multiple points... yet the image shows a tangent. Are we talking about the not-shown possibility of that blue line cutting through and forming a secant? But if that's the case, then the same could happen for the diamond. This is unclear to me. EDIT (20 seconds later, lol): AH! The idea is the dimensionality of the point of intersection.
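A quick numerical check of that intuition, using scikit-learn's Ridge and Lasso as stand-ins for the l2- and l1-penalized problems in the video: the pointy l1 "diamond" pins coefficients exactly to zero, while the round l2 "sphere" only shrinks them toward zero.

```python
# A small numerical check: the l1 penalty produces exact zeros, the l2 penalty does not.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))
beta_true = np.zeros(10)
beta_true[[0, 3, 7]] = [2.0, -1.0, 0.5]      # only three active features
y = X @ beta_true + 0.1 * rng.normal(size=300)

ridge = Ridge(alpha=10.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
print("exact zeros in Ridge coefficients:", np.sum(ridge.coef_ == 0.0))  # typically 0
print("exact zeros in LASSO coefficients:", np.sum(lasso.coef_ == 0.0))  # typically 7
```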
@JavArButt · 2 years ago
Thank you for this very helpful video. I was looking for a method for sparse regression and went straight to PySINDy. Unfortunately, however, our data is not suited to being interpreted as a dynamical system. Long story short: from the big selection of possible regression techniques, I now have some kind of overview, and SR3 should be the next step.
@haiderahmed575 · 4 years ago
What kind of app do you use in your videos?
@zhanzo · 4 years ago
You can get a similar effect with OBS Studio: add a PowerPoint presentation with a blue background, and use a blue chroma key to make the blue transparent.
@felixwhise4165 · 4 years ago
@zhanzo thank you!
@zhihuachen3613 · 3 years ago
Can anyone download the book?
@Akshay-cy9tu · 3 years ago
Just amazing.
@mr.logzoid1302 · 4 years ago
Hi Steve, could we get a lecture on SGD (stochastic gradient descent) and backpropagation?
@mikets42 · 2 years ago
Dear Steven, it appears that you have (partially) reinvented kernel-based system identification, popularized by Dr. Lennart Ljung as ReLS. Essentially, it uses inv(x*x') instead of Tikhonov diagonal loading, which is about as optimal a solution as you can get. IMHO, it is all about how to formalize your "physical" knowledge of the system. BTW, ReLS's FLOP count is orders of magnitude lower than for biased estimation, compressed sensing, LASSO, etc.
@abdjahdoiahdoai · 3 years ago
Hi Professor, please tell us how we can support this channel. Shall we just buy the book, or would you set up a Patreon account?
@amielwexler1165 · 10 months ago
Thanks
@charuvaza3807 · 3 years ago
Sir, can you please make a video on the restricted isometry property?
@KountayDwivedi · 2 years ago
Thank you so much, Sir. A very insightful video. Could you please shed some light on how to decide the value of lambda in LASSO regression? Is it dependent on the number of features? Thanks again, Sir.
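One common recipe, not necessarily the one used in the video, is to pick lambda by k-fold cross-validation over a grid of values; the chosen value depends on the noise level and sample size as well as (more weakly) on the number of features. A minimal sketch with scikit-learn's LassoCV, which again calls lambda `alpha`:

```python
# Picking the LASSO penalty by cross-validation (one common practical recipe).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 20))
beta_true = np.zeros(20)
beta_true[:3] = [1.0, -2.0, 0.5]
y = X @ beta_true + 0.1 * rng.normal(size=200)

model = LassoCV(cv=5).fit(X, y)              # scans a grid of penalties automatically
print("selected lambda:", model.alpha_)
print("indices of nonzero coefficients:", np.flatnonzero(model.coef_))
```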
@AB-dw8vo · 4 years ago
Great lectures!!!!! Many thanks!
@itsamankumar403 · 1 year ago
Thank you Prof :)
@Eigensteve · 1 year ago
Thanks for watching!
@ShashankShekhar-de4ld · 4 years ago
Hello Sir, it was a great video, thank you. Could you also make a video on SISSO?
@krishnaaditya2086 · 4 years ago
Awesomeness, thank you 👍
@emmanuelameyaw6806 · 4 years ago
Economic models are typically dynamic systems of difference equations, not differential equations... is SINDy applicable to difference equations? If we could discover the nonlinear systems that generate economic data, that would be awesome... but I guess interpretability would still be limited :)
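SINDy-style sparse regression does carry over to maps: build a library of candidate functions of x_k and sparsely regress x_{k+1} onto it. Below is a rough sketch of that idea (not the PySINDy API; a plain l1-penalized fit stands in for SINDy's sequential thresholding), recovering a logistic map.

```python
# A rough sketch: sparse regression applied to a difference equation.
# Data come from the logistic map x_{k+1} = 3.7 * x_k * (1 - x_k).
import numpy as np
from sklearn.linear_model import Lasso

x = np.empty(500)
x[0] = 0.5
for k in range(499):                          # simulate the map
    x[k + 1] = 3.7 * x[k] * (1.0 - x[k])

xk, xk1 = x[:-1], x[1:]
Theta = np.column_stack([np.ones_like(xk), xk, xk**2, xk**3])  # library [1, x, x^2, x^3]

model = Lasso(alpha=1e-4, fit_intercept=False, max_iter=50000).fit(Theta, xk1)
print(np.round(model.coef_, 3))               # ideally close to [0, 3.7, -3.7, 0]
```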
@prashantsharmastunning · 4 years ago
wow!!
@akathevip · 8 months ago
I Like Someone Who Looks Like You I Like To Be Told I Like To Take Care of You I Like To Take My Time I Like To Win I Like You As You Are I Like You, Miss Aberlin
@fl2024steve · 3 years ago
I want to do a PhD again :)
@amielwexler1165 · 10 months ago
Another comment for the algorithm.
@marofe · 4 years ago
Thanks for this excellent lecture!