Robust Regression with the L1 Norm

21,916 views

Steve Brunton


Comments: 18
@zoheirtir · 4 years ago
This channel is one of the most important channels for me! Many thanks, Steve.
@LucasL.Mbembela · 1 year ago
Along with the visual aids, you have explained the concept in a very understandable manner. Thanks for the video.
@zulhamhidayat858 · 25 days ago
Thank you for sharing this tutorial video and also the code.
@JousefM · 4 years ago
Thumbs up Steve!
@3003eric · 4 years ago
Nice video. Your channel and book are amazing! Congratulations.
@Globbo_The_Glob · 4 years ago
I was just talking about this in a meeting; get out of my head, Brunton.
@Calvin4016 · 4 years ago
Prof. Brunton, thank you for the lecture! However, in some cases, such as maximum a posteriori and maximum likelihood estimation under the assumption of Gaussian-distributed noise, minimizing the L2 norm gives the optimal solution. Usually heuristics such as M-estimation are applied to mitigate issues arising from outliers, in other words changing the kernel to a shape that can tolerate a certain number of outliers in the system. It sounds like using the L1 norm here has effects very similar to those of robust kernels, where we are effectively changing the shape of the cost/error. Can you please elaborate on the differences between using the L1 norm and the L2 norm plus an M-estimator, and on how the L1 norm performs in applications where data uncertainty is considered? Thanks!
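To make the comparison being asked about concrete, here is a minimal Python sketch (not code from the video) that fits a line to data containing one gross outlier using an L2 loss, an L1 loss, and a Huber loss; the data, the delta value, and the helper fit are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

# Illustrative data: y = 2x + 1 with one grossly corrupted point.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2 * x + 1 + 0.05 * rng.standard_normal(20)
y[-1] += 5.0                                   # single outlier

A = np.column_stack([x, np.ones_like(x)])      # design matrix [x, 1]

def fit(loss):
    """Fit [slope, intercept] by minimizing the given loss on the residuals."""
    return minimize(lambda b: loss(A @ b - y), x0=np.zeros(2),
                    method="Nelder-Mead").x

delta = 0.1
l2    = fit(lambda r: np.sum(r ** 2))          # ordinary least squares
l1    = fit(lambda r: np.sum(np.abs(r)))       # least absolute deviations
huber = fit(lambda r: np.sum(np.where(np.abs(r) <= delta,
                                      0.5 * r ** 2,
                                      delta * (np.abs(r) - 0.5 * delta))))

print("L2   :", l2)      # pulled toward the outlier
print("L1   :", l1)      # stays near the true [2, 1]
print("Huber:", huber)   # behaves like L1 on the large residual

With a small delta the Huber fit behaves much like the L1 fit on the outlier, which matches the intuition in the comment that both approaches reshape the cost so that large residuals are penalized less aggressively than under L2.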
@keyuchen5992 · 1 year ago
I think you are right
@pierregravel5941 · 1 year ago
Is there any way we might generate a sampling matrix which is maximally incoherent? What if the samples are positioned randomly and maximally distant from each other? Can we add additional constraints on the sampling matrix?
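As a rough numerical probe of that question (a sketch of my own, not the video's code): one can compare the mutual coherence of a measurement matrix built from randomly scattered point samples of a sparsifying basis against one built from a clustered block of samples. The DCT basis, the sizes, and the helper mutual_coherence below are illustrative assumptions.

import numpy as np
from scipy.fft import dct

def mutual_coherence(Theta):
    """Largest |inner product| between distinct, normalized columns of Theta."""
    Tn = Theta / np.linalg.norm(Theta, axis=0)
    G = np.abs(Tn.T @ Tn)
    np.fill_diagonal(G, 0.0)
    return G.max()

n, m = 256, 32
Psi = dct(np.eye(n), norm="ortho")                  # sparsifying basis (DCT)
rng = np.random.default_rng(0)

rows_random = rng.choice(n, size=m, replace=False)  # scattered point samples
rows_block  = np.arange(m)                          # clustered point samples

print("random samples :", mutual_coherence(Psi[rows_random, :]))
print("clustered block:", mutual_coherence(Psi[rows_block, :]))

Randomly scattered samples typically give lower coherence than a clustered block, and one could add constraints (for example, a minimum spacing between samples) inside such a search, though random placement is already close to incoherent for generic bases.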
@haticehocam2020 · 4 years ago
Mr. Brunton, what materials and software did you use to record this video?
@alexandermichael3609 · 4 years ago
Thank you, Professor. It is pretty helpful for me.
@vijayendrasdm · 4 years ago
Hi Steve, the L1 (i.e., regularized) solution's error surface is not convex. Are you planning to explain how we optimize such functions? Mathematical derivations would be helpful :) Thanks
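One note on the question above: the least-absolute-deviations objective ||A·beta - y||_1 is convex, just not differentiable, and it can be optimized exactly by rewriting it as a linear program with slack variables t such that -t <= A·beta - y <= t. Below is a minimal sketch using scipy.optimize.linprog (an illustration under those assumptions, not the video's code; the name lad_fit and the data are made up).

import numpy as np
from scipy.optimize import linprog

def lad_fit(A, y):
    """Least-absolute-deviations fit, min ||A @ beta - y||_1, as a linear program.

    Decision variables are [beta (p entries), t (n entries)] with |A@beta - y| <= t,
    and the objective is sum(t).
    """
    n, p = A.shape
    c = np.concatenate([np.zeros(p), np.ones(n)])
    G = np.block([[ A, -np.eye(n)],                #  A@beta - t <=  y
                  [-A, -np.eye(n)]])               # -A@beta - t <= -y
    h = np.concatenate([y, -y])
    res = linprog(c, A_ub=G, b_ub=h, bounds=[(None, None)] * (p + n))
    return res.x[:p]

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
A = np.column_stack([x, np.ones_like(x)])
y = 2 * x + 1 + 0.05 * rng.standard_normal(30)
y[0] += 4.0                                        # one gross outlier
print(lad_fit(A, y))                               # close to [2, 1]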
@sutharsanmahendren1071 · 4 years ago
Dear sir, I am from Sri Lanka and I really admire your video series. My doubt is that the L1 norm is not differentiable at zero. To impose sparsity, researchers use ISTA (the Iterative Soft-Thresholding Algorithm) to handle the weights when they come near zero, with a certain threshold. What are your thoughts on this?
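For readers following this thread, here is a minimal ISTA sketch (my own illustration, not the video's code) for the L1-regularized least-squares problem min_x 0.5*||Ax - b||_2^2 + lam*||x||_1; the non-differentiability of |x| at zero is handled by the soft-thresholding (shrinkage) step rather than by a gradient. The problem sizes, lam, and iteration count are arbitrary choices.

import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """ISTA for min_x 0.5*||A @ x - b||_2^2 + lam*||x||_1 (illustrative)."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Tiny demo: recover a 3-sparse vector from noisy random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, b, lam=0.1)
print("nonzeros found:", np.flatnonzero(np.abs(x_hat) > 1e-3))

The soft-thresholding step is exactly the thresholding near zero mentioned in the comment: entries whose magnitude falls below lam/L are set to zero, which is what produces sparse solutions.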
@twk844 · 4 years ago
Does anyone know the historical reasons for the popularity of the L2 norm? Very entertaining videos! Namaste!
@MrHaggyy · 2 years ago
I think it's so popular because you need it so often. Basically everybody knows Pythagoras, i.e. the distance between two points in 2D, and that idea dominates mechanical engineering. The whole framework of complex numbers, with i = sqrt(-1), is built around the L2 norm. So all the differential equations in mechanics and electronics need it, and basic optics needs it too.
@JeffersonRodrigoo · 4 years ago
Excellent!
@alegian7934 · 4 years ago
there is a point in each video where you lose consciousness of time passing :D