1:15 Approximation and fitting
1:42 Norm approximation
4:10 examples
5:39 Penalty function approximation
10:51 example
17:15 Huber penalty function
23:10 Least-norm problems
25:31 examples
30:17 Regularized approximation
34:53 Scalarized problem
39:03 Optimal input design
40:34 examples
42:48 Signal reconstruction
47:47 quadratic smoothing example
50:48 total variation reconstruction example
56:50 Robust approximation
1:06:19 stochastic robust LS
1:10:04 worst-case robust LS
1:15:52 example
@ArnabJoardar 4 years ago
Where are you, dude? Missing your bookmark keeping for all the remaining lectures.
@shiv093 4 years ago
@@ArnabJoardar My Professor skipped a few chapters. :(
@tag_of_frank 5 years ago
450,000 views for lecture one, 36,000 for lecture ten. About a factor of 10! Glad I made it through.
@Gabend 4 years ago
This is where the good stuff starts!
@joshhyyym 4 years ago
2020 and compressed sensing is still pretty popular in MRI
@SalmaKhan-sy7tr 8 years ago
Professor Boyd. The Great
@ajminich 14 years ago
Only Boyd could turn an annoying fly into a hilarious joke...and weave it into the entire lecture.
@kylex750 14 years ago
32:50 A fly problem
@ArnabJoardar 4 years ago
At 8:46, he mentions that the 1-norm has the opposite sense to the square-error function: for small residuals, you're relatively more irritated under the 1-norm than under the square-error function. Could anyone elaborate on how a small residual is able to give a large irritation in the case of the 1-norm? If the irritant function were 1/residual, then, if the residual was small, i.e. 0
@chaowu9105 4 years ago
Arnab Joardar I think this is about the slopes of the L1 and quadratic penalties. L1 has a constant slope of 1, while the quadratic's slope is smaller than 1 near zero.
@bgang6207 3 years ago
"Large irritation" in the relative sense. Say the residual is 0.1, in 1-norm it is just 0.1, but in 2-norm it is 0.01 and 0.1>>0.01.
@alialenezi360 5 years ago
Don’t give up
@TheProgrammer10 4 years ago
the fly was just trying to get a better look at what boyd was talking about
@annawilson3824 11 months ago
1:10:00
@jrf2623 3 years ago
Lecture 10: optiflyzation problem
@crystalp1983 12 years ago
totally
@EulerGauss13 9 months ago
This feels like hearing about the whole toolkit without seeing or knowing anything about the real problems. I know this is the applied part of the course, but it's so unmotivating.