Lecture 10 | Convex Optimization I (Stanford)

52,629 views

Stanford

Comments: 19
@shiv093 4 years ago
1:15 Approximation and fitting
1:42 Norm approximation
4:10 Examples
5:39 Penalty function approximation
10:51 Example
17:15 Huber penalty function
23:10 Least-norm problems
25:31 Examples
30:17 Regularized approximation
34:53 Scalarized problem
39:03 Optimal input design
40:34 Examples
42:48 Signal reconstruction
47:47 Quadratic smoothing example
50:48 Total variation reconstruction example
56:50 Robust approximation
1:06:19 Stochastic robust LS
1:10:04 Worst-case robust LS
1:15:52 Example
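For anyone coding along with the bookmarked sections, here is a minimal sketch of the norm approximation problem from 1:42, minimize ||Ax - b||, under the penalty choices discussed in the lecture. CVXPY is my choice of tool here (the lecture itself doesn't prescribe one), and the data, dimensions, and Huber half-width are made up for illustration:

```python
import cvxpy as cp
import numpy as np

# Made-up overdetermined problem data: A x ≈ b has no exact solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
b = rng.standard_normal(100)

x = cp.Variable(20)
r = A @ x - b  # residual vector

# Penalty choices covered in the lecture: 2-norm, 1-norm, Huber (17:15).
objectives = {
    "2-norm": cp.norm(r, 2),
    "1-norm": cp.norm(r, 1),
    "Huber (M=1)": cp.sum(cp.huber(r, M=1)),
}
for name, obj in objectives.items():
    prob = cp.Problem(cp.Minimize(obj))
    prob.solve()
    print(f"{name:12s} optimal value: {prob.value:.4f}")
```

Only the objective changes between variants, which is the point of the penalty-function view at 5:39.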
@ArnabJoardar 4 years ago
Where are you, dude? Missing your bookmarks for all the remaining lectures.
@shiv093 4 years ago
@@ArnabJoardar My Professor skipped a few chapters. :(
@tag_of_frank 5 years ago
450,000 views for lecture one, 36,000 for lecture ten. About a factor of 10! Glad I made it through.
@Gabend 4 years ago
This is where the good stuff starts!
@joshhyyym 4 years ago
2020 and compressed sensing is still pretty popular in MRI
@SalmaKhan-sy7tr 8 years ago
Professor Boyd. The Great
@ajminich 14 years ago
Only Boyd could turn an annoying fly into a hilarious joke...and weave it into the entire lecture.
@kylex7501 4 years ago
32:50 A fly problem
@ArnabJoardar 4 years ago
At 8:46, he mentions that the 1-norm has an opposite sense to the square-error function: for small residuals, you are irritated more by the 1-norm than by the square-error function, relatively speaking. Could anyone elaborate on how a small residual value is able to give a large irritation in the case of the 1-norm? If the irritant function were 1/residual, then, if the residual was small, i.e. 0
@chaowu9105 4 years ago
Arnab Joardar I think this is about the slopes of the L1 and quadratic penalties. L1 has constant slope 1, but for small residuals the quadratic's slope is smaller than 1.
@bgang6207 3 years ago
"Large irritation" in the relative sense. Say the residual is 0.1, in 1-norm it is just 0.1, but in 2-norm it is 0.01 and 0.1>>0.01.
@alialenezi360 5 years ago
Don’t give up
@TheProgrammer10 4 years ago
the fly was just trying to get a better look at what Boyd was talking about
@annawilson3824 11 months ago
1:10:00
@jrf2623 3 years ago
Lecture 10: optiflyzation problem
@crystalp1983 12 years ago
totally
@EulerGauss13 9 months ago
This feels like hearing about the whole toolkit without seeing or knowing anything about the real problems. I know this is the applied part of the course, but it's so unmotivating.
@何浩源-r2y 5 years ago
Very funny