Lecture 11 | Convex Optimization I (Stanford)

41,061 views

Stanford

Comments: 20
@shiv093 4 years ago
0:30 Statistical estimation
4:51 Linear measurements with IID noise
7:23 Examples
15:36 Logistic regression
23:03 Example
29:39 (Binary) hypothesis testing
35:02 Detection probability matrix
40:43 Scalarization
50:41 Minimax detector & example
55:11 Experiment design
1:01:45 Vector optimization formulation
1:13:01 D-optimal design
1:14:28 Example
@ArnabJoardar 4 years ago
The real MVP. Thanks a lot for the bookkeeping. It really helps to know when to go back to the book.
@MinSeokSong-f8d 9 months ago
15:00 It should be "by flipping and taking the log (not exp)".
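For context, the reformulation this comment refers to is the standard identity behind maximum-likelihood estimation (stated here in the lecture's notation, where p_x(y) is the likelihood of the observation y under parameter x):

    % Maximizing the likelihood = minimizing the negative log-likelihood,
    % since log is monotone increasing and negation flips max into min.
    \hat{x}_{\mathrm{ml}} = \operatorname*{argmax}_x \, p_x(y)
                          = \operatorname*{argmin}_x \, \bigl( -\log p_x(y) \bigr)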
@Thien--Nguyen 3 years ago
Ah the everlasting battle between Frequentists and Bayesians...
@VolatilityEdgeRunner 11 years ago
The number of comments is decreasing exponentially w.r.t. the number of lectures.
@iaroslavshcherbatyi5357 8 years ago
+Wei Xue Was just about to comment on that :D
@abhishekaich970 8 years ago
Yeah... :/ I am still waiting for him to actually solve a convex optimization problem!
@yngve1993 8 years ago
That's not necessarily the point of these lectures; they are here to teach us how to formulate problems so that they can be solved with a professional solver. Creating a robust solver is VERY difficult, which is why the best solvers out there cost in the range of $10,000, though good free ones are available. If you for some reason want to build a solver yourself, you would need a toolbox with a range of quasi-Newton methods, numerical linear algebra, etc. That is FAR beyond what you can expect from a 20-lecture course in applied convex optimization (it could very well be a PhD thesis).
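To make the "formulate, then hand off to a solver" workflow concrete, here is a minimal sketch using CVXPY, one of the free modeling tools alluded to above. The problem (non-negative least squares) and all data are illustrative, not from the lecture:

    # Minimal sketch: formulate a convex problem, then hand it to a solver.
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))   # made-up problem data
    b = rng.standard_normal(20)

    x = cp.Variable(5)
    # Formulating the problem is the course-level part...
    prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [x >= 0])
    # ...the hard numerical work happens inside the solver CVXPY calls.
    prob.solve()
    print(prob.status, x.value)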
@janjereczek9070 3 years ago
On the slide about logistic regression, it is stated that the distribution is logistic... But it does not integrate to 1, so it is not a distribution, right? Isn't it rather a cumulative distribution function?
@heyjianjing 2 months ago
It is a discrete (Bernoulli) distribution: p(y=1) is given by the equation shown on the slide, and p(y=0) = 1 - p(y=1), so the two probabilities do sum to 1.
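Written out, a sketch of the model as it appears in the textbook's logistic-regression example (explanatory variable u, parameters a and b; the slide's notation may differ slightly):

    % Bernoulli model behind logistic regression: p(y=1|u) is the logistic
    % (sigmoid) function of a^T u + b, and the two probabilities sum to 1.
    p(y = 1 \mid u) = \frac{\exp(a^\top u + b)}{1 + \exp(a^\top u + b)},
    \qquad
    p(y = 0 \mid u) = \frac{1}{1 + \exp(a^\top u + b)}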
@ArnabJoardar 4 years ago
At 57:13, could someone elaborate on why that is the error covariance matrix? The a_i's are under the control of the person conducting the experiment, right? So how is that an error? I understand that one should choose the experiments that, given resource constraints, are as informative as possible, but why call it error? That would clarify why I want to make it smaller.
@jiaxinzhu1057 3 years ago
Because this problem is usually set up probabilistically, with Gaussian noise. If you substitute the model into a Gaussian PDF, you will find that the expression built from the a_i's plays the role of a variance, which answers your first question.
@heyjianjing 2 months ago
You have the least-squares solution x̂ = (A^T A)^{-1} A^T y. Plug in y = Ax + w and you can see that the error x̂ - x is just (A^T A)^{-1} A^T w. Then take the expected value of (x̂ - x)(x̂ - x)^T to get the covariance matrix, which is the equation on the slide.
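Spelled out as a short derivation (assuming, as in the lecture's setup, IID unit-variance noise, so E[w w^T] = I):

    % Error of the least-squares estimate:
    \hat{x} - x = (A^\top A)^{-1} A^\top (Ax + w) - x
                = (A^\top A)^{-1} A^\top w
    % Taking expectations, with E[w w^\top] = I:
    E\bigl[(\hat{x}-x)(\hat{x}-x)^\top\bigr]
      = (A^\top A)^{-1} A^\top \, E[w w^\top] \, A \,(A^\top A)^{-1}
      = (A^\top A)^{-1}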
@calebbowyer9030 6 years ago
What is a non-trivial bi-criterion LP?
@tag_of_frank 5 years ago
Where do ROC and detectors come up in statistics and machine learning?
@dudeshash4110 4 years ago
I have seen ROC curves and detectors in signal processing; there the detector that traces out the ROC is the Neyman-Pearson (NP) detector. No idea about machine learning.
@kenahoo 2 years ago
In ML the ROC curve is used under the same name (e.g., ROC-AUC); the precision/recall curve is a closely related diagnostic.
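As a concrete illustration of where ROC shows up in ML tooling, a minimal sketch with scikit-learn; the labels and scores below are made up:

    # Minimal ROC example; data are hypothetical, for illustration only.
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    y_true = np.array([0, 0, 1, 1])            # ground-truth labels
    scores = np.array([0.1, 0.4, 0.35, 0.8])   # hypothetical classifier scores
    fpr, tpr, thresholds = roc_curve(y_true, scores)  # points on the ROC
    print("FPR:", fpr, "TPR:", tpr, "AUC:", roc_auc_score(y_true, scores))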
@ethanjyx 4 years ago
The material seems to be making more connections with ML.
@annawilson3824 11 months ago
58:25
@shupengwei9419 6 years ago
Chapter 7.