Lecture 11 | Convex Optimization I (Stanford)

41,054 views

Stanford

A day ago

Comments: 20
@shiv093 4 years ago
0:30 Statistical estimation
4:51 Linear measurements with IID noise
7:23 Examples
15:36 Logistic Regression
23:03 example
29:39 (Binary) hypothesis testing
35:02 detection probability matrix
40:43 scalarization
50:41 minimax detector & example
55:11 Experiment Design
1:01:45 vector optimization formulation
1:13:01 D-optimal design
1:14:28 example
@ArnabJoardar 4 years ago
The real MVP. Thanks a lot for the bookkeeping. Really helps to know when to go back to the book.
@MinSeokSong-f8d 9 months ago
15:00 it should be "by flipping and taking the log (not exp)"
@VolatilityEdgeRunner 11 years ago
The number of reviews is decreasing exponentially w.r.t the number of lectures.
@iaroslavshcherbatyi5357 8 years ago
+Wei Xue Was just about to comment on that :D
@abhishekaich970 8 years ago
yeah.. :/ I am still waiting for him to start solving a convex optimization problem!
@yngve1993 8 years ago
That's not necessarily the point of these lectures; they are here to teach us how to formulate problems so they can be solved with a professional solver. Creating a robust solver is VERY difficult, which is why the best solvers out there cost in the range of $10,000, though good free ones are available. If you for some reason want to build a solver yourself, you would need a toolbox with a range of quasi-Newton methods, numerical linear algebra, etc. This is FAR beyond what you can expect from a 20-lecture course in applied convex optimization (it could very well be a PhD thesis).
@Thien--Nguyen 3 years ago
Ah the everlasting battle between Frequentists and Bayesians...
@janjereczek9070 3 years ago
On the slide about logistic regression, it is stated that the distribution is logistic... But it does not integrate to 1, so it is not a distribution, right? Isn't it rather a cumulative distribution function?
@heyjianjing 2 months ago
It is a discrete distribution, with p(y=1) given by the equation shown on the slide and p(y=0) = 1 - p(y=1).
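A tiny sketch of this point, with made-up values for a, b, and the feature u (not taken from the slide): the logistic function gives p(y=1), and the two outcomes together form a valid discrete distribution, even though the logistic function itself is a CDF rather than a density.

```python
import math

# hypothetical parameters and feature value (illustrative only)
a, b, u = 1.5, -0.3, 0.8

z = a * u + b
p1 = math.exp(z) / (1 + math.exp(z))  # p(y=1): logistic function of a*u + b
p0 = 1 - p1                           # p(y=0)

# the two outcomes sum to one, so this is a valid discrete distribution
assert abs(p0 + p1 - 1.0) < 1e-12
```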
@ArnabJoardar 4 years ago
At 57:13, could someone elaborate on why that is the error covariance matrix? The 'a's are under the control of the person conducting the experiment, right? So how is that an error? I understand that one needs to choose only those experiments that let a series of experiments gauge as much as possible given things like resource constraints, but why call it error? That would elucidate why I want to make it smaller.
@jiaxinzhu1057 3 years ago
Because this problem is usually set up probabilistically, say with Gaussian noise. If you substitute the formula into a Gaussian PDF, you will find that the 'a's act like a variance, and then you will know the answer to your first question.
@heyjianjing 2 months ago
You have the least-squares solution x_hat = (A^T A)^{-1} A^T y. Plug in y = Ax + w, and you can see that the error x_hat - x is just (A^T A)^{-1} A^T w. Then take the expected value of (x_hat - x)(x_hat - x)^T to get the covariance matrix, which is the equation on the slide.
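A quick numerical check of this derivation, with a random A and made-up problem sizes (a sketch, not the lecture's example): the error equals (A^T A)^{-1} A^T w exactly, and averaging the outer product over many noise draws w ~ N(0, I) approaches (A^T A)^{-1}.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 3                        # made-up problem size
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
w = rng.standard_normal(m)          # one noise realization, w ~ N(0, I)
y = A @ x + w

# least-squares estimate x_hat = (A^T A)^{-1} A^T y
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

# identity from the comment: x_hat - x = (A^T A)^{-1} A^T w
assert np.allclose(x_hat - x, np.linalg.solve(A.T @ A, A.T @ w))

# Monte Carlo: E[(x_hat - x)(x_hat - x)^T] approaches (A^T A)^{-1}
W = rng.standard_normal((m, 100_000))           # many noise draws
E = np.linalg.solve(A.T @ A, A.T @ W)           # error for each draw
cov_emp = E @ E.T / W.shape[1]
cov_theory = np.linalg.inv(A.T @ A)
assert np.linalg.norm(cov_emp - cov_theory) < 0.1 * np.linalg.norm(cov_theory)
```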
@calebbowyer9030 6 years ago
What is a non-trivial bi-criterion LP?
@tag_of_frank 5 years ago
Where do ROC and detectors come up in statistics and machine learning?
@dudeshash4110 4 years ago
I have seen ROC and detectors in signal processing, where we call it the NP (Neyman-Pearson) detector. No idea about machine learning.
@kenahoo 2 years ago
In ML we'd call it the Precision/Recall curve.
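Strictly speaking, the two curves are related but plot different axes: the ROC curve plots detection probability (recall) against false-alarm probability, while the precision/recall curve plots precision against recall. Both come from the same confusion counts, as in this toy sketch with made-up scores and labels:

```python
# made-up detector scores and ground-truth labels (illustrative only)
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]

pred = [int(s >= 0.5) for s in scores]  # threshold detector at 0.5

tp = sum(p and l for p, l in zip(pred, labels))          # true positives
fp = sum(p and not l for p, l in zip(pred, labels))      # false positives
fn = sum(not p and l for p, l in zip(pred, labels))      # missed detections
tn = sum(not p and not l for p, l in zip(pred, labels))  # true negatives

tpr = tp / (tp + fn)        # detection probability / recall (ROC y-axis)
fpr = fp / (fp + tn)        # false-alarm probability (ROC x-axis)
precision = tp / (tp + fp)  # precision (PR-curve y-axis)
```

Sweeping the threshold traces out each curve, one operating point per threshold.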
@annawilson3824 11 months ago
58:25
@ethanjyx 4 years ago
Seems to start having more connections with ML
@shupengwei9419 6 years ago
chapter 7 ..