0:30 Statistical estimation
4:51 Linear measurements with IID noise
7:23 Examples
15:36 Logistic regression
23:03 Example
29:39 (Binary) hypothesis testing
35:02 Detection probability matrix
40:43 Scalarization
50:41 Minimax detector & example
55:11 Experiment design
1:01:45 Vector optimization formulation
1:13:01 D-optimal design
1:14:28 Example
@ArnabJoardar, 4 years ago
The real MVP. Thanks a lot for the bookkeeping. It really helps to know when to go back to the book.
@MinSeokSong-f8d, 9 months ago
15:00 It should be "by flipping and taking log" (not exp).
@VolatilityEdgeRunner, 11 years ago
The number of reviews is decreasing exponentially w.r.t the number of lectures.
@iaroslavshcherbatyi5357, 8 years ago
+Wei Xue Was just about to comment on that :D
@abhishekaich970, 8 years ago
Yeah... :/ I am still waiting for him to actually start solving a convex optimization problem!
@yngve1993, 8 years ago
That's not necessarily the point of these lectures; they are here to teach us how to formulate problems so they can be solved with a professional solver. Writing a robust solver is VERY difficult, which is why the best solvers out there cost on the order of $10,000, though good free ones are still available. If you wanted to build a solver yourself, you would need a toolbox with a range of quasi-Newton methods, numerical linear algebra, etc. That is FAR beyond what you can expect from a 20-lecture course in applied convex optimization (it could very well be a PhD thesis).
@Thien--Nguyen, 3 years ago
Ah the everlasting battle between Frequentists and Bayesians...
@janjereczek9070, 3 years ago
On the slide about logistic regression, it is stated that the distribution is logistic... But it does not integrate to 1, so it is not a distribution, right? Isn't it rather a cumulative distribution function?
@heyjianjing, 2 months ago
It is a discrete distribution: p(y=1) is given by the expression shown on the slide, and p(y=0) = 1 - p(y=1).
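A tiny numeric illustration of the point above (a sketch, not the lecture's code; the parameter values a and b are made up): the logistic expression gives p(y=1), and together with p(y=0) = 1 - p(y=1) the two probabilities always sum to 1, so it is a valid discrete distribution even though the logistic function does not integrate to 1 over the reals.

```python
import math

def p_y1(u, a=1.0, b=0.0):
    """Logistic model: p(y=1 | u) = exp(a*u + b) / (1 + exp(a*u + b))."""
    z = a * u + b
    return math.exp(z) / (1.0 + math.exp(z))

def p_y0(u, a=1.0, b=0.0):
    return 1.0 - p_y1(u, a, b)

# The two probabilities sum to 1 for any u: a discrete distribution on {0, 1}.
print(p_y1(0.0))                   # 0.5 when a*u + b = 0
print(p_y1(2.0) + p_y0(2.0))       # 1.0
```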
@ArnabJoardar, 4 years ago
At 57:13, could someone elaborate on why that is the error covariance matrix? The a_i's are under the control of the person conducting the experiment, right? So how is that an error? I understand that one should choose only those experiments that, given constraints like limited resources, let the series of experiments measure as much as possible. But why call it error? Knowing that would clarify why I want to make it smaller.
@jiaxinzhu1057, 3 years ago
Because people usually use a probabilistic model, say Gaussian noise, to set up this problem. If you substitute the formula into a Gaussian PDF, you will find that the a's play the role of the variance, which answers your first question.
@heyjianjing, 2 months ago
You have the least-squares solution x_hat = (A^T A)^{-1} A^T y; plug in y = Ax + w, and you can see that the error x_hat - x is just (A^T A)^{-1} A^T w. Then take the expected value of (x_hat - x)(x_hat - x)^T to get the covariance matrix, which is the expression on the slide.
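A quick Monte Carlo sanity check of the reply above (not from the lecture; the matrix sizes and seed are made up for illustration): with IID unit-variance noise w, the empirical covariance of x_hat - x should match (A^T A)^{-1}.

```python
import numpy as np

# Empirically verify: for y = A x + w with IID unit-variance noise w,
# the least-squares error x_hat - x has covariance (A^T A)^{-1}.
rng = np.random.default_rng(0)
m, n, trials = 50, 3, 20000

A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
estimator = np.linalg.inv(A.T @ A) @ A.T   # least-squares estimator matrix

errs = np.empty((trials, n))
for t in range(trials):
    w = rng.standard_normal(m)             # IID noise, sigma = 1
    x_hat = estimator @ (A @ x + w)
    errs[t] = x_hat - x

emp_cov = errs.T @ errs / trials           # empirical error covariance
theory = np.linalg.inv(A.T @ A)            # the expression on the slide
print(np.max(np.abs(emp_cov - theory)))    # should be small
```

Note that the error covariance does not depend on x or on the realized noise, only on A, which is why experiment design can minimize it by choosing the measurement vectors.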
@calebbowyer9030, 6 years ago
What is a non-trivial bi-criterion LP?
@tag_of_frank, 5 years ago
Where do ROC and detectors come up in statistics and machine learning?
@dudeshash4110, 4 years ago
I have seen ROC curves and detectors in signal processing, where the threshold test that achieves a point on the ROC is called the Neyman-Pearson (NP) detector. No idea about machine learning.
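To make the connection concrete, here is a minimal sketch (not from the lecture; the Gaussian shift mu and thresholds are made-up illustration values) of how a threshold detector traces out an ROC curve: each threshold t gives one (false-alarm probability, detection probability) pair.

```python
from math import erf, sqrt

def Q(z):
    """Gaussian tail probability P(N(0,1) > z)."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def roc_point(t, mu=2.0):
    """ROC point for the test 'declare H1 when x > t',
    with H0: x ~ N(0,1) and H1: x ~ N(mu,1)."""
    p_fa = Q(t)        # false-alarm probability (under H0)
    p_d = Q(t - mu)    # detection probability (under H1)
    return p_fa, p_d

# Sweeping the threshold traces out the ROC curve.
for t in (0.0, 1.0, 2.0):
    print(roc_point(t))
```

In ML the same curve appears when you sweep the score threshold of a binary classifier, so the detection/false-alarm trade-off in the lecture carries over directly.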