Please make sure you subscribe to the channel and hit the notification button to receive further notifications about the channel. Thank you all.
@belldemasi3870 · 4 years ago
Subscription done!! Thank you for your useful lecture :)
@deshawndarwin8901 · 4 years ago
Agree.
@wiltonhardesty8588 · 4 years ago
Subscribed :) Been supporting you since day 1.
@jacobyaubrey4109 · 4 years ago
I have subscribed, sir!!
@nautboermans2367 · 5 years ago
You just spoon-fed my brain with your clear explanation, thanks man!
@AhmadBazzi · 5 years ago
You are most welcome, Elena :)
@leonardocortese8293 · 5 years ago
Thanks for this awesome video. It is the first time I really understood how SDPs work.
@tuongnguyen9391 · 4 years ago
This channel is so underrated... your pedagogy style is right to the point. I really love your teaching style. Will you cover Difference of Convex programming in the future? :)
@AhmadBazzi · 4 years ago
Thanks a lot, Tuong!! I appreciate it. I think DCPs are really cool, with many applications in machine learning (see kernel selection and SVMs) as well as economics and finance. I will try my best to dedicate some lectures to them. 😊
@tuongnguyen9391 · 4 years ago
@@AhmadBazzi wow, you really know a lot about it. Although DC programming was invented by Vietnamese people, not many books in my country even mention its applications :)))))
@shubhamsharma2202 · 5 years ago
I know Schur's complement.. what I like is how easy you make it look at 26:11!! Thank you.
@AhmadBazzi · 5 years ago
You are welcome!!
@ElizabethMartinez-vj9fi · 5 years ago
Sir, your videos are terrific, especially this series along with the MATLAB ones. I always do very well in my math and statistics courses, but I am rarely satisfied with just getting the computations right - I seek an understanding of the math that extends beyond the numbers and an understanding of how the math applies in the real world. It usually takes time to answer all of these questions on my own, but your videos clarify so much and extend my interest, so thanks.
@maid6191 · 5 years ago
Just brilliant!! Ahmad Bazzi - you are a genius!
@AhmadBazzi · 5 years ago
Thank you for the support :)
@blairstatler4634 · 5 years ago
I wish my school, UT Dallas, had professors like you, so I need not struggle on my own. Thankfully I found you.
@shubhamsharma2202 · 5 years ago
Thank you for the lecture, sir.
@vicenteisabelle5357 · 5 years ago
Excellent videos, yours are different. Please continue posting videos.
@AhmadBazzi · 5 years ago
Sure thing, Vicente :)
@batigol1234567890 · 4 years ago
33:28 - the squared norm of a matrix (or, more specifically, the Frobenius norm) is the trace of what you are mentioning at 33:28; i.e., ||A||^2 = trace(A^T A) = trace(A A^T), where A^T denotes the transpose of A.
@AhmadBazzi · 4 years ago
I forgot the trace here. You're right.
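For anyone who wants to verify the trace identity above numerically, here is a minimal MATLAB check (the matrix size and the randn data are arbitrary placeholders):

A = randn(4, 3);      % any real matrix
norm(A, 'fro')^2      % squared Frobenius norm
trace(A' * A)         % same value
trace(A * A')         % same value again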
@hankpark4510 · 4 years ago
Brilliantly done, otherwise one of the hardest concepts to explain! Just a quick question about the eigenvalue minimization problem. At 32:19, V'_max * A(x) * V_max = lambda_max * ||V_max||^2, which makes me curious. I think the "lambda_max" should be derived from the whole matrix tI - A(x), not from A(x) alone. Thus, it's confusing that you decomposed

V'_max * (tI - A(x)) * V_max = t * ||V_max||^2 - V'_max * A(x) * V_max = t * ||V_max||^2 - lambda_max * ||V_max||^2.

I'd rather think

V'_max * (tI - A(x)) * V_max = lambda_max * ||V_max||^2, b/c (tI - A(x)) * V_max = lambda_max * V_max.

So, if we stick to your approach, it would be more helpful to explain how A(x) can share the same eigenvalue lambda_max with the whole matrix tI - A(x). I don't understand why A(x) can be a similar matrix to (tI - A(x)). Thanks again!
@chick3n71 · 3 years ago
The identity Id is invariant under any unitary or orthogonal transform U, i.e., U Id U^\dagger = Id.
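To spell this out (a sketch; U and Lambda below denote the eigenvector and eigenvalue matrices of A(x), notation not used in the video): if A(x) = U * Lambda * U', then

tI - A(x) = U * (tI) * U' - U * Lambda * U' = U * (tI - Lambda) * U',

so tI - A(x) has the same eigenvectors as A(x), and its eigenvalues are t - lambda_i. In particular its smallest eigenvalue is t - lambda_max(A(x)), so tI - A(x) is PSD exactly when t >= lambda_max(A(x)).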
@sandeepayyagari · 3 years ago
Hello Ahmad, can you please shed some light on the efficient MISDP solvers available? I am currently using YALMIP branch & bound. Thanks for your efforts. 🥰
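For reference, a minimal YALMIP branch & bound sketch of the setup mentioned above (a toy mixed-integer SDP; the data matrices and bounds are made up for illustration, and it assumes YALMIP plus a continuous SDP solver such as SeDuMi or SDPT3 are installed):

% Toy MISDP: minimize t s.t. t*I - (A0 + x1*A1 + x2*A2) is PSD, x integer.
A0 = eye(3); A1 = diag([1 0 -1]); A2 = diag([0 1 1]);  % made-up symmetric data
x = intvar(2, 1);                                      % integer decision variables
t = sdpvar(1, 1);
F = [t*eye(3) - (A0 + x(1)*A1 + x(2)*A2) >= 0, ...     % PSD constraint
     -3 <= x <= 3];                                    % keep the integer search finite
optimize(F, t, sdpsettings('solver', 'bnb'));  % 'bnb' calls the SDP solver on relaxations
value(x), value(t)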
@Scott_Raynor · 3 years ago
How do you go about doing SDPs in CVX in MATLAB?
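In case it helps, a minimal CVX sketch of the eigenvalue minimization SDP from this lecture (assuming CVX is installed; the A matrices are placeholder data, and sdp mode makes >= mean positive semidefinite for symmetric matrix expressions):

% minimize t  s.t.  t*I - A(x) is PSD, where A(x) = A0 + x1*A1 + x2*A2
n = 3;
A0 = randn(n); A0 = (A0 + A0')/2;   % placeholder symmetric data
A1 = randn(n); A1 = (A1 + A1')/2;
A2 = randn(n); A2 = (A2 + A2')/2;
cvx_begin sdp
    variables x(2) t
    minimize( t )
    t*eye(n) - (A0 + x(1)*A1 + x(2)*A2) >= 0   % PSD constraint in sdp mode
cvx_end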
@zyzhang1130 · 1 year ago
At 32:09 you claim that 'larger than in the semidefinite sense' and 'larger than or equal' are equivalent, but it seems the proof only goes one direction: why is the reverse direction (the inequality holding for v_max implies it holds for all alpha) also true?
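A sketch of the missing direction, using the fact that v' * A(x) * v <= lambda_max * ||v||^2 for every vector v (the variational characterization of the largest eigenvalue), not only for v_max: if t >= lambda_max, then for all v,

v' * (tI - A(x)) * v = t * ||v||^2 - v' * A(x) * v >= (t - lambda_max) * ||v||^2 >= 0,

which is exactly the statement that tI - A(x) is positive semidefinite.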
@rosiebennett512 · 5 years ago
Ahmad Bazzi, the next big thing?
@sucramgnat8157 · 3 years ago
23:08 - should that general inequality sign be reversed?
@zyzhang1130 · 1 year ago
The content is top tier, but it is totally compromised by the recording quality. I'm not even kidding.
@AhmadBazzi · 1 year ago
Thank you for the support and kind comment. I would like to apologize - this video was recorded when I started the channel, with very basic and poor equipment and little video editing knowledge. The latest videos - especially on optimization - are higher quality.
@zyzhang1130 · 1 year ago
@@AhmadBazzi I see. It was just frustrating because I was trying to follow the reasoning step by step and then encountered multiple glitches. I still learned a lot from it and I appreciate your effort.
@santhuathidi5987 · 4 years ago
Sir, how does Vmax multiplied by I give the norm of Vmax? (32:11)
@abdowaraiet2169 · 3 years ago
The norm of vmax is the result of multiplying vTmax by vmax from the other side of the bracket.
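Written out, with I the identity: V'_max * I * V_max = V'_max * V_max = ||V_max||^2, which is where the norm at 32:11 comes from.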
@lglgunlock · 4 years ago
I think at 26:21 the formula is incorrect.
@AhmadBazzi · 4 years ago
Which one? Schur's complement?
@lglgunlock · 4 years ago
Yes. Schur's complement. If A is positive definite, then "M is positive semi-definite" is equivalent to "M/A is positive semi-definite".
@AhmadBazzi · 4 years ago
Schur's complement is correct, please see here: en.wikipedia.org/wiki/Schur_complement. As for the statement I mentioned, check the properties in the above link, where it says "M is a positive-definite symmetric matrix if and only if D and the Schur complement of D are positive-definite."
@lglgunlock · 4 years ago
Your writing and your verbal description are inconsistent; however, I think both are incorrect.

Your writing claims: M is PSD (positive semi-definite) <=> A is PD (positive definite) and M/A is PD. This is incorrect, because "M is PSD" is not a sufficient condition for "A is PD and M/A is PD". To get to "A is PD and M/A is PD", you need "M is PD"; "M is PSD" is not enough.

Your verbal description claims: M is PSD <=> A is PSD and M/A is PSD. This is also incorrect, because from "M is PSD" you cannot get to "A is PSD and M/A is PSD".

The Schur complement only says: if "A is PD", then "M is PSD" <=> "M/A is PSD". Note "A is PD" is a premise; it is not a conclusion that you can get from "M is PSD". I am not sure whether "A is PSD" is a conclusion that you can get from "M is PSD"; it seems you can get this conclusion based on the generalized Schur complement. Anyway, you cannot get to "A is PSD and M/A is PSD" from "M is PSD".
@AhmadBazzi · 4 years ago
Checking the second property in the Wikipedia link en.wikipedia.org/wiki/Schur_complement, it says that M is a P.D. symmetric matrix <=> A and the Schur complement of A are P.D. M is the block matrix as shown at 26:21. It is symmetric and assumed to be P.S.D., and then this is equivalent to: A and the Schur complement of A are P.S.D. Where is the issue?
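For readers following this thread, the two properties from the linked Wikipedia page, written out for the symmetric block matrix M = [A B; B' C] with Schur complement M/A = C - B' * inv(A) * B, are:

(1) M is PD <=> A is PD and M/A is PD;
(2) if A is PD, then M is PSD <=> M/A is PSD.

So the PSD version needs the premise that A is PD (or the generalized Schur complement with a pseudoinverse), which is the point under debate.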
@virebovipole6657 · 5 years ago
My professor does the worst job at explaining convex optimization, and in particular semidefinite programming. This guy is doing quite the opposite :)
@AhmadBazzi · 5 years ago
I’m glad the explanation is of benefit to someone :)