This video is so underrated; this is the clearest explanation one can find on the internet about semidefinite programming without having to study control theory or Mr. Boyd's convex optimization course. The animation of the feasible region as a 3D object completely blew my mind, yet it sends a powerful message about the beautiful nature of semidefinite programming. Very difficult to express my appreciation with my limited vocabulary.
@VisuallyExplained 2 years ago
I really really appreciated your nice comment, and I am glad you liked the video!!
@kodirovsshik 2 years ago
Why does this channel have 4 thousand subscribers and not 4 million? It would be truly deserved. Another awesome video tho, and looking forward to the 3rd part!
@VisuallyExplained 2 years ago
Yayy!! Thanks for the support, I really appreciate it!
@jcamargo2005 8 months ago
If you align the first vector with [1, 0, 0], you see that the 3 vectors lie in the xy-plane, separated by 120 degrees.
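A quick numpy check of this observation (the construction below is illustrative, not from the video):

```python
import numpy as np

# Three unit vectors in the xy-plane, 120 degrees apart, with the
# first one aligned with [1, 0, 0].
angles = 2 * np.pi * np.arange(3) / 3
V = np.column_stack([np.cos(angles), np.sin(angles), np.zeros(3)])

# Gram matrix: ones on the diagonal, cos(120°) = -1/2 off it,
# which matches the minimal alpha = -1/(n-1) for n = 3.
print(np.round(V @ V.T, 6))
```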
@danschultz777 1 year ago
I love these videos, but I feel like the pacing is a bit fast. If you slowed down the explanations in the video by 25% that would be very helpful to me. Keep up the great work!
@VisuallyExplained 1 year ago
Fair enough, thanks for the valuable feedback!
@artr0x93 2 years ago
Really like these visualizations! I'm just now getting into SDPs for computer vision, and would love to see a video on how different SDP solvers work internally. That's pretty much a mystery to me, and it's hard to find good explanations.
@shyambhagwat 2 years ago
Truly awesome, Bashir! Subscribed for more content!!
@Darkev77 4 days ago
@4:49 Does anyone know why we needed to keep positivity when minimizing the alpha? I.e., why did we minimize alpha in such a way that it keeps the matrix positive semidefinite? Is it because we know the matrix can then be decomposed as V^⊤ V?
@jimlbeaver 2 years ago
I've never heard of this...great explanation. thanks very much!
@VisuallyExplained 2 years ago
You're very welcome! :-)
@SonLeTyP9496 2 years ago
As always, what extraordinary content, Bakhir :)
@VisuallyExplained 2 years ago
Thanks a ton!!!
@stevensolomon9399 8 months ago
Awesome content! Thank you! 🎉
@ehTrotcoD 2 years ago
When writing A = C^T C, isn't C more akin to the Cholesky factor, and a square root of A would be B = A^(1/2)? I know this notation is used sometimes, but it seems confusing to refer to C as a square root, since you would think you could just square it to get back A, but you actually have to transpose one of the C's. In particular, sqrtm from scipy uses the more traditional definition of a square root.
@renanwilliamprado5380 1 year ago
You are right! This is not the actual definition of square root.
@nithingovindarajan3178 4 months ago
Yes, that is indeed correct. A matrix B is a square root of a matrix A if B^2 = A. Thus, if we consider the spectral decomposition A = V diag(lambda_1, ..., lambda_n) V^T and set B = V diag(sqrt(lambda_1), ..., sqrt(lambda_n)) V^T, we get what we wanted. Notice also that B is a symmetric matrix, so B^2 = B^T B = A.
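A small numpy/scipy sketch of the distinction discussed above (the random test matrix is just for illustration):

```python
import numpy as np
from scipy.linalg import cholesky, sqrtm

rng = np.random.default_rng(0)
C = rng.standard_normal((4, 4))
A = C.T @ C  # a random symmetric positive definite matrix

# Cholesky factor: upper-triangular U with U^T U = A (not U @ U = A)
U = cholesky(A)
print(np.allclose(U.T @ U, A))  # True
print(np.allclose(U @ U, A))    # False in general

# True square root: symmetric B with B @ B = A
B = sqrtm(A)
print(np.allclose(B @ B, A))    # True

# The same B from the spectral decomposition A = V diag(lam) V^T
lam, V = np.linalg.eigh(A)
B2 = V @ np.diag(np.sqrt(lam)) @ V.T
print(np.allclose(B, B2))       # True
```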
@sw98630 2 years ago
Great!!! I've had trouble getting visual intuition for SDPs, and this helps a lot!!
@VisuallyExplained 2 years ago
Fantastic! Great to hear!
@qr-ec8vd 2 years ago
Love these! Do you accept donations?
@VisuallyExplained 2 years ago
I don’t take donations, but I accept nice comments. Thank you!
@fabricetshinangi5042 2 years ago
Great video. 👍
@iamnottellingumyname 2 years ago
Hey! So I know that for an n×n matrix, the min alpha is -1/(n-1). How would I go about proving this? I can't think of a general system of equations for the eigenvalues.
@iamnottellingumyname 2 years ago
Absolutely beautiful videos by the way!
@VisuallyExplained 2 years ago
@@iamnottellingumyname That's a fun problem! Here is an elegant solution that doesn't require too much computation. Let M be the matrix with ones on the diagonal and alpha off the diagonal. We want the smallest alpha s.t. M stays PSD. We can write M = alpha*J + (1-alpha)*I, where J is the all-ones matrix and I is the identity matrix. It's not hard to show that J has rank one, with eigenvalues (n, 0, ..., 0). This means the eigenvalues of alpha*J are (n*alpha, 0, ..., 0). Adding (1-alpha)*I shifts all the eigenvalues by (1-alpha), so the eigenvalues of M are (n*alpha + (1-alpha), 1-alpha, ..., 1-alpha). From there you can easily show that the min alpha is -1/(n-1).
@iamnottellingumyname 2 years ago
@@VisuallyExplained that’s a really beautiful solution!! Appreciate the reply, keep up the good work
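A quick numerical check of the eigenvalue argument above (an illustrative sketch, not from the thread):

```python
import numpy as np

# At alpha = -1/(n-1), the matrix M = alpha*J + (1-alpha)*I should sit
# exactly on the boundary of the PSD cone: smallest eigenvalue 0, and
# the remaining n-1 eigenvalues equal to 1 - alpha = n/(n-1).
for n in range(2, 7):
    alpha = -1.0 / (n - 1)
    M = alpha * np.ones((n, n)) + (1 - alpha) * np.eye(n)
    print(n, np.round(np.linalg.eigvalsh(M), 8))
```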
@fabricetshinangi5042 2 years ago
I noticed that you used Python in many of the videos you posted. Can you recommend a book that can help me quickly learn how to solve NP problems in Python?
@VisuallyExplained 2 years ago
If you mean solving NP-hard problems exactly, I am afraid even the almighty Python cannot do that ... If you mean getting good-enough solutions for NP-hard problems, then I have a feeling that you might like the next video ;)
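For readers wondering what "good-enough solutions via an SDP" can look like in practice, here is a minimal sketch of the classic Goemans-Williamson Max-Cut relaxation using cvxpy. The 5-cycle graph and all names below are my own illustration, and this is not necessarily what the next video covers:

```python
import numpy as np
import cvxpy as cp

# Max-Cut on a 5-cycle via the Goemans-Williamson SDP relaxation.
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0

# Relax x_i in {-1, +1} to unit vectors; X is their Gram matrix.
X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(
    cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4),
    [X >> 0, cp.diag(X) == 1],
)
prob.solve()

# Rounding: recover the vectors from X, then cut with a random hyperplane.
lam, V = np.linalg.eigh(X.value)
vecs = V * np.sqrt(np.clip(lam, 0, None))  # rows are the unit vectors
rng = np.random.default_rng(0)
signs = np.sign(vecs @ rng.standard_normal(n))
cut = np.sum(W * (1 - np.outer(signs, signs))) / 4
print("SDP upper bound:", prob.value, " rounded cut:", cut)
```

The SDP value upper-bounds the true max cut (4 for the 5-cycle), and the random-hyperplane rounding recovers a cut within the famous ~0.878 factor in expectation.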