5. Positive Definite and Semidefinite Matrices

161,744 views

MIT OpenCourseWare

Comments
@georgesadler7830 (3 years ago)
Dr. Strang, thank you for another classic lecture and selection of examples on positive definite and semidefinite matrices.
@spoopedoop3142 (4 years ago)
For everyone asking about the bowl and eigenvalues analogy: let X = (x, y) be the input vector (so that I can write X as a vector) and consider the energy functional f(X) = X^T S X. What happens if we evaluate it along the eigenvectors? First, why would I think to do this? The eigenvectors of the matrix give the "natural coordinates" for the action of the matrix as a linear transformation, which is what gives rise to all the "completing the square" type problems with quadratic forms in the usual LA classes. The natural coordinates rotate the quadratic so it has no off-diagonal terms: the function changes from something like f(x,y) = 3x^2 + 6y^2 + 4xy to something like f(x,y) = lambda_1 x^2 + lambda_2 y^2, the kind of nice quadratic you learn to draw in a multivariable calc course. Going back to the current calculation with f(X) = X^T S X: if we evaluate in the eigen-directions, our function becomes f(X_1) = X_1^T S X_1 = lambda_1 ||X_1||^2 (a nice quadratic) and f(X_2) = X_2^T S X_2 = lambda_2 ||X_2||^2 (another nice quadratic), where ||X||^2 denotes the squared norm. The eigenvalues lambda_1, lambda_2 become scaling coefficients in the eigen-directions. A large scaling coefficient means a steep quadratic; a small coefficient means a quadratic that is stretched out horizontally. If an eigenvalue is close to zero, the quadratic functional looks almost like a horizontal plane in that direction (really, the tangent plane is nearly horizontal) and the matrix is nearly singular, so any solver will have difficulty finding a solution: it sees a bunch of nearly feasible directions and bounces around the argmin vector without being able to confidently declare success. Poor solver. Of course, these are purely mathematical problems; rounding error will probably hinder the search even further. Edit: changed "eigenvalue" to "eigenvector" in the 2nd paragraph.
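To make the comment above concrete, here is a minimal NumPy sketch (the matrix S is a made-up positive definite example, not one from the lecture): along a unit eigenvector the energy X^T S X is exactly the eigenvalue, and in eigen-coordinates the bowl is lambda_1 x^2 + lambda_2 y^2.

    import numpy as np

    # A small symmetric positive definite matrix (hypothetical example, not from the lecture).
    S = np.array([[3.0, 2.0],
                  [2.0, 6.0]])

    # Orthonormal eigenvectors (columns of Q) and eigenvalues of S.
    eigvals, Q = np.linalg.eigh(S)

    def energy(x):
        # Quadratic form f(x) = x^T S x.
        return x @ S @ x

    # Along a unit eigenvector the energy equals the eigenvalue:
    # f(q_i) = q_i^T S q_i = lambda_i * ||q_i||^2 = lambda_i.
    for lam, q in zip(eigvals, Q.T):
        print(lam, energy(q))          # the two printed numbers agree

    # In eigen-coordinates X = a*q1 + b*q2 the energy is lambda_1*a^2 + lambda_2*b^2,
    # a bowl whose steepness in each eigen-direction is set by the eigenvalue.
    a, b = 0.5, -1.2
    x = a * Q[:, 0] + b * Q[:, 1]
    print(energy(x), eigvals[0] * a**2 + eigvals[1] * b**2)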
@amysun6080 (4 years ago)
Lecture starts at 2:50
@marekdude (3 years ago)
Positive semidefinite matrices: 38:01
@jonasblom6177 (3 years ago)
thanks man
@guythat779 (3 years ago)
What a king
@Luke-ty2ur (2 years ago)
u the man.
@miguiprytoluk (2 years ago)
38:01
@mariomariovitiviti (4 years ago)
listening to Strang is like getting a brain massage
@CrazyHorse151 (4 years ago)
I'm only halfway through one lecture and I already love him. :'D
@PremiDhruv (1 year ago)
I had a headache; after 15 minutes of his lecture it had evaporated.
@emanueleria8151 (10 months ago)
Sure
@hangli1622 (2 years ago)
At 41:20, why does the rank-1 matrix have 2 zero eigenvalues? Because 3 - 1 = 2? Does the professor mean that the number of zero eigenvalues always equals the nullity of the matrix?
@justsomerandomguy933 (4 years ago)
Starting at 22:00, shouldn't we follow the opposite of the gradient direction to reach the minimum? The gradient gives the steepest ascent direction, as far as I know.
@zma4543 (4 years ago)
I think you are right
@jenkinsj9224 (3 years ago)
Yes
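A minimal sketch of the point in this thread, assuming the usual quadratic f(x) = 1/2 x^T S x - b^T x (the numbers below are made up, not the lecture's): the gradient is Sx - b, and stepping against it, x <- x - lr*(Sx - b), walks downhill to the minimizer S^-1 b; stepping with the gradient would climb and diverge.

    import numpy as np

    # Hypothetical positive definite S and right-hand side b (not the lecture's numbers).
    S = np.array([[3.0, 2.0],
                  [2.0, 6.0]])
    b = np.array([1.0, 0.0])

    def grad(x):
        # gradient of f(x) = 1/2 x^T S x - b^T x
        return S @ x - b

    x = np.array([2.0, 2.0])
    lr = 0.1
    for _ in range(200):
        x = x - lr * grad(x)       # step DOWNhill: minus the gradient

    print(x, np.linalg.solve(S, b))   # gradient descent approaches S^{-1} b
    # Using x + lr * grad(x) instead would move uphill and blow up.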
@mac_edmarco (4 years ago)
I wish Strang was my grandfather
@NguyenAn-kf9ho (3 years ago)
Maybe he's not, because he'd be sad if his grandson were stupid and couldn't invert a matrix... just kidding XD
@prajwalchoudhary4824 (3 years ago)
@@NguyenAn-kf9ho lol
@hxqing (3 years ago)
Wishing he was, when he isn't? Better to wish he is.
@alexandersanchez6337 (2 years ago)
This professor is the Platonic ideal of a professor.
@MLDawn (1 year ago)
At 14:18, the energy can also be EQUAL to 0 (not JUST bigger than 0)! Then doesn't this mean that the matrix is positive SEMIdefinite as opposed to positive definite?
@quirkyquester (4 years ago)
Came here from the 18.06 Fall 2011 singular value decomposition lecture taught by Professor Strang.
@samirroy1412 (4 years ago)
I am doing a project on this topic; it really helped me a lot... thank you.
@samirroy1412 (4 years ago)
@@vishalyadav2958 yes
@samirroy1412 (4 years ago)
Are you doing a PhD or postgrad?
@samirroy1412 (4 years ago)
You can follow Horn & Johnson and Strang's books... they are relatively easy to understand.
@anubhav2198 (5 years ago)
At 28:00, what is the intuition behind the shape of the bowl and large/small eigenvalues? He made it sound like quite an obvious statement. Also at 36:50: given that S and Q^-1 S Q are similar implies they have the same eigenvalues; however, how do you show S and Q^-1 S Q are similar? OK, I figured out the 36:50 part. It is the spectral theorem, which he covered in the previous class: S = Q (Lambda) Q^-1, so Lambda = Q^-1 S Q. As Lambda is defined as the matrix of eigenvalues of S, this implies that S and Q^-1 S Q are similar. Please explain the part at 28:00. Thanks!
@ramman405 (5 years ago)
Regarding similarity, you don't need the spectral theorem, just the definition: A and B are similar if there exists an invertible matrix M such that A = M^(-1) * B * M. You can immediately verify that if A = Q^(-1) * S * Q, B = S, and M = Q, then the equation is satisfied, so A = Q^(-1) * S * Q and B = S are similar.

Regarding the bowl statement, it should be pretty clear when the eigenvectors are [1,0] and [0,1]. In that case the energy function is [x,y] * S * [x,y]^T = lambda1 * x^2 + lambda2 * y^2. So in the xz-plane it is just the quadratic function scaled by lambda1, and in the yz-plane it is the quadratic function scaled by lambda2 (and in general it is a linear combination of the two). If either eigenvalue is much larger than the other, the scalings will be disproportionate, and therefore we get a bowl with a steep slope in the direction of the large eigenvalue and a pretty flat slope in the direction of the small eigenvalue.

However, the whole point of diagonalization is that we can treat any diagonalizable matrix like the diagonal matrix of its eigenvalues, as long as we do the appropriate orthogonal change of basis (or, equivalently, work in the correct coordinate system). So the general bowl is an orthogonal transformation of the bowl described above, and is therefore itself a narrow-valley bowl. Concretely, if v1, v2 is an orthonormal basis of eigenvectors of S with associated eigenvalues lambda1, lambda2, then the energy function is v^T Q D Q^T v, where Q is the orthogonal matrix whose columns are v1, v2 and D is the diagonal matrix with entries lambda1, lambda2. We can write v as a unique linear combination of the eigenvectors (it is a basis, after all): v = x * v1 + y * v2. Then the energy function evaluates to v^T Q D Q^T v = v^T Q D [x,y]^T = v^T Q [lambda1 * x, lambda2 * y]^T = v^T (lambda1 * x * v1 + lambda2 * y * v2) = lambda1 * x^2 + lambda2 * y^2. So again it is a bowl which in the direction of v1 is a 1-dimensional quadratic scaled by lambda1, and in the direction of v2 a 1-dimensional quadratic scaled by lambda2. If lambda1 is huge, the slope in the direction v1 will be steep. Same as before, just from the point of view of the coordinate system given by the eigenvectors (v1, v2).
@jenkinsj9224 (3 years ago)
@@ramman405 thanks
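A short numerical check of the two points in the long answer above, using a made-up symmetric S (not the lecture's example): Q^T S Q is the diagonal matrix of eigenvalues, so S and Q^-1 S Q are similar, and the energy in eigenvector coordinates is lambda1 * x^2 + lambda2 * y^2.

    import numpy as np

    # Hypothetical symmetric matrix (not the lecture's example).
    S = np.array([[4.0, 1.0],
                  [1.0, 3.0]])

    lam, Q = np.linalg.eigh(S)        # S = Q diag(lam) Q^T, with Q orthogonal

    # Similarity: Q^{-1} S Q = Q^T S Q is the diagonal matrix of eigenvalues,
    # so S and Q^{-1} S Q have the same eigenvalues.
    print(np.round(Q.T @ S @ Q, 10))
    print(lam)

    # Energy in eigenvector coordinates v = x*v1 + y*v2:
    # v^T S v = lam1 * x^2 + lam2 * y^2.
    x, y = 2.0, -0.5
    v = x * Q[:, 0] + y * Q[:, 1]
    print(v @ S @ v, lam[0] * x**2 + lam[1] * y**2)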
@sriharsha580 (4 years ago)
At 32:00, the professor mentions "if the eigenvalues are far apart, that's when we have problems". What does he mean by that?
@nguyennguyenphuc5217 (4 years ago)
He means that when the difference between the eigenvalues, |lambda1 - lambda2|, is big, we have the case where "the bowl is long and thin" that he mentions right before that.
@gabrielmachado5708 (4 years ago)
@@nguyennguyenphuc5217, yes, it looks like it would make it easier to miss the point and bounce back and forth around the minimum
@debralegorreta1375 (4 years ago)
@@gabrielmachado5708 Right. If the bowl is narrow and your descent is slightly off, you'll start climbing again... so we take baby steps.
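A rough sketch of the "long, thin bowl" effect under the assumption that we run plain gradient descent with a fixed step size (the eigenvalues and step sizes below are made up): the safe step size is limited by the largest eigenvalue, so when the eigenvalues are far apart the small-eigenvalue direction crawls.

    import numpy as np

    def steps_to_converge(lams, lr, tol=1e-8, max_iter=100000):
        # Gradient descent on f(x) = 1/2 * sum(lam_i * x_i^2), starting from x0 = (1, 1).
        x = np.ones(2)
        for k in range(max_iter):
            x = x - lr * (lams * x)          # gradient of f is (lam_1*x_1, lam_2*x_2)
            if np.linalg.norm(x) < tol:      # the true minimum is at the origin
                return k + 1
        return max_iter

    # Round bowl: eigenvalues close together; converges in a few dozen steps.
    print(steps_to_converge(np.array([1.0, 2.0]), lr=0.5))

    # Long, thin bowl: eigenvalues far apart. The step size must stay below
    # 2/lambda_max to avoid blowing up, so the lambda = 1 direction decays
    # very slowly and we need roughly a thousand baby steps.
    print(steps_to_converge(np.array([1.0, 100.0]), lr=0.019))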
@Hank-ry9bz (9 months ago)
20:49 gradient descent
@lazywarrior (3 years ago)
Who's that eager student answering every question for everyone else in every class?
@JulieIsMe824 (4 years ago)
Sooo love Prof. Strang!!
@xc2530 (1 year ago)
10:00 energy, 19:00 convex
@xc2530 (1 year ago)
14:00 deep learning
@xc2530 (1 year ago)
24:00 gradient descent
@xc2530 (1 year ago)
27:00 eigenvalue tells the shape of the bowl
@xc2530 (1 year ago)
38:00 semi def pos
@jeeveshjuneja445 (5 years ago)
I think the shape of the bowl will change when we add (x^T)b at 17:00. Am I right???
@jeevanel44 (5 years ago)
It will shift or tilt the bowl in the x-axis direction. You can try the visualizer al-roomi.org/3DPlot/index.html
@hardikho (3 years ago)
@@jeevanel44 Hey, sorry to bother you a year later - what expression would I input to receive the bowl shown here?
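For what it's worth, here is a small check of the answer in this thread, assuming the quadratic with a linear term is written in the usual form f(x) = 1/2 x^T S x - b^T x (S and b below are made-up numbers): completing the square shows the Hessian, and hence the shape of the bowl, is still S; the linear term only moves the bottom of the bowl to x* = S^-1 b and shifts its height.

    import numpy as np

    # Hypothetical S and b, just to illustrate the effect of the linear term.
    S = np.array([[3.0, 2.0],
                  [2.0, 6.0]])
    b = np.array([4.0, 1.0])

    def f(x):
        # energy with a linear term: f(x) = 1/2 x^T S x - b^T x
        return 0.5 * x @ S @ x - b @ x

    x_star = np.linalg.solve(S, b)       # minimizer: where S x = b

    # Completing the square: f(x) = 1/2 (x - x*)^T S (x - x*) + f(x*),
    # so the Hessian (the "shape" of the bowl) is still S; the bowl is only
    # shifted so its bottom sits at x* instead of the origin.
    x = np.array([1.5, -2.0])
    lhs = f(x)
    rhs = 0.5 * (x - x_star) @ S @ (x - x_star) + f(x_star)
    print(lhs, rhs)                      # equal, up to rounding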
@billy-the-good-boy (5 years ago)
Who could possibly dislike this?
@allandogreat (5 years ago)
who can't understand that.
@AJ-et3vf (2 years ago)
Awesome video sir! Thank you!
@MathGenius6284 (3 years ago)
Love you, sir. Love from India.
@SphereofTime (6 months ago)
14:17
@heretoinfinity9300 (4 years ago)
Where was the energy equation mentioned in previous lectures?
@CM-Gram (4 years ago)
What is meant by energy when the X^T S X multiplication is carried out?
@spoopedoop3142 (4 years ago)
Are you asking why this quadratic form is called energy?
@CM-Gram (4 years ago)
@@spoopedoop3142 yes exactly
@possibly_hello_1270 (3 years ago)
@@CM-Gram Kinetic energy is (1/2)mv^2, where v is the velocity vector, and potential energy is (1/2)kx^2, where x is the position vector; both are quadratic forms in the vector, which is why x^T S x is called an energy.
@rayvinlai7268 (2 years ago)
Hopefully I can still love science at this age
@mehmetozer5675 (4 years ago)
I am here to leave a like to the legend.
@quanyingliu7168 (5 years ago)
At 41 min, why is the number of nonzero eigenvalues the same as rank(A)?
@fustilarian1 (5 years ago)
The eigenvectors with nonzero eigenvalues must be mapped to somewhere within the column space; in the remaining directions (the nullspace) everything collapses to 0. Bear in mind that the nullspace vectors are also solutions to Ax = \lambda x, where \lambda is 0.
@MoodyG (5 years ago)
The answer is at 41:17... notice how we can decompose the matrix into a weighted sum of rank-one pieces built from its eigenvectors (S = sum of lambda_i q_i q_i^T), the weights being the eigenvalues. Since rank(A) is by definition the dimension of the column space of A, it is the same as the number of nonzero terms in that decomposition, which is in turn the number of nonzero eigenvalues.
@quanyingliu7168 (5 years ago)
@@fustilarian1 Thanks for your explanation. That's very helpful.
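A small NumPy check of this thread's claim for a symmetric matrix (the rank-1 matrix below uses made-up numbers, in the spirit of the 41:20 example): a 3x3 matrix a a^T has one nonzero eigenvalue and two zero eigenvalues, matching its rank of 1.

    import numpy as np

    # A rank-1 symmetric 3x3 matrix built from a single column: S = a a^T
    # (hypothetical numbers, not the lecture's).
    a = np.array([1.0, 2.0, 3.0])
    S = np.outer(a, a)

    print(np.linalg.eigvalsh(S))        # two (numerically) zero eigenvalues, one positive
    print(np.linalg.matrix_rank(S))     # rank 1

    # For a symmetric matrix, S = sum_i lambda_i q_i q_i^T, so the column space is
    # spanned by the eigenvectors with nonzero eigenvalues: rank = #{lambda_i != 0},
    # and the eigenvectors with lambda = 0 span the nullspace.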
@imranq9241 (3 years ago)
These are great lectures! Are the autograder and programming assignments available somewhere?
@parthmalik1 (1 year ago)
Yes, when you get admitted to MIT you can take the class and do the assignments.
@user-or7ji5hv8y (3 years ago)
Very comprehensive. Thanks
@iwtwb8 (3 years ago)
Does he mean "a * a^T" near the end of the video?
@csl1384 (5 years ago)
Where can I find the online homework? I can't find it in OCW.
@mitocw (5 years ago)
The homework can be found in the Assignments section of the course on MIT OpenCourseWare at: ocw.mit.edu/18-065S18. Best wishes on your studies!
@unalcachofa (5 years ago)
@@mitocw Are the Julia-language online assignments mentioned also available somewhere? I see only problems from the textbook in the Assignments section on OCW.
@mitocw (5 years ago)
julialang.org/
@StuckNoLuck (5 years ago)
@@mitocw Where can we locate the programming assignments?
@nicko6419 (4 years ago)
@@mitocw I have a question about kzbin.info/www/bejne/rqSzXoZtrrCUiKM : where can I find this lab work about convolution? On MIT OpenCourseWare at ocw.mit.edu/courses/mathematics/18-065-matrix-methods-in-data-analysis-signal-processing-and-machine-learning-spring-2018/assignments/ I can find only the book assignments: ocw.mit.edu/courses/mathematics/18-065-matrix-methods-in-data-analysis-signal-processing-and-machine-learning-spring-2018/assignments/MIT18_065S18PSets.pdf#page=7 Could you help me? Thanks!
@lolololo8606 (4 years ago)
You are the best.
@ML_n00b (8 months ago)
when does he prove 3?
@mohamedlaminbangura3303 (4 years ago)
Great work
@vukasinspasojevic1521 (1 year ago)
Can we find homeworks/labs online?
@mitocw (1 year ago)
The course materials are available on MIT OpenCourseWare at: ocw.mit.edu/18-065S18. Best wishes on your studies!
@olsela3073 (4 years ago)
Well thanks prof.
@yeshuip (3 years ago)
Hello, could anyone explain to me the difference between the energy function and the S-norm taught by the professor in lecture 8?
@jan-heinzwiers581 (4 years ago)
Always a minus error... sqrt(68), not sqrt(60), so one eigenvalue is negative, yes... 🤣😊 But Matlab orders them opposite to my quadratic (abc) formula: (8 +/- sqrt(68))/2 for the eigenvalues 🙄
@jan-heinzwiers581 (4 years ago)
Octave: -0.12311, 8.12311, which agrees with the abc formula
@jan-heinzwiers581 (4 years ago)
Matlab too 😀
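For reference, a quick NumPy check, assuming the matrix under discussion is S = [[3, 4], [4, 5]] (trace 8, determinant -1, which is consistent with the roots (8 +/- sqrt(68))/2 quoted above; that choice of matrix is an assumption on my part):

    import numpy as np

    # Assumed example matrix with trace 8 and determinant -1.
    S = np.array([[3.0, 4.0],
                  [4.0, 5.0]])

    print(np.linalg.eigvalsh(S))    # about [-0.12311, 8.12311]; ascending order is just
                                    # the solver's convention, same roots either way
    print((8 - np.sqrt(68)) / 2, (8 + np.sqrt(68)) / 2)   # quadratic formula, same numbers

    # One negative eigenvalue, so this S is indefinite: x^T S x goes negative
    # along the eigenvector belonging to lambda = -0.123.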
@binnypatel7061 (4 years ago)
Thanks a lot !
@mingliu1940 (2 years ago)
Thanks professor.
@meenaammma (4 years ago)
Amazing
@suprithashetty9016 (3 years ago)
Voice ❤️
@hakimmecene2230 (4 years ago)
Hi, I need a course about matrix polynomials, please.
@anynamecanbeuse (4 years ago)
No, we don't have to use gradient descent in this case.
@suprithashetty9016 (3 years ago)
Math ❤️
@suprithashetty9016 (3 years ago)
Duster ❤️
@suprithashetty9016 (3 years ago)
Mic ❤️
@suprithashetty9016 (3 years ago)
Chalk ❤️
@suprithashetty9016 (3 years ago)
Accent ❤️
@o.y.930 (5 years ago)
I hope this professor doesn't get any sexual assault charges with that much winking, because his lectures are awesome.
@alexandrek2555 (2 years ago)
🤣
@freeeagle6074 (2 years ago)
I guess not, unless the air gets personified and files a case.
@shuyuliu4016 (3 years ago)
Watching him get older and older... ah, time.
@aubrey1008 (5 years ago)
I see that this professor does not take questions in class. Maybe if you email him.
@naterojas9272 (5 years ago)
Maybe no one raised their hand.
@cubegears (1 year ago)
Wow, he's old now...
@kevinchen1820 (2 years ago)
Checked in, 2022-05-17.
@allandogreat (3 years ago)
What is "Convext"? Like that... hahah