Lecture 8: Norms of Vectors and Matrices

162,308 views

MIT OpenCourseWare


Comments: 128
@veeecos
@veeecos 4 years ago
In times of Covid, I hope this makes young people realize why older people are so important. Long live Prof. Strang.
@justpaulo
@justpaulo 4 years ago
In times of Covid I hear in the background of the class someone sneezing and nose-blowing and it gives me the chills ...
@somadityasantra5572
@somadityasantra5572 4 years ago
He is the human equivalent of God
@godfreypigott
@godfreypigott 3 years ago
@@somadityasantra5572 Are you saying he doesn't exist?
@somadityasantra5572
@somadityasantra5572 3 years ago
@@godfreypigott You are assuming that I mean God does not exist. But how can you prove or disprove that?
@godfreypigott
@godfreypigott 3 years ago
@@somadityasantra5572 There is no "god" - that is a given. So by saying he is the "human equivalent of god" you are saying that he doesn't exist.
@EduardoGarcia-tv2fc
@EduardoGarcia-tv2fc 4 years ago
I'd say without any doubt that Professor Strang is the best algebra professor in the entire world. I'm sure he has helped tons of students all around the world understand the beauty of algebra
@lizijian7090
@lizijian7090 5 years ago
Long live our kindly, mild professor
@Pablo-y2k6b
@Pablo-y2k6b A month ago
31:20 The intuitive explanation of how the choice of norm affects the minimization problem was eye-opening to me
@andrewmeowmeow
@andrewmeowmeow 3 years ago
What a smart and humble person! Long live Prof. Strang!
@JulieIsMe824
@JulieIsMe824 3 years ago
Best linear algebra course ever! Best wishes for Prof. Strang's health during this horrible pandemic
@deeptendusantra670
@deeptendusantra670 4 years ago
After reading so many texts, finally some actual geometric interpretation of L1 and L2... he explains it so beautifully. I came here only to understand a definition, but his charisma made me watch the whole 50 minutes
@Forced2
@Forced2 4 years ago
Exactly the same for me
@denys22222
@denys22222 4 years ago
Haha, I am in the same situation.
@minoh1543
@minoh1543 3 years ago
The same for me too
@pondie5381
@pondie5381 2 years ago
EXACTLY the same!!!
@rogiervdw
@rogiervdw 4 years ago
Teaching norms with their R2 pictures is just brilliant. So much insight, even emerging while teaching (sparsity of the L1 optimum: it's on the axis!). An absolute joy to watch & learn from
@abdulghanialmasri5550
@abdulghanialmasri5550 2 years ago
This man does not stop giving, many thanks.
@bilyzhuang9242
@bilyzhuang9242 5 years ago
LONG LIVE PROFESSOR STRANG!!!!!
@KirtiDhruv
@KirtiDhruv 4 years ago
This lecture needs to reach more people asap. Total respect for the Professor!
@asifahmed1801
@asifahmed1801 3 years ago
After passing the linear algebra course, I was kind of disappointed that I'd have no reason to watch your lectures again. But for data analysis you came back, in HD resolution. So glad to see you, Professor.
@atulsrmcem
@atulsrmcem A year ago
I'm currently reading Calculus by Dr. Strang. One of the best books on the subject I have ever come across.
@abdowaraiet2169
@abdowaraiet2169 4 years ago
"You start from the origin and you blow up the norm until you get a point on the line that satisfies your constraint, and because you are blowing up the norm, when it hits first, that's the smallest blow-up possible, that's the min, that's the guy that minimizes" (31:23-31:42). That's 2-D optimization in a nutshell... clear and simple, thanks very much Professor Strang.
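The blow-up picture is easy to check numerically. A minimal numpy sketch (the constraint line 3x1 + 4x2 = 1 is a made-up example): the L2 winner is the projection of the origin onto the line, while the L1 winner lands on an axis, which is exactly the sparsity the lecture points out.

```python
import numpy as np

# Hypothetical constraint line: 3*x1 + 4*x2 = 1.
c = np.array([3.0, 4.0])
b = 1.0

# L2 minimizer: project the origin onto the line, x = b*c/||c||^2.
x_l2 = b * c / (c @ c)                      # (0.12, 0.16), L2 norm 0.2

# L1 minimizer: put all the weight on the coordinate with the largest |c_i|,
# so the winner sits on an axis (the sparse solution).
i = np.argmax(np.abs(c))
x_l1 = np.zeros(2)
x_l1[i] = b / c[i]                          # (0, 0.25), L1 norm 0.25

# Sanity check: sample feasible points; neither minimizer can be beaten.
t = np.linspace(-2.0, 2.0, 2001)
pts = np.stack([t, (b - c[0] * t) / c[1]], axis=1)  # all satisfy the constraint
assert np.linalg.norm(pts, axis=1).min() >= np.linalg.norm(x_l2) - 1e-9
assert np.abs(pts).sum(axis=1).min() >= np.abs(x_l1).sum() - 1e-9
assert x_l1[0] == 0.0                        # the L1 winner is sparse
```

The same "smallest ball that touches the line" reasoning applies in any dimension; only the shape of the ball changes with p.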
@ShwetankT
@ShwetankT 3 years ago
3D as well, no?
@georgesadler7830
@georgesadler7830 3 years ago
Dr. Strang, thank you for explaining and analyzing norms. I understood this lecture from start to finish.
@diysumit
@diysumit 3 years ago
Love this man, thanks MIT for looking out for us!
@arnaud5033
@arnaud5033 2 years ago
Probably this has been said before, so forgive me if I repeat someone else's words. I acknowledge that Professor Strang is a good pedagogue. I have learnt some math over the years, and I completely support the use of geometric visualization of properties, as it is a learning need. For me it is easy to see how to derive properties like the one he gave for the assignment on the Frobenius norm. I say this because I may not be the only one thinking it, and I wanted to tell those people that there is more to math here. Only recently did I understand the huge degree of humility and teaching wit it takes to pass one's knowledge along. It requires pretending, or honestly feeling, that you are no better than any of your students. For instance, as I could witness here, Prof. Strang shared with his students the latest cool research topics as if they were his colleagues, and he thanked them for contributing to the course by giving out some answers. That's what allows him to successfully challenge them to solve assignments like the Frobenius norm - SVD problem. All of it is summarized by Gilbert himself at the very end, at 48:12, when he explains his view of his relationship with the students ("We have work to do!", an honest use of the pronoun "we" by the lecturer). This 48-minute lecture honestly impressed me in this regard. Today I had the privilege of a double lecture: one in math (which could have been compressed to 15 minutes, since most proofs were skipped) and one in being a better passer-on of knowledge (which could be extended to 10+ years). Hats off!
@supersnowva6717
@supersnowva6717 A year ago
This lecture just brought my understanding of norms to a whole new level! Thank you so much Professor Strang!
@ashutoshpatidar3288
@ashutoshpatidar3288 A year ago
Feeling so emotional watching him teach at the age of 84😢
@hieuphamngoc6258
@hieuphamngoc6258 3 years ago
He is such a sweet man and a genius teacher at the same time
@sebah1991
@sebah1991 2 years ago
The reason I went from hating math to loving math (especially linear algebra) is Gilbert Strang. What an incredible teacher.
@naterojas9272
@naterojas9272 4 years ago
I highly recommend doing the Frobenius norm proof he mentions. It is elegant and uses some nice properties of linear algebra. If you took 18.06 (or watched the lectures), using the column & row picture of matrix multiplication really helps. I'll finalize my proof and post a link - hopefully I didn't make a mistake ;)
@naterojas9272
@naterojas9272 4 years ago
Maybe I shouldn't post a link... I wouldn't want anyone enrolled in 18.065 to copy it... Hmm......
@xingjieli3069
@xingjieli3069 3 years ago
Great point comparing the matrix nuclear norm with the vector L1 norm, which tends to find the sparsest winning vector. I guess the matrix nuclear norm may tend to find the 'least' weights during the optimization.
@wangxiang2044
@wangxiang2044 2 years ago
Frobenius norm squared = trace of (A transpose times A) = sum of eigenvalues of (A transpose times A) = sum of squares of singular values
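This chain of equalities is quick to verify numerically, for instance with numpy on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))

fro2 = np.linalg.norm(A, 'fro') ** 2          # Frobenius norm squared
tr = np.trace(A.T @ A)                        # trace of A^T A
eig_sum = np.linalg.eigvalsh(A.T @ A).sum()   # sum of eigenvalues of A^T A
sv_sum = (np.linalg.svd(A, compute_uv=False) ** 2).sum()  # sum of sigma_i^2

# All four quantities agree (up to floating-point rounding).
assert np.allclose([fro2, tr, eig_sum], sv_sum)
```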
@lololamize
@lololamize 5 years ago
Does anyone know something more concrete about the Srebro results? Have they been verified already? How general are they? 44:54
@karthikeyakethamakka
@karthikeyakethamakka 2 years ago
27:07 Minimizing something subject to a constraint: the Lagrangian formulation.
@karthikeyakethamakka
@karthikeyakethamakka 2 years ago
Constraining the largest singular value of a fully connected layer's weight matrix is what's called spectral normalization.
@jonahansen
@jonahansen 5 years ago
Dang - he's good!
@mgh256
@mgh256 5 years ago
Come on.... He is Gilbert Strang......
@shvprkatta
@shvprkatta 3 years ago
Prof. Strang... my respects, sir...
@HoangLe-rk2ke
@HoangLe-rk2ke 3 years ago
Protect him at all costs, MIT
@jxw7196
@jxw7196 3 years ago
This man is brilliant!
@nuclearrambo3167
@nuclearrambo3167 3 months ago
I would use Laplace's rule of succession in the coin-flipping problem
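For anyone curious, Laplace's rule of succession gives P(next success) = (s + 1)/(n + 2) after s successes in n trials, so it hedges away from certainty even after an unbroken run. A tiny illustration:

```python
from fractions import Fraction

def laplace_succession(successes, trials):
    """Laplace's rule of succession: P(next success) = (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# After 10 heads in 10 flips, the rule predicts 11/12, not probability 1:
assert laplace_succession(10, 10) == Fraction(11, 12)
# With no data at all it falls back to the uniform prior's 1/2:
assert laplace_succession(0, 0) == Fraction(1, 2)
```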
@fredsmith894
@fredsmith894 4 years ago
I love Linear Algebra!
@RohithBhattaram
@RohithBhattaram 4 years ago
Good video about norms. Thank you Prof.
@xc2530
@xc2530 A year ago
35:00 matrix norm
@Leeidealist
@Leeidealist 4 years ago
I love him so much. I don't believe in God, but I prayed for his health
@shaurovdas5842
@shaurovdas5842 4 years ago
At 32:41, when Professor Strang says the L2 norm of a matrix is 'sigma1', what does he mean by sigma1?
@oscarys
@oscarys 4 years ago
Hi Shaurov. He is referring to the largest singular value in the SVD of A
@filialernotina1060
@filialernotina1060 4 years ago
Can someone explain to me when we should use the Frobenius norm and when we should use the nuclear norm?
@zkhandwala
@zkhandwala 5 years ago
Compelling lecture (as always), but I'm unsettled about one thing: much of it is based on the fact that the first singular vector of A is the maximizing x in the definition of ||A||2. However, this fact just seems to be mentioned without proof or argument, and accordingly it doesn't feel as though the proof that ||A||2 = sigma1 is complete. Thoughts?
@gordongustafson2799
@gordongustafson2799 5 years ago
I agree. I can give a proof sketch:
1. A = U Σ V^t by the SVD.
2. To maximize ||Σy|| for a unit vector y, we would choose y to have all 0's except for a 1 in the position multiplying the largest value on the diagonal of Σ, which is sigma1. This effectively scales every component of y by sigma1 (all the other components are 0). Any other choice of y results in some component of y being scaled by a value less than sigma1, and no component scaled by more than sigma1.
3. U is orthonormal, so ||Uz|| = ||z||.
4. 1 and 3 give us ||Ax|| = ||U Σ V^t x|| = ||Σ V^t x||.
5. Assume ||x|| = 1. V^t is orthonormal, so ||V^t x|| = 1.
6. Thus, the maximizing value of x satisfies V^t x = y for the y we found in step 2.
7. This gives x = v1, and ||Ax|| = sigma1.
8. Since the L2 norm of A is the maximum value of ||Ax||/||x|| over all x's, the L2 norm of A is sigma1 (small leap here, but straightforward).
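The sketch above is also easy to sanity-check numerically with numpy: ||A v1|| attains sigma1, and no random unit direction beats it.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
U, s, Vt = np.linalg.svd(A)
sigma1, v1 = s[0], Vt[0]      # largest singular value, top right singular vector

# v1 attains the maximum blow-up ...
assert np.isclose(np.linalg.norm(A @ v1), sigma1)

# ... and no random direction does better than sigma1.
X = rng.standard_normal((3, 10000))
ratios = np.linalg.norm(A @ X, axis=0) / np.linalg.norm(X, axis=0)
assert ratios.max() <= sigma1 + 1e-12
```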
@wernerhartl2069
@wernerhartl2069 3 years ago
Since ||Ax||/||x|| = ||A(kx)||/||kx|| for any k ≠ 0, the maximum over all x equals the maximum over ||x|| = 1. So you can think of the unit circle ||x|| = 1 with ||Ax|| plotted along each direction; the image is an ellipse, and its point farthest from the origin gives max ||Ax||/||x||.
@TeejK
@TeejK 4 years ago
Holy crap this is a good lecture
@namimmsadeghi8475
@namimmsadeghi8475 3 years ago
Perfect. God bless you...
@HardLessonsOfLife
@HardLessonsOfLife 3 years ago
Why is L-half not a good norm? Why is p restricted to p >= 1 instead of just p > 0?
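One answer: for 0 < p < 1 the triangle inequality ||u + w|| ≤ ||u|| + ||w|| fails, and the "unit ball" is no longer convex — which is what the inward-curving star shape for p = 1/2 in the lecture shows. A short check with p = 1/2:

```python
def lp(v, p):
    """The l_p quantity sum(|x_i|^p)^(1/p) -- only a true norm for p >= 1."""
    return sum(abs(x) ** p for x in v) ** (1 / p)

u, w = [1, 0], [0, 1]
# For p = 1/2: ||u + w|| = (1 + 1)^2 = 4, but ||u|| + ||w|| = 1 + 1 = 2,
# so the triangle inequality fails.
assert lp([1, 1], 0.5) > lp(u, 0.5) + lp(w, 0.5)
```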
@zeke-hc3rc
@zeke-hc3rc 3 years ago
thank you
@karthikeyakethamakka
@karthikeyakethamakka 2 years ago
40:10 Frobenius norm
@obsiyoutube4828
@obsiyoutube4828 4 years ago
sure big professor
@nazarm6215
@nazarm6215 A year ago
So is a sigmoid a norm, or is a norm a sigmoid?
@raphaelambrosiuscosteau829
@raphaelambrosiuscosteau829 4 years ago
How do we actually see that sigma1 is the maximum blow-up factor and that v1 is the vector that gets blown up the most? I initially thought it would be the first eigenvector, and then it would make sense, but then I realised that sigma1 is not an eigenvalue after the professor said so, and I'm struggling a bit with imagining what's happening here
@justpaulo
@justpaulo 4 years ago
Recall the picture Prof. Strang drew when explaining the SVD. Here's a refresher (in slide #25): ocw.mit.edu/resources/res-18-010-a-2020-vision-of-linear-algebra-spring-2020/videos/MITRES_18_010S20_LA_Slides.pdf As Prof. Strang mentioned, U and V only perform a rotation (or possibly a reflection) of x, which does not change the norm of x. It is Sigma that is responsible for the stretching, and among the sigmas, sigma1 is the biggest; it is therefore the "maximum blow-up factor". I hope this helps.
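The norm-preservation claim is quick to verify: an orthogonal Q (built here via QR, purely for illustration) never changes a vector's length, so in A = UΣVᵀ all the stretching must come from Σ.

```python
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # Q is orthogonal
x = rng.standard_normal(4)

# Rotation/reflection only -- no stretching:
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
```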
@prajwalchoudhary4824
@prajwalchoudhary4824 3 years ago
Is the L0 norm not convex?
@ZeroManifold
@ZeroManifold A year ago
Yes, it's not convex, because the origin point is excluded
@sergiohuaman6084
@sergiohuaman6084 4 years ago
@44:00 By now Prof. Strang should know that nothing is ever taken out of the tape haha.
@eyobgizaw8362
@eyobgizaw8362 3 years ago
How would the shape look for p between 1 and 2?
@godfreypigott
@godfreypigott 3 years ago
Between the diamond and the circle.
@matthewpublikum3114
@matthewpublikum3114 2 years ago
Are the notes available somewhere?
@mitocw
@mitocw 2 years ago
There are no lecture notes available for this course; that is because the book (Strang, Gilbert. Linear Algebra and Learning from Data. Wellesley-Cambridge Press, 2018. ISBN: 9780692196380) is basically the lecture notes for the course. See the course on MIT OpenCourseWare for more info and materials at: ocw.mit.edu/18-065S18. Best wishes on your studies!
@charliehou9553
@charliehou9553 3 years ago
Long live!
@yupm1
@yupm1 5 years ago
Whenever I make money I will donate!
@jaimelima2420
@jaimelima2420 5 years ago
You will make a lot of money, man. On Wall Street, perhaps!
@freeeagle6074
@freeeagle6074 A year ago
When you earn 20 dollars, you can donate half a dollar. When you earn 20,000 dollars, you can donate 100 dollars. When you earn 2 billion, you'll leave here and forget about donating forever.
@phuongnamphan335
@phuongnamphan335 5 years ago
Why is sigma1 the largest singular value? Why does its position mean it is the largest? I don't understand
@BorrWick
@BorrWick 5 years ago
Yes, singular values are ordered by size
@quanyingliu7168
@quanyingliu7168 5 years ago
The phenomenon he mentioned in the first 5 minutes is a very interesting psychological question. Is it about the sequential effects of decision making? Anyone know the field? Please feel free to share some papers. Thank you.
@mdrasel-gh5yf
@mdrasel-gh5yf 4 years ago
Multi-armed bandit problem?
@SimmySimmy
@SimmySimmy 5 years ago
I've watched the first 6 videos without difficulty, but I'm confused by the definition and geometric meaning of the different norms. Could anyone please tell me which textbook I should read to help me understand? Thanks for your help!
@turdferguson3400
@turdferguson3400 5 years ago
Rewatch the videos, and maybe you'll get it! It has worked for me!
@darkwingduck42
@darkwingduck42 5 years ago
Linear Algebra and Learning from Data by Gilbert Strang!
@sanjaykrkk
@sanjaykrkk 4 years ago
Awesome!
@csl1384
@csl1384 5 years ago
Is there a link to the notes Prof. Strang keeps alluding to?
@NolanZewariligon
@NolanZewariligon 5 years ago
@Bob Mama There aren't any lecture notes on that link.
@vyvo1473
@vyvo1473 5 years ago
Why is ||A||2 = max ||Ax||2/||x||2? Can someone help me explain? :(
@sricharanbattu4502
@sricharanbattu4502 5 years ago
That is actually the definition of the matrix norm induced by the vector norm
@tanweermahdihasan4119
@tanweermahdihasan4119 5 years ago
@Rich Caputo Shouldn't there be an L2 norm constraint on x? Say, ||x|| = 1.
@myoung1445
@myoung1445 4 years ago
It's a definition rather than a result
@bird9
@bird9 2 years ago
Well... How old are these students?
@avareallymeow
@avareallymeow 4 years ago
This is some nice chalk
@oscarlu9919
@oscarlu9919 3 years ago
12:12 Prof: it's just exploded in importance. Me: I just burst out laughing :)
@saleemji9561
@saleemji9561 4 years ago
Old people are really our pride
@sumanchaudhary8757
@sumanchaudhary8757 5 years ago
Can somebody provide lecture notes for this course?
@mitocw
@mitocw 5 years ago
Course materials are available on MIT OpenCourseWare at: ocw.mit.edu/18-065S18. Best wishes on your studies!
@NolanZewariligon
@NolanZewariligon 5 years ago
@@mitocw There aren't any lecture notes on that link.
@sukuya
@sukuya 4 years ago
github.com/ws13685555932/18.065_lecture_notes has some summary notes up to lecture 14.
@NolanZewariligon
@NolanZewariligon 5 years ago
He forgot to finish PCA.
@СергейКумейко-й8г
@СергейКумейко-й8г 4 years ago
I can offer some available information. The lecture here connects the optimization problem with eigenvectors. But sorry, the lecture is in Russian))) kzbin.info/www/bejne/jWatfYaBmNqUh9E
@bpc1570
@bpc1570 4 years ago
How about referring English speakers to Andrew Ng's lecture in CS229, which is not in Russian
@justpaulo
@justpaulo 4 years ago
kzbin.info/www/bejne/m6qVgXhrrc5sY6M
@NolanZewariligon
@NolanZewariligon 4 years ago
@@justpaulo @BP C MVPs.
@MrFurano
@MrFurano 2 years ago
43:49 The "actual humans" statement is still on the tape 🤣
@YNRUIZ69
@YNRUIZ69 3 years ago
Cool vid
@intoeleven
@intoeleven 5 years ago
May I ask what the p means?
@ethereal1m
@ethereal1m 5 years ago
It's the order of the norm
@sschmachtel8963
@sschmachtel8963 5 years ago
superellipses
@thetedmang
@thetedmang 5 years ago
I didn't get the part where he minimized the L2 norm geometrically - why was it that particular point?
@yuchenzhao6411
@yuchenzhao6411 4 years ago
The L2 norm of a vector is its distance from the origin. Since the candidate vectors have to lie on the constraint line, the problem (find a vector that, subject to the constraint, minimizes the L2 norm) becomes "which point on that line has the smallest distance to the origin".
@bmh18172
@bmh18172 3 years ago
RIP. He will be missed.
@kevinkillingsworth4359
@kevinkillingsworth4359 2 years ago
He's not dead yet!
@paquitagallego6171
@paquitagallego6171 3 years ago
💖💖💖🙏
@fraenzo44
@fraenzo44 2 years ago
Ć
@kevinchen1820
@kevinchen1820 2 years ago
20220527 checking in