14. Orthogonal Vectors and Subspaces

530,079 views

MIT OpenCourseWare


Comments: 243
@ozcan3686 · 12 years ago
I don't know how, but whenever I need it, he repeats it. Thanks, Mr. Strang.
@9888565407 · 4 years ago
hey, that's true mate. so did you watch the whole series?
@kub1031 · 3 years ago
You probably had terrible teachers too, my comrade in fate.
@rosadovelascojosuedavid1894 · 3 years ago
@@9888565407 lol let's hope he has the same YouTube account he had 8 years ago
@priyankkharat7407 · 5 years ago
Thank you professor! I am amazed by the fact that professors from top institutes like MIT explain the mere basics without any expectation that we already know those topics. On the other hand, our university professors just avoid the whole thing by saying "it isn't part of the syllabus, you are expected to know this already". A huge salute and thanks to Professor Strang and the MIT team for publishing these videos free of cost.
@yogeshporwal7219 · 4 years ago
Yes, that one line is the standard response: "you are supposed to know this, learn it on your own." And here this great professor gives knowledge from the basic level to a very advanced level.
@nenadilic9486 · 3 years ago
25:56 "I'm a happier person now." I love his interludes. Thank you, professor, a lot.
@palashnandi4165 · 1 year ago
00:00:00 to 00:02:50 : Introduction
00:02:51 to 00:13:45 : What is orthogonality?
00:13:50 to 00:20:49 : What is orthogonality for subspaces?
00:20:50 to 00:26:00 : Why RS(A) ⊥ NS(A)?
00:26:01 to 00:34:00 : What is the orthogonal complement?
00:39:45 to end : Properties of A^T A
@LinhNguyen-st8vw · 8 years ago
Linear algebra, it's been almost 3 years but I think I've finally got you. *sob* Wish I could go back in time.
@dangernoodle2868 · 6 years ago
Man, I think a part of me died in the math class I took at the start of university. I feel like I'm resurrecting a part of my soul.
@jnnewman90 · 2 years ago
This man cooked up some vectors AND insulted MIT's floor integrity. Legend
@abdelaziz2788 · 3 years ago
That's a VERY VERY ESSENTIAL lecture for machine learning. I used to do the transpose trick but didn't know where it came from; now I may die in peace.
@ZehraAkbulut-my7fj · 5 months ago
I can't stop watching the spinning pens 15:05
@nateshtyagi · 3 years ago
Thanks Prof Strang, MIT!
@BigBen866 · 1 year ago
“Let me add the great name, ‘Pythegorious’!” I love it 😂😂😊
@ozzyfromspace · 4 years ago
Here I was, thinking I was gonna breeze through this lecture when BAM! I got hit with subtle logic 👨🏽‍🏫
@minagobran4165 · 1 year ago
At 25:17 he says (row1)^T * x = 0. This is wrong: row1 is 1xn and x is nx1, so row1 * x = 0. (row1)^T is nx1, and you can't multiply an nx1 vector by another nx1 vector.
@APaleDot · 1 year ago
Row vectors are written as v^T. It's just a convention to distinguish them from column vectors.
@tlo9966 · 9 years ago
He's absolutely amazing! I do wonder, however, whether a non-MIT-like class of students would be able to appreciate his teaching style.
@BenRush · 9 years ago
Teresa Longobardi You're right, those from Princeton stand no chance.
@faustind · 5 years ago
At 25:17, is it necessary to transpose the rows of A before multiplying with x (since the dimensions already match)?
@nenadilic9486 · 3 years ago
He didn't transpose the rows of A but the vectors named 'row-sub-i', which are, like any vector, always written in column form. In other words, it is a convention that if we want to write a vector corresponding to a row of a matrix A (the rows are not vectors by themselves), we write it as the proper vector that is the corresponding column of A transpose. This keeps the notation consistent. Any time we write a vector name (e.g. 'a', 'row', 'q', 'spectrum', 'x', 'v'...), we can always replace it with some matrix column. So if we want to multiply another vector or matrix by it from the left, we must first transpose it. And it is not a mere convention! It is an essential property of matrices: the columns are vectors, not the rows. If we could claim at our leisure that rows are also vectors, then the whole concept of a transposed matrix would be corrupted, even the concept of a matrix itself.
@jimziemer474 · 2 years ago
@@nenadilic9486 I'm not sure that's completely correct. I've seen him show rows as vectors at times, to compare what a row vector looks like versus the column vectors.
@bipashat4131 · 2 years ago
Why exactly is the null space of (A transpose)(A) equal to the null space of A?
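A short answer, sketched in LaTeX; this is the multiply-by-x-transpose trick that a reply further down attributes to the end of lecture 16:

```latex
% One inclusion is immediate: Ax = 0 implies A^T A x = 0.
% For the other, multiply A^T A x = 0 on the left by x^T:
A^{T}Ax = 0 \;\Rightarrow\; x^{T}A^{T}Ax = (Ax)^{T}(Ax) = \lVert Ax\rVert^{2} = 0
\;\Rightarrow\; Ax = 0,
% so the two nullspaces contain each other: N(A^T A) = N(A).
```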
@shubhamtalks9718 · 4 years ago
"well I shouldn't do this, but I will." How many times has he said this in this course?
@thedailyepochs338 · 4 years ago
I think it's a teaching strategy.
@HeartSeamstress · 13 years ago
When finding an orthogonal vector, are there specific steps to follow, or do we have to find a vector (through inspection) whose dot product with the other vector is zero?
@bassmaiasa1312 · 2 years ago
The normal vector to a plane (or hyperplane) is orthogonal to every vector in the plane. In the linear equation for a plane through the origin (ax + by + cz = 0), the normal vector is the coefficients (a, b, c). So given any vector (a, b, c), plug in any (x, y) and solve for z; then (x, y, z) is orthogonal to (a, b, c). The same applies to hyperplanes (A1x1 + A2x2 + ... + Anxn = 0): (A1, A2, ..., An) is the normal vector to the hyperplane in Rn.
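A minimal NumPy sketch of the recipe above; the normal n and the choice of x, y are made up for illustration:

```python
import numpy as np

# Hypothetical normal vector (a, b, c) of a plane through the origin.
n = np.array([3.0, 5.0, 2.0])

# Pick any x, y; solve ax + by + cz = 0 for z to land in the plane.
x, y = 1.0, 1.0
z = -(n[0] * x + n[1] * y) / n[2]
v = np.array([x, y, z])

print(np.dot(n, v))  # 0.0 (up to rounding): v is orthogonal to n
```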
@alankuo2023 · 1 year ago
32:10
@nikhilseth8308 · 6 years ago
Hit like if you just got to know where the dot product originated.
@parkmr45 · 12 years ago
Berkeley EECS shall kidnap him
@ocean1097 · 5 years ago
Great idea, EECS teaching quality needs improvement.
@dougiehwang9192 · 3 years ago
I really encourage you to buy Introduction to Linear Algebra, which Prof. Strang wrote. If I say these videos are rank r, then I can definitely say the book is the orthogonal complement of these videos that makes the perfect dimension of linear algebra.
@rosadovelascojosuedavid1894 · 3 years ago
Dude I read this comment and literally TODAY I recommended this book to a guy in a Facebook group and he already ordered it. 👌
@debarshimajumder9249 · 6 years ago
"the origin of the world is right here"
@georgeyu7987 · 4 years ago
"blackboard extends to infinity..." yeah, MIT does have an infinitely long blackboard...
@akselai · 3 years ago
* slides out the 45th layer of blackboard *
@steveecila · 11 years ago
Mr Strang makes me feel, for the first time in my life, that linear algebra is interesting!
@crazy6139 · 2 years ago
Hi
@adamlevin6328 · 8 years ago
That smile at the end, he knew he'd done a good job
@philippelaferriere2661 · 6 years ago
Along with that slick chalk trick
@samuelleung9930 · 4 years ago
awesome smile 😊
@quirkyquester · 4 years ago
He's happy :)
@nenadilic9486 · 3 years ago
To find this course on the web is tantamount to finding a massive gold treasure.
@corey333p · 7 years ago
The dot product of orthogonal vectors equals zero. All of a sudden it clicked when I remembered my conclusion about what a dot product actually is: "what amount of one vector goes in the direction of another." Basically, if vectors are orthogonal, then no amount of one goes in the direction of the other. Like how a tree casts no shadow at noon.
@robertorama8284 · 5 years ago
Thank you for this comment! That's a great conclusion.
@estebanl2354 · 4 years ago
it was very enlightening
@anilsarode6164 · 4 years ago
kzbin.info/www/bejne/gqqqfKyZjrllrJI to get the concept of the dot product.
@indiablackwell · 3 years ago
This helped, a lot
@kevinliang5568 · 3 years ago
Oh my, this is enlightening. I've never thought of it that way.
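A small NumPy sketch of the "shadow" idea from the comment above; the vectors are made up for illustration:

```python
import numpy as np

# How much of a points along b? (the scalar projection)
a = np.array([2.0, 1.0])
b = np.array([1.0, 0.0])

# Component of a in the direction of b; it is zero exactly when a ⊥ b.
component = np.dot(a, b) / np.linalg.norm(b)
print(component)  # 2.0

# For an orthogonal pair the "shadow" vanishes, like the tree at noon.
print(np.dot(np.array([1.0, 1.0]), np.array([1.0, -1.0])))  # 0.0
```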
@prakhyathbhandary9822 · 3 years ago
25:00 Why were transposes of the rows taken to form combinations of the row space? Are we even able to multiply the transpose of row 1 by x?
@rjaph842 · 2 years ago
I lost it there too, man. Idk if you've managed to figure out why.
@joaocosta3506 · 2 years ago
@@rjaph842 Wasn't the point to prove that the left null space and the column space are orthogonal too?
@iamjojo999 · 2 years ago
I think it's a little mistake that Prof. Strang didn't notice, probably because he had just taught the property that two orthogonal vectors satisfy (i.e. x^T y = 0). But that requires x and y to be column vectors. Here the row is not a column vector, so there is no need to transpose it before multiplying another column vector: simply row vector * x (which is a column vector) = 0 is fine. Nevertheless, I really like Prof. Strang's style. Thank you, Prof. Strang.
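A quick NumPy check of the point above, with a made-up rank-1 matrix: each row of A, taken as a 1×n slice, multiplies a nullspace vector x to give zero, with no extra transpose needed:

```python
import numpy as np

# Toy matrix (illustrative values): row space ⊥ nullspace.
A = np.array([[1, 2, 3],
              [2, 4, 6]])   # rank 1

x = np.array([3, 0, -1])    # A @ x = 0, so x is in the nullspace

# Each row times x is 0; in the lecture's notation the row is written
# as (row_i)^T because named vectors default to column form.
for row in A:
    print(row @ x)  # 0, 0
```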
@dmytrobondal4127 · 7 years ago
Gilbert Strang, you are truly an outstanding teacher! I am currently doing my Master's thesis in Finite Element Analysis and started watching these video lectures just for fun, since I already had some Linear Algebra back on my bachelor's. Your little sidenote at the end of a lecture about multiplying a system by A.transpose actually helped me crack a problem I'm dealing with right now. My finite element system had more equations than unknowns (because I'm fixing some internal degrees of freedom, not the nodes themselves) and I just couldn't figure out how to solve such system. I completely forgot about this trick of multiplying by a transpose!! THANK YOU SO MUCH!! My final system now has "good" dimensions and the stiffness matrix has a full rank!!!
@dmytrobondal4127 · 7 years ago
And his strict mathematical proof, I believe in 1971, that completeness is a necessary condition for FEM convergence is actually something I'm using right now! This guy played such a great role in FEM.
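A minimal NumPy sketch of the multiply-by-A-transpose trick described above, assuming a made-up overdetermined system rather than the commenter's actual FEM matrices:

```python
import numpy as np

# Hypothetical system with more equations than unknowns (3 x 2).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.1, 1.9, 3.2])

# Multiply both sides by A.T: A^T A is square (2 x 2) and invertible
# whenever A has full column rank, so the system becomes solvable.
AtA = A.T @ A
Atb = A.T @ b
x_hat = np.linalg.solve(AtA, Atb)
print(x_hat)  # least-squares solution of Ax ≈ b
```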
@muditsaxena3640 · 6 years ago
At 19:50 he said "When is a line through the origin orthogonal to the whole plane? Never", but I think if we take any line through the origin and a plane whose normal vector is parallel to that line, then they will be orthogonal. For example, the x-axis and the y-z plane. Help me out please.
@wasiimo · 6 years ago
By that he means any line passing through the origin that lies in the plane (i.e. a subspace of the plane) cannot be orthogonal to the whole plane. Of course, if the line is parallel to the normal of the plane, as you stated, then yes, it will be orthogonal to every vector in that plane.
@Basta11 · 5 years ago
He's talking about a line through the origin (a subspace) that is also in the plane.
@khanhdovanit · 3 years ago
Thanks for your question
@jeffabc1997 · 3 years ago
Thanks for the question and answer... it really helps!
@LisaLeungLazyReads · 8 years ago
I remember falling asleep in all my linear algebra classes at UWaterloo. Only now am I starting to like linear algebra!
@Neme112 · 7 years ago
"like linear algebra" Good one!
@lucasm4299 · 6 years ago
Lisa Leung Is that in Ontario, Canada?
@alpozen5347 · 4 years ago
same here; I study at EPFL, but Mr. Strang seems to have a natural gift for the subject
@vedantparanjape · 4 years ago
The second best part about watching these lectures is the comment section.
@condafarti · 5 years ago
okkkkk, cameras are rolling, this is lecture 14. What an intro line!
@elyepes19 · 3 years ago
This lecture is a tour de force: every sentence he says, including the ancillary comments, is so well crafted that everything clicks with ease. Least squares opens the gates to the twin fields of optimization and inverse theory, so every bit of insight he shares has deep implications in those fields (and many others). It's no exaggeration to say that the whole lecture is an aha! moment. Very illuminating, thank you Professor Strang.
@shoumikghosal · 4 years ago
"The one thing about Math is you're supposed to follow the rules."
@quirkyquester · 4 years ago
so much fun, so much love. Thank you Professor Strang and MIT for inspiring more people around the world. I truly enjoy learning linear algebra with Professor Strang :) we know he's done it!
@hassannazeer5969 · 4 years ago
This is a 90-degree chapter, Strang meant business from the word go!
@youmgmtube · 14 years ago
This series is phenomenal. Every lecture a gem. Thank you Mr Strang!
@betobayona7812 · 9 years ago
25:39 Isn't there an error with the symbol T (for transpose)? Why transpose the rows? Please explain!
@RetroAdvance · 9 years ago
Here we treat the rows as vectors, and it is just a convention to write a vector vertically. So you have to transpose it if you mean to write it down horizontally.
@Nakameguro97 · 9 years ago
RetroAdvance Thanks for this confirmation - I suspected this reason was much more likely than Prof. Strang making a mistake here (I have seen this convention in other textbooks). However, it's still confusing, as he sometimes draws an array of rows [row_1 row_2 ... row_m] vertically, implying that those are horizontal rows. Does this convention of writing all vectors as columns typically apply only to vectors written in text?
@RetroAdvance · 9 years ago
Ken Feng Yes, he often writes things down without strict mathematical rigor, for didactic reasons. [row1 row2 row3] is probably just Strang's intuitive formulation to get the point across, so don't take it too seriously. As long as it is clear what he means, it is ok. But a vector is different from its transposed version in terms of its matrix representation: v (an element of R^n) is an n x 1 matrix; v transposed is a 1 x n matrix.
@longgy1123 · 7 years ago
Beto ba Yona It is just a small mistake.
@thangibleword6854 · 5 years ago
@@RetroAdvance no, it is a mistake
@eccesignumrex4482 · 7 years ago
Gil uses his 'god' voice at ~8:00
@onatgirit4798 · 3 years ago
Omg, the orthogonality between the nullspace and the row space fits so well with the v = [1,2,3] example the prof gave in the previous lecture. I've seen much less entertaining TV series than 18.06; this course should be on Netflix lol
@AryanPatel-wb5tp · 3 months ago
"Let me cook up a vector that's orthogonal to it" - the goat professor strang 8:25
@mreengineering4935 · 3 years ago
Thank you very much, sir. I am watching the lectures and enjoying them. I have benefited from you because we do not have a teacher in Yemen because of the war, so you have become my teacher.
@e.a.5330 · 4 years ago
3:13 "going way back to the Greeks..." :) well, sir, I think the Greeks are still in the world.
@rabinadk1 · 4 years ago
Really a great lecture. He explains things so simply that they seem obvious. I never learned it as clearly as this in my college.
@professorfernandohartwig · 2 years ago
In many linear algebra courses that I have seen, the student is simply told about the various relationships between the fundamental subspaces. But in this course these ideas are convincingly yet accessibly presented. This is very important because it allows students to really understand such key ideas of linear algebra to the point where they become intuitive, instead of simply memorizing properties and formulas. Another great lecture by professor Strang!
@anilsarode6164 · 4 years ago
38:30 -Mr. Strang gives a hint about the Maximum Likelihood Estimate (MLE).
@rajprasanna34 · 9 years ago
It's extraordinary and amazing... No other lecturer is as good as Gilbert... Thank you, sir.
@山田林-f5b · 2 years ago
thanks a lot
@zoltanczesznak976 · 9 years ago
You are the king Mr Strang! Thanks
@MrSuperDagangsta · 9 years ago
Dr. Strang*
@zoltanczesznak976 · 9 years ago
excusez-moi! :)
@abdulazizabdu8362 · 8 years ago
But the lessons are great!!!! I'm enjoying every class. Thank you Gilbert Strang
@hits6620 · 3 years ago
At 25:00 Mr. Strang wrote (row 1) transpose times x equals 0, but I don't really understand. I was thinking about removing the "transpose" thing, and I was sooo confused.
@minagobran4165 · 1 year ago
me too, did u ever understand it?
@APaleDot · 1 year ago
@@minagobran4165 All row vectors are written with the transpose symbol to indicate they are row vectors and not column vectors.
@sachinranveer3452 · 3 years ago
Where are the next lectures for A^TA ???
@adarshagrawal8510 · 2 years ago
At 36:22 it is fascinating how he gets a bit into orbital mechanics, saying there are 6 unknowns (rightly known as the state vector, a 6 by 1 matrix with position (x, y, z) and velocities (xdot, ydot, zdot)).
@cornmasterliao7080 · 11 months ago
How am I understanding this so easily, compared to the nothing I understood at my school? This video isn't even in my language.
@MrCricriboy · 8 years ago
Was it a burp at 48:48?
@SteamPunkLV · 5 years ago
now we're asking the real questions
@winniejeng7402 · 5 years ago
Sorry, had indigestion right before class
@tongqiao699 · 11 years ago
The greatest lecturer I have met in my life.
@samuelyeo5450 · 4 years ago
Looking at the comment section, I feel pretty dumb for barely feeling any intuition in whatever Prof. Strang said.
@zelousfoxtrot3390 · 1 year ago
Does anyone else want to sneak into the classroom and put up an engraved plaque marking the origin of the world?
@Jack-gi6jp · 8 years ago
This shit is so good
@Maria-yx4se · 8 months ago
8:18 let him cook!!!!
@anthonyroberson5199 · 7 months ago
LOL if numbers are just is they can be whatever we want to be. Why don't you get a ninety seven percent
@mind-blowing_tumbleweed · 1 year ago
44:20 Why can't we solve it? We couldn't if there were more unknowns than equations.
@georgesadler7830 · 3 years ago
Dr. Strang, thank you for another classic lecture on orthogonal vectors and subspaces. Professor Strang, you are the grand POOBAH of linear algebra.
@ninadgandhi9040 · 2 years ago
Really enjoying this series! Thank you professor Strang and MIT. This is absolute service to humanity!
@miami360x · 12 years ago
I love his explanations. My linear algebra prof will just give us definitions, state theorems, and prove them, and if we're lucky we'll get an example, but never a solid explanation.
@mikesmusicmeddlings1366 · 3 years ago
I am learning so much more from these lectures than from any teacher I have ever had
@maltepersson3365 · 8 months ago
20:54
@facitenonvictimarum · 7 months ago
But is anyone on Earth better off because of this? No.
@kingplunger6033 · 1 month ago
Why transpose the rows for the dot products? Around 25:00
@wandileinvestment1413 · 2 years ago
Thank you prof, I'm writing an exam tomorrow morning
@abdulazizabdu8362 · 8 years ago
I've just wondered: are they MIT students? Why are they so silent??????
@rasraster · 4 years ago
They are responding to his questions, but the microphone doesn't pick it up.
@nenadilic9486 · 3 years ago
@@rasraster I hope that's the case xD
@BestBites · 2 years ago
The cameraman must have become a pro in linear algebra by absorbing such high-level teaching.
@Illuminatesfolly · 12 years ago
Damn, I wish I had Gilbert Strang as a professor... oh wait... I wasn't smart enough to get into MIT's engineering program....lol
@mijubo · 12 years ago
that's the problem with these programs! They have the better teachers, not the better pupils!
@woddenhorse · 2 years ago
"I shouldn't do this, but I will"
@carlostrebbau2516 · 3 months ago
I have never felt the Platonic injunction "to carve nature at its joints" more strongly than after watching this lecture.
@soulmansaul · 3 years ago
Recorded in 1999, still relevant in 2021. "Comes back 40 years later" - yep, still relevant.
@olouck2789 · 4 years ago
Econometrics is in the air ... :)))))))
@gokulakrishnancandassamy4995 · 3 years ago
Great summary at the end: A^T A is invertible if and only if A has full column rank! Just loved the lecture...
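A quick NumPy illustration of that summary, with made-up matrices: A^T A inherits the rank of A, so it is invertible exactly when the columns of A are independent:

```python
import numpy as np

# Full column rank: A^T A is invertible.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
print(np.linalg.matrix_rank(A.T @ A))  # 2 -> invertible (2x2, rank 2)

# Repeated column: rank drops, A^T A becomes singular.
B = np.array([[1.0, 1.0],
              [2.0, 2.0],
              [3.0, 3.0]])
print(np.linalg.matrix_rank(B.T @ B))  # 1 -> singular
```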
@hanzvonkonstanz · 13 years ago
I swear, these lectures with the Schaum's Outline of Linear Algebra can really help anyone learn the subject.
@matrixkernel · 12 years ago
Wouldn't the z-axis be orthogonal to the entire x-y plane? It kind of goes against one of his remarks in the 20-21 minute part of the video.
@nenadilic9486 · 3 years ago
There is no z axis in 2D space.
@Mike-mu3og · 5 years ago
19:50 Why can't a line through the origin be orthogonal to a plane? It looks natural to me that the z-axis is orthogonal to the xy-plane.
@KaiyuKohai · 5 years ago
it won't be orthogonal to every vector contained in the plane, so it isn't orthogonal to the plane
@vishwajeetdamor2302 · 3 years ago
I think he was only talking about a line in R2 space
@nenadilic9486 · 3 years ago
@@KaiyuKohai It is orthogonal to every vector in that plane, but the point is we are talking about 2D space: no vector in that space is orthogonal to a plane representing that space.
@maitreyverma2996 · 4 years ago
Now I know why the figure was drawn that way in the 10th lecture.
@bassmaiasa1312 · 2 years ago
At kzbin.info/www/bejne/j6u9hnyPh6h4aZo, Professor Strang applies FOIL to vectors. I'm not clear on why you can do that.
@APaleDot · 1 year ago
Matrix multiplication distributes over addition because it is linear. The dot product also distributes, for the same reason, if you prefer to look at it that way.
@naterojas9272 · 4 years ago
Too much suspense! I'm starting the next video now
@ankanghosal · 3 years ago
At 44:31, why did sir say solving 3 equations in 2 variables is not possible? We were taught in school that the number of variables should equal the number of equations in order to solve for the variables. Please explain.
@nenadilic9486 · 3 years ago
Let's suppose some natural law determines that an observable b is given by exactly 3 parts of a parameter a1 and exactly 5 parts of a parameter a2, and these two parameters don't depend on each other. You would have the equation a1*3 + a2*5 = b. If you could measure those two parameters with absolute precision, you would always end up with the correct b, and the other way around. Now suppose you want to discover that law, i.e. you don't know the factors 3 and 5. You measure the observable b in situations where a1 and a2 vary. In an ideal world (which doesn't exist) with no measurement error, you might get:

x1 + x2 = 8
2*x1 + x2 = 11
-x1 + x2 = 2
x1 - 0.2*x2 = 2

You have more than 2 equations and two unknowns, x1 and x2. Because all the equations are consistent with the same solution, you can take any two of them, throw away the rest, and the solution will be x1 = 3, x2 = 5. If you put the coefficients of x1 in the first column of a matrix A and those of x2 in the second column, you get the 4 x 2 matrix

A = [ 1 1 ; 2 1 ; -1 1 ; 1 -0.2 ]

You write x1 and x2 as a column vector x, and the right-hand sides as a vector b = (8, 11, 2, 2). Then the whole system is Ax = b. A has rank 2 (only two independent equations; the other two are linear combinations, so they provide no new information). You can keep any two rows as a 2 x 2 matrix A^ (reducing the observed b to a 2-dimensional b^), solve A^x = b^, and get x1 = 3, x2 = 5.

But the world is not ideal. You do the measurements and actually get:

1.01*x1 + 0.99*x2 = 8.04
2.01*x1 + 1.04*x2 = 10.99
-0.97*x1 + 1.00*x2 = 2.05
1.00*x1 - 0.22*x2 = 2.01

Close to the first set of equations, but not exactly the same. Now you can still solve any two of these equations, but you cannot solve the whole system, because no combination of x1 and x2 satisfies all four. We say that the observed vector b = (8.04, 10.99, 2.05, 2.01) is NOT in the column space of A. In the first example, which is extremely rare, b was, by chance, in A's column space. Instead, you can find the BEST approximate solution, which is exactly the topic of the next lecture :)
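A minimal NumPy sketch of that last step, reusing the noisy numbers from the comment above; np.linalg.lstsq computes the best approximate solution:

```python
import numpy as np

# The noisy 4-equation, 2-unknown system from the comment above.
A = np.array([[ 1.01,  0.99],
              [ 2.01,  1.04],
              [-0.97,  1.00],
              [ 1.00, -0.22]])
b = np.array([8.04, 10.99, 2.05, 2.01])

# No exact solution exists (b is not in C(A)); lstsq returns the
# least-squares solution, which lands near the "true" x = (3, 5).
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_hat)
```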
@niko97219 · 3 years ago
It is a pure joy watching these lectures. Many thanks to Prof. Gilbert Strang and MIT OCW.
@antoniosaidwebbesales2418 · 2 years ago
Amazing, thank you MIT and Prof. Gilbert Strang.
@trojanhorse8278 · 1 year ago
Hello, can someone please explain how the length formula is derived for a vector? Why is the squared length of a vector x equal to x^T x, where x^T is x transpose?
@thepruh1151 · 1 year ago
a^2 + b^2 = c^2, where a, b, c are the lengths of the sides of a right triangle. For vectors, the a^2 in the Pythagorean formula corresponds to ||a||^2, where ||a|| is the length of the vector a. Applying this logic to the rest of the theorem gives

||a||^2 + ||b||^2 = ||c||^2

But we know that c is just a + b, so we can replace c with the vector sum:

||a||^2 + ||b||^2 = ||a + b||^2

That covers the vector interpretation of the Pythagorean theorem. As to why ||x||^2 can be written as x^T x: we can write ||x||^2 = ||x|| ||x|| = x · x, and the dot product of any two vectors v and w is the same as the matrix multiplication of one of those vectors, transposed, with the other:

v · w = v^T w = w^T v

For example, take v = (1, 2, 3) and w = (3, 2, 1) as column vectors. Then

v · w = 1*3 + 2*2 + 3*1 = 10
v^T w = [1 2 3] [3; 2; 1] = 1*3 + 2*2 + 3*1 = 10
w^T v = [3 2 1] [1; 2; 3] = 3*1 + 2*2 + 1*3 = 10

In conclusion, the dot product of two vectors equals the matrix product of one of the vectors transposed with the other vector. Apply that logic to the dot product of a vector with itself, and you get ||x||^2 = x^T x.
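A one-line NumPy check of the identity above (values made up):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
print(x @ x)                   # 14.0 : x^T x
print(np.linalg.norm(x) ** 2)  # 14.0 : squared length, same number
```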
@baconpenguin94 · 11 months ago
HES THE GOAT. THE GOAAAAAAT
@VladimirDjokic · 9 years ago
He's absolutely amazing!!!
@nenadilic9486 · 3 years ago
I'm delighted too. His lectures are full of enlightening moments, at least for us laypeople.
@cylex96 · 7 months ago
8:19 let him cook
@eswyatt · 2 years ago
@ 21:55 Jerry Seinfeld
@たま-z6n9k · 4 years ago
I'm afraid this video doesn't provide any proof of the significant theorem on the blackboard, which states: N((A*)A) = N(A), i.e. the nullspace of (A*)A is identical to that of A, where A* denotes the transpose of A. (For convenience, let * substitute for the conventional superscript T meaning "transpose".) I'd prove it my way, at least for all real matrices A and real input vectors, like this:

Proof. Let A be an m-by-n real matrix. For any input vector x in R^n, Ax = 0 implies ((A*)A)x = (A*)(Ax) = 0, so if x is in the nullspace of A, it is also in the nullspace of (A*)A; that is, N((A*)A) ⊇ N(A) ...(1).

On the other hand, let a[1], ..., a[n] be the column vectors of A. In other words, let a*[1], ..., a*[n] be the row vectors of A*. For any input vector x in R^n and each i = 1, ..., n, the i-th component of the product ((A*)A)x = (A*)(Ax) is the dot product a*[i](Ax). Therefore ((A*)A)x = 0 implies a*[i](Ax) = 0 for each i, so Ax is perpendicular to every linear combination of the columns a[1], ..., a[n], i.e. to every vector in the column space C(A). However, as a linear combination of the columns, Ax is itself in C(A). So Ax is in C(A) and at the same time orthogonal to C(A), which is possible if and only if Ax = 0, because otherwise Ax would not be perpendicular to itself (see the note at the bottom for another reason). Thus, for any input vector x in R^n, ((A*)A)x = 0 implies Ax = 0, so if x is in the nullspace of (A*)A, it is also in the nullspace of A; that is, N((A*)A) ⊆ N(A) ...(2).

(1) and (2) combined imply that N((A*)A) = N(A). QED.

(Note): If C(A) = {0}, then trivially Ax in C(A) is the zero vector. Otherwise, by Gram-Schmidt orthonormalization, an orthonormal basis can be chosen for C(A) such that all the basis vectors are normalized and mutually orthogonal. Then Ax in C(A) may be expressed as a linear combination of those basis vectors, namely Ax = s[1]n[1] + ... + s[r]n[r], where n[1], ..., n[r] are the orthonormal basis vectors, s[1], ..., s[r] are real numbers, and r = rank(A). However, since Ax is orthogonal to C(A), it is perpendicular to each basis vector. Thus, for each k = 1, ..., r:

0 = n*[k](Ax) = n*[k](s[1]n[1] + ... + s[r]n[r]) = s[1](n*[k]n[1]) + ... + s[k](n*[k]n[k]) + ... + s[r](n*[k]n[r]) = s[k],

implying that Ax = 0. QED.

Thanks. Was I correct? Are there any simpler proofs?
@たま-z6n9k · 4 years ago
Oops, the second part of my proof can be presented without referring to any specific column vectors of A at all, which is simpler: since the row space C(A**) = C(A) of the transpose matrix A* is orthogonal to the nullspace N(A*), the assumption (A*)(Ax) = 0 implies that the vector Ax is orthogonal to C(A). However, Ax is at the same time in C(A), as a linear combination of the column vectors of A. This is possible if and only if Ax = 0, for the same aforementioned reasons. Hence, for every input vector x, ((A*)A)x = 0 implies Ax = 0; that is, N((A*)A) ⊆ N(A) ...(2). ■
@たま-z6n9k · 4 years ago
Oops, if only I had more patience. Actually, at the end of the video for lecture 16, Prof. Strang reveals a little trick which makes the second part of my proof way simpler: just multiply both sides of the equation ((A*)A)x = 0 from the left by x*. (See the video for what happens.) Thanks so much for the amazing trick, Professor.
@lanakhalil9065 · 2 years ago
x
@MB-oc7ky · 4 years ago
At 42:17, why does multiplying each side by A^T change x? In other words, why isn't x equal to x_hat?
@MrScattterbrain · 4 years ago
Here, Prof. Strang is pointing towards statistics. Assume there is some "true" value of x, set by nature's law. We collect measurements of A and b, and try to find that true x by solving the equation Ax = b. But our measurements are noisy, so we will not get the true x. What we can find is some approximation, an "estimate" of x, which is usually denoted x-hat. That's one example of what statistics is about: finding estimates for unknown parameters from the given observations. The true parameter usually remains unknown.
@BigBen866 · 1 year ago
The man puts his Soul into his lectures 🤔🙏🏼😀👍
@Seanog1231 · 6 years ago
Can't wait for the next one!
@pelemanov · 13 years ago
I also love this series of lectures, but in this lecture I find that he does not explain perpendicular subspaces enough. The example with the blackboard and the floor is just confusing (to me and my colleagues at least), because obviously they form a right angle and thus are perpendicular. And obviously they are both subspaces. Which leads me to believe that the definition is wrong, which I'm sure it isn't.
@alexdowad947 · 7 years ago
That's not what "perpendicular" means in relation to vector subspaces. When applied to subspaces in the context of linear algebra, "perpendicular" means that *every* vector in one subspace is perpendicular to every vector in the other.
@stanbaltazar · 5 years ago
The blackboard-floor example is in 3-dimensional space. As such, the dimensions of two orthogonal subspaces in R^3 must add up to 3. The blackboard and the floor are both 2-dimensional and hence could not be orthogonal.
@bassmaiasa1312 · 2 years ago
I don't think of perpendicular and orthogonal as synonyms. Perpendicular is the familiar spatial concept of right angles. Orthogonal means the dot product = 0; it's algebraic, not spatial. Once we get past R3, I don't try to think spatially (e.g., 90° angles in R4). When you say two planes are perpendicular, you're thinking of a plane as pointing in one direction, but planes don't actually point in any direction. A plane's normal vector points in a direction: saying two planes are perpendicular means their normal vectors are perpendicular. Otherwise, to say two planes are perpendicular is meaningless, the way I see it. I think Prof Strang only uses the term 'perpendicular' because we are already familiar with the Pythagorean theorem and right triangles, to show that the Pythagorean theorem also works with orthogonal vectors. I'd just as soon forget the word 'perpendicular' as applied to linear algebra. The key point is that the familiar spatial relationships in R2 and R3 continue to work as algebraic relationships beyond R3. E.g., I don't think of the 'length' of a vector in R4, because 'length' is a spatial concept; I think of the result of the Pythagorean equation.