16. Projection Matrices and Least Squares

476,947 views

MIT OpenCourseWare


Comments: 282
@jiawenchen4634
@jiawenchen4634 7 жыл бұрын
I am a graduate student majoring in electrical and computer engineering. Though most of us learned linear algebra as undergraduates, I would highly recommend this course to anyone interested in machine learning and signal processing. Thank you, Prof. Strang!
@cagrkaymak3063
@cagrkaymak3063 7 жыл бұрын
same for me, I am a grad student too but I learn a lot from these lectures
@andre.queiroz
@andre.queiroz 6 жыл бұрын
I'm finishing college and I'm studying this to get into Machine Learning.
@qiangli5860
@qiangli5860 6 жыл бұрын
I am also a graduate student majored in ECE. machine learning and numerical linear algebra.
@alexandresoaresdasilva1966
@alexandresoaresdasilva1966 5 жыл бұрын
Same here. The insights are invaluable - the lecture about projections finally clarified why a color calibration project I had during my undergrad didn’t always work well. These lectures should be used to teach linear algebra everywhere where there’s no really strong linear algebra classes, as image processing/ML tend to require way more command of linear algebra than what the common linear algebra college classes(talking about Texas Tech here) tend to offer.
@johncarloroberto2635
@johncarloroberto2635 3 жыл бұрын
Same I graduated with an ECE degree, but our curriculum didn't have linear algebra so I'm taking this in order to pursue a masters with a focus in machine learning. The intuition and guidance Prof Strang offers is really great!
@hurbig
@hurbig 4 жыл бұрын
With lectures this good I can watch this instead of Netflix. I have one professor who also holds phenomenal lectures, and lectures this good bring me as much joy as playing a good video game or watching a good show, or even more. It is interesting and entertaining and it blows my mind. Truly a fantastic job! Thank you, Professor Strang!
@MasterCivilEngineering
@MasterCivilEngineering 4 жыл бұрын
Really true dear
@starriet
@starriet 2 жыл бұрын
Guys, let's watch Prof. Strang instead of watching dumb TV shows!!! (well.. I'm dumb too though!)
@staa08
@staa08 Жыл бұрын
These kinda people are scary bruh!!! Hats off to you for having this kinda motivation
@trevandrea8909
@trevandrea8909 11 ай бұрын
@@starriet You're not dumb. The fact that you watch linear algebra videos shows you're interested in learning, and that you're smart :)
@ashutoshtiwari4398
@ashutoshtiwari4398 5 жыл бұрын
18:04 and 28:12 prove Prof. Strang is a man of his word.
@MasterCivilEngineering
@MasterCivilEngineering 4 жыл бұрын
He definitely is
@starriet
@starriet 2 жыл бұрын
That's what I wanted to say :)
@jaydenou6818
@jaydenou6818 11 ай бұрын
At 7:28, in case someone is wondering why e = (I - P)b: you can derive it as e = b - Pb, which just means e = b - p.
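That identity is easy to sanity-check numerically. Below is a small NumPy sketch of my own (not from the lecture), using the lecture's example matrix and the corrected right-hand side b = (1, 2, 2):

```python
import numpy as np

# Lecture's example: columns of A are (1,1,1) and (1,2,3).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Projection matrix onto the column space: P = A (A^T A)^{-1} A^T
P = A @ np.linalg.inv(A.T @ A) @ A.T

p = P @ b                          # projection of b onto C(A)
e_direct = b - p                   # e = b - p
e_formula = (np.eye(3) - P) @ b    # e = (I - P) b

assert np.allclose(e_direct, e_formula)
```

The last line confirms that subtracting the projection and applying (I - P) are the same operation.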
@ozzyfromspace
@ozzyfromspace 4 жыл бұрын
"please come out right". "oh yes!" "thank you god" 😂
@MasterCivilEngineering
@MasterCivilEngineering 4 жыл бұрын
Good
@super-creative-stuff1421
@super-creative-stuff1421 3 жыл бұрын
I think it's that sort of personality that my teachers in school were missing. They didn't care about math at all.
@bboysil
@bboysil 11 жыл бұрын
I love how you can reach the same answer also using a calculus approach.. and I LOVE the two pictures for the least squares regression. Beautiful stuff and amazing lecturer.
@LAnonHubbard
@LAnonHubbard 11 жыл бұрын
The bit at 4:54 where b is replaced by Ax, giving A(A^TA)^(-1)A^TAx, which then collapses to Ax, is fantastic. It's so high-level and yet so simple to see.
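You can watch that collapse happen numerically. Here is a tiny sketch of my own, where the random A is assumed to have independent columns (which a random Gaussian matrix has with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))           # 5x2 with independent columns
P = A @ np.linalg.inv(A.T @ A) @ A.T      # projection onto C(A)

x = rng.standard_normal(2)
v = A @ x                                  # v already lies in the column space

# A (A^T A)^{-1} A^T (A x) collapses to A x:
assert np.allclose(P @ v, v)
```

Projecting a vector that is already in the column space leaves it unchanged, which is exactly the algebraic cancellation in the comment.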
@Hotheaddragon
@Hotheaddragon 4 жыл бұрын
He does not just teach linear algebra. He teaches us to see math as an art form, shows us how to draw math, and how to admire its beauty.
@MasterCivilEngineering
@MasterCivilEngineering 4 жыл бұрын
Yes absolutely dear
@fearlesspeanut6868
@fearlesspeanut6868 7 жыл бұрын
This is probably one of the best courses I have ever taken. Prof. Gilbert Strang really rocks! I never thought linear algebra could be this beautiful.
@yryj1000
@yryj1000 8 жыл бұрын
"Oh god, come out right here!" ... "Thank you, God!" XD I was dying at those parts of the lecture. Not only does he teach skillfully, he's hilarious too.
@xiangzhang8508
@xiangzhang8508 8 жыл бұрын
Linear algebra is so much fun in Prof. Gilbert Strang's hands.
@pubgplayer1720
@pubgplayer1720 4 жыл бұрын
Yeah. It's so good. It's senior algebra!
@maheepchaudhary4200
@maheepchaudhary4200 5 жыл бұрын
Thank you, Prof. Strang, for changing the way I study maths. Rather than cramming, we now not only study matrices but can visualize them, because of you. I have never seen such a wonderful teacher. I hope I will meet you someday to show my gratitude. Highly recommended for everyone.
@nguyenbaodung1603
@nguyenbaodung1603 3 жыл бұрын
I love how he does not even need to explain it carefully; everyone is already interested in linear algebra taught by him. He really encourages students to brainstorm further insights based on the strong background he provides.
@Concon9343
@Concon9343 12 жыл бұрын
Really an inspiration to me, seeing how things add up and come together. A great, intense lecture that is still easy to follow and understand.
@bhaweshkumar7128
@bhaweshkumar7128 10 жыл бұрын
Mr Strang and MIT ,thank you so much.
@maoqiutong
@maoqiutong 6 жыл бұрын
Professor Strang you are the first person who makes me deeply understand linear regression from linear algebra's point of view.
@ankanghosal
@ankanghosal 3 жыл бұрын
His lectures are in an endless loop. He comes back to the statements that he has said earlier in the lecture.
@eklavyashukla8106
@eklavyashukla8106 3 жыл бұрын
WE NEED MORE TEACHERS LIKE YOU Mr.Strang!!!! Regards a fellow student you never met
@heropadaimazero
@heropadaimazero 15 жыл бұрын
The mistake starts at 11:26. The right-hand side is (1, 2, 2), not (1, 2, 3). But he later uses (1, 2, 2) for all the other calculations, so it's not a big deal.
@matthewwhitted-tx3xf
@matthewwhitted-tx3xf 7 ай бұрын
Not the first time he's made a mistake and not caught it.
@lee_land_y69
@lee_land_y69 6 жыл бұрын
11:34 made a typo in b as it should be [1, 2, 2] right?
@subhasishsarkar363
@subhasishsarkar363 5 жыл бұрын
yes
@jurgenkoopman9091
@jurgenkoopman9091 4 жыл бұрын
I think he makes these mistakes on purpose. Unbelievable there is almost no reaction from students.
@walterlevy5924
@walterlevy5924 4 жыл бұрын
Yes, and he got away with it because he remembered [1,2,2] instead of using the erroneous [1,2,3] for b that he put on the blackboard.
@ZhanyeLI
@ZhanyeLI 4 жыл бұрын
@@jurgenkoopman9091 Maybe he wanted to inspect that if students were careful in the class
@francescocostanzo8225
@francescocostanzo8225 4 жыл бұрын
I thought I was going crazy since this is not the top comment? Like wait if even the comment section doesn't see it then I have nothing
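For anyone re-deriving this example with the corrected b, a short NumPy sketch (mine, not MIT's) reproduces the lecture's answer C = 2/3, D = 1/2 for the best line C + Dt through (1,1), (2,2), (3,2):

```python
import numpy as np

# Fit C + Dt through (1,1), (2,2), (3,2): b = (1, 2, 2) is the corrected RHS.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Normal equations: A^T A x_hat = A^T b
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
C, D = x_hat

# Lecture's answer: C = 2/3, D = 1/2
assert np.allclose([C, D], [2/3, 1/2])
```

With the erroneous b = (1, 2, 3) the numbers would come out differently, which is why the blackboard slip matters only cosmetically: he computed with (1, 2, 2).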
@niclasn2695
@niclasn2695 Жыл бұрын
I'm watching this 27 years after I took a similar course at my university, and I haven't seen much linear algebra during my career. Now, watching it, everything seems much clearer to me. Strang is a really good lecturer.
@gigis1393
@gigis1393 2 жыл бұрын
I learned this about 30 years ago at Technion Haifa. If only I could have had such videos or such an instructor, life would have been a breeze.
@shivammalviya3737
@shivammalviya3737 4 жыл бұрын
This is the 3rd time I am taking these lectures in the last 2 years. Thank you, professor, these lectures are amazing.
@divdagr8
@divdagr8 4 жыл бұрын
Rediscovering Linear Algebra again with Professor Strang! So intuitive with him.
@johndon1986
@johndon1986 6 жыл бұрын
This video quietly provides the proof behind the regression assumption that features have to be uncorrelated/independent. I had only read that in theory, but now I can see exactly why. Thank you, Prof. Strang.
@Eizengoldt
@Eizengoldt Жыл бұрын
It's mad that we don't learn statistics this way in class.
@samuelphiri8541
@samuelphiri8541 5 жыл бұрын
29:14 “You have to admire the beauty of this answer”. 😂😂
@sahajthareja9415
@sahajthareja9415 4 жыл бұрын
I am econ Student and have studied Regression in my statistics class but never was able to understand how exactly was it connected to Nullspace and Column Space . Totally a new persepective , Thanks a lot for this series Prof Strang and MIT
@bilyzhuang9242
@bilyzhuang9242 5 жыл бұрын
I suggest every colleges' linear algebra course using this course video. Prof Strang makes linear algebra so intuitive, interesting and easy to understand. He plots the pictures and tells you what's going on in the vector space and then he will go back to the theory to make you have a deep comprehension
@dylanhoggyt
@dylanhoggyt 13 жыл бұрын
@pelemanov From what I understand, we are projecting b into the column space so that we can actually solve the system with a best estimate. The closest (least-squares) projection of b is p, which is zero only if b happens to be in the null space of A transpose, but there is no requirement that this is the case. In fact, if b happens to be in the column space, then the projection doesn't change b at all (i.e. Pb = b).
@jcf129er
@jcf129er 7 жыл бұрын
This is like pure intellectual chocolate. Gilbert Strang should've taken over Wonka's Chocolate Factory, not Charlie.
@francescocostanzo8225
@francescocostanzo8225 4 жыл бұрын
36:48 I legitimately was wondering this. Thank Professor Strang for answering my questions from beyond the screen
@georgesadler7830
@georgesadler7830 3 жыл бұрын
This is another great lecture by MIT Professor DR. Gilbert Strang. Least Squares put linear algebra into another world by itself.
@supersnowva6717
@supersnowva6717 Жыл бұрын
Beautiful lecture, just beautiful. Prof. Strang is drawing the beauty of Linear Algebra on a blackboard.
@nateshtyagi
@nateshtyagi 3 жыл бұрын
I can't stress my thanks enough. Thanks for everything Prof Strang, MIT.
@PyMoondra
@PyMoondra 5 жыл бұрын
This was a really good lecture. It was packed with insights. I love how everything is coming together.
@НикитаЮрченко-э3ь
@НикитаЮрченко-э3ь 5 жыл бұрын
The best course of linear algebra. Thanks Prof. Strang!
@MasterCivilEngineering
@MasterCivilEngineering 4 жыл бұрын
Absolutely dear
@super-creative-stuff1421
@super-creative-stuff1421 3 жыл бұрын
I had horrible experiences with learning math in elementary school and since then, I've had a negative predisposition to it. This playlist is reversing that predisposition.
@lucaswolf9445
@lucaswolf9445 8 жыл бұрын
He might correct it later, but as I'm watching it: I'm afraid Prof. Strang made a tiny error (pun not intended) at about 12:00. By my understanding, the right-hand side of the equation should be (1 2 2)^T, not (1 2 3)^T. Can anyone confirm this? Awesome lecture nevertheless.
@matthewlang8711
@matthewlang8711 8 жыл бұрын
+Lucas Wolf Yeah I noticed that too.
@OhCakes
@OhCakes 8 жыл бұрын
+Lucas Wolf You are correct. It is Ax=b where b in this case is (1,2,2)
@OhCakes
@OhCakes 8 жыл бұрын
+Lucas Wolf He does switch it back to (1,2,2) though so the output is still correct.
@rongrongmiao4638
@rongrongmiao4638 7 жыл бұрын
Interesting that no MIT student corrected him on that...
@bayesianlee6447
@bayesianlee6447 6 жыл бұрын
Me too. I was just curious why they never let him know so he could fix it.
@santiagotheone
@santiagotheone 2 жыл бұрын
34:29~35:13 It is really helpful for me that he explicitly pointed that out.
@karthik3685
@karthik3685 3 жыл бұрын
Nothing new in my comment. EE Grad - did all of this math in undergrad. Don't remember any of it, and never developed an intuition for it. This is so friggin' amazing!! Dr. Strang is a rockstar!
@phononify
@phononify 3 ай бұрын
Minute 15:19: a slight typo. The components of the third point are [3, 2], but b got switched to [1, 2, 3]. Such a good lecture for starting linear algebra... I watched it many years ago, and I come back whenever I need a refresher.
@thabsor
@thabsor 2 жыл бұрын
I finally understood OLS in econometrics. Now I can say I comprehend what I'm doing, instead of mindlessly applying formulas and rules. Thank you very much, Mr. Strang.
@mauriciobarda
@mauriciobarda 5 жыл бұрын
professor Strang you are excellent. Thanks a lot to you and MIT for these lectures and to all the supporters of OCW.
@jaydenou6818
@jaydenou6818 11 ай бұрын
At 22:50, in case someone is wondering why he tacks on the columns like that: it is just a convenient way to solve for \hat{x} in A^TA\hat{x} = A^Tb. You could also do it the regular way, first multiplying the matrices out to get a square system and then solving from there.
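To make the "tack on the columns" remark concrete, here is a small sketch of my own that forms the augmented matrix [A^T A | A^T b] for the lecture's example and runs one elimination step, matching the direct solve:

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# "Tack on" the right-hand side: augmented matrix [A^T A | A^T b]
M = np.column_stack([A.T @ A, A.T @ b])   # 2x3 augmented system

# One elimination step on the 2x2 system, then back-substitute
M[1] -= (M[1, 0] / M[0, 0]) * M[0]
D = M[1, 2] / M[1, 1]
C = (M[0, 2] - M[0, 1] * D) / M[0, 0]

assert np.allclose([C, D], np.linalg.solve(A.T @ A, A.T @ b))
```

Eliminating on the augmented matrix is exactly the blackboard shortcut; the assert confirms it agrees with solving A^TA x̂ = A^Tb directly.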
@inazuma3gou
@inazuma3gou 4 жыл бұрын
I had some trouble connecting the two pictures. What helped me understand the connection was to rewrite the original equation as Ax = b = p + e. That means we are breaking b down into its projection p onto the column space plus the error vector e. We find e1, e2, e3, the components of the error vector, by solving for C and D such that e is in the null space of A transpose.
@nguyenbaodung1603
@nguyenbaodung1603 3 жыл бұрын
With his lecture, I could sit in my desk all day and study. Math is so great.
@AngeloYeo
@AngeloYeo 8 жыл бұрын
It is crazy... really amazed... (tears drop)
@김창현-v4z
@김창현-v4z 3 жыл бұрын
Absolutely!
@jamesking2439
@jamesking2439 4 жыл бұрын
Gilbert is really good at teasing the next lecture. I have to force myself to stop watching so I can sleep.
@nguyenbaodung1603
@nguyenbaodung1603 3 жыл бұрын
Listening to your voice has been my priority these days.
@ManishKumar-xx7ny
@ManishKumar-xx7ny 3 жыл бұрын
This is a lecture of the best of the best quality. Thrilling
@bezaeshetu5454
@bezaeshetu5454 3 жыл бұрын
What an interesting lecture this is. You showed how maths is a pillar of statistics. During the lecture I kept thinking of the assumptions of least-squares estimation; they come from maths (like the independence assumption). Great work. God bless you, prof.
@dwijdixit7810
@dwijdixit7810 2 жыл бұрын
17:54 Considering Prof. Strang's art is teaching, he is undoubtedly one of the greatest in the world!
@hj-core
@hj-core Жыл бұрын
Professor Gilbert is going to make us a good space traveler😂Thanks to MIT OCW and Professor Gilbert for bringing such great lectures to us
@haileywei1884
@haileywei1884 4 жыл бұрын
I can't stop watching these lectures...
@LAnonHubbard
@LAnonHubbard 10 жыл бұрын
The proof of A^tA being invertible around 39:00 was great!
@ThePimp4dawin
@ThePimp4dawin 4 жыл бұрын
Beautiful lecture and amazing lecturer! Thank you Mr. Strang!
@MasterCivilEngineering
@MasterCivilEngineering 4 жыл бұрын
Thankyou
@HanZhang1994
@HanZhang1994 10 жыл бұрын
Around 20:00. Why are the errors "e" the vertical distance between the line and the "b" points? Why don't we look at the shortest distance between b and p which is when you use a distance between b and the line that is perpendicular to the line (not parallel with y axis)?
@shaochenghua
@shaochenghua 10 жыл бұрын
Because the line is not the column space. If you go back to the previous lecture, the goal was to project the vector b onto the column space of A in order to solve Ax = b. In this example, A is a 3x2 matrix and b is a 3x1 vector; b - A*x_hat is perpendicular to the column space of A, which is a 2D plane. The plane on the blackboard is not the column space, and the line is not the column space either, so the perpendicularity doesn't show up in that geometrical picture. If you look for the shortest distance, i.e. an error e perpendicular to the line, you are projecting each point onto a 1-dimensional line, i.e. treating the line as the column space of a totally different equation. So when Professor Strang calls it the "best line" close to all three points, it is not literally the closest line in the perpendicular sense. What you describe is another way of finding a "best line", minimizing the squared perpendicular distances to a 1-D line, but it cannot be written in the same Ax = b format the professor wrote on the board.
@MrNtuer
@MrNtuer 6 жыл бұрын
That would be Support Vector Machine
@varun2275
@varun2275 6 жыл бұрын
i think you're confusing it with principal component analysis
@user-fv8im
@user-fv8im 2 жыл бұрын
Excellent. I was amazed to see the one-to-one correspondence between solving Ax = b when b is not in the column space of A and least-squares fitting by a line when the points don't all lie exactly on a straight line.
@khurshedfitter5695
@khurshedfitter5695 6 жыл бұрын
At around 18:00, why did he take p1, p2 and p3 on the same vertical lines as b1, b2 and b3 respectively? Why not take them on the line perpendicular to the line we drew (I mean, why not project them properly)? Because the line we drew is not perpendicular to the vertical. Help please. Thanks :)
@ashutoshtiwari4398
@ashutoshtiwari4398 5 жыл бұрын
I don't think I understood your question but I will try to help anyway. - b1, b2, b3 are not on any straight line. -p1, p2, p3 are on a line P = C + Dt . - b1 is e1 away from p1 (along the vertical axis or C axis ). Similarly for b2 and b3.
@ShoookeN
@ShoookeN 9 жыл бұрын
32:56 Thank you Prof. Strang.
@muhammedyusufsener1622
@muhammedyusufsener1622 3 жыл бұрын
Excellent lecture. Professor Strang is a legend.
@LAnonHubbard
@LAnonHubbard 12 жыл бұрын
This is mind blowing. Great lecture.
@pelemanov
@pelemanov 13 жыл бұрын
@j4ckjs What I mean, is that around minute 14:00 he says that the error is vertical instead of orthogonal to the line. I thought we were trying to minimize the error by orthogonal projection. I'm probably mixing things up, but I don't see it.
@yanshudu9370
@yanshudu9370 2 жыл бұрын
Notes: 1. For Ax = b, we can draw a picture in which the projection vector p plus the error e equals b. To solve linear-regression problems, first solve A'Ax̂ = A'b; then p = Ax̂ and e = b - p. 2. If A has independent columns, then A'A is invertible.
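Those notes translate almost line for line into code. A minimal sketch, where the `least_squares_fit` helper is my own naming, not anything from the lecture:

```python
import numpy as np

def least_squares_fit(A, b):
    """Solve A^T A x_hat = A^T b; return x_hat, projection p, and error e."""
    x_hat = np.linalg.solve(A.T @ A, A.T @ b)
    p = A @ x_hat      # projection of b onto C(A)
    e = b - p          # error component, perpendicular to C(A)
    return x_hat, p, e

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])
x_hat, p, e = least_squares_fit(A, b)

assert np.allclose(p + e, b)     # b splits into projection + error
assert np.allclose(A.T @ e, 0)   # e lies in N(A^T), i.e. e is perpendicular to C(A)
```

The two asserts are exactly points 1 and the orthogonality that makes point 2 useful.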
@gladragsakshay
@gladragsakshay 13 жыл бұрын
Prof Strang spoke so much about errors and he did make one ! :P
@richarddow8967
@richarddow8967 2 жыл бұрын
I loved his sincerity when he thanks god @32:58
@watchmanling
@watchmanling 11 ай бұрын
I just don’t understand why other mit recordings can’t follow this masterpiece standard
@vigneshStack
@vigneshStack 11 ай бұрын
Bro, if you don't mind, can you explain how at 29:00 the 5/3 value comes from -2/6?
@richarddow8967
@richarddow8967 2 жыл бұрын
Glad I decided to go all the way back to basic LA, such a great and thorough review
@Gabriel-pd8sv
@Gabriel-pd8sv 3 жыл бұрын
Before this lesson, I liked linear algebra. Now I LOVE IT!!
@mind-blowing_tumbleweed
@mind-blowing_tumbleweed Жыл бұрын
On 44:00 why A having independent columns means that all columns of A are independent? Why A can't have, say, 2 independent columns and 1 dependent column?
@aymericzambo345
@aymericzambo345 7 жыл бұрын
@43:00 I laughed like crazy. I just felt like he was saying: "Please God, let these kids understand the one most basic thing of all this linear algebra. This is about to be on tape!" Mr. Strang is a great professor; I wish I had him as a teacher. Learning a lot from this linear algebra online course. Thank you MIT for creating OpenCourseWare.
@baconpenguin94
@baconpenguin94 Жыл бұрын
HES THE GOAT. THE GOAAAAAT
@georgemendez5245
@georgemendez5245 6 жыл бұрын
the man the myth the legend Gilbert Strang
@Hobbit183
@Hobbit183 7 жыл бұрын
An alternative way of deriving the equation: if Ax = b has no solution, let c be the best solution, so that Ac = proj(b on C(A)). Then Ac - b is orthogonal to Ac, so the dot product of Ac - b and Ac is zero. In matrix form, (Ac)^T(Ac - b) = 0, i.e. c^T A^T(Ac - b) = 0, i.e. c^T(A^TAc - A^Tb) = 0. Since we don't want c to be zero, c^T can't be zero, which means A^TAc - A^Tb = 0, or A^TAc = A^Tb.
@varun2275
@varun2275 6 жыл бұрын
why the restriction that c can't be 0 though? c can very well be 0 and the projection might have a 0 length?
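The reply above is right to push back: c^T v = 0 for one nonzero c does not force v = 0. A way to tighten the derivation (my restatement, consistent with the lecture) is to demand orthogonality of the error to every vector in the column space, not just to Ac itself:

```latex
e = b - A\hat{x} \perp C(A)
\iff (Ay)^{T}\,(b - A\hat{x}) = 0 \quad \text{for all } y
\iff y^{T}\left(A^{T}b - A^{T}A\hat{x}\right) = 0 \quad \text{for all } y
\iff A^{T}A\hat{x} = A^{T}b .
```

Because the middle identity must hold for every y, the vector in parentheses is forced to be zero, which is exactly the normal equation.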
@pelemanov
@pelemanov 13 жыл бұрын
Great lecture! But aren't we supposed to make an orthogonal projection? Instead he did a projection parallel to the Y-axis because he calculates p1, p2 and p3 by taking t-values 1, 2 and 3. You can also see it on his drawing. Why does he take this projection instead of the orthogonal one? And how can e turn out to be orthogonal to p anyway?
@kz_cbble9670
@kz_cbble9670 3 жыл бұрын
Well done Mr Gilbert, congratulations
@antoniolewis1016
@antoniolewis1016 8 жыл бұрын
"Make Bases Orthonormal Again!"
@1454LOU
@1454LOU 5 жыл бұрын
I dare any trumpster to get that.
@Hotheaddragon
@Hotheaddragon 4 жыл бұрын
Lol
@generalissimoblanc7395
@generalissimoblanc7395 4 жыл бұрын
Good one!
@melissaallinp.e.5209
@melissaallinp.e.5209 4 жыл бұрын
48:04..."Thank you, God". I love this man.
@MasterCivilEngineering
@MasterCivilEngineering 4 жыл бұрын
Thank you God
@dharmaturtle
@dharmaturtle 8 жыл бұрын
For the life of me I couldn't "get" the final proof, but now I think I get it. If (A^T A) has linearly independent columns, then A has linearly independent columns. The first is (A^T A)x = 0, and the latter is Ax=0. Remember that linear independence means that the only combination that goes to zero is the zero vector. If C(A) is linearly independent, then Ax=0 means x=0.
@SalomonZevi
@SalomonZevi 8 жыл бұрын
First he assumes that if A has n independent columns then the row space has rank n, and spans R^n. Therefore, if Ax=0 must be that x=0. Then he argues that if A'Ax=0 must be that Ax=0 which means that x=0.
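For reference, here is the invertibility argument from the lecture in compact form (my transcription of the step at around 43:00):

```latex
A^{T}Ax = 0
\;\Longrightarrow\; x^{T}A^{T}Ax = \|Ax\|^{2} = 0
\;\Longrightarrow\; Ax = 0
\;\Longrightarrow\; x = 0 \quad (\text{columns of } A \text{ independent}),
```

so N(A^{T}A) = \{0\}, and since A^{T}A is square, it is invertible.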
@luigiquitadamo1990
@luigiquitadamo1990 5 жыл бұрын
Ever great lectures, thanks professor Strang and MIT.Merci la vie!
@MasterCivilEngineering
@MasterCivilEngineering 4 жыл бұрын
Thanks and bless you
@tianqilong8366
@tianqilong8366 Жыл бұрын
typo at 11:40, the b vector should be (1, 2, 2) instead of (1, 2, 3)
@poiuwnwang7109
@poiuwnwang7109 4 жыл бұрын
I always hated linear algebra, but Prof. Strang makes it fun.
@starriet
@starriet 2 жыл бұрын
Learning math is so... delicious!! :D I'm not even a math genius. Thanks Prof. Strang, MIT, and KZbin.
@pelemanov
@pelemanov 13 жыл бұрын
@dylanhoggyt I get that, but that doesn't answer my question. What I mean, is that around minute 14:00 he says that the error is vertical instead of orthogonal to the line. I thought we were trying to minimize the error by orthogonal projection. I'm probably mixing things up, but I don't see it.
@michaellewis7861
@michaellewis7861 3 жыл бұрын
Thm. A vector is orthogonal to itself if and only if it is the zero vector. Proof. Backwards: suppose a vector x that is orthogonal to itself has a nonzero entry x_i. Then x^T x = sum of x_i * x_i >= x_i^2 > 0 (the same holds for negative x_i), contradicting x^T x = 0. Forward: trivial. Applied here: A^T A x = 0 gives x^T A^T A x = (Ax)^T(Ax) = 0, so Ax = 0. If A has independent columns, i.e. is of full column rank, then the only vector in N(A) is 0. ∎
@achillesarmstrong9639
@achillesarmstrong9639 6 жыл бұрын
Is there a mistake at 14:40? Shouldn't it be the perpendicular distance to the line, not the vertical distance?
@ashutoshtiwari4398
@ashutoshtiwari4398 5 жыл бұрын
It's the vertical distance. Consider the C vs t axes. We cannot plot a straight line through the points b1, b2, b3 at the corresponding t1, t2, t3, so we plot an approximate straight line P with points p1, p2, p3 at t1, t2, t3. The approximate line gives p1 instead of b1, so the error is p1 - b1, which lies along the vertical direction.
@thanhduynguyen2225
@thanhduynguyen2225 8 ай бұрын
In the case of regression, why is the projection of a point on a regressed line connected by a vertical line (which is parallel to the y-axis) rather than a perpendicular e-line to the regressed line as shown in the chapter of projection?
@elborrador333
@elborrador333 8 жыл бұрын
I thought that the projection of the points onto the line ("error") was meant to be perpendicular to the line, and not a projection in the y-axis.
@loukas371
@loukas371 8 жыл бұрын
I was wondering about the same thing :/
@lamps453
@lamps453 8 жыл бұрын
I don't think so. This projection is not literally projecting the points onto the line, and the error is calculated as Ax_hat-b, hence has to be along y-axis.
@SalomonZevi
@SalomonZevi 8 жыл бұрын
The projection, is of the vector b (all 3 y values) onto a vector in the column space. Thus, reassigning p to be the 3 y values. The x values are not changing since they define the matrix. The error is indeed perpendicular to the column space. Here Strang is showing how linear algebra is translated in terms of linear regression.
@vipulpatki
@vipulpatki 6 жыл бұрын
Look at it this way: For each x, there is a computed value, and the observed value. Then the difference between the two ordinates is the error. The "line" comes later, after we have minimized the total error. Hope I am making sense.
@lucasm4299
@lucasm4299 6 жыл бұрын
MIT!! MIT!! 🇺🇸🏆👌🏼
@MultiRNR
@MultiRNR 10 ай бұрын
Actually it is still hard to visualize why the projection leads to least square, I think those Bs are observed data points and form a hyper-plane, and you want to map their target values to that hyper plane. The closest (in terms of distance) is a projection to it
@anangelsdiaries
@anangelsdiaries 25 күн бұрын
12:08 but the column space doesn't contain this vector had me like uh? Until I realized there was a typo on the last row of b. Should be 2.
@kpfxzzsy
@kpfxzzsy 3 жыл бұрын
Thank you professor, Thank you MIT.
@User-cv4ee
@User-cv4ee 6 жыл бұрын
It does not seem like we are going for a perpendicular projection of the data points on the line. Rather we are taking error (e) in vertical directions. Is that still correct?
@achillesarmstrong9639
@achillesarmstrong9639 6 жыл бұрын
same doubt
@gulshanjangid3470
@gulshanjangid3470 6 жыл бұрын
I think he is minimizing error in Y direction (error in Y) = |("actual Y value" - "Y value given by our line")| But I guess to minimize overall error we should go for perpendicular projections
@User-cv4ee
@User-cv4ee 6 жыл бұрын
Thank you for the replies... I thought about it and realized that minimizing in the y direction is kind of the same as minimizing the perpendicular distance, since they are related by the Pythagorean theorem and restricted to a line with the same slope.
@prensandre
@prensandre 3 жыл бұрын
You're such a king, man.
@afianzamientofermath1207
@afianzamientofermath1207 4 жыл бұрын
Thank you so much, MIT
@amyzeng7130
@amyzeng7130 3 жыл бұрын
Could someone give me a clue why use A(T)AX(hat)=A(T)b to solve [C,D] at 21:05 ? To my understanding, AX(hat) cannot be b if we fit a line which points are P1, P2, P3 instead of b1, b2, b3 or [1 2 2]_t?
@amyzeng7130
@amyzeng7130 3 жыл бұрын
I got it. I messed up the two pictures. This solves the invertible square system A(T)Ax = J, where J = A(T)b and x = [C, D].
@seanyboyblu
@seanyboyblu 2 жыл бұрын
One thing that I cannot figure out, if somebody can help me: the projection is supposed to be A times that x-hat, but when we solve, he does A^TAx̂ = A^Tb. That gives x̂, not p; p would be A times all of that, wouldn't it?
@anonym498
@anonym498 2 жыл бұрын
I wonder if multivariable calculus is a prerequisite for this course. Could you tell me please
@weizhili447
@weizhili447 7 жыл бұрын
Why wouldn't the error be the perpendicular distance to the line? in the video, the prof said it to be the vertical distance to the line.
@etsevnevo1315
@etsevnevo1315 6 жыл бұрын
Hello, I think this is because the regression line is drawn in the 2D space which is the row space of A. The minimization/projection is carried out in the column space of A.
@khurshedfitter5695
@khurshedfitter5695 6 жыл бұрын
Same query.
@michaelj7677
@michaelj7677 5 жыл бұрын
"Make the error vector perpendicular to the Column Space" means "minimize the euclidean Norm of e". Because the shortest vector from the Column Space of A to b is the perpendicular vector. Minimizing the euclidean Norm means: "minimize the Square root of the sum of squares (of the components of e with regard to the m-dimensional space)". This happens to be the same thing as minimizing the sum of squares of each of the m residuals (=the vertical distance to the line)
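Both views give the same line, which is easy to confirm numerically. A small sketch of my own checks the perpendicularity in R^3 and compares the normal-equation answer with np.polyfit, which minimizes exactly the sum of squared vertical residuals:

```python
import numpy as np

t = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 2.0])
A = np.column_stack([np.ones_like(t), t])   # columns: [1, t]

# Perpendicularity of e to the column space in R^3 ...
x_hat = np.linalg.solve(A.T @ A, A.T @ b)   # x_hat = [C, D]
e = b - A @ x_hat
assert np.allclose(A.T @ e, 0)

# ... gives the same line as minimizing the squared *vertical*
# residuals in the (t, b) picture:
D_fit, C_fit = np.polyfit(t, b, 1)          # polyfit returns [slope, intercept]
assert np.allclose([C_fit, D_fit], x_hat)
```

The components of e are the vertical residuals, so making e perpendicular to C(A) and minimizing the sum of squared vertical distances are the same computation.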
@coffle1
@coffle1 9 жыл бұрын
Why I-P at 8:22?
@coffle1
@coffle1 9 жыл бұрын
Ahh, that makes sense. Thanks for clearing that up!
@doge-coin
@doge-coin 6 жыл бұрын
Why is that? Are there some comments being deleted? I can't see the answers. :(
@sourabhk2373
@sourabhk2373 6 жыл бұрын
lol same here.
@jazonjiao638
@jazonjiao638 6 жыл бұрын
Phoebe Wang : e = b - p = Ib - Pb = (I - P) b where I is identity matrix
@karsunbadminton7180
@karsunbadminton7180 4 жыл бұрын
respect for Prof.Strang