I am a graduate student majoring in electrical and computer engineering. Though most of us learned linear algebra as undergraduates, I would highly recommend this course to anyone interested in machine learning and signal processing. Thank you, Prof. Strang!
@cagrkaymak30637 жыл бұрын
Same for me. I am a grad student too, but I still learn a lot from these lectures.
@andre.queiroz6 жыл бұрын
I'm finishing college and I'm studying this to get into Machine Learning.
@qiangli58606 жыл бұрын
I am also a graduate student majoring in ECE, working on machine learning and numerical linear algebra.
@alexandresoaresdasilva19665 жыл бұрын
Same here. The insights are invaluable: the lecture about projections finally clarified why a color calibration project I had during my undergrad didn't always work well. These lectures should be used to teach linear algebra anywhere there isn't a really strong linear algebra class, since image processing/ML tend to require far more command of linear algebra than the common college classes (talking about Texas Tech here) tend to offer.
@johncarloroberto26353 жыл бұрын
Same. I graduated with an ECE degree, but our curriculum didn't have linear algebra, so I'm taking this in order to pursue a master's with a focus in machine learning. The intuition and guidance Prof. Strang offers is really great!
@hurbig4 жыл бұрын
With lectures this good, I can watch this instead of Netflix. I have one professor who also gives phenomenal lectures, and lectures this good bring me as much joy as, or even more than, playing a good video game or watching a good show. It is interesting and entertaining, and it blows my mind. Truly a fantastic job! Thank you, Professor Strang!
@MasterCivilEngineering4 жыл бұрын
Really true dear
@starriet2 жыл бұрын
Guys, let's watch Prof. Strang instead of watching dumb TV shows!!! (well.. I'm dumb too though!)
@staa08 Жыл бұрын
These kinda people are scary bruh!!! Hats off to you for having this kinda motivation
@trevandrea890911 ай бұрын
@@starriet You're not dumb. The fact that you watch linear algebra videos shows you're interested in learning, and that you are smart :)
@ashutoshtiwari43985 жыл бұрын
18:04 and 28:12 prove Prof. Strang is a man of his word.
@MasterCivilEngineering4 жыл бұрын
He definitely is
@starriet2 жыл бұрын
That's what I wanted to say :)
@jaydenou681811 ай бұрын
At 7:28, in case someone is wondering why e = (I - P)b: you can derive it as e = b - Pb, which just means e = b - p.
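Written out as one chain of equalities (with I the identity matrix), the step is just:

```latex
e \;=\; b - p \;=\; Ib - Pb \;=\; (I - P)\,b
```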
@ozzyfromspace4 жыл бұрын
"please come out right". "oh yes!" "thank you god" 😂
@MasterCivilEngineering4 жыл бұрын
Good
@super-creative-stuff14213 жыл бұрын
I think it's that sort of personality that my teachers in school were missing. They didn't care about math at all.
@bboysil11 жыл бұрын
I love how you can reach the same answer using a calculus approach too... and I LOVE the two pictures for the least squares regression. Beautiful stuff and an amazing lecturer.
@LAnonHubbard11 жыл бұрын
The bit at 4:54 where b is replaced by Ax, giving A(A^T A)^(-1) A^T Ax, which then collapses to Ax, is fantastic. It's so high-level and yet so simple to see.
@Hotheaddragon4 жыл бұрын
He does not just teach linear algebra. He teaches us to see MATH as an art form, shows us how to draw math, and how to admire its beauty.
@MasterCivilEngineering4 жыл бұрын
Yes absolutely dear
@fearlesspeanut68687 жыл бұрын
This is probably one of the best courses I have ever taken. Prof. Gilbert Strang really rocks! Never thought linear algebra could be this beautiful.
@yryj10008 жыл бұрын
"Oh god, come out right here!" ... "Thank you, God!" XD I was dying at those parts of the lecture. Not only does he teach skillfully, he's hilarious too.
@xiangzhang85088 жыл бұрын
Linear algebra is so much fun in Prof. Gilbert Strang's hands.
@pubgplayer17204 жыл бұрын
Yeah. It's so good. It's senior algebra!
@maheepchaudhary42005 жыл бұрын
Thank you, Prof. Strang, for changing the way we study maths. Rather than cramming, we now not only study matrices but can visualize them, because of you. I've never seen such a wonderful teacher. I hope I will meet you someday to show my gratitude. Highly recommended for everyone.
@nguyenbaodung16033 жыл бұрын
I love how he doesn't even need to explain things laboriously; everyone is already interested in linear algebra taught by him. He really encourages students to brainstorm other insights on top of the strong background he provides.
@Concon934312 жыл бұрын
Really an inspiration to me, seeing how the pieces add up and come together. A great lecture: intense, yet easy to follow and understand.
@bhaweshkumar712810 жыл бұрын
Mr. Strang and MIT, thank you so much.
@maoqiutong6 жыл бұрын
Professor Strang, you are the first person who has made me deeply understand linear regression from linear algebra's point of view.
@ankanghosal3 жыл бұрын
His lectures come full circle: he keeps returning to statements he made earlier in the lecture.
@eklavyashukla81063 жыл бұрын
WE NEED MORE TEACHERS LIKE YOU, Mr. Strang!!!! Regards, a fellow student you never met.
@heropadaimazero15 жыл бұрын
The mistake starts at 11:26. The right-hand side is (1 2 2), not (1 2 3). But he later uses (1 2 2) for all the other calculations, so it's not a big deal.
@matthewwhitted-tx3xf7 ай бұрын
Not the first time he's made a mistake and not caught it.
@lee_land_y696 жыл бұрын
At 11:34 he made a typo in b; it should be [1, 2, 2], right?
@subhasishsarkar3635 жыл бұрын
yes
@jurgenkoopman90914 жыл бұрын
I think he makes these mistakes on purpose. Unbelievable that there is almost no reaction from the students.
@walterlevy59244 жыл бұрын
Yes, and he got away with it because he remembered [1,2,2] instead of using the erroneous [1,2,3] for b that he put on the blackboard.
@ZhanyeLI4 жыл бұрын
@@jurgenkoopman9091 Maybe he wanted to check whether the students were paying attention in class.
@francescocostanzo82254 жыл бұрын
I thought I was going crazy, since this is not the top comment. Wait, if even the comment section doesn't see it, then I have nothing.
@niclasn2695 Жыл бұрын
I'm watching this 27 years after I took a similar course at my university. I haven't seen much linear algebra during my career, but now, watching this, everything seems much clearer to me. Strang is a really good lecturer.
@gigis13932 жыл бұрын
I learned this about 30 years ago at the Technion in Haifa. If only I had had such videos or such an instructor back then, life would have been a breeze.
@shivammalviya37374 жыл бұрын
This is the 3rd time I am taking these lectures in the last 2 years. Thank you, professor, these lectures are amazing.
@divdagr84 жыл бұрын
Rediscovering Linear Algebra again with Professor Strang! So intuitive with him.
@johndon19866 жыл бұрын
This video quietly provides the proof of the regression assumption that features have to be uncorrelated/independent. I had only read that in theory, but now I can see exactly why. Thank you, Prof. Strang.
@Eizengoldt Жыл бұрын
It's mad that we don't learn statistics this way in class.
@samuelphiri85415 жыл бұрын
29:14 “You have to admire the beauty of this answer”. 😂😂
@sahajthareja94154 жыл бұрын
I am an econ student and have studied regression in my statistics class, but I was never able to understand how exactly it was connected to the nullspace and the column space. A totally new perspective. Thanks a lot for this series, Prof. Strang and MIT.
@bilyzhuang92425 жыл бұрын
I suggest every college's linear algebra course use these lecture videos. Prof. Strang makes linear algebra so intuitive, interesting, and easy to understand. He draws the pictures and tells you what's going on in the vector space, and then he goes back to the theory so you gain a deep comprehension.
@dylanhoggyt13 жыл бұрын
@pelemanov From what I understand, we are projecting b into the column space so that we can actually solve the system with a best estimate. The closest (least-squares) approximation of b is its projection p; the projection would only be zero if b happened to be in the null space of A transpose, but there is no requirement that this is the case. In fact, if b happens to be in the column space, then the projection doesn't change b at all (i.e. Pb = b).
@jcf129er7 жыл бұрын
This is like pure intellectual chocolate. Gilbert Strang should've taken over Wonka's Chocolate Factory, not Charlie.
@francescocostanzo82254 жыл бұрын
36:48 I was legitimately wondering this. Thank you, Professor Strang, for answering my question from beyond the screen.
@georgesadler78303 жыл бұрын
This is another great lecture by MIT Professor Dr. Gilbert Strang. Least squares puts linear algebra into another world by itself.
@supersnowva6717 Жыл бұрын
Beautiful lecture, just beautiful. Prof. Strang is drawing the beauty of Linear Algebra on a blackboard.
@nateshtyagi3 жыл бұрын
I can't stress my thanks enough. Thanks for everything Prof Strang, MIT.
@PyMoondra5 жыл бұрын
This was a really good lecture. It was packed with insights. I love how everything is coming together.
@НикитаЮрченко-э3ь5 жыл бұрын
The best linear algebra course. Thanks, Prof. Strang!
@MasterCivilEngineering4 жыл бұрын
Absolutely dear
@super-creative-stuff14213 жыл бұрын
I had horrible experiences with learning math in elementary school and since then, I've had a negative predisposition to it. This playlist is reversing that predisposition.
@lucaswolf94458 жыл бұрын
He might correct it later, but as I am watching it: I'm afraid Prof. Strang made a tiny error (pun not intended) at about 12:00. According to my understanding the right-hand side of the equation should be (1 2 2)^T not (1 2 3)^T. Can anyone confirm this? Awesome lecture nevertheless.
@matthewlang87118 жыл бұрын
+Lucas Wolf Yeah I noticed that too.
@OhCakes8 жыл бұрын
+Lucas Wolf You are correct. It is Ax=b where b in this case is (1,2,2)
@OhCakes8 жыл бұрын
+Lucas Wolf He does switch it back to (1,2,2) though so the output is still correct.
@rongrongmiao46387 жыл бұрын
Interesting that no MIT student corrected him on that...
@bayesianlee64476 жыл бұрын
Me too. I was just curious why they never let him know so he could fix it.
@santiagotheone2 жыл бұрын
34:29~35:13 It is really helpful for me that he explicitly pointed that out.
@karthik36853 жыл бұрын
Nothing new in my comment. EE Grad - did all of this math in undergrad. Don't remember any of it, and never developed an intuition for it. This is so friggin' amazing!! Dr. Strang is a rockstar!
@phononify3 ай бұрын
Minute 15:19: a slight typo. The components of the third point are [3, 2], but b switched to [1, 2, 3]. Such a very good lecture to start linear algebra with... I watched it many years ago, but I come back whenever I need to review.
@thabsor2 жыл бұрын
I finally understood OLS in econometrics. Now I can say I comprehend what I'm doing, instead of mindlessly applying formulas and rules. Thank you very much, Mr. Strang.
@mauriciobarda5 жыл бұрын
Professor Strang, you are excellent. Thanks a lot to you and MIT for these lectures, and to all the supporters of OCW.
@jaydenou681811 ай бұрын
At 22:50, in case someone is wondering why he tacks on the column like that: it is just for convenience in solving for \hat{x} in A^T A \hat{x} = A^T b. You could do it the regular way instead, by first multiplying the matrices out to get the square system A^T A \hat{x} = A^T b with explicit numbers, and then solving it from there.
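For anyone who wants to check the numbers, here is a minimal numpy sketch of that normal-equations solve on the lecture's example points (1,1), (2,2), (3,2); it's only an illustration, not code from the course:

```python
import numpy as np

# Fit the line C + D*t to the points (t, b) = (1, 1), (2, 2), (3, 2).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])   # first column multiplies C, second column multiplies D
b = np.array([1.0, 2.0, 2.0])

# Normal equations: A^T A x_hat = A^T b
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)                 # [0.6667 0.5]  ->  C = 2/3, D = 1/2, the lecture's best line
```

If I remember the lecture right, the tacked-on column is just the right-hand side A^T b, so elimination can be run on the 2x2 matrix and its right-hand side at once.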
@inazuma3gou4 жыл бұрын
I had some trouble connecting the two pictures. What helped me understand the connection is rewriting the original equation as b = p + e, i.e., breaking b down into its projection p onto the column space of A plus the error vector. We find e1, e2, e3, the components of the error vector, by solving for C and D such that e is in the null space of A transpose.
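With the numbers from the lecture's example (b = (1, 2, 2), best line 2/3 + t/2), that decomposition comes out as:

```latex
b = p + e:\qquad
\begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}
=
\begin{pmatrix} 7/6 \\ 10/6 \\ 13/6 \end{pmatrix}
+
\begin{pmatrix} -1/6 \\ 2/6 \\ -1/6 \end{pmatrix}
```

with p in the column space of A and e perpendicular to it (e is in N(A^T)).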
@nguyenbaodung16033 жыл бұрын
With his lectures, I could sit at my desk all day and study. Math is so great.
@AngeloYeo8 жыл бұрын
It is crazy... really amazed... (tears drop)
@김창현-v4z3 жыл бұрын
Absolutely!
@jamesking24394 жыл бұрын
Gilbert is really good at teasing the next lecture. I have to force myself to stop watching so I can sleep.
@nguyenbaodung16033 жыл бұрын
Listening to your voice has been my priority these days.
@ManishKumar-xx7ny3 жыл бұрын
This is a lecture of the very best quality. Thrilling.
@bezaeshetu54543 жыл бұрын
What an interesting lecture this is. You showed how maths is a pillar of statistics. During the lecture I kept thinking about the assumptions of least squares estimation; they come from maths (like the independence assumption). Great work. God bless you, Prof.
@dwijdixit78102 жыл бұрын
17:54 Considering Prof. Strang's art is teaching, he is undoubtedly one of the greatest in the world!
@hj-core Жыл бұрын
Professor Gilbert is going to make us good space travelers 😂 Thanks to MIT OCW and Professor Gilbert for bringing such great lectures to us.
@haileywei18844 жыл бұрын
I can't stop watching these lectures...
@LAnonHubbard10 жыл бұрын
The proof of A^tA being invertible around 39:00 was great!
@ThePimp4dawin4 жыл бұрын
Beautiful lecture and amazing lecturer! Thank you Mr. Strang!
@MasterCivilEngineering4 жыл бұрын
Thankyou
@HanZhang199410 жыл бұрын
Around 20:00: why are the errors "e" the vertical distances between the line and the "b" points? Why don't we look at the shortest distance between b and p, i.e. the distance from b to the line measured perpendicular to the line (not parallel to the y-axis)?
@shaochenghua10 жыл бұрын
Because the line is not the column space. If you go back to the previous lecture, the goal was to project the vector b onto the column space of A in order to solve Ax = b. In this example, A is a 3x2 matrix and b is a 3x1 vector, so b - A*x_hat is perpendicular to the column space of A, which is a 2D plane. Here the X-Y plane (the plane on the blackboard) is not the column space; it represents the possible solution plane (the null space shifted by the special solution, check earlier lectures). The line is not the column space either, so the perpendicularity doesn't show up in this geometric illustration. If you are looking for the shortest distance, i.e. an error e perpendicular to the line, you are projecting each point onto a 1-dimensional line, which means you are treating the line as the column space of a totally different equation. The example is indeed not a perfect illustration when Professor Strang calls it the "best line" close to all three points; it is not the closest line to all three points in the perpendicular sense. What you describe is another way of finding a "best line", minimizing the sum of squared perpendicular distances to a 1-D line, but it cannot be written in the same Ax = b format that the professor wrote on the board.
@MrNtuer6 жыл бұрын
That would be Support Vector Machine
@varun22756 жыл бұрын
I think you're confusing it with principal component analysis.
@user-fv8im2 жыл бұрын
Excellent. I was amazed to see the one-to-one correspondence between solving Ax = b when b is not in the column space of A, and least-squares fitting by a line when the points don't all fit exactly on a straight line.
@khurshedfitter56956 жыл бұрын
At around 18:00, why did he take p1, p2 and p3 on the same vertical lines as b1, b2 and b3 respectively? Why not take them on lines perpendicular to the line we drew (I mean, why not project them properly?), since the line we drew is not perpendicular to the vertical. Help please. Thanks :)
@ashutoshtiwari43985 жыл бұрын
I don't think I understood your question, but I will try to help anyway.
- b1, b2, b3 are not on any straight line.
- p1, p2, p3 are on the line p = C + Dt.
- b1 is e1 away from p1 (along the vertical axis, i.e. the C axis). Similarly for b2 and b3.
@ShoookeN9 жыл бұрын
32:56 Thank you Prof. Strang.
@muhammedyusufsener16223 жыл бұрын
Excellent lecture. Professor Strang is a legend.
@LAnonHubbard12 жыл бұрын
This is mind blowing. Great lecture.
@pelemanov13 жыл бұрын
@j4ckjs What I mean is that around minute 14:00 he says that the error is vertical instead of orthogonal to the line. I thought we were trying to minimize the error by orthogonal projection. I'm probably mixing things up, but I don't see it.
@yanshudu93702 жыл бұрын
Notes: 1. For Ax = b, we can draw a picture in which the projection vector p plus the error e equals b. To solve linear regression problems, first solve A'A x̂ = A'b; then p = A x̂ and e = b - p. 2. If A has independent columns, then A'A is invertible.
@gladragsakshay13 жыл бұрын
Prof. Strang spoke so much about errors, and he did make one! :P
@richarddow89672 жыл бұрын
I loved his sincerity when he thanks God @32:58
@watchmanling11 ай бұрын
I just don't understand why other MIT recordings can't match the standard of this masterpiece.
@vigneshStack11 ай бұрын
Bro, if you don't mind, can you explain to me how at 29:00 the 5/3 value comes to -2/6?
@richarddow89672 жыл бұрын
Glad I decided to go all the way back to basic LA, such a great and thorough review
@Gabriel-pd8sv3 жыл бұрын
Before this lesson, I liked linear algebra. Now I LOVE IT!!
@mind-blowing_tumbleweed Жыл бұрын
At 44:00, why does "A having independent columns" mean that all columns of A are independent? Why can't A have, say, 2 independent columns and 1 dependent column?
@aymericzambo3457 жыл бұрын
@43:00 I laughed like crazy. I just felt like he was saying: "Please God, let these kids understand the one most basic thing of all this linear algebra. This is about to be on tape!" Mr. Strang is a great professor; I wish I had him as a teacher. Learning a lot from this linear algebra online course. Thank you MIT for creating OpenCourseWare.
@baconpenguin94 Жыл бұрын
HES THE GOAT. THE GOAAAAAT
@georgemendez52456 жыл бұрын
The man, the myth, the legend: Gilbert Strang.
@Hobbit1837 жыл бұрын
An alternative way of deriving the equation: suppose Ax = b has no solution. Let's say c is the best solution, so that Ac = proj(b onto C(A)). Then Ac - b is orthogonal to Ac, so the dot product of Ac - b and Ac is zero. In matrix form that is (Ac)^t (Ac - b) = 0, i.e. c^t A^t (Ac - b) = 0, i.e. c^t (A^t Ac - A^t b) = 0. Since we don't want c to be zero, c^t can't be zero, and that means A^t Ac - A^t b = 0, or A^t Ac = A^t b.
@varun22756 жыл бұрын
Why the restriction that c can't be 0, though? c could very well be 0, and the projection might have zero length.
@pelemanov13 жыл бұрын
Great lecture! But aren't we supposed to make an orthogonal projection? Instead he did a projection parallel to the Y-axis because he calculates p1, p2 and p3 by taking t-values 1, 2 and 3. You can also see it on his drawing. Why does he take this projection instead of the orthogonal one? And how can e turn out to be orthogonal to p anyway?
@kz_cbble96703 жыл бұрын
Well done, Mr. Gilbert, congratulations.
@antoniolewis10168 жыл бұрын
"Make Bases Orthonormal Again!"
@1454LOU5 жыл бұрын
I dare any trumpster to get that.
@Hotheaddragon4 жыл бұрын
Lol
@generalissimoblanc73954 жыл бұрын
Good one!
@melissaallinp.e.52094 жыл бұрын
48:04..."Thank you, God". I love this man.
@MasterCivilEngineering4 жыл бұрын
Thank you God
@dharmaturtle8 жыл бұрын
For the life of me I couldn't "get" the final proof, but now I think I get it. If A has linearly independent columns, then A^T A has linearly independent columns too. The first condition is about Ax = 0, and the latter about (A^T A)x = 0. Remember that linear independence means the only combination that gives zero is the zero vector. So if the columns of A are linearly independent, Ax = 0 means x = 0, and the proof shows that (A^T A)x = 0 leads back to Ax = 0.
@SalomonZevi8 жыл бұрын
First he uses the fact that if A has n independent columns, then A has rank n and its row space spans R^n; therefore, if Ax = 0, it must be that x = 0. Then he argues that if A'Ax = 0, it must be that Ax = 0, which means that x = 0.
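A compact way to write that argument in symbols:

```latex
A^{T}Ax = 0
\;\Rightarrow\; x^{T}A^{T}Ax = (Ax)^{T}(Ax) = \lVert Ax\rVert^{2} = 0
\;\Rightarrow\; Ax = 0
\;\Rightarrow\; x = 0 \quad (\text{columns of } A \text{ independent})
```

so the nullspace of A^T A contains only the zero vector, and A^T A is invertible.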
@luigiquitadamo19905 жыл бұрын
Always great lectures; thanks, Professor Strang and MIT. Thank you, life!
@MasterCivilEngineering4 жыл бұрын
Thanks and bless you
@tianqilong8366 Жыл бұрын
Typo at 11:40: the b vector should be (1, 2, 2) instead of (1, 2, 3).
@poiuwnwang71094 жыл бұрын
I always hated linear algebra, but Prof. Strang makes it fun.
@starriet2 жыл бұрын
Learning math is so... delicious!! :D I'm not even a math genius. Thanks Prof. Strang, MIT, and KZbin.
@pelemanov13 жыл бұрын
@dylanhoggyt I get that, but that doesn't answer my question. What I mean is that around minute 14:00 he says that the error is vertical instead of orthogonal to the line. I thought we were trying to minimize the error by orthogonal projection. I'm probably mixing things up, but I don't see it.
@michaellewis78613 жыл бұрын
Thm. A vector is orthogonal to itself if and only if it is the zero vector. Proof. Backwards: suppose a vector x that is orthogonal to itself has a nonzero entry x_i. Then x^T x = the sum of the x_j^2 >= x_i^2 > 0 (the same holds for negative x_i), which contradicts x^T x = 0. Forward: trivial. Now apply it: if A^T A x = 0, then x^T A^T A x = (Ax)^T (Ax) = 0, so Ax is orthogonal to itself and hence Ax = 0. If A has independent columns, i.e. is of full column rank, then the only vector in N(A) is 0, so x = 0. Δ
@achillesarmstrong96396 жыл бұрын
Is there a mistake at 14:40? Should it be the perpendicular distance to the line, not the vertical distance to the line?
@ashutoshtiwari43985 жыл бұрын
It's the vertical distance. Consider the plot of C versus t. We cannot draw one straight line through the points b1, b2, b3 at the corresponding t1, t2, t3. So we draw an approximate straight line with points p1, p2, p3 at t1, t2, t3. Now the approximate line gives p1 instead of b1, so the error is p1 - b1, which is along the vertical direction.
@thanhduynguyen22258 ай бұрын
In the case of regression, why is a point connected to the fitted line by a vertical segment (parallel to the y-axis), rather than by a segment perpendicular to the fitted line, as shown in the chapter on projections?
@elborrador3338 жыл бұрын
I thought that the projection of the points onto the line (the "error") was meant to be perpendicular to the line, not a projection along the y-axis.
@loukas3718 жыл бұрын
I was wondering about the same thing :/
@lamps4538 жыл бұрын
I don't think so. This projection is not literally projecting the points onto the line, and the error is calculated as A x_hat - b, hence it has to be along the y-axis.
@SalomonZevi8 жыл бұрын
The projection is of the vector b (all 3 y-values) onto the column space, which reassigns p to be the 3 fitted y-values. The x-values are not changing, since they define the matrix. The error is indeed perpendicular to the column space. Here Strang is showing how linear algebra translates into the language of linear regression.
@vipulpatki6 жыл бұрын
Look at it this way: For each x, there is a computed value, and the observed value. Then the difference between the two ordinates is the error. The "line" comes later, after we have minimized the total error. Hope I am making sense.
@lucasm42996 жыл бұрын
MIT!! MIT!! 🇺🇸🏆👌🏼
@MultiRNR10 ай бұрын
Actually it is still hard to visualize why the projection leads to least squares. I think of it like this: the b's are the observed target values, the columns of A span a (hyper)plane, and you want to map the target vector onto that plane; the closest point to it (in terms of distance) is the projection onto the plane.
@anangelsdiaries25 күн бұрын
12:08 "but the column space doesn't contain this vector" had me like, huh? Until I realized there was a typo in the last row of b. It should be 2.
@kpfxzzsy3 жыл бұрын
Thank you professor, Thank you MIT.
@User-cv4ee6 жыл бұрын
It does not seem like we are making a perpendicular projection of the data points onto the line. Rather, we are taking the error (e) in the vertical direction. Is that still correct?
@achillesarmstrong96396 жыл бұрын
same doubt
@gulshanjangid34706 жыл бұрын
I think he is minimizing the error in the Y direction: (error in Y) = |"actual Y value" - "Y value given by our line"|. But I guess to minimize the overall error we should go for perpendicular projections.
@User-cv4ee6 жыл бұрын
Thank you for the replies... I thought about it and realized that minimizing in the y direction is kind of the same as minimizing the perpendicular distance, since they are related by the Pythagorean theorem and we're restricted to a line with the same slope.
@prensandre3 жыл бұрын
What a king you are, man.
@irem99553 жыл бұрын
sjkhdjkshjj
@afianzamientofermath12074 жыл бұрын
Thank you so much, MIT
@amyzeng71303 жыл бұрын
Could someone give me a clue why we use A^T A x̂ = A^T b to solve for [C, D] at 21:05? To my understanding, A x̂ cannot be b if we fit a line whose points are p1, p2, p3 instead of b1, b2, b3, i.e. [1 2 2]^T?
@amyzeng71303 жыл бұрын
I got it. I messed up the two pictures. This solves the invertible square system A^T A x = J, where J = A^T b and x = [C, D].
@seanyboyblu2 жыл бұрын
One thing that I cannot figure out, if somebody can help me: the projection is supposed to be A times that x hat. But when we solve, he does A^T A x̂ = A^T b. That is not p; p would be A times the solution of that, wouldn't it?
@anonym4982 жыл бұрын
I wonder if multivariable calculus is a prerequisite for this course. Could you tell me, please?
@weizhili4477 жыл бұрын
Why wouldn't the error be the perpendicular distance to the line? In the video, the prof said it is the vertical distance to the line.
@etsevnevo13156 жыл бұрын
Hello, I think this is because the regression line is drawn in the 2D space which is the row space of A. The minimization/projection is carried out in the column space of A.
@khurshedfitter56956 жыл бұрын
Same query.
@michaelj76775 жыл бұрын
"Make the error vector perpendicular to the Column Space" means "minimize the euclidean Norm of e". Because the shortest vector from the Column Space of A to b is the perpendicular vector. Minimizing the euclidean Norm means: "minimize the Square root of the sum of squares (of the components of e with regard to the m-dimensional space)". This happens to be the same thing as minimizing the sum of squares of each of the m residuals (=the vertical distance to the line)
@coffle19 жыл бұрын
Why I-P at 8:22?
@coffle19 жыл бұрын
Ahh, that makes sense. Thanks for clearing that up!
@doge-coin6 жыл бұрын
Why is that? Were some comments deleted? I can't see the answers. :(
@sourabhk23736 жыл бұрын
lol same here.
@jazonjiao6386 жыл бұрын
@Phoebe Wang: e = b - p = Ib - Pb = (I - P)b, where I is the identity matrix.