"No mathematics went on there; we just got some vectors that were lying down to stand up."
@corey333p • 8 years ago
Gotta know the bases for the spaces.
@why6447 • 4 years ago
AHAHHAHHAHAHHAHAH
@delta_sleepy • 1 year ago
😂
@jtlvhpublic • 19 days ago
This place is not a place of honor... no highly esteemed deed is commemorated here... nothing valued is here. What is here was dangerous and repulsive to us. This message is a warning about danger.
• 10 years ago
Thank you MIT, thank you Prof Strang.
@PhucLe-qs7nx • 3 years ago
00:00 Error from last lecture, row dependence. 04:28 The 4 fundamental subspaces. 08:30 Where are those spaces? 11:45 Dimensions of those spaces. 21:20 Bases for those spaces. 30:00 N(A^T), the "left nullspace". 42:10 A new "matrix" space.
@lokahit6940 • 1 year ago
I'm asking you because yours is the most recent comment: at 9:15, how is the column space R^m? For an m×n (m rows × n columns) matrix there are n columns, so n column vectors, so it should be R^n, right?
@aarongreenberg159 • 9 months ago
@@lokahit6940 Because each vector in the column space has m components. Yes, there are n vectors, but the number of components of a vector determines the dimension of the space it's in. This is different once you get to a basis, where the number of vectors determines the dimension, but even that is a subspace of R^(# of components). So a two-vector basis where each vector has 5 components is a 2D subspace of R^5.
@deveshbhatt4063 • 6 months ago
@@aarongreenberg159 Thanks for the clarification.
@GavinoFelix • 10 years ago
"But, after class - TO MY SORROW - a student tells me, 'Wait a minute, that [third vector] is not independent...'" I love it. What other professor brings this kind of passion to linear algebra? This is what makes real, in-the-flesh lectures worthwhile.
@xoppa09 • 6 years ago
Give that brave student a medal.
@fanzhang3746 • 6 years ago
xoppa09 I think here it is the professor that's honorable. He elaborated on his mistake, which was reasonably embarrassing for him, and made important concepts clear. I think most others would just correct it, apologize, and move on. You can see his embarrassment when he used words like 'bury', and his reaction when he accidentally uncovered the board again later.
@andersony4970 • 3 years ago
@@fanzhang3746 I don't think he is much embarrassed. He talked about doing math in class in the first video of this series, if you've watched that. He said that mistakes are inevitable, and that it's great to go through the whole process with the students, including making errors and correcting them.
@sahil0094 • 2 years ago
What's so passionate about accepting & correcting your own mistake?
@matthewsarsam8920 • 11 months ago
Can't lie, being able to pause the video and ponder the ideas is so nice to have. Goes to show how much work those students had to put in.
@juansepardo2020 • 1 year ago
I am a 4th-year double-engineering student re-learning linear algebra so I can have a stronger basis for ML, DL and AI. Never in my college classes, or in independent study, have I been so amazed by the way a concept is introduced as I was when Prof. Strang got to the computation of the left nullspace. The way this man teaches is just astonishing; thank you very much.
@reganmian • 11 months ago
Have you checked out his newest book, "Linear Algebra and Learning from Data"? That plus "Introduction to Statistical Learning", given a foundation in programming, probability, and statistical inference, is a killer combo. I'm a statistics graduate student wanting to specialize in ML. I've been watching these at 2x speed as a review.
@itsnotthattough7588 • 6 months ago
OMG, I'm literally the same. I jumped on ML and AI early in my 2nd year, but could not understand any concepts thoroughly. Now I really feel the need to relearn the basics, and Prof. Strang is like a savior for me.
@davidwilliam152 • 4 years ago
What a perfect thing, to be a great mathematician and a great teacher at the same time! Especially being a great teacher; that is priceless!
@duqueng • 14 years ago
The best teacher ever. I really admire what MIT has done. As the phrase on its website says: "Unlocking Knowledge, Empowering Minds."
@yanshudu9370 • 2 years ago
Conclusion: the four fundamental subspaces of an m×n matrix A: 1. The column space, the span of the column vectors, lives in R^m; notation C(A); dimension r. 2. The nullspace of A, spanned by the special solutions (one per free variable), lives in R^n; notation N(A); dimension n−r. 3. The row space, the span of the row vectors, lives in R^n; notation C(A'); dimension r. 4. The left nullspace of A, the nullspace of A', lives in R^m; notation N(A'); dimension m−r. Other conclusions: dim C(A') + dim N(A) = n, and dim C(A) + dim N(A') = m.
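These dimension counts are easy to check numerically. A minimal sympy sketch, using the 3×4 example matrix from the lecture (rows 1 and 3 identical), as quoted in a reply further down the thread:

```python
import sympy as sp

# The 3x4 matrix from the lecture: rows 1 and 3 are identical
A = sp.Matrix([[1, 2, 3, 1],
               [1, 1, 2, 1],
               [1, 2, 3, 1]])
m, n = A.shape
r = A.rank()

dim_col   = len(A.columnspace())   # dim C(A)   = r
dim_null  = len(A.nullspace())     # dim N(A)   = n - r
dim_row   = len(A.rowspace())      # dim C(A')  = r
dim_lnull = len(A.T.nullspace())   # dim N(A')  = m - r

print(dim_col, dim_null, dim_row, dim_lnull)  # prints: 2 2 2 1
assert dim_row + dim_null == n   # r + (n - r) = n
assert dim_col + dim_lnull == m  # r + (m - r) = m
```

The two assertions are exactly the two sum identities stated above.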
@HamizAhmed-uk4de • 27 days ago
The four fundamental subspaces are the column space, null space, row space, and left null space. The dimensions of these spaces are related to the rank of the matrix, with the sum of the dimensions of the null space and row space equaling the number of columns, and the sum of the dimensions of the column space and left null space equaling the number of rows. Highlights: 00:10 The lecture focuses on correcting errors from the previous lecture and introducing the concept of four subspaces associated with a matrix, including column space, null space, row space, and the left null space. -Explanation of the error correction process from the previous lecture and the significance of having different bases for spaces in linear algebra. -Introduction and explanation of the row space as a fundamental subspace, its basis, and its connection to the rows of a matrix through combinations. -Discussion on transposing matrices to work with column vectors, leading to the column space of the transposed matrix and the null space of the transposed matrix. 07:52 Understanding the four spaces in linear algebra - null space of A, column space of A, row space of A, and null space of A transpose - is crucial as they provide insights into the properties of matrices and their dimensions. -The importance of the four spaces in linear algebra and their relation to matrices' properties and dimensions. -The process of determining bases and dimensions for each of the four spaces, providing a systematic approach to understanding and analyzing matrices. -Explanation of the dimensions of the column space, row space, and null space of A transpose, highlighting their significance in understanding linear algebra concepts. 15:47 Understanding the dimensions of row space, column space, and null space in a matrix is crucial. The row space and column space have the same dimension, while the null space dimension is determined by the number of free variables. 
-The relationship between row space, column space, and null space dimensions. The row space and column space have the same dimension, while the null space dimension is determined by the number of free variables. -Determining the basis and dimensions of the null space. Special solutions from free variables form a basis for the null space, with the dimension being n-r, where n is the total number of variables and r is the number of pivot variables. -Exploring the dimensions of the left null space. The left null space dimension is m-r, where m is the number of rows of A (the number of columns of A transpose). A transpose follows similar rules as the original matrix in terms of dimensions. 23:24 Understanding row space and column space in matrix operations is crucial. The row space and column space of a matrix can have different bases, but the row space basis can be identified from the matrix's rows. -Difference between row space and column space in matrix operations. Identifying the basis for the row space using the matrix's rows. -Exploring the concept of a basis for the row space and its significance in matrix transformations. The importance of independence in determining the basis for the row space. -Understanding the left null space of a matrix and its relation to the null space of the matrix's transpose. Exploring the concept of vectors in the null space of A transpose. 31:06 Understanding the left null space involves transforming A to R using row reduction, resulting in a matrix E. In the invertible square case, E is the inverse of A, but for rectangular A, E connects A to R. -Explanation of the left null space and its connection to row reduction and matrix E in transforming A to R. -Comparison of E in the invertible square case to the case of a rectangular A, where E does not represent the inverse of A. 38:58 Understanding the concept of subspaces in linear algebra is crucial.
The video discusses row space, null space, column space, and left null space, emphasizing their dimensions and relationships in a matrix. It also introduces a new vector space using three by three matrices. -Exploring the dimensions and relationships of row space, null space, column space, and left null space in a matrix is essential in linear algebra. -Introducing a new vector space using three by three matrices and discussing the rules that define vectors within this space. -Discussing subspaces within the matrix space, such as upper triangular matrices and symmetric matrices, and how the intersection of subspaces forms a subspace. 46:36 The dimension of different subspaces of matrices can be determined by finding a basis. Diagonal matrices have a dimension of three and can be spanned by three independent matrices. -Understanding the concept of dimension in linear algebra and how it relates to subspaces of matrices. -Exploring the basis of diagonal matrices and how they form a subspace.
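The matrix-space dimensions summarized above (all 3×3 matrices, and the symmetric, upper triangular, and diagonal subspaces) can be sanity-checked numerically: flatten each matrix to a 9-vector and let the rank count the dimension of the span. A small numpy sketch (the helper names are mine):

```python
import numpy as np

def dim_of_span(mats):
    # flatten each 3x3 matrix to a 9-vector; the span's dimension is the rank
    return np.linalg.matrix_rank(np.array([M.ravel() for M in mats]))

def unit(i, j):
    # the "standard basis" matrix E_ij with a single 1 in position (i, j)
    E = np.zeros((3, 3))
    E[i, j] = 1.0
    return E

full      = [unit(i, j) for i in range(3) for j in range(3)]            # all of M
upper     = [unit(i, j) for i in range(3) for j in range(3) if i <= j]  # upper triangular
diagonal  = [unit(i, i) for i in range(3)]
symmetric = [unit(i, j) + unit(j, i) for i in range(3) for j in range(i, 3)]

print(dim_of_span(full), dim_of_span(symmetric),
      dim_of_span(upper), dim_of_span(diagonal))  # prints: 9 6 6 3
```

These match the dimensions counted in the lecture: 9 for the whole space, 6 for symmetric, 6 for upper triangular, and 3 for diagonal (the intersection of the previous two).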
@KaveriChatra • 5 years ago
"I see that this fourth space is getting second-class citizen treatment... it doesn't deserve it"
@NG-we8uu • 5 years ago
Kaveri Chatra, by coincidence I read this exactly when he said it
@alenjose3903 • 4 years ago
@@NG-we8uu me too, I just read this while I was listening to it 😂
@MrGameWWE • 4 years ago
Me too 😂😂
@bobmike828 • 5 years ago
Correct me if I'm wrong, but Strang was introducing abstract algebra at the end. Once you have all of these linear transformations transforming other linear transformations, you have an even greater transformation of space. Absolutely love this man.
@usozertr • 4 years ago
Bob Mike, yes, and in an earlier lecture he talked about how n x n permutation matrices form a group
@pubgplayer1720 • 1 year ago
Yes, abstract vector spaces are quite important in linear algebra
@PyMoondra • 5 years ago
The end portion really showed how matrix algebra theory can be applied to computer vision; really glad he added that in.
@Upgradezz • 3 years ago
It's my honor to have met you, even virtually, sir!
@DanielCoutoF • 9 years ago
I am so fascinated by the way professor G. Strang gives his lectures; he does it in such a great way that even a five-year-old boy could understand. Meanwhile, teachers at my university make the subject so complicated that even well-above-average students struggle to understand the concepts properly.
@JadedForAlways • 9 years ago
+Daniel Couto Fonseca What about a 5 year old girl?
@DanielCoutoF • 9 years ago
Only 5 years old WHITE BOYS I would say
@JadedForAlways • 9 years ago
Are you joking? I can't tell
@DanielCoutoF • 9 years ago
I guess it's funnier if you don't
@Bm23CC • 9 years ago
+Daniel Couto Fonseca I challenge you to teach a 5 yr old linear algebra. Good luck with that.
@xiaohanwang3885 • 9 years ago
For the first time I envy students at MIT, because they have such brilliant lectures to attend.
@NostraDavid2 • 1 year ago
I don't. I've got it better: no time pressure to watch the lectures, and I don't NEED to do the exercises, nor the exams. It's great! 😁
@swatejreddy216 • 1 year ago
@@NostraDavid2 And none of the hefty fees either. So yeah.
@YufanZhou • 2 months ago
This lecture about the four subspaces is the most beautiful linear algebra lecture I have ever had.
@Cyraxsify • 8 years ago
At t = 38:00, Strang shows a way that expedites finding L: find E, then solve [E | I] to get E inverse, which = L. Now we can quickly decompose A into LU if we do Gaussian elimination only - not Gauss-Jordan elimination - from the beginning. At t = 43:00, he defines a vector space out of 3x3 matrices, call it M_33. At t = 47:00, he covers the dimensions of subspaces of M.
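The bookkeeping can be reproduced in a few lines of sympy: row-reduce the augmented block [A | I] and read E off the right half. This is a sketch, not the lecture's exact arithmetic; sympy's rref may scale rows differently from the by-hand elimination, but the identity E·A = R survives either way (using the lecture's 3×4 example matrix):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3, 1],
               [1, 1, 2, 1],
               [1, 2, 3, 1]])
m = A.rows

# Row-reduce [A | I]; the same row operations that send A to R turn I into E
aug, _ = A.row_join(sp.eye(m)).rref()
R = aug[:, :A.cols]   # rref(A)
E = aug[:, A.cols:]   # the recorded row operations, so E * A == R

assert E * A == R
# A row of E that multiplies A into a zero row of R lies in the left nullspace
assert E[m - 1, :] * A == sp.zeros(1, A.cols)
```

Since rank(A) = 2 here, the last row of R is zero, and the matching last row of E is a basis for N(Aᵀ), which has dimension m − r = 1.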
@maoqiutong • 6 years ago
This is the second time we see nobody in the classroom. The cameraman must be really happy to be a VIP student, I believe.
@phil97n • 1 year ago
How can you tell? He seemed to be talking to an audience
@klartraum8495 • 5 months ago
@@phil97n The cameraman avoids pointing at the chairs, and at the end you don't hear the usual chatter, just silence
@antoniolewis1016 • 8 years ago
This man has dedication! Also, that girl at the beginning must have been a sharp genius.
@ispeakforthebeans • 5 years ago
Bruh, it's MIT, they've got gods in there and you're talking about sharp
@akmalsultanov9801 • 4 years ago
Well, when you have an intuition for just the row space and column space and the connection between them, it's quite obvious, and you don't have to be a genius to recognize the dependence of those row vectors. In fact, the first half of linear algebra is relatively simple.
@sreenjaysen927 • 4 years ago
I think the professor just made that up and intentionally did it wrong in the previous lecture, just to introduce the row space. The professor planned it like in "Money Heist"
@leophysics • 3 years ago
@@sreenjaysen927 I agree
@jonathanoneill3464 • 8 years ago
These lectures are saving my bachelor's in engineering. Thanks MIT!
@rohanmalik895 • 6 years ago
Whoa, your icon shows very precisely that you survived engineering after all..... wish me luck
@bfl9075 • 3 years ago
I was totally astonished by the idea of computing the left nullspace! Thank you, Dr. Gilbert Strang.
@trevandrea8909 • 11 months ago
Thank you so much!! Your explanation is so amazing! Now I finally get why the column spaces of A and R are different, and why the row spaces of A and R are the same!! Btw, I'm saving 24:00 for the explanation of the subspaces of A and R
@serg3031 • 3 years ago
I want to write on that chalkboard with that chalk.
@vabez00 • 4 years ago
It seems quite satisfying indeed
@Lets_MakeItSimple • 3 years ago
the chalk looked like a big stone
@easterPole • 7 years ago
I'm into the fifth minute and wondering whether he made that mistake in the last lecture knowingly
@sachidanandprajapati9446 • 4 years ago
Man, exactly. Due to this error, I came to know that if a matrix is non-invertible, the columns are linearly dependent
@eduardoschiavon5652 • 4 years ago
40:54 There's no one in the class...
@ManishKumar-xx7ny • 4 years ago
Same thought, and maybe he did. Great chance
@matthieugrosrenaud1777 • 3 years ago
@@eduardoschiavon5652 Nah, it's because they reduced the rows of the class; what we see are the rows of zeros.
@GiovannaIwishyou • 3 years ago
I'm actually pretty sure he did this on purpose to trick the audience. Since the first two rows are identical, it's too obvious once you learn that a matrix must have the same number of linearly independent columns and rows (and it's a GREAT introduction to the lecture).
@Afnimation • 11 years ago
It's interesting that he constantly remarks on the fact that he presents things without proving them, but in fact I think he explains things so clearly and understandably that he doesn't need to prove them, because we can see them almost axiomatically.
@robertcarhart4168 • 1 year ago
Strang proves things without you even realizing that you've just experienced a 'proof.' He makes it very conversational and intuitive.
@LAnonHubbard • 12 years ago
Loved the bit at the end where he showed that upper triangular, symmetric, or diagonal matrices form a subspace.
@yufanzhou9948 • 4 years ago
The mistake professor Strang made turned into a great connection to the new topic. That's why he is a genius.
@gavinresch1144 • 4 years ago
It is amazing how he can do these lectures in front of no students and still be so engaging. In a way he is a great actor.
@Lets_MakeItSimple • 3 years ago
There are students in the back rows
@georgesadler7830 • 3 years ago
Incorporating MATLAB commands in the lecture is a great way for students to learn about matrices and linear algebra in context. The overall lecture is another classic by Dr. Gilbert Strang.
@stefanfarier7384 • 1 year ago
I really like how he talks. He sounds so friendly in his explanations.
@lokeshkumar-ub9bb • 9 years ago
At 3:15-3:20: instead of looking at the row picture to spot the dependence, we can also see that 2*(column 2) - (column 1) gives column 3 :)
@jacobm7026 • 6 years ago
This is correct, but his mistake actually illuminates the importance of understanding independence from both the row space and the column space. Most matrices won't make column-space independence this easy to spot, so conceptualizing both of those spaces will give you a deeper, richer understanding of vector spaces in general
@dhruvg550 • 5 years ago
He explains in the first three minutes why you didn't even have to look at the columns. The girl who pointed this out was quick!
@京城五 • 5 years ago
@@dhruvg550 I think the girl was Gilbert Strang himself
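Both observations in this thread, the repeated row and the column combination, are two views of the same rank deficiency. A small numpy sketch with a hypothetical matrix built to match the thread's description (rows 1 and 2 identical, column 3 = 2·column 2 − column 1; the actual blackboard numbers may differ):

```python
import numpy as np

# Hypothetical example: rows 1 and 2 identical, so the rank is at most 2
A = np.array([[1., 2., 3.],
              [1., 2., 3.],
              [2., 5., 8.]])
assert np.linalg.matrix_rank(A) == 2

# Row rank = column rank, so the columns must be dependent as well:
c1, c2, c3 = A.T
assert np.allclose(2 * c2 - c1, c3)   # the column combination the thread mentions
```

Either observation alone is enough to conclude the three columns cannot form a basis of R^3.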
@jingyiwang5113 • 1 year ago
I am really grateful for your wonderful explanation of the four fundamental subspaces. My mathematics exam is tomorrow. It is a wonderful resource for me to learn from and refresh my memory. Thank you so much!
@yourroyalhighness7662 • 9 months ago
My, I feel so... dense. What a sense of humor this brilliant man must have to have penned a book entitled "Linear Algebra for Everyone". Sir, I can't even subtract!
@DeLuini985 • 1 year ago
Thank God for Dr. Strang. I am understanding concepts that have eluded me for over a decade.
@archilzhvania6242 • 6 years ago
He makes everything look so clear.
@navs8603 • 6 years ago
Thank you MIT for letting us enjoy these treats. And Prof. Strang is just pure genius
@All_Kraft • 10 months ago
That was a great performance! Thank you, MIT.
@ispeakforthebeans • 5 years ago
"Poor misbegotten fourth subspace" -Gilbert Strang, 1999. Remember when Elisabet Sobeck decided to give GAIA feelings? These guys gave math feelings. And I love him for that. I didn't even know that was possible.
@dariopl8664 • 1 year ago
Min 18:50, if it's helpful for anybody: the dimension of the null space equals the number of basis vectors that span it. Just as the dimension of the column space (the rank) is the number of linearly independent columns, the dimension of the null space is the number of linearly independent special solutions, i.e. the number of basis vectors that span the null space.
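That count, one basis vector per free variable, so dim N(A) = n − r, can be checked directly with sympy on the lecture's 3×4 example:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3, 1],
               [1, 1, 2, 1],
               [1, 2, 3, 1]])
n, r = A.cols, A.rank()

basis = A.nullspace()          # sympy returns one special solution per free variable
assert len(basis) == n - r     # here: 4 - 2 = 2
for v in basis:                # each basis vector really solves A v = 0
    assert A * v == sp.zeros(A.rows, 1)
```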
@gavilanch • 15 years ago
So? This can mean a lot of things, and one of them is that they couldn't tape this class, so Strang had to repeat it in front of the cameras, and they didn't pay people to just sit there so that people like you would stop commenting on that fact. Great classes. I am not a native English speaker, but this is certainly awesome; I really appreciate it. Many thanks to MIT and Professor Strang!!
@chuckhei • 4 years ago
I really don't know what to say..... Satisfied? Grateful? OMG, I just love it!!!!
@Mike-mu3og • 5 years ago
45:26 Transforming an exclamation mark into an M. Brilliant!
@kaiding3322 • 2 years ago
I believe Prof. Strang deliberately made the mistake at the end of Lecture 9 in order to shift the focus from the column space to the row space. The transition was too smooth for this to be an accident. It is also a great show of humility that he didn't mind being perceived as making a mistake!
@ozzyfromspace • 4 years ago
Worth mentioning: just as row reduction of a matrix generates the most natural row-space basis without much effort, we can also generate the most natural basis of the column space of said matrix by doing row reduction on the transpose of the matrix. This is all so incredibly fascinating!
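That trick is easy to verify with sympy: the nonzero rows of rref(Aᵀ) are a basis for the row space of Aᵀ, which is exactly C(A). A sketch on the lecture's example matrix:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3, 1],
               [1, 1, 2, 1],
               [1, 2, 3, 1]])

# Nonzero rows of rref(A^T) form a basis for C(A)
RT, _ = A.T.rref()
col_basis = [RT[i, :].T for i in range(RT.rows)
             if RT[i, :] != sp.zeros(1, RT.cols)]

assert len(col_basis) == A.rank()            # right count: r vectors
M = sp.Matrix.hstack(*col_basis)
for c in A.columnspace():                    # every pivot column of A ...
    assert M.rank() == M.row_join(c).rank()  # ... lies in their span
```

The containment check plus the matching count shows the two bases span the same subspace.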
@shivamkasat6334 • 5 years ago
A mathematician with a great sense of humour, Mr. Strang!
@DerekWoolverton • 4 years ago
I was nodding my head, keeping up just swimmingly; it all made perfect sense. He wrapped up the diagram and it seemed like we were done. Then he stepped over to the far board, replaced vectors with matrices, and just turned everything upside down. Didn't see that coming.
@ucrclxl • 14 years ago
What fascinates me are some stats you can find below this video. Maybe it's a bug, but YouTube tells us that this video is most popular among: 1) men 45-54, 2) men 35-44, 3) men 25-34. Which I find really strange, because I'd have thought most viewers would be actual students. The popularity-by-region stat is interesting too.
@agarwaengrc • 8 years ago
Where exactly can you find these stats? When I click on statistics I just get a view-count graph
@williamss4277 • 2 months ago
Beginning from the natural numbers N, then the integers Z, then the real numbers, then the complex numbers, which are just 2-dimensional numbers, then vectors, which are n-dimensional numbers. A vector space is to vectors what the range of N, Z, or the reals is to those numbers; with vector spaces, a confident definition of vectors can be given. For calculation, linear algebra uses computers. The key is to find the algorithm by studying low-dimensional examples of vectors and matrices; with the algorithm, the calculation scales to high-dimensional vectors and matrices. A vector is obviously a much more expressive number than any preceding kind of number, so vectors, and linear algebra with them, have become very powerful mathematical tools in many applications.
@ozzyfromspace • 4 years ago
*Question:* What is the relationship between rank(A) and rank(A^T)? Does rank(A) = rank(A^T) in general? The professor seems to be hinting at this, but rref(A) only preserves the column space, so it doesn't seem so trivial to me. Any insight is highly appreciated. Edit: I found the answer. rank(A) = rank(A^T) by virtue of the fact that linear independence of the columns implies linear independence of the rows, even for non-square matrices. I proved this for myself this evening. The main idea of the proof (at least how I did it) is that if you have two linearly dependent rows, one above the other say, row reduction kills the lower one (reducing the number of possibly independent rows). Killing off the row (making it all zeros) also means that row can't have a pivot. Thus, we've reduced the number of potential pivot columns by one. That's the relationship in a nutshell. The math is only slightly more involved.
@ostrodmit • 2 years ago
rref(A) does not preserve the column space, only the null and row spaces. It does preserve dim(Row(A)), however, which suffices to prove that the row and column ranks are equal.
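The identity itself is easy to spot-check numerically over many random rectangular matrices (evidence, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(200):
    # random small rectangular integer matrices of assorted shapes
    m, n = rng.integers(1, 8, size=2)
    A = rng.integers(-3, 4, size=(m, n)).astype(float)
    # row rank equals column rank for every trial
    assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)
```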
@arteks2001 • 3 years ago
Correction of error from previous lecture 0:43
Introduction to the four fundamental subspaces (column space, null space, row space, left null space) 4:20
Basis and dimension of each fundamental subspace 11:44
Basis and dimension of the column space 12:50
Dimension of the row space (it is the rank) 14:41
Basis and dimension of the null space 17:05
Dimension of the left null space (m - rank) 19:41
Basis of the row space (nonzero rows in the rref) 21:08
Basis of the left null space 29:48
Review of the four fundamental subspaces 42:09
A new vector space of all 3 by 3 matrices 42:32
@karthik3685 • 3 years ago
There is a problem with Dr. Strang's lectures. The problem is, he makes it so intuitive that I'm literally nodding in agreement the entire lecture. I've now watched the lectures once, read the book chapters, and watched the lectures a second time. And while I have a good grasp of everything discussed so far, it all sort of blends together. I couldn't list the things I learned one by one for these 10 lectures. :D (well, I sort of can.)
@johnk8174 • 2 years ago
Ya gotta do the problems. That pulls it together in your head.
@MAGonzzManifesto • 11 years ago
Thank you, Dr. Strang and MIT. These videos are amazing and are keeping me afloat in my class.
@brogcooper251 • 3 years ago
He is not only a master lecturer, he is a master of writing on a chalkboard. I swear, it looks like he is using a paint pen.
@magdaamiridi7090 • 6 years ago
Hello! Does anybody know other lecturers like Dr. Strang, with such passion, in fields like convex optimization, detection and estimation, or probability theory?
@q44444q • 5 years ago
Look up lectures by Stephen Boyd. "Stanford Engineering Everywhere" is like Stanford's version of OCW and has some great courses in convex optimization: EE263 and EE364A. They aren't quite as good as Strang's lectures, but he's hard to beat!
@nonconsensualopinion • 4 years ago
John N. Tsitsiklis has great probability lectures from MIT OpenCourseWare here on YouTube. Highly recommended.
@fuahuahuatime5196 • 10 years ago
25:06 So performing row eliminations doesn't change the row space but changes the column space? So to get a basis for the column space, would you have to do column elimination on A? Or could you take the transpose, do row elimination, and use the row basis of A transpose as the column basis of A?
@readap427 • 8 years ago
+Pablo P That's what I was thinking as I watched that part of the video. It seems that approach would work. Before this lecture, it's the approach I probably would have used, but now that I see the tie-in to pseudo-Gauss-Jordan, I think I prefer pseudo-Gauss-Jordan.
@alsah-him1571 • 4 years ago
9:45 Professor Strang subtly integrates class consciousness into his lecture on the Four Fundamental Subspaces. Truly a genius.
@bokumo7063 • 3 years ago
Last hired, first fired?
@ozzyfromspace • 4 years ago
It's not that she found a numerical error, it was the power of her reasoning behind it. I'm shook; whoever that girl is, she's clearly brilliant.
@webstime1 • 3 years ago
He made that story up to drive home a point
@liverpooler1997 • 8 years ago
40:39 ... where are all the students?
@mitocw • 8 years ago
+liverpooler1997 Some lectures were re-recorded in an empty lecture room by Gil Strang, either because he was unhappy with a recording or because there were technical difficulties the first time they were recorded in class.
@liverpooler1997 • 8 years ago
+MIT OpenCourseWare It must have been scary if someone accidentally walked into the room and saw a professor giving a lecture to nobody!
@ProcyonicR • 8 years ago
+liverpooler1997 Lol! That would be weird. I feel kind of sad for the professor; this must have been so boring.
@bobwebster835 • 8 years ago
Actually, I'm wondering if there are camera operators in the room, or are they remote (or AI, or Prof. Strang was actually controlling them with his mind (0 .o))
@TonyKaku-g8n • 6 years ago
They were subtracted away into the nullspace
@durgeshmishra9449 • 1 year ago
@29:32 The prof said that the bases are the same, but that is not correct, right? The row spaces are the same, but with different bases for A and R?
@fanggladys9986 • 2 years ago
He is lecturing to an empty classroom if you look at time 40:53!! Even more wonders!
@nileshkumarJadav • 2 months ago
So A = m×n, where m = columns and n = rows? Is that right at 8:40?
@ramkrishna3256 • 4 years ago
For finding a basis for N(A), why can't we use an approach similar to finding the basis for the left nullspace? 1) A^T → rref, 2) E' × A^T = rref, 3) find the basis from E'
@MultiRNR • 10 months ago
Yes, I have the same question, and this way sounds more mechanical (programmable) than the earlier way
@georgipopov2754 • 3 years ago
Brilliant. This lecture connects the pieces of a complex puzzle.
@maximliankremsner633 • 5 years ago
Thank you so much for this lecture series. It helps a lot! A great professor with great and easy-to-understand explanations.
@onatgirit4798 • 3 years ago
If all YouTube content were deleted today, the most upsetting thing for me would probably be losing this series of lessons.
@aymensekhri2133 • 5 years ago
Thank you, Prof. Strang
@abdelaziz2788 • 3 years ago
40:50 is the best plot twist, awesome
@serenakillion7008 • 5 years ago
Thank you MIT and Professor Strang!
@davidmurphy563 • 1 year ago
3:30 Funny, I was looking at the column space and noticed that -1 * C1 + 2 * C2 = C3, completely missing the far more obvious fact that R1 = R2. Hey ho. Target fixation. It's what sank the Kaga and Akagi.
@ermomusic • 4 years ago
You could also argue that it isn't a basis because -1 times the first vector plus 2 times the second vector gives us the third vector... You really dropped the ball there, professor G. hahahaha, just kidding, this man is the best thing that ever happened to linear algebra right after Gauss
@ozzyfromspace • 4 years ago
At 14:00 that's *not* true in general. The number of basis vectors of C(A) is rank(A), *yes*, but said basis vectors are not necessarily the pivot columns of rref(A). The professor probably misspoke or something. Doing rref(A) means we're taking linear combinations of the rows of A, so there's no reason to believe the column space is preserved by the row-reduction operations. In general, C(A) is not equal to C(rref(A)). Let A^T stand for the transpose of A. Then the following is always true: *C(A) = C( ( rref(A^T) )^T )*. He's kind of implied this already in previous lectures without using this notation; I'm just connecting the dots like a game. Edit: A simple example is A = [1,2; 2,4]. rref(A) = [1,2; 0,0]. The column space of rref(A) is along the "x" axis, whereas the column space of A is along the line "y" = 2"x". It should be clear that aside from the zero vector, they have no other vectors in common. Further, rref(A) has one pivot column, so the dimension of C(A) should be 1. 2*[1;2] = [2;4], so the columns are linearly dependent and we really only have *1* vector to scale. Hence, dimension of C(A) = r = 1, as predicted earlier 😊 Hopefully this helps someone. Professor Strang is one of the best educators out there; I see why people admire MIT! This playlist is the perfect way to prepare for the vectorized implementations of machine learning (if you actually want to understand what you're doing, that is). Best wishes to you all. 👨🏽‍🏫
@encheng1136 • 8 years ago
There are no students sitting there, but the lecture is still so good.
@p.z.8355 • 6 months ago
Why is he such a good lecturer? My prof used to just read from the textbook.
@Afnimation • 11 years ago
I don't know if I made my point, but if you see the other lectures you will understand better... in fact, I just realized I made a grammar mistake at the end... next time I'll review before posting anything.
@SandeepSingh-hc3no • 2 years ago
It's like an enlightenment moment when he says, "She said, it's got two identical rows"
@Saket-op2xp • 1 year ago
26:15 Here, can we also take the first r rows of A as a basis, provided our elimination doesn't involve any row exchanges?
@shavuklia7731 • 7 years ago
Oh cool. I've never computed the left nullspace this way before. Initially, I thought of computing the nullspace of the transpose directly, but the method he provides - calculating E - is so easy once you've already done all the work of computing the other subspaces.
@scoringwolf • 2 years ago
35:55 The size of the identity matrix should be nxn so that it's conformable, shouldn't it?
@rohanshah1957 • 4 years ago
@9:33 How is the column space C(A) in R^m? It should be in R^n. Am I right?
@hash8910 • 4 years ago
In an m*n matrix, there are m components in every column. For example, any point in R3 has 3 components (i, j and k). So if every column has 3 components (which is the number of rows in the matrix), then we say the column space of the matrix is in R3. Replace 3 with m above.
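The shape bookkeeping in this reply is quick to see in numpy with the lecture's 3×4 matrix: there are n = 4 columns, but each column has m = 3 components, so C(A) sits inside R^m = R^3.

```python
import numpy as np

A = np.array([[1, 2, 3, 1],
              [1, 1, 2, 1],
              [1, 2, 3, 1]])
m, n = A.shape                 # m = 3 rows, n = 4 columns

columns = [A[:, j] for j in range(n)]
assert len(columns) == n                   # n column vectors ...
assert all(len(c) == m for c in columns)   # ... each with m components, so C(A) is in R^m
```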
@christophercrawford2883 Жыл бұрын
Nice lecture. Would like to have seen that N(A) and C(A^T) are independent (or even orthogonal!)
@markymark4438 жыл бұрын
lol funny I'm just first watching this today and it was posted exactly 7 years ago xD thanks for the video, really helpful! I was struggling with this concept for my current linear algebra 2 course since I took the non-specialist version of linear algebra 1 which didn't really test us on proofs at all. I think I have a better understanding of the four fundamental subspaces now! :)
@s4mwize6 жыл бұрын
Here's a paper by prof. Strang related to this lecture. web.mit.edu/18.06/www/Essays/newpaper_ver3.pdf
@rishiaman29 months ago
4:30 four sub spaces
@AlexanderList10 years ago
Class is crowded these days, no worries. I don't know why no one was attending back in 2005!
@omega73777 years ago
It was actually recorded in 2000, but uploaded to the web in Spring 2005. The dates in the video titles are dates of upload, not dates of recording.
@adelaidekhayon26316 years ago
40:41 why empty again?
@srch1004 years ago
Adelaide Khayon reshoots
@flyLeonardofly8 years ago
I think he breaks with the usual convention of an m x n matrix having m rows and n columns... which confused me, but great lecture anyway... edit: I was wrong, 9:12 ... I misunderstood what he said there; of course the column space has m components, because columns go m (rows-many) components down... thanks Robert Smits
@TheRsmits8 years ago
He keeps the convention. His example A = [[1 2 3 1], [1 1 2 1], [1 2 3 1]] has 3 rows and 4 columns. The 1st row, [1 2 3 1], has 4 components and is thus an element of R^4.
@Neme1128 years ago
He doesn't, but it is definitely a convention that confuses me, and I have to think twice about the coordinates every time. Usually in mathematics (and in programming) the dimensions are X (meaning horizontal offset) and then Y - here it's reversed. If somebody tells me coordinates [100, 1], I expect the point to be far to the right, not way down.
@kristiantorres10805 years ago
I was confused too! So I scrolled down in the comments looking for a lost soul like me xD Thank you for the explanation Robert!
@user-wm8xr4bz3b5 years ago
thanks Luis .. you cleared my doubt !! ^^
@erinsam78215 years ago
40:38 Wait...where is everyone? How many are there?
@ryankung32446 years ago
@ 24:30 Why are row operations able to preserve the row space but not the column space? Prof Strang mentioned that [1 1 1] can no longer be found in C(R), so the column space is not preserved. But I don't see any row of A appearing in R either. To put my question simply: how do I know whether a certain space is preserved after a certain operation?
@adityabahuguna68156 years ago
1. A certain space is preserved if the operations you do are linear combinations of vectors from that space. Here the row operations take linear combinations of the rows to produce the RREF. 2. To see how the rows of A can be produced from the rows of R, use combinations of the rows of R: the first row of A is row1 + 2 x row2 of R. Row operations only guarantee that you are taking linear combinations of the rows; nothing similar is guaranteed for the columns, so you cannot say that C(A) is the same after row operations.
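Both halves of this answer can be checked numerically. A small sketch with the lecture's A and its RREF R (the RREF is written out by hand here, so treat the numbers as an illustration):

```python
A = [[1, 2, 3, 1],
     [1, 1, 2, 1],
     [1, 2, 3, 1]]
# RREF of A, as computed in the lecture
R = [[1, 0, 1, 1],
     [0, 1, 1, 0],
     [0, 0, 0, 0]]

# Row space preserved: the first row of A is 1*row1 + 2*row2 of R.
combo = [r1 + 2 * r2 for r1, r2 in zip(R[0], R[1])]
assert combo == A[0]

# Column space NOT preserved: every column of R has third component 0,
# but the first column of A is (1, 1, 1), which does not.
assert all(col[2] == 0 for col in zip(*R))
assert [row[0] for row in A] == [1, 1, 1]
```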
@KatherineRogers6 years ago
Suppose you have 2 independent vectors that make up the row space; call them Q and R. Two independent vectors describe a plane. Suppose also that you have multiplied or divided Q by whatever number is needed to give it a leading 1. The next step is to subtract some multiple of Q from R to get a new second row with no component in the first column.

What does this literally mean when you draw Q and R? Draw them through the origin, then break R into a component parallel to Q and a component perpendicular to Q. If you subtract the right multiple of Q from R, you're left with that perpendicular component. Notice that Q and the vector R - kQ are still in the plane initially made by Q and R. Swapping the order in which you write the vectors does not change the space they span, and taking a multiple of a basis vector does not change the space spanned - because you can already take any multiple and any linear combination of basis vectors to describe the space.

On the other hand, what do row operations do to the columns? Write the first column as the transpose of (a, b, c, d). Subtracting k times row 1 from row 2 changes the second component of every column whose first component is not zero, so the column vectors themselves change. If you wanted to preserve the column space, take A transpose and do row operations on that. This is sometimes done, for example, to determine whether the columns of A are linearly independent, or to find a basis for the column space of A with much smaller numbers than those in the original matrix.

Can you define a set of operations where you take linear combinations of columns and subtract them from other columns of A? You could, and you would know that this preserves the column space but not the row space. In working with determinants, both column operations and row operations are allowed; sometimes this reduces a matrix with huge numbers to one whose determinant you can compute in your head. You have not yet studied determinants; my point is not that column operations are never done - it's that if you mix column and row operations, you must know what changes and whether that is what you want. For now, do row operations on A, or row operations on A transpose; once you are fluent with those, the book will lead you toward when column operations are appropriate and what they mean.

Question: could you take a vector, arbitrarily set one of its components to zero, and claim the vector is preserved? Isn't that what happens to column 1 when you subtract a multiple of row 1 from row 2? From the column point of view, row operations set components to zero while leaving the other components of the column intact.
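Katherine's tip - row-reduce A transpose to get a basis for C(A) - can be sketched as follows. The RREF of A^T is worked out by hand here, so the basis vectors are an illustration, not the output of any library routine:

```python
A = [[1, 2, 3, 1],
     [1, 1, 2, 1],
     [1, 2, 3, 1]]

# Nonzero rows of the RREF of A^T (computed by hand): a basis for C(A).
basis = [(1, 0, 1), (0, 1, 0)]

for col in zip(*A):
    # The basis has the identity pattern in components 1 and 2,
    # so the coefficients can be read straight off the column.
    a, b = col[0], col[1]
    combo = tuple(a * u + b * v for u, v in zip(*basis))
    assert combo == col        # every column of A is a combination of the basis
```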
@slowpoke77859 months ago
Prof Strang said that C(A) != C(R). I'm wondering if this is true, because the basis for C(A) is the pivot columns, which we got from row operations...
@Anan_Kisho2 months ago
The pivot columns obtained were [1,0,0] and [0,1,0], which cannot produce [1,1,1] or [2,1,2] (a simple check: any combination of them has 0 as its third component).
@xiemins4 years ago
May I say that the vectors in R span the same space as the vectors in A after row operations, because you can do the reverse ROW operations and reconstruct the rows of A from R? It can't be true for the column space, because after row operations you most likely can't reverse and reconstruct the original column vectors from R through COLUMN combinations.
@ghsjgsjg53chjdkhjydhdkhfmh744 years ago
😖😖 He's the best professor I know and yet my brain doesn't get it at once😂
@nonconsensualopinion3 years ago
That's fine. All at once doesn't matter. What matters is "forever and always". Do what you must to understand it deeply so that you will know it the rest of your life. It may take watching the video many times and will probably require writing down some matrices and doing them yourself. Math is a subject which is hard to learn by observation; it really depends on participation. Remember, the students in the audience were MIT students, so they had proven they were quite talented. Those students saw what you saw in the video. Those students had the ability to talk to this professor after class. Those students had homework practice. Still, when the quiz was administered, I guarantee the average score was below 100%. Even after all that help, some students didn't quite get it all. They didn't get it "all at once". How can you expect yourself to do better than that, especially if you demand it happen "all at once"?
@JohnPaul-di3ph3 years ago
My mind got blown when I realized you could get the basis for the left null space from row transformation. I mean, it seems completely obvious after he points it out but I never thought much of it until then.
@marcuschiu86154 years ago
this is mind-blowing i don't fully understand it but i know it's mind-blowing
@timelordyunt76965 years ago
Take another look at the list...the first time I feel glad at so many left unwatched.
@sauravparajuli49885 years ago
The twist at the end was better than that of GOT's.
@shadownik2327 a year ago
If 3 3 7 and 3 3 8 are both dependent, does that mean dependent vectors need not be in the same plane? But that doesn't make sense either - how does this even work, and what does it mean? @MIT
@MultiRNR10 months ago
I think in this case both 3 3 7 and 3 3 8 simply lie in the plane spanned by the other vectors: 2C2 - 1C1 leads to 3 3 8.
@habenbelai74204 years ago
36:24 ...SPORTS. IT'S IN THE GAME!
@xoppa096 years ago
35:40 The identity should be n x n
@adityabahuguna68156 years ago
Nope. He is appending the matrix, not multiplying by it - appended beside A's m rows, the identity is m x m. :)
@saadsaad77869 a year ago
How is the intersection of the upper triangular matrices and the symmetric matrices equal to the diagonal matrices?
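One way to see it: an upper triangular matrix has zeros below the diagonal, and symmetry copies those zeros above the diagonal, so only the diagonal can be nonzero. A minimal sketch with hypothetical 3 x 3 examples:

```python
def is_upper_triangular(M):
    # zeros strictly below the diagonal (j < i)
    return all(M[i][j] == 0 for i in range(len(M)) for j in range(i))

def is_symmetric(M):
    return all(M[i][j] == M[j][i] for i in range(len(M)) for j in range(len(M)))

def is_diagonal(M):
    return all(M[i][j] == 0 for i in range(len(M)) for j in range(len(M)) if i != j)

U = [[1, 2, 0], [0, 3, 4], [0, 0, 5]]   # upper triangular, but not symmetric
D = [[1, 0, 0], [0, 3, 0], [0, 0, 5]]   # both upper triangular AND symmetric
assert is_upper_triangular(U) and not is_symmetric(U)
assert is_upper_triangular(D) and is_symmetric(D) and is_diagonal(D)
```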