This man single-handedly saved my university algebra course. My teacher was just reading notes; he's actually explaining in a very clear manner.
@BaranwalAYUSHАй бұрын
This man has really reignited my passion for mathematics. Thank You Professor Strang for such amazing lectures.
@charmenk11 жыл бұрын
Good professor with the good old blackboard-and-white-chalk teaching method. This is way better than all the fancy PowerPoints that many teachers use nowadays.
@98885654074 жыл бұрын
hey did ya benefit from these lectures
@yusufkavcakar87443 жыл бұрын
@@9888565407 yeah I did
@jiayuanwang14982 жыл бұрын
I can't agree more!!!
@kayfouroneseven12 жыл бұрын
this is a bazillion times more straightforward and clear than the lectures I pay for at my university. :( I appreciate this being online
@bsmichael9570 Жыл бұрын
He tells it like a story. It’s like he’s taking us all on a journey. You can’t wait to see the next episode.
@jollysan32289 жыл бұрын
I agree. Just one small correction at 32:30: it should have been S * LAMBDA^100 * c instead of LAMBDA^100 * S * c.
@Slogan64185 жыл бұрын
thank you
@ozzyfromspace4 жыл бұрын
The sad thing was, a few moments later he was struggling to explain things because, even though he hadn't pinned down the error, he somehow knew that something wasn't quite right. But he obviously had the core idea nailed.
@alexandresoaresdasilva19664 жыл бұрын
thank you so much, was about to post asking about this.
@吴瀚宇4 жыл бұрын
I was stuck on this for like 10 minutes, until I saw the comments here...
@maitreyverma29964 жыл бұрын
Perfect. I was about to write the same.
@Tutkumsdream11 жыл бұрын
Thanks to him, I passed linear algebra! I watched his videos for 4 days before the final exam and got a 74 on the final. If I hadn't been able to watch Dr. Strang's lectures, I would probably have failed...
@snoefnone9647 Жыл бұрын
For some reason I thought you were saying Dr. Strange's lectures!
@apocalypse20048 жыл бұрын
I think Strang leaves out a key point in the difference equation example, which is that the n unique eigenvectors form a basis for R^n, which is why u0 can be expressed as a linear combination of the eigenvectors.
@alessapiolin7 жыл бұрын
thanks!
@wontbenice7 жыл бұрын
I was totally confused until you chimed in. Thx!
@seanmcqueen84986 жыл бұрын
Thank you for this comment!
@arsenron6 жыл бұрын
in my opinion it is so obvious that it is not worth dwelling on
@dexterod6 жыл бұрын
I think Strang assumed that A has n independent eigenvectors since most matrices do not have repeated eigenvalues.
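The point raised in this thread, that n independent eigenvectors let you expand any u0 as c1x1 + ... + cnxn, can be made concrete with a tiny numeric sketch. The matrix and vectors below are made up for illustration (not from the lecture): A = [[2, 1], [0, 3]] has distinct eigenvalues 2 and 3, so its eigenvectors form a basis of R^2.

```python
# Eigenvectors of the made-up A = [[2, 1], [0, 3]]:
x1 = (1.0, 0.0)   # eigenvector for eigenvalue 2
x2 = (1.0, 1.0)   # eigenvector for eigenvalue 3
u0 = (5.0, 2.0)   # an arbitrary starting vector

# Solve S c = u0 with the 2x2 inverse formula, where S = [x1 | x2].
a, b = x1[0], x2[0]
c, d = x1[1], x2[1]
det = a * d - b * c
c1 = ( d * u0[0] - b * u0[1]) / det
c2 = (-c * u0[0] + a * u0[1]) / det

# The expansion c1*x1 + c2*x2 reproduces u0 exactly.
assert c1 * x1[0] + c2 * x2[0] == u0[0]
assert c1 * x1[1] + c2 * x2[1] == u0[1]
print(c1, c2)  # 3.0 2.0
```

Because the eigenvalues are distinct, S is invertible and the coefficients c are unique.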
@albertacristie9914 жыл бұрын
This is magnificent!! I have no words to express how thankful I am for the exposure to this video.
@eye2eyeerigavo7775 жыл бұрын
Math surprises you every time... 🤔 Never thought that the connections between the rate of growth in system dynamics, the Fibonacci series, and diagonalization via INDEPENDENT eigenvectors would finally boil down to the GOLDEN RATIO OF EIGENVALUES at the END! 😳
@BirnieMac110 ай бұрын
You know you're in for some shenanigans when they pull out the "little trick". Professor Gilbert is an incredible teacher; I struggled with eigenvalues and eigenvectors in a previous course, and this series of lectures has really helped me understand them better. Love your work, Professor Gilbert.
@christoskettenis880 Жыл бұрын
This professor's explanations of all those abstract theorems and blind methodologies are simply brilliant.
@ranabhatashim8 күн бұрын
There is no mistake at 32:00. The thing about c1x1 is that c1 is a number while x1 is a vector; keep this in mind. We have u0 = c1x1 + ... + cnxn. Multiply both sides by A: Au0 = Ac1x1 + ... + Acnxn. Bring c1 to the front since it's a number: c1Ax1 + c2Ax2 + ...; now, since x1...xn are eigenvectors, Ax1 = lambda1 x1, so Au0 = c1*lambda1*x1 + ... We can now factor out the lambdas as an eigenvalue matrix, so Au0 = lambdamatrix*(c1x1 + c2x2 + ... + cnxn). Remember that x is a vector and c is a number? Therefore we can write c1x1 + ... + cnxn as Xc, where X is the eigenvector matrix, which is S, and c is the vector (c1, c2, ...). Therefore Au0 = lambda * S * c.
@keremdirlikКүн бұрын
Nope, be careful. It's S * lambda * c.
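For anyone weighing the two orders debated above, a small plain-Python check with the lecture's Fibonacci matrix A = [[1, 1], [1, 0]] settles it numerically: S*Lambda*c reproduces A*u0, while Lambda*S*c does not. (Eigenvalues and eigenvectors here are the ones derived in the lecture, computed in floating point.)

```python
import math

l1 = (1 + math.sqrt(5)) / 2          # eigenvalues of the Fibonacci matrix
l2 = (1 - math.sqrt(5)) / 2
S = [[l1, l2], [1.0, 1.0]]           # eigenvector columns [lambda, 1]
Lam = [[l1, 0.0], [0.0, l2]]         # eigenvalue matrix

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(X, v):
    return [sum(X[i][k] * v[k] for k in range(2)) for i in range(2)]

A = [[1.0, 1.0], [1.0, 0.0]]
u0 = [1.0, 0.0]

# c solves S c = u0 (2x2 inverse formula)
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
c = [( S[1][1] * u0[0] - S[0][1] * u0[1]) / det,
     (-S[1][0] * u0[0] + S[0][0] * u0[1]) / det]

good = matvec(matmul(S, Lam), c)     # S * Lambda * c
bad  = matvec(matmul(Lam, S), c)     # Lambda * S * c (the order on the board)
Au0  = matvec(A, u0)

assert all(abs(g - t) < 1e-9 for g, t in zip(good, Au0))  # matches A u0
assert any(abs(b - t) > 1e-9 for b, t in zip(bad, Au0))   # does not match
```

The same cancellation works for any power: A^k u0 = S Lambda^k c.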
@georgesadler78303 жыл бұрын
From this latest lecture , I am learning more about eigenvalues and eigenvectors in relation to diagonalization of a matrix. DR. Strang continues to increase my knowledge of linear algebra with these amazing lectures.
@syedsheheryarbokhari2780 Жыл бұрын
There is a small writing mistake at 32:30 by Prof. Strang. He writes (eigenvalue matrix)^100 multiplying (eigenvector matrix) multiplying the c's (constants). It ought to be (eigenvector matrix) multiplying (eigenvalue matrix)^100 multiplying the c's. At the end of the lecture Professor Strang does state the correct formula, but it is easy to miss.
@clutterbrainx Жыл бұрын
Yeah I was confused for a very long time there
@go_all_in_7779 ай бұрын
At 28:07, uk = (A^k)u0 can also be written as uk = S*(Lambda^k)*(S^-1)*u0. Also, we can write u0 = S*c, as explained at 30:00. Therefore, uk = S*(Lambda^k)*(S^-1)*S*c = S*(Lambda^k)*c.
@jeanpierre-st7rl9 ай бұрын
Hi, at 29:46: u0 = c1x1 + c2x2 + c3x3... Is u0 a vector? If so, how can we split u0 into a combination of eigenvectors? What is ci? If you have any info, please let me know. Thanks.
@SimmySimmy5 жыл бұрын
Through a single matrix transformation, the whole subspace expands or shrinks at the rate of the eigenvalues, in the directions of the eigenvectors. Suppose you can decompose a vector in this subspace into a linear combination of the eigenvectors; then after many repetitions of the same transformation, a random vector will ultimately land near the eigenvector with the largest eigenvalue.
@MsAlarman2 жыл бұрын
Mindbending
@dwijdixit78102 жыл бұрын
33:40 Correction: the eigenvalue matrix should multiply S from the right. It is done that way in the book; it probably just slipped past Prof. Strang in the flow.
@eroicawu14 жыл бұрын
It's getting more and more interesting when differential equations are involved!
@sathviktummala54804 жыл бұрын
44:00 well that's an outstanding move
@neoneo15033 жыл бұрын
A*S = S*Lambda (using the linear-combination view of matrix multiplication, the Ax1 = b1 column picture). That is brilliant and clear! Thanks!
@neoneo15033 жыл бұрын
Also expressing the state u_0 to u_k as linear combination of eigenvectors (at 30:00 and 50:00)
@Wabbelpaddel3 жыл бұрын
Well, if you interpret it that way, it's just a basis transformation from the standard basis (up to isomorphism; then just additionally multiply by the transforms of the alternate basis) onto the eigenvector basis. Provided, of course, that either the characteristic polynomial has distinct roots, or that geometric and algebraic multiplicities match (because then the eigenspaces span the whole vector space up to isomorphism; if they didn't, you'd only have a generating system for a subspace). For anyone who wanted one more run-through.
@neoneo15033 жыл бұрын
@@Wabbelpaddel Thanks! =)
@ozzyfromspace4 жыл бұрын
For the curious: F_100 = (a^99 - b^99) * b/sqrt(5) + a^99 , where a = (1 + sqrt(5))/2 and b = (1 - sqrt(5))/2 are the two eigenvalues of our system of difference equations. Numerically, F_100 = ~3.542248482 * 10^20 ... it's a very large number that grows like ~1.618^k 😲 Overall, great lecture Professor Strang! Thank you for posting, MIT OCW ☺️
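The magnitude quoted above is easy to double-check by iterating the recurrence directly (plain Python with exact integers, assuming the usual indexing F_0 = 0, F_1 = 1):

```python
# One pass of the recurrence F_{k+2} = F_{k+1} + F_k, starting from (F_0, F_1).
a, b = 0, 1
for _ in range(100):
    a, b = b, a + b          # shift the window one step along the sequence
print(a)                     # F_100 = 354224848179261915075, about 3.54e20
```

Python's arbitrary-precision integers make this exact, so it confirms the ~3.542248 * 10^20 figure.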
@starriet2 жыл бұрын
Notes for future ref.) (7:16) there are _some_ matrices that do _NOT_ have n independent eigenvectors, but _most_ of the matrices we deal with do. (17:14) If all eigenvalues are different, there _must_ be n independent eigenvectors. But if some eigenvalues repeat, it's possible there are _no_ n independent eigenvectors. (The identity matrix is an example with repeated eigenvalues but still n independent eigenvectors.) * Also, the positions of Lambda and S should be swapped (32:36). You'll see why just by thinking through the matrix multiplication, and it can also be seen from A^100 = S*Lambda^100*S^-1 and u_0 = S*c. Thus it should be S*Lambda^100*c, and this can also be thought of as a 'transformation' between two different bases, one of which is the set of eigenvectors of A. * Also (43:34), how could Prof. Strang calculate that?? That number, _1.618033988749894..._, is called the 'golden ratio'. * (8:15) Note that A and Lambda are 'similar'. (S and S^-1 transform the coordinates: both A and Lambda can be thought of as the same "transformation" expressed in different bases, and S (or S^-1) translates coordinates between those two worlds.)
@shadowByte992 жыл бұрын
I spent a few hours on the second point before figuring it out :(
@bastudil9411 жыл бұрын
There is a MISTAKE in the formula at minute 32:31. It must be S(Λ^100)c in order to work as it is supposed to. However, it is an excellent lecture; thanks a lot. :)
@YaguangLi10 жыл бұрын
Yes, I am also confused by this mistake.
@sammao84789 жыл бұрын
Yaguang Li agree with you.
@AdrianVrabie8 жыл бұрын
+Bryan Astudillo Carpio why not S(Λ^100)S^{-1}c ???
@apocalypse20048 жыл бұрын
u0 is Sc, so S inverse cancels out with the S
@daiz91097 жыл бұрын
You're right... it confused me too...
@cuinuc15 жыл бұрын
I love professor Strang's great lectures. Just one small correction at 32:30: It should have been S * LAMBDA^100 * c instead of LAMBDA^100 * S * c.
@starriet2 жыл бұрын
Nice catch!
@jeffery7772 жыл бұрын
haha I think so
@eyuptarkengin81611 ай бұрын
yeah, I thought of the same thing and scrolled down the comments looking for confirmation. Thanks mate :D
@ranabhatashim8 күн бұрын
There is no mistake. The thing about c1x1 is that c1 is a number while x1 is a vector; keep this in mind. We have u0 = c1x1 + ... + cnxn. Multiply both sides by A: Au0 = Ac1x1 + ... + Acnxn. Bring c1 to the front since it's a number: c1Ax1 + c2Ax2 + ...; now, since x1...xn are eigenvectors, Ax1 = lambda1 x1, so Au0 = c1*lambda1*x1 + ... We can now factor out the lambdas as an eigenvalue matrix, so Au0 = lambdamatrix*(c1x1 + c2x2 + ... + cnxn). Remember that x is a vector and c is a number? Therefore we can write c1x1 + ... + cnxn as Xc, where X is the eigenvector matrix, which is S, and c is the vector (c1, c2, ...). Therefore Au0 = lambda * S * c.
@Zumerjud10 жыл бұрын
This is so beautiful!
@ozzyfromspace4 жыл бұрын
Did we ever prove that if the eigenvalues are distinct, the eigenvectors are linearly independent? I ask because at ~32:00, writing u_0 = c1*x1 + c2*x2 + ... + cn*xn requires the eigenvectors to form a basis for the n-dimensional space (i.e., to be the columns of an invertible matrix). It feels right, but I have no solid background for how to think about it.
@roshinis9986 Жыл бұрын
The idea is easy in 2D. If you have two distinct eigenvalues and their corresponding eigenvectors, you don't just have one eigenvector per eigenvalue: the whole span of that vector (its multiples, forming a line) consists of eigenvectors for that eigenvalue. If the original eigenvectors were dependent, they would lie on the same line, making it impossible for them to be scaled by two distinct eigenvalues simultaneously. I haven't yet been able to extend this intuition to 3 or higher dimensions, though, since there dependence need not mean lying on the same line.
@jeanpierre-st7rl9 ай бұрын
@@roshinis9986 Hi, at 29:46: u0 = c1x1 + c2x2 + c3x3... Is u0 a vector? If so, how can we split u0 into a combination of eigenvectors? What is ci? If you have any info, please let me know. Thanks.
@kanikabagree10844 жыл бұрын
This teacher made me fall in love with linear algebra. Thank you ❤️
@maoqiutong Жыл бұрын
32:41 There is a slight error here. The result Λ^100 * S * C may be wrong. I think it should be S * Λ^100 * C.
@meetghelani5222 Жыл бұрын
Thank you for existing MITOCW and Prof. Gilbert Strang.
@coreconceptclasses74944 жыл бұрын
I got 70 out of 75 in my final linear algebra exam thanks MIT...
@muyuanliu31753 ай бұрын
32:42, should be S lambda^100 c, great lecture, 3rd time I learn this
@Huayuan-p4z Жыл бұрын
I learned about the Fibonacci sequence in high school, and it is so good to get a new perspective on this magical sequence. I think the significance of learning lies in the collection of new perspectives. 😀
@nguyenbaodung16033 жыл бұрын
I read something on SVD without even knowing about eigenvalues and eigenvectors, then watched a YouTube video explaining that V is actually the eigenvector matrix of A^T A. It was extremely insane when I got to this video, oh my goodness. Now, even without having watched your SVD lecture, I can tell the precise concept of it. Oh my goodness, math is so perfect!!
@rolandheinze71825 жыл бұрын
A hard lecture to get through personally, but it does illustrate some of the cool machinery for applying eigenvectors.
@florianwicher6 жыл бұрын
Really happy this is online! Thank you Professor :)
@dalisabe624 жыл бұрын
The golden ratio arose from the Fibonacci sequence and has nothing to do with eigenvectors or eigenvalues. The beauty of using the eigenvectors and eigenvalues of a matrix, though, is limiting the effect of the transformation to a change in magnitude only. This lets dynamic systems such as population growth, a function of several variables encoded in a matrix, be computed without worrying about the change of direction or rotation typically associated with a matrix transformation. Since eigenvectors and eigenvalues change the magnitude of the parameter vector only, the idea of employing the eigendecomposition is quite genius. The same technique can be used for any dynamic system that can be modeled as a matrix transformation, provided it is one that produces a change of magnitude only.
@Arycke Жыл бұрын
Hence the title of his *example*: "Fibonacci Example." Nowhere was it stated explicitly that the golden ratio didn't arise from the Fibonacci sequence, so I don't see where you got that from. The example has a lot to do with eigenvalues and eigenvectors by design, and it uses a simple recurrence relation to show a use case. The Fibonacci sequence isn't unique in this respect anyway.
@ccamii__ Жыл бұрын
Absolutely amazing! This lecture really helped me to understand better the ideas about Linear Algebra I've already had.
@RolfBazuin11 жыл бұрын
Who would have guessed, when this guy explains it, it almost sounds easy! You, dear dr. Strang, are a master at what you do...
@cecilimiao14 жыл бұрын
@cuinuc I think they are actually the same, because LAMBDA is a diagonal matrix, you can have a try.
@aattoommmmable14 жыл бұрын
the lecture and the teacher of my life!
@serden88044 жыл бұрын
bro, are you still alive?
@Afnimation11 жыл бұрын
Well, I was impressed at the beginning, but when he stated the second eigenvalue I realized it is just the golden ratio... That doesn't diminish him; he's great!
@mospehraict13 жыл бұрын
@PhilOrzechowski he does it to make a first-order system of difference equations out of the second-order one
@uzferry5524 Жыл бұрын
bruh the fibonacci example just blew my mind. crazy how linear algebra just works like that!!
@mike-yj5mm3 жыл бұрын
I don't understand why, at 11:25, A squared can be written the way it is on the blackboard. I think A^2 should be (S Lambda S^-1)^T (S Lambda S^-1); the result differs from the one on the blackboard. Could someone explain this?
@mike-yj5mm3 жыл бұрын
Okay, I figured it out. S is an orthogonal matrix under the n-independent-eigenvectors assumption, and its inverse equals its transpose.
@APaleDot2 жыл бұрын
@@mike-yj5mm No, it doesn't require S to be an orthogonal matrix. n independent eigenvectors ≠ n orthogonal eigenvectors of unit length, which would be required to make S an orthogonal matrix. At this point in the lecture we've already proven that A = S ∧ S^-1 and therefore it follows immediately that A^2 = AA = (S ∧ S^-1)(S ∧ S^-1). All the matrices are square, so there is no conflict in their dimensions.
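The telescoping cancellation described above, S^-1 S = I, is all that A^2 = S Λ^2 S^-1 needs, and it works even when the eigenvectors are not orthogonal. A plain-Python sketch with a made-up matrix whose eigenvectors (1,0) and (1,1) are clearly not orthogonal:

```python
A = [[2.0, 1.0], [0.0, 3.0]]          # made-up example, eigenvalues 2 and 3
S = [[1.0, 1.0], [0.0, 1.0]]          # eigenvector columns, NOT orthogonal
Lam2 = [[4.0, 0.0], [0.0, 9.0]]       # Lambda^2
Sinv = [[1.0, -1.0], [0.0, 1.0]]      # inverse of S (2x2 formula)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = matmul(A, A)                     # direct square
recon = matmul(matmul(S, Lam2), Sinv) # S Lambda^2 S^-1
assert A2 == recon                    # identical, no orthogonality needed
print(A2)
```

No transpose appears anywhere; only the inverse of S is used, which exists whenever the n eigenvectors are independent.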
@zyctc000 Жыл бұрын
If anyone ever asks you why Fibonacci and the golden ratio phi are connected, point them to this video. Thank you Dr. Strang.
@shadownik232711 ай бұрын
Now I get it. It's like breaking the thing (vector or matrix or system, really) we want to transform into little parts, and then transforming them individually, because that's easier: each part gets transformed along a fixed direction. Then we add up all those pieces. Eigenvectors tell us how to make the pieces, and eigenvalues tell us how each piece is scaled by the given matrix or system. Wow, thanks! Something clicked in my mind and became very simple. Basically this is finding the easiest way to transform. Thanks to @MIT and Professor Strang for making this available online for free.
@Mohamed1992able13 жыл бұрын
a big thanks to this prof for his efforts to give us courses on linear algebra
@alexspiers62299 ай бұрын
This is one of the best in the series
@jojowasamanwho Жыл бұрын
19:21 I would sure like to see the proof that if there are no repeated eigenvalues, then there are certain to be n linearly independent eigenvectors
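For reference, the standard textbook argument (not spelled out in the lecture) is a short minimal-dependence proof; sketched here in LaTeX:

```latex
% Claim: eigenvectors $x_1,\dots,x_n$ belonging to distinct eigenvalues
% $\lambda_1,\dots,\lambda_n$ are linearly independent.
% Proof sketch: suppose not, and take a dependence of minimal length $k$,
\begin{aligned}
c_1 x_1 + \cdots + c_k x_k &= 0, \qquad \text{all } c_i \neq 0.\\
\text{Apply } A:\qquad c_1 \lambda_1 x_1 + \cdots + c_k \lambda_k x_k &= 0.\\
\text{Subtract } \lambda_k \text{ times the first line:}\qquad
c_1(\lambda_1 - \lambda_k)\, x_1 + \cdots + c_{k-1}(\lambda_{k-1} - \lambda_k)\, x_{k-1} &= 0.
\end{aligned}
% The eigenvalues are distinct, so every coefficient
% $c_i(\lambda_i - \lambda_k)$ is nonzero: a shorter dependence,
% contradicting minimality. Hence no dependence exists.
```

The same subtraction trick, applied repeatedly, is how most linear algebra texts prove the statement at 19:21.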
@eren96lmn8 жыл бұрын
43:36 that moment when your professor's computational abilities go far beyond standard human capabilities
@BalerionFyre8 жыл бұрын
Yeah wtf? How did he do that in his head?? lol
@BalerionFyre8 жыл бұрын
Wait a minute! He didn't do anything special. 1.618... is the golden ratio! He just knew the first 4 digits. Damn that's a little anticlimactic. Bummer.
@AdrianVrabie8 жыл бұрын
+Stephen Lovejoy Damn! :D Wow! AWESOME! I have no words! Nice spot! I actually checked it in Octave and I was amazed the prof could do it in his head. But I guess he knew the Fibonacci is related to the golden ratio.
@IJOHN845 жыл бұрын
All students should know the solution to that golden quadratic by heart.
@ozzyfromspace4 жыл бұрын
Fun fact since we're all talking about the golden ratio. The Fibonacci sequence isn't that special. Any sequence F_(k+2) = F_(k+1) + F_k for any seeds F_0 = a and F_1 = b != -a generate a sequence that grows at the rate (1+sqrt(5))/2 .. your golden ratio. Another fun way to check this: take the limit of the ratio of numbers in your arbitrary sequence with your preferred software :) edit: that's a great excuse to write a bit of code lol
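The claim above about arbitrary seeds is easy to try; a plain-Python sketch with hypothetical seeds 3 and 7 (any seeds not aligned with the decaying eigenvector behave the same way):

```python
import math

# Iterate F_{k+2} = F_{k+1} + F_k from made-up seeds F_0 = 3, F_1 = 7.
a, b = 3, 7
for _ in range(60):
    a, b = b, a + b

# The ratio of consecutive terms approaches the golden ratio, because the
# lambda_1^k component dominates the |lambda_2| < 1 component.
ratio = b / a
assert abs(ratio - (1 + math.sqrt(5)) / 2) < 1e-9
print(ratio)
```

The error shrinks like |lambda_2 / lambda_1|^k, so 60 steps is far more than enough for double precision.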
@gomasaanjanna28973 жыл бұрын
I am from India. I love your teaching.
@abdulghanialmasri55502 жыл бұрын
The best math teacher ever.
@LAnonHubbard13 жыл бұрын
I've only just learnt about eigenvalues and eigenvectors from KhanAcademy and Strang's Lecture 21 so a lot of this went whoooosh over my head, but managed to find the first 20 minutes useful. Hope to come back to this when I've looked at differential equations (which AFAIK are very daunting), etc and understand more of it.
@rolandheinze71825 жыл бұрын
Don't think you need diff EQ at all to understand the algebra. Maybe the applications
@jasonhe69475 жыл бұрын
absolutely a brilliant example of how to apply eigenvalues to a real-world problem
@gianlucacococcia23844 жыл бұрын
Can I ask you why A times x1 is lambda times x1?
@gianlucacococcia23844 жыл бұрын
Zhixun He
@wendywang423212 жыл бұрын
Something is wrong in this lecture at 32:39: A^{100}u_0 = S M^100 c, where I use M for the eigenvalue diagonal matrix. The professor wrote A^{100}u_0 = M^100 S c, which is not correct.
@PaulHobbs2313 жыл бұрын
@lolololort 1/2(1 + sqrt(5)) is also the golden ratio! Math is amazing =] I'm sure the professor knew the answer and didn't calculate it in his head on the spot.
@kunleolutomilayo40186 жыл бұрын
Thank you, Prof. Thank you, MIT.
@ItsKhabib23 күн бұрын
true masterpiece!
@suzukikenta10798 жыл бұрын
Could someone explain why the vector (lambda, 1) is a solution in the nullspace at 48:43? The first component, (1-lambda)*lambda + 1, is not 0, and is different from lambda^2 - lambda - 1.
@antoniolewis10168 жыл бұрын
Not sure what you mean, but the professor was saying the vector [lambda, 1]^T is in the nullspace of the matrix A - lambda*I, not the nullspace of A. And since this vector is in that nullspace, it must be an eigenvector of A attached to the eigenvalue lambda.
@jasarinvorawathanabuncha66208 жыл бұрын
(1-lambda)*lambda + 1 is indeed 0, because it equals -(lambda^2 - lambda - 1), and the determinant next to it gives lambda^2 - lambda - 1 = 0.
@roronoa_d_law10757 жыл бұрын
that makes lambda^2 - lambda + 1 and not -1 I'm confused
@roronoa_d_law10757 жыл бұрын
nevermind
@rolandheinze71825 жыл бұрын
Do the matrix multiplication and see for yourself: it evaluates to 0 for the eigenvalues. Also, these eigenvectors give you the same equation as the characteristic equation (flipped to the other side of 0 = ...), so it must be a solution.
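The arithmetic behind this thread can be spelled out in a few lines of plain Python: multiplying (A - lambda*I) by [lambda, 1] for the lecture's Fibonacci matrix A = [[1, 1], [1, 0]] gives zero precisely because lambda^2 - lambda - 1 = 0.

```python
import math

# The two eigenvalues from the characteristic equation lambda^2 - lambda - 1 = 0.
for lam in [(1 + math.sqrt(5)) / 2, (1 - math.sqrt(5)) / 2]:
    # (A - lam*I) has rows [1 - lam, 1] and [1, -lam]; apply them to [lam, 1]:
    r0 = (1 - lam) * lam + 1      # = -(lam^2 - lam - 1) = 0
    r1 = 1 * lam + (-lam) * 1     # = 0 identically
    assert abs(r0) < 1e-12 and abs(r1) < 1e-12
```

So [lambda, 1] really is in the nullspace of A - lambda*I, i.e. an eigenvector, up to floating-point rounding.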
@stumbling8 жыл бұрын
7:33 Surprise horn?
@mayurkulkarni7557 жыл бұрын
wtf was that xD
@canmumcu98046 жыл бұрын
probably grinding of a chair xd
@dexterod8 жыл бұрын
I'd say if you play this video at speed 1.5, it's even more awesome!
@abdulbasithashraf54806 жыл бұрын
Trueee
@ozzyfromspace4 жыл бұрын
1x all the way. I savor the learning ☺️
@pelemanov13 жыл бұрын
@LAnonHubbard You don't really need to know about differential equations to understand this lecture. Just watch lessons 1 to 20 as well ;-). Takes you only 15h :-D.
@eugenek95111 ай бұрын
He is my linear algebra super hero!🙂
@benzhang72614 жыл бұрын
Master Yoda passed on what he has learnt by fibonacci and 1.618.
@dennisyangji15 жыл бұрын
A great lecture showing us the wonderful secret behind linear algebra
@tomodren12 жыл бұрын
Thank you for posting this. These videos will allow me to pass my class!
@khanhdovanit4 жыл бұрын
15:02 interesting information inside a matrix: eigenvalues
@iebalazs2 жыл бұрын
At 32:32 the expression should actually be S*Lambda^100*c, not Lambda^100*S*c.
@thomassun30466 ай бұрын
Here comes a question: how is u0 equal to c1x1 + c2x2 + ... + cnxn at 29:50? Confused; could anyone explain it to me?
@Tman1000-be7op2 ай бұрын
He is just writing the initial condition as a combination of the n independent eigenvectors.
@putrijulianarahayu79754 жыл бұрын
39:03 where does the A (1,1,1,0) come from? help.....
@finalfantasy11124 жыл бұрын
It comes from the first order system on the left.
@NisargJain4 жыл бұрын
You can read my comment where I explained it.
@praduk15 жыл бұрын
Fibonacci numbers being solved for as an algebraic equation with linear algebra was pretty cool.
@technoshrink9 жыл бұрын
U0 == "you know it" First time I've heard his boston accent c:
@phononify Жыл бұрын
very nice discussion about Fibonacci ... great !
@laurencerousseau238 жыл бұрын
Does anyone know why the x1 in the nullspace is (1, 0) and not (0, 1), as he said at 26:36?
@giulianoguaragna99628 жыл бұрын
(0,1) doesn't make sense, because 0*(0,0)^T + 1*(1,0)^T is not equal to (0,0)^T. Think of the nullspace of (A - 2I).
@mohammedal-haddad26526 жыл бұрын
The first equation is x2 = 0, so x1 is arbitrary; he chose x1 = 1, which gives (1 0)'.
@faizanmohsin3685 Жыл бұрын
Because (0, 1) will not give (0, 0).
@amyzeng71303 жыл бұрын
What a brilliant lecture !!!
@iDiAnZhu11 жыл бұрын
At around 32:45, Prof. Strang writes Lambda^100*S*c. Notation wise, shouldn't this be S*Lambda^100*c?
@sharmabu4 ай бұрын
absolutely beautiful
@kebabsallad14 жыл бұрын
@PhilOrzechowski , he says that he just adds it to create a system of equations.
@Hindusandaczech13 жыл бұрын
Bravo!!! Very much the best and premium stuff.
@niraj_ds2 жыл бұрын
@ 44:00 why is the sum of the two eigenvalues 1?? Have I missed a concept here?? : (
@ElectricTeaCup2 жыл бұрын
Yes, "The sum of the n eigenvalues equals the sum of the n diagonal entries". The sum of the diagonal entries is 1.
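That trace fact is easy to sanity-check for the lecture's Fibonacci matrix [[1, 1], [1, 0]]; a quick plain-Python sketch (the companion identity for the determinant is thrown in too):

```python
import math

# Eigenvalues of [[1, 1], [1, 0]] from lambda^2 - lambda - 1 = 0.
l1 = (1 + math.sqrt(5)) / 2
l2 = (1 - math.sqrt(5)) / 2

# Sum of eigenvalues = trace = 1 + 0; product = determinant = 1*0 - 1*1.
assert abs((l1 + l2) - 1.0) < 1e-12
assert abs(l1 * l2 - (-1.0)) < 1e-12
```

Both identities hold for any square matrix, which is why the two eigenvalues here must add to 1 before you even solve for them.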
@胯下蜈蚣長老4 жыл бұрын
Excuse me, why doesn't the calculation of F_100 at 46:08 multiply by the eigenvector x_1? Am I missing something?
@98885654074 жыл бұрын
Can you elaborate on your concern?
@胯下蜈蚣長老4 жыл бұрын
Hello, the formula at 33:12 shows that A^100*u_0 = c_1*lambda_1^100*x_1 + c_2*..., so I think perhaps F_100 equals c_1*lambda_1^100*x_1? Thanks for replying~
@santiagotheone2 жыл бұрын
@@胯下蜈蚣長老 Nope. F_100 is a scalar, but x_1 is a vector (in R^2 in this case). I guess the explicit thought in your question is actually A^100 * u_0 = A^100 [F_1 ; F_0] = c_1 * lambda_1^100 * x_1 + c_2 * lambda_2^100 * x_2 = [F_101 ; F_100] Focus on the term F_100 in second row. By using x_1 and x_2 (in 49:07), we know F_100 = c_1 * lambda_1^100 * 1 (element in second row of x_1) + c_2 * lambda_2^100 * 1 (element in second row of x_2) = c_1 * {[1 + sqrt(5)] / 2}^100 + c_2 * {[1 - sqrt(5)] / 2}^100. c_2 * {[1 - sqrt(5)] / 2}^100 can be omitted since {[1 - sqrt(5)] / 2}^100 is too small compared to {[1 + sqrt(5)] / 2}^100. Then we get the professor's approximation in 46:08.
@shayanghanbari68382 жыл бұрын
@@santiagotheone thanks for your clarifying
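The approximation discussed in this thread, dropping the lambda_2 term so that F_100 ≈ c_1*lambda_1^100 with c_1 = 1/sqrt(5), can be checked numerically (plain Python, assuming the indexing F_0 = 0, F_1 = 1):

```python
import math

# Dominant-term approximation: F_100 ~ lambda_1^100 / sqrt(5).
l1 = (1 + math.sqrt(5)) / 2
approx = l1 ** 100 / math.sqrt(5)

# Exact F_100 by iterating the recurrence with Python integers.
a, b = 0, 1
for _ in range(100):
    a, b = b, a + b          # a ends up as F_100

assert abs(approx - a) / a < 1e-9   # relative error is tiny
```

The dropped term is lambda_2^100 / sqrt(5), on the order of 10^-22, so the only visible error is floating-point rounding.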
@theshreyansjain Жыл бұрын
Is there an error at 32:30? Shouldn't S be multiplied before (lambda matrix)^100?
@zionen0115 жыл бұрын
Great stuff. I was able to do my homework with this lecture. I will definitely be getting Strang's book.
@MaproXiZ10 жыл бұрын
I don't understand why the eigenvectors are [lambda_1, 1] and [lambda_2, 1] at 49:19... since it is NOT true that ((1 - lambda)*(lambda)) + 1 is lambda^2 - lambda - 1... or is it? Or what is happening?
@youcefyahiaoui146510 жыл бұрын
He's just using the original definition of the eigenvalues. We already have the characteristic equation lambda^2 - lambda - 1 = 0 as the polynomial whose solutions are the two eigenvalues. Then he recognized that multiplying A - lambda*I by the vector [lambda, 1] generates this same characteristic equation. Hence the eigenvectors are just [lambda, 1].
@rohitsaxena225 жыл бұрын
Why can u_0 be written as a linear combination of the eigenvectors of A? 29:55
@Samurai_Jack__10 ай бұрын
4 years late to the question, but I'll still answer it: if the eigenvectors are independent, they span the whole n-dimensional space, so any vector of size n can be written as a combination of these vectors. They can form a basis for the space.
@MeridianLights8 жыл бұрын
In the Fib example, it seems impossible to find a c1 and c2 s.t. c1*x1 + c2*x2 = [1 0]
@jasarinvorawathanabuncha66208 жыл бұрын
You can! Try using elimination to easily see that c1 = 1/sqrt5 c2 = -1/sqrt5
@ozzyfromspace4 жыл бұрын
@@jasarinvorawathanabuncha6620 not true, c1 = (a-1)*(1+(a-1)/(b-a)) and c2 = -(a-1)*(b-1)/(b-a) where a is the positive eigenvector and b is the negative eigenvector of our problem. Also worth noting, the eigenvectors have the form x = [1/(lambda - 1), 1], not x = [-lambda, 1] as the professor wrote :) There were a few mistakes in the way to the solution so whatever answer we arrived at was simply not correct Lol I just watched the video now but obviously this is really late for you, hopefully someone else finds this useful. Best wishes friend
@tanjiaqing12944 жыл бұрын
@@ozzyfromspace [1/(lambda - 1), 1] = [-lambda, 1] in this case, u can plug in lambda = (1+sqrt(5))/2 and (1-sqrt(5))/2 to check. So the c1 and c2 got by @Jassarin should be correct.
@Zoro31208 жыл бұрын
In the computation of the eigenvalues for A², he used A = SʌSˉ¹ to derive that ʌ² is its eigenvalue matrix. However, this can be true only if S is invertible for A², which need not always be true. For example, for the matrix A = [[0, 1], [1, 0]], the eigenvalues are 1, -1 (refer to the previous lecture). This would imply that A² has only one eigenvalue, 1, which would imply that S has 2 columns which are the same (if it has only one column then it is no longer square and the inverse doesn't apply) and hence is non-invertible. This implies that this proof cannot be used for all cases of the matrix A. Is there something I'm missing here?
@hinmatth8 жыл бұрын
Please check 17:32
@sviswesh3555 Жыл бұрын
Which nullspace does the Prof mean at 24:36?
@atefehpeimani5488 Жыл бұрын
the nullspace of A - 2I, which gives the answer for X in (A - 2I)X = 0
@sviswesh3555 Жыл бұрын
@@atefehpeimani5488 got it, thanks! :)
@safatkhan68396 жыл бұрын
49:35 Why is F1 equal to 1? Isn't the first number in the sequence 0?
@Jason-ke4jf6 жыл бұрын
Yeah, it starts from zero. F0 = 0, F1 = 1 and so on.
@ashutoshtiwari43985 жыл бұрын
First number in the sequence is F0 = 0 and second number is F1 = 1.
@coffle19 жыл бұрын
Can anyone direct me to a good proof for 19:36?
@cartmansuperstar6 жыл бұрын
allegedly 26:00, but I didn't get it either...
@АлександрСницаренко-р4д4 жыл бұрын
MIT, thank you!
@shamsularefinsajib777811 жыл бұрын
Gilbert strang a great math teacher............
@abhi2208 жыл бұрын
I have a doubt about the difference equations part. He writes u_0 as a combination of eigenvectors of A. Why should this be true?
@olfchandan8 жыл бұрын
The eigenvectors span the entire space (remember: S is a square, invertible matrix), so u0 will be a linear combination of the eigenvectors.
@suziiemusic8 жыл бұрын
A set of n independent eigenvectors, each one with n components, is a basis for Rn, and therefore any vector in Rn (including u0) can be written as a linear combination of these n eigenvectors. We could choose any other set of n independent vectors as a basis and do the same thing. The "standard" basis would be the columns of the identity matrix, which in 3 dimensions correspond to the x,y and z axes.
@thedailyepochs3384 жыл бұрын
For anyone wondering how he turned the Fibonacci sequence into a matrix @ 37:00, you are not alone; check this video out: kzbin.info/www/bejne/n4exoHytjpWIjJo
@aditiprasad55492 жыл бұрын
Got stuck on the same thing ..Thanks for this!
@anonymous.youtuber3 жыл бұрын
Just wondering...what keeps us from calling the eigenvector matrix E instead of S ? Is E already used for something else ?
@Itsmeaboud3 жыл бұрын
Yes, it is used for elimination matrix.
@rambohrynyk8897 Жыл бұрын
It always gets me how quickly the students clamor to get out of the class... how are you not absolutely dumbfounded by the profundity of what this great man is laying down!!!!
@ashutoshtiwari43985 жыл бұрын
Why do skew-symmetric matrices have zero or imaginary eigenvalues?
@sidaliu89895 жыл бұрын
By the definition of a skew-symmetric matrix (A^T = -A), all entries on the diagonal of the matrix must be 0. So when we come up with the characteristic equation (in the 2x2 case), it will be lambda^2 + b^2 = 0, since the trace is zero and the determinant is a square; this gives purely imaginary solutions if b^2 > 0.
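The 2x2 case described above can be verified directly; a small plain-Python sketch with the skew-symmetric matrix [[0, b], [-b, 0]] and the (assumed, easily checked) eigenvector (1, i):

```python
# For A = [[0, b], [-b, 0]] the characteristic polynomial is
# lambda^2 + b^2 = 0, so the eigenvalues are +/- b*i.
b = 1.0
lam = complex(0, b)            # candidate eigenvalue bi
A = [[0.0, b], [-b, 0.0]]
x = [1.0, complex(0, 1)]       # candidate eigenvector (1, i)

# Check A x = lambda x componentwise.
Ax = [A[0][0] * x[0] + A[0][1] * x[1],
      A[1][0] * x[0] + A[1][1] * x[1]]
lx = [lam * x[0], lam * x[1]]
assert abs(Ax[0] - lx[0]) < 1e-12 and abs(Ax[1] - lx[1]) < 1e-12
```

Taking b = 0 collapses both eigenvalues to zero, matching the "zero or purely imaginary" statement.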
@ricardocesargomes72748 жыл бұрын
Thanks for uploading!
@utxeee6 жыл бұрын
So, the second component of u(k+1) is useless, right? The actual value is given by the first component.
@thovinh53865 жыл бұрын
Yep, you can use u(k) = u(k) and it still works.
@jamesmcpherson39244 жыл бұрын
I had to pause to figure out how he got the eigenvectors at the end. Plugging in phi works, but it wasn't until I watched again that I noticed he was pointing to the lambda^2 - lambda - 1 = 0 relationship to reveal the vector.