I think the concluding statement should be: the power method yields a vector in the direction of the eigenvector associated with the largest-magnitude eigenvalue.
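The statement above can be checked with a minimal power-method sketch (this is the standard algorithm, not code from the video; the matrix and iteration count are made up for illustration):

```python
def mat_vec(A, v):
    """Multiply a small dense matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_method(A, v, iters=50):
    """Repeatedly apply A and normalize; the iterate aligns with the
    eigenvector of the largest-magnitude eigenvalue."""
    for _ in range(iters):
        w = mat_vec(A, v)
        norm = max(abs(x) for x in w)  # infinity-norm normalization
        v = [x / norm for x in w]
    return v

A = [[2.0, 0.0],
     [0.0, 1.0]]  # eigenvalues 2 and 1; dominant eigenvector is [1, 0]
v = power_method(A, [1.0, 1.0])  # converges toward [1.0, 0.0]
```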
@zainyact551 · 2 months ago
albert einstein
@natewilliams7005 · 3 months ago
Good video
@laulinky334 · 6 months ago
Why can a 32K L1 cache only store three 40x40 matrices?
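For what it's worth, the back-of-the-envelope arithmetic behind the question, assuming 8-byte double-precision entries (the video may assume a different element size):

```python
# One 40x40 matrix of doubles occupies 40*40*8 bytes; three of them
# roughly fill a 32 KiB L1 cache, which is presumably why ~40 is about
# the largest block size for which three blocks stay L1-resident.
BYTES_PER_DOUBLE = 8
n = 40
one_matrix = n * n * BYTES_PER_DOUBLE  # 12800 bytes = 12.5 KiB
three_matrices = 3 * one_matrix        # 38400 bytes = 37.5 KiB
l1_bytes = 32 * 1024                   # 32768 bytes
```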
@dvir-ross · 6 months ago
Great explanation! Thank you!
@eddy9756 · 8 months ago
Am I the only one here in 2024?
@meowmeowkami · 9 months ago
yay math!
@Hans-JoachimMarseille-dg6vk · 11 months ago
Why is the cost of Ab_j 2mk? I reckon it is 2mk - m, because it takes km multiplications to compute the k vectors corresponding to the k elements of b_j, while adding the vectors up only takes (k-1)*m additions.
@Hans-JoachimMarseille-dg6vk · 11 months ago
OK I see, it's just simplified to 2mk.
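The 2mk - m count can be verified by instrumenting a naive matrix-vector product (a small sketch with made-up dimensions; it tallies m*k multiplications and m*(k-1) additions):

```python
def matvec_flops(m, k):
    """Count multiplications and additions in a naive y = A b (A is m x k)."""
    A = [[1.0] * k for _ in range(m)]
    b = [1.0] * k
    mults = adds = 0
    y = []
    for i in range(m):
        acc = A[i][0] * b[0]      # first term: one multiplication
        mults += 1
        for j in range(1, k):     # remaining k-1 terms: one mult + one add each
            acc += A[i][j] * b[j]
            mults += 1
            adds += 1
        y.append(acc)
    return mults, adds

mults, adds = matvec_flops(5, 3)
total = mults + adds  # m*k + m*(k-1) = 2mk - m = 25 for m=5, k=3
```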
@mrinalde · 11 months ago
Typo @ 3:46: a2_perp = a2 - <big term> // slide shows a1 - <big term>
@brod515 · a year ago
@11:59 how did the order suddenly change to column x row? How is that correct?
@OKJazzBro · a year ago
If anyone found it hard to understand why the rotations of the basis vectors can be used to construct the rotation of the original vector, it's because they proved using geometry in the first week's lecture that rotation is a linear transformation. So R(v) = R(mag_x * [1, 0] + mag_y * [0, 1]) = R(mag_x * [1, 0]) + R(mag_y * [0, 1]) = mag_x * R([1, 0]) + mag_y * R([0, 1]). Now, again using geometry they derive what R(basis vector) is for both basis vectors, which turns out to be R([1, 0]) = [cos(theta), sin(theta)] and R([0, 1]) = [-sin(theta), cos(theta)]. If we put these back into the formula above, we get the full formula for the rotation of any 2D vector v.
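The linearity argument in this comment can be checked numerically (a small sketch assuming the standard counterclockwise rotation; the test vector and angle are arbitrary):

```python
import math

def rotate(v, theta):
    """Direct 2-D rotation by angle theta (counterclockwise)."""
    x, y = v
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def rotate_via_basis(v, theta):
    """R(v) = mag_x * R([1,0]) + mag_y * R([0,1]), using linearity."""
    x, y = v
    r_e1 = (math.cos(theta), math.sin(theta))    # R([1, 0])
    r_e2 = (-math.sin(theta), math.cos(theta))   # R([0, 1])
    return (x * r_e1[0] + y * r_e2[0],
            x * r_e1[1] + y * r_e2[1])

v = (3.0, 4.0)
theta = 0.7
a = rotate(v, theta)
b = rotate_via_basis(v, theta)  # the two results agree
```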
@k0185123 · a year ago
I am so grateful that you share these invaluable videos on YouTube, so that people like me can easily learn from them any time we need. This is extremely helpful! Thank you.
@k0185123 · a year ago
wonderful explanation!!!!
@k0185123 · a year ago
thank you!!!!!!
@sandah · a year ago
I still don't get it
@kkuo13 · a year ago
Basically at the 3:30 mark she split the equation into two: the part where all the E sub j's = 0, and the right side where the E sub i is guaranteed to be 1. Because of this we know that the left side will always equal zero, and the right side of the equation will always be one, therefore 0 + chi times 1 will equal chi
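The argument in this comment, in code (a minimal sketch; e_i denotes the standard basis vector, and the example vector is made up):

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def basis(n, i):
    """Standard basis vector e_i of length n: all zeros except a 1 in slot i."""
    return [1.0 if j == i else 0.0 for j in range(n)]

# Every term of the dot product with e_i is 0 * chi_j except the i-th,
# which is 1 * chi_i, so the result is exactly chi_i.
x = [7.0, -2.0, 5.0]
picked = dot(basis(3, 1), x)  # picks out chi_1 = -2.0
```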
@krg8121 · 2 years ago
Will it work for any dimensions or only for square matrices?
@tianayounan6579 · 2 years ago
Horrible video. Terrible explanation. Complete waste of 2 minutes
@farhanfouadacca · 2 years ago
Much better than 7 pages of my crap textbook. Good job!
@turuus5215 · 2 years ago
Yeah, but still a mystery to me. I'm trying to write code in Java, but I have no basis. People say the Jacobi method, the power method, and Gaussian elimination work. I need to find a demonstration of them.
In practice, however, one of them will be faster due to how matrices and vectors are represented in memory and to CPU cache misses.
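One way to see the layout effect this reply describes (a sketch using Python's row-major nested lists; both traversals compute the same sum, but the column-wise one strides across rows and tends to miss cache on large matrices):

```python
n = 500
A = [[1.0] * n for _ in range(n)]

def sum_rowwise(A):
    """Visit elements in memory order: row by row."""
    return sum(A[i][j] for i in range(len(A)) for j in range(len(A[0])))

def sum_colwise(A):
    """Visit elements with a stride of one row: column by column."""
    return sum(A[i][j] for j in range(len(A[0])) for i in range(len(A)))

s1 = sum_rowwise(A)
s2 = sum_colwise(A)  # same result, typically slower on large matrices
```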
@saraho5338 · 2 years ago
Nice, thank you
@aymanelhasbi5030 · 2 years ago
Unfortunately I didn't know this. Respect, sir!!
@allrounder2367 · 3 years ago
Didn't get it
@KaranSingh-ku6vr · 3 years ago
I was wondering why I am having such a hard time understanding this. This guy never even explained it.
@leminh3482 · 3 years ago
Thank you so much! You saved my life.
@ladc8960 · 3 years ago
Official Retiremento (Aug 31)@UT XLonghorns 🤘
@Cat_Sterling · 3 years ago
Very helpful! Thank you!
@KA-yw7ex · 3 years ago
You are not making any sense
@gentlemandude1 · 3 years ago
Still not that clear. A more graphical representation of the operations would be much more helpful.
@sassonvaknin4068 · 4 years ago
Thank you! The only video which helped me understand that "set path" issue.
@edgartorres9481 · 4 years ago
Very nice!
@michaelatorn8380 · 4 years ago
Awesome video, it really helped me 👍
@SB-rf2ye · 4 years ago
I've never seen any explanation as easy to understand as this one. Kudos!
@Sai48577 · 4 years ago
not explained properly
@PatatjesDora · 4 years ago
Nice
@Nik-dz1yc · 4 years ago
very simple
@rafaelsoto1099 · 4 years ago
Excellent video, thanks a lot!!
@momosakura190 · 4 years ago
Could we calculate the inverse of R? Is R an n by n matrix? Thank you
@LAFFutX · 4 years ago
We avoid calculating inverses. They are costly in many ways. R is upper triangular and triangular solves are better. Often in the literature when people say "inverse", they really mean decompose and solve.
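The triangular solve this reply refers to can be sketched in a few lines (a minimal illustration, not the LAFF implementation; the 2x2 system is made up):

```python
def back_substitution(R, y):
    """Solve R x = y for upper-triangular R (list of rows), bottom row up.
    No inverse is ever formed: each unknown is isolated directly."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(R[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / R[i][i]
    return x

R = [[2.0, 1.0],
     [0.0, 3.0]]
y = [5.0, 6.0]
x = back_substitution(R, y)  # solves 2*x0 + x1 = 5 and 3*x1 = 6
```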
@iheartalgebra · 4 years ago
@@LAFFutX Suppose we stop at 2:22 and obtain the solution by computing x = R^-1 Q^T b (perhaps using the "thin" QR factorization so that we may assume R is square). Would this be numerically more stable than the naive approach of computing (X^T X)^-1 X^T b?
@advancedlaff6453 · 4 years ago
@@iheartalgebra thank you for your question. The devil is in the details. Yes, if you use the QR factorization to solve the problem, this is numerically more stable IF you use the right method for computing the QR factorization (which means: not Gram-Schmidt). The problem with using the method of normal equations is that it squares the condition number of the matrix (a measure of how much a small relative error in the right-hand side is amplified into a relative error in the solution). For details, you may want to look at Weeks 1-4 of our graduate course (ulaff.net, fourth column).
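The condition-number squaring in this reply can be seen exactly on a diagonal matrix (a minimal sketch; for a diagonal matrix, the 2-norm condition number is just the ratio of the largest to smallest diagonal magnitude, and the epsilon here is arbitrary):

```python
eps = 1e-4
A_diag = [1.0, eps]                 # A = diag(1, eps)
AtA_diag = [d * d for d in A_diag]  # A^T A = diag(1, eps^2)

cond_A = max(A_diag) / min(A_diag)          # 1/eps   = 1e4
cond_AtA = max(AtA_diag) / min(AtA_diag)    # 1/eps^2 = 1e8
# Forming the normal equations turned a cond-1e4 problem into a cond-1e8 one.
```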
@tazyohivesg-player8674 · 4 years ago
premium content
@qqshendon5788 · 4 years ago
so cool
@mschepps21 · 4 years ago
Very well done
@cowcabobizle · 4 years ago
Woohoo! LAFF on YouTube!
@a.s.8113 · 4 years ago
nicely explained
@robertvandegeijn5320 · 4 years ago
Thank you. We now have a MOOC: LAFF-On Programming for High Performance, that teaches you how to do it yourself.
@wallysilva4478 · 4 years ago
Notebooks for ulaff at github.com/maurice60/LAFF
@wallysilva4478 · 4 years ago
Excellent!
@lilahamel9526 · 4 years ago
Awesome!!! I believe that linear algebra is easier to explain with geometry. I was looking for this kind of explanation for so long, thank you.