Professor Strang, thank you for a straightforward lecture on Gradient Descent: Downhill to a Minimum and its relationship with convex functions. The examples are important for a deep understanding of this topic in numerical linear algebra.
@tungohoang9201 2 years ago
Very clear and natural to follow the lesson. Thank you so much, Professor Strang. By the way, his books are also wonderful.
@Musabbir_Sakib 4 years ago
Very natural way of teaching. Thank you, sir.
@perlaramos8783 4 years ago
Gradient Descent is at 34:33
@NaveenKumar-yu6eo 4 years ago
That helped, thank you.
@samirhajiyev6905 2 years ago
Thank you.
@TheRsmits 3 years ago
If calc 1 introduced the term argmin for the place where the minimum occurs, there would be less confusion; students often mistake the argmin for the actual min.
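A tiny example of the distinction (a sketch; the quadratic and the grid are made up for illustration, not from the lecture):

```python
import numpy as np

# f(x) = (x - 3)^2 + 1 has minimum VALUE 1, attained at the LOCATION x = 3.
x = np.linspace(-10, 10, 2001)
f = (x - 3)**2 + 1

print("min f  =", f.min())           # 1.0  -> the minimum (the value)
print("argmin =", x[np.argmin(f)])   # 3.0  -> the argmin (where it occurs)
```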
@paradisal2014 3 years ago
Thanks for this lol
@trevandrea8909 4 months ago
Thank you so much
@satyamwarghat9987 5 years ago
Wow, the video quality is awesome, and Professor Gilbert Strang's lecture is the best.
@mkelly66 3 years ago
Your lectures are a pleasure to watch (and learn from)!
@naterojas9272 4 years ago
Omg look at how clean those top boards are 🤩
@kirinkirin9593 5 years ago
What beautiful functions. That's why I love linear algebra.
@martinspage 5 years ago
His picture with grad(f) pointing up around 9:00 is a bit misleading, I think. grad(f) is a vector in the x-y plane, pointing in the direction you should move in the x-y plane to get the maximum increase in f.
@quocanhhbui8271 5 years ago
True, this has been bothering me since last year when I started Calc 3.
@RC98.19 4 years ago
I think both you and Prof. Strang are right. What Prof. Strang plotted on the board is a level curve (as the previous comment mentioned). For a function f(x, y) = ax + by, we can plot a level curve by setting f(x, y) = C (some constant). If we increase the constant level by level, we see that the level curve shifts in the direction of grad(f), and that direction is perpendicular to the level curve. From my point of view, Prof. Strang did want to show that the gradient is perpendicular to the level curves. However, he didn't notice that the arrow he drew points upward out of the plane, which is probably what confused you.
@davidbenz2280 4 months ago
First of all, 2x + 5y = 0 is not a plane, despite what Professor Strang says. Rather, it is a level "curve" of the plane described by f(x, y) = 2x + 5y (with f(x, y) set to 0). The level "curves" of a plane in 3D are parallel lines in the xy-plane. Professor Strang then really does make an error when he says that the gradient is somehow perpendicular to the plane. No, the gradient is perpendicular to the level "curves" of the plane, i.e., to those parallel lines in the xy-plane. And all movement to any new z value happens in the plane itself. I also think the way he drew the plane was very confusing, since he didn't even try to approximate its actual orientation in 3D.
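Putting the correction in symbols (a sketch of the standard picture for this example): for f(x, y) = 2x + 5y,

grad f = (2, 5),

a vector lying in the xy-plane. The level sets f = c are the parallel lines 2x + 5y = c; the direction (5, -2) runs along each of them, and (2, 5) . (5, -2) = 10 - 10 = 0, so grad f is perpendicular to the level lines in the xy-plane, not normal to the graph z = 2x + 5y in 3D.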
@TheNeutralGuy0 1 month ago
For those who have not taken the previous 22 lectures, this lecture won't help much.
@ashwinmanickam 4 years ago
34:36 gradient descent
@finweman 5 years ago
I am hoping for a discussion of derivative conventions. Much of the material I've seen makes the gradient a row vector, which leads to derivatives that are the transpose of what he shows. In his example, the derivative of a'x is a, which is contrary to the intuition from single-variable calculus, though he does use that intuition for x'Sx.
@tommy-lee-johnes 3 years ago
To get the intuition, try writing out the product a'x and then differentiating with respect to each component of x. The result will be a.
@Anskurshaikh 2 years ago
Same. I feel a lot of people use different notations for these vector/matrix derivatives, and nobody takes the time to elaborate on the details :(
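One way to settle the convention question numerically (a sketch; the random test matrices and the finite-difference helper are just for illustration): with the gradient taken as a column vector, grad(a'x) = a and grad(x'Sx) = (S + S')x, which is 2Sx when S is symmetric.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
a = rng.standard_normal(n)
S = rng.standard_normal((n, n))   # deliberately non-symmetric
x = rng.standard_normal(n)

def num_grad(f, x, h=1e-6):
    """Central finite-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# f1(x) = a'x  ->  gradient a (a column vector in the lecture's convention)
g1 = num_grad(lambda v: a @ v, x)
print(np.allclose(g1, a, atol=1e-5))              # True

# f2(x) = x'Sx  ->  gradient (S + S')x, i.e. 2Sx when S is symmetric
g2 = num_grad(lambda v: v @ S @ v, x)
print(np.allclose(g2, (S + S.T) @ x, atol=1e-5))  # True
```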
@brainstormingsharing1309 3 years ago
Absolutely well done and definitely keep it up!!! 👍👍👍👍👍👍
@에헤헿-l7v 1 year ago
Why at 42:25 isn't the gradient [x, by], since f is multiplied by 1/2?
@nadeemqaiser 1 year ago
Thanks, teacher!
@gopalkulkarni402 3 years ago
Isn't grad(f) supposed to be [x, by] instead of [2x, 2by]?
@Andrew_J123 3 years ago
Yes, I had the same objection. I think he glossed over the 1/2 in the function. It's a multiple of the same vector, so in the grand scheme of things it doesn't matter too much, but having [x, by] would have eased my mind.
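For the record, carrying the 1/2 through (a short worked check): for f(x, y) = (1/2)(x^2 + b y^2),

df/dx = x and df/dy = b y, so grad f = [x, by].

Dropping the 1/2 would give [2x, 2by]; the direction is the same, so the descent picture doesn't change, but any step-size formula picks up that factor of 2.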
@nabeelali6721 5 years ago
Wonderful teaching
@HieuLe-un7ll 2 years ago
I think grad(f) at 16:00 should be 0.5(S + S transpose)x - a, right? Anyway, thank you for the amazing lecture!
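Writing out the general quadratic supports that (a short check, not a quote from the lecture): for f(x) = (1/2)x'Sx - a'x,

grad f = (1/2)(S + S')x - a,

which reduces to Sx - a when S is symmetric, as it is for the positive definite S in the lecture; setting grad f = 0 gives back the linear system Sx = a.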
@samuelyeo5450 4 years ago
How did he get the equations for xk, yk, and fk at 45:35? Specifically, how did he get (b-1)/(b+1) and (1-b)/(1+b)? I rearranged the equations to make xk+1 and yk+1 the subjects, but instead I got xk+1 = xk(1 - 2sk), where xk = x0 = b.
@theos- 4 years ago
Same question here. Please post the answer if you found it.
@zacharylee9030 4 years ago
I think he had already used the optimized step size (sk) for the iteration. He didn't tell us what sk looks like; he just showed us the final equation to explain the idea of good or bad convergence.
@ky8920 3 years ago
In the limit the update is [x, y] <- [x - sx, y - sby] for f = 0.5x^2 + 0.5by^2. For the zig-zag to stay balanced, the two components must shrink at the same rate, so |1 - s| = |1 - sb|, which gives s = 2/(1+b). Then [x, y] <- [(1-s)x, (1-sb)y], so x_new = (b-1)/(b+1) * x_old and y_new = (1-b)/(1+b) * y_old, etc.
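A quick numerical check of those factors (a sketch; b = 0.01 and the starting point (b, 1) are assumptions matching the closed form quoted in the question): exact line-search gradient descent on f = (1/2)(x^2 + b y^2) reproduces x_k = b((b-1)/(b+1))^k and y_k = ((1-b)/(1+b))^k.

```python
import numpy as np

b = 0.01
S = np.diag([1.0, b])        # f(v) = 0.5 * v' S v  with v = (x, y)
v = np.array([b, 1.0])       # starting point (x0, y0) = (b, 1)

for k in range(1, 6):
    g = S @ v                            # gradient (x, b*y)
    s = (g @ g) / (g @ S @ g)            # exact line-search step; stays 2/(1+b) here
    v = v - s * g
    xk = b * ((b - 1) / (b + 1)) ** k    # closed-form iterates from the lecture
    yk = ((1 - b) / (1 + b)) ** k
    print(k, np.allclose(v, [xk, yk]))   # prints True at every step
```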
@RC98.19 4 years ago
Around 40:27: does anybody know how to derive the reduction rate in terms of m/M (the reciprocal of the condition number)? Any tips or references?
@Hotheaddragon 4 years ago
By condition number I guess he meant lambda_max / lambda_min (largest eigenvalue over smallest eigenvalue), which was 1/b for that example.
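For what it's worth, the standard statement for a quadratic (a sketch, using the eigenvalues m = b and M = 1 from this example): steepest descent with exact line search on f = (1/2)x'Sx satisfies

f(x_{k+1}) <= ((M - m)/(M + m))^2 * f(x_k),

so the error in x and y shrinks by a factor (M - m)/(M + m) = (1 - b)/(1 + b) per step, and f shrinks by the square of that. This is exactly the (b - 1)/(b + 1) factor in the closed-form iterates; the smaller b = m/M (i.e. the larger the condition number M/m = 1/b), the closer the factor is to 1 and the slower the convergence.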
@ronsreacts 3 months ago
💯👍
@jerrywilsonwilliams2431 4 years ago
❤️❤️❤️❤️❤️
@shenzheng2116 4 years ago
At 26:51, the professor writes gradient(f) = entries of X^-1. Does anyone know how to get that equation? Thanks!
@samuelyeo5450 4 years ago
If f(X) = -ln(det(X)), then gradient(f) is (the matrix of derivatives of det(X)) divided by det(X), up to sign, which entry by entry is the same as the matrix of entries of X^-1. I'm not too certain myself, but this makes sense to me.
@yuchaoli6385 4 years ago
en.wikipedia.org/wiki/Adjugate_matrix gives the answer.
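A sketch of why, via the adjugate route the link above points to: expand det X along row i, det X = sum over j of X_ij * C_ij, where C_ij is the (i, j) cofactor and does not involve X_ij itself, so

d(det X)/dX_ij = C_ij, hence d(ln det X)/dX_ij = C_ij / det X = (X^-1)_ji,

using X^-1 = adj(X)/det X and adj(X) = (cofactor matrix)^T. So the matrix of first derivatives of ln(det X) is (X^-1)^T, which equals X^-1 for the symmetric positive definite X in the lecture; with f = -ln(det X) the entries pick up a minus sign.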
@SuperDeadparrot 1 year ago
What the hell is a Hessian?
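For reference (a one-line definition, not a quote from the lecture): the Hessian of f(x_1, ..., x_n) is the n x n matrix of second partial derivatives, H_ij = d^2 f / (dx_i dx_j). For the quadratic f = (1/2)x'Sx - a'x used throughout this lecture, the Hessian is just S (taking S symmetric), and f is convex exactly when the Hessian is positive semidefinite.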
@pnachtwey 1 year ago
He is too long-winded. Why not use a simple function of x and y, find the derivatives, and start doing a few iterations? He finally gets to gradient descent. Gradient descent works, but there are better algorithms; the line-search idea is a good start. WTF is wrong with this guy? A simple Python program or even Excel would be much more meaningful. Thumbs down.
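In that spirit, a minimal sketch of such a program (the step size and iteration count are arbitrary choices, and the function is the lecture's example, not code from the video):

```python
# Plain gradient descent on f(x, y) = 0.5 * (x**2 + b * y**2).
b = 0.1        # small b -> large condition number -> slow zig-zag convergence
s = 0.5        # fixed step size; must be below 2 / (largest eigenvalue) = 2 here
x, y = b, 1.0  # starting point

for k in range(20):
    gx, gy = x, b * y                # gradient of f
    x, y = x - s * gx, y - s * gy    # descent step
    f = 0.5 * (x**2 + b * y**2)
    print(f"iter {k+1:2d}:  x = {x:+.6f}  y = {y:+.6f}  f = {f:.8f}")
```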