22. Gradient Descent: Downhill to a Minimum

81,086 views

MIT OpenCourseWare


Comments: 44
@georgesadler7830 · 3 years ago
Professor Strang, thank you for a straightforward lecture on Gradient Descent: Downhill to a Minimum and its relationship with convex functions. The examples are important for a deep understanding of this topic in numerical linear algebra.
@tungohoang9201 · 2 years ago
Very clear and natural to follow. Thank you so much, Professor Strang. By the way, his books are also wonderful.
@Musabbir_Sakib · 4 years ago
Very natural way of teaching. Thank you, sir.
@perlaramos8783 · 4 years ago
Gradient Descent is at 34:33
@NaveenKumar-yu6eo · 4 years ago
That helped, thank you.
@samirhajiyev6905 · 2 years ago
Thank you.
@TheRsmits · 3 years ago
If calc 1 introduced the term argmin for the place where the minimum occurs, there would be less confusion, since students often mistake the argmin for the actual min.
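A tiny illustration of the distinction (a minimal NumPy sketch; the parabola is an arbitrary convex example): np.min returns the minimum value of f, while np.argmin returns where that minimum occurs.
```python
import numpy as np

xs = np.linspace(-3.0, 3.0, 601)       # grid of x values
f = (xs - 1.0) ** 2 + 2.0              # a simple convex function sampled on the grid

print(np.min(f))                       # the minimum VALUE of f      -> 2.0
print(xs[np.argmin(f)])                # the ARGMIN, where it occurs -> 1.0
```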
@paradisal2014 · 3 years ago
Thanks for this lol
@trevandrea8909 · 4 months ago
Thank you so much
@satyamwarghat9987 · 5 years ago
Wow, the video quality is awesome, and Professor Gilbert Strang's lecture is the best.
@mkelly66 · 3 years ago
Your lectures are a pleasure to watch (and learn from)!
@naterojas9272 · 4 years ago
Omg look at how clean those top boards are 🤩
@kirinkirin9593 · 5 years ago
What beautiful functions. That's why I love linear algebra.
@martinspage · 5 years ago
His picture with grad(f) pointing up around 9:00 is a bit misleading, I think. grad(f) is a vector in the x-y plane, pointing in the direction you should move in the x-y plane to increase f the fastest.
@quocanhhbui8271 · 5 years ago
True, this has been bothering me since last year when I started calc 3.
@RC98.19 · 4 years ago
I think both you and Prof. Strang are right. What Prof. Strang plotted on the board is a level curve (as the earlier comment mentioned). For a function f(x, y) = ax + by, we can plot a level curve by setting f(x, y) = C (some constant). If we increase the constant level by level, we see that we're shifting the level curve in the direction of grad(f), and that direction is perpendicular to the level curve. In my view, Prof. Strang did want to show that the gradient is perpendicular to the level curve; he just didn't notice that the arrow he drew was pointing upward. That is probably what confused you.
@davidbenz2280 · 4 months ago
First of all, 2x + 5y = 0 is not a plane, as Professor Strang says. Rather, it is a level "curve" of the plane described by f(x, y) = 2x + 5y (with f(x, y) set to 0). The level "curves" of a plane in 3D are parallel lines in the xy plane. Professor Strang then really makes an error when he says that the gradient is somehow perpendicular to the plane. No: the gradient is perpendicular to the level "curves" of the plane, i.e., to those parallel lines in the xy plane, and all movement to any new z value happens in the plane. I also think the way he drew the plane was confusing, since he didn't even try to approximate its actual orientation in 3D.
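A quick numerical check of this point (a minimal NumPy sketch, using the lecture's example f(x, y) = 2x + 5y): the gradient (2, 5) lives in the x-y plane and is perpendicular to the level lines 2x + 5y = C, not to the plane z = 2x + 5y itself.
```python
import numpy as np

grad_f = np.array([2.0, 5.0])             # gradient of f(x, y) = 2x + 5y, a vector in the x-y plane
along_level_line = np.array([5.0, -2.0])  # direction along any level line 2x + 5y = C

print(grad_f @ along_level_line)          # 0.0 -> the gradient is perpendicular to the level lines

# The graph z = 2x + 5y is a plane in 3D; a normal to that plane is (2, 5, -1),
# which is not the same thing as the 2D gradient vector (2, 5).
normal_to_graph = np.array([2.0, 5.0, -1.0])
print(normal_to_graph @ np.array([1.0, 0.0, 2.0]))   # 0.0 -> the point (1, 0, 2) lies in the plane
```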
@TheNeutralGuy0 · a month ago
For those who have not watched the previous lectures in this course, this lecture won't help much.
@ashwinmanickam · 4 years ago
34:36 gradient descent
@finweman · 5 years ago
I am hoping for a discussion of derivative conventions. Much of what I've seen would make the gradient a row vector, which makes the derivatives the transpose of what he shows. In his example, the derivative of a'x is a, which is contrary to intuition from single-variable calculus, yet he uses that intuition for x'Sx.
@tommy-lee-johnes · 3 years ago
To get the intuition, try writing out the product a'x explicitly and then take the derivative with respect to x. It will be a.
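A symbolic check of this (a SymPy sketch; the particular a and S below are arbitrary). With the column-vector convention for the gradient, the derivative of a'x is a, and the derivative of x'Sx is (S + S')x, which is 2Sx when S is symmetric.
```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
a = sp.Matrix([3, 4])                    # an arbitrary constant vector
S = sp.Matrix([[2, 1], [0, 5]])          # an arbitrary (non-symmetric) matrix

f1 = (a.T * x)[0]                        # the scalar a'x
f2 = (x.T * S * x)[0]                    # the scalar x'Sx

grad_f1 = sp.Matrix([sp.diff(f1, v) for v in x])
grad_f2 = sp.Matrix([sp.diff(f2, v) for v in x])

print(grad_f1)                              # Matrix([[3], [4]])  -> equals a
print(sp.expand(grad_f2 - (S + S.T) * x))   # zero vector -> grad(x'Sx) = (S + S')x
```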
@Anskurshaikh · 2 years ago
Same. I feel a lot of people use different notations for these vector/matrix derivatives, and nobody takes the time to spell out the details :(
@brainstormingsharing1309 · 3 years ago
Absolutely well done and definitely keep it up!!! 👍👍👍👍👍👍
@에헤헿-l7v · a year ago
Why at 42:25 isn't the gradient [x, by], since f is multiplied by 1/2?
@nadeemqaiser · a year ago
Thanks, teacher!
@gopalkulkarni402 · 3 years ago
Isn't grad(f) supposed to be [x, by] instead of [2x, 2by]?
@Andrew_J123 · 3 years ago
Yes, I had the same objection. I think he glossed over the 1/2 in the function. It's a multiple of the same vector, so in the grand scheme of things it doesn't matter much, but that said, having [x, by] would have eased my mind.
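A quick symbolic check agreeing with this (a SymPy sketch; b is kept symbolic): with the 1/2 included, the gradient of f = ½(x² + b y²) is [x, b y].
```python
import sympy as sp

x, y, b = sp.symbols('x y b')
f = sp.Rational(1, 2) * (x**2 + b * y**2)   # the lecture's f(x, y) = (x^2 + b*y^2) / 2

grad = [sp.diff(f, x), sp.diff(f, y)]
print(grad)                                 # [x, b*y] -- the 1/2 cancels the 2 from the power rule
```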
@nabeelali6721 · 5 years ago
Wonderful teaching
@HieuLe-un7ll · 2 years ago
I think grad(f) at 16:00 should be 0.5(S + S transpose)x - a, right? Anyway, thank you for the amazing lecture!
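A finite-difference check of this point (a NumPy sketch; S, a, and the test point are arbitrary, with S deliberately non-symmetric). When S is symmetric, ½(S + S')x - a reduces to the Sx - a used in the lecture.
```python
import numpy as np

S = np.array([[2.0, 1.0],
              [0.0, 5.0]])               # deliberately NOT symmetric
a = np.array([1.0, -2.0])

def f(x):
    return 0.5 * x @ S @ x - a @ x       # f(x) = 0.5 x'Sx - a'x

x0 = np.array([0.3, -0.7])
claimed_grad = 0.5 * (S + S.T) @ x0 - a  # the formula proposed in the comment

eps = 1e-6                               # central finite differences as an independent check
fd_grad = np.array([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps) for e in np.eye(2)])

print(claimed_grad)
print(fd_grad)                           # agrees with claimed_grad to ~1e-9
```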
@samuelyeo5450 · 4 years ago
How did he get the equations for xk, yk and fk at 45:35? Specifically, how did he get (b-1)/(b+1) and its counterpart? I rearranged the equations to make xk+1 and yk+1 the subjects, but I got xk+1 = xk(1 - 2sk), where xk = x0 = b.
@theos- · 4 years ago
Same question here. Please post the answer if you find it.
@zacharylee9030 · 4 years ago
I think he has already used the optimal step size sk in the iteration. He didn't tell us what sk looks like; he just showed the final equations to explain the idea of good or bad convergence.
@ky8920 · 3 years ago
At the limit, [x, y] = [x, y] - [sx, bsy], and to minimize f = 0.5x^2 + 0.5y^2 we must have |x| = |y|. So |x - sx| = |y - bsy|, which gives s = 2/(1 + b), and [x, y] = [x, y] - [sx, bsy] = [(1 - s)x, (1 - bs)y]. We get x = (b - 1)/(b + 1) * x_old, y = (1 - b)/(b + 1) * y_old, etc.
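A small experiment matching this derivation (a NumPy sketch; b = 0.1 and 5 steps are arbitrary choices). It runs exact-line-search gradient descent on f = ½(x² + b y²) from Strang's starting point (b, 1) and prints the per-step ratios, which come out as (b-1)/(b+1) for x and (1-b)/(1+b) for y.
```python
import numpy as np

b = 0.1                                   # 0 < b < 1, the lecture's small eigenvalue
A = np.diag([1.0, b])                     # f(v) = 0.5 * v'Av = 0.5*(x^2 + b*y^2)

v = np.array([b, 1.0])                    # Strang's special starting point (b, 1)

for k in range(5):
    g = A @ v                             # gradient (x, b*y)
    s = (g @ g) / (g @ A @ g)             # exact line-search step for a quadratic
    v_new = v - s * g
    print(v_new[0] / v[0], v_new[1] / v[1])   # -> (b-1)/(b+1) and (1-b)/(1+b) each step
    v = v_new

print((b - 1) / (b + 1), (1 - b) / (1 + b))   # -0.8181..., 0.8181...
```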
@RC98.19 · 4 years ago
Around 40:27: does anybody know how to derive the result about the reduction rate involving m/M (the condition number)? Any tips or references?
@Hotheaddragon · 4 years ago
By condition number I guess he meant lambda_max / lambda_min (max eigenvalue / min eigenvalue), which was 1/b for that example.
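As a worked check of how the two answers fit together: for the lecture's quadratic the Hessian is diag(1, b), so M = 1 (largest eigenvalue) and m = b (smallest). The classical steepest-descent factor (M - m)/(M + m) then equals (1 - b)/(1 + b), which is exactly the ratio appearing in the xk, yk, fk formulas at 45:35, and the condition number M/m is 1/b.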
@ronsreacts · 3 months ago
💯👍
@jerrywilsonwilliams2431 · 4 years ago
❤️❤️❤️❤️❤️
@shenzheng2116 · 4 years ago
At 26:51, the professor writes gradient(f) = entries of X^-1. Does anyone know how to get that equation? Thanks!
@samuelyeo5450 · 4 years ago
If f(X) = -ln(det(X)), then gradient(f) = (derivatives of det(X))/det(X) in matrix form, which is the same as a matrix of the entries of X^-1, entry by entry. I'm not too certain myself, but it makes sense to me.
@yuchaoli6385 · 4 years ago
en.wikipedia.org/wiki/Adjugate_matrix gives the answer.
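A symbolic check in the 2×2 case (a SymPy sketch). Differentiating ln(det X) entry by entry gives the entries of X^-1 transposed, which coincides with X^-1 when X is symmetric (e.g., positive definite).
```python
import sympy as sp

x11, x12, x21, x22 = sp.symbols('x11 x12 x21 x22')
X = sp.Matrix([[x11, x12], [x21, x22]])     # a generic 2x2 matrix

f = sp.log(X.det())                         # f(X) = ln(det X)

# Matrix of partial derivatives d f / d X_ij
grad = sp.Matrix(2, 2, lambda i, j: sp.diff(f, X[i, j]))

print(sp.simplify(grad - X.inv().T))        # zero matrix -> grad(ln det X) = (X^-1)'
```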
@SuperDeadparrot · a year ago
What the hell is a Hessian?
@pnachtwey · a year ago
He is too long-winded. Why not use a simple function of x and y, find the derivatives, and start doing a few iterations? He finally gets to gradient descent. Gradient descent works, but there are better algorithms; the line-search idea is a good start. WTF is wrong with this guy? A simple Python program or even Excel would be much more meaningful. Thumbs down.
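In that spirit, a minimal fixed-step gradient-descent loop on a simple function of x and y (the function, step size, and iteration count here are arbitrary illustrations, not taken from the lecture):
```python
def grad(x, y):
    # gradient of f(x, y) = (x - 1)^2 + 10*(y + 2)^2
    return 2 * (x - 1), 20 * (y + 2)

x, y = 0.0, 0.0          # starting point
step = 0.04              # fixed step size (needs step < 2/20 = 0.1 to converge here)

for k in range(50):
    gx, gy = grad(x, y)
    x, y = x - step * gx, y - step * gy

print(x, y)              # approaches the minimizer (1, -2)
```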
@John-wx3zn · 7 months ago
This sounds like a bunch of nonsense.
@beloaded3736 · a year ago
Wonderful fella professor ☺️