Go to LEM.MA/LA for videos, exercises, and to ask us questions directly.
@snnwstt a year ago
1:18 Just as an observation: while it is usual to see the quadratic form as presented here, I find the following a little bit more... elegant:

0.5 * ⟨x y z 1⟩ [W] {x y z 1}

with ⟨ ⟩ a row vector, { } a column vector, and [ ] a matrix. Here

W =  4  1  2 -2
     1  8  5 -3
     2  5  4 -4
    -2 -3 -4  0

which is symmetric if A is symmetric. Note that the minus signs in the last column and the last row are due to the original subtraction. The 0 stands in when the constant term is... zero.
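A quick numerical check of this construction (a sketch: the A and b below are read off from the blocks of W, with A the top-left 3×3 block and -b the last column):

```python
import numpy as np

A = np.array([[4., 1., 2.],
              [1., 8., 5.],
              [2., 5., 4.]])
b = np.array([2., 3., 4.])

# W packs the whole function into one matrix: A in the top-left,
# -b in the last column and row, the constant term in the corner.
W = np.block([[A,           -b[:, None]],
              [-b[None, :], np.zeros((1, 1))]])

x  = np.array([1.0, -2.0, 0.5])       # any test point
xh = np.append(x, 1.0)                # homogeneous coordinates {x y z 1}

print(0.5 * x @ A @ x - b @ x)        # the form from the video
print(0.5 * xh @ W @ xh)              # the single-matrix form: same value
```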
@vothiquynhyen09 6 years ago
I have to say that I love your voice, and the passion you have for the subject.
@joshuaronisjr 5 years ago
He talks a little like Feynman
@ekandrot 7 years ago
For your gradient descent, do you need the -b in there, e.g. x → x - a(Ax - b)? It seemed that without the -b and with a positive-definite matrix A, zero is the only solution. But with the -b, (-1, -2, 4) is the solution.
@MathTheBeautiful 7 years ago
Yes, you are correct!
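For future readers, a minimal sketch of that corrected iteration x → x - a(Ax - b), assuming A = [[4,1,2],[1,8,5],[2,5,4]] and b = (2,3,4), which give the solution (-1, -2, 4) mentioned above; the step size and iteration count are arbitrary safe choices:

```python
import numpy as np

A = np.array([[4., 1., 2.],
              [1., 8., 5.],
              [2., 5., 4.]])   # symmetric positive-definite
b = np.array([2., 3., 4.])

x = np.zeros(3)
alpha = 0.1                      # safe: all eigenvalues of A are below 2/alpha
for _ in range(1000):
    x = x - alpha * (A @ x - b)  # Ax - b is the gradient of 0.5 x'Ax - b'x

print(np.round(x, 6))            # [-1. -2.  4.], the solution of Ax = b
```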
@ijustneedaname47 3 years ago
This video really helped tie these concepts together for me. I really appreciate your posting it.
@omedomedomedomedomed 4 years ago
To understand the least-squares derivation, I checked this. Super helpful!!!
@jesalshah81 4 days ago
Omg, I have never learnt math like this before! So great!
@MathTheBeautiful 4 days ago
I'm glad you feel this way!
@bryan-9742 5 years ago
This is so cool. Love this channel. I'm learning so much that I should have learned years ago.
@sora290762594 3 years ago
great way of explaining quadratic optimization
@joaquingiorgi5133 2 years ago
Made this concept easy to understand, thank you!
@Userjdanon 2 years ago
Great video. This was explained very intuitively.
@TuNguyen-ox5lt 7 years ago
Gradient descent is a technique used in machine learning nowadays to optimize a loss function. This video is great.
@gerardogutierrez4911 4 years ago
Why does he talk like he's trying to get me to recapture the means of production from the bourgeoisie?
@MathTheBeautiful 4 years ago
Because he is lenin in that direction
@jjgroup.investments 2 years ago
Thanks for this awesome video
@devrimturker 3 years ago
Is there a relation between positive-definite matrices and convex sets?
@MathTheBeautiful 3 years ago
Yes, excellent intuition. The level set for a positive-definite quadratic form is a convex shape.
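A quick numerical check of that claim (a sketch; A is the matrix from the lecture, and the segment test below is exactly what it means for the level set to be convex):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[4., 1., 2.],
              [1., 8., 5.],
              [2., 5., 4.]])    # positive-definite

def f(x):
    return x @ A @ x            # the quadratic form

# Convex level sets <=> f on a segment never exceeds its endpoint values.
for _ in range(10_000):
    x, y = rng.normal(size=(2, 3))
    t = rng.uniform()
    assert f(t * x + (1 - t) * y) <= max(f(x), f(y)) + 1e-9
print("every segment stayed inside its endpoints' level set")
```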
@DiegoAToala 2 years ago
Thank you, so clear!
@ibrahimalotaibi2399 5 years ago
Monster of Math.
@s25412 3 years ago
7:15 what if your matrix is positive semi-definite? Wouldn't there be a minimum?
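(Not from the video, just an illustration of why the semi-definite case is delicate.) With a merely positive semi-definite A, f(x) = 0.5 x'Ax - b'x has a minimum only when b lies in the column space of A, and even then it isn't unique:

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 0.]])        # positive semi-definite, singular

def f(x, b):
    return 0.5 * x @ A @ x - b @ x

# b in the column space of A: a minimum exists, attained along a whole line.
b1 = np.array([1., 0.])
print(f(np.array([1., 0.]), b1), f(np.array([1., 7.]), b1))   # -0.5 -0.5

# b not in the column space of A: no minimum; f is unbounded below
# along the null direction of A.
b2 = np.array([0., 1.])
print([f(np.array([0., t]), b2) for t in (1., 10., 100.)])    # [-1, -10, -100]
```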
@bobstephens97 a year ago
Awesome. Thank you.
@MathTheBeautiful a year ago
Thank you!
@ashwinkraghu1646 4 years ago
Excellent teacher, and a life saver!
@user-xt9js1jt6m 4 years ago
Nice explanation, sir. You look like Jason Statham ❤️❤️❤️ I felt like an action star was giving a lecture on matrices ❤️❤️🙏
@MathTheBeautiful It is normal to say it in the past tense in my language, so I thought in it but wrote in English. So, no real reason.
@MathTheBeautiful 7 years ago
:) I just wanted to convey that the course is ongoing!
@TheTacticalDood 5 years ago
@@MathTheBeautiful Is it still ongoing? This channel is amazing, it would be sad to see it stop!
@みの-c5c 4 years ago
This really helps a lot in understanding matrix derivatives, and it's so clear. Thanks!!!
@somekindofbluestuff 3 years ago
thank you!
@serkangoktas5502 4 years ago
I always knew that something was off with this derivation. I am relieved that this wasn't because of my lack of talent in math.
@MathTheBeautiful 4 years ago
It's **never** you. It's always the textbook.
@johnfykhikc 7 years ago
Where can I find the statement? My search was unsuccessful.
@kaursingh637 5 years ago
Sir, you are very clear. Please give short lectures.
@joshuaronisjr 5 years ago
This is just a comment for me to look at in the future, but at some point he says that A will be mostly filled with zeroes before we start Gaussian elimination. A will be the covariance matrix (X^T X) (see the next video, on the least-squares solution). That it's mostly filled with zeroes indicates that most of the random variables (each column of X is a different random variable of the dataset) are independent of one another (or at least, if they ARE independent, then their covariance will be 0). However, Gaussian elimination involves linearly combining rows, so the matrices in between may NOT be sparse!

As for computer storage... I don't know much about it, but maybe computers store zeroes in a different way, so that sparse matrices are easier to store? Actually, I guess this comment is for more than just me... why can computers store sparse matrices well?
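On that last question: yes, sparse formats store only the nonzero values together with index arrays, so a matrix with k nonzeros costs O(k) memory instead of O(n^2). A minimal sketch using SciPy's CSR format (my choice of library, not something the video prescribes):

```python
import numpy as np
from scipy.sparse import csr_matrix

dense = np.array([[4., 0., 0., 0.],
                  [0., 8., 5., 0.],
                  [0., 5., 4., 0.],
                  [0., 0., 0., 2.]])

S = csr_matrix(dense)   # compressed sparse row storage
print(S.data)           # just the 6 nonzero values
print(S.indices)        # the column index of each stored value
print(S.indptr)         # where each row's values begin in `data`
```

And the caveat above is exactly right: elimination creates fill-in, which is why sparse solvers reorder rows and columns to keep the factors as sparse as possible.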
@telraj 3 years ago
Why skip the matrix calculus? It's not rocket science
@roaaabualgasim4882 3 years ago
I want examples or material to illustrate the idea of the method of maximization and minimization of a function with constraints (Lagrange multipliers) and with no constraints (via the quadratic form and the Hessian matrix) 😭