"Please, take a minute to pause and convince yourself that everything on this board is accurate." So difficult to do when I was in school ("several" moon ago) madly scribbling down everything before it got wiped off the board, but now with the internet, with videos, and most importantly with a person who wants you to learn, this is so much easier to absorb. I'm looking forward to teaching my children and using your wise words. Thank you!
@REALdavidmiscarriage4 жыл бұрын
Don't homeschool your kids! you'll screw them up for life!
@berylliosis52504 жыл бұрын
@@REALdavidmiscarriage And your evidence for this is..? Homeschool has issues, but so does regular schooling.
@REALdavidmiscarriage4 жыл бұрын
@@berylliosis5250 Dude, in my line of work I got to know a lot of people who were homeschooled, and they all show antisocial tendencies and varying degrees of depression, but most of all they all hate their parents for forcing them into being homeschooled. Most of them have an extremely hard time making friends or socialising with others. How are you supposed to learn to work in a group with kids your age if you don't have the social structure of a school? Also, why not trust people who have studied a subject for years to teach your kids, over your own superficial knowledge of science and literature? And it's almost always the parents who want this whole homeschooling thing, never the children, because they have serious attachment problems with their kids and can't let go of them because they are so obsessive. Please get over yourselves, homeschooling parents!
@berylliosis52504 жыл бұрын
@@REALdavidmiscarriage I know a bunch of people who've been homeschooled too. They've been socially capable, intelligent, mentally healthy (in one case, far more so than when they were in public school), and completely educated - potentially more so than their peers. They started homeschooling by mutual consent with their parents. Anecdotes don't prove anything here. While I personally wouldn't want to be homeschooled or to homeschool myself, there are some people who thrive in that kind of system.
@REALdavidmiscarriage4 жыл бұрын
@@berylliosis5250 No shit, you just proved my point: exceptions prove the rule. Also, you aren't bringing any evidence for it being as good as regular school or better. That's not how this works. You can't just say unicorns exist and ask me to disprove it. You are the one making a bold claim here in comparing homeschooling with regular schools, so you have to bring factual evidence, but you are using anecdotes yourself. So why don't we just slow down a bit and treat this for what it is: an argument based on anecdotes, not some scientific research paper. Maybe 1 in 1000 students might thrive off of homeschooling. Yeah, and maybe 1 in a few million people win the lottery, so? Does that mean it is worth playing the lottery?
@nikitakipriyanov72604 жыл бұрын
12:00 And if A isn't symmetric, the derivative could be represented as (A+At)x, where At is A transposed. Which also looks nice.
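A quick numerical check of that formula (my own sketch, not from the video; the matrix A and point x below are made up, and numpy is assumed to be available):

```python
import numpy as np

# Made-up, deliberately non-symmetric A and an arbitrary point x
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([0.5, -1.0])

def f(v):
    return v @ A @ v  # the quadratic form x^T A x

# Central-difference gradient of f at x
eps = 1e-6
grad_fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                    for e in np.eye(2)])

print(grad_fd)        # numerical gradient
print((A + A.T) @ x)  # the (A + A^T) x formula from the comment
# The two agree up to floating-point error; 2*A@x would not match here,
# because this A is not symmetric.
```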
@ritvikmath4 жыл бұрын
great point :)
@tx67793 жыл бұрын
One question: why is the derivative in the second example a column vector? (9:35) I thought it was a row vector, similar to the form at 3:38 (the first row: [df/dx1 df/dx2]). A great video! (It is the same problem as Ravi Shankar's from two months ago.)
@countmonkey29902 жыл бұрын
me too
@danielcordeiro60032 жыл бұрын
I think you are correct, at 10:23 he does say that "if you had 3 different functions and 4 different variables you would have a 3 by 4 matrix, i.e. 3 rows and 4 columns". And the result would be 2*xt*A
@liatan31612 жыл бұрын
Me too! I think it should be a row vector, and this pushed me to go back to see the video again
@userozancinci Жыл бұрын
same! is there any answer?? was the instructor wrong?
@Tom-qz8xw11 ай бұрын
Yeah, he's mixing numerator and denominator layout :/ In numerator layout, the derivative of a vector function by a scalar is a column vector, and of a scalar function by a vector is a row vector. In denominator layout, the derivative of a vector function by a scalar is a row vector, and of a scalar function by a vector is a column vector. (*"by" = derivative with respect to)
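A compact side-by-side of the two conventions mentioned above, assuming A is symmetric for the quadratic form (my own summary, not from the video):

```latex
% Numerator layout: one function per row. Denominator layout: one
% variable per row. The two answers are transposes of each other.
\begin{aligned}
\text{numerator layout:}\quad
  \frac{\partial (Ax)}{\partial x} &= A,
  &\quad \frac{\partial (x^{\top}Ax)}{\partial x} &= 2x^{\top}A \\
\text{denominator layout:}\quad
  \frac{\partial (Ax)}{\partial x} &= A^{\top},
  &\quad \frac{\partial (x^{\top}Ax)}{\partial x} &= 2Ax
\end{aligned}
```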
@the_iron_laws77104 жыл бұрын
Wow. I haven't taken calculus in years and this video made taking derivative of a matrix seem easy to do and understand. Well done as teaching well is an art form unto itself.
@ritvikmath4 жыл бұрын
Glad you liked it!
@user-ib4bg9kg5s4 жыл бұрын
Everyone is sleeping and I'm here watching derivatives of matrices
@danielchmiel77874 жыл бұрын
Relatable
@doce76064 жыл бұрын
'Everyone' includes all persons, presumably... that would include the observer, so this sentence is inadmissible or meaningless... PS: I am only a minor student of logic, so I praise the observer's meaning... peace
@danielchmiel77874 жыл бұрын
@@doce7606 "except for me" is always implied
@doce76064 жыл бұрын
@@danielchmiel7787 not to a nit-picking logician, which normally I'm not, lol, i had just been reading Quine..
@danielschwegler52204 жыл бұрын
@@doce7606 "everyone" makes no statement about the one who said it
@wanjadouglas30584 жыл бұрын
You're good at this ... extremely amazing....would you mind making a video on the following: 1. Maximum Likelihood Estimation 2. GMM 3. GLS
@datasciencewithshreyas18064 жыл бұрын
amazing, love the energy.
@ritvikmath4 жыл бұрын
Thank you!
@dylanbeck36073 жыл бұрын
You are an absolute life-saver! I am a transfer student studying chemical engineering at UC Davis and your videos match up perfectly with what we are taught :) You have helped tremendously and have given me the knowledge to solve my overly complicated problem sets. Keep making videos and I'm certain you've helped many others as well. Brilliant instructor.
@vanessamarumo62502 жыл бұрын
are you still studying chem eng?
@shetro10142 ай бұрын
I started with the PACF video... now I am almost binge-watching your math series... amazing how the things that you teach get stuck for days... some of the lines will stay forever... great video
@johnk81743 жыл бұрын
You are really good at what you do (i.e. making this simple and understandable). Hats off to you.
@ethanbartiromo28884 жыл бұрын
I got this randomly from KZbin’s algorithm, and I’m gonna give this man a follow! I’m a math major
@8304Hustla4 жыл бұрын
in like the first week or something? you see there is some weird shit going on right?
@ethanbartiromo28884 жыл бұрын
@Roman Koval everything is probability
@ethanbartiromo28884 жыл бұрын
@Roman Koval literally the very existence of an electron in a place in space is a probability, and electrons are building blocks for literally every material object
@redangrybird75644 жыл бұрын
You are a wizard, thanks. I've watched the video 3 times and picked up a few things that I didn't the first time. I'm a little slow though.
@divyamanify Жыл бұрын
Absolutely love it! It was so useful to have the analogy between regular calculus and matrix calculus shown. Makes things much more intuitive.
@RaviShankar-jm1qw3 жыл бұрын
Hi Ritvik! One doubt ---> At 9:04 of the video, shouldn't the resultant matrix be 1*2 and not 2*1? We are multiplying 1*2 by 2*2, so the result should be 1*2, not 2*1. Please correct me if I am wrong. A fan of your videos!
@wiwl60513 жыл бұрын
I think you are right. I have the same question.
@kilian82504 жыл бұрын
So it’s basically a weird notation for a Jacobian?
@christophecornet56694 жыл бұрын
I was thinking the same thing
@obilisk14 жыл бұрын
@@ramakrishnaamitr10 even though he doesn't write them fancy, with how he does the math it looks like these are partial derivatives.
@richardaversa71284 жыл бұрын
@@ramakrishnaamitr10 he isn't using the appropriate symbol, but he is indeed performing partial derivatives
@seanki984 жыл бұрын
Okay, so he looks at the function x -> Ax. This is a linear transformation, and the jacobian of any linear transformation is the linear transformation itself. This makes sense because you can think of the Jacobian as the best linear approximation for any function between R^n and R^m, whether it be linear or not. Now, in some sense, yes you can say that the derivative of the matrix is the Jacobian, because a matrix, after all, represents a linear function. As already stated, the derivative of a linear function is basically the Jacobian. I think the moral of this video is that it is best to actually think in terms of function from R^n -> R^m, (vector-valued functions) Does this clarify things?
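A small sketch of that point, using finite differences (my own illustration; A and x0 are made-up values and numpy is assumed):

```python
import numpy as np

# The Jacobian of the linear map f(x) = Ax, estimated numerically,
# comes out equal to A itself.
A = np.array([[2.0, -1.0],
              [0.5,  3.0]])
f = lambda v: A @ v

x0 = np.array([1.0, 2.0])
eps = 1e-6

# J[i, j] = d f_i / d x_j, one column per input variable
J = np.column_stack([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
                     for e in np.eye(2)])

print(np.allclose(J, A))  # True: the best linear approximation of a linear map is the map itself
```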
@seanki984 жыл бұрын
@Aletak 13 yeah, the Jacobian represents a local linear transformation, which describes how much you are stretching or squishing space. The determinant of the transformation gives you what the area is scaled by, which is why it comes up when you change variables :)
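A tiny illustration of the determinant-as-area-scale point (my own example with a made-up matrix):

```python
import numpy as np

# Under the linear map x -> Ax, the unit square spans the parallelogram
# with edge vectors A e1 and A e2; its area equals |det A|.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

u, v = A[:, 0], A[:, 1]                 # images of the unit square's edges
area = abs(u[0] * v[1] - u[1] * v[0])   # 2D cross product magnitude

print(area, abs(np.linalg.det(A)))      # both 6.0: areas scale by |det A|
```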
@jean-michelgonet94834 жыл бұрын
Came here looking for the LOWESS algorithm, and it turns out that the derivative of xTAx plays a role in it. You helped me understand what matrix differentiation is, plus you solved my very particular need. Thanks.
@Clairesuismoimaispas5 жыл бұрын
this video just saved me!!! Exactly what I need for my Econometrics assignment!
@Bennilenny5 жыл бұрын
lol same
@kingfrozen42574 жыл бұрын
The derivative of Ax is A^T
@fjficm3 жыл бұрын
This channel is what we ALL needed. It's great, you're a genius. You should be a uni lecturer.
@algotrader90543 жыл бұрын
Great video, you have a way of drilling the concept into people's heads. Just awesome.
@ritvikmath3 жыл бұрын
I appreciate that!
@suyashsreekumar3031 Жыл бұрын
This really simplifies the matrix derivative. Thanks a lot for making this so simple to understand!
@zheyu27014 жыл бұрын
13:15 Think of rearranging k*x^2 as x^T*k*x, since x is a scalar. That is just the analog of the quadratic form x^T*A*x.
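Written out for the 1x1 case the comment describes (my own note):

```latex
% When x is a scalar and A = k is 1x1, the quadratic form collapses to
% the familiar single-variable case:
x^{\top} k \, x = k x^{2},
\qquad
\frac{d}{dx}\left(x^{\top} k \, x\right) = 2 k x = 2 A x
```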
@garrycotton70944 жыл бұрын
Indeed :) - I've always thought of x^T*k*x as the vector form of quadratic too.
@JeffersonRodrigoo4 жыл бұрын
Nice!
@tachyon77774 жыл бұрын
Sure we can take the derivative of a matrix! It just depends on what the function is. In the example shown in the video the function output is a vector, but it could also have been a matrix output. In that case we would have a rank-4 tensor as the derivative, assuming the input and output are each 2-dimensional. The main idea is to understand what a Jacobian matrix is, and then you will see how all of these are various special cases of that general idea. To rephrase: yes, we don't take the derivative of just any matrix, as that makes no sense, in the same way it doesn't make sense to take the derivative of a vector. The derivative is defined for a function. But no matter what the output of a function is, be it scalar, vector, tensor or matrix, there is always a way to define its derivative.
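As a rough sketch of the shape being described, here is a finite-difference version for a made-up matrix-to-matrix function f(X) = XX (my own illustration, assuming numpy):

```python
import numpy as np

def f(X):
    return X @ X  # a matrix-valued function of a matrix

def jacobian_4d(f, X, eps=1e-6):
    # J[i, j, k, l] = d f(X)[i, j] / d X[k, l], via central differences
    J = np.zeros(f(X).shape + X.shape)
    for k in range(X.shape[0]):
        for l in range(X.shape[1]):
            dX = np.zeros_like(X)
            dX[k, l] = eps
            J[..., k, l] = (f(X + dX) - f(X - dX)) / (2 * eps)
    return J

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(jacobian_4d(f, X).shape)  # (2, 2, 2, 2): every output entry vs every input entry
```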
@astrobullivant59084 жыл бұрын
A matrix inherently has discrete, integral indices, so it can't be differentiated, but you can differentiate a function whose coefficients are expressed by a matrix
@seanki984 жыл бұрын
I'd even go further and just say that you can identify a matrix with a vector in R^{nm} and use the idea of the Jacobian matrix like you talk about. I don't think it is necessary to go into the idea of rank unless you specifically care about tensor calculus. Even still, In that case, it is still basically vectors, except you might be taking tensor products with elements in the dual space. I absolutely agree that the main idea is to understand what a Jacobian matrix is
@seanki984 жыл бұрын
@@astrobullivant5908 The fact that the indices are discrete doesn't matter- a vector also has discrete indices! You don't differentiate with respect to the index number, but with respect to whatever variable each component depends on. If the matrix is constant, like [ 1 2 ; 3 4], then the derivative would just be the zero matrix.
@astrobullivant59084 жыл бұрын
@@seanki98 You're right, I'm wrong.
@TawhidShahrior3 жыл бұрын
man you deserve more spotlight. thank you from the bottom of my heart.
@ashablinski4 жыл бұрын
Thanks for all your work ritvik! Especially explaining things with a PURPOSE, not just math porn with no applications in real world.
@b.f.skinner43834 жыл бұрын
Super easy to follow along and clearly explained, thank you!
@ritvikmath4 жыл бұрын
Glad it was helpful!
@thirdreplicator3 жыл бұрын
You're a great communicator. Go Bruins!
@ritvikmath3 жыл бұрын
go Bruins!
@doce76064 жыл бұрын
Chandrashekar would be proud. I'm learning. Thanks
@alejrandom65923 жыл бұрын
12:48 the derivative is equal to 2Ax only when A is symmetric, that is, A=A^T. The more general derivative is (A+A^T)x.
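A short component-wise derivation of that more general formula (my own working, not from the video):

```latex
% Write x^T A x = \sum_{j,k} x_j A_{jk} x_k and differentiate w.r.t. x_i:
\frac{\partial}{\partial x_i} \sum_{j,k} x_j A_{jk} x_k
  = \sum_k A_{ik} x_k + \sum_j x_j A_{ji}
  = (Ax)_i + (A^{\top}x)_i ,
\qquad\text{so}\qquad
\nabla_x \, x^{\top} A x = (A + A^{\top})\, x ,
\quad\text{which reduces to } 2Ax \text{ when } A = A^{\top}.
```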
@alejrandom65926 ай бұрын
Thanks, me from the past 😊
@harry3851 Жыл бұрын
You saved my warm-up quiz on Introduction to ML. Many thanks!
@azrielstephen2 жыл бұрын
At 10:25 you said for 3 different functions and 4 different variables you'd have a 3x4 matrix. But the one you solved above only had 1 function and 2 variables x1 and x2. Why then did you create a 2x1 matrix instead of a 1x2?
@peterlinhan3 ай бұрын
Dude, you have a talent for teaching.
@yelircaasi5 жыл бұрын
You are the man. I really appreciate your clear explanations.
@christosathanasiadis66563 жыл бұрын
When you calculated the derivative of Ax with respect to the vector x, you added the partial derivatives of the functions f1 and f2 as row vectors of the matrix. Then, when you calculated the gradient of f = x^{T}Ax with respect to x, the result was a column vector. Shouldn't the first result be A^{T} in that case?
@onguyenthanh11373 жыл бұрын
same thought bro
@alexandersmith6140 Жыл бұрын
This is astonishingly easy to follow.
@awesomebroification20 күн бұрын
Wow, this just clarified so much. Thank you thank you.
@joybagchi3 жыл бұрын
Who are the 226 people who didn't like the video? Maybe the ones who didn't understand why the derivative of kx = k, and the derivative of kx^2 is 2kx. This is mind-blowingly intuitive. I've never heard a matrix being called a bunch of scalars in a box. All the videos made by ritvikmath are excellent videos. Although I have used Eigenvalues, Eigenvectors, and derivatives of linear combinations extensively, it never made this kind of intuitive sense.
@mahyaf9142 жыл бұрын
You are just AMAZING !! So clear and easy to get!
@orenjoffe48082 ай бұрын
Great interpretation of calculus to linear algebra and back to calculus.
@berwingan41004 жыл бұрын
Dude I just wanted to let you know that your explanation is very intuitive and noice
@ritvikmath4 жыл бұрын
thanks!
@iidtxbc4 жыл бұрын
I love your energy in what you are doing. I cheer for you and thank you for making great contents!
@ritvikmath4 жыл бұрын
I appreciate that!
@talibdaryabi9434 Жыл бұрын
At 9:41, could you tell me why you took a column vector and not a row vector? Is it a rule that we should take it as a column vector? How do we know what the shape of the matrix or vector should be?
@mycreation26764 жыл бұрын
Wonderful! Amazing skill at clearing up students' doubts.
@knp43565 жыл бұрын
Def look into becoming a professor. Thanks for the vids.
@ritvikmath5 жыл бұрын
Thank you!
@Alicia-em8bt2 жыл бұрын
This video is really helpful! Thanks for making this concept so clear!!!
@sripradpotukuchi94154 жыл бұрын
This video helped me a lot! Love your energy, keep 'em coming!
@ritvikmath4 жыл бұрын
Thank you! Will do!
@i-fanlin5684 жыл бұрын
It is very helpful! I am learning linear models, but I am not familiar with derivatives of matrices. Thank you!
@ChristianLezcano-n2u Жыл бұрын
An incredibly easy-to-follow class, thanks a lot!
@Mgggggggggggggggggggggg3 жыл бұрын
Thank you so much for making all of these videos!
@zeppelinpage8612 жыл бұрын
Very good content. Democratizing linear algebra
@tungdinh41144 жыл бұрын
I have a question: for the first derivative, d(Ax)/dx, why do we arrange the result by rows, while for d(x'Ax)/dx we arrange it as a column? Thank you
@taosun4594 жыл бұрын
Same question for this...
@Shenron5574 жыл бұрын
Hmm... Good question. I didn't notice that before I read your comment. It could be because of the x' present at the beginning of x'Ax. I'm not sure though.
@p.stroker89204 жыл бұрын
That's exactly what I thought.
@wheresthesauce38864 жыл бұрын
Maybe he is writing the d(Ax)/dx in matrix notation while d(x^(T)Ax)/dx in vector notation? He does use square brackets for the former and parentheses for the latter, but I'm not too sure myself.
@snes094 жыл бұрын
Because there's a difference between X and the transpose of X. X is a column vector and so X transpose is a row vector.
@BLITZ01004 жыл бұрын
When you take partial derivatives but use normal derivative notation...
@kchannel53174 жыл бұрын
Lol that's exactly what I was thinking.
@NoahElRhandour4 жыл бұрын
No its correct the way he does it. And 65 morons liked this...
@BLITZ01004 жыл бұрын
@@NoahElRhandour Ding dong, you're Mr. Wrong, go back to zero. At 3:34 he writes (df_1/dx_1) etc. but uses normal d's when he's writing out a derivative of a multi-variable function with respect to one of the parameters. This is known as a partial derivative and is written with a squiggly d, not a normal d. You could interpret his d's as squiggly, but in that case he wrote out partial derivatives of single-variable functions with a squiggly d, which is also incorrect notation. Really rude to call people who have a lesser degree of education morons (this isn't simple mathematics), and even worse to call people morons when they're right and you're wrong. Maybe there is a special notation that uses normal d's when talking about partial derivatives of multi-variable matrix functions, but I doubt it... And if that is the case, no one with that minor misunderstanding is a moron. Don't be a prick.
@BrikaEXE4 жыл бұрын
Yee, it seems logical to use partial derivatives because of the different x1 and x2
@gesuchter4 жыл бұрын
Wow, that was a brilliant video! I really like the teaching style. +1 Subscriber
@ritvikmath4 жыл бұрын
Awesome, thank you!
@indylawi50214 жыл бұрын
Great job clearing up this topic.
@spurious4 жыл бұрын
Some advice: The kids that need this video most have likely learned about gradients. They may have heard of this concept as a 'hyper-gradient,' which is a less common way it can be taught in some schools. In either case, I've found that introducing it to them as a stack of gradients, one for f1 and one for f2, can help a lot. This puts things in terms that many kids would have already learned. Also, it may help to determine the ideal background of the viewers you're targeting before making the video, just to crystallize the constraints you should be working with in making it. If you do this, it's not apparent, and maybe identifying the ideal background explicitly can help. Finally, many concepts, especially differential operators like derivatives, may have other names. In this case, Jacobian is an obvious one. Listing these aliases may help students that need additional resources.
@ritvikmath4 жыл бұрын
love the detailed feedback, thanks so much!
@zoso254 жыл бұрын
Close your eyes and you'll hear Russel Peters explaining matrix derivatives.
@moonsun85355 жыл бұрын
Actually, for the calculation of d(Ax)/dx you use the numerator-layout notation and the result is A, but when you compute d(x^T Ax)/dx, you use the denominator-layout notation, for which the result is 2Ax; if you use the numerator-layout notation, the result should be 2 x^T A. Reference: en.wikipedia.org/wiki/Matrix_calculus
@tissuewizardiv59825 жыл бұрын
I found the same thing: d(xTAx)/dx = 2xTA instead of 2Ax. The difference is that the result is a row vector instead of a column vector. I also used the same Wikipedia resource for definitions.
@yanweidu19055 жыл бұрын
@@tissuewizardiv5982 Agreed.
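For what it's worth, the two answers discussed in this thread are transposes of each other, which is exactly the numerator-vs-denominator layout difference (my own note):

```latex
% The row-vector answer and the column-vector answer carry the same
% information; for symmetric A they are literally each other's transpose:
\left(2 x^{\top} A\right)^{\top} = 2 A^{\top} x = 2 A x
\quad\text{when } A = A^{\top}.
```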
@novanova37172 жыл бұрын
Thank you for being a tremendous help!
@LOTRT14 жыл бұрын
You just saved me, an Econometrics student from Korea; thanks a lot
@Grassmpl4 жыл бұрын
Time to reward yourself with some Milkis, kimchi, and of course, Gangnam Style.
@mohanace25334 жыл бұрын
Very clearly explained. Subscribed. Thanks
@vijayakrishna074 жыл бұрын
Your teaching quotient is very high.
@waqasdar15503 жыл бұрын
superb! .... Everyone is sleeping and I'm here watching derivatives of matrices
@mmczhang4 жыл бұрын
Excellent! I was looking for the explanation of derivative of linear transformation for a long time!
@vincezzz97575 жыл бұрын
Excellent explanation. Thank you!
@saapman10 ай бұрын
Wow. Excellent video. Thanks!
@ritvikmath10 ай бұрын
Glad you liked it!
@MrCreeper20k4 жыл бұрын
Math is so cool! I half suck at linear algebra but seeing all the crazy stuff you can do with it makes me want to go back and learn it really well.
@stekim4 жыл бұрын
thanks for the video! i recommend manual focus on the whiteboard, if possible though!
@ritvikmath4 жыл бұрын
Thanks for the tip! I've fixed this in my more recent videos thanks to suggestions like yours :)
@qqq_Peace4 жыл бұрын
Thanks for your awesome video!
@dilinijayasinghe813410 ай бұрын
great video:) you're really good at explaining. Thank you very much!!
@ritvikmath10 ай бұрын
You're very welcome!
@abhishekarora40073 жыл бұрын
exactly what i was looking for !
@palashkamble23254 жыл бұрын
Amazing video. Thanks man. Subscribed right away.
@amba19744 жыл бұрын
Your concepts are very clear, and you're a good teacher.
@azavier-a2 жыл бұрын
you're a very good teacher
@Vitenuto3 жыл бұрын
Maaan really good video, thanks for that!
@farhanhyder73043 жыл бұрын
Thanks, very good video. helped me in understanding everything
Жыл бұрын
That's very good content, it helped me out a lot, thank you good sir
@leoxu96733 жыл бұрын
Earned a sub. Nice job man, thank you so much.
@Han-ve8uh3 жыл бұрын
Could you clarify my confusion with notation and shapes? This is critically important to understanding matrix implementations of backpropagation. At 10:25 you mentioned that 3 functions and 4 variables give a 3x4 matrix. This is consistent with the Ax example, where you wrote the different f down the rows and the different x across the columns. However, in the xTAx example it's 1 function with 2 variables, so I was expecting a 1x2 matrix, but you arranged it as 2x1; why? If I had set up the result to be 1x2 following your rule at 10:25, then the resulting derivative would not be Ax but xTA. I know that when working by hand we can transpose/set things up however we like as long as the matrices multiply correctly, but when it comes to implementing this in programs, the direction of multiplication of the whole chain, along with the matrices' shapes (transposed or not), must be correct relative to each other, and I can't find a good teaching source for this. Another video (2:50-3:03) kzbin.info/www/bejne/n4jbimqMmciGfpo&ab_channel=BenLambert emphasizes that the shape of the resulting derivative must be the same as the vector you're differentiating with respect to. I do see how your xTAx example then ends up with a 2x1, which is the same dimension as how x started (2x1 too), but then this "same shape rule" fails to apply to your Ax example, where the output shape became 2x2, which is not the same as the x shape of 2x1. Please help! I don't know who or what is right or wrong, and which of these are conventions or rules.
@onguyenthanh11373 жыл бұрын
hi have you got any clarification? i got the same confusion also.
@shuaili14572 жыл бұрын
en.wikipedia.org/wiki/Matrix_calculus#Vector-by-vector This might help
@charumathibadrinath73334 жыл бұрын
Thank you! This video really cleared things up for me :)
@ritvikmath4 жыл бұрын
I'm so glad!
@vinceb80414 жыл бұрын
Very impressive! I like how the total derivative "emerges" from the xtAx form. It really shows how effectively linear algebra notation can be used to assemble new structures. One comment I would make is that as far as I know when taking the partial derivative it is common to use ∂ instead of d.
@danielemingolla3 жыл бұрын
Hi man, at minute 9:28 there is one function of two variables; why have you written a matrix with 1 column and two rows and not vice versa?
@djprometheus9234 жыл бұрын
ritvikmath up next just waitin for people to stop sleepin on him
@cjspear2 жыл бұрын
Excellent video, thank you!
@aaskyboi3 жыл бұрын
BRAVO! This explanation has helped me tremendously...THANK YOU!!
@xhongi33902 жыл бұрын
Excellently explained
@elliekongnamul Жыл бұрын
Thank you so much for this amazing video!!! It was exactly what I needed
@ritvikmath Жыл бұрын
You're so welcome!
@dylwhs4 жыл бұрын
This is the first time I have seen this, even though I am a post grad physics grad. Thanks!
@Pete-Prolly4 жыл бұрын
Suppose 2×2 matrix=A has a characteristic polynomial = C.P(A) = λ² - bλ + c then dƒ/dλ = 2λ - b Cayley Hamilton: A² - b•A + c•I means dƒ/dA = 2A - b•A which looks an awful lot like 2λ - bλ Oh, that doesn't mean anything I'm just using power rule with A & λ instead of x.... right? Well what is rhe definition of a derivative? lim [ (ƒ(x+Δx)-ƒ(x))/Δx] = dƒ/dx Δx→0 What about this? lim [ (ƒ(λ+Δθ)-ƒ(λ))/Δλ] = dƒ/dλ? Δλ→0 What about this? lim [ (ƒ(A+ΔA)-ƒ(A))/ΔA] =dƒ/dA ? ΔA→0 Ok, fine Im doing the same thing again with limits now. but suppose you define a 2×2 matrix=A with actual numbers and then you say ƒ[A] = A² = AA and you speculate dƒ/dA = d/dA[A²] =2A Right??? I mean you actually write entries in the matrix in this limit below s.t. I = Identity matrix only instead of this: lim [ (ƒ(A+ΔA)-ƒ(A))/ΔA] ΔA→0 you cant ÷ a matrix, so you do this lim [ ((A+ΔAI)² -A²)(ΔA)⁻¹ ] = ΔA→0 lim [ A² + 2ΔAI + (ΔAI)² -A² (ΔA)⁻¹ ] ΔA→0 ΔAI = ΔA•Identity matrix = [ΔΑ 0] [0 ΔΑ] = ΔΑΙ (ΔΑΙ)⁻¹ = (1/det(ΔΑΙ))•adj(ΔΑΙ)= [1/ΔΑ 0 ] [ 0 1/ΔΑ] = (ΔΑΙ)⁻¹ We can get 2A.... right? Or is it, as you say, just like taking the derivative of a constant? (I leave this as an exercise for the reader to verify.) Just playing... I'M DOING THIS!! NO CONSTANT, BABY!! e.g. Claim: It is possible to take the derivative of at least one 2×2 matrix = A s.t. ƒ[A] = A² & d/dA [A²] = 2A according to "the limit definition of a derivative" and the definition of a function, ƒ. Proof of Claim: Let [ 1 1] [ 0 2] = A [ 1 3 ] [ 0 4 ] =A² [ 2 2 ] [ 0 4 ] = 2A [1/ΔΑ 0 ] [ 0 1/ΔΑ] = (ΔΑΙ)⁻¹ lim [ A² + 2ΔAI + (ΔAI)² -A² (ΔA)⁻¹ ] ΔA→0 lim [ A² + 2ΔAI + (ΔAI)² -A² (ΔA)⁻¹ ] ΔA→0 oh, look at that boy!! wait until that s**t cancels out (I kneew they wouldn't line up, but you see it!!) [1/ΔΑ 0]• ([1 3]+[2 2]+[ΔΑ 0]+[(ΔΑ)²0]-[1 3]) [0 1/ΔΑ] ([0 4] [0 4] [0 ΔΑ] [0(ΔΑ)²] [0 4]) as lim ΔA→0 Look at A² & -A² gone! canceled [1/ΔΑ 0]• ([2 2]+[ΔΑ 0]+[(ΔΑ)²0]) [0 1/ΔΑ] ([0 4] [0 ΔΑ] [0(ΔΑ)²]) as lim ΔA→0 now add those 3 matrices [1/ΔΑ 0][2+ΔΑ+(ΔΑ)² 2ΔΑ] [0 1/ΔΑ][ 0 4ΔΑ+(ΔΑ)²] as lim ΔA→0 Multiply [1/ΔΑ 0][2ΔΑ+(ΔΑ)² 2ΔΑ] [0 1/ΔΑ][ 0 4ΔΑ+(ΔΑ)²] as lim ΔA→0 = [(2ΔΑ+(ΔΑ)²)/ΔΑ 2ΔΑ/ΔΑ] [ 0/ΔΑ (4ΔΑ+(ΔΑ)²)/ΔΑ] as lim ΔA→0 = [(2+ΔΑ 2] [ 0 4+(ΔΑ)] as lim ΔA→0 = [(2+0 2] [ 0 4+0] = [ 2 2 ] [ 0 4 ] = 2A = d/dA[A²] therefore, it is possible to take the derivative of at least one 2×2 matrix = A s.t. ƒ[A] = A² & d/dA [A²] = 2A according to "the limit definition of a derivative" and the definition of a function, ƒ. ■ edit : I knew these wouldn't all line up, lol
@tanvipurwar60484 жыл бұрын
Wha-what did you do?
@sigma_z Жыл бұрын
dayyyyyyyyyum. So nicely explained. Thank you!
@samersheichessa43314 жыл бұрын
You are great ! great video, great representation, Thanks!
@anynamecanbeuse2 жыл бұрын
9:41 It’d be better to use the partial derivative notation since you have 2 variables essentially is that correct?
@TheR4Z0R9965 жыл бұрын
Great job, thanks a lot from italy. Keep up the good work ;)
@ritvikmath5 жыл бұрын
Wow all the way from Italy! Thank you :)
@ВладимирКалашников-з9с3 жыл бұрын
great explanation! Thanks a lot
@ritvikmath3 жыл бұрын
You are welcome!
@BlackmetalSM5 жыл бұрын
You are a great teacher!
@ritvikmath5 жыл бұрын
Aw thank you :)
@scottzeta30672 жыл бұрын
I don't understand 8:11, should matrix calculation strictly follow the sequence? why can we take Ax before x^T A?
@melbourneopera4 жыл бұрын
Interesting. I never learned this stuff in college, nor was it introduced to me before.
@hdrevolution1234 ай бұрын
Really useful video. Thanks
@Grassmpl4 жыл бұрын
With the xT A x case, we can let A be symmetric without loss of generality. If not, replace aij and aji with their mean and you get the exact same function. In fact, you would not get that derivative to be 2Ax if A weren't symmetric.
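A quick numerical illustration of that symmetrization trick (my own sketch with made-up values, assuming numpy):

```python
import numpy as np

# Replacing A with its symmetric part B = (A + A^T)/2 leaves the
# quadratic form unchanged, because the antisymmetric part contributes 0.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))   # generally not symmetric
B = 0.5 * (A + A.T)           # symmetric part

x = rng.normal(size=3)
print(np.isclose(x @ A @ x, x @ B @ x))  # True: x^T A x == x^T B x
# For the symmetric B, the gradient is 2 B x, i.e. (A + A^T) x.
```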
@seanki984 жыл бұрын
He didn't say "let A be symmetric without loss of generality". Instead, he said that he will only focus on the case when A is symmetric because that is the case which we care about and can apply to "principal component analysis"
@Grassmpl4 жыл бұрын
@@seanki98 I think you didn't understand what I'm saying. I mean, it is a FACT that A can be assumed symmetric without loss of generality. Thus, only considering the symmetric case involves NO LOSS OF INFORMATION. For each square matrix A, define f_A(x) = x^T A x. It holds that "for all" such A, symmetric or not, "there exists" a symmetric B such that f_A and f_B are identical functions. In fact B=(1/2)(A^T+A). DO YOU UNDERSTAND NOW YOU DUMB DIMWIT?
@yanbinliu12524 жыл бұрын
94mathdude That is cool to learn that A can be replaced with a symmetric matrix without loss of generality. Thanks a lot!
@berylliosis52504 жыл бұрын
@@Grassmpl Pretty sure there's only one DUMB DIMWIT here, and it certainly isn't Sean. Oh, wait, I'm here too, that makes two. Seriously, man, insulting somebody for not automatically understanding your poorly-worded and difficult-to-read comment isn't cool.
@liamdillon94653 жыл бұрын
Great video, thanks for sharing
@kiran101104 жыл бұрын
Great video! You’re really awesome at explaining things clearly!
@Gruemoth2 жыл бұрын
Sorry if my question is lame, but at 10:24 you say that "if you have 3 different functions and 4 different variables, you have a 3x4 matrix." Since we have 1 function and 2 different variables in the example, why don't we have a 1x2 matrix instead of a 2x1 matrix?