You are a lifesaver! I have an assignment related to this method and I understood it pretty well! THANK YOU! <3
@QT-yt4db 1 year ago
The dog at the end is lovely...
@QT-yt4db 1 year ago
Great video, but the voice is a little bit low, which makes it hard to hear without turning the volume up high...
@beeseb 1 year ago
🍵
@lenargilmanov7893 1 year ago
What I don't understand is: why use an iterative process if we know there's exactly one minimum? Just set the gradient to 0 and solve the resulting system of equations, no?
@mrlolkar6229 1 year ago
Those methods are used when you have, say, 10^6+ equations (for example in the Finite Element Method). With those methods you solve much faster than by setting all the derivatives equal to 0. And even though it seems you need every step to reach the minimum, that's not true: even with that humongous number of equations, you are usually close enough to the minimum to be satisfied with the answer without the remaining 95% of the iterations, and that's why those methods are so powerful (see the sketch after this thread).
@lenargilmanov7893 1 year ago
@@mrlolkar6229 Yeah, I kinda figured it out now.
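To make the trade-off concrete, here is a minimal sketch using SciPy; the matrix, its size, and the stopping behavior are illustrative assumptions, not taken from the video:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, spsolve

# A large, sparse, symmetric positive-definite system (illustrative size).
n = 100_000
A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Direct approach: solve grad f(x) = Ax - b = 0 exactly.
# Works here, but factorization cost and memory grow quickly with n and fill-in.
x_direct = spsolve(A.tocsc(), b)

# Iterative approach (conjugate gradient): stop once the residual is small
# enough, often after far fewer than n iterations.
x_iter, info = cg(A, b)

print(info, np.linalg.norm(x_direct - x_iter))  # info == 0 means CG converged
```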
@graphicsRat 1 year ago
Simple, clear. I love it. I want more. A whole series on optimization.
@dmit10 1 year ago
Another interesting topic is Newton-CG and what to do if the Hessian is indefinite.
@dmit10 2 years ago
I would suggest L-BFGS, partial Cholesky decomposition (version with limited memory), and "when to use proximal methods over non-proximal" (not just how they work).
@pablocesarherreraortiz5239 2 years ago
Thank you very much!
@swaggcam412 2 years ago
This is a great resource, thank you for sharing!
@from-chimp-to-champ1 2 years ago
Good job, Priya, elegant explanation!
@colin_hart 3 years ago
Thank you for creating this presentation. I arrived here while looking for exposition on Arnoldi and Lanczos iteration. Any new content about sparse iterative methods would be gratefully received. Cheers!
@Darkev77 3 years ago
Proximal GD next?
@MyName-gl1bs 3 years ago
I like fud
@zhangkin7896 3 years ago
It's 2021 and your class is still great. ❤️
@xplorethings 3 years ago
Really well done!
@edoardostefaninmustacchi2232 3 years ago
Excellent stuff. Really helped.
@exploreLehigh 3 years ago
gold
@ricardomilos9404 3 years ago
The video showed that A^-1 ≈ L^-1U^-1 = M. In the code, M^-1 = L * U was used (element-wise multiplication of L and U, not matrix multiplication). It would follow that the preconditioning only requires the diagonal of matrix U, since everything else cancels out. Should element-wise multiplication be implemented instead of matrix multiplication? And is it correct that M^-1 = LU (or should it be M^-1 = L^-1U^-1?)
@PriyaDeo 3 years ago
Please note that A^-1 ≈ U^-1L^-1 (not L^-1U^-1), since we are taking the LU factorization of matrix A: A ≈ LU => A^-1 ≈ (LU)^-1 = U^-1L^-1. In the code we did do matrix multiplication: the numpy matrix library uses '*' to denote matrix multiplication. For element-wise multiplication you need to use np.multiply(L, U).
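For readers puzzled by the order of operations, here is a minimal sketch of applying such a preconditioner as two triangular solves, assuming SciPy's dense LU factorization (which also returns a permutation matrix P); the variables are illustrative and not from the video's code:

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
r = np.array([1.0, 2.0])   # residual to be preconditioned

P, L, U = lu(A)            # A = P @ L @ U

# Applying M^-1 = (LU)^-1 = U^-1 L^-1 means two triangular solves,
# not an element-wise product of L and U.
y = solve_triangular(L, P.T @ r, lower=True)   # y = L^-1 P^T r
z = solve_triangular(U, y, lower=False)        # z = U^-1 y ≈ A^-1 r

print(np.allclose(z, np.linalg.solve(A, r)))   # True: exact LU recovers A^-1 r
```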
@mkelly66 3 years ago
All of your videos did a great job of clearly explaining the topics. Thanks, I really appreciated the clear explanations!
@MrFawkez 3 years ago
A great followup to the CG video. Thank you.
@frankruta4701 4 years ago
Is alpha_k a matrix or a scalar quantity?
@frankruta4701 4 years ago
Scalar... I just didn't flatten my residual (which was a matrix in my case).
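For reference, in the quadratic setting of the video the step size is a ratio of two inner products, hence a scalar; this is the standard textbook formula, written here from memory:

```latex
\alpha_k = \frac{r_k^\top r_k}{d_k^\top A\, d_k}
% Numerator and denominator are both inner products (1x1), so alpha_k is a scalar.
% For steepest descent d_k = r_k, giving alpha_k = (r_k^T r_k)/(r_k^T A r_k).
```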
@nickp7526 4 years ago
Great stuff! What range of subjects can you cover?
@nickp7526 4 years ago
Oh, I just got to the end :) Do you know anything about support vector machines? I have a class on it this semester and I would like a small introduction
@PriyaDeo 4 years ago
I have approximate knowledge of many things :P But to answer this question properly: topics from most college-level undergrad courses in Math & Computer Science, some topics from grad-level courses in Machine Learning, Computer Vision & Robotics, Software Engineering interview problems (e.g. HackerRank questions), and so on. I can definitely do a video on SVMs.
@JakubH 4 years ago
Okay, how is this video related to that one? kzbin.info/www/bejne/bl7bnJx9d5drsMU It was uploaded the same day on the same topic, and uses the same notation and the same algorithm; that is too much to be just a coincidence :D My guess is it was some kind of school project.
@PriyaDeo 4 years ago
Yes it was! Nate was in my class. But I'm glad so many people still found it useful.
@vijayakrishnanaganoor9335 4 years ago
Great video!
@Koenentom 4 years ago
Great video. Thanks!!
@Aarshyboy96 4 years ago
I don't understand how you updated alpha1.
@josecarlosferreira4942 3 years ago
alpha1 is calculated at 7:42; in this case d1 = -grad(f) = b - A*x.
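A minimal numpy sketch of that first step, assuming the quadratic objective f(x) = 1/2 x^T A x - b^T x from the video; the particular A, b, and x0 below are made-up examples:

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 6.0]])   # symmetric positive-definite example
b = np.array([2.0, -8.0])
x0 = np.zeros(2)

# First search direction: the negative gradient, d1 = -(A x0 - b) = b - A x0.
d1 = b - A @ x0

# Exact line search along d1 yields the scalar step size alpha1.
alpha1 = (d1 @ d1) / (d1 @ (A @ d1))

x1 = x0 + alpha1 * d1
print(alpha1, x1)
```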
@xruan6582 5 years ago
Lacks detailed explanation and is hard to understand.
@valentinzingan1151 5 years ago
The first method you described is called Steepest Descent (not Gradient Descent). Gradient Descent is the simplest one; Steepest Descent is an improvement on Gradient Descent, exactly as you described.
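To make the distinction concrete, these are the standard textbook update rules (not notation from the video):

```latex
% Gradient descent: fixed (or scheduled) step size \eta
x_{k+1} = x_k - \eta \,\nabla f(x_k)

% Steepest descent: exact line search along the negative gradient
\alpha_k = \operatorname*{arg\,min}_{\alpha \ge 0}
           f\!\left(x_k - \alpha \nabla f(x_k)\right),
\qquad x_{k+1} = x_k - \alpha_k \nabla f(x_k)
```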
@alexlagrassa8961 5 years ago
Good video, clear explanation.
@Marmelademeister 5 years ago
It's okay... It's too slow at the beginning and too fast at the end. And why would you start with gradient descent? I would think that most people studying CG are already miles beyond gradient descent, have seen Newton's method, and are now studying Newton-like methods.
@AdityaPrasad007 5 years ago
Wow, interesting how she made one technical video and stopped. Lost motivation, I guess?
@nickp7526 4 years ago
Have you not seen Bear and Simba, dumbass?
@AdityaPrasad007 4 years ago
@@nickp7526 I said technical video, my dear chap.
@ethandickson9490 4 years ago
@@AdityaPrasad007 Think he was joking bruh
@AdityaPrasad007 4 years ago
@@ethandickson9490 Really? I'm pretty bad at sarcasm... @Nick, was it a joke?
@PriyaDeo 4 years ago
I made the video for a class. I guess I didn't expect it to get so many views and comments, especially for people to keep watching it after some years. But if there's a lot of interest I can make another video. Do you have any suggestions for topics?
@erickdanielperez9463 6 years ago
You can't rely on graphs to solve every problem. If a problem has more than 3 variables, it is not possible to see the solution without abstract mathematics. Multidimensional problems, e.g. chemical problems (pressure, temperature, flux, composition and rate), can only be visualized with math, not with graphs. Use your mathematics and numbers.
@yubai6549 6 years ago
Many thanks!
@mari0llly 6 years ago
Good video, but you used the Laplace operator instead of the nabla operator for the gradient.
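For anyone unsure of the symbols, the standard distinction is:

```latex
\nabla f \quad \text{(nabla): the gradient, a vector of first partial derivatives} \\
\Delta f = \nabla \cdot \nabla f \quad \text{(Laplacian): the scalar sum of second partial derivatives}
```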
@songlinyang9248 7 years ago
Very, very clear and helpful, thank you very much.
@kandidatfysikk86 7 years ago
Great video!
@gogopie64 8 years ago
Good thing all real-world problems are convex.
@DLSMauu 8 years ago
cute lecture :P
@louisdavies5367 8 years ago
Thank you for making this video!! It's really helpful with my studies :)
@bigsh0w1 9 years ago
Please, can you share the code?
@aboubekeurhamdi-cherif6962 9 years ago
Sorry! Something was missing in my last comment. Please note that x* is the minimizer and NOT the minimum.
@kokori100 9 years ago
+Aboubekeur Hamdi-Cherif Yep, I noticed the same thing.
@yashvander 4 years ago
Hmm, that means x1 = x0 + x*, right?
@aboubekeurhamdi-cherif6962 9 years ago
Please note that x* is the minimizer and the minimum.