Conjugate Gradient Method
9:35
11 years ago
Comments
@ryanmckenna2047 8 months ago
What is a TOD?
@bellfish188 9 months ago
low volume
@pigfaced9985 1 year ago
You are a life saver! I have an assignment that's related to this method and I understood it pretty well! THANK YOU! <3
@QT-yt4db 1 year ago
The dog at the end is lovely...
@QT-yt4db 1 year ago
Great video, but the voice is a little bit low, which makes it hard to hear without turning the volume up high...
@beeseb 1 year ago
🍵
@lenargilmanov7893 1 year ago
What I don't understand is: if we know there's exactly one minimum, why use an iterative process? Just set the gradient to 0 and solve the resulting system of equations, no?
@mrlolkar6229 1 year ago
These methods are used when you have, say, 10^6+ equations (for example, in the Finite Element Method). With them you solve much faster than by setting all the derivatives equal to 0. And although it seems you need all the steps to reach the minimum, that's not true: even with that enormous number of equations, you are usually close enough to the minimum to be satisfied with the answer long before the remaining 95% of the steps, and that's why these methods are so powerful.
@lenargilmanov7893 1 year ago
@@mrlolkar6229 Yeah, I kinda figured it out now.
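To make the point above concrete, here is a minimal conjugate gradient sketch (an illustration assuming a simple SPD test matrix, not the video's actual code): on a large system it typically stops long before the n-th step, as soon as the residual is small enough.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=None):
    """Solve Ax = b for symmetric positive-definite A,
    stopping early once the residual norm drops below tol."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x                       # residual = -gradient of the quadratic
    d = r.copy()                        # first direction: steepest descent
    rs_old = r @ r
    for k in range(max_iter):
        Ad = A @ d
        alpha = rs_old / (d @ Ad)       # exact step size along d
        x += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:       # close enough: stop early
            return x, k + 1
        d = r + (rs_new / rs_old) * d   # next A-conjugate direction
        rs_old = rs_new
    return x, max_iter

# A 2000-variable SPD system: far fewer than 2000 steps are usually needed.
rng = np.random.default_rng(0)
n = 2000
A = np.diag(rng.uniform(1.0, 10.0, n))
b = rng.standard_normal(n)
x, iters = conjugate_gradient(A, b)
print(f"converged in {iters} of {n} possible iterations")
```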
@graphicsRat 1 year ago
Simple, clear. I love it. I want more. A whole series on optimization.
@dmit10 1 year ago
Another interesting topic is Newton-CG and what to do if the Hessian is indefinite.
@dmit10 2 years ago
I would suggest L-BFGS, partial Cholesky decomposition (version with limited memory), and "when to use proximal methods over non-proximal" (not just how they work).
@pablocesarherreraortiz5239 2 years ago
Thank you very much.
@swaggcam412 2 years ago
This is a great resource, thank you for sharing!
@from-chimp-to-champ1 2 years ago
Good job, Priya, elegant explanation!
@colin_hart 3 years ago
Thank you for creating this presentation. I arrived here while looking for exposition on Arnoldi and Lanczos iteration. Any new content about sparse iterative methods would be gratefully received. Cheers!
@Darkev77 3 years ago
Proximal GD next?
@MyName-gl1bs 3 years ago
I like fud
@zhangkin7896 3 years ago
It's 2021 and your class is still great. ❤️
@xplorethings 3 years ago
Really well done!
@edoardostefaninmustacchi2232 3 years ago
Excellent stuff. Really helped.
@exploreLehigh 3 years ago
gold
@ricardomilos9404 3 years ago
The video showed that A^-1 ≈ L^-1U^-1 = M. In the code, M^-1 = L * U was used (element-wise multiplication of L and U, not matrix multiplication). It would follow that preconditioning only requires the diagonal of U, since everything else cancels. Should element-wise multiplication be implemented instead of matrix multiplication? And is it correct that M^-1 = LU (or should it be M^-1 = L^-1U^-1)?
@PriyaDeo 3 years ago
Please note that A^-1 ≈ U^-1L^-1 (not L^-1U^-1), since we are taking the LU factorization of matrix A: A ≈ LU => A^-1 ≈ (LU)^-1 = U^-1L^-1. In the code we did do matrix multiplication: the numpy matrix class uses '*' to denote matrix multiplication. For element-wise multiplication you would need np.multiply(L, U).
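A small sketch of how such a preconditioner is typically applied (assuming scipy and a tiny dense example for illustration): M = LU is used through two triangular solves, equivalent to z = U^-1(L^-1 r), so neither M^-1 nor any element-wise product is ever formed.

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

# Tiny SPD example matrix and a residual vector (assumed for illustration).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
r = np.array([1.0, 2.0, 3.0])

# Full LU here for simplicity; a sparse solver would use an *incomplete* LU.
P, L, U = lu(A)  # A = P @ L @ U

# Apply the preconditioner: z = U^-1 (L^-1 (P^T r)), two triangular solves.
y = solve_triangular(L, P.T @ r, lower=True)
z = solve_triangular(U, y, lower=False)

# Since M = A exactly in this toy case, z matches the direct solve.
assert np.allclose(z, np.linalg.solve(A, r))
```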
@mkelly66 3 years ago
All of your videos did a great job of clearly explaining the topics. Thanks, I really appreciated the clear explanations!
@MrFawkez 3 years ago
A great follow-up to the CG video. Thank you.
@frankruta4701 4 years ago
Is alpha_k a matrix or a scalar quantity?
@frankruta4701 4 years ago
Scalar... I just didn't flatten my residual (which was a matrix in my case).
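To illustrate why alpha_k comes out as a scalar (a sketch assuming the standard step-size formula alpha_k = (r_kᵀ r_k)/(d_kᵀ A d_k)): both numerator and denominator are inner products, which is why the residual needs to be a flat vector.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 4.0])
x = np.zeros(2)

r = b - A @ x                  # residual as a 1-D vector
d = r.copy()                   # first search direction

alpha = (r @ r) / (d @ (A @ d))
print(alpha)                   # a plain float: ratio of two inner products

# With an (n, 1) "matrix" residual the same formula yields a 1x1 matrix,
# which is the pitfall mentioned above; flatten it first:
r_col = r.reshape(-1, 1)
alpha_1x1 = (r_col.T @ r_col) / (r_col.T @ (A @ r_col))   # shape (1, 1)
r_flat = r_col.ravel()
alpha_flat = (r_flat @ r_flat) / (r_flat @ (A @ r_flat))  # a scalar again
assert np.isclose(alpha, alpha_flat)
```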
@nickp7526 4 years ago
Great stuff! What range of subjects can you cover?
@nickp7526 4 years ago
Oh, I just got to the end :) Do you know anything about support vector machines? I have a class on it this semester and I would like a small introduction.
@PriyaDeo 4 years ago
I have approximate knowledge of many things :P But to answer this question properly: topics from most college-level undergrad courses in Maths & Computer Science, some topics from grad-level courses in Machine Learning, Computer Vision & Robotics, software engineering interview problems (e.g. HackerRank questions), and so on. I can definitely do a video on SVMs.
@JakubH 4 years ago
Okay, how is this video related to this one: kzbin.info/www/bejne/bl7bnJx9d5drsMU? It was uploaded the same day on the same topic, uses the same notation and the same algorithm; that is too much to be just a coincidence :D My guess is it was some kind of school project.
@PriyaDeo 4 years ago
Yes it was! Nate was in my class. But I'm glad so many people still found it useful.
@vijayakrishnanaganoor9335 4 years ago
Great video!
@Koenentom 4 years ago
Great video. Thanks!!
@Aarshyboy96 4 years ago
I don't understand how you updated alpha1.
@josecarlosferreira4942 3 years ago
alpha1 is calculated at 7:42; in this case d1 = -grad(f) = b - A*x.
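For reference, a sketch of that step (assuming the quadratic $f(x) = \tfrac{1}{2}x^\top A x - b^\top x$ from the video, with residual $r_k = b - A x_k$):

$$
d_1 = r_1 = b - A x_1, \qquad
\alpha_k = \frac{r_k^\top r_k}{d_k^\top A d_k}, \qquad
x_{k+1} = x_k + \alpha_k d_k,
$$

so each $\alpha_k$ is recomputed from the current residual and direction; it is the exact minimizer of $f(x_k + \alpha d_k)$ over $\alpha$.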
@xruan6582 5 years ago
Lacks detailed explanation and is hard to understand.
@valentinzingan1151 5 years ago
The first method you described is called Steepest Descent (not Gradient Descent). Gradient Descent is the simplest one; Steepest Descent is an improvement on Gradient Descent, exactly as you described.
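A minimal sketch of the distinction being drawn (assuming the quadratic f(x) = ½xᵀAx − bᵀx from the video): plain gradient descent takes a fixed-size step along the negative gradient, while steepest descent picks the exact minimizing step length along that same direction.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 4.0])

def gradient_descent_step(x, lr=0.1):
    """Fixed learning rate along -grad f."""
    r = b - A @ x                        # -grad f(x)
    return x + lr * r

def steepest_descent_step(x):
    """Exact line search: alpha minimizes f(x + alpha*r) in closed form."""
    r = b - A @ x
    alpha = (r @ r) / (r @ (A @ r))
    return x + alpha * r

x = np.zeros(2)
for _ in range(25):
    x = steepest_descent_step(x)
print(x, np.linalg.solve(A, b))          # x approaches the true solution
```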
@alexlagrassa8961 5 years ago
Good video, clear explanation.
@Marmelademeister 5 years ago
It's okay... It's too slow at the beginning and too fast at the end. And why would you start with gradient descent? I would think that most people studying CG are already miles beyond gradient descent, have seen Newton's method, and are now studying Newton-like methods.
@AdityaPrasad007 5 years ago
Wow, interesting how she made one technical video and stopped. Lost motivation, I guess?
@nickp7526 4 years ago
Have you not seen Bear and Simba, dumbass?
@AdityaPrasad007 4 years ago
@@nickp7526 I said technical video, my dear chap.
@ethandickson9490 4 years ago
@@AdityaPrasad007 Think he was joking, bruh.
@AdityaPrasad007 4 years ago
@@ethandickson9490 Really? I'm pretty bad at sarcasm... @Nick, was it a joke?
@PriyaDeo 4 years ago
I made the video for a class. I guess I didn't expect it to get so many views and comments, especially for people to keep watching it years later. But if there's a lot of interest, I can make another video. Do you have any suggestions for topics?
@erickdanielperez9463 6 years ago
You can't rely on graphs to solve every problem. If you have problems with more than 3 variables, it's not possible to see the solution without abstract mathematics. Multi-dimensional problems, e.g. chemical problems (pressure, temperature, flux, composition, and rate), can only be handled with math, not with a graph. Use your mathematics and numbers.
@yubai6549 6 years ago
Many thanks!
@mari0llly 6 years ago
Good video, but you used the Laplace operator instead of the nabla operator for the gradient.
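For clarity on that notation point, the two operators differ as follows:

$$
\nabla f = \Big(\frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n}\Big)^{\!\top}
\quad\text{(gradient, a vector)}, \qquad
\Delta f = \sum_{i=1}^{n} \frac{\partial^2 f}{\partial x_i^2}
\quad\text{(Laplacian, a scalar)}.
$$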
@songlinyang9248 7 years ago
Very, very clear and helpful, thank you very much.
@kandidatfysikk86 7 years ago
Great video!
@gogopie64 8 years ago
Good thing all real world problems are convex.
@DLSMauu 8 years ago
Cute lecture :P
@louisdavies5367 8 years ago
Thank you for making this video!! It's really helpful with my studies :)
@bigsh0w1 9 years ago
Please, can you share the code?
@aboubekeurhamdi-cherif6962 9 years ago
Sorry! Something was missing in my last comment. Please note that x* is the minimizer and NOT the minimum.
@kokori100 9 years ago
+Aboubekeur Hamdi-Cherif Yep, I noticed the same.
@yashvander 4 years ago
Hmm, that means x1 = x0 + x*, right?
@aboubekeurhamdi-cherif6962 9 years ago
Please note that x* is the minimizer and the minimum.
@narvkar6307 11 years ago
How is the value of alpha1 updated?