Conjugate Gradient Method

125,689 views

Priya Deo

Comments: 49
@zhangkin7896 3 years ago
It's 2021, and your class is still great. ❤️
@pigfaced9985 a year ago
You are a lifesaver! I have an assignment related to this method and I understood it pretty well! THANK YOU!
@louisdavies5367 8 years ago
Thank you for making this video!! It's really helpful with my studies :)
@valentinzingan1151 5 years ago
The first method you described is called Steepest Descent (not Gradient Descent). Gradient Descent is the simplest one; Steepest Descent is an improvement on Gradient Descent, exactly as you described.
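For reference, here is a minimal NumPy sketch of that distinction on a toy quadratic f(x) = 0.5 x^T A x - b^T x (my own illustration, not code from the video; under the common convention that gradient descent uses a fixed step size while steepest descent uses an exact line search):

import numpy as np

# Toy quadratic f(x) = 0.5*x^T A x - b^T x, with grad f = A x - b.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # symmetric positive-definite
b = np.array([1.0, 2.0])

x_gd = np.zeros(2)   # gradient descent: fixed step size
x_sd = np.zeros(2)   # steepest descent: exact line search

for _ in range(50):
    g = A @ x_gd - b
    x_gd -= 0.1 * g                    # fixed step along -grad f

    g = A @ x_sd - b
    alpha = (g @ g) / (g @ (A @ g))    # step that exactly minimizes f along -g
    x_sd -= alpha * g

print("fixed step:       ", x_gd)
print("exact line search:", x_sd)     # both approach [1/11, 7/11]
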
@mari0llly 6 years ago
Good video, but you used the Laplace operator instead of the nabla operator for the gradient.
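For reference, in standard notation the gradient is written with nabla, while the Laplace operator is its divergence:

\nabla f = \left( \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n} \right)^{\top},
\qquad
\Delta f = \nabla \cdot \nabla f = \sum_{i=1}^{n} \frac{\partial^2 f}{\partial x_i^2}.
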
@from-chimp-to-champ1 2 years ago
Good job, Priya, elegant explanation!
@dmit10 a year ago
Another interesting topic is Newton-CG and what to do if the Hessian is indefinite.
@edoardostefaninmustacchi2232 3 years ago
Excellent stuff. Really helped
@songlinyang9248 7 years ago
Very very clear and helpful, thank you very much
@aboubekeurhamdi-cherif6962 9 years ago
Please note that x* is the minimizer and the minimum.
@lenargilmanov7893 a year ago
What I don't understand is: why use an iterative process if we know there's exactly one minimum? Just set the gradient to 0 and solve the resulting system of equations, no?
@mrlolkar6229 a year ago
Those methods are used when you have, say, 10^6+ equations (for example in the Finite Element Method). With them you get a solution much faster than by setting all derivatives equal to 0 and solving directly. And even though it seems you need all the steps to reach the minimum, that's not true: even with that huge number of equations, you are usually close enough to the minimum after a small fraction of the iterations that you're satisfied with the answer and don't need the remaining 95% of the steps. That's why these methods are so powerful.
@lenargilmanov7893 a year ago
@@mrlolkar6229 Yeah, I kinda figured it out now.
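To make that tradeoff concrete, here is a minimal sketch using SciPy's sparse CG solver (my own illustration; the tridiagonal test matrix is made up as a stand-in for something like an FEM stiffness matrix). CG needs only matrix-vector products and stops as soon as the residual is small, whereas a dense direct solve would cost O(n^3):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# A large, sparse, symmetric positive-definite system A x = b.
# Diagonally dominant tridiagonal matrix => well-conditioned, CG converges fast.
n = 100_000
main = 4.0 * np.ones(n)
off = -np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

# Each CG iteration costs one sparse mat-vec (O(nnz)); with this conditioning
# it converges in a few dozen iterations, nowhere near n steps.
x, info = cg(A, b)
print("converged:", info == 0)
print("residual norm:", np.linalg.norm(b - A @ x))
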
@kandidatfysikk86 7 years ago
Great video!
@alexlagrassa8961 5 years ago
Good video, clear explanation.
@Koenentom 4 years ago
Great video. Thanks!!
@pablocesarherreraortiz5239 2 years ago
Thank you very much.
@ryanmckenna2047 8 months ago
What is a TOD?
@narvkar6307 11 years ago
How is the value of alpha1 updated?
@frankruta4701 4 years ago
Is alpha_k a matrix or a scalar quantity?
@frankruta4701 4 years ago
Scalar... I just didn't flatten my residual (which was a matrix in my case).
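For reference, in the standard CG notation the step length is the scalar

\alpha_k = \frac{r_k^{\top} r_k}{d_k^{\top} A\, d_k},

where the residual r_k = b - A x_k and the search direction d_k are vectors, so both the numerator and the denominator are inner products (scalars).
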
@yubai6549 6 years ago
Many thanks!
@exploreLehigh 3 years ago
gold
@aboubekeurhamdi-cherif6962 9 years ago
Sorry! Something was missing in my last comment. Please note that x* is the minimizer and NOT the minimum.
@kokori100 9 years ago
+Aboubekeur Hamdi-Cherif Yep, I noticed the same.
@yashvander 4 years ago
Hmm, that means x1 = x0 + x*, right?
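Not quite: in standard notation x* is the point the iterates converge to, not an increment. Each step moves along a search direction,

x_{k+1} = x_k + \alpha_k d_k, \qquad x_k \to x^{\ast},

and for the quadratic f(x) = \tfrac{1}{2} x^{\top} A x - b^{\top} x, the limit is the minimizer x^{\ast} = A^{-1} b.
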
@Aarshyboy96 4 years ago
I don't understand how you updated alpha1.
@josecarlosferreira4942 3 years ago
alpha1 is calculated at 7:42; in this case d1 = -grad(f) = b - A*x.
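For anyone following along, here is a minimal NumPy sketch of the whole loop in standard CG notation (my own illustration, not the code from the video; A is assumed symmetric positive-definite). It shows how alpha is recomputed at every iteration:

import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Minimal CG for symmetric positive-definite A (illustration only)."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x              # residual; equals -grad f(x)
    d = r.copy()               # first direction = steepest descent
    rs_old = r @ r
    for _ in range(len(b)):    # in exact arithmetic CG finishes in <= n steps
        Ad = A @ d
        alpha = rs_old / (d @ Ad)       # scalar step length, new each iteration
        x += alpha * d
        r -= alpha * Ad                 # cheap residual update
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs_old) * d   # next A-conjugate direction
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # ~[0.0909, 0.6364], i.e. A^{-1} b
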
@beeseb a year ago
🍵
@xruan6582 5 years ago
Lacks detailed explanation and is hard to understand.
@bigsh0w1 9 years ago
Can you please share the code?
@bellfish188 9 months ago
low volume
@AdityaPrasad007 5 years ago
Wow, interesting how she made one technical video and stopped. Lost motivation, I guess?
@nickp7526 4 years ago
Have you not seen Bear and Simba, dumbass?
@AdityaPrasad007 4 years ago
@@nickp7526 I said technical video, my dear chap.
@ethandickson9490 4 years ago
@@AdityaPrasad007 Think he was joking bruh
@AdityaPrasad007 4 years ago
@@ethandickson9490 Really? I'm pretty bad at sarcasm... @Nick, was it a joke?
@PriyaDeo 4 years ago
I made the video for a class. I guess I didn't expect it to get so many views and comments, especially for people to keep watching it years later. But if there's a lot of interest, I can make another video. Do you have any suggestions for topics?
@DLSMauu 8 years ago
cute lecture :P
@Marmelademeister 5 years ago
It's okay... It's too slow at the beginning and too fast at the end. And why would you start with gradient descent? I would think that most people studying CG are already miles beyond gradient descent, have seen Newton's method, and are now studying Newton-like methods.
@MyName-gl1bs 3 years ago
I like fud
@erickdanielperez9463 6 years ago
You can't solve every problem by graphing it. If a problem has more than 3 variables, it isn't possible to see the solution without abstract mathematics. Multi-dimensional problems, e.g. chemical problems (pressure, temperature, flux, composition, and rate), can only be visualized with math, not with a graph. Use your mathematics and numbers.
@vijayakrishnanaganoor9335 4 years ago
Great video!
Visually Explained: Newton's Method in Optimization
11:26
Visually Explained
117K views
[CFD] Conjugate Gradient for CFD (Part 1): Background and Steepest Descent
45:01
Gradient Descent, Step-by-Step
23:54
StatQuest with Josh Starmer
1.4M views
Conjugate gradient method
14:32
Lewis Mitchell
8K views
Solve any equation using gradient descent
9:05
Edgar Programmator
55K views
Preconditioned Conjugate Gradient Descent (ILU)
7:36
Priya Deo
7K views
Numerical linear algebra: Conjugate Gradient method
24:49
Franks Mathematics
8K views
Descent methods and line search: preconditioned steepest descent
15:22
Michel Bierlaire
19K views
[CFD] Conjugate Gradient for CFD (Part 2): Optimum Distance and Directions
34:26
Constrained Optimization: Intuition behind the Lagrangian
10:49
CS885 Lecture 14c: Trust Region Methods
20:19
Pascal Poupart
22K views
Gradient descent, Newton's method
32:22
gr_teach
40K views