Visually Explained: Newton's Method in Optimization

113,105 views

Visually Explained

Comments: 103
@quyenhuynh572 2 years ago
I'm a visual learner and this video is exactly what I'm looking for! Great content!
@rmrumman4837 3 years ago
This is such high-quality stuff! Thanks for the graphical explanation at the beginning!
@jyothishkumar3098 3 years ago
I'm here from yesterday's 3b1b video on Newton's method for finding roots, after wondering if there's any way to use it for minimizing a function. Mainly I wanted to see why we can't use it instead of Stochastic Gradient Descent in linear regression. It turns out the Hessian of a function with many components can be large and computationally intensive, and if the function isn't well approximated by a parabola, the method can lead you far away from the minimum. Still, it was nice to see how the operation works in practice, and you mentioned the same points about Hessians too. Good job 😊👍
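To make the computational point concrete, here is a minimal numpy sketch of one multivariate Newton step (a generic illustration, not code from the video). Each iteration forms an n×n Hessian and solves a linear system, roughly O(n³) work, versus O(n) for a gradient step, which is why SGD wins when n is in the millions.

```python
import numpy as np

# One Newton step for minimizing f: R^n -> R, i.e. x+ = x - H(x)^{-1} g(x).
# Solving the n x n Hessian system costs O(n^3) per iteration.
def newton_step(grad, hess, x):
    return x - np.linalg.solve(hess(x), grad(x))

# Example: quadratic f(x) = 0.5 x^T A x - b^T x is minimized in a single step.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = newton_step(lambda x: A @ x - b, lambda x: A, np.zeros(2))
print(x)  # exact minimizer: the solution of A x = b
```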
@mattkriese7170 9 months ago
Just finished calculus 1 and learning about Newton’s method brought me here. The visuals were fantastic and the explanation was clear. I’ll need to learn a lot more to grasp the entire concept, but it’s exciting to see topics taught like this for us visual learners. Subbed 😁
@aayushjariwala6256 3 years ago
It's rare for a video with so few views to give the best explanation. Your presentations are almost on par with 3Blue1Brown or Khan Academy! I don't know why this video has so few views!!
@igbana_ai 2 years ago
This guy knowwwwsssss🔥🔥🙌🙌I love 3blue1brown
@bradhatch8302 2 years ago
What the what?! Even I understood this. Killer tutorial!
@VisuallyExplained 2 years ago
Yayy!
@shimuk8 3 years ago
HOLYYYYY FKKK !!!! I really wish I had come across your video long before I took the painful route to learning all this… definitely a big recommendation for all the people I know who are just starting optimisation courses. Great work !!!!!
@anoojjilladwar203 3 months ago
Hello Mr. Bachir El Khadir, I recently came across your channel and was truly impressed by your videos and your clear explanations. I've just started working with AI and am also using the Manim library (created by Grant Sanderson) to make animated explanations. I would really appreciate any advice you could offer, and I'm also curious to learn more about how you create your videos.
@mlharville 10 months ago
Loved this - very helpful! I knew this a long time ago and forgot much of it, so this was an excellent refresher, accessible to many. (And this is coming from a Stanford / Caltech grad working in computer vision, machine learning, and related things.)
@tuntstunt 2 years ago
Your videos are so good, I wish they had been a thing when I took my course on continuous optimization. My professor could never. I wish you would keep making them though!!!
@SonLeTyP9496 3 years ago
Hi Bachir, what an interesting series, very helpful. Can't wait to see the next episode.
@JosephZhang-s2d 4 months ago
@Visually Explained, could you help me understand @8:26: why can the Newton method be rewritten from x_k - ∇²f(x_k)⁻¹ ∇f(x_k) to x_k - g(x_k)/∇g(x_k)?
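For readers with the same question: assuming the video defines g as the derivative of f (shown here in 1D), the identity is a one-line substitution.

```latex
% Newton's step for minimizing f:
x_{k+1} = x_k - \frac{f'(x_k)}{f''(x_k)}
% With g := f' (so g' = f''), this is Newton's root-finding step applied to g:
x_{k+1} = x_k - \frac{g(x_k)}{g'(x_k)}
```

In other words, minimizing f is the same as finding a root of its derivative.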
@razmyhathy2398 1 year ago
It is indeed a truly amazing explanation, and it helped me understand Newton's method visually.
@aniketbhandare2847 3 years ago
It just needs more videos to get rocket growth!! Very good quality stuff.
@sirelkir 2 years ago
Another problem is that for negative curvature, the method climbs uphill. E.g., ML loss functions tend to have a lot of saddle points, which attract the method, so gradient descent is used instead, because it can find the direction down from a saddle.
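A tiny numpy illustration of that failure mode (a made-up toy example, not from the video): on the saddle f(x, y) = x² − y², the pure Newton step jumps straight to the saddle point, while a gradient step moves away from it.

```python
import numpy as np

# f(x, y) = x^2 - y^2 has gradient (2x, -2y) and constant Hessian diag(2, -2).
# The Hessian is indefinite, so the Newton step lands exactly on the saddle.
grad = lambda p: np.array([2 * p[0], -2 * p[1]])
hess = np.diag([2.0, -2.0])

p = np.array([1.0, 0.5])
newton = p - np.linalg.solve(hess, grad(p))  # -> [0, 0], the saddle point
gd = p - 0.1 * grad(p)                       # -> [0.8, 0.6], y moves away
print(newton, gd)
```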
@saqlainsajid4067 3 years ago
This is brilliant, thank you. Hope you give us more visual insight into calculus-related things.
@wenyunie3575 3 years ago
Your explanation is awesome. The extension from the root-finding scenario to the minimum-finding problem was exactly my question.
@minoh1543 3 years ago
Amazing explanation! This is very helpful for understanding. Thanks a lot, sir.
@jahn4517 2 years ago
WoW! This is amazing work man, thank you.
@VisuallyExplained 2 years ago
Thank you for your amazing comment :-)
@swazza9999 3 years ago
Excellent video. I especially liked how you linked it back to the root finding version we learned in school. My one beef with this video is that that's an unfair depiction of Tuco.
@TheTessatje123 1 year ago
Is my intuition correct (7:21) that if the curvature is high you take a small step, and vice versa?
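That matches the 1D update shown in the video: the step length scales inversely with the curvature (assuming f''(x_k) > 0).

```latex
x_{k+1} = x_k - \frac{f'(x_k)}{f''(x_k)}
\qquad\Rightarrow\qquad
|x_{k+1} - x_k| = \frac{|f'(x_k)|}{f''(x_k)}
```

High curvature means the quadratic model turns sharply, so its minimizer is nearby; flat regions produce long steps.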
@bryanthien3151 3 years ago
Hi, can you please explain how you convert alpha into 1 over the second derivative at xk at 7:06? Thank you!
@VisuallyExplained 3 years ago
Sure. Consider the quadratic approximation f(x) ~ f(xk) + f'(xk) (x - xk) + 1/2 f''(xk) (x-xk)^2 at the bottom of the screen at 7:06. To minimize the right hand side, we can take the derivative with respect to x and set it to zero (i.e., f'(xk) + f''(xk) (x - xk) = 0). If you solve for x, you get x = xk - 1 / f''(xk) * f'(xk).
@bryanthien3151 3 years ago
@@VisuallyExplained Got it. Thank you so much for the explanation! :)
@prub4146 3 years ago
@@VisuallyExplained I appreciate your answer and video explanation. I have one confusion. Why do we want to take the derivative of the RHS? In other words, why did we decide to take the minimizer of the quadratic approximation as the next step?
@VisuallyExplained 3 years ago
@@prub4146 What we are really trying to do is minimize the LHS (i.e., the function f), but it is often hard to do that directly. Instead, we approximate f by a quadratic function (the one in the RHS), and we minimize that quadratic instead. (The minimizer of a quadratic function admits a simple analytical formula, which we find by taking the derivative.) The hope is that the quadratic function is a good enough approximation that its minimum and the minimum of f are close to each other. Let me know if this explanation is clear enough, otherwise I can expand a bit more.
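A minimal Python sketch of that loop (the test function is an arbitrary example, not necessarily the one in the video):

```python
# 1D Newton's method for minimization: repeatedly jump to the minimizer of
# the local quadratic model, i.e. x_{k+1} = x_k - f'(x_k) / f''(x_k).
def newton_minimize(df, d2f, x0, steps=10):
    x = x0
    for _ in range(steps):
        x = x - df(x) / d2f(x)  # minimizer of the quadratic model at x
    return x

# Example: f(x) = x^4 - 3x^2 + 2, f'(x) = 4x^3 - 6x, f''(x) = 12x^2 - 6.
x_star = newton_minimize(lambda x: 4*x**3 - 6*x, lambda x: 12*x**2 - 6, x0=2.0)
print(x_star)  # converges to sqrt(3/2) ≈ 1.2247, a local minimum
```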
@prub4146 3 years ago
@@VisuallyExplained Thank you for the explanation. Thanks
@farhanhyder6378 2 years ago
Loved the graphical presentation
@samiissaadi53 3 years ago
Crystal clear explanation, thank you!
@SumitChauhan-vv5ix 3 months ago
Brilliant visualization and explanation
@jungeunk 11 months ago
What a concise and informative explanation!!! Thank you SO MUCH!! I'm subscribing to your channel from now on!
@adnon2604 7 months ago
Amazing video! It saved me a lot of time! Thank you very much.
@capsbr2100 1 year ago
Very nice video, complicated topic made easy to understand.
@tuongnguyen9391 3 years ago
Hey, can you do a sum-of-squares / DSOS optimization tutorial for postgraduate students?
@benoitmialet9842 3 years ago
Brilliant explanation, thank you so much.
@HasdaRocks 3 years ago
You reading out the whole thing made it confusing. Can you explain what you meant by pick a direction "IE" @1:51? Or did you mean i.e., an abbreviation for 'that is'? Hope next time you don't read " = " as "double dash".
@fezkhanna6900 2 years ago
This was such an awesome explanation; I'm so grateful, thank you.
@filippocuscito4333 3 years ago
Amazing video. Looking forward to more.
@bl4ckr4bbit 4 months ago
Do you have a video on quasi-Newton?
@ha15224 1 year ago
Thank you for this amazing visualization. Is it also possible to find roots of a multivariable vector function (f: R^n -> R^m)? The resources I found solve this using the Jacobian matrix, so that x_{k+1} = x_k - J^{-1} f, where J^{-1} is the inverse or the pseudoinverse. Is this method referred to as the Newton method for a vector function, or is it a completely different method? Any help and references to resources would be greatly appreciated.
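That Jacobian-based iteration is indeed usually called Newton's method (or Newton-Raphson) for systems of equations; using the pseudoinverse in the non-square case makes it closely related to Gauss-Newton. A minimal numpy sketch with a hypothetical test system:

```python
import numpy as np

# Newton's method for f: R^n -> R^m, x_{k+1} = x_k - J(x_k)^+ f(x_k).
# lstsq applies the pseudoinverse, so this also works when J isn't square.
def newton_system(f, jac, x0, steps=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        step, *_ = np.linalg.lstsq(jac(x), f(x), rcond=None)
        x = x - step
    return x

# Example: solve x^2 + y^2 = 1 and x - y = 0, with root (1/sqrt(2), 1/sqrt(2)).
f = lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[0] - v[1]])
jac = lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]])
print(newton_system(f, jac, [2.0, 0.5]))  # ≈ [0.7071, 0.7071]
```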
@sidhartsatapathy1863 7 months ago
Sir, do you use the "Manim" library in Python to create the beautiful animations in your great videos?
@1239719 2 years ago
oh man is this gold
@kravacc7369 10 months ago
Truly an amazing video!!
@weisongwen3042 1 year ago
Nice videos! May I know what tools you use to make these figures?
@NithinSaiSunkavalli 8 months ago
I didn't understand how you changed alpha to 1/f''(x) at 7:00.
@akshayavenkataramanan8121 2 years ago
How come, by subtracting a multiple of the slope from the current iterate, we find the minimum point?
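One standard way to see it (a first-order Taylor argument, not specific to this video): for a small step size α > 0,

```latex
f\big(x_k - \alpha f'(x_k)\big) \approx f(x_k) - \alpha \, f'(x_k)^2 \le f(x_k)
```

so each step can only decrease f (to first order), and the iterates stop moving exactly where f'(x_k) = 0, i.e. at a stationary point. It does not guarantee a global minimum, only a point where the slope vanishes.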
@aanchaljain4610 8 months ago
just amazing explanation!!
@lalonalel 3 years ago
Can someone please tell me what's the algebra needed to get the Newton method from the Taylor series stated at 6:58? Thank you in advance.
@VisuallyExplained 3 years ago
I have explained this in another comment. Let me paste it here: "Sure. Consider the quadratic approximation f(x) ~ f(xk) + f'(xk) (x - xk) + 1/2 f''(xk) (x-xk)^2 at the bottom of the screen at 7:06. To minimize the right hand side, we can take the derivative with respect to x and set it to zero (i.e., f'(xk) + f''(xk) (x - xk) = 0). If you solve for x, you get x = xk - 1 / f''(xk) * f'(xk)." Hope this answers your question.
@lalonalel 3 years ago
@@VisuallyExplained thank you it really helped me!
@AJ-et3vf 1 year ago
Great video. Thank you!
@hosseinshahbazi3655 2 years ago
Excellent. Please explain L-BFGS.
@brandondean961 2 years ago
Great content
@alle9ro 1 year ago
Where can we see the quasi-Newton video??
@jfusion99 3 years ago
Amazingly presented, thank you.
@neelabhchoudhary2063 10 months ago
holy cow this was super helpful
@rajivgulati4298 3 years ago
Great video man. God bless you
@zhongxina9569 4 months ago
Love the video!
@tomxiao 2 years ago
Thank you, brilliant stuff.
@totalynotfunnyguy6581 1 year ago
The first iteration gives me 1.25, not 1.7. Is this a mistake in the video or am I doing something wrong? x_(k+1) = x - (1/(6x)) (3x² - 3). Evaluating at x = 2: x_(k+1) = 2 - (1/(6·2)) (3·2² - 3) = 1.25.
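For anyone checking the arithmetic: assuming the objective implied by the formula in the comment, f(x) = x³ − 3x with f'(x) = 3x² − 3 and f''(x) = 6x, the update at x₀ = 2 does give 1.25.

```python
# One Newton step for minimizing f(x) = x^3 - 3x, starting at x0 = 2:
# x1 = x0 - f'(x0)/f''(x0) = 2 - (3*4 - 3)/(6*2) = 2 - 9/12 = 1.25
df  = lambda x: 3 * x**2 - 3
d2f = lambda x: 6 * x
x0 = 2.0
print(x0 - df(x0) / d2f(x0))  # 1.25, matching the comment's calculation
```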
@thegoru0106 2 years ago
Great explanation
@LoL4476 3 years ago
Very good explanation
@rayankasam4784 10 months ago
Loved the video
@mrtochiko2885 11 months ago
Very useful, thanks!
@geze2004 3 years ago
This is great. What is the plotting tool you are using?
@VisuallyExplained 3 years ago
Thanks! For this video I used the excellent library manim: github.com/3b1b/manim
@mitratavakkoli2865 2 years ago
Amazing job! Thanks a lot!!
@vigneshbalaji21 1 year ago
Nice explanation
@angelacy7977 2 months ago
Thank you so much!
@fatihburakakcay5026 3 years ago
Again amazing
@hyperduality2838 1 year ago
Iterative optimization towards a target or goal is a syntropic process -- teleological. Convergence (syntropy) is dual to divergence (entropy) -- the 4th law of thermodynamics! Teleological physics (syntropy) is dual to non teleological physics (entropy). Synchronic lines/points are dual to enchronic lines/points. Points are dual to lines -- the principle of duality in geometry. "Always two there are" -- Yoda. Concepts are dual to percepts -- the mind duality of Immanuel Kant. Mathematicians create new concepts all the time from their perceptions or observations.
@igbana_ai 2 years ago
The first statement you made cleared up half of my confusion 😩🤲
@himanshuprasad9579 11 months ago
Thank you, very helpful.
@knobberschrabser424 2 years ago
You run into another problem with this method when you evaluate the Hessian at a point where it's not positive-definite. Then you're suddenly calculating a saddle point or even a maximum of the approximation which might lead you farther and farther away from the desired minimum of f(x).
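A common safeguard is to test definiteness and fall back to a gradient step when the test fails. A minimal numpy sketch (one of several standard fixes; damping and trust regions are others):

```python
import numpy as np

def safeguarded_newton_step(g, H, x, lr=0.1):
    """Take a Newton step if H is positive definite, else a gradient step."""
    try:
        np.linalg.cholesky(H)              # raises LinAlgError unless H is PD
        return x - np.linalg.solve(H, g)   # full Newton step toward a minimum
    except np.linalg.LinAlgError:
        return x - lr * g                  # steepest-descent fallback

H_bad = np.array([[2.0, 0.0], [0.0, -2.0]])  # indefinite: saddle curvature
g = np.array([2.0, -1.0])
print(safeguarded_newton_step(g, H_bad, np.array([1.0, 0.5])))  # -> [0.8 0.6]
```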
@multiverse6968 2 years ago
lovely explanation 🤩🤩🤩🤩🤩🤩
@VisuallyExplained 2 years ago
Thanks a lot 😊
@shourabhpayal1198 3 years ago
Good job. I am subscribing !
@VisuallyExplained 3 years ago
Awesome, thank you!
@saturapt3229 1 year ago
Tyvm sir
@ivanstepanovftw 11 months ago
More!
@yassine-sa 3 years ago
I'm curious to know where you're from; my guesses are Egypt or Morocco.
@VisuallyExplained 3 years ago
Morocco. Was it that obvious? :-)
@deutsch_lernen_mit_kindern 3 years ago
amazing
@pietheijn-vo1gt 2 years ago
Hello, great video. I'm currently following a course on nonlinear optimization, and I would like to make videos like this for my own problems. I think you used manim for this video; is the code available somewhere so I can take a look? Thanks
@preetunadkat8823 3 years ago
I'm sad that you're so underrated :(
@VisuallyExplained 3 years ago
Thank you for the words of encouragement, I appreciate it!
@amanutkarsh724 2 years ago
holy good.
@PapiJack 1 year ago
Great video! Please use different background music. It's all weird and out of tune :)
@tsunningwah3471 5 months ago
Add more
@jackkrauser1763 2 years ago
Well done, but you skipped over intermediate steps, which made you lose me.
@VisuallyExplained 2 years ago
Thank you for the feedback! Would you mind elaborating a little on which part of the video lost you? It would help me a lot for future videos.
@tsunningwah3471 5 months ago
😢
@epistemocrat 1 year ago
Newton's Method is now LESS clear than before watching this vid.