I'm a visual learner and this video is exactly what I'm looking for! Great content!
@rmrumman4837 · 3 years ago
This is so high quality stuff! Thanks for the graphical explanation at the beginning!
@jyothishkumar3098 · 3 years ago
I'm here from yesterday's 3b1b video on Newton's method for finding roots, after wondering if there's any way to use it for minimizing a function, mainly to see why we can't use it instead of Stochastic Gradient Descent in linear regression. It turns out that the Hessian of a function with many variables can be large and computationally expensive, and also that if the function is not well approximated by a parabola, the method can lead you far away from the minimum. Still, it was nice to see how the operation works in practice, and you mentioned the same points about Hessians too. Good job 😊👍
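For anyone curious, the cost point in this comment is easy to see in code. A minimal sketch (the sizes are illustrative, not from the video):

```python
import numpy as np

# Illustrative sizes only: a model with n parameters has a gradient with n
# entries but an n-by-n Hessian, and a Newton step means solving an n x n
# linear system (roughly O(n^3)) instead of just subtracting n numbers.
n = 1_000
gradient = np.zeros(n)               # n entries
hessian = np.zeros((n, n))           # n * n entries
print(gradient.size, hessian.size)   # 1000 1000000
```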
@mattkriese7170 · 9 months ago
Just finished calculus 1 and learning about Newton’s method brought me here. The visuals were fantastic and the explanation was clear. I’ll need to learn a lot more to grasp the entire concept, but it’s exciting to see topics taught like this for us visual learners. Subbed 😁
@aayushjariwala6256 · 3 years ago
It's rare for a less-viewed video to give the best explanation. Your presentations are almost on par with 3Blue1Brown or Khan Academy! I don't know why this video has so few views!!
@igbana_ai · 2 years ago
This guy knowwwwsssss🔥🔥🙌🙌I love 3blue1brown
@bradhatch8302 · 2 years ago
What the what?! Even I understood this. Killer tutorial!
@VisuallyExplained · 2 years ago
Yayy!
@shimuk8 · 3 years ago
HOLYYYYY FKKK !!!! I really wish I had come across your video long before I took the painful route to learning all this… definitely a big recommendation for everyone I know who is just starting an optimization course. Great work !!!!!
@anoojjilladwar203 · 3 months ago
Hello Mr. Bachir El Khadir, I recently came across your channel and was truly impressed by your videos and your clear explanations. I've just started working with AI and am also using the Manim library (created by Grant Sanderson) to make animated explanations. I would really appreciate any advice you could offer, and I'm also curious to learn more about how you create your videos.
@mlharville · 10 months ago
Loved this - very helpful! I knew this a long time ago and forgot much of it, so this was an excellent refresher, accessible to many. (And this is coming from a Stanford / Caltech grad working in computer vision, machine learning, and related things.)
@tuntstunt · 2 years ago
Your videos are so good, I wish they had been a thing when I took my course on continuous optimization. My professor could never. I wish you would keep making them though!!!
@SonLeTyP9496 · 3 years ago
Hi Bachir, what an interesting series, very helpful. Cant wait to see next episode
@JosephZhang-s2d · 4 months ago
@Visually Explained, could you help me understand, at 8:26, why Newton's method can be rewritten from x_{k+1} = x_k - [grad^2 f(x_k)]^{-1} grad f(x_k) to x_{k+1} = x_k - g(x_k) / grad g(x_k)?
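For anyone puzzling over the same step: the two forms coincide once you set g = f' (in 1-D), because minimizing f with Newton's method is exactly root-finding on its derivative. A quick numerical sketch (the cubic f(x) = x^3 - 3x is just an example choice):

```python
def newton_min_step(x, fp, fpp):
    # Minimization form: x - f'(x) / f''(x)
    return x - fp(x) / fpp(x)

def newton_root_step(x, g, gp):
    # Root-finding form on g: x - g(x) / g'(x)
    return x - g(x) / gp(x)

fp = lambda x: 3 * x**2 - 3   # f'(x) for f(x) = x^3 - 3x
fpp = lambda x: 6 * x         # f''(x)

# With g = f', both forms produce the same iterate.
x0 = 2.0
print(newton_min_step(x0, fp, fpp), newton_root_step(x0, g=fp, gp=fpp))
```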
@razmyhathy2398 · a year ago
It is indeed a truly amazing explanation, and it helped me understand Newton's method visually.
@aniketbhandare2847 · 3 years ago
It just needs more videos to get rocket growth!! Very good quality stuff.
@sirelkir · 2 years ago
Another problem is that with negative curvature, the method climbs uphill. For example, ML loss functions tend to have a lot of saddle points, which attract the method; that's why gradient descent is used instead, because it can find the direction down from the saddle.
@saqlainsajid4067 · 3 years ago
This is brilliant thank you, hope you give us more visual insight into calculus related things
@wenyunie3575 · 3 years ago
Your explanation is awesome. The extension from the root-finding scenario to the minimum-finding problem was exactly my question.
@minoh1543 · 3 years ago
Amazing explanation! This is very helpful for understanding. Thanks a lot, sir.
@jahn4517 · 2 years ago
Wow! This is amazing work, man. Thank you.
@VisuallyExplained · 2 years ago
Thank you for your amazing comment :-)
@swazza9999 · 3 years ago
Excellent video. I especially liked how you linked it back to the root finding version we learned in school. My one beef with this video is that that's an unfair depiction of Tuco.
@TheTessatje123 · a year ago
Is my intuition correct (7:21) that if the curvature is high you take a small step, and vice versa?
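That intuition matches the 1-D update x_{k+1} = x_k - f'(x_k)/f''(x_k): at a fixed slope, a larger second derivative shrinks the step. A tiny sketch (the numbers are arbitrary):

```python
# At a fixed slope f'(x) = 1, the Newton step length is 1 / f''(x),
# so higher curvature means a smaller step.
slope = 1.0
steps = {curvature: slope / curvature for curvature in (0.5, 2.0, 10.0)}
print(steps)   # {0.5: 2.0, 2.0: 0.5, 10.0: 0.1}
```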
@bryanthien3151 · 3 years ago
Hi, can you please explain how you convert alpha into 1 over the second derivative at xk at 7:06? Thank you!
@VisuallyExplained · 3 years ago
Sure. Consider the quadratic approximation f(x) ~ f(xk) + f'(xk) (x - xk) + 1/2 f''(xk) (x-xk)^2 at the bottom of the screen at 7:06. To minimize the right hand side, we can take the derivative with respect to x and set it to zero (i.e., f'(xk) + f''(xk) (x - xk) = 0). If you solve for x, you get x = xk - 1 / f''(xk) * f'(xk).
@bryanthien3151 · 3 years ago
@@VisuallyExplained Got it. Thank you so much for the explanation! :)
@prub4146 · 3 years ago
@@VisuallyExplained I appreciate your answer and video explanation. I have one confusion. Why do we want to take the derivative in the RHS? In other words, why did we decide to take the minimizer of the quadratic equation as the next step?
@VisuallyExplained · 3 years ago
@@prub4146 What we are really trying to do is minimize the LHS (i.e., the function f), but it is often hard to do that directly. Instead, we approximate f by a quadratic function (the one in the RHS), and we minimize that quadratic instead. (The minimizer of a quadratic function admits a simple analytical formula, which we find by taking the derivative.) The hope is that the quadratic function is a good enough approximation that its minimum and the minimum of f are close to each other. Let me know if this explanation is clear enough, otherwise I can expand a bit more.
@prub4146 · 3 years ago
@@VisuallyExplained Thank you for the explanation!
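For readers following this thread, the derivation can be checked numerically: minimizing the quadratic model reproduces the Newton step. A sketch (the cubic f(x) = x^3 - 3x and the point xk = 2 are illustrative choices, not necessarily the video's):

```python
import numpy as np

xk = 2.0
f = lambda x: x**3 - 3 * x
fp = 3 * xk**2 - 3            # f'(xk)
fpp = 6 * xk                  # f''(xk)

# Quadratic approximation of f around xk (the RHS in the thread above):
q = lambda x: f(xk) + fp * (x - xk) + 0.5 * fpp * (x - xk)**2

# Analytic minimizer: solve q'(x) = fp + fpp * (x - xk) = 0
newton_step = xk - fp / fpp

# Numerical check: minimize q on a fine grid
grid = np.linspace(0.0, 3.0, 300001)
grid_min = grid[np.argmin(q(grid))]
print(newton_step, grid_min)   # both ~1.25
```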
@farhanhyder6378 · 2 years ago
Loved the graphical presentation
@samiissaadi53 · 3 years ago
Crystal clear explanation, thank you!
@SumitChauhan-vv5ix · 3 months ago
Brilliant visualization and explanation
@jungeunk · 11 months ago
What a concise and informative explanation!!! Thank you SO MUCH!! I'm subscribing to your channel from now on!
@adnon2604 · 7 months ago
Amazing video! I could save a lot of time! Thank you very much.
@capsbr2100 · a year ago
Very nice video, complicated topic made easy to understand.
@tuongnguyen9391 · 3 years ago
Hey, can you do a sum-of-squares / DSOS optimization tutorial for postgraduate students?
@benoitmialet9842 · 3 years ago
Brillant explanation, thank you so much.
@HasdaRocks · 3 years ago
You reading out the whole thing made it confusing. Can you explain what you meant by pick a direction "IE" at 1:51? Or did you mean i.e., the abbreviation for 'that is'? I hope you don't read " = " as double dash next time.
@fezkhanna6900 · 2 years ago
This was such an awesome explanation, so grateful thank you.
@filippocuscito4333 · 3 years ago
Amazing video. Looking forward to more.
@bl4ckr4bbit · 4 months ago
Do you have a video for quasi newton?
@ha15224 · a year ago
Thank you for this amazing visualization. Is it also possible to find roots of a multivariable vector function (f: R^n -> R^m)? The resources I found solve this using the Jacobian matrix, so that x_{k+1} = x_k - J^{-1} f, where J^{-1} is the inverse or the pseudoinverse. Is this method referred to as Newton's method for a vector function, or is it a completely different method? Any help and references to resources would be greatly appreciated.
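That update is indeed the standard multivariate Newton's method when m = n; with the pseudoinverse (m != n) it becomes a Gauss-Newton-style least-squares step. A minimal sketch (the example system is my own, chosen arbitrarily):

```python
import numpy as np

def newton_system(f, jac, x, iters=20):
    # Multivariate Newton: x_{k+1} = x_k - J(x_k)^+ f(x_k).
    # np.linalg.lstsq returns the least-squares (pseudoinverse) solution,
    # so the same code also covers the non-square m != n case.
    for _ in range(iters):
        step, *_ = np.linalg.lstsq(jac(x), f(x), rcond=None)
        x = x - step
    return x

# Example system: x^2 + y^2 = 4 and x = y, with a root at (sqrt(2), sqrt(2))
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] - v[1]])
jac = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])

root = newton_system(f, jac, np.array([1.0, 2.0]))
print(root)   # ~ [1.41421356, 1.41421356]
```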
@sidhartsatapathy1863 · 7 months ago
Sir, do you use the "MANIM" library for Python to create these beautiful animations in your great videos?
@1239719 · 2 years ago
oh man is this gold
@kravacc7369 · 10 months ago
Truly an amazing video!!
@weisongwen3042 · a year ago
Nice videos! May I know what tools you use to make these figures?
@NithinSaiSunkavalli · 8 months ago
I didn't understand how you changed alpha to 1/f''(x) at 7:00.
@akshayavenkataramanan8121 · 2 years ago
How come, by subtracting a multiple of the slope from the current iterate, we find the minimum point?
@aanchaljain4610 · 8 months ago
just amazing explanation!!
@lalonalel · 3 years ago
Can someone please tell me what algebra is needed to get Newton's method from the Taylor series stated at 6:58? Thank you in advance.
@VisuallyExplained · 3 years ago
I have explained this in another comment. Let me paste it here: "Sure. Consider the quadratic approximation f(x) ~ f(xk) + f'(xk) (x - xk) + 1/2 f''(xk) (x-xk)^2 at the bottom of the screen at 7:06. To minimize the right hand side, we can take the derivative with respect to x and set it to zero (i.e., f'(xk) + f''(xk) (x - xk) = 0). If you solve for x, you get x = xk - 1 / f''(xk) * f'(xk)." Hope this answers your question.
@lalonalel · 3 years ago
@@VisuallyExplained Thank you, it really helped me!
@AJ-et3vf · a year ago
Great video. Thank you!
@hosseinshahbazi3655 · 2 years ago
Excellent. Please explain L-BFGS.
@brandondean961 · 2 years ago
Great content
@alle9ro · a year ago
Where can we see the quasi-Newton video?
@jfusion99 · 3 years ago
Amazingly presented, thank you.
@neelabhchoudhary2063 · 10 months ago
holy cow this was super helpful
@rajivgulati4298 · 3 years ago
Great video man. God bless you
@zhongxina9569 · 4 months ago
Love the video!
@tomxiao · 2 years ago
Thank you, brilliant stuff.
@totalynotfunnyguy6581 · a year ago
The first iteration gives me 1.25, not 1.7. Is this a mistake in the video, or am I doing something wrong? x_{k+1} = x_k - (1/(6 x_k)) (3 x_k^2 - 3). Evaluating at x_k = 2: x_{k+1} = 2 - (1/(6*2)) (3*2^2 - 3) = 1.25.
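For what it's worth, the arithmetic in this comment checks out for the update as written (assuming f(x) = x^3 - 3x, so f'(x) = 3x^2 - 3 and f''(x) = 6x); a one-liner to reproduce it:

```python
def newton_step(x):
    # x_{k+1} = x_k - f'(x_k) / f''(x_k) for f(x) = x^3 - 3x
    return x - (3 * x**2 - 3) / (6 * x)

print(newton_step(2.0))   # 1.25, matching the hand calculation
```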
@thegoru0106 · 2 years ago
Great explanation
@LoL4476 · 3 years ago
Very good explanation
@rayankasam4784 · 10 months ago
Loved the video
@mrtochiko2885 · 11 months ago
Very useful, thanks!
@geze2004 · 3 years ago
This is great. What is the plotting tool you are using?
@VisuallyExplained · 3 years ago
Thanks! For this video I used the excellent library manim: github.com/3b1b/manim
@mitratavakkoli2865 · 2 years ago
Amazing job! Thanks a lot!!
@vigneshbalaji21 · a year ago
Nice explanation
@angelacy7977 · 2 months ago
Thank you so much!
@fatihburakakcay5026 · 3 years ago
Again amazing
@hyperduality2838 · a year ago
Iterative optimization towards a target or goal is a syntropic process -- teleological. Convergence (syntropy) is dual to divergence (entropy) -- the 4th law of thermodynamics! Teleological physics (syntropy) is dual to non teleological physics (entropy). Synchronic lines/points are dual to enchronic lines/points. Points are dual to lines -- the principle of duality in geometry. "Always two there are" -- Yoda. Concepts are dual to percepts -- the mind duality of Immanuel Kant. Mathematicians create new concepts all the time from their perceptions or observations.
@igbana_ai · 2 years ago
The first statement you made cleared up half of my confusion 😩🤲
@himanshuprasad9579 · 11 months ago
Thank you. Very helpful.
@knobberschrabser424 · 2 years ago
You run into another problem with this method when you evaluate the Hessian at a point where it's not positive-definite. Then you're suddenly calculating a saddle point or even a maximum of the approximation which might lead you farther and farther away from the desired minimum of f(x).
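To make the saddle-point behavior concrete, here's a small sketch using the simplest indefinite example f(x, y) = x^2 - y^2 (my own choice, not from the video): the pure Newton step lands exactly on the saddle at the origin, while gradient descent moves away from it along the negative-curvature direction.

```python
import numpy as np

# f(x, y) = x^2 - y^2: gradient (2x, -2y), constant indefinite Hessian
grad = lambda p: np.array([2.0 * p[0], -2.0 * p[1]])
hess = np.array([[2.0, 0.0], [0.0, -2.0]])   # eigenvalues +2 and -2

p = np.array([0.5, 0.1])

newton_p = p - np.linalg.solve(hess, grad(p))   # jumps straight to the saddle (0, 0)
gd_p = p - 0.1 * grad(p)                        # |y| grows: walks away from the saddle

print(newton_p)   # [0. 0.]
print(gd_p)       # [0.4  0.12]
```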
@multiverse6968 · 2 years ago
lovely explanation 🤩🤩🤩🤩🤩🤩
@VisuallyExplained · 2 years ago
Thanks a lot 😊
@shourabhpayal1198 · 3 years ago
Good job. I am subscribing !
@VisuallyExplained · 3 years ago
Awesome, thank you!
@saturapt3229 · a year ago
Tyvm sir
@ivanstepanovftw · 11 months ago
More!
@yassine-sa · 3 years ago
I'm curious to know where you are from; my guesses are Egypt and Morocco.
@VisuallyExplained · 3 years ago
Morocco. Was it that obvious? :-)
@deutsch_lernen_mit_kindern · 3 years ago
amazing
@pietheijn-vo1gt · 2 years ago
Hello, great video. I'm currently following a course on non-linear optimization and I would like to make videos like this for my own problems. I think you used manim for this video; is the code available somewhere so I can take a look? Thanks
@preetunadkat8823 · 3 years ago
I'm sad that you are so underrated :(
@VisuallyExplained · 3 years ago
Thank you for the words of encouragement, I appreciate it!
@amanutkarsh724 · 2 years ago
holy good.
@PapiJack · a year ago
Great video! Please use different background music. It's all weird and out of tune :)
@tsunningwah3471 · 5 months ago
Addition
@jackkrauser1763 · 2 years ago
Well done, but you skipped over intermediate steps, which lost me.
@VisuallyExplained · 2 years ago
Thank you for the feedback! Would you mind elaborating a little on which part of the video lost you? It will help me a lot with future videos.
@tsunningwah3471 · 5 months ago
😢
@epistemocrat · a year ago
Newton's Method is now LESS clear than before watching this vid.