Symmetric Rank 1 | Exact Line Search | Theory and Python Code | Optimization Techniques #7

  93,672 views

Ahmad Bazzi

1 day ago

Comments: 179
@kerimetasc6981 1 year ago
Best lecture on the quasi-Newton method that I have found so far on the internet!
@yaglz4584 1 year ago
2:50 the animations are very nice. Thank you for taking time to record the lecture.
@gamingboychannel4992 1 year ago
Using LaTeX generated equations like a boss. Thank you sir Ahmad !
@techguru4792 1 year ago
It's rare that a less-viewed video gives the best explanation. Your presentations are almost like 3Blue1Brown or Khan Academy! Don't know why this video has so few views!!
@robloxeren7527 1 year ago
Ten minutes of this video explains better than an hour of lecture in the course I’m taking🤣 thanks for saving my brain!
@utpalgaming8189 1 year ago
Hello Ahmad. Many thanks for your support! To be honest, I don't know much about gradient methods. I often use search-based optimization methods in my research such as GA, PSO ...
@awesomegameplays3126 1 year ago
I am a PhD student and I will be using optimization methods in my research.
@walak3955 1 year ago
Honestly, this guy is incredible. He explains everything soo precisely and efficiently without any unnecessary information. Thanks a lot for this video. You made my life easier.
@mehmetakif2211 1 year ago
Thanks so much for posting!!
@efey2605 1 year ago
I have been watching your videos regularly and they are very informative. Thank you for taking the time to enlighten us. Would you mind making videos on conventional optimization methods like conjugate gradient methods?
@patronkral7664 1 year ago
Thank you so much for this wonderful series of videos. Can you please make a video on solving a bi-level optimization problem with a number of variables using different optimization solvers, like GA, etc.? It would be very much appreciated.
@frycomfort4002 1 year ago
This is wonderful!
@gaffarsolihu1617 1 year ago
Hi Ahmad, how are you doing? Thank you so much for your videos. Personally, they have been very eye-opening and educational. This might be a far-fetched request: as a graduate student, your videos have been very helpful, especially with the implementation side, which is missing in classes, but I'd like to know if you have any plan for a full-blown project implementation on any of your playlists, be it ML or Math Optimization. Thank you
@AhmadBazzi 1 year ago
Hello Gaffar, I'm doing well, hope you are as well. I'm very glad you found it useful. As a matter of fact, this is a great idea. I will give it a deep thought, then act accordingly. Thank you for your idea :)
@bollywoodtalkies5052 1 year ago
He did all this hard work and put it on the internet for free. And he doesn't get much in return, but what he gets is RESPECT and credit for bringing new aspiring engineers into the world.
@ardaerennaim182 1 year ago
This guy is the most underrated youtuber on planet earth.
@essamsayedemam7078 1 year ago
Thank you very much, it was so helpful. Can I get the PDF version?!
@benvesly 1 year ago
To find this whole course freely available on YouTube is such a gift. Seriously, you cover a LOT of ground.
@furkanefebayrakc8080 1 year ago
Understandable with examples, unlike those who explain at length using only matrix formulas. Thank you 🙏✨
@ercansarusta677 1 year ago
I've known this man only for 40 minutes, but I feel like I owe him 40 decades of gratitude. Thank you for this awesome tutorial!
@purplerain1562 1 year ago
Thanks for posting these videos. They are quite helpful. So, to ensure that we minimize and not maximize, is it sufficient to ensure that the Newton step has the same sign (goes in the same direction) as the gradient? Is it OK to just change the sign of the step if that's not the case? (My experiments seem to indicate it's not, but what should be done then?)
@ayuuu2920 1 year ago
I can't believe these types of courses are free here, it's amazing how education has changed.
@haktankoctv7426 1 year ago
Your explanation is awesome. The extension from the root-finding scenario to the minimum-finding problem was exactly my question.
@zmd9678 1 year ago
This guy sat for about 1 hour and talked about Newton in one video, and then released it for free. Legend.
@fatihbiz4105 1 year ago
This course has literally changed my life. 2 years ago I started learning optimization from this course and now I am a software engineer intern at a great startup. Thanks Ahmad!
@nihathatipoglu8936 1 year ago
Dude, I'm less than 2 minutes in and I just want to say thank you so much for creating this absolute monster of a video.
@flicksstudio4054 1 year ago
Wonderful video clearing up Newton's method for finding the minima of functions in machine learning.
@bunyamincc1177 1 year ago
ABSOLUTELY LOVE your 40 minute video series... Thanks a lot Ahmad :)😍
@unknowngone782 1 year ago
What an absolutely epic contribution to the world. Thank you!
@nurettinefe537 1 year ago
Just finished watching and following along with this. Ahmad, thank you so much! It took me about 12 hours to actually get through it cus I kept pausing and going back to make sure I got all the things right.
@jaimahakaal65 1 year ago
Super clear explanations and very well put together. Thank you!
@Туганаарга 1 year ago
man, perfect explanation. clear and intuitive!
@suleymanozcan6093 1 year ago
Hats off! Ahmad, I have no words to let you know how grateful I am for this free course; it is not only well designed but also easy to follow. God bless you.
@origamianddiy4861 1 year ago
I'm here from yesterday's 3b1b video on Newton's method for finding roots, after wondering if there's any way to use it for minimizing a function. Mainly to see why we can't use it instead of Stochastic Gradient Descent in Linear Regression. Turns out the Hessian of functions with many components can be large and computationally intensive, and also that if the function is not well approximated by a parabola, it can lead you far away from the minimum. Still it was nice to see how the operation works in practice, and you mentioned the same points about Hessians too. Good job 😊👍
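A minimal sketch of the full Newton step for minimization discussed in this comment, on a strictly convex toy 2D function (my own example and function, not the code from the video):

```python
# A minimal sketch (toy example, not the video's code):
# full Newton step for minimization, x_{k+1} = x_k - H(x_k)^{-1} grad f(x_k).
import numpy as np

def f(x):
    # strictly convex toy function of two variables
    return np.exp(x[0]) + x[0]**2 + 2.0 * x[1]**2 + x[0] * x[1]

def grad(x):
    return np.array([np.exp(x[0]) + 2.0 * x[0] + x[1],
                     4.0 * x[1] + x[0]])

def hess(x):
    return np.array([[np.exp(x[0]) + 2.0, 1.0],
                     [1.0,                4.0]])

x = np.array([1.0, 1.0])
for k in range(8):
    # Solving H * step = grad is the expensive part the comment mentions:
    # roughly O(n^3) per iteration for n variables.
    step = np.linalg.solve(hess(x), grad(x))
    x = x - step
    print(k, x, f(x))
```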
@benhammouyt7300 1 year ago
Loved the graphical presentation
@KarakterFilm4039 1 year ago
Thank you for the words of encouragement, I appreciate it!
@xuramanhmdova1364 1 year ago
This course is extremely useful. Thanks a lot. You did a great job!
@onlinestudylaksar1097 1 year ago
Excellent video, it really helped me understand the quasi-Newton method, thank you very much!
@apptutorials2158 1 year ago
Sir your way of explaining is really good.
@RandomVideos-hl5kc 1 year ago
Amazingly presented, thank you.
@jadolive-fan5103 1 year ago
Thanks for this tutorial. Awesome explanations perfect for beginners and experts.
@Epicorstroys 1 year ago
The way you explain this is so helpful - love the comparison to the linear approximation. Thank you!
@theworld-stoptime.5800 1 year ago
Ahmad can really keep you hooked with the way he explains things. What a legend.
@oyunking9012 1 year ago
An incredible work as usual. Congratulations for the whole video.
@MRBEASTFAN1102 1 year ago
Amazing explanation! This is very helpful for understanding. Thanks a lot sir.
@zazasabah3378 1 year ago
Can we just take a moment to appreciate this guy for providing this type of content for free? Great help, thank you sir! 🙏🙏🙏
@batuhanuysl6905 1 year ago
Superb, excellent, best video.
@pankajmittal3201 1 year ago
Thank you for the amazing optimization algorithms tutorial! We appreciate your time and the effort to teach us coding 😃
@TechnicalRH 1 year ago
Very nice and clear explanations
@Iamdevil.1 1 year ago
Amazing video. Looking forward to more.
@talhayavas6640 1 year ago
I can't even imagine how long it took to complete this video. Thanks a ton for your effort.
@VRCreations2O 1 year ago
Gorgeous tutorial! I had never even seen the Python interface in my life before, but with the help of your videos I feel like I understand a lot.
@islamiclife9391 1 year ago
Illuminating! Thank you
@thevowtv1563 1 year ago
HOLYYYYY FKKK !!!! I really wish I came across your video much before I took the painful ways to learn all this… definitely a big recommendation for all the people I know who just started with optimisation courses. Great work !!!!!
@ahmetmelihsanl9663 1 year ago
I love your videos! Having learnt all this in my GCSEs / A-levels, just rewatching it 4 months after my exams.
@sajalshah6522 1 year ago
I really appreciate your precious effort, not to mention how fun and friendly it is to learn from. Thanks Prof. Ahmad.
@dammnoe 1 year ago
This was exactly what I needed, thank you!
@TvShow-ml3dz 1 year ago
Wow! This is amazing work man, thank you.
@007AryanVlogs 1 year ago
Brilliant explanation, thank you so much.
@eser_bodur9302 1 year ago
This was such an awesome explanation, so grateful thank you.
@salihbeyy9864 1 year ago
Amazing lecture! Muchas gracias!
@PoringMC 1 year ago
Hi Dr. Ahmad. As per my knowledge, these methods are used for machine learning, where gradient descent is a classical algorithm to find the minimum of a function (not always zero). If you know the basics of ML then you will be familiar with the loss function; we have to minimize that function, so we need its derivative to be zero, and to find that we use the gradient as the direction in which the change in the function is maximum. Now we have the direction but not the magnitude, so we use a learning rate as a constant, which is what 1st order does. In 2nd order we also use the curvature, which gives us a magnitude with which the point where the derivative of the function is 0 can be reached in fewer iterations. Thus 3rd order would ultimately result in finding the minimum of the derivative of the loss function, but we need to find the minimum of the loss function, so it would be useless. Hope this was helpful.
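A small sketch of the first-order update described above, using a fixed learning rate on a toy least-squares loss (my own example, data, and variable names, not code from the video):

```python
# Gradient descent with a constant learning rate on a toy least-squares loss.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))           # toy data matrix
b = A @ np.array([1.0, -2.0, 0.5])     # targets generated from known weights

def grad(w):
    # gradient of the loss 0.5 * ||A w - b||^2
    return A.T @ (A @ w - b)

w = np.zeros(3)
lr = 0.01                              # the constant "learning rate" (step magnitude)
for _ in range(500):
    w = w - lr * grad(w)               # direction from the gradient, size from lr
print(w)                               # approaches [1.0, -2.0, 0.5]
```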
@superiorarmy416 1 year ago
Thank you very much for your suggestion! I will try my best.
@AJ-et3vf 1 year ago
Awesome video! Thank you!
@Mitchyyy92 1 year ago
Very good explanation
@gangadharparate531 1 year ago
Really appreciate your course! Your tutorials are always so helpful.
@dinivideolarpaylasmlar8256 1 year ago
Another problem is that with negative curvature, the method climbs uphill. E.g. ML loss functions tend to have a lot of saddle points, which attract the method, so gradient descent is used, because it can find the direction down from the saddle.
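A tiny numeric illustration of that attraction (my own toy example, not from the video): on f(x, y) = x^2 - y^2, whose only stationary point is the saddle at the origin, a full Newton step jumps straight to the saddle, while a small gradient-descent step moves away along the negative-curvature direction.

```python
import numpy as np

def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

H = np.array([[2.0,  0.0],
              [0.0, -2.0]])             # indefinite Hessian: negative curvature in y

p = np.array([0.5, 0.1])

newton_next = p - np.linalg.solve(H, grad(p))
gd_next = p - 0.1 * grad(p)             # small gradient-descent step

print("Newton step lands at:", newton_next)    # [0. 0.]   -> pulled into the saddle
print("Gradient step moves to:", gd_next)      # [0.4 0.12] -> y grows, heading downhill
```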
@husarr5111 1 year ago
Thank you soo much for the amazing lecture.
@critcalops3107 1 year ago
I think that the visualization makes sense if we think about approximating the function f(x) by its second-order Taylor expansion around x_t. Taking the derivative of the second-order Taylor expansion and setting it equal to zero leads us to the formula of Newton's method for optimization. This operation is the same as minimizing the second-order approximation of the function at x_t as depicted in the video.
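Written out for reference, a rendering of the standard derivation this comment describes, in the multivariate case (my own notation, matching the x_t used above):

```latex
% Second-order Taylor model of f around the current iterate x_t:
\[
m_t(x) \;=\; f(x_t) + \nabla f(x_t)^{\top}(x - x_t)
        + \tfrac{1}{2}\,(x - x_t)^{\top}\,\nabla^2 f(x_t)\,(x - x_t).
\]
% Setting the gradient of the model to zero,
\[
\nabla m_t(x) \;=\; \nabla f(x_t) + \nabla^2 f(x_t)\,(x - x_t) \;=\; 0,
\]
% and solving for x gives the Newton update for optimization:
\[
x_{t+1} \;=\; x_t - \bigl[\nabla^2 f(x_t)\bigr]^{-1} \nabla f(x_t).
\]
```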
@sujunprodhanwordpress 1 year ago
Very great content
@Babapr.o 1 year ago
Amazing job! Thanks a lot!!
@muhammadazan5540 1 year ago
lovely explanation 🤩🤩🤩🤩🤩🤩
@Brns_8399 1 year ago
Thanks, very informative.
@madkar988 1 year ago
This was actually quite helpful :)
@AbhishekKumar-kt6yp 1 year ago
Thank you, Ahmad, for the time and effort you put into making this marvellous tutorial. Much, much appreciated!
@roshan4542 1 year ago
Wow, this looks like a great course! 😀
@herseyburada9288 1 year ago
Ahmad is a legend !
@lionjenkins2063 1 year ago
Awesome, thank you!
@furkansenol1450 1 year ago
Your videos are awesome!
@troguz195 1 year ago
Thank you Ahmad !
@gamerhappyonline4175 1 year ago
What the what?! Even I understood this. Killer tutorial!
@Abdullahqamar16 1 year ago
Thank you for the video!
@oyuntv7174 1 year ago
This is brilliant thank you, hope you give us more visual insight into calculus related things
@ahmyahmy9269 1 year ago
Again amazing
@user-rt3wl1wv4x 1 year ago
Ahmad, you should write a book, it'll be really helpful for literally a lot of people out there.
@user-bt5zx1li1i 1 year ago
Very good, thank you.
@furkanatukk 1 year ago
I hope listening to this brings more positive YouTube channels like yours 💜
@linkbox3117 1 year ago
Very nice. Thank you for your insights, Ahmad. Always pleased to watch your content.
@acimasinhd9590 1 year ago
really appreciate your work :)
@snxgz2808 1 year ago
Thank You so much Sir✨
@afthabrazagaming3547 1 year ago
Great content
@mehmetkaymak627 1 year ago
OMG, this video just saved my homework
@sggcengizhan2999 1 year ago
Sure. Consider the quadratic approximation f(x) ~ f(xk) + f'(xk) (x - xk) + 1/2 f''(xk) (x-xk)^2 at the bottom of the screen at 7:06. To minimize the right hand side, we can take the derivative with respect to x and set it to zero (i.e., f'(xk) + f''(xk) (x - xk) = 0). If you solve for x, you get x = xk - 1 / f''(xk) * f'(xk).
@BrawlStars-hf8hm 1 year ago
We appreciate you ❤️
@how2make208 1 year ago
Good job. I am subscribing !
@reissoyundaa4901 1 year ago
Yes, I think that statement on Wikipedia is a little misleading. Any symmetric rank-one update can be written as c c^T (up to sign) for some vector c. For a problem in R^n, c will then have n degrees of freedom, but equation (**) gives you n constraints, so it's not surprising that you get a unique solution.
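For reference, a compact Python sketch of the SR1 update that this uniqueness argument leads to (standard formula; the variable names and the toy check are mine, not necessarily the video's code):

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-1 update of a Hessian approximation B.

    s = x_{k+1} - x_k, y = grad f(x_{k+1}) - grad f(x_k).
    With r = y - B s, the correction (r r^T) / (r^T s) is the unique symmetric
    rank-one term that makes the updated matrix satisfy the secant equation
    B_new s = y.
    """
    r = y - B @ s
    denom = r @ s
    # Common safeguard: skip the update when the denominator is tiny,
    # otherwise the rank-one term blows up.
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom

# Quick check on a quadratic f(x) = 1/2 x^T A x, where y = A s exactly:
# after two independent steps B recovers A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.eye(2)
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    B = sr1_update(B, s, A @ s)
print(B)          # [[3. 1.] [1. 2.]]
```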
@openmusicnocopyright3249 1 year ago
This is great.
@WayneWarp 1 year ago
Very nice video
@radiolight7793 1 year ago
All the best!!
@ROBOOTDANCING 1 year ago
Liked, Subscribed and voted for him 👍
@BorntobeHiu 1 year ago
underrated