Tutorial 13- Global Minima and Local Minima in Depth Understanding

100,086 views

Krish Naik

Days ago

In mathematical analysis, the maxima and minima (the respective plurals of maximum and minimum) of a function, known collectively as extrema (the plural of extremum), are the largest and smallest values of the function, either within a given range (the local or relative extrema) or on the entire domain of the function (the global or absolute extrema). Pierre de Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and minima of functions.
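To make the distinction concrete, here is a minimal numerical sketch (not from the video; the function f(x) = x^4 - 8x^2 + 3x is an arbitrary example chosen because it has one local and one global minimum on [-3, 3]):

```python
# Minimal sketch (illustration only): locating local vs. global minima of a toy
# function f(x) = x^4 - 8x^2 + 3x on a dense grid, using plain NumPy.
import numpy as np

def f(x):
    return x**4 - 8 * x**2 + 3 * x

x = np.linspace(-3.0, 3.0, 10_001)           # the range we restrict attention to
y = f(x)

# A grid point is a (discrete) local minimum if it is lower than both neighbours.
is_local_min = (y[1:-1] < y[:-2]) & (y[1:-1] < y[2:])
local_min_x = x[1:-1][is_local_min]

# The global minimum on this range is simply the lowest sampled value.
global_min_x = x[np.argmin(y)]

print("local minima near x =", np.round(local_min_x, 2))   # roughly [-2.09, 1.90]
print("global minimum near x =", round(global_min_x, 2))   # roughly -2.09
```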
Below are the various playlists created on ML, Data Science and Deep Learning. Please subscribe and support the channel. Happy Learning!
Deep Learning Playlist: • Tutorial 1- Introducti...
Data Science Projects playlist: • Generative Adversarial...
NLP playlist: • Natural Language Proce...
Statistics Playlist: • Population vs Sample i...
Feature Engineering playlist: • Feature Engineering in...
Computer Vision playlist: • OpenCV Installation | ...
Data Science Interview Question playlist: • Complete Life Cycle of...
You can buy my book on Finance with Machine Learning and Deep Learning from the URL below
amazon url: www.amazon.in/...
🙏🙏🙏🙏🙏🙏🙏🙏
YOU JUST NEED TO DO
3 THINGS to support my channel
LIKE
SHARE
&
SUBSCRIBE
TO MY YOUTUBE CHANNEL

Comments: 50
@saravanakumarm5647 4 years ago
I am self-studying machine learning. Your videos are really amazing for getting the full overview quickly, and even a layman can understand them.
@nithinmamidala 4 years ago
Your videos are like a suspense movie. You need to watch another one, and see it through to the end of the playlist... so much time to spend to know the final result.
@hiteshyerekar2204 5 years ago
Hi Krish, all your videos are very good. But please do some practical examples in those videos so we can understand how to implement it practically.
@SundasLatif 4 years ago
Yes, adding how to implement it will make this series more helpful.
@aujasvimoudgil2738 4 years ago
Hi Krish, please make a playlist with practical implementations of these theoretical concepts.
@sairaj6875 a year ago
Stopped this video halfway through to say thank you! Your grasp on the topic is outstanding and your way of demonstration is impeccable. Now resuming the video!
@CoolSwag351 3 years ago
Hi Krish. Thanks a lot for your videos. You made me fall in love with DL ❤️ I took many introductory courses on Coursera and Udemy but couldn't understand all the concepts from them. Your videos are just amazing. One request: could you please make some practical implementations of the concepts, so that it would be easy for us to understand them in practical problems?
@shalinianunay2713 4 years ago
You are making people fall in love with Deep Learning.
@abhishek247ai6 2 years ago
You are awesome... one of the gems in this field who is making others' lives simpler.
@sudhasagar292 3 years ago
This is so easily understandable, sir. I'm so lucky to have found you here. Thanks a ton for these valuable lessons, sir. Keep shining.
@harshstrum 4 years ago
Krish bhaiya, you are just awesome. Thanks for all that you are doing for us.
@poojarai7336 a month ago
You are a blessing for new students, sir... God's gift to us students.
@sandipansarkar9211 4 years ago
Hi Krish, that was also a great video in terms of understanding. Please make a playlist of practical implementations of these theoretical concepts, and please provide the .ipynb notebook just below so that we can practice it in a Jupyter notebook.
@munjirunjuguna5701 2 years ago
Hello Krish, thanks for the amazing work you are doing. Quick one: you talked about the derivative being zero when updating the weights... so how do you tell it's the global minimum and not the vanishing gradient problem?
@sportsoctane a year ago
You check the slope. Let's say you start from a negative slope; that means the weights are getting decreased. Now, after reaching zero, if the slope changes to positive, that means you have got your minimum. With a vanishing gradient it just keeps decreasing. Correct me, anyone, if I'm wrong.
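To make the sign-change test concrete, here is a small sketch (my own illustration, using made-up 1-D functions): a genuine minimum is a point where the numerical slope flips from negative to positive, whereas a vanishing-gradient plateau only makes the slope tiny without flipping its sign.

```python
# Sketch (illustration only): a true minimum is where the slope flips from
# negative to positive; a vanishing-gradient plateau only makes the slope tiny.
import math

def slope(f, x, h=1e-5):
    """Central-difference estimate of df/dx."""
    return (f(x + h) - f(x - h)) / (2 * h)

def looks_like_minimum(f, x, eps=1e-2):
    """True if the slope goes negative -> positive around x (a genuine valley)."""
    return slope(f, x - eps) < 0 and slope(f, x + eps) > 0

loss = lambda w: (w - 2.0) ** 2 + 1.0          # toy loss with its minimum at w = 2
flat = lambda w: 1.0 / (1.0 + math.exp(-w))    # sigmoid: slope shrinks but never flips sign

print(looks_like_minimum(loss, 2.0))    # True  -> slope changes sign: a real minimum
print(looks_like_minimum(flat, 10.0))   # False -> slope is tiny but stays positive
```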
@zzzmd11 3 years ago
Hi Krish, very informative as always. Thank you so much. Can you please also do a tutorial on the Fokker-Planck equation? Thanks a lot in advance.
@enoshsubba5875 4 years ago
Never Skip Calculus Class.
@liudreamer8403 2 years ago
Very impressive explanation. Now I have totally adapted to Indian English. So wonderful.
@mohdazam1404 4 years ago
Ultimate explanation, thanks Krish
@vgaurav3011 4 years ago
Very very amazing explanation thanks a lot!!!
@muhammadshifa4886 a year ago
You are always awesome! Thanks Krish Naik
@sahilmahajan421 a year ago
amazing. simple, short & crisp
@sarahashmori8999 a year ago
I like this video; you explained this very well! Thank you!
@vishaldas6346 4 years ago
I don't think the derivative of the loss function used for calculating the new weights should become zero, because when it equals zero the update makes the weights of the neural network W(new) = W(old). Wouldn't that be related to the vanishing gradient problem? Isn't it rather that the derivative of the loss function for the output of the neural network is used, where y actual and y hat become approximately equal and the weights are optimised iteratively? Please correct me if I'm wrong.
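For reference, the update rule being discussed is the plain gradient-descent step w_new = w_old - lr * dL/dw. A minimal single-weight sketch (my own, with a made-up quadratic loss) shows that the updates shrink to nothing precisely because dL/dw goes to zero near the minimum; that is the intended convergence behaviour, not the vanishing gradient problem:

```python
# Sketch (illustration only): the plain gradient-descent update
# w_new = w_old - lr * dL/dw on a made-up quadratic loss L(w) = (w - 3)^2.
lr = 0.1
w = 0.0                           # arbitrary starting weight

for step in range(500):
    grad = 2 * (w - 3.0)          # dL/dw for L(w) = (w - 3)^2
    w_new = w - lr * grad         # the weight-update rule
    if abs(w_new - w) < 1e-8:     # updates stop changing -> we have reached the minimum
        break
    w = w_new

print(f"stopped at w = {w:.4f} after {step} steps")   # very close to 3.0
```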
@touseefahmad4892 5 years ago
Nice Explanation Krish Sir ...
@rafibasha1840 2 years ago
Hi Krish, when the slope is zero at a local maximum, why don't we consider the local/global maximum instead of the minimum?
@vishaljhaveri7565 2 years ago
Thank you, Krish sir. Good explanation.
@ibrahimShehzadGul 4 years ago
I think that at a local minimum ∂L/∂w is not 0, because the ANN output is not equal to the required output. If I am wrong, please correct me.
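A quick numerical check (my own toy example, not from the video) suggests the opposite: at a local minimum the gradient really is (approximately) zero, even though the loss itself, the gap between prediction and target, is far from zero:

```python
# Sketch (illustration only): at a local minimum dL/dw is (numerically) ~zero even
# though the loss itself is far from zero. Toy "loss": L(w) = w^4 - 8w^2 + 3w + 30.
def L(w):
    return w**4 - 8 * w**2 + 3 * w + 30

def dL(w, h=1e-6):
    return (L(w + h) - L(w - h)) / (2 * h)    # numerical derivative

w_local = 1.899        # approximate location of the *local* (not global) minimum
print(round(dL(w_local), 2))   # close to 0 -> the gradient (almost) vanishes here
print(round(L(w_local), 2))    # about 19.9 -> yet the loss is clearly non-zero
```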
@xiyaul 4 years ago
You mentioned in the previous video that you would talk about momentum in this video, but I am yet to hear it....
@anindyabanerjee743 4 years ago
If at the global minimum w_new is equal to w_old, what is the point of reaching there? Am I missing something? @krish naik
@bhagyashrighuge4170 3 years ago
After that point the slope increases or decreases.
@KrishnaMishra-fl6pu 3 years ago
The whole point is to reach the global minimum. Because at the global minimum you get W, and at that W you'll get the minimum loss.
@thealgorithm7633 5 years ago
Very nice explanation
@vikashverma7893 4 years ago
Nice explanation krish sir ..........
@baaz5642 2 years ago
Awesome!
@Velnio_Išpera 2 years ago
Why do we need to minimize the cost function in machine learning? What's the purpose of this? Yeah, I understand that there will be fewer errors etc., but I need to understand it from a fundamental perspective. Why don't we use the global maximum, for example?
@aritratalapatra8452 a year ago
You minimise the error of your prediction; the maximum is the point where the error function is highest.
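To spell that out with numbers, here is a tiny sketch (made-up data) using mean squared error: the parameter with the lowest MSE gives the predictions closest to the targets, so minimising the cost means finding the best fit, while the maximum of the cost would be the worst possible fit:

```python
# Sketch (made-up data): why we minimise a cost such as mean squared error (MSE).
# The slope w with the lowest MSE gives the best predictions; the maximum of the
# cost would be the worst possible fit, so maximising it makes no sense.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])           # roughly y = 2x, with a little noise

def mse(w):                                   # model: y_hat = w * x
    return np.mean((y - w * x) ** 2)

for w in [0.0, 1.0, 2.0, 3.0]:
    print(f"w = {w}: MSE = {mse(w):.3f}")     # the smallest MSE occurs near w = 2
```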
@ohn0oo a year ago
What if I have a decrease from 8 to infinity, would the lowest visible point still be my global minimum?
@mizgaanmasani8456 4 years ago
Why do neurons need to converge at the global minimum?
@ish694 4 years ago
Neurons don't. The weights converge to some values, and those values represent the point at which the loss function is at its minimum. Our goal here is to formulate some loss function and to find the weights or parameters that optimize, i.e. minimize, that loss function. Because if we don't optimize it, then our model won't learn any input-output relationship; it won't know what to predict when given a set of inputs. Also, I think when he said neurons converge at the end, he meant the parameters of a neuron, not the value of the neuron itself.
@jaggu6409 4 years ago
Krish bro, when w_new and w_old are equal, does that mean we have the vanishing gradient problem?
@alinawaz8147 2 years ago
No bro, the vanishing gradient is a problem that occurs in the chain rule when we use sigmoid or tanh; to overcome that problem we use the ReLU activation function.
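As a rough illustration of why that happens (my own sketch, ignoring the weight terms in the chain rule for simplicity): the sigmoid derivative is at most 0.25, so multiplying one such factor per layer during backpropagation collapses the gradient toward zero in deep networks, whereas ReLU contributes a factor of exactly 1 for active units:

```python
# Sketch (illustration only, weight factors ignored): why gradients vanish with
# sigmoid but not with ReLU. Backprop multiplies one activation-derivative factor
# per layer, and each sigmoid derivative is at most 0.25.
import math

def sigmoid_deriv(z):
    s = 1.0 / (1.0 + math.exp(-z))
    return s * (1.0 - s)             # peaks at 0.25 when z = 0

def relu_deriv(z):
    return 1.0 if z > 0 else 0.0     # exactly 1 for active units

z = 0.5                              # a typical pre-activation, reused at every layer
for n_layers in [5, 20, 50]:
    sig = sigmoid_deriv(z) ** n_layers
    rel = relu_deriv(z) ** n_layers
    print(f"{n_layers} layers: sigmoid factor ~ {sig:.1e}, ReLU factor = {rel:.0f}")
```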
@shefaligoyal3907 2 years ago
At the global minimum, if the derivative of the loss function with respect to w becomes 0, then w_old = w_new and there is no change in the value; then how can the loss function value be reduced further?
@ahmedpashahayathnagar5022 a year ago
nice explanation Sir
@knowledgehacker6023 5 years ago
very nice
@louerleseigneur4532 3 years ago
Thanks Krish
@mscsakib6203 5 years ago
Awesome...
@prerakchoksi2379 4 years ago
How do we deal with local maxima? I am still not clear.
@adityaanand3065 3 years ago
Look up simulated annealing; you will get your answer. There are definitely many other methods, but I know this one.
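For the curious, here is a bare-bones simulated-annealing sketch (my own illustration, on the same kind of 1-D toy function used above): occasionally accepting worse moves, with a probability that shrinks as the temperature cools, is what lets the search hop out of a local valley and settle in the global one:

```python
# Bare-bones simulated annealing (illustration only) on a 1-D toy function with a
# local minimum near x ~ 1.9 and a global minimum near x ~ -2.1. Worse moves are
# sometimes accepted, with a probability that shrinks as the temperature cools.
import math
import random

def f(x):
    return x**4 - 8 * x**2 + 3 * x

x = 2.0                                   # deliberately start inside the *local* valley
temperature = 5.0

for _ in range(5_000):
    candidate = x + random.gauss(0.0, 0.5)            # random nearby proposal
    delta = f(candidate) - f(x)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate                                  # accept better moves, and some worse ones
    temperature *= 0.999                               # cooling schedule

print(f"ended near x = {x:.2f}, f(x) = {f(x):.2f}")    # usually close to the global minimum
```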
@quranicscience9631 4 years ago
nice