All other videos on gradient descent are at least 20 minutes long. This one is five, and it made me understand more than any of them. Thank you!
@DataScienceGarage4 жыл бұрын
Thank you for watching! Hoping it was useful.
@blendaguedes4 жыл бұрын
Sometimes we just need two loops to understand the whole thing. Thank you!
@uncoded03 жыл бұрын
Thank you! After many hours of trying to understand gradient descent, I finally get it, thanks to this video. Thank you!
@anamikabhowmick63223 жыл бұрын
This is one of the best and easiest ways to learn and understand gradient descent. Thank you so much for this!
@DataScienceGarage3 жыл бұрын
Glad you liked it! :)
@impzhu30884 жыл бұрын
That’s the way to explain a concept! Example with detailed steps. Thank you so much!
@DataScienceGarage4 жыл бұрын
Thanks for watching! :)
@hemantsah85674 жыл бұрын
It is easy... I spent two days learning gradient descent... then I came across your video... Thanks, bro!
@DataScienceGarage5 жыл бұрын
If you found this video useful, I highly recommend checking out these related ones: -- Calculate Convolutional Layer Volume in ConvNet (kzbin.info/www/bejne/aaaolWN7p9Z6sLc) -- Adam. Rmsprop. Momentum. Optimization Algorithm. - Principles in Deep Learning (kzbin.info/www/bejne/j5LGgXh5pK5oibs) -- Numpy Argsort - np.argsort() - function. Simple Example (kzbin.info/www/bejne/bIibhnuso52Wock) -- Python Regular Expression (RegEx). Extract Dates from Strings in Pandas DataFrame (kzbin.info/www/bejne/e2XEp3WOl7OCfcU)
@riki24043 жыл бұрын
Thank you for such a clear explanation. Short and precise, no unnecessary talk.
@DataScienceGarage3 жыл бұрын
Thanks for watching! Hoping it was useful :)
@yaminikommi54062 жыл бұрын
Can we take any number as the initial parameters and learning rate?
@wtry00674 жыл бұрын
It's very short and very useful. I got the clarity I was looking for. Thanks once again.
@aalishanhunzai9 ай бұрын
Bro, thank you so much for your efforts. I couldn't find a simpler explanation of gradient descent than this one.
@DataScienceGarage9 ай бұрын
Thanks for such feedback! :)
@abdelrahmane6572 жыл бұрын
Oh my god, you are excellent. You make the difference on KZbin. Thank you so much. 🎉🙏👏🙌👌👍✌🏼
@DataScienceGarage2 жыл бұрын
Thank you for such feedback, appreciate it! :)
@Elementiah7 ай бұрын
Thank you so much for this! This is the perfect explanation! 😄
@sukanya44982 жыл бұрын
Love this video ❤️, Very simple and precise! Thank you !
@DataScienceGarage2 жыл бұрын
Thanks for watching! :)
@Alex-pd5xc Жыл бұрын
wow dude, very clearly explained and you made it simple for me to understand. cheers man
@DataScienceGarage Жыл бұрын
Thanks for such feedback, appreciate it!
@Ziad-Ahmed-Mohamed Жыл бұрын
Two hours in a DL course and I didn't get it; five minutes here made my day. This is how learning should actually be done.
@DataScienceGarage Жыл бұрын
Glad it was helpful for you! :)
@samvanoye10 ай бұрын
Perfectly explained, thanks!
@DataScienceGarage10 ай бұрын
Thanks for watching! :)
@phaniraju04564 жыл бұрын
I bow to you for this great clarification... loved it!
@muhammadhashir79493 жыл бұрын
Thank you so much, your work was practical. I loved it a lot and finally understood gradient descent. Before this I spent lots of time on it but didn't understand it properly.
@TingBie9 ай бұрын
Thanks for this example, simple and spot-on!
@murat20732 жыл бұрын
thank you Sir! you are a HERO!!!
@DataScienceGarage2 жыл бұрын
Thanks a lot! :)
@josephsmy19943 жыл бұрын
Awesome explanation! straight to the point
@DataScienceGarage3 жыл бұрын
Thanks for such feedback! :)
@zafarnasim92672 жыл бұрын
You made it so simple. Great Job!
@DataScienceGarage2 жыл бұрын
Thanks a lot! :)
@ajaykushwaha42333 жыл бұрын
Best explanation ever.
@nawaab92753 жыл бұрын
thanks for saving the semester
@eramitvajpeyee853 жыл бұрын
Thank you so much for explaining it in a short and easy way!! Please keep uploading content like this.
@DataScienceGarage3 жыл бұрын
Thank you for watching! Glad you enjoyed! :)
@mohamedelkhanche7073 жыл бұрын
Ohhhhh wonderful, I was shocked, this is insane! Thank you from all my heart.
@DataScienceGarage3 жыл бұрын
Thanks a lot for such feedback, appreciate it!
@omkarkadam57153 жыл бұрын
Thanks mate, Finally Enlightened.
@DataScienceGarage3 жыл бұрын
Thanks for watching! I hope it was useful :)
@glenfernandes2533 жыл бұрын
How do you know how many iterations to run before reaching the global/local minimum? What if it reaches the minimum and starts climbing up the other side?
@yasamannazemi67063 жыл бұрын
It was so simple and helped me a lot :) Thanks👍🏻
@DataScienceGarage3 жыл бұрын
Thanks!
@alexbarq19003 жыл бұрын
I get the idea, but is there any reason for not doing simple math to find the local min? dy/dx = 2(x+5). If we want to find the min, we just set dy/dx = 0; then 0 = 2(x+5), so x = -5.
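The analytic answer can also be checked numerically. A minimal Python sketch (my own illustration, assuming the video's setup: y = (x+5)², start x0 = -3, learning rate 0.01) shows gradient descent approaching the same x = -5 that setting the derivative to zero gives directly:

```python
# Gradient descent on y = (x + 5)^2, as in the video:
# start at x0 = -3, learning rate 0.01, update x <- x - lr * dy/dx.
def gradient_descent(x=-3.0, lr=0.01, steps=2000):
    for _ in range(steps):
        grad = 2 * (x + 5)   # dy/dx of (x + 5)^2
        x = x - lr * grad
    return x

x_min = gradient_descent()
# The analytic minimum from dy/dx = 0 is x = -5;
# the iterative version converges to it numerically.
print(round(x_min, 3))  # ≈ -5.0
```

For a simple convex function like this, the closed-form solution is of course preferable; gradient descent matters when the loss has many parameters and no closed-form minimum.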
@hindbelkharchiche16544 жыл бұрын
Thank you... the explanation is as simple as it is useful.
@abdellatifmarghan7521 Жыл бұрын
Thank you, great explanation.
@DataScienceGarage Жыл бұрын
Glad it was useful! :)
@98916766102 жыл бұрын
Awesome explanation . Thanks a lot !!
@DataScienceGarage2 жыл бұрын
Thanks for watching! Hope it was useful!
@sanurcucuyeva70402 жыл бұрын
Hi, thanks for the explanation. If our function is hard, at what point in the iteration should we stop to find the minimum point?
@pwan39712 жыл бұрын
Thanks a lot, really appreciate the video. This makes so much sense now.
@DataScienceGarage2 жыл бұрын
Thank you for your feedback! Glad it was useful for you :)
@praneethcj65445 жыл бұрын
Simple and clear... yet it needs more detail...!!!!
@radhar53492 жыл бұрын
Great explanation. Easy to get the concept
@DataScienceGarage2 жыл бұрын
Thanks for feedback! :)
@fmikael13 жыл бұрын
Thanks for the great explanation. Everyone else always complicates it.
@DataScienceGarage3 жыл бұрын
Thanks for feedback, glad it was helpful! :)
@mbogitechconpts2 жыл бұрын
Beautiful video. I have to like it.
@DataScienceGarage2 жыл бұрын
Thanks for the feedback, it's inspiring!
@abdanettaye82173 жыл бұрын
Good start, thank you.
@dennisjoseph45284 жыл бұрын
Great job explaining this as simply as possible, Sir.
@colton30003 жыл бұрын
How do we find the learning rate?
@basheeralwaely96583 жыл бұрын
Well done sir, very easy to understand
@george47464 жыл бұрын
Thanks, it was very clear and concise.
@DataScienceGarage4 жыл бұрын
Thanks!
@ydkmusic4 жыл бұрын
Great video! There is a typo around 3:50. The bottom equation should be x_2 = .... instead of x_1.
@michaelscott85724 жыл бұрын
What I don't get is: when we use this method in a neural net, we don't know the error function. We just have some points. So how can I build the derivative?
@AJ-et3vf3 жыл бұрын
Very useful! Awesome ❤️
@DataScienceGarage3 жыл бұрын
Thanks for watching! :)
@twicestay6683 Жыл бұрын
Thanks a lot!!! But I'd like to ask: why is the learning rate 0.01? Is it a random number? Thanks!
@SuperYtc12 жыл бұрын
This is a good video.
@DataScienceGarage2 жыл бұрын
Thanks!
@luisurena17704 жыл бұрын
Damn, there's always an Indian guy who helps me understand everything 🔥🔥🔥🔥
@Kay122344 жыл бұрын
Wonderful Video!!! Thank You!
@DataScienceGarage4 жыл бұрын
Thanks for feedback!:)
@eliashossain43272 жыл бұрын
Best explanation.
@DataScienceGarage2 жыл бұрын
Thanks for such feedback! :)
@blinky1892 Жыл бұрын
How do we know what the y value of the parabola is at any given x? 😊
@RayhanAhmedsimanto5 жыл бұрын
Amazing Practical Explanation. Great work.
@DataScienceGarage5 жыл бұрын
Thanks Rayhan!
@smurfNA Жыл бұрын
hey! so do we choose the learning rate? and the gradient is simply just the function right ?
@karthiklogan93844 жыл бұрын
Really helpful, sir. Thank you so much!
@DataScienceGarage4 жыл бұрын
Happy that was useful.
@mastan7754 жыл бұрын
Very good explanation...thanks a lot.
@Snetter2 жыл бұрын
Nice work! thanks
@DataScienceGarage2 жыл бұрын
Thanks for feedback, glad for this! :)
@bhavikdudhrejiya44784 жыл бұрын
Very good video. I appreciate your hard work. Keep uploading more videos.
@DataScienceGarage4 жыл бұрын
Many thanks for such comment!
@machinelearningid39314 жыл бұрын
Thanks, this gave me light in the darkness.
@davidbarnwell_virtual_clas67292 жыл бұрын
How do we choose the learning rate? Good video, but it's things like that I'd love to know.
@DataScienceGarage2 жыл бұрын
Hi! Choosing a learning rate is often not an easy task. I usually run experiments on model performance with multiple learning rates (manual tuning, grid-search hyperparameter tuning, Bayesian search, etc.).
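As a toy sketch of that kind of experiment (my own illustration, not the author's actual tuning code), one can run the same descent on the video's y = (x+5)² with several rates and compare how close each gets to the minimum within a fixed budget:

```python
# Compare learning rates on y = (x + 5)^2 with a fixed step budget.
def final_x(lr, x=-3.0, steps=100):
    for _ in range(steps):
        x -= lr * 2 * (x + 5)   # dy/dx = 2(x + 5)
    return x

for lr in [0.001, 0.01, 0.1]:
    print(lr, round(final_x(lr), 4))
# Smaller rates converge more slowly in the same budget;
# overly large ones can overshoot or even diverge.
```

A real grid search does the same thing on validation loss instead of on a known toy function, but the trade-off it exposes is identical.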
@davidbarnwell_virtual_clas67292 жыл бұрын
@@DataScienceGarage Ahh...ok...I get you...it's very interesting.
@Hasasinful4 жыл бұрын
Thanks, just what I needed.
@DataScienceGarage4 жыл бұрын
Hope it was useful. Thank you!
@MuditDahiya4 жыл бұрын
Very nice explanation!!
@DataScienceGarage4 жыл бұрын
Thanks!
@ericklestrange62555 жыл бұрын
Didn't explain how to calculate the direction we are moving in (the minus sign), why the derivatives, etc.
@ak-ot2wn4 жыл бұрын
That's what I've been looking for for several days, and nobody mentions this. Anyway, I still think it's trivial: if your derivative is negative, you have to "move" to the right (in the case of two variables); if it is positive, you have to "move" to the left.
@debayondharchowdhury26804 жыл бұрын
He also didn't talk about loss calculation. Why do we need to calculate the loss at all if we can simply use gradient descent on the function?
@blendaguedes4 жыл бұрын
@@debayondharchowdhury2680 Your loss function is the one measuring the difference between your output and your "y"; you compute the gradient of your loss function. In his example, he shows something that looks like a mean squared error as the loss function to me, and he is doing a linear regression with only one input "x". I recommend Andrew Ng's classes on Coursera. Have a good time!
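A minimal sketch of that point, under my own assumptions (a tiny made-up dataset with true relation y = 2x + 1, MSE loss, learning rate 0.01): gradient descent runs on the gradient of the loss with respect to the parameters (w, b), not on the data function itself.

```python
# One-input linear regression trained by gradient descent on MSE loss.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]          # generated from y = 2x + 1
w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    # MSE loss: L = mean((w*x + b - y)^2); gradients w.r.t. w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b
print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

The video's (x+5)² example is exactly this picture collapsed to one parameter, which is why the loss step can look invisible there.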
@blendaguedes4 жыл бұрын
@@ak-ot2wn I totally agree with what you are saying; the only issue is that when you are programming you don't see which direction your vector is going. So basically, if the error is going down, keep going; if it starts to increase, go back. You can just stop, or you can make your learning rate smaller to increase your accuracy.
@rssaiganesh4 жыл бұрын
I think the comment thread is looking for the math behind the formula for gradient descent. Apologies if I misunderstood, but here is a link that helped me: towardsdatascience.com/understanding-the-mathematics-behind-gradient-descent-dde5dc9be06e
@suezsiren1173 жыл бұрын
2:29 "Put the walrus in the correct places."
@DataScienceGarage3 жыл бұрын
At 2:29 we have x0 = -3, with the graph representing it. The values are in the correct places in the formula. Could you clarify what is wrong there? Thanks!
@pearlsofwisdom24164 жыл бұрын
Good explanation, but it would have been better if you had elaborated on the formula: why the derivative is multiplied by the learning rate, and why it is then subtracted from the previous point's value.
@blendaguedes4 жыл бұрын
The learning rate makes the decay slow. At his first iteration without it, the result would be -3 - 4 = -7. Can you see where this is going? Because he goes slowly, he keeps decreasing his "y" until he gets as close as possible to -5. Sometimes, to reach the minimum, you have to make your learning rate smaller while computing your weights.
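That "-3 - 4 = -7" comparison can be shown directly (a sketch using the video's numbers: x0 = -3, so dy/dx = 2(-3 + 5) = 4):

```python
# One update with and without the learning rate, from x0 = -3.
x0 = -3.0
grad = 2 * (x0 + 5)       # dy/dx = 4 at x0 = -3

print(x0 - grad)          # -7.0: no learning rate, overshoots past x = -5
print(x0 - 0.01 * grad)   # ≈ -3.04: lr = 0.01 takes a small, careful step
```

Without the learning rate the step jumps clear over the minimum; scaling by 0.01 keeps each step small enough that the iterates creep toward -5 instead.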
@sandipmaity26874 жыл бұрын
Amazing Explanation :) Really simple and to the point 😀
@muhammadhilmirozan12663 жыл бұрын
thx for explanation!
@DataScienceGarage3 жыл бұрын
Thanks for watching! :)
@darkman89393 жыл бұрын
Thanks, very helpful.
@kronlogic24084 жыл бұрын
For iteration 2, shouldn't the second line be x2 = and not x1 = ?
@denisplotnikov68753 жыл бұрын
How to use this example for Stochastic Gradient Descent?
@harshithbangera79053 жыл бұрын
How do we know -5 is the global minimum? Is it where the gradient (derivative) becomes 0?
@explovictinischool2234 Жыл бұрын
Hello, better late than never. Let's assume we have reached -5 at step Xn. However, we don't yet know that we have reached the local minimum. We perform another step Xn+1 with the formula, which gives: Xn+1 = Xn - (learning_rate) * (dy/dx) = -5 - (0.01) * (2 * (-5 + 5)) = -5 - (0.01) * 0 = -5. So Xn+1 = Xn, which means we cannot progress any further: we have reached the local minimum.
@adscript4713Ай бұрын
@@explovictinischool2234 So we know we're done whenever the result stops getting smaller?
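The stopping rule described in this thread (stop once the update no longer changes x) can be sketched like this, assuming the video's function and learning rate:

```python
# Stop when the update is (numerically) zero, i.e. gradient ≈ 0,
# rather than fixing the number of iterations in advance.
def descend_until_converged(x=-3.0, lr=0.01, tol=1e-8, max_steps=100_000):
    for step in range(max_steps):
        grad = 2 * (x + 5)        # dy/dx of y = (x + 5)^2
        if abs(lr * grad) < tol:  # x_{n+1} ≈ x_n: converged
            return x, step
        x -= lr * grad
    return x, max_steps

x_min, steps = descend_until_converged()
print(round(x_min, 6), steps)  # x_min ≈ -5.0
```

In practice an exact zero gradient is rarely hit, so a small tolerance (here the hypothetical `tol`) stands in for "not getting any smaller".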
@bharatcreations71542 жыл бұрын
Can we compute the same thing without getting into the learning rate?
@shankaks7217 Жыл бұрын
Why did we choose 0.01 as the learning rate?
@diegososa52804 жыл бұрын
Thank you very much!
@DataScienceGarage4 жыл бұрын
Thanks! Hoping it was useful. :)
@mattk61823 жыл бұрын
Using 'x' to show multiplication is confusing; it makes it look like you took the derivative wrong, as in 2x(x+5). Maybe in future videos leave the 'x' out so the multiplication is implied.
@boniface385 Жыл бұрын
Hi, why is the learning rate 0.01? Can it be any random learning rate, for example 0.2, 0.02, or anything else? I'd appreciate a fast reply, thank you 😊🙏🏻🙏🏻🙏🏻
@DataScienceGarage Жыл бұрын
Hello! Thanks for watching this video, I'm glad it was useful for you. When building an ML system, you can specify any learning rate; however, good practice is to try values like 0.1, 0.01, 0.001, or 0.0001. Each ML model has its own architecture, training data, hyperparameters, etc., so the learning rate should be tuned separately for each case. Here, I used 0.01 just for demonstration purposes.
@boniface385 Жыл бұрын
@@DataScienceGarage thank you so much for the explanation. 🫶🏻
@مروةمجيد-ت1خ2 жыл бұрын
Well done 👏
@DataScienceGarage2 жыл бұрын
Thanks! :)
@supantha1182 жыл бұрын
Thank you so much
@codingtamilan4 жыл бұрын
How did you draw that curve so it is fixed at -5? Is it always centered at -5?
@blendaguedes4 жыл бұрын
First you decide which loss function you will use. In his case it was (5+x)^2, or x^2 + 10x + 25. Then you program gradient descent to find the minimum of the function. It depends on your function.
@codingtamilan4 жыл бұрын
@@blendaguedes Thank you... pleasure to meet you.
@thankyouthankyou11723 жыл бұрын
Useful thank you
@DataScienceGarage3 жыл бұрын
Thank you for watching!
@tevinwright51096 ай бұрын
GREAT VIDEO
@DataScienceGarage6 ай бұрын
Thanks for watching this!
@govardhan30994 жыл бұрын
Great explanation...
@DataScienceGarage4 жыл бұрын
Thanks!
@jimyang88244 жыл бұрын
Good explanation!
@kombuchad12372 жыл бұрын
2:30 - Is anybody else getting -12.04 for x1? I'm realising now that if I can't do the arithmetic I shouldn't be trying to understand machine learning haha
@moazelsawaf20004 жыл бұрын
Thanks a lot sir
@dtakamalakirthidissanayake97704 жыл бұрын
Thank You So Much. Great Simple Explanation!!!
@chenxiaoyu43794 жыл бұрын
Maybe change the 'x' that represents multiplication into a dot? It is confusing to see it in the dy/dx.
@AlfredEssa4 жыл бұрын
Good job!
@arvinds71822 жыл бұрын
On point👏
@DataScienceGarage2 жыл бұрын
Thanks!
@bernardaslabutis50983 жыл бұрын
Thank you, it helped!
@DataScienceGarage3 жыл бұрын
Glad to hear it! :)
2 жыл бұрын
Perfect !
@preetbenipal10344 жыл бұрын
It's simple, short, and easy here... thank you :)
@prakritisinha93444 жыл бұрын
Thank you!
@DataScienceGarage4 жыл бұрын
You're welcome!
@gireejatmajhiremath67514 жыл бұрын
Thank you very much, sir, for clarifying the concept for me.
@RK-ro4br Жыл бұрын
Why is the learning rate 0.01? Is it chosen randomly, or what?
@DataScienceGarage Жыл бұрын
Hi! For this video, yes: it was chosen randomly, for demonstration purposes only.