Krish sir, your YouTube channel is just like the GITA for me: as one gets all the answers to life in the GITA, I get all my doubts cleared on your channel. Thank you, Sir.
@kartikdave659 · 4 years ago
After becoming a member, how can I get the data science material? Can you please tell me?
@BalaguruGupta · 3 years ago
Amazing explanation Sir! You'll always be a hero for AI enthusiasts. Thanks a lot!
@saurabhnigudkar6115 · 4 years ago
Best deep learning playlist on YouTube.
@ravindrav1895 · 2 years ago
Whenever I am confused about a topic, I come back to this channel and watch your videos, and it helps me a lot, sir. Thank you, sir, for an amazing explanation.
@archanamaurya89 · 4 years ago
This video is such a light bulb moment for me :D Thank you so very much!!
@nitayg1326 · 5 years ago
My God! Finally I am clear about GD, SGD and mini-batch SGD!
@nagesh866 · 4 years ago
What an amazing teacher you are. Crystal clear.
@lakshminarasimhanvenkatakr3754 · 4 years ago
This is an excellent explanation, with such granular detail that anyone can understand it.
@ajithtolroy5441 · 5 years ago
I saw many videos, but this one is quite comprehensible and informative.
@fedisalhi6320 · 5 years ago
Excellent explanation, it was really helpful. Thank you.
@guytonedhai · 1 year ago
How are you so good at explaining? 😭😭😭😭😭 Thanks a lot ♥♥♥
@funpoint3966 · 10 months ago
Please work out your camera issue; it seems to be set to autofocus, resulting in a little disturbance.
@VVV-wx3ui · 5 years ago
Superb... simply superb. Understood the concept now, from the loss function. Well done, Krish.
@OmerFarukUcer · 15 days ago
Really nice explanation! Thanks for the video
@tsharmi919 · 3 months ago
Best video explanation on this so far
@khuloodnasher1606 · 4 years ago
Really, this is the best video I've ever seen explaining this concept, better than famous schools.
@Anand-uw2uc · 4 years ago
Good explanation! But you did not speak much about when to use SGD, although you explained GD and mini-batch SGD well.
@vishaldas6346 · 4 years ago
There is not much more to explain about SGD: it is just taking 1 data point at a time, compared with a dataset of 1000 data points.
@SandeepKashyap-ek2hx · 2 years ago
You are a HERO sir
@RishikeshGangaDarshan · 3 years ago
Very good, clearly explained. Nobody can explain it like this.
@user-wd2xh3vj2v · 1 month ago
Thank you, sir, for your valuable information. ❤
@ArthurCor-ts2bg · 4 years ago
Krish, you condense the subject most meaningfully.
@gayathrijpl · 1 year ago
Such a clean way of explaining.
@goodnewsdaily-tamil1990 · 2 years ago
1000 likes for you, man 👏👍
@sandipansarkar9211 · 4 years ago
Thanks, Krish. Good video. I want to use all this knowledge in my next batch of deep learning by iNeuron.
@tonyzhang2501 · 3 years ago
Thank you, it is a clear explanation. I got it!
@yukeshnepal4885 · 4 years ago
At 8:58: using GD it converges quickly, while using mini-batch SGD it follows a zigzag path. How??
@kannanparthipan7907 · 4 years ago
In the case of mini-batch SGD, we consider only some points, so there will be some deviation in the calculation compared to usual gradient descent, where we consider all values. A simple analogy: GD is like the total population and mini-batch SGD is like a sample of the population; they will never be exactly equal, and the sample distribution will always deviate somewhat from the total population distribution. We can't use GD everywhere due to computation time; using mini-batch SGD gives an approximately correct result.
@bhargavpotluri5147 · 4 years ago
@@kannanparthipan7907 Deviation would show up in the final output, or the final converged result. The question is why we see it during the process of convergence. Also, if we consider different samples for every epoch, then I understand there can be zigzag results during convergence. But if only one sample of k records is considered, why is there still zigzag during convergence?
@bhargavpotluri5147 · 4 years ago
OK, now I got it. For every iteration, samples are picked at random, hence the zigzag. I just went through other articles.
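A minimal runnable sketch of what this thread converges on (illustrative code, not from the video; the 32-point batch size and all names are made up): full-batch GD computes each update from all 1000 points, so its path is smooth, while mini-batch GD estimates the gradient from a fresh random sample each step, so individual updates deviate and the path zigzags even though both end up near the same slope.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 1000)
y = 3.0 * X + rng.normal(0, 0.1, 1000)           # true slope is 3

def grad(w, xb, yb):
    # dL/dw for the MSE loss L(w) = mean((y - w*x)^2)
    return -2.0 * np.mean(xb * (yb - w * xb))

w_full, w_mini, lr = 0.0, 0.0, 0.1
for step in range(200):
    w_full -= lr * grad(w_full, X, y)            # full batch: smooth, direct path
    idx = rng.choice(1000, size=32)              # fresh random mini-batch each step
    w_mini -= lr * grad(w_mini, X[idx], y[idx])  # noisy estimate -> zigzag path

print(w_full, w_mini)                            # both end up near 3.0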
@rohitsaini8480 · 1 year ago
Sir, please solve my problem. In my view, we are doing gradient descent to find the best value of m (the slope in the case of linear regression, considering b = 0). So if we use all the points, then we should come to know at which value of m the loss is lowest, so why do we have to use a learning rate to update the weight when we already know the best value?
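A short sketch of the point this question misses (a made-up one-parameter loss, not the video's code): computing the gradient over all points does not reveal the best m directly; it only gives the slope at the current m, so we repeatedly take small learning-rate-sized steps. (Plain linear regression does admit a closed-form solution, but gradient descent is taught because it generalizes to models that don't.)

```python
def dL(m):
    # derivative of the loss L(m) = (m - 3)**2, whose minimum is at m = 3
    return 2.0 * (m - 3.0)

m, lr = 0.0, 0.1
for _ in range(50):
    m -= lr * dL(m)     # each step uses only the local slope at the current m
print(m)                # approaches 3.0 without ever "knowing" it in advance
```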
@allaboutdata2050 · 5 years ago
What an explanation 🧡. Great!! Awesome!!
@ukc2704 · 5 years ago
Great video, man 👍👍. Please keep it up. I am waiting for the next videos.
@chinmaybhat9636 · 4 years ago
Awesome @KrishNaik Sir.
@koustavdutta5317 · 3 years ago
Hi Krish, one request: like this playlist, please make long videos for the ML playlist covering the loss functions and optimizers used in various ML algorithms, mainly in the case of classification algorithms.
@aditisrivastava7079 · 5 years ago
Just wanted to ask if you could also suggest some good online resources we can read that could bring more clarity...
@minakshiboruah1356 · 3 years ago
@12:02 Sir, it should be mini-batch stochastic gradient descent.
@Kurtmind · 2 years ago
Excellent explanation Sir!
@severnsevern1445 · 4 years ago
Great explanation. Very clear. Thanks!
@vinuvarshith6412 · 1 year ago
Top-notch explanation!
@soheljagirdar8830 · 4 years ago
4:17 — SGD needs a minimum of 256 records to find the error/minima? You said it's 1 record at a time.
@pramodyadav4422 · 4 years ago
I read a few articles which say that in SGD, one data point is picked at random from the whole dataset at each iteration. The 256 records you're talking about may be mini-batch SGD: "It is also common to sample a small number of data points instead of just one point at each step, and that is called 'mini-batch' gradient descent."
@tejasvigupta07 · 4 years ago
@@pramodyadav4422 Yeah, even I have read that in SGD only one data point is selected and used for the update in each iteration, instead of all.
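A hedged summary of this thread in code (illustrative names; a 1-D linear model is assumed): the only structural difference between the two methods is how many randomly chosen rows feed each weight update. A size like 256 is a common mini-batch choice, not part of SGD's definition.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad(w, xb, yb):
    # MSE gradient for the model y ≈ w * x
    return -2.0 * np.mean(xb * (yb - w * xb))

def sgd_step(w, X, y, lr=0.01):
    i = rng.integers(len(X))                          # classic SGD: one random row
    return w - lr * grad(w, X[i:i+1], y[i:i+1])

def minibatch_step(w, X, y, lr=0.01, k=256):
    idx = rng.choice(len(X), size=k, replace=False)   # mini-batch: k random rows
    return w - lr * grad(w, X[idx], y[idx])

# Tiny demo on synthetic data: both converge toward the true slope of 3.
X = rng.uniform(-1, 1, 1000)
y = 3.0 * X
w = 0.0
for _ in range(2000):
    w = minibatch_step(w, X, y)
print(w)                                              # close to 3.0
```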
Thank you very much for your efforts. Please, how can we solve a portfolio allocation problem using this algorithm? Please answer me.
@Skandawin78 · 5 years ago
Your videos are an excellent reference to brush up on these concepts.
@sreejus8218 · 4 years ago
If we use a sample of the output to find the loss, do we use its derivative to update all the weights, or only the weights of the respective outputs?
@lj123-g9d · 6 months ago
So simply explained
@jiayuzhou6051 · 8 months ago
The only video that explains it.
@nikkitha92 · 4 years ago
Sir, your videos are amazing. Can you please explain the latest methodologies such as BERT and ELMo?
@bhavanapurohit2627 · 4 years ago
Hi, is it completely theoretical or will you code in further sessions?
@ankitbiswas8380 · 2 years ago
You mentioned that SGD takes place in linear regression; I didn't understand that comment. Even in your linear regression videos, for the mean squared error we take the sum of squares over all data points. So how did SGD get linked to linear regression?
@rabidub733 · 10 months ago
Thanks for this! Great explanation.
@jsverma143 · 5 years ago
Negative and positive slopes are best explained as follows: since the angle of the tangent is more than 90 degrees on the left side of the curve, the slope comes out negative, and on the other side it is less than 90 degrees, so it comes out positive.
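The same point as a quick numeric check (a hypothetical loss L(w) = (w - 2)^2 is assumed here, not anything from the video): the slope is negative left of the minimum and positive right of it, which is exactly why the update w -= lr * slope moves toward the minimum from either side.

```python
def slope(w):
    # derivative of L(w) = (w - 2)**2; the minimum sits at w = 2
    return 2.0 * (w - 2.0)

lr = 0.1
print(slope(0.5))                 # -3.0: negative slope left of the minimum
print(0.5 - lr * slope(0.5))      #  0.8: the update pushes w to the right
print(slope(3.5))                 #  3.0: positive slope right of the minimum
print(3.5 - lr * slope(3.5))      #  3.2: the update pushes w to the left
```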
@bikkykumar6312 · 1 month ago
Hello sir, I am stuck with gradient descent, mini-batch and SGD. Can you recommend a textbook or material for these topics? Any help will be appreciated. Thank you.
@muhammedsahalot8683 · 8 months ago
Which converges faster, SGD or GD?
@NaveenKumar-ts1om · 6 months ago
Awesome KRISHHHHHH
@nansonspunk · 2 years ago
Yes, I really liked this explanation, thanks.
@taranilakshmi9680 · 5 years ago
Explained very well. Thank you.
@achrafkmout9398 · 3 years ago
Very good explanation.
@AjanUnderscore · 2 years ago
Thank u sir 🙏🙏🙌🧠🐈
@samiabidah4197 · 3 years ago
Please, what is the difference between GD and batch GD?
@siddharthachatterjee9959 · 4 years ago
Good attempt 👍. Please record with camera on manual focus.
@a.sharan8876 · 1 year ago
py:28: RuntimeWarning: overflow encountered in scalar power, at the line: cost = (1/n)*sum([value**2 for value in (y - y_predicted)]). Hey bro, I am stuck here with this error and I could not understand the error itself. Could you suggest a solution? I have just now started practicing ML algorithms.
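Without the full script this is only a guess, but that warning usually means the weights are diverging: a too-large learning rate (or unscaled features) makes (y - y_predicted) grow every iteration until squaring it overflows. A sketch of the usual fixes, using made-up stand-in data:

```python
import numpy as np

# Stand-ins for the asker's arrays; the same fixes apply to the real ones.
X_raw = np.array([100.0, 200.0, 300.0, 400.0])
y = np.array([1.0, 2.0, 3.0, 4.0])

X = (X_raw - X_raw.mean()) / X_raw.std()     # fix 1: scale the features
w, b, lr = 0.0, 0.0, 0.05                    # fix 2: use a small learning rate
for _ in range(500):
    y_predicted = w * X + b
    w -= lr * (-2.0 * np.mean(X * (y - y_predicted)))
    b -= lr * (-2.0 * np.mean(y - y_predicted))

cost = np.mean((y - y_predicted) ** 2)       # vectorized cost, no overflow
print(w, b, cost)
```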
@r7918 · 3 years ago
I have one question regarding this topic. This concept is applicable to linear regression, right?
@syedsaqlainabatool3399 · 4 years ago
This is what I was looking for.
@ruchikalalit1304 · 5 years ago
Have you made videos on the practical implementation of all this work? If so, please share the links.
@vineetagarwal18 · 2 years ago
Great Sir
@bijaynayak6473 · 5 years ago
Hello Sir, could you share the link to the code you explained? This video series is very nice; within a short period we can cover so many concepts. :)
@akfvc8712 · 4 years ago
Great video, excellent effort. Appreciated!!
@pareesepathak7348 · 3 years ago
Can you share the paper for reference, and can you also share resources on deep learning for image processing?
@alsabtilaila1923 · 3 years ago
Great one!
@manojsalunke2842 · 4 years ago
At 9:28 you said SGD will take more time to converge than GD, so which is faster, SGD or GD?
@response2u · 2 years ago
Thank you, sir!
@muralimohan6974 · 4 years ago
How can we take k inputs at the same time?
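One common answer, sketched below with illustrative code (not the video's): stack the k inputs as rows of a matrix; a single matrix product then computes all k predictions and an averaged gradient in one shot.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))               # 1000 samples, 3 features
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w, lr, k = np.zeros(3), 0.1, 32
for _ in range(300):
    idx = rng.choice(len(X), size=k, replace=False)
    Xb, yb = X[idx], y[idx]                  # k inputs processed together
    grad = -2.0 / k * Xb.T @ (yb - Xb @ w)   # gradient averaged over the batch
    w -= lr * grad

print(w)                                     # close to [1.0, -2.0, 0.5]
```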
@rdf1616 · 4 years ago
Good explanation! Thanks!
@vishaljhaveri7565 · 3 years ago
Thank you sir.
@abhrapuitandy3327 · 4 years ago
Please do tell us about stochastic gradient ascent also.
@_JoyshreeMozumder · 4 years ago
What is the source of the data points?
@khushboosoni2788 · 1 year ago
Sir, can you explain the SPGD algorithm to me, please?
@codewithbishal895 · 4 months ago
Excellent
@percyjardine5724 · 4 years ago
Thanks, Krish.
@aminuabdulsalami4325 · 5 years ago
Great guy.
@RaviRanjan_ssj4 · 5 years ago
Great video!!
@louerleseigneur4532 · 3 years ago
Thanks buddy
@rameshthamizhselvan2458 · 5 years ago
Excellent!
@ting-yuhsu4229 · 4 years ago
You are AWESOME! :)
@thanicssubakar6303 · 5 years ago
Nice bro
@phaneendra3700 · 4 years ago
Hats off, man.
@sathvikambati3464 · 2 years ago
Thanks
@praneethcj6544 · 5 years ago
Perfect!!!
@shubhangiagrawal336 · 4 years ago
Good video.
@atchutram9894 · 5 years ago
Switch off the autofocus feature on your camera. It is distracting.
@devaryan2201 · 3 years ago
Do change your method of teaching; it seems like someone has read a book and is just trying to copy that content from one side... Use your own ideas for it :)