Linear Regression Gradient Descent | Machine Learning | Explained Simply

82,710 views

Learn With Jay

A day ago

Comments: 98
@MachineLearningWithJay 4 years ago
If you found any value in the video, hit the red subscribe and like button 👍. I would really love your support! 🤗🤗 👉 You will get a new video on machine learning every Sunday if you subscribe to my channel, here: kzbin.info/door/JFAF6IsaMkzHBDdfriY-yQ
@11aniketkumar 2 years ago
Finally! I found something useful. Thanks a lot. Everyone teaches the workings of gradient descent in a very crude way, but almost no one teaches the maths behind it. Almost everyone simply imports gradient descent from some library, and no one shows pseudocode. I wanted to understand the working behind those functions, how the parameters get adjusted, and what maths is used behind the scenes, so that if required we can create our own functions, and this video fulfilled all these requirements.
@Sansaar_Ek_Vistaar 6 months ago
So so true!! ❤
@simonwang4368 3 years ago
This is a great explanation of gradient descent! Thank you!
@MachineLearningWithJay 3 years ago
You're welcome!
@dyxtopia21 A month ago
Thank youu. I zone out a lot in my lectures so this was really helpful to me
@JJ-pz2dx 10 months ago
Hello, at 11:00 why did you multiply the m by 2? In the previous video there was only m.
@IbrahimAli-kx9kp 3 years ago
Last 5 minutes were epic 😍... Thanks 💙
@MachineLearningWithJay 3 years ago
Thank you so much! Your comment means a lot to me.
@msliyanage8411 A month ago
Your explanation is really nice and understandable. Thank you. From Sri Lanka 🥰
@johans7585 2 years ago
Don't stop! This was more than helpful!
@MachineLearningWithJay 2 years ago
Sure 😇… glad to help!
@vinyasshreedhar9833 3 years ago
Your explanation is really good. It would be helpful if you could make video playlists on Linear Algebra, Optimization and Calculus.
@MachineLearningWithJay 3 years ago
Hi Shreedhar, thanks for the compliment and the suggestion. I will consider making videos on these topics too, just that, it might take some time. 🙂
@Sansaar_Ek_Vistaar 6 months ago
The best explanation ever, with detailed mathematical working
@mrguitaramateure 3 years ago
Thanks for this. I'm learning data analytics but I come from a profession with little math, so it's challenging.
@MachineLearningWithJay 3 years ago
You're welcome, James! I will make more videos on machine learning with mathematics for sure!
@aayushjitendrakumar4844 3 years ago
Your way of explaining things was just amazing!! I got everything you wanted to explain, thanks..
@MachineLearningWithJay 3 years ago
Thank you so much! I really appreciate your comment.
@Pubuditha A year ago
Thank you a lot for this. Your explanation helped me wrap my head around gradient descent!
@ananthdev2388 8 months ago
I usually never comment, but this was so simple and easy to understand. Ty
@dhananjay7513 2 years ago
I think something is missing at 12:08, where you omitted the SUM without explaining it; all you showed was the matrix differentiation.
@MachineLearningWithJay 2 years ago
The sum will already happen with matrix multiplication… like instead of having 1^2 + 2^2 + 3^2 … we are writing [1 2 3]*[1 2 3].T
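As a quick sanity check, the same identity in a couple of lines of NumPy (an illustrative snippet, not code from the video):

```python
import numpy as np

e = np.array([1.0, 2.0, 3.0])   # e.g. the residuals y - y_hat

explicit = (e ** 2).sum()       # 1^2 + 2^2 + 3^2
via_dot = e @ e                 # [1 2 3] @ [1 2 3].T -- the sum happens inside the product

print(explicit, via_dot)        # 14.0 14.0
```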
@dhananjay7513 2 years ago
@@MachineLearningWithJay Yeah, I figured it out after watching a few times, but in the video you mentioned that we used the derivative of x^2, so I think you should have emphasized that part. Overall a great video; you made it very easy to clear some of my doubts at the beginner stage. Plus, I would be very grateful if you could create a community channel on Telegram or Discord for anyone who wants to clear doubts, as that's not possible on YT.
@purplefoxdevs1280 2 years ago
Keep going bro, u are clearing my concepts. Please make a playlist on python tutorials.
@MachineLearningWithJay 2 years ago
Thank you! I will try covering python tutorials if I get time… until then, you can check out some other playlist on KZbin for python.
@abhzme1 3 years ago
At 8:15 I did not get why y-hat is equal to that summation ending with theta 0.
@abhzme1 3 years ago
I revisited and got the answer at 9:00, thanks a lot. Just because there is no animation while you point things out, it's a bit of a task to listen and figure out. I wish you reach the next level in presentation, because you are doing a great job with all the logic and fluency! I had a small confusion, as I am also doing Stanford's machine learning course on Coursera, and your video helped in no time. Thanks, grow well.
@MachineLearningWithJay 3 years ago
@@abhzme1 glad it helped. And thanks for suggesting. I have added presentation and animation in the videos uploaded in Neural Network Playlist. Hope you find it better than this. Let me know if you have any specific suggestion while you go through those videos. I will greatly appreciate it.
@jokebro6711 8 months ago
Why do u ignore the -ve sign in the partial derivative?
@demon3769 8 months ago
Nice point-to-point explanation; others only give confusion 😅
@Ramesh-rp6jq 3 years ago
What does theta represent in GD? Please explain.
@MachineLearningWithJay 3 years ago
Theta is a parameter, which we first initialize with zero. Then we train the model to change the theta value in such a way that, with this changed value, we can make accurate predictions. Think of it like the parameters of a straight line. Say the equation of a straight line is y = ax + b. Then a and b are the parameters of this straight line. If we have many such parameters, we represent them with theta. So initially, our straight line will be y = 0. And after training the model, the values of the parameters will be changed, and with these parameters our straight line will fit best on our dataset.
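To make this concrete, here is a minimal gradient descent sketch for fitting y = ax + b, with both parameters packed into `theta` and initialized to zero as described above (variable names are mine, not from the video):

```python
import numpy as np

# Toy data generated from y = 2x + 1, so the best parameters are a = 2, b = 1
X = np.array([0.0, 1.0, 2.0, 3.0])
y = 2 * X + 1

theta = np.zeros(2)   # theta[0] = b, theta[1] = a, both initialized to zero
alpha = 0.1           # learning rate
m = len(X)

for _ in range(2000):
    y_pred = theta[0] + theta[1] * X
    error = y_pred - y
    # gradients of the cost (1/(2m)) * sum(error^2) w.r.t. b and a
    grads = np.array([error.sum() / m, (error * X).sum() / m])
    theta = theta - alpha * grads

print(theta.round(3))   # converges close to [1. 2.]
```

After training, `theta` holds the fitted intercept and slope, which is exactly the "automatic adjustment" the reply describes.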
@MachineLearningWithJay 3 years ago
Check out my “What is Linear Regression?” and “Linear Regression Cost Function” videos from this playlist for better understanding: kzbin.info/aero/PLuhqtP7jdD8AFocJuxC6_Zz0HepAWL9cF
@刘小平-m7b 3 years ago
Hey, can you please help me solve this question? Question: You run gradient descent for 15 iterations with α=0.4 and compute J(θ) after each iteration. You find that the value of J(θ) increases over time. Based on this, how would you choose a suitable value for α?
@rambo3rd471 2 years ago
If J(θ) is your cost function and it is increasing over time, you need to choose a smaller learning rate for alpha so that it instead decreases over time.
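This behaviour is easy to reproduce on a toy cost. In the sketch below (not from the video), J(t) = t², so gradient descent diverges once the step size is too large and shrinks the cost when it is small; the exact threshold depends on the cost function:

```python
def costs_over_time(alpha, iters=15, t=10.0):
    """Track J(t) = t**2 while running gradient descent with step size alpha."""
    history = []
    for _ in range(iters):
        t -= alpha * 2 * t          # gradient step, dJ/dt = 2t
        history.append(t ** 2)
    return history

diverging = costs_over_time(alpha=1.2)    # too large: J increases every iteration
converging = costs_over_time(alpha=0.1)   # smaller: J shrinks toward 0

print(diverging[-1] > diverging[0], converging[-1] < converging[0])   # True True
```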
@playwithcode_python A year ago
Why do we take a column of zeros in the (m×n) features matrix? Why can't we multiply directly?
@barnchak 2 years ago
How did anyone formulate the equation with theta and alpha?
@MachineLearningWithJay 2 years ago
It was provided by researchers in their papers on linear regression.
@v1hana350 2 years ago
Can you make a video on the XGBoost algorithm with mathematical formulas?
@MachineLearningWithJay 2 years ago
Thank you for your suggestion! I will consider making video on it, but it will take time. 😊
@v1hana350 2 years ago
@@MachineLearningWithJay Make it as soon as possible.
@v1hana350 2 years ago
Thanks for your response.
@v1hana350 2 years ago
I have another doubt about machine learning algorithms. Can you please clarify it: how do we find the cost function of K-means clustering?
@MachineLearningWithJay 2 years ago
@@v1hana350 There is no need for a cost function in K-means clustering. It is a clustering algorithm, which works differently from linear regression. It works as follows:
- randomly initialize cluster points
- calculate the distance between each cluster point and all the other points in the dataset
- group data points into clusters in such a way that each point goes to the cluster of its nearest cluster point
- recompute each cluster point as the average of all the points in its cluster
- repeat the process
You don't need a cost function here. Still, if you want to use one, you can take the sum of the distances from each cluster point to the other points in that cluster.
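Those steps can be sketched in a few lines of NumPy (a toy illustration, not library code; the reply describes random initialization, while this demo uses the first k points so the result is deterministic):

```python
import numpy as np

def kmeans(points, k, iters=10):
    centers = points[:k].copy()   # init (the reply uses random init instead)
    for _ in range(iters):
        # assign each point to its nearest center
        dists = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the average of the points assigned to it
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    # optional "cost", as suggested: total distance from points to their centers
    dists = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
    return centers, dists.argmin(axis=1), dists.min(axis=1).sum()

pts = np.array([[0, 0], [0, 1], [1, 0],          # blob near the origin
                [10, 10], [10, 11], [11, 10]],   # blob near (10, 10)
               dtype=float)
centers, labels, cost = kmeans(pts, k=2)
print(np.sort(centers, axis=0).round(2))   # roughly [[0.33 0.33] [10.33 10.33]]
```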
@aarthisomasundaram7005 3 years ago
I am a COBOL programmer who started machine learning. I have a doubt: why do we arbitrarily fix 1000 iterations? As you mentioned, the derivative of the cost w.r.t. theta is a slope, so why don't we stop iterating as soon as the derivative reaches ZERO (meaning at the centre bottom, where no slope exists)? OR why don't we detect that the cost function has reached its minimum by comparing whether its previous value is less than its current value? I searched many sites for the reason, but nowhere is dynamic iteration mentioned rather than a constant iteration count. I'm not sure if I'm missing something; please guide.
@rambo3rd471 2 years ago
If your derivative reaches 0, then you will stop whether you want to or not. Learning comes from a non-zero derivative (it tells you the direction you need to move in), so if it's 0, you stop. This is typically bad for larger problems, because we don't usually have an obvious global minimum, so we want our code to run as long as the cost is decreasing. But if you get a 0, this essentially "kills" the neuron, which results in no learning. This is a common problem when using the ReLU activation function, and is why leaky ReLU was created to mitigate the issue. But if you truly did reach the global minimum and your derivative is 0, then there's no problem. Your model will stop updating each iteration, but since you reached the minimum, you should be good.
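On the original question about dynamic stopping: both proposed criteria (near-zero gradient, or cost no longer improving) are standard in practice, and a fixed count like 1000 is just a simple safety cap. A sketch on a toy cost J(t) = t² (illustrative only, not code from the video):

```python
t, alpha = 10.0, 0.1     # start far from the minimum of J(t) = t**2
prev_cost = float("inf")
iterations = 0

while iterations < 10_000:        # safety cap, like the fixed 1000 iterations
    t -= alpha * 2 * t            # gradient step, dJ/dt = 2t
    cost = t ** 2
    if prev_cost - cost < 1e-12:  # cost barely improved -> treat as converged
        break
    prev_cost = cost
    iterations += 1

print(iterations)   # stops long before the 10_000 cap
```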
@tapashnalge6703 5 months ago
What is theta?
@paulsong4345 2 years ago
Hello, I have a question on the impact of increasing the value of theta when d(cost) / d(theta) is negative. Since the rate of change of the cost function is determined to be positive or negative by (Y - Y_predicted), does this mean that when we INCREASE theta, the value of Y_predicted decreases? I am having trouble understanding this since I assumed because X and Y_predicted share a linear relationship, increasing theta should also increase the value of Y_predicted. Would be grateful if you are able to find the time to clarify this point for me, and by the way, great video I learned a ton!
@MachineLearningWithJay 2 years ago
Hi Paul, we don't manually set (increase or decrease) value of theta. The model automatically sets it. That is why we use Gradient Descent algorithm, to set the appropriate value of theta to make correct predictions. If you manipulate value of theta manually yourself, then your results won't be accurate. The point you should focus on here is why and how the cost function decreases. And how it helps to automatically adjust the value of theta. The value of theta can be very small or very large. Positive or Negative. It doesn't matter. What matters is, it is automatically adjusted (whether positive/negative/small/large) in a way that it makes correct predictions.
@MLOpsBasics A year ago
You explain really well... watching in 2023.
@swapnilborse3150 A year ago
Thank you bro for this explanation 🙏
@veeresh4441 3 years ago
That's an awesome explanation.
@MachineLearningWithJay 3 years ago
Thank you so much, Veeresh!
@jessicasaini712 2 years ago
Great explanation. Please make a video on knn too.
@MachineLearningWithJay 2 years ago
Sure... I will make a video on it too! Thanks for the suggestion.
@saketh712 2 years ago
Tysm, really appreciate your explanation.
@MachineLearningWithJay 2 years ago
You’re welcome!
@vl...6426 A year ago
Can you also solve questions, please, for all the videos you explained...
@KeringKirwa 10 months ago
What an explanation, thanks sir.
@MoviesMagicClip 2 months ago
You get a new subscriber here.
@naomieawounang9153 A year ago
Very helpful, thank you.
@AndayRubin 3 years ago
OMG, THANK YOU!
@kubidem83 2 months ago
Very good end-to-end explanation.
@MachineLearningWithJay 2 months ago
Thank you!!
@ManaseeParulekar A year ago
It was helpful!
@utkarshsharma4041 A year ago
WOW explanation
@MachineLearningWithJay A year ago
Haha… thanks!
@salmansaeed4039 3 years ago
Need gradient descent for logistic regression and its derivation.
@MachineLearningWithJay 3 years ago
Hi Salman... I have already made a video on it... you can check it out in the logistic regression playlist.
@salmansaeed4039 3 years ago
@@MachineLearningWithJay thanks
@mudassirbeautytips 3 years ago
Please explain the cost function using graphs .....
@MachineLearningWithJay 3 years ago
Okay... Thanks for the feedback, Mudassir! I will try to cover it in my future videos.
@supriyamanna715 3 years ago
man the name is coding lane, what is boost?
@MachineLearningWithJay 3 years ago
Hi Supriya... previously, the name of the channel was Code Booster... that is why.
@supriyamanna715 3 years ago
@@MachineLearningWithJay Bro, I request you to make a video on a roadmap for learning ML engineering from scratch to advanced, and specify resources for it, so every self-taught learner gets an idea.
@MachineLearningWithJay 3 years ago
@@supriyamanna715 Thank you for the suggestion. I will create a video on it.
@hamza09100 5 months ago
Hello, I believe the sigma goes from zero to m, not from 1 to m. Anyway, thanks for the great explanation.
@aditya5531 A year ago
thanks bro :)
@rameshwarsingh5859 3 years ago
You also hinted at the gradient descent problem, where the local value can disappear like a ghost... 👻👻👻
@Adil-qf1xe A year ago
Hi JP, you stopped uploading videos; I hope everything is fine with you.
@Mustistics 2 years ago
I don't understand why your cost function is divided by 2 times the sample size instead of just m. Every other guide shows only m.
@MachineLearningWithJay 2 years ago
Hi, the final performance won't be affected whether you divide by m or 2m. You can check my detailed answer in the comments (on this video or another video in this playlist).
@11aniketkumar 2 years ago
After differentiation, the entire function gets multiplied by 2. To eliminate that 2, he divided by 2m at the beginning itself. Once the 2 is removed, updating the values is much easier.
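That cancellation can be verified numerically by comparing the analytic gradient against a finite-difference estimate (an illustrative check; the variable names are mine):

```python
import numpy as np

m = 4
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
t = 0.5   # a single slope parameter, to keep it simple

def J(t):
    # cost with the 1/(2m) convention
    return ((t * X - y) ** 2).sum() / (2 * m)

# the 2 from differentiating (.)^2 cancels the 1/2, leaving a clean 1/m
analytic = ((t * X - y) * X).sum() / m

eps = 1e-6
numeric = (J(t + eps) - J(t - eps)) / (2 * eps)

print(abs(analytic - numeric) < 1e-6)   # True
```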
@SanthoshDevaraju-q1d 9 months ago
Who will give the alpha value?
@uchindamiphiri1381 A year ago
I am starting my machine learning journey now; I feel like I am late 😪
@Murmur1131 4 years ago
Pls assume people don't know calculus. That could be your niche, where other channels give up on their people.
@MachineLearningWithJay 4 years ago
Ohh... that's very valuable feedback. I am definitely going to take action on this. Thanks a lot!!
@hypebeastuchiha9229 A year ago
If you don’t know BASIC CALCULUS GO BACK TO SCHOOL AND PICK ART CLASSES YOU ARENT SMART ENOUGH FOR THIS FIELD STUPID
@sumanjyoti6063 A month ago
I might be late commenting this 3 years later, lol, but some calculus is a prerequisite for ML, and you need to know some vector concepts, probability concepts, Naive Bayes, random variables and probability distributions, and much more. If people were to cover all of that, the video would be long and most viewers would not watch it.
@DS_Gurukul A year ago
Nothing interesting in this 😢
@MachineLearningWithJay A year ago
Yupp… Machine Learning is not interesting, but powerful 😇
@o__bean__o A year ago
​@@MachineLearningWithJayHey, it is interesting also 😠
@akshaykrgupta88 9 months ago
Pretty bad explanations.. lacks flow and seems to be copied from somewhere.
@dyxtopia21 A month ago
This is why your wife doesn't love you