3.4: Linear Regression with Gradient Descent - Intelligence and Learning

148,114 views

The Coding Train

Comments: 172
@FANSasFRIENDS · 2 years ago
This is the amount of enthusiasm I need from my professor. Keep up the good work, sir!

@josephgigot8827 · 7 years ago
You are a really great teacher. Watching you, we feel that you are rediscovering what you already know along with us! I think it is the perfect way to teach people.

@Kevin-ex9vr · 1 year ago
Man, this series with both whiteboard and coding together is really the best on YouTube. Congrats!

@luisa534 · 3 years ago
Got stuck on gradient descent in the Andrew Ng Coursera course, so as always, I'm back here for more digestible explanations. Love your teaching style!
@LearnWithYK · 3 years ago
Excellent. Love the way you present - enthusiastic, excited, but totally at ease.

@sagauer · 7 years ago
Hey, I am watching your channel for the first time and I am amazed at how well you explain things! I am a teacher myself and I find you very inspiring!

@anubratanath5342 · 3 years ago
This is the most intuitive explanation of linear regression. Thank you, sir!

@christophersheppard3249 · 1 year ago
You single-handedly made me go into CS. Thank you for your inspiration.
@NickKartha · 6 years ago
2:35 spoiler for Avengers: Infinity War

@Thronezzz · 3 years ago
Keep up the good work. Your teaching is the best, especially when it comes to complicated topics.

@sues4370 · 1 year ago
This was a great visual representation of SGD, thank you!
@franciscohanna2956 · 7 years ago
Great videos Daniel! Thank you! I started an AI course at college this semester (it's almost ending now), and this helped me consolidate what I was studying. Keep it up!

@niharika7631 · 6 years ago
Dan, I love how you get so excited to explain things... So much to say! 😅 Super cute. Plus so informative. I'm glad I found this channel.

@mkalicharan · 6 years ago
How awesome is this explanation! Theory + programming is the way to go, Coding Train.

@mohammedsaeed7241 · 3 years ago
Dude, thank you so much for the intuition! Many people don't bother going through that.

@Manojshankaraj · 6 years ago
Really awesome video! Thank you for making machine learning and math so much fun!!

@niklasheise · 6 years ago
It's incredible when you display the error and guess values. My next experiment is to make a learning rate that changes depending on the digits after the decimal point. This tutorial is awesome!!
@sarangchouguley6292 · 5 years ago
Thank you, Dan. You really made this topic so easy to understand. Keep up the good work.

@josephkarianjahi1467 · 3 years ago
You are hilarious, man! Best teacher on YouTube for machine learning.

@kingoros · 7 years ago
Thank you for making these! Very informative!

@TheCodingTrain · 7 years ago
You're welcome!

@st101k · 7 years ago
I agree ;)
@solomonrajkumar5537 · 2 years ago
Your teaching is really incredibly awesome, sir!!! There are no words to describe it...

@renelalla7799 · 6 years ago
Thank you for your awesome and easy-to-understand explanations! :) But I have a question regarding the code from 18:08. Why can we see the line moving instead of it just being in its final position? As far as I can see in the code, the drawline() method is called after the gradientDescent() method. What am I missing here?
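A likely answer (inferred from the p5.js structure the video uses, not stated in this thread): p5.js calls draw() roughly 60 times per second, and each call performs only one small gradient descent step before redrawing, so the line converges gradually on screen rather than jumping to its final position. A minimal plain-JS stand-in for that frame loop, using the video's variable names:

// Stand-in for the p5.js frame loop; m, b, data, learning_rate follow the video.
let m = 0, b = 0;
const learning_rate = 0.1;
const data = [{x: 0.1, y: 0.2}, {x: 0.4, y: 0.5}, {x: 0.8, y: 0.9}];

function gradientDescent() {          // ONE small step, called once per "frame"
  for (const pt of data) {
    const error = pt.y - (m * pt.x + b);
    m += error * pt.x * learning_rate;
    b += error * learning_rate;
  }
}

// Each frame the line is drawn with the CURRENT m and b, so you see it move.
for (let frame = 0; frame < 5; frame++) {
  gradientDescent();
  console.log(`frame ${frame}: y = ${m.toFixed(3)}x + ${b.toFixed(3)}`);
}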
@juliekell9454 · 6 years ago
Thank you for this. I was taking a Coursera course on machine learning and got stuck on week one (incredibly frustrating!!) because half the math instructions didn't make sense. I had no idea it was so simple! I just passed week one. Thank you.

@kwajomensah940 · 7 years ago
Thank you so much! I've been wanting to go over statistics to start diving into ML, and you've just made my day!

@TheCodingTrain · 7 years ago
I'm so glad to hear, thank you!

@gracelungu3646 · 7 years ago
This channel is really an amazing place to learn advanced programming algorithms. Thank you for the videos, Mr. Shiffman.

@TheCodingTrain · 7 years ago
Thank you!
@aeroptical · 7 years ago
Sooo impressed by the whiteboard being magically erased! I watched the live stream and thought it would be a total disaster; well, I'm beyond impressed - some fine editing there! :) Loving the ML series so far, Dan.

@arzoosingh5388 · 5 years ago
I must say I like the way you teach. You're a nice man, God bless.

@SharonKlinkenberg · 7 years ago
Great videos Dan, keep up the good work. The code really helps in getting a handle on the theory.

@TheCodingTrain · 7 years ago
That's great to hear.

@miteshsharma3106 · 7 years ago
The snap was cool... but we saw the truth in the livestream lol 😁

@teja2775 · 5 years ago
Awesome, cool... What a teaching style! I really love it. You made my day by helping me understand linear regression with a simple story. Really love you, man.
@syedabuthahirkaz · 6 years ago
Shiffman is always a nice man. Love you, Guru!

@fernandonakamuta1502 · 6 years ago
That is an awesome use of the DOM, man!

@zhimingkoh1029 · 4 years ago
Hey Dan, thank you so much for making all these videos (: You're amazing!

@francescozappala8822 · 7 years ago
Hi, I love your videos... I think they are amazing! I'm Italian and don't understand many words 😕 You are great!

@TheCodingTrain · 7 years ago
Thank you! I need to get more language subtitles!

@Algebrodadio · 7 years ago
Are you going over gradient descent because it's used by the backpropagation algorithm for neural networks? Because I can't wait to watch you do stuff with NNs.
@TheCodingTrain · 7 years ago
That's right!

@benjaminsmeding8966 · 7 years ago
Hi Dan, I really enjoy your videos. I'm a self-taught programmer and your videos give really good insight into different kinds of algorithms. Maybe nice to know... I'm actually a railtrack (P-Way) engineer and we use, for example, the least squares method quite a lot. Keep up the great work! P.S. If you're interested in some actual train datasets (from the Dutch Rail Network), leave a message.

@TheCodingTrain · 7 years ago
Oh yes, that could be good!

@varalakshmi3932 · 4 years ago
Great videos! You are good at making videos by just being yourself and explaining in the best way possible. :))

@crehenge2386 · 7 years ago
Thank you for showing me how to implement multivariable calculus in programming!

@capmi1379 · 7 years ago
Wow, machine learning! You gave an understanding of how these algorithms work and how to write them line by line without a package, unlike using something like TensorFlow. XD Wow, thank you!

@junaid1464 · 7 years ago
Wonderful. Nobody can teach better than you.

@TheCodingTrain · 7 years ago
Thank you so much!
@matteoveraldi.musica · 6 years ago
You're the boss. Very good explanation, loved it!

@wengeance8962 · 7 years ago
Dan is wearing a funky t-shirt! Looks good!

@rajcuthrapali800 · 7 years ago
You are like my coding guru lol. Thanks so much, Mr. Dan, for your help!

@ElBellacko1 · 2 years ago
Great explanation!

@lakeguy65616 · 6 years ago
Velocity in this example doesn't mean speed, but instead means heading?
@Contradel · 7 years ago
So my guess at an explanation of these lines:

m = m + (error * x) * learning_rate;
b = b + (error) * learning_rate;

First line: think about the question "when I change m, how does that affect y?". This is what calculus is used for, more specifically differentiation. The answer is written in math as dy/dm, if our line expression is defined as y = m * x + b. Then dy/dm = D(m * x + b, m) = x. This is why the error should be multiplied by x.

For the second line, same thing! Change of y when changing b? dy/db = D(m * x + b, b) = 1. We could multiply error by 1, or leave it out as Shiffman did.

What does the D function do? It differentiates the expression with respect to the second parameter passed. To calculate it you can either use a calculator, use a lookup table of rules, or derive the answer yourself following the proof.

@troatie · 7 years ago
This isn't quite right, I don't think? Shouldn't you divide by x? Let's say your error was 1, so you want to change y by 1. If you change m by 1, you'll get a change of x out of that. If you change m by 1/x, you'll get the 1 out that you want. Or, written out:

e1 = y - m1 * x - b1
e2 = y - (m1 + m_change) * x - b1

If you want e2 to be 0, then you get:

0 = y - m1 * x - m_change * x - b1
  = y - m1 * x - b1 - m_change * x
  = e1 - m_change * x
m_change = e1 / x

@Contradel · 7 years ago
I'm not sure I'm following you. But if, for one of the data points, the error is 1, you want to adjust the parameters (m and b) a small amount (learning_rate), weighted by error, so that for all your data points you get closer to a best fit.
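For readers stuck on the same point (it comes up again further down the thread), the update rule falls out of differentiating the squared error; this is standard calculus rather than something shown in the video's comments:

E = \big(y - (mx + b)\big)^2
\frac{\partial E}{\partial m} = -2\,\big(y - (mx+b)\big)\,x = -2\cdot\mathrm{error}\cdot x
\frac{\partial E}{\partial b} = -2\,\big(y - (mx+b)\big) = -2\cdot\mathrm{error}
% gradient descent steps against the gradient; the constant 2 folds into the learning rate \eta
\Delta m = \eta\cdot\mathrm{error}\cdot x, \qquad \Delta b = \eta\cdot\mathrm{error}

So x multiplies (rather than divides) because it is the derivative of the guess with respect to m. Dividing by x would blow up for points near x = 0; the gradient step only has to point downhill, with the learning rate controlling how far to move.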
@amitbansode · 6 years ago
All your videos are rocking.

@adammontgomery7980 · 6 years ago
Would you have two separate learning rates for m and b? Seems like weighting the slope change higher could be beneficial.

@vengalraochowdary4712 · 5 years ago
Really superb explanation of gradient descent. Is there any book you would refer to or suggest for machine learning?

@darek4488 · 5 years ago
You need separate learning rates for m and b. Then set the learning rate for b higher than the one for m so it would rotate faster, but move up and down slower.
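A minimal sketch of the two-learning-rate idea from this thread (the lr_m/lr_b split and the particular values are the thread's suggestion, not the video's code):

// Same per-point update as the video, but with independent step sizes.
const lr_m = 0.01;   // how fast the line rotates (slope)
const lr_b = 0.05;   // how fast the line shifts up/down (intercept)
function stepPoint(pt, line) {
  const error = pt.y - (line.m * pt.x + line.b);
  line.m += error * pt.x * lr_m;
  line.b += error * lr_b;
}

Whether the higher rate belongs on m or on b is something you would tune empirically; the point is only that the two parameters need not share one rate.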
@8eck · 4 years ago
So the steer on the graph would be the vertical line between y_guess and y_actual, i.e. their difference?

@8eck · 4 years ago
So the so-called steer is the delta of the weights, i.e. the change of the weights in each iteration/epoch?

@jairajsahgal5062 · 4 years ago
You are a good man. Thank you.

@tigerspidey123 · 1 year ago
Would it be possible to apply a PID control scheme to the learning rate, so it would accelerate the learning process?

@adaptine · 7 years ago
What you're describing here is effectively a Kalman filter?

@nnmrts · 7 years ago
Hey Dan! I really like your videos, but sometimes you seem so lonely in that studio. :D Wouldn't something like a co-op coding challenge be awesome?
@TheCodingTrain · 7 years ago
Hah, love this idea!

@BinaryReader · 7 years ago
Great stuff Dan, this is invaluable for anyone starting out in ML. Top stuff.

@stefanoslalic2199 · 6 years ago
Can you host me?

@PatrickPissurno · 6 years ago
You're really amazing! Thank you so much. Really enjoyed the way you explain things.

@dukestt · 7 years ago
It worked, yay haha. I was waiting for it. I was watching at the time though.

@jadrima8640 · 7 years ago
Nice tutorial channel!
@gonengazit · 7 years ago
Hey, nice video. Could you explain why you normalize the values between 0 and 1 and what it does? I tried not normalizing them and I got some really wacky results using gradient descent, even though it worked fine with the ordinary least squares method. Do you know why that happens?

@gonengazit · 7 years ago
@Julian atlasovich But it didn't work without normalization.
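A plausible explanation (standard gradient descent behavior, not stated in this thread): the m-update is proportional to x, so with raw pixel coordinates (say 0-400) each step is hundreds of times larger than with x in [0, 1], and a learning rate tuned for normalized data overshoots and oscillates. Ordinary least squares solves for the line in closed form, so it has no step size to blow up. A sketch of the usual fix, with map() written out to mirror p5.js's built-in of the same name:

// Normalize pixel coordinates into [0, 1] before training (400x400 canvas assumed).
function map(v, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * (v - inMin) / (inMax - inMin);
}
const px = 320, py = 210;           // a clicked point, in pixels
const x = map(px, 0, 400, 0, 1);    // x into [0, 1]
const y = map(py, 0, 400, 1, 0);    // y into [0, 1], flipped: screen y grows downward
console.log(x, y);                  // 0.8 0.475

The alternatives are scaling the learning rate way down or dividing the gradient by the data's scale, but normalizing keeps one learning rate sensible.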
@yogeshpandey9549 · 4 years ago
Would you please elaborate on implementing the gradient descent algorithm using vectorization in Python?

@TheCodingTrain · 4 years ago
Our Coding Train Discord is a great place to get help with coding questions! discord.gg/hPuGy2g - The Coding Train Team
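The question asks for Python, which this series doesn't use; as a sketch in the series' JavaScript, the "vectorized" version computes the gradient over the whole batch at once, and each line maps directly onto a NumPy array operation (e.g. errors = y - (m*x + b), then m += lr * np.mean(errors * x)):

// One whole-batch gradient step, averaging the per-point gradients.
function batchStep(data, m, b, lr) {
  let gm = 0, gb = 0;
  for (const pt of data) {
    const error = pt.y - (m * pt.x + b);
    gm += error * pt.x;   // accumulate dE/dm contributions
    gb += error;          // accumulate dE/db contributions
  }
  return { m: m + lr * gm / data.length, b: b + lr * gb / data.length };
}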
@iftikhar58 · 2 years ago
Is the cost function in this video mean squared error?

@YauheniKisialiou · 7 years ago
Hey! Great video! But... how is it possible that the line is self-adjusting, according to the code?

@arijitdebnath4480 · 6 years ago
Why do you multiply error * x by learning_rate?

@thehappycoder3760 · 2 years ago
Very helpful

@gozumetaklanlar9274 · 7 years ago
Hi Dan, great video. I have watched most of your videos and I would be glad if you could make a video about addEventListener and its advantages and disadvantages over onclick, onblur, onmouseover... Thank you in advance.

@souravsarkar5724 · 5 years ago
Dear sir, if you could give any suggestion for understanding the formula "DELTA_m = error * x", I would be very grateful.
@ImtithalSaeed · 7 years ago
Why did you say x = data[i].x and y = data[i].y at 12:20?

@OneShot_cest_mieux · 7 years ago
Hello, there is a translation of your description and your title into French. I live in France and I can't disable this; how can I do that, please?

@massadian75 · 7 years ago
Very interesting!

@TheNikhilmishras · 7 years ago
Great videos! :D You are the best! Do you recommend going through the "Intelligence and Learning" sessions after the p5.js introduction for someone who wants to get into machine learning?

@vishwajeetsingh6766 · 6 years ago
Can someone explain why this is correct? m = m + (error * x) * learning_rate; I mean, how is it dimensionally correct? Shouldn't error be divided by x so that m is added to something of the same type as m?
@michaelho9388 · 5 years ago
I agree with you, I feel confused by this part as well.

@nkemer · 5 years ago
Yep, I do not understand either.

@nkemer · 5 years ago
Oh, it is in the next video.

@PeterGObike · 5 years ago
The derivative from first principles allows for that: differentiating the squared error gives 2 · error · x, and the constant factor (the 2, or any small epsilon) is absorbed into the learning rate.
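A tiny numeric check of the point above (made-up values, plain JS): the update only needs the right direction, with the learning rate controlling step size, so the error shrinks without ever dividing by x:

let m = 0, b = 0;
const lr = 0.5, x = 0.5, y = 1.0;   // one made-up training point
for (let i = 0; i < 4; i++) {
  const error = y - (m * x + b);
  m += error * x * lr;
  b += error * lr;
  console.log(`step ${i}: error = ${error.toFixed(4)}`);
}
// The logged error shrinks every step (1.0000, 0.3750, 0.1406, ...),
// even though the update multiplies by x rather than dividing by it.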
@user-vl2oi3yr8q · 7 years ago
Hey, cool video. I have an unrelated question: how can I transform strings or single letters into numbers, like 'A' to 65, in p5.js?

@gonengazit · 7 years ago
@Felix Berg var s = 'A'; console.log(s.charCodeAt(0));

@user-vl2oi3yr8q · 7 years ago
Thank you!

@gonengazit · 7 years ago
@Felix Berg You're welcome!
@bellindaakwa-asare4442 · 1 year ago
Great video!! Where can I get the code?

@TheCodingTrain · 1 year ago
Apologies that it is missing, please file an issue here! github.com/CodingTrain/thecodingtrain.com/issues

@jasdeepsinghgrover2470 · 7 years ago
GREAT video... Thanks a lot!

@jasdeepsinghgrover2470 · 7 years ago
I had a small doubt: shouldn't the change in slope be error/x instead of error*x, since slope is rise/run?
@hunarahmad · 6 years ago
Your snap has inspired Thanos :D

@blackdedo93 · 7 years ago
Hey, thanks for the awesome video. I don't understand: why not calculate the correct line directly?

@DaSodaPopCop · 7 years ago
The reason is that he is not simply writing a program that finds the correct line. He is specifically writing this program in a way that implements and showcases the idea behind backpropagation. Calculating the line directly would be the most efficient way to write this program, but that's not the point of the video. There will be instances with much higher-dimensional data where iterative approximation is much more efficient than doing what you suggest, such as in a neural network.

@DaSodaPopCop · 7 years ago
Look at 19:14 for his explanation.

@blackdedo93 · 7 years ago
Makes sense, thanks. But can you give examples or a reference on why I would need this learning process?
@williamobeng4703 · 6 years ago
Well explained. It would be nice to see the code. Can't find it on GitHub.

@TheCodingTrain · 6 years ago
github.com/CodingTrain/website/tree/master/Courses/intelligence_learning/session3 (Need to figure out a way for things to be more findable!)

@nikhilnambiar7160 · 5 years ago
Make a video on lasso regression without a library, as you did for linear regression.

@sujansonly007ify · 7 years ago
Is the desired velocity given? When I already know which direction my target is in, why would I choose the other side and steer? Could someone please shed some light on desired velocity?

@himannamdari7375 · 7 years ago
I love this video. Great, thanks! Nice logo on your shirt.
@anuraglahon8572 · 6 years ago
Where is the code? I am not finding it on GitHub.

@Bena_Gold · 6 years ago
That "come back to me"... hahahahaha

@aakashkamalapur8510 · 7 years ago
Can anyone help me out: in what way should I solve a system of overdetermined nonlinear equations?

@iftikhar58 · 2 years ago
Brother, do you have any Slack channel or Discord?

@stefanoslalic2199 · 6 years ago
But once again I don't understand: how does the NN know what the desired output is? You are calculating the loss function based on a desired output that you explicitly write into the system? If you explicitly write out the numbers, you are more or less telling the neural network what to do; isn't the whole concept of a neural network to find the way by itself?
@TheCodingTrain · 6 years ago
Apologies for not making this clear. The technique I'm applying is called "supervised learning," where you have a set of training data with known outputs! The neural network learns how to reproduce the correct results with the known outputs so that it can (hopefully) produce the correct results also with data that doesn't have the answers paired with it. I think I cover this more in my 10.x neural network series.

@stefanoslalic2199 · 6 years ago
Got it! Thank you for your time and determination, see you in the next episode (:

@ac11dc110 · 5 years ago
What is the best book for machine learning?

@manojdude3 · 5 years ago
2:41 The Thanos of blackboard writing.

@namanbhardwaj8230 · 7 years ago
Which platform is he using?
@tejasdevgekar1154 · 7 years ago
Really a rookie right now... Gotta progress fast!

@mohammadpatel2315 · 4 years ago
The concepts in this are very similar to the perceptron model.

@toastyPredicament · 2 years ago
Sir, I wanted what they seem to have.

@akashsrivastava5963 · 7 years ago
I need code for linear regression with n variables in Java.

@sreekrishnanr1812 · 6 years ago
I think you are awesome 😊😊

@NattapongPUN · 7 years ago
What about multiple regression?
@charbelsarkis3567 · 7 years ago
I would love to see the snapping of the fingers live :pp

@adarshr3490 · 5 years ago
You did the snap even before Thanos did =D

@frisosmit8920 · 7 years ago
Maybe it would be cool if you made an AI for a simple game like noughts and crosses with a minimax algorithm.

@darkcaper703 · 7 years ago
Is this stochastic gradient descent?

@TheCodingTrain · 7 years ago
Yes, indeed!
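A note on the terminology (standard usage, not spelled out in the video): updating m and b after every individual point, as the code here does, is the stochastic variant; averaging the gradient over the whole dataset before each update (as in the batchStep sketch earlier in this thread) is batch gradient descent, and mini-batch gradient descent sits in between.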
@lucafilippini1348 · 7 years ago
PID? As always, thanks Dan...