This is the amount of enthusiasm I need from my professor. Keep up the good work, sir!
@josephgigot8827 7 years ago
You are a really great teacher. Watching you, we feel that you are rediscovering what you already know along with us! I think it is the perfect way to teach people.
@Kevin-ex9vr 1 year ago
Man, this series with both the whiteboard and coding together is really the best on YouTube, congrats!
@luisa534 3 years ago
Got stuck on gradient descent in the Andrew Ng Coursera course, so as always, I'm back here for more digestible explanations. Love your teaching style!
@LearnWithYK 3 years ago
Excellent. Love the way you present - enthusiastic, excited, but totally at ease.
@sagauer 7 years ago
Hey, I am watching your channel for the first time and I am amazed at how well you explain things! I am a teacher myself and I find you very inspiring!
@anubratanath5342 3 years ago
This is the most intuitive explanation of linear regression. Thank you, sir!
@christophersheppard3249 1 year ago
You single-handedly made me go into CS. Thank you for your inspiration.
@NickKartha 6 years ago
2:35 spoiler for Avengers: Infinity War
@Thronezzz 3 years ago
Keep up the good work. Your teaching is the best, especially when it comes to complicated topics.
@sues4370 1 year ago
This was a great visual representation of SGD, thank you!
@franciscohanna2956 7 years ago
Great videos Daniel! Thank you! I started an AI course at college this semester (it's almost over now), and this helped me consolidate what I was studying. Keep it up!
@niharika7631 6 years ago
Dan, I love how you get so excited to explain things... so much to say! 😅 Super cute. Plus so informative. I'm glad I found this channel.
@mkalicharan 6 years ago
How awesome is this explanation! Theory + programming is the way to go, Coding Train.
@mohammedsaeed7241 3 years ago
Dude, thank you so much for the intuition! Many people don't bother going through that.
@Manojshankaraj 6 years ago
Really awesome video! Thank you for making machine learning and math so much fun!!
@niklasheise 6 years ago
It's incredible when you display the error and guess values. My next try is to make a learning rate that changes depending on the digits after the decimal point. This tutorial is awesome!!
@sarangchouguley6292 5 years ago
Thank you, Dan. You really made this topic so easy to understand. Keep up the good work.
@josephkarianjahi1467 3 years ago
You are hilarious, man! Best teacher on YouTube for machine learning.
@kingoros 7 years ago
Thank you for making these! Very informative!
@TheCodingTrain 7 years ago
You're welcome!
@st101k 7 years ago
I agree ;)
@solomonrajkumar5537 2 years ago
Your teaching is really incredibly awesome, Sir!!!!... there are no words to say...
@renelalla7799 6 years ago
Thank you for your awesome and easy-to-understand explanations! :) But I have a question regarding the code at 18:08: why can we see the line moving instead of it just being in its final position? As far as I can see in the code, the drawLine() method is called after the gradientDescent() method. What am I missing here?
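A sketch of a possible answer, assuming a typical p5.js structure (the variable names and data below are illustrative, not the video's exact code): p5.js calls draw() roughly 60 times per second, and each call makes only a small gradient step before redrawing, so the line is rendered slightly closer to the fit every frame.

```javascript
// Illustrative sketch (assumed structure, not the video's exact code).
// p5.js calls draw() ~60 times per second; each call nudges m and b a
// little and then redraws, which is why the line visibly "moves".
let m = 0;
let b = 0;
const learning_rate = 0.05;
// Made-up normalized data points lying on y = 0.5 * x + 0.1
const data = [
  { x: 0.1, y: 0.15 },
  { x: 0.5, y: 0.35 },
  { x: 0.9, y: 0.55 },
];

function gradientDescent() {
  for (const pt of data) {
    const guess = m * pt.x + b;
    const error = pt.y - guess;          // vertical distance to the line
    m += error * pt.x * learning_rate;   // one small step, not a full solve
    b += error * learning_rate;
  }
}

function draw() {
  gradientDescent();
  // drawLine(); // would render y = m * x + b at its *current* values
}
```

So drawLine() being called after gradientDescent() is exactly why it animates: each frame draws the line one small step further along, rather than at its final position.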
@juliekell9454 6 years ago
Thank you for this. I was taking a Coursera course on machine learning and got stuck on week one (incredibly frustrating!!) because half of the math instructions didn't make sense. I had no idea it was so simple! I just passed week one. Thank you.
@kwajomensah940 7 years ago
Thank you so much! I've been wanting to go over statistics to start diving into ML and you've just made my day!
@TheCodingTrain 7 years ago
I'm so glad to hear it, thank you!
@gracelungu3646 7 years ago
This channel is really an amazing place to learn high-level programming algorithms. Thank you for the videos, Mr. Shiffman.
@TheCodingTrain 7 years ago
Thank you!
@aeroptical 7 years ago
Sooo impressed by the whiteboard being magically erased! I watched the live stream and thought it would be a total disaster; well, I'm beyond impressed - some fine editing there! :) Loving the ML series so far, Dan.
@arzoosingh5388 5 years ago
I must say I like the way you teach. You're a nice man, God bless.
@SharonKlinkenberg 7 years ago
Great videos Dan, keep up the good work. The code really helps in getting a handle on the theory.
@TheCodingTrain 7 years ago
That's great to hear.
@miteshsharma3106 7 years ago
The snap was cool..... but we saw the truth in the livestream lol 😁
@teja2775 5 years ago
Awesome, cool..... What a teaching style, I really love it. You made my day by helping me understand linear regression with a simple story. Really love you, man.
@syedabuthahirkaz 6 years ago
Shiffman is always a nice man. Love you, Guru!
@fernandonakamuta1502 6 years ago
That is an awesome use of the DOM, man!
@zhimingkoh1029 4 years ago
Hey Dan, thank you so much for making all these videos (: You're amazing!
@francescozappala8822 7 years ago
Hi, I love your videos... I think they are amazing! I'm Italian and don't understand many words 😕 you are great!
@TheCodingTrain 7 years ago
Thank you! I need to get more language subtitles!
@Algebrodadio 7 years ago
Are you going over gradient descent because it's used by the backpropagation algorithm for neural networks? Because I can't wait to watch you do stuff with NNs.
@TheCodingTrain 7 years ago
That's right!
@benjaminsmeding8966 7 years ago
Hi Dan, I really enjoy your videos. I'm a self-taught programmer and your videos give a really good insight into different kinds of algorithms. Maybe nice to know... I'm actually a railtrack (P-Way) engineer and we use, for example, the least squares method quite a lot. Keep up the great work! P.S. If you're interested in some actual train datasets (from the Dutch Rail Network), leave a message.
@TheCodingTrain 7 years ago
Oh yes, that could be good!
@varalakshmi3932 4 years ago
Great videos! You are good at making videos by just being yourself and explaining in the best way possible. :))
@crehenge2386 7 years ago
Thank you for showing me how to implement multivariable calculus in programming!
@capmi1379 7 years ago
Wow! Machine learning!... You gave an understanding of how these algorithms work and how to write them line by line without a package, unlike something like TensorFlow XD Wow.. thank u
@junaid1464 7 years ago
Wonderful. Nobody can teach better than you.
@TheCodingTrain 7 years ago
Thank you so much!
@matteoveraldi.musica 6 years ago
You're the boss. Very good explanation, loved it!
@wengeance8962 7 years ago
Dan is wearing a funky t-shirt! Looks good!
@rajcuthrapali800 7 years ago
You are like my coding guru lol. Thanks so much, Mr. Dan, for your help!
@ElBellacko1 2 years ago
great explanation
@lakeguy65616 6 years ago
So velocity in this example doesn't mean speed, but instead means heading?
@Contradel 7 years ago
So my guess at an explanation of these lines:

m = m + (error * x) * learning_rate;
b = b + (error) * learning_rate;

First line: think about the question "when I change m, how does that affect y?". This is what calculus is used for, more specifically differentiation. The answer to the question is written in math as dy/dm, if our line is defined as y = m * x + b. Then dy/dm = D(m * x + b, m) = x. This is why the error should be multiplied by x. For the second line, same thing! Change of y when changing b? dy/db = D(m * x + b, b) = 1. We could multiply the error by 1, or leave it out as Shiffman did. What does the D function do? It differentiates the expression with respect to the second parameter passed. To calculate this you can use a calculator, use a lookup table of rules, or derive the answer yourself following the proof.
@troatie 7 years ago
This isn't quite right, I don't think? Shouldn't you divide by x? Let's say your error was 1, so you want to change y by 1. If you change m by 1 you'll get a change of x out of that; if you change m by 1/x you'll get the 1 out that you want. Or, written out:

e1 = y - m1 * x - b1
e2 = y - (m1 + m_change) * x - b1

If you want e2 to be 0, then you get:

0 = y - m1 * x - m_change * x - b1
  = y - m1 * x - b1 - m_change * x
  = e1 - m_change * x
m_change = e1 / x
@Contradel 7 years ago
I'm not sure I'm following you. But if, for one of the data points, the error is 1, you want to adjust the parameters (m and b) by a small amount (learning_rate), weighted by the error, so that across all your data points you get closer to a best fit.
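The calculus reading above can be checked numerically. A minimal sketch, assuming squared error E = (y - (m*x + b))^2, so dE/dm = -2 * error * x and dE/db = -2 * error; stepping against the gradient gives the "+ error * x" form, with the factor 2 folded into the learning rate (the data point below is made up):

```javascript
// One gradient step on a single point, assuming squared error.
function step(m, b, x, y, lr) {
  const error = y - (m * x + b);
  return [m + error * x * lr, b + error * lr];
}

// For a small enough learning rate, a step reduces the squared error:
let [m, b] = [0, 0];
const [x, y] = [2, 1]; // a made-up data point
const before = Math.pow(y - (m * x + b), 2);
[m, b] = step(m, b, x, y, 0.01);
const after = Math.pow(y - (m * x + b), 2);
```

That the error shrinks gradually rather than being zeroed in one jump is the difference between gradient descent and solving for the line directly, which is what the e2 = 0 derivation above is implicitly doing.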
@amitbansode 6 years ago
All your videos are rocking.
@adammontgomery7980 6 years ago
Would you have two separate learning rates for m and b? It seems like weighting the slope change higher could be beneficial.
@vengalraochowdary4712 5 years ago
Really superb explanation of gradient descent. Is there any book that you would recommend for machine learning?
@darek4488 5 years ago
You need separate learning rates for m and b. Then set the learning rate for b higher than the one for m, so the line would move up and down faster but rotate slower.
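That variant is easy to sketch. The rate values and data points below are made up for illustration, not from the video:

```javascript
// Hypothetical variant with independent learning rates for m and b.
const LR_M = 0.01; // slope (rotation) adjusts slowly
const LR_B = 0.1;  // intercept (up/down shift) adjusts faster

function stepSeparate(m, b, x, y) {
  const error = y - (m * x + b);
  return [m + error * x * LR_M, b + error * LR_B];
}

// Demo on made-up points lying on y = 0.3 * x + 0.2
const points = [
  { x: 0.1, y: 0.23 },
  { x: 0.5, y: 0.35 },
  { x: 0.9, y: 0.47 },
];
let m = 0;
let b = 0;
for (let i = 0; i < 20000; i++) {
  for (const p of points) {
    [m, b] = stepSeparate(m, b, p.x, p.y);
  }
}
```

Both rates still have to be small enough to keep the updates stable; making one rate larger mainly changes which parameter settles first.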
@8eck 4 years ago
So the steer on the graph would be the vertical line between Yguess and Yactual, i.e. their difference?
@8eck 4 years ago
So the so-called steer is the delta of the weights? That is, the change of the weights in each iteration/epoch?
@jairajsahgal5062 4 years ago
You are a good man. Thank u.
@tigerspidey123 1 year ago
Would it be possible to apply a PID control scheme to the learning rate, so it would accelerate the learning process?
@adaptine 7 years ago
Is what you're describing here effectively a Kalman filter?
@nnmrts 7 years ago
Hey Dan! I really like your videos, but sometimes you seem so lonely in that studio. :D Wouldn't something like a co-op coding challenge be awesome?
@TheCodingTrain 7 years ago
Hah, love this idea!
@BinaryReader 7 years ago
Great stuff, Dan. This material is invaluable for anyone starting out in ML. Top stuff.
@stefanoslalic2199 6 years ago
can you host me?
@PatrickPissurno 6 years ago
You're really amazing! Thank you so much. I really enjoyed the way you explain things.
@dukestt 7 years ago
It worked, yay haha. I was waiting for it. I was watching at the time though.
@jadrima8640 7 years ago
Nice tutorial channel!
@gonengazit 7 years ago
Hey, nice video. Could you explain why you normalize the values between 0 and 1 and what that does? I tried not normalizing them and got some really wacky results with gradient descent, even though it worked fine with the ordinary least squares method. Do you know why that happens?
@gonengazit 7 years ago
Julian atlasovich, but it didn't work without normalization
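One way to see why normalization matters here, as a sketch with made-up pixel-scale data (not the video's exact code): the update m += error * x * lr multiplies by x, so with raw pixel coordinates the steps are hundreds of times larger, and a learning rate that works on [0, 1] data overshoots and diverges.

```javascript
// p5.js-style map(): rescale v from [inMin, inMax] to [outMin, outMax].
function mapRange(v, inMin, inMax, outMin, outMax) {
  return outMin + ((v - inMin) / (inMax - inMin)) * (outMax - outMin);
}

function train(points, lr, steps) {
  let m = 0, b = 0;
  for (let i = 0; i < steps; i++) {
    for (const p of points) {
      const error = p.y - (m * p.x + b);
      m += error * p.x * lr; // step size scales with x
      b += error * lr;
    }
  }
  return [m, b];
}

// Made-up points in a 400x400 pixel canvas
const raw = [
  { x: 100, y: 120 },
  { x: 250, y: 240 },
  { x: 380, y: 350 },
];
const normalized = raw.map((p) => ({
  x: mapRange(p.x, 0, 400, 0, 1),
  y: mapRange(p.y, 0, 400, 0, 1),
}));

const [mNorm, bNorm] = train(normalized, 0.1, 5000); // settles on a good fit
const [mRaw, bRaw] = train(raw, 0.1, 5000);          // overshoots and blows up
```

Ordinary least squares doesn't have this problem because it solves for the line in one shot instead of taking repeated steps whose size depends on the scale of x.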
@yogeshpandey9549 4 years ago
Could you please elaborate on implementing the gradient descent algorithm using vectorization in Python?
@TheCodingTrain 4 years ago
Our Coding Train Discord is a great place to get help with coding questions! discord.gg/hPuGy2g - The Coding Train Team
@iftikhar58 2 years ago
Is the cost function in this video mean squared error?
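Whether or not the video names it, the squared-error quantity being discussed is easy to pin down: averaged over the dataset it is the mean squared error (MSE). A minimal sketch:

```javascript
// Mean squared error of the line y = m * x + b over a dataset.
function mse(points, m, b) {
  let sum = 0;
  for (const p of points) {
    const error = p.y - (m * p.x + b);
    sum += error * error;
  }
  return sum / points.length;
}
```

If the loss is squared error, the per-point update using the raw error corresponds to stepping down its gradient one point at a time, with the constant factors absorbed into the learning rate.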
@YauheniKisialiou 7 years ago
Hey! Great video! But... how is it possible that the line is self-adjusting, according to the code?
@arijitdebnath4480 6 years ago
Why do you multiply error * x by learning_rate?
@thehappycoder3760 2 years ago
Very helpful
@gozumetaklanlar9274 7 years ago
Hi Dan, great video. I have watched most of your videos and I would be glad if you could make a video about addEventListener and its advantages and disadvantages over onclick, onblur, onmouseover... thank you in advance.
@souravsarkar5724 5 years ago
Dear sir, if you could give any suggestion for understanding the formula "DELTA_m = error * x", I would be very grateful.
@ImtithalSaeed 7 years ago
Why did you say x = data[i].x and y = data[i].y at 12:20?
@OneShot_cest_mieux 7 years ago
Hello, there is a translation of your description and your title into French. I live in France and I can't disable this. How do I do that, please?
@massadian75 7 years ago
Very interesting!
@TheNikhilmishras 7 years ago
Great videos! :D You are the best! Do you recommend going through the "Intelligence and Learning" sessions after the p5.js introduction for someone who wants to get into machine learning?
@vishwajeetsingh6766 6 years ago
Can someone explain why this is correct? m = m + (error * x) * learning_rate; I mean, how is it dimensionally correct? Shouldn't the error be divided by x, so that m is added to something that has the same dimensions as m?
@michaelho9388 5 years ago
I agree with you; I feel confused by this part as well.
@nkemer 5 years ago
Yep, I don't understand either.
@nkemer 5 years ago
Oh, it is explained in the next video.
@PeterGObike 5 years ago
The derivative from first principles allows for that: 2 * error * x = (error * x) * 2 (or any small number epsilon).
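Another way to see why it's error * x rather than error / x, sketched with made-up numbers: the "* x" comes from the chain rule on the squared error, since d/dm (y - (m*x + b))^2 = -2 * error * x. It is not a dimensional rearrangement of y = m*x + b. Dividing by x would instead try to zero one point's error in a single jump, and it takes enormous steps whenever x is near zero:

```javascript
const lr = 0.1;
const m = 0, b = 0;
const pt = { x: 0.001, y: 0.1 }; // a made-up point with x close to zero

const error = pt.y - (m * pt.x + b); // 0.1
const gradStep = error * pt.x * lr;  // tiny, gentle adjustment to m
const divStep = (error / pt.x) * lr; // enormous jump from dividing by small x
```

Here divStep is six orders of magnitude larger than gradStep. With error * x, points with x near zero contribute little to the slope update, which is exactly what the gradient says they should do, since the slope barely affects their prediction.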
@user-vl2oi3yr8q 7 years ago
Hey, cool video. I have an unrelated question: how can I convert strings or single letters to numbers, like 'A' to 65, in p5.js?
@gonengazit 7 years ago
Felix Berg: var s = 'A'; console.log(s.charCodeAt(0));
@user-vl2oi3yr8q 7 years ago
Thank you!
@gonengazit 7 years ago
Felix Berg you're welcome!
@bellindaakwa-asare4442 1 year ago
Great video!! Where can I get the code?
@TheCodingTrain 1 year ago
Apologies that it is missing, please file an issue here! github.com/CodingTrain/thecodingtrain.com/issues
@jasdeepsinghgrover2470 7 years ago
GREAT video... Thanks a lot!
@jasdeepsinghgrover2470 7 years ago
Had a small doubt: shouldn't the change in slope be error/x instead of error*x, as it is rise/run?
@hunarahmad 6 years ago
Your snap inspired Thanos :D
@blackdedo93 7 years ago
Hey, thanks for the awesome video. I don't understand: why not calculate the correct line directly?
@DaSodaPopCop 7 years ago
The reason is that he is not simply writing a program that finds the correct line. He is specifically writing this program in a way that implements and showcases the idea of gradient descent, which backpropagation builds on. Calculating the line directly would be the most efficient way to write this program, but that's not the point of the video. There will be cases with much higher-dimensional data where this iterative approach is much more efficient than the direct calculation, such as in a neural network.
@DaSodaPopCop 7 years ago
Look at 19:14 for his explanation.
@blackdedo93 7 years ago
Makes sense, thanks. But can you give examples or references on why I would need this learning process?
@williamobeng4703 6 years ago
Well explained. It would be nice to see the code. Can't find it on GitHub.
@TheCodingTrain 6 years ago
github.com/CodingTrain/website/tree/master/Courses/intelligence_learning/session3 (Need to figure out a way for things to be more findable!)
@nikhilnambiar7160 5 years ago
Make a video on lasso regression without a library, as you did for linear regression.
@sujansonly007ify 7 years ago
Is the desired velocity given? When I already know which direction my target is in, why would I choose the other side and steer? Could someone please shed some light on desired velocity?
@himannamdari7375 7 years ago
I love this video. Great, thanks. Nice logo on your shirt.
@anuraglahon8572 6 years ago
Where is the code?? I am not finding it on GitHub.
@Bena_Gold 6 years ago
That "come back to me"... hahahahaha
@aakashkamalapur8510 7 years ago
Can anyone help me out with how I should solve a system of overdetermined nonlinear equations?
@iftikhar58 2 years ago
Brother, do you have any Slack channel or Discord?
@stefanoslalic2199 6 years ago
But once again, I don't understand how the NN knows what the desired output is. You are calculating the loss function based on a desired output that you explicitly write into the system? If you explicitly write out the numbers, you are more or less telling the neural network what to do. Isn't the whole concept of a neural network to find the way by itself?
@TheCodingTrain 6 years ago
Apologies for not making this clear. The technique I'm applying is called "supervised learning", where you have a set of training data with known outputs! The neural network learns how to reproduce the correct results with the known outputs so that it can (hopefully) produce the correct results also with data that doesn't have the answers paired with it. I think I cover this more in my 10.x neural network series.
@stefanoslalic2199 6 years ago
Got it! Thank you for your time and determination, see you in the next episode (:
@ac11dc110 5 years ago
What is the best book for machine learning?
@manojdude3 5 years ago
2:41 The Thanos of blackboard writings.
@namanbhardwaj8230 7 years ago
Which platform is he using?
@tejasdevgekar1154 7 years ago
Really a rookie right now... Gotta progress fast!
@mohammadpatel2315 4 years ago
The concepts in this are very similar to the perceptron model.
@toastyPredicament 2 years ago
Sir, I wanted what they seem to have
@akashsrivastava5963 7 years ago
I need code for linear regression with n variables in Java.
@sreekrishnanr1812 6 years ago
I think you are awesome 😊😊
@NattapongPUN 7 years ago
What about multiple regression?
@charbelsarkis3567 7 years ago
I would love to see the snapping of the fingers live :pp
@adarshr3490 5 years ago
You did the snap even before Thanos did it =D
@frisosmit8920 7 years ago
Maybe it would be cool if you made an AI for a simple game like noughts and crosses with a minimax algorithm.