Man, I loved this explanation. I had tried multiple times, from different sources, to understand this topic. Why did you stop making videos like this? I would love to learn more about machine learning. Please start a playlist explaining the A-Z of machine learning.
@dileepn2479 · 1 month ago
It took me almost a week to find such an informative video on gradient descent...
@sairasiddique5116 · 4 years ago
Such fluent, clear concepts! They were like a big rock in front of me when our lecturer was giving the lecture and I felt lost, but now I understand everything with the detail you have explained. Great work!
@AdarshMenon · 3 years ago
Glad it was helpful!
@sathish77j90 · 4 years ago
Really fantastic, sir... I had been struggling to understand this concept for three days, and now I finally get it... Thank you very much, sir.
@charlescoult · 1 year ago
Simple and straightforward. Thank you Adarsh.
@syedniamath5841 · 4 years ago
Great video Adarsh, short, clear to understand, and easy to work through.
@AdarshMenon · 4 years ago
Thank you Syed!
@umersalman2506 · 5 years ago
This is amazing man! Please upload more videos on machine learning algorithms with this kind of proper intuition and implementation.
@AdarshMenon · 5 years ago
Thank you so much! Yes, I am working on more such videos. Thank you for subscribing.
@turtlepedia5149 · 4 years ago
Thank you very much, sir. After I get a job, a big donation is coming your way!
@late_nights · 4 years ago
Good explanation. Your 1000th subscriber here!
@AdarshMenon · 4 years ago
Awesome, thank you!
@modernman00 · 4 years ago
You are the best teacher. I recommend this to anyone studying machine learning on Coursera. Please make more videos and continue with your teaching approach.
@kalaivananr · 3 years ago
Thank you, Adarsh. You made it very simple.
@nisarahmed4487 · 2 years ago
@Adarsh... no doubt you have done a great job, but can you tell me why you are running the for loop over the length of x and not over the epochs, as you said you would?
@aman5534 · 6 months ago
In his git repo he uses epochs, as he said.
@kruti2008 · 1 year ago
Very well explained, step by step with clarity 👍
@sayandey1478 · 5 years ago
Impressive! That was a very clarifying one. But in real-world situations we don't know how many iterations are suitable for a given problem; moreover, you are not checking whether you have reached the minimum, you are concluding it from the data visualization. A training procedure that checks convergence, and a way to fix the learning rate, which I have been looking for, would be very helpful.
@prathamgenius · 3 years ago
Thanks a lot @Adarsh, you helped me complete my ML assignment easily... Cheers 👍
@AdarshMenon · 3 years ago
Glad to hear that!
@rashidmehmood5008 · 3 years ago
Where did you use epochs in the code? You declared it, but I don't think you used it.
@shubhamgoyal977 · 3 years ago
Great video man!! Loved it!! Helped me understand this concept really well. Thanks!!
@AdarshMenon · 3 years ago
Glad to hear it!
@adarshsingh401 · 2 years ago
Please also make a video on multiple linear regression! This one was amazing.
@igorjaqueira5788 · 4 years ago
I believe there is a mistake in the code. First, as you already pointed out in other comments, the variable epochs should be used instead of n. But there should also be an error limit, so the loop should run only as long as the error is greater than some parameter (epsilon).
@AdarshMenon · 4 years ago
Yeah, that is a good idea when training times are long and the model does not overfit. A good best practice I could have mentioned, thanks!
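The stopping-criterion idea from this thread can be sketched as follows. This is an illustrative version, not the video's code: it stops once the per-epoch improvement in the loss falls below a threshold eps, with max_epochs as a safety cap. The synthetic data and all hyperparameter values here are assumptions.

```python
import numpy as np

def gradient_descent(x, y, lr=0.01, max_epochs=100000, eps=1e-8):
    """Fit y = m*x + c, stopping when the loss stops improving by at least eps."""
    m, c = 0.0, 0.0
    n = len(x)
    prev_loss = float("inf")
    for epoch in range(max_epochs):
        y_pred = m * x + c
        loss = np.mean((y - y_pred) ** 2)
        if prev_loss - loss < eps:          # converged: improvement is negligible
            break
        prev_loss = loss
        # Standard gradient updates for the mean squared error
        m += lr * (2 / n) * np.sum(x * (y - y_pred))
        c += lr * (2 / n) * np.sum(y - y_pred)
    return m, c, epoch

# Noise-free synthetic line y = 3x + 2: the fit should approach m=3, c=2
x = np.arange(0, 10, dtype=float)
y = 3 * x + 2
m, c, stopped_at = gradient_descent(x, y)
```

With an epsilon-based stop, the loop ends long before the max_epochs cap on easy data, which addresses the "how many iterations?" question above.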
@veerasekhar8551 · 3 years ago
Thanks man :) Subscribed, expecting you to explain more ML algorithms.
@AdarshMenon · 3 years ago
More to come!
@sayonsai8006 · 3 years ago
Very easy to understand and implement.
@sathishkumaranandashekaren4313 · 3 years ago
Hi, what is the need for adding epochs to the program? My program runs without epochs too.
@janhvibhosle3852 · 5 years ago
Excellent video sir, please make more videos on machine learning with such easy explanations.
@AdarshMenon · 5 years ago
Thanks Ritesh! Will sure do!
@davidthunder9804 · 2 years ago
Hey, how did you make that gradient descent converging animation in Colab? Great video though.
@chetansahu1505 · 5 years ago
Please upload more videos on all the models: KNN, Naive Bayes, decision trees, random forests, dimensionality reduction techniques, and deep learning from scratch. The way you explain, any layman will easily understand. Awesome explanation bro. Became a big fan from just one video.
@MOHSINALI-bk2qo · 5 years ago
Please give me a two-variable example of gradient descent.
@narottamaswal3978 · 3 years ago
I liked and subscribed at this moment 2:47 🙏
@swapnamondal7161 · 3 years ago
Very intuitive video.
@nano7586 · 4 years ago
Best explanation so far... thanks a lot!
@tejaskulkarni5795 · 3 years ago
Nice explanation, very clear.
@desavera · 3 years ago
Fantastic... really solid!!! Thanks...
@raghavgaur8901 · 5 years ago
Sir, where did you use epochs in your code? It is only initialized.
@AdarshMenon · 5 years ago
Hi, epochs was meant to be used in the for loop instead of n. Please see the code on GitHub (link in description).
@mohammeddanishreza4902 · 4 years ago
Why am I getting the m and c values as arrays? Can you please help me?
@neehanthreddy1868 · 8 months ago
Could you provide an example with multiple independent variables?
@jyotshanajha8316 · 1 year ago
Very well explained. Thank you.
@sudeshnasen2731 · 2 years ago
What is epoch? Why are we declaring it without using it? Why run the loop for the length of X instead of the epochs?
@muhammadalikhan5003 · 1 year ago
Thank you. It was very useful.
@bibekanandasahoo3497 · 2 years ago
Adarsh, can you please make a video explaining how gradient descent works in multiple linear regression? Please!
@MinaNaghshnejad · 3 years ago
Nice explanation. In gradient descent, shouldn't the for loop be changed to `for i in range(epochs)`?
@AdarshMenon · 3 years ago
Hi, yes, you are right. It was a mistake, but it worked because the model converged quickly on this dataset. Thank you for pointing it out!
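Since several comments ask about it, here is a minimal sketch of the corrected loop, with epochs controlling the iteration count instead of len(x). The data and hyperparameters are illustrative, not the repo's:

```python
import numpy as np

def gradient_descent(x, y, lr=0.01, epochs=5000):
    """Fit y = m*x + c by gradient descent, iterating epochs times (not len(x))."""
    m, c = 0.0, 0.0
    n = len(x)
    for _ in range(epochs):                        # epochs, not n
        y_pred = m * x + c
        dm = (-2 / n) * np.sum(x * (y - y_pred))   # dL/dm for the MSE loss
        dc = (-2 / n) * np.sum(y - y_pred)         # dL/dc
        m -= lr * dm
        c -= lr * dc
    return m, c

# Noise-free synthetic line y = 2x + 1: the fit should approach m=2, c=1
x = np.arange(0, 10, dtype=float)
y = 2 * x + 1
m, c = gradient_descent(x, y)
```

Using len(x) happened to work in the video only because the dataset converged within that many steps; epochs makes the iteration count an explicit, tunable hyperparameter.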
@DB-in2mr · 5 years ago
Hi, just a note in case somebody hits an issue loading data from your GitHub data.csv, such as "Error tokenizing data. C error: Expected ...". The fix is to load the file in raw format from the repo ( raw.githubusercontent.com/chasinginfinity/ml-from-scratch/master/03%20Linear%20Regression%20in%202%20minutes/data.csv ), adding ", header=None".
@vinaywaingankar6133 · 4 years ago
Great explanation.
@AdarshMenon · 4 years ago
Glad you liked it.
@SaurabhKumar-gj5mp · 5 years ago
One of the best videos. Nice explanation man.
@AdarshMenon · 5 years ago
Thank you!
@aishwaryaravi5249 · 3 years ago
This video really helped me! One question though: it gives errors for higher learning rates. Why is that?
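On the learning-rate question: each update multiplies the parameter error by roughly (1 - lr × curvature), so once lr is too large the factor exceeds 1 in magnitude and m oscillates with growing amplitude, eventually overflowing to nan. A small sketch with made-up data (the lr values below are illustrative for this particular x range):

```python
import numpy as np

def run(lr, epochs=50):
    x = np.arange(0, 100, dtype=float)   # larger x values mean larger gradients
    y = 2 * x + 1
    m, c, n = 0.0, 0.0, len(x)
    for _ in range(epochs):
        y_pred = m * x + c
        m += lr * (2 / n) * np.sum(x * (y - y_pred))
        c += lr * (2 / n) * np.sum(y - y_pred)
    return m

m_small = run(lr=1e-4)   # stable: m moves toward 2
m_large = run(lr=1e-2)   # too large: |m| grows without bound each step
```

This is why rescaling the features (or shrinking lr as the x values grow) matters: the safe learning rate depends on the magnitude of x.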
@sreeprakashneelakantan5051 · 5 years ago
Very well explained 👌
@sjback2145 · 2 years ago
Great, thanks to you. It's a nice and clear explanation.
@Christian-mn8dh · 5 years ago
Hey man, love this vid. I tried to build a gradient descent algorithm from scratch too. Why isn't mine working? Why can't you multiply the learning_rate directly by the error? Here's my code:

for i in range(4):
    ypred = m * x + b
    error = (ypred - y) ** 2
    m = m - (0.001 * error)
    b = b - (0.001 * error)
    m = m.sum()
    b = b.sum()

My 'm' and 'b' values decrease infinitely.
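The snippet above updates m and b with the squared error itself, which is always non-negative and carries no direction information; the update has to use the gradient of the loss instead. A corrected sketch, assuming x and y are NumPy arrays (the data and hyperparameters here are made up):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1                 # target line y = 2x + 1
m, b = 0.0, 0.0
lr = 0.05
n = len(x)

for i in range(2000):
    ypred = m * x + b
    # Gradients of the mean squared error, not the error itself:
    dm = (2 / n) * np.sum((ypred - y) * x)
    db = (2 / n) * np.sum(ypred - y)
    m -= lr * dm
    b -= lr * db
```

Multiplying lr by the squared error shrinks m and b no matter which side of the minimum they are on, which is why the original values decrease forever.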
@kaustubhkapare807 · 1 year ago
The length of the training dataset was typecast to float... and that float was passed as a parameter to the range function...
@Mason-yj1ip · 3 years ago
This is better than my university lectures. Bro, you should be my tutor.
@ROHANJUNEJABAI · 4 years ago
"We repeat this process until our loss function is a very small value or ideally 0 (which means 0 error, or 100% accuracy). The values of m and c that we are left with now will be the optimum values." That is the fourth step of gradient descent. But according to your code, we are just updating the values until epochs = 100. How does that guarantee we keep updating the values until the loss function is minimized?
@hirakmondal6174 · 4 years ago
Exactly my point. This code is wrong, and some people with little to no knowledge who are learning from it are calling it god-level... smh.
@2020-h2s · 3 years ago
Have you implemented Gaussian kernel regression from scratch for a dataset with two independent features?
@destinedtotrade3214 · 3 years ago
I liked & subscribed. Thanks!
@harshmankodiya9397 · 4 years ago
Loved the explanation.
@AdarshMenon · 4 years ago
Glad to hear!
@navinbondade5365 · 4 years ago
I think you need to add `for i in range(iterations)`.
@namanjain1164 · 5 years ago
The best!! Loved it!!
@sarthak810 · 3 years ago
After plotting the regression line, can we apply the linear regression algorithm from sklearn to it?
@agirlfrom_mountains · 4 years ago
Where are we using epochs here? And please explain how we are plotting the gradient descent line using max and min.
@tomtrask_YT · 4 years ago
You really should do something like this to read your csv (the way you do it, the first row is used as column names):

data = pd.read_csv('data.csv', header=0, names=["x_val", "y_val"])

You don't need to assign names, but it's "nice" to do that; that way you don't have to use indexes (e.g. use data["x_val"] instead of data.iloc[:, 0]).
@AdarshMenon · 4 years ago
Thanks Tom, yes, totally agree with you. Will keep this in mind from now on!
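A self-contained illustration of the header pitfall Tom describes, using an in-memory csv (the numbers are made up; for a file with no header row, such as the repo's data.csv, header=None is the safe choice):

```python
import io
import pandas as pd

# A headerless two-column csv, values purely illustrative
raw = "32.5,31.7\n53.4,68.8\n61.5,62.6\n"

# Without header=None, pandas would consume the first data row as column
# names, silently dropping one observation.
data = pd.read_csv(io.StringIO(raw), header=None, names=["x_val", "y_val"])

x = data["x_val"]   # clearer than data.iloc[:, 0]
y = data["y_val"]
```

Named columns also make later code self-documenting, which is the point Tom raises about avoiding positional iloc indexing.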
@usamazahid1 · 2 years ago
Excellent tutorial... kudos!
@praveenbhatt3127 · 3 years ago
Great video. Can someone tell me how he made that animation of the different predicted lines at the start of the video?
@AdarshMenon · 3 years ago
I generated images using OpenCV for all the values and then used Premiere Pro to make a GIF from the images (lots of other GIF-generating tools are available online).
@gchumbes · 3 years ago
Excellent, thanks for sharing.
@tejaskulkarni5795 · 3 years ago
Can you also give an example of how to use gradient descent to find the best alpha for ridge and lasso regression, please?
@firstkaransingh · 2 years ago
Good explanation, thanks bro ☺️
@turtlepedia5149 · 4 years ago
Sir, if epochs is used instead of n, then i will go beyond its limit, so what will the interpretation be?
@turtlepedia5149 · 4 years ago
Sorry sir, I got it. Thanks.
@merohan619 · 4 years ago
Is it necessary that the fitted line will be straight? What if the scatter plot is in the shape of a tan function?
@myentropyishigh8708 · 4 years ago
The gradient is not a line or anything like that; gradient descent is a technique. It is used here to find the optimum values of the slope m and the constant c. It is not a line.
@vivekpuurkayastha1580 · 5 years ago
Could you please explain how the partial derivative equations change with multiple predictors?
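For p predictors the partial derivatives keep exactly the same form, one per coefficient: dL/dw_j = -(2/n) * sum_i x_ij * (y_i - y_pred_i), which vectorizes to -(2/n) * X.T @ (y - y_pred). A sketch with synthetic data (every value below is an assumption for illustration):

```python
import numpy as np

def gradient_descent_multi(X, y, lr=0.05, epochs=5000):
    """X is an (n, p) feature matrix; fits y ≈ X @ w + b."""
    n, p = X.shape
    w = np.zeros(p)
    b = 0.0
    for _ in range(epochs):
        resid = y - (X @ w + b)
        dw = (-2 / n) * (X.T @ resid)   # one partial derivative per predictor
        db = (-2 / n) * np.sum(resid)
        w -= lr * dw
        b -= lr * db
    return w, b

# Recover known coefficients [3, -1.5] and intercept 0.5 from noise-free data
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
y = X @ np.array([3.0, -1.5]) + 0.5
w, b = gradient_descent_multi(X, y)
```

The single-variable version in the video is the p = 1 special case of this update.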
@anthonybeaulamaryj5372 · 3 years ago
Is there any way to find the accuracy of this manual implementation?
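For regression, "accuracy" is usually measured with the coefficient of determination R² rather than a percentage of correct answers. A minimal sketch (the sample numbers are made up):

```python
import numpy as np

def r_squared(y, y_pred):
    """Coefficient of determination: 1 is a perfect fit, 0 is no better than the mean."""
    ss_res = np.sum((y - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)      # total sum of squares
    return 1 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.0, 4.1])         # a near-perfect fit
score = r_squared(y, y_pred)
```

After fitting m and c manually, compute y_pred = m*x + c and pass it to a function like this; sklearn's r2_score gives the same quantity.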
@hreedishkakoty6771 · 2 years ago
Why is your for loop over len(X) and not epochs?
@kanokyusuki · 5 years ago
Thanks a lot! It was a great tutorial.
@sivaramakrishna2248 · 5 years ago
Simply awesome...
@sankojushivasai8028 · 3 years ago
Where were the epochs used?
@AdarshMenon · 3 years ago
Hi, epochs was meant to be used in the for loop instead of n. I made that mistake in the video. Please see the code on GitHub (link in description).
@datascientist2958 · 3 years ago
Can you continue on the same data with stochastic gradient descent and mini-batch gradient descent?
@AdarshMenon · 3 years ago
Will definitely add it to my list.
@turtlepedia5149 · 4 years ago
What is the use of the iteration variable? Please tell us, sir; you didn't use the epochs variable at all.
@AdarshMenon · 4 years ago
Yes, correct. I made a small mistake there and used n in the loop instead of epochs. The program still worked because the values were small and it didn't make much difference.
@SahibzadaIrfanUllahNaqshbandi · 4 years ago
Thank you for the great video. I have a question: if I multiply X by -1, then gradient descent and linear regression fail to obtain the negative slope. Can you please tell me what the problem is?
@AdarshMenon · 4 years ago
I think you would also need to reverse the signs in the calculated derivatives. Try deriving the gradients with negative x and use those equations to do the regression.
@JainmiahSk · 3 years ago
To find the best-fit line, should I include gradient descent code, or does the sklearn library do that automatically with its linear models?
@AdarshMenon · 3 years ago
Doing it with sklearn is the easiest way.
@JainmiahSk · 3 years ago
@@AdarshMenon To clarify doubts, how can I contact you? WhatsApp or Telegram?
@JainmiahSk · 3 years ago
@@AdarshMenon Thank you.
@Saimelodies2512 · 4 years ago
Can you explain the backpropagation derivation of a simple neural network, please? Step by step, using matrix calculus.
@AdarshMenon · 4 years ago
I made an entire series on it - kzbin.info/aero/PLP3ANEJKF1TwHRDS9sPANOzYaAIfJIuam
@siddhantdutta612 · 3 years ago
I guess you forgot to use epochs; instead you used len(X) in the part where we find the values of m and c. I may be wrong, but I just spotted this.
@AdarshMenon · 3 years ago
You are right, I forgot to use the variable epochs. The code works because n iterations were enough for it to converge. Thanks for pointing this out!
@siddhantdutta612 · 3 years ago
@@AdarshMenon Yeah, exactly 😅... By the way, the video was very helpful, good job 👍. And one more thing: it would be even better if you made videos on multivariate regression too.
@NavneetKaur-ps8bn · 4 years ago
Why am I getting the values of m and c as nan, and also this error: FloatingPointError: overflow encountered in double_scalars? I tried fixing the values of the learning rate and epochs, but it won't work.
@AdarshMenon · 4 years ago
Inside the loop, try printing the values of m and c to see exactly at which iteration it overflows. If it happens from the first iteration itself, it may be a problem with the dataset; otherwise it could be the learning rate or epoch values.
@tejaltatiwar4682 · 1 year ago
Amazing.
@vihaangoyal5134 · 3 years ago
Nice and informative video.
@swaroopsasikumar · 4 years ago
Great video.
@AdarshMenon · 4 years ago
Thanks!
@shivangchauhan8609 · 2 months ago
Good one, thanks.
@himanshusoni9876 · 5 years ago
If the dataset has multiple features, is gradient descent still applicable? Please reply... Thank you.
@AdarshMenon · 5 years ago
Yes, it can be applied: towardsdatascience.com/machine-learning-bit-by-bit-multivariate-gradient-descent-e198fdd0df85
@nikhil7129 · 4 years ago
Sir, can we use it when we have multiple independent features?
@AdarshMenon · 4 years ago
Yes you can; use sklearn, because it is easier that way.
@ACC861 · 4 years ago
If you use a different learning rate, the results will vary, right?
@AdarshMenon · 4 years ago
Technically yes, but for such a small dataset I don't think there will be a huge difference. The learning rate determines how fast the function converges, so it is important to experiment with values and choose a suitable one for your use case.
@turtlepedia5149 · 4 years ago
Sir, I am currently doing the Andrew Ng course on Coursera. After I complete it, can you suggest some advanced-level courses to follow? I want to get better at ML.
@AdarshMenon · 4 years ago
To get better at ML, after doing the courses and understanding the basics, I would suggest you try to solve real problems. Kaggle would be a good place to start.
@bhavyanavuluri9177 · 5 years ago
Hi, I am getting the output as NaN. Why?
@balubalaji9956 · 5 years ago
Thanks Adarsh for making me understand 😅
@ManjeetSingh-ly3fk · 1 year ago
Please share code for automatically updating the learning rate in this from-scratch linear regression.
@Viratfanpage369 · 3 months ago
Please give the code for the graph at the start of the video.
@Yzyou11 · 2 years ago
Thank you bro 👍🙏
@ahmednabil2181 · 5 years ago
for i in range(len(x)):
    y_pred = m*x + c
    dm = (-2/len(x))*sum(x*(y - y_pred))
    dc = (-2/len(x))*sum(y - y_pred)

It gives me an error on the y_pred line, telling me it "can only concatenate list (not "int") to list". How do I fix that?
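The usual cause of that error is that x (and y) are plain Python lists, so m*x + c attempts list repetition and concatenation instead of arithmetic. Converting to NumPy arrays (an assumption about the surrounding code) makes the operations element-wise:

```python
import numpy as np

x_list = [1, 2, 3, 4]        # a plain list: m * x_list + c raises a TypeError
m, c = 2.0, 1.0

# With a list, integer repetition followed by `+ c` triggers the
# "can only concatenate list" error. An array fixes it:
x = np.array(x_list, dtype=float)
y_pred = m * x + c           # array([3., 5., 7., 9.])
```

The same conversion is needed for y, since the dm and dc lines also rely on element-wise subtraction and multiplication.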
@navinbondade5365 · 4 years ago
You did not use iterations or epochs in this code; what is the use of it?
@vrstudios9978 · 2 years ago
Hi Adarsh, if possible can you tell me how to present the output as an animated GIF with the line moving on every iteration? If anyone knows, let me know.
@nikhil7129 · 4 years ago
I think there should be some condition to stop when we find the optimal value.
@AdarshMenon · 4 years ago
You can adjust the value of epochs and stop where the loss approaches 0.
@riyazbagban9190 · 2 years ago
Where did you use epochs?
@IndiCEO · 5 years ago
Thanks man...
@jayaprakashkumar4573 · 3 years ago
The output after printing m and c is "nan nan" for me. Please help.
@christiankentorasmussen7492 · 3 years ago
Super good video :)
@suritchakraborty9055 · 4 years ago
Why am I getting the values of m and c as nan? I didn't have a CSV dataset; instead I created two lists and converted them into pandas Series. I don't know where I'm going wrong.
@AdarshMenon · 4 years ago
I think you should use the csv or some other real-world data. If the data is already at the optimum value, gradient descent won't show movement. You can also experiment with other values of the learning rate and epochs. Maybe also try printing the values of m and c to see how many times the loop runs before they become nan.
@suritchakraborty9055 · 4 years ago
@@AdarshMenon It worked out; I needed to fix the learning rate, as my data was small. Thanks a lot!
@sathishkumaranandashekaren4313 · 3 years ago
Hi, your program doesn't show the iterations and isn't clear about the global minimum.