How to implement Linear Regression from scratch with Python

55,471 views

AssemblyAI

1 day ago

In the second lesson of the Machine Learning from Scratch course, we will learn how to implement the Linear Regression algorithm.
You can find the code here: github.com/Ass...
Previous lesson: • How to implement KNN f...
Next lesson: • How to implement Logis...
Welcome to the Machine Learning from Scratch course by AssemblyAI.
Thanks to libraries like Scikit-learn, we can use most ML algorithms with a couple of lines of code. But knowing how these algorithms work under the hood is very important, and implementing them hands-on is a great way to achieve this.
And most of the time, they are easier to implement than you'd think.
In this course, we will learn how to implement these 10 algorithms.
We will quickly go through how each algorithm works and then implement it in Python with the help of NumPy.
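For reference, here is a minimal sketch of the kind of implementation this lesson builds (the official code is in the GitHub repo linked above; the parameter names lr and n_iters are assumptions based on the course convention, and the factor of 2 from the MSE derivative is written out explicitly even though the video folds it into the learning rate):

    import numpy as np

    class LinearRegression:
        def __init__(self, lr=0.001, n_iters=1000):
            self.lr = lr                # learning rate (step size)
            self.n_iters = n_iters      # fixed number of gradient descent steps
            self.weights = None
            self.bias = None

        def fit(self, X, y):
            n_samples, n_features = X.shape
            self.weights = np.zeros(n_features)
            self.bias = 0.0
            for _ in range(self.n_iters):
                y_pred = np.dot(X, self.weights) + self.bias
                # gradients of the mean squared error w.r.t. weights and bias
                dw = (2 / n_samples) * np.dot(X.T, (y_pred - y))
                db = (2 / n_samples) * np.sum(y_pred - y)
                self.weights -= self.lr * dw
                self.bias -= self.lr * db

        def predict(self, X):
            return np.dot(X, self.weights) + self.bias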
▬▬▬▬▬▬▬▬▬▬▬▬ CONNECT ▬▬▬▬▬▬▬▬▬▬▬▬
🖥️ Website: www.assemblyai...
🐦 Twitter: / assemblyai
🦾 Discord: / discord
▶️ Subscribe: www.youtube.co...
🔥 We're hiring! Check our open roles: www.assemblyai...
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#MachineLearning #DeepLearning

Comments: 81
@Artificial_Intelligence_AI • 1 year ago
This channel is amazing ❤. This is the type of content a lot of instructors forget to teach you when you're learning ML, but this girl explains everything very well, from scratch. Congratulations on your content; I hope to watch more of your videos. You deserve more views for this incredible job.
@LouisDuran • 1 year ago
Woman
@MU_2000 • 4 months ago
Thank you for this video. Before I watched it, I spent a couple of days trying to understand how to write custom code without frameworks. :)
@bendirval3612 • 2 years ago
Wow, that really was from scratch. And the hardest way possible. But it's perfect for teaching Python. Thanks!
@AssemblyAI • 2 years ago
You're very welcome :)
@afizs • 2 years ago
I have used Linear Regression many times, but never implemented it from scratch. Thanks for an awesome video. Waiting for the next one.
@AssemblyAI • 2 years ago
That's great to hear Afiz!
@kamranzamanni6826 • 11 days ago
Thanks for making it easy to follow along!
@tienshinhan8189 • 9 months ago
I rarely bother commenting because I'm rarely impressed. This video was amazing. I love that you're showing theory and OOP. Usually I see basic definitions and code all in one script.
@AssemblyAI • 9 months ago
Wow, thank you!
@haseebmuhammad5145 • 1 month ago
I was looking for exactly this kind of lecture. The lectures are awesome.
@1Rashiid • 16 days ago
Perfect explanation! Thank you.
@markkirby2543 • 1 year ago
This is amazing. Thank you so much for all your clear explanations. You really know your stuff, and you make learning this complex material fun and exciting.
@l4dybu9 • 2 years ago
Thank you so much for this video. 💖💖 It makes us feel more confident when we know how to do it from scratch rather than only using libraries ✨
@AssemblyAI • 2 years ago
That's great to hear!
@ayo4590 • 1 year ago
Thank you so much, this video really helped me get started with understanding machine learning algorithms. I would love it if you could do a video on how you would modify the algorithm for multivariate linear regression.
@compilation_exe3821 • 2 years ago
YOU guys are AWWWWESOME
@AssemblyAI • 2 years ago
Thank you :)
@lamluuuc9384 • 1 year ago
I think there is a problem: the multiplication by 2 is missing when calculating dw and db in the .fit() method:
dw = (1/n_samples) * np.dot(X.T, (y_pred-y)) * 2
db = (1/n_samples) * np.sum(y_pred-y) * 2
If we follow the slides it's not absolutely wrong, but it effectively changes the learning rate.
@priyanshumsharma4276 • 11 months ago
No, after derivation, there is no square.
@zh7866 • 11 months ago
The comment suggests multiplying by 2 based on the derivative rule, NOT squaring.
@jhicks2306 • 8 months ago
Agree with this! Also, thanks, Misra, for the awesome video series.
@surajsamal4161 • 6 months ago
@priyanshumsharma4276 He's not talking about the square; he's talking about the 2 that comes down from the square when you take the derivative.
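For reference, the factor of 2 discussed in this thread comes straight from differentiating the mean squared error (a standard derivation, consistent with the slides):

    J(w, b) = (1/n) Σᵢ (ŷᵢ − yᵢ)²,   where ŷᵢ = w·xᵢ + b
    ∂J/∂w = (2/n) Σᵢ xᵢ (ŷᵢ − yᵢ) = (2/n) Xᵀ(ŷ − y)
    ∂J/∂b = (2/n) Σᵢ (ŷᵢ − yᵢ)

So the 2 is genuinely part of the derivative; dropping it does not change which (w, b) minimizes J, it only rescales the gradient (see the learning-rate discussion further down).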
@luis96xd • 2 years ago
Amazing video, I liked the code and the explanations, it was easy to read and understand, thanks! 😁👍👏💯
@AssemblyAI • 2 years ago
Great to hear!
@GeorgeZoto • 1 year ago
Another amazing video! Slight typo in the definition of matrix multiplication and the =dw part, as well as an omission of the constant 2 (which does not affect the calculations much) in the code where you define the gradients, but other than that this is beautiful 😃
@AssemblyAI • 1 year ago
Thanks for that!
@m.bouanane4455 • 8 months ago
The gradient calculation lacks the multiplication by the coefficient 2, I guess.
@sajinsanthosh9536 • 2 years ago
Followed the tutorial exactly, but still got different results. Using the trial version. Thank you*
@AssemblyAI • 2 years ago
You're welcome :)
@KelvinMafurendi • 1 month ago
Great explanations indeed. The transition from theory to implementation in Python was awesome! Say, is this a good starting point for a beginner in Data Science, or should I stick to the out-of-the-box sklearn methods for now?
@1000marcelo1000 • 1 year ago
Amazing video! I learned so much from it! Congrats!!! Could you explain all of this in more detail and show next steps, like where and how this can be applied in real scenarios?
@kreativeworld2073 • 2 years ago
Thank you so much. It really helped me understand the entire concept :)
@AbuAl7sn1 • 3 months ago
Thanks, lady... that was easy.
@parthaguha4705 • 1 year ago
Awesome explanation😇
@AssemblyAI • 1 year ago
Glad you liked it!
@AndroidoAnd • 6 months ago
I am not sure if you copied this code from Patrick Loeber. He has a YouTube video with the same code posted years ago. If you did, please give credit to Patrick. This is the name of his video: Linear Regression in Python - Machine Learning From Scratch 02 - Python Tutorial
@MahmouudTolba • 2 years ago
Your code is clean.
@AssemblyAI • 2 years ago
Thank you!
@akifarslan6022 • 1 year ago
Nice video
@rajesh_ramesh • 2 months ago
The gradient part actually misses the coefficient 2 in both derivatives (dw, db) of y − ŷ.
@mohamedsadeksnoussi8479 • 1 year ago
The content is really insightful, but for dw and db, shouldn't it be (2/n_samples) instead of (1/n_samples)?
@adarshtiwari7395 • 1 year ago
Did you figure out why she used the equations without the 2 in the video?
@mohamedsadeksnoussi8479 • 1 year ago
Hi @adarshtiwari7395, unfortunately no. But (2/n_samples) appears to be correct. I checked other resources and all of them used (2/n_samples). You can even try it yourself: (1/n_samples) doesn't affect the model's behavior (performance), but from my point of view it's incorrect.
@prashantkumar7390 • 1 year ago
@adarshtiwari7395 It doesn't change the actual optimization problem: when the mean squared loss is minimized, any scalar multiple of it is minimized too. Hence using the 2 or not makes no difference at all.
@mohammedabdulhafeezkhan4633 • 1 year ago
@prashantkumar7390 But it is better to use the 2 in the equation, even though it's a constant and doesn't affect the outcome in a major way.
@prashantkumar7390 • 1 year ago
@mohammedabdulhafeezkhan4633 It doesn't affect the outcome AT ALL.
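Both sides of this thread are consistent, and the point can be made precise: for any constant c > 0, argmin c·J(w) = argmin J(w), and in gradient descent the constant is simply absorbed by the learning rate:

    w ← w − η·∇(c·J)(w) = w − (c·η)·∇J(w)

So training with (1/n_samples) and learning rate η behaves exactly like training with (2/n_samples) and learning rate η/2; only the effective step size changes, not the optimum.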
@harry-ce7ub • 7 months ago
Do you not need to np.sum() the result of np.dot(x, (y_pred-y)) in dw as well as multiply by 2?
@rajesh_ramesh • 2 months ago
We have to.
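A quick shape check may help here (a sketch, assuming X has shape (n_samples, n_features)): np.dot already performs the sum over samples, so only the factor of 2 is missing, not an np.sum:

    import numpy as np

    n_samples, n_features = 200, 3
    X = np.random.randn(n_samples, n_features)
    r = np.random.randn(n_samples)          # stands in for y_pred - y

    # np.dot(X.T, r) computes sum_i X[i, j] * r[i] for each feature j,
    # so the summation over samples is built into the dot product.
    dw_dot = np.dot(X.T, r)                 # shape (n_features,)
    dw_sum = np.array([np.sum(X[:, j] * r) for j in range(n_features)])
    print(np.allclose(dw_dot, dw_sum))      # True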
@alexisadr • 1 year ago
Thank you!
@AssemblyAI • 1 year ago
You're welcome!
@KumR • 7 months ago
Wow... Awesome. Do you have something like this for deep learning algorithms?
@rbodavan8771 • 2 years ago
The way you designed your video templates is awesome. But it would be good for learners if you also posted it as a Kaggle notebook and linked the two to each other. Sometimes it works better for me to read first, then watch. Let me know your thoughts.
@AssemblyAI • 2 years ago
You can find the code in our github repo. The link is in the description :)
@MartinHolguin-d3k • 11 months ago
Your explanation is amazing and I see how linear regression works. However, this only works with 1 feature; if you want to use it with more than one, it will fail.
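For what it's worth, the vectorized implementation handles multiple features as-is, because the weights are stored as a vector and np.dot works for any n_features. A small sketch (assuming the LinearRegression class from the lesson, with the lr and n_iters constructor arguments used above):

    import numpy as np

    np.random.seed(0)
    X = np.random.rand(200, 3)                       # 3 features, not 1
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + 4.0 + 0.01 * np.random.randn(200)

    reg = LinearRegression(lr=0.1, n_iters=5000)
    reg.fit(X, y)
    print(reg.weights)   # should land close to [2.0, -1.0, 0.5]
    print(reg.bias)      # should land close to 4.0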
@srinivasanshanmugam9527 • 9 months ago
Thanks for this video. May I know why we need to run a for loop for 1000 iterations?
@omaramil5103 • 1 year ago
Thank you ❤
@AssemblyAI • 1 year ago
You're welcome 😊
@سعیداحمدی-ل1ث • 11 months ago
Keep going, girl, your work is great. Thank you!
@LouisDuran • 6 months ago
What check could be added in case the model is not converging to the best fit within the n_iters value?
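One option (a sketch, not from the video) is to track the training loss and stop early, or raise, when it stops improving or blows up:

    import numpy as np

    def fit_with_checks(X, y, lr=0.001, n_iters=1000, tol=1e-10):
        n_samples, n_features = X.shape
        w, b = np.zeros(n_features), 0.0
        prev_loss = np.inf
        for i in range(n_iters):
            y_pred = X @ w + b
            loss = np.mean((y_pred - y) ** 2)
            if not np.isfinite(loss):                   # diverging
                raise RuntimeError(f"Diverged at iteration {i}; lower lr.")
            if abs(prev_loss - loss) < tol:             # converged early
                break
            prev_loss = loss
            w -= lr * (2 / n_samples) * (X.T @ (y_pred - y))
            b -= lr * (2 / n_samples) * np.sum(y_pred - y)
        return w, b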
@2122345able • 8 months ago
Where is the part where gradient descent is being coded? How does the code know when to stop?
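For context: gradient descent is the for loop inside fit(). As written in the lesson it does not test for convergence at all; it simply stops after n_iters updates, roughly:

    for _ in range(self.n_iters):              # the gradient descent loop
        y_pred = np.dot(X, self.weights) + self.bias
        dw = (1 / n_samples) * np.dot(X.T, (y_pred - y))
        db = (1 / n_samples) * np.sum(y_pred - y)
        self.weights -= self.lr * dw           # step downhill
        self.bias -= self.lr * db
    # ends after exactly n_iters iterations; see the early-stopping
    # sketch above for a convergence-based alternative
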
@okefejoseph6825 • 4 months ago
Is there a way I can get the slides?
@gechimmchanel2917 • 1 year ago
Has anyone encountered the same situation as me? Given a larger dataset with about 4 weights and 200 rows, the result predicts -inf or NaN. Does anyone have a way to fix this?
@gechimmchanel2917 • 1 year ago
[-inf -inf -inf] -inf [-inf -inf -inf] -inf [-inf -inf -inf] -inf : the result of the weights and bias :))
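-inf and NaN weights are the classic symptom of gradient descent diverging. A common fix (a sketch, not from the video) is to standardize the features and/or lower the learning rate:

    import numpy as np

    # assumes X, y and the LinearRegression class from the lesson
    X_std = (X - X.mean(axis=0)) / X.std(axis=0)    # zero mean, unit variance

    reg = LinearRegression(lr=0.001, n_iters=10000)
    reg.fit(X_std, y)    # shrink lr further if it still blows up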
@سعیداحمدی-ل1ث • 10 months ago
Please make more videos.
@tiagosilva856 • 2 years ago
Where did you put the "2" from the mathematical formula for dw and db in the Python code?
@mgreek31 • 2 years ago
The 2 is a scaling factor that can be omitted.
@tiagosilva856 • 2 years ago
@mgreek31 Ty, I understand. But doesn't omitting it change the performance? Is the MSE still the same if we don't omit the "2"?
@mgreek31 • 2 years ago
@tiagosilva856 The MSE will still be the same. Intuitively, the 2 in the formula scales the gradient for every value of x, so removing it affects the whole dataset in the same way, as if nothing happened.
@traagshorts4617 • 1 year ago
How did you match the dimensions of the test data and the weights in the predict function?
@shivamdubey4783 • 2 years ago
Can you please explain why we take X.T, the transpose?
@0xmatriksh • 1 year ago
It is because we are doing a dot product of two matrices here. The two matrices need shapes m×n and n×p: the number of columns in the first must equal the number of rows in the second. Here X has one row per sample, n (the same n as the length of y), so we transpose X to line the dimensions up for the dot product.
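Concretely, with made-up shapes:

    import numpy as np

    X = np.random.randn(200, 3)      # (n_samples, n_features)
    r = np.random.randn(200)         # y_pred - y, shape (n_samples,)

    # X.T is (3, 200); (3, 200) @ (200,) -> (3,): one gradient entry per
    # feature. Without the transpose, (200, 3) @ (200,) raises a shape error.
    dw = X.T @ r
    print(dw.shape)                  # (3,)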
@surendrakakani7014 • 1 year ago
I took input of shape (200, 1). While executing the above program I get y_pred as (200, 200) and dw with shape (1, 200), but dw should be (1, 1), right? Can anybody explain? Is that correct?
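A (200, 200) shape at that point usually means y was loaded as a column vector of shape (200, 1) while y_pred has shape (200,): broadcasting then turns y_pred - y into a (200, 200) matrix, and dw into (1, 200). A sketch of the failure and the fix, under that assumption:

    import numpy as np

    X = np.random.randn(200, 1)
    w = np.zeros(1)
    y_col = np.random.randn(200, 1)      # column vector: trouble

    y_pred = X @ w                       # shape (200,)
    print((y_pred - y_col).shape)        # (200, 200) via broadcasting!

    y = y_col.ravel()                    # fix: flatten y to shape (200,)
    print((y_pred - y).shape)            # (200,)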
@ge_song5 • 3 months ago
What's her name?
@prigithjoseph7018 • 2 years ago
Hi, can you please upload the presentation file as well?
@timuk2008 • 2 months ago
J′(m, b) = [∂J/∂m, ∂J/∂b], where m corresponds to the weight w.
@hiteshmistry413 • 1 year ago
Thank you very much, you made Linear Regression very easy for me. Here is what linear regression training looks like in action: kzbin.info/www/bejne/aILUfZ-VrNWZidE
@hannukoistinen5329 • 10 months ago
Forget Python!!! It's just fashionable right now. R is much, much better!!!
@saideeppandey3788 • 1 month ago
Please note: it's X · w.T (transpose), because w has shape (1, n).