I rarely bother commenting because I'm rarely impressed. This video was amazing. I love that you're showing theory and OOP. Usually I see basic definitions and code all in one script.
@AssemblyAI (11 months ago)
Wow, thank you!
@Artificial_Intelligence_AI (2 years ago)
This channel is amazing ❤. This is the type of content a lot of instructors forget to teach you when you're learning ML, but this girl explains everything very well from scratch. Congratulations on your content, and I hope to watch more of your videos. You deserve more views for this incredible job.
@LouisDuran (1 year ago)
Woman
@afizs (2 years ago)
I have used linear regression many times, but never implemented it from scratch. Thanks for an awesome video. Waiting for the next one.
@AssemblyAI (2 years ago)
That's great to hear Afiz!
@ryanliu93 (1 year ago)
I think there is a problem: the multiplication by 2 is missing when calculating dw and db in the .fit() method. It should be dw = (1/n_samples) * np.dot(X.T, (y_pred - y)) * 2 and db = (1/n_samples) * np.sum(y_pred - y) * 2. If we follow the slides it's not absolutely wrong, but it can affect the learning rate.
@priyanshumsharma4276 (1 year ago)
No, after derivation, there is no square.
@zh7866 (1 year ago)
The comment suggests multiplying by 2 based on the derivative rule, NOT squaring.
@jhicks2306 (10 months ago)
Agree with this! Also, thanks, Misra, for the awesome video series.
@surajsamal4161 (9 months ago)
@priyanshumsharma4276 He's not talking about the square; he's talking about the 2 that comes down because of the derivative.
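For readers following this thread, here is a minimal sketch (my own toy example, assuming a NumPy implementation like the one in the video) showing that the factor of 2 from differentiating the squared error only rescales the gradient, which is equivalent to scaling the learning rate:

```python
import numpy as np

# Toy data: y = 3x plus a little noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3 * X[:, 0] + rng.normal(scale=0.1, size=100)

n_samples = X.shape[0]
w = np.zeros(X.shape[1])
b = 0.0
y_pred = np.dot(X, w) + b

# Gradient with the factor of 2 dropped (as in the video's code) ...
dw_video = (1 / n_samples) * np.dot(X.T, (y_pred - y))
# ... and the exact derivative of MSE = (1/n) * sum((y_pred - y)**2)
dw_exact = (2 / n_samples) * np.dot(X.T, (y_pred - y))

# The two differ only by the constant 2, so e.g. lr=0.02 with the
# video's gradient takes exactly the same step as lr=0.01 with the
# exact gradient: the constant folds into the learning rate.
assert np.allclose(2 * dw_video, dw_exact)
```

So both commenters are right in a sense: the 2 belongs in the exact derivative, but dropping it only rescales the effective learning rate.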
@bendirval3612 (2 years ago)
Wow, that really was from scratch. And the hardest way possible. But it's perfect for teaching Python. Thanks!
@AssemblyAI (2 years ago)
You're very welcome :)
@MU_2000 (6 months ago)
Thank you for this video. Before I watched it I spent a couple of days trying to understand how to write custom code without frameworks ))
@markkirby2543 (1 year ago)
This is amazing. Thank you so much for all your clear explanations. You really know your stuff, and you make learning this complex material fun and exciting.
@haseebmuhammad5145 (4 months ago)
I was looking for lectures like these. They are awesome.
@ayo4590 (1 year ago)
Thank you so much, this video really helped me get started with understanding machine learning algorithms. I would love it if you could do a video on how you would modify the algorithm for multivariate linear regression.
@l4dybu9 (2 years ago)
Thank you so much for this video. 💖💖 It makes us feel more confident when we know how to do it from scratch instead of just using libraries ✨
@AssemblyAI (2 years ago)
That's great to hear!
@KelvinMafurendi (4 months ago)
Great explanations indeed. The transition from theory to implementation in Python was awesome! Say, is this a good starting point for a beginner in Data Science, or should I stick to the out-of-the-box sklearn methods for now?
@kamranzamanni6826 (2 months ago)
Thanks for making it easy to follow along!
@GeorgeZoto (2 years ago)
Another amazing video! Slight typo in the definition of matrix multiplication and the =dw part, as well as an omission of the constant 2 (which does not affect calculations much) in the code where you define the gradients, but other than that this is beautiful 😃
@AssemblyAI (2 years ago)
Thanks for that!
@1000marcelo1000 (2 years ago)
Amazing video! I learned so much from it! Congrats!!! Could you explain all of this in more detail and show next steps, like where and how this can be applied in real scenarios?
@harry-ce7ub (10 months ago)
Do you not need to np.sum() the result of np.dot(X.T, (y_pred-y)) in dw, as well as multiply by 2?
@rajesh_ramesh (5 months ago)
We have to.
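On the np.sum() part of the question: `np.dot` with the transposed matrix already performs the summation over samples, so no extra `np.sum()` is needed for dw; only the factor of 2 is debatable. A small self-contained check (my own toy numbers):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])        # 3 samples, 2 features
r = np.array([0.5, -1.0, 2.0])    # residuals y_pred - y

# np.dot(X.T, r) multiplies each feature column by the residuals and
# sums over the sample axis, so the gradient's summation is built in.
via_dot = np.dot(X.T, r)          # → [7.5, 9.0]
via_loop = np.array([sum(X[i, j] * r[i] for i in range(3))
                     for j in range(2)])

assert np.allclose(via_dot, via_loop)
```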
@luis96xd (2 years ago)
Amazing video, I liked the code and the explanations, it was easy to read and understand, thanks! 😁👍👏💯
@AssemblyAI (2 years ago)
Great to hear!
@compilation_exe3821 (2 years ago)
YOU guys are AWWWWESOME
@AssemblyAI (2 years ago)
Thank you :)
@mohamedsadeksnoussi8479 (2 years ago)
The content is really insightful, but for dw and db, should it be (2/n_samples) instead of (1/n_samples) ?
@adarshtiwari7395 (2 years ago)
Did you figure out why she used the equations without the 2 in the video?
@mohamedsadeksnoussi8479 (2 years ago)
Hi @adarshtiwari7395, unfortunately no. But (2/n_samples) appears to be correct. I checked other resources and all of them used (2/n_samples). You can even try it yourself: (1/n_samples) doesn't affect the model's behavior (performance), but from my point of view it's incorrect.
@prashantkumar7390 (1 year ago)
@adarshtiwari7395 It doesn't change the actual optimisation problem; I mean, wherever mean_square_loss is minimised, any scalar*mean_square_loss is minimised too. Hence using the 2 or not makes no difference at all.
@mohammedabdulhafeezkhan4633 (1 year ago)
@prashantkumar7390 But it is better to use the 2 in the equation, even though it's a constant and doesn't affect the outcome in a major way.
@prashantkumar7390 (1 year ago)
@mohammedabdulhafeezkhan4633 It doesn't affect the outcome AT ALL.
@1Rashiid (3 months ago)
Perfect explanation! Thank you.
@LouisDuran (9 months ago)
What check could be added for the case where the model has not converged to the best fit within n_iters iterations?
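One common pattern (a sketch of my own, not code from the video): track the loss each iteration and stop when the improvement drops below a tolerance, reporting whether convergence happened within n_iters. The function name and tolerance value are my own choices:

```python
import numpy as np

def fit_with_convergence_check(X, y, lr=0.01, n_iters=1000, tol=1e-8):
    """Gradient-descent linear regression that reports convergence."""
    n_samples, n_features = X.shape
    w, b = np.zeros(n_features), 0.0
    prev_loss = np.inf
    for _ in range(n_iters):
        y_pred = X @ w + b
        loss = np.mean((y_pred - y) ** 2)
        if abs(prev_loss - loss) < tol:   # improvement stalled: converged
            return w, b, True
        prev_loss = loss
        dw = (2 / n_samples) * X.T @ (y_pred - y)
        db = (2 / n_samples) * np.sum(y_pred - y)
        w -= lr * dw
        b -= lr * db
    return w, b, False  # ran out of iterations without converging

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1))
y = 4 * X[:, 0] + 2
w, b, converged = fit_with_convergence_check(X, y, lr=0.1, n_iters=5000)
```

If `converged` comes back False, you can raise a warning, increase n_iters, or adjust the learning rate.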
@srinivasanshanmugam9527 (11 months ago)
Thanks for this video. May I know why we need to run a for loop for 1000 iterations?
@KumR (10 months ago)
Wow... Awesome. Do you have something like this for deep learning algorithms?
@sajinsanthosh9536 (2 years ago)
Followed the tutorial exactly, but still got a different result. Using the trial version. Thank you*
@AssemblyAI (2 years ago)
You're welcome :)
@kreativeworld2073 (2 years ago)
Thank you so much. It really helped me understand the entire concept :)
@tiagosilva856 (2 years ago)
Where did you put the "2" from the mathematical formula for dw and db in the Python code?
@chidubem31 (2 years ago)
The 2 is a scaling factor that can be omitted.
@tiagosilva856 (2 years ago)
@chidubem31 Ty, I understand. But doesn't omitting it change the performance? Is the MSE still the same if we don't omit the "2"?
@chidubem31 (2 years ago)
@tiagosilva856 The MSE will still be the same. Intuitively, the 2 scales the gradient by the same constant for every sample, so removing it affects the whole dataset uniformly, as if nothing happened.
@JweJwe-youtube (13 days ago)
Thanks for this perfect video! When I tried this code, the learning rate (0.001) was too small to fit the ground-truth weight (which was about 75.00). Looking at the visualization plot I could estimate the weight manually (I guessed it would be between 50 and 100). In the end, a learning rate of 1 worked far better than 0.001. The ranges of y and X are about -200 to 200 and -3 to 3, respectively. So I thought I could: 1. normalize X and y into the range -1 to 1, which makes it easy to find the weight and bias without retuning the learning rate (e.g. from 0.001 to 1, as I did); 2. fit the model; 3. using the statistics from step 1, de-normalize the model's output. My real question is: which is more common, finding an appropriate initial learning rate and bias value, OR de-normalizing the model after normalizing the ranges of X and y? Sorry for my English writing skills :)
@JweJwe-youtube (13 days ago)
If I rescale the coordinate plane, the prediction procedure becomes more complicated: x_test has to be normalized in order to use the normalized model, and only then is the de-normalization meaningful. It may also be a problem in terms of post-processing y_pred because of the de-normalization step.
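In practice the second approach is the usual one: standardize the features (and optionally the target) with the training statistics, fit with a stock learning rate, and map predictions back. A sketch under those assumptions, with made-up data mimicking the ranges described above (X roughly -3 to 3, y roughly -200 to 200, true weight near 75):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(200, 1))             # features roughly in [-3, 3]
y = 75 * X[:, 0] + rng.normal(scale=5, size=200)  # targets roughly in [-200, 200]

# Standardize to zero mean / unit variance so lr=0.01 works out of the box.
X_mean, X_std = X.mean(axis=0), X.std(axis=0)
y_mean, y_std = y.mean(), y.std()
Xs = (X - X_mean) / X_std
ys = (y - y_mean) / y_std

n = Xs.shape[0]
w, b = np.zeros(Xs.shape[1]), 0.0
for _ in range(2000):
    y_pred = Xs @ w + b
    w -= 0.01 * (2 / n) * Xs.T @ (y_pred - ys)
    b -= 0.01 * (2 / n) * np.sum(y_pred - ys)

# To predict on new data: standardize x with the TRAINING statistics,
# then un-standardize the output.
def predict(x_new):
    xs = (x_new - X_mean) / X_std
    return (xs @ w + b) * y_std + y_mean

print(predict(np.array([[1.0]])))  # approximately 75
```

The key detail is exactly the one raised above: the same training-set mean and standard deviation must be reused for x_test and for de-normalizing y_pred.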
@m.bouanane4455 (11 months ago)
The gradient calculation lacks the multiplication by the coefficient 2, I guess.
@2122345able (11 months ago)
Where is the part where gradient descent is coded? How does the code know when to stop?
@rajesh_ramesh (5 months ago)
The gradient part actually misses the coefficient 2 in both differentiations (dw, db) of y - ŷ.
@gechimmchanel2917 (1 year ago)
Has anyone encountered the same situation as me: given a larger dataset, with about 4 weights and 200 rows, the result predicts -inf or NaN. Does anyone have a way to fix this?
@gechimmchanel2917 (1 year ago)
Result of weight and bias: [-inf -inf -inf] -inf [-inf -inf -inf] -inf [-inf -inf -inf] -inf :))
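-inf/NaN weights are the classic symptom of a diverging learning rate: with large, unscaled features the updates overshoot and grow until they overflow. A minimal reproduction and fix sketch (toy data of my own, reimplementing the same update rule):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(0, 100, size=(200, 4))     # large, unscaled features
y = X @ np.array([1.0, 2.0, 3.0, 4.0])

def fit(X, y, lr, n_iters=200):
    n, k = X.shape
    w, b = np.zeros(k), 0.0
    for _ in range(n_iters):
        y_pred = X @ w + b
        w -= lr * (2 / n) * X.T @ (y_pred - y)
        b -= lr * (2 / n) * np.sum(y_pred - y)
    return w

# Same lr=0.01: diverges on raw features, stays finite on scaled ones.
w_bad = fit(X, y, lr=0.01)                          # blows up to inf/NaN
w_ok = fit((X - X.mean(0)) / X.std(0), y, lr=0.01)  # stays finite

assert not np.all(np.isfinite(w_bad))
assert np.all(np.isfinite(w_ok))
```

So the usual fixes are to scale the features or to shrink the learning rate until the loss decreases monotonically.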
@parthaguha4705 (1 year ago)
Awesome explanation 😇
@AssemblyAI (1 year ago)
Glad you liked it!
@okefejoseph6825 (7 months ago)
Is there a way I can get the slides?
@traagshorts4617 (2 years ago)
How did you fit the dimensions of the test data and the weights in the predict function?
@surendrakakani7014 (1 year ago)
I have taken input of shape (200, 1); while executing the above program I get y_pred with shape (200, 200) and dw with shape (1, 200), but dw should be (1, 1), right? Can anybody explain whether that is correct?
@AbuAl7sn1 (6 months ago)
Thanks, lady... that was easy.
@rbodavan8771 (2 years ago)
The way you designed your video templates is awesome. But it would be good for learners if you also posted it as a Kaggle notebook and linked them to each other. Sometimes it works better for me to read than to watch. Let me know your thoughts.
@AssemblyAI (2 years ago)
You can find the code in our github repo. The link is in the description :)
@MahmouudTolba (2 years ago)
Your code is clean.
@AssemblyAI (2 years ago)
Thank you!
@shivamdubey4783 (2 years ago)
Can you please explain why we take the transpose X.T?
@0xmatriksh (2 years ago)
It's because we are taking the dot product of two matrices here. The two matrices need to have shapes m×n and n×p, i.e. the number of columns in the first matrix must equal the number of rows in the second. Here X has n rows (the same n as the length of y), so we transpose X to move n into the column position, which makes the shapes line up for the dot product.
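To make those shapes concrete, here is a small sketch (my own toy numbers, assuming the video's variable names):

```python
import numpy as np

X = np.ones((200, 3))          # 200 samples, 3 features
y = np.zeros(200)
y_pred = np.ones(200)
residual = y_pred - y          # shape (200,)

# (3, 200) @ (200,) -> (3,): one gradient component per feature.
dw = np.dot(X.T, residual) / X.shape[0]
assert dw.shape == (3,)

# Without the transpose the inner dimensions don't match:
# np.dot(X, residual) would need X's columns (3) to equal 200,
# so NumPy raises a shape-mismatch error.
```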
@AndroidoAnd (9 months ago)
I am not sure if you copied this code from Patrick Loeber. He has a YouTube video with the same code posted years ago. If you did, please give credit to Patrick. This is the name of his video: Linear Regression in Python - Machine Learning From Scratch 02 - Python Tutorial
@alexisadr (1 year ago)
Thank you!
@AssemblyAI (1 year ago)
You're welcome!
@omaramil5103 (1 year ago)
Thank you ❤
@AssemblyAI (1 year ago)
You're welcome 😊
@akifarslan6022 (1 year ago)
Nice video
@سعیداحمدی-ل1ث (1 year ago)
Keep going, girl, your work is great. Thank you!
@MartinHolguin-d3k (1 year ago)
Your explanation is amazing, and I see how linear regression works; however, this only works with 1 feature, and if you want to implement it with more than one it will fail.
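For what it's worth, the vectorized update rule shown in the video should handle multiple features unchanged, since w is already a vector with one entry per feature. A quick sketch checking that claim (my own reimplementation of the same update rule, not the video's exact code):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3))                 # 3 features this time
true_w, true_b = np.array([1.5, -2.0, 0.5]), 4.0
y = X @ true_w + true_b

# Same update rule as the single-feature case; nothing changes,
# because w is an (n_features,) vector and X.T @ residual gives
# one gradient entry per feature.
n, k = X.shape
w, b = np.zeros(k), 0.0
for _ in range(3000):
    y_pred = X @ w + b
    w -= 0.05 * (2 / n) * X.T @ (y_pred - y)
    b -= 0.05 * (2 / n) * np.sum(y_pred - y)
```

If a multi-feature run fails, the likely culprits are passing a 2-D y, mismatched shapes, or unscaled features, rather than the algorithm itself.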
@prigithjoseph7018 (2 years ago)
Hi, can you please upload the presentation file as well?
@jinxiong2973 (2 years ago)
How do I improve my attention span? I have good ideas and software that I think up; the problem is putting it down in Fruity Loops and knowing...
@Salma30b (2 months ago)
Why don't our teachers teach us like this?? Like, 15 minutes of this video is all it took to understand hours of lecturing.
@سعیداحمدی-ل1ث (1 year ago)
Please make more videos.
@ge_song5 (5 months ago)
What's her name?
@LegendJonny-bq6td (8 hours ago)
It's written in the title: "with Python" 😅
@hiteshmistry413 (1 year ago)
Thank you very much, you made linear regression very easy for me. Here is what linear regression training looks like in action: "kzbin.info/www/bejne/aILUfZ-VrNWZidE"
@timuk2008 (4 months ago)
J'(m, b) = [dJ/dm  dJ/db] ===> m is the w
@hannukoistinen5329 (1 year ago)
Forget Python!!! It's just fashionable right now. R is much, much better!!!