Linear Regression Algorithm In Python From Scratch [Machine Learning Tutorial]

31,070 views

Dataquest

1 day ago

Comments: 49
@ninobach7456 11 months ago
I recommend this video for those who understand the general concept of linear regression, but want to know what happens 'under the hood'
@namrata_roy 2 years ago
Amazing tutorial. Difficult concepts were explained with such ease. Kudos team Dataquest!
@sulaimansalisu5833 2 years ago
Very explicit. You are a wonderful teacher. Thanks so much
@Mara51029 10 months ago
This is absolutely amazing and great video. I can’t wait to see more great work
@datosilustrados 2 years ago
This is a great tutorial. Beautifully explained.
@HIEUHUYNHUC 1 year ago
Today you will be my teacher. I'm from Vietnam. Thank you so much
@anfedoro 1 year ago
Great and very clear explanation. The only point missed in the end is the regression visualisation 😉. Nice to have both initial data and the regression plotted
@ycombine1053 2 years ago
"if you only enter one athlete, the most medals you can win is one" - Michael Phelps has entered the chat.
@zheshipeng 2 years ago
Thanks so much. Better than any e-book 🙂
@fassstar 8 months ago
One correction, not relevant to the actual regression, but it should be said nonetheless. The number of medals one athlete can win is not limited to one; rather, it is limited to the number of events the athlete competes in (a maximum of one per event). In fact, numerous athletes have won multiple medals in one Olympics. Just wanted to clarify that. Of course, beyond a certain number of athletes, it will be impossible for a smaller team to compete in as many events as the larger team, making it more likely that the larger team wins more medals.
@BTStechnicalchannel 2 years ago
Very well explained!!!
@JohnJustus 10 months ago
Perfect, thanks a lot
@learn-with-lee 2 years ago
Thank you. It was well explained.
@guilhermesaraiva3846 1 year ago
Thanks for the lesson, but just a question: during modeling, the split into X/y train and X/y test sets was not made. Why would it not be necessary, and if it is necessary, how would it be done? Thanks
@cclementson1986 1 year ago
Is there a reason you chose to implement the normal equation over gradient descent? I'm quite curious as I am more familiar with gradient descent.
@iamgarriTech 1 year ago
Why do we need to add those "1"s when solving the matrix?
@jeanb2682 1 year ago
Hey, that is a great, beautiful demonstration of linear regression. Thank you. But I didn't understand where prev_medals comes from when building the X matrix at the beginning. Can someone explain to me how those values appear inside the X matrix?
@television80 1 year ago
Hi Vikas, which is better for GLM models in Python: the sklearn or statsmodels package?
@xSparkyX188 1 year ago
Do you have an example like this with multiple x-values or features?
@AndresIniestaLujain 2 years ago
Would the solution for B be considered a least squares solution? Also, if we wanted to construct, say, a 95% confidence interval for each coefficient, would we take B for intercept, athletes, and prev_medals (-1.96, 0.07, 0.73) and multiply them by their respective standard errors and t-scores? Would the formula be as follows: B(k) * t(n-k-1, alpha = 0.05/2) * SE(B(k)), or does this require more linear algebra? Great tutorial btw, thanks for the help.
@abidson690 2 years ago
Thanks so much for the video
@bomidilakshmimadhavan9501 2 years ago
Can you please make a video demonstrating multivariate regression analysis with the following information taken into consideration? It performs multiple linear regression trend analysis of an arbitrary time series. OPTIONAL: error analysis for the regression coefficients (uses a standard multivariate noise model).

Form of the general regression trend model used in this procedure (t = time index = 0, 1, 2, 3, ..., N-1):

T(t) = ALPHA(t) + BETA(t)*t + GAMMA(t)*QBO(t) + DELTA(t)*SOLAR(t) + EPS1(t)*EXTRA1(t) + EPS2(t)*EXTRA2(t) + RESIDUAL_FIT(t),

where ALPHA represents the 12-month seasonal fit, BETA is the 12-month seasonal trend coefficient, RESIDUAL_FIT(t) represents the error time series, and GAMMA, DELTA, EPS1, and EPS2 are 12-month coefficients corresponding to the ozone driving quantities QBO (quasi-biennial oscillation), SOLAR (solar-UV proxy), and proxies EXTRA1 and EXTRA2 (for example, these latter two might be ENSO, vorticity, geopotential heights, or temperature), respectively.

The general model above assumes simple linear relationships between T(t) and the surrogates, which is hopefully valid as a first approximation. Note that for total ozone trends based on chemical species such as chlorine, the trend term BETA(t)*t could be replaced (ignored by setting m2=0 in the procedure call) with EPS1(t)*EXTRA1(t), where EXTRA1(t) is the chemical proxy time series.

This procedure assumes the following form for the coefficients (ALPHA, BETA, GAMMA, ...) in an effort to approximate realistic seasonal dependence of the sensitivity between T(t) and each surrogate. The expansion shown below is for ALPHA(t); similar expansions apply for BETA(t), GAMMA(t), DELTA(t), EPS1(t), and EPS2(t): ALPHA(t) = A0
@oluwamuyiwaakerele4287 2 years ago
Hi, this is a wonderful explanation. Great job putting this together. The only thing that really confuses me is how you factor in previous medals in the predictive model. What would that look like in the linear equation at 1:54?
@Dataquestio 2 years ago
You would add a second term b2x2, so the full equation would be b0 + b1x1 + b2x2. x1 would be athletes, x2 is previous medals. Then you'd have separate coefficients (b1 and b2) for each.
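A minimal NumPy sketch of that two-feature setup, using the normal equation from the video (the numbers below are made up for illustration, not the video's Olympics data):

```python
import numpy as np

# Made-up toy data for illustration (not the video's Olympics dataset)
athletes = np.array([10.0, 50.0, 120.0, 300.0])   # x1
prev_medals = np.array([0.0, 2.0, 5.0, 20.0])     # x2
medals = np.array([1.0, 3.0, 7.0, 25.0])          # y

# Design matrix: a column of ones for b0, then one column per feature
X = np.column_stack([np.ones_like(athletes), athletes, prev_medals])

# Normal equation: B = (X^T X)^-1 X^T y gives [b0, b1, b2]
B = np.linalg.inv(X.T @ X) @ X.T @ medals
predictions = X @ B
```

Each extra feature just adds one more column to X and one more entry to B; the solving step itself does not change.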
@hameedhhameed1996 2 years ago
It is such a fantastic explanation of linear regression. My question is, is there any possibility that we can't obtain the inverse of the matrix X?
@Dataquestio 2 years ago
Hi Hameed - yes, some matrices are singular, and cannot be inverted. This happens when columns or rows are linear combinations of each other. In those cases, ridge regression is a good alternative. Here is a ridge regression explanation - kzbin.info/www/bejne/o6HYfIalq99srq8 .
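A small sketch of both situations described in this reply; the matrix and the penalty value are made up for illustration:

```python
import numpy as np

# Columns are linearly dependent (second = 2 * first), so X^T X is singular
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
y = np.array([1.0, 2.0, 3.0])

try:
    B = np.linalg.inv(X.T @ X) @ X.T @ y
    singular = False
except np.linalg.LinAlgError:
    singular = True          # inversion fails for a singular matrix

# Ridge regression adds alpha * I to the diagonal, which restores
# invertibility: B = (X^T X + alpha * I)^-1 X^T y  (alpha chosen arbitrarily)
alpha = 0.1
B_ridge = np.linalg.inv(X.T @ X + alpha * np.eye(2)) @ X.T @ y
```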
@im4485 1 year ago
This guy is old, young, sleepy and awake all at the same time.
@sunilnavadia6347 2 years ago
Hi Team... Very well explained Linear Regression from scratch... Do you have any video for Ridge Regression from Scratch using Python?
@Dataquestio 2 years ago
Hi Sunil - we don't. I'll look into doing ridge regression in a future video! -Vik
@sunilnavadia8203 2 years ago
@@Dataquestio Thank you
@manyes7577 2 years ago
@@Dataquestio Thanks, you are awesome
@abidson690 2 years ago
@@Dataquestio thanks
@josuecurtonavarro8979 2 years ago
Hi guys! Very interesting indeed! There is one thing I don't understand, though. The identity matrix, as you mentioned, behaves like a one in matrix multiplication when you multiply it with a matrix of the same size. But in this precise case (around 13:08) the matrix B doesn't have the same size. So how come you can eliminate the identity matrix from the equation here? Thanks!
@Dataquestio 2 years ago
Hi Josué - I shouldn't have said "of the same size". Multiplying the identity matrix by another matrix behaves like normal matrix multiplication. So if the identity matrix (I) is 2x2, and you multiply by a 2x1 matrix B, you end up with a 2x1 matrix (equal to B). The number of columns in the first matrix you multiply has to match the number of rows in the second matrix. And the final matrix has the same row count as the first matrix, and the same column count as the second matrix.
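Those shape rules can be verified in a couple of lines of NumPy:

```python
import numpy as np

I = np.eye(2)                  # 2x2 identity matrix
B = np.array([[3.0],
              [5.0]])          # 2x1 column vector

# (2x2) @ (2x1) -> 2x1: the columns of I match the rows of B,
# and multiplying by the identity just returns B
result = I @ B
```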
@AlMar-j5l 11 months ago
THANKS A LOT 🤯
@dembobademboba6924 1 year ago
great job
@yousif_alyousifi 2 years ago
Thank you for this video. Could you please share the ppt slides of this lesson?
@Dataquestio 2 years ago
Hi Yousif - this was done using video animations, so there aren't any powerpoint slides, unfortunately. -Vik
@sunilnavadia8203 2 years ago
In the predictions we got values like 0.24, -1.6, and -1.39, so can you explain whether -1.6 medals is valid? Or do I need to use some other dataset to perform regression, like house price prediction? Can you suggest a dataset on which I can apply ridge regression?
@Dataquestio 2 years ago
Hi Sunil - with the way linear regression works, you can get numbers that don't make sense with the dataset. The best thing to do is to truncate the range (anything below 0 gets set to 0). Other algorithms that don't make assumptions about linearity can avoid this problem (like decision trees, k-nn, etc).
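The truncation step might look like this, using the prediction values quoted in this thread:

```python
import numpy as np

# Example predictions like the ones discussed above
predictions = np.array([0.24, -1.6, -1.39, 3.2])

# Truncate the range: anything below 0 gets set to 0,
# then round to whole medals
truncated = np.clip(predictions, 0, None)
rounded = np.round(truncated)
```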
@sunilnavadia8203 2 years ago
@@Dataquestio Thank you for your message. Since the predictions for this data are decimals (medals should be whole numbers), do you have any suggestions for another dataset on which I can make predictions that make sense using ridge regression?
@borutamena8207 2 years ago
Thanks, sir
@gabijakielaite3179 2 years ago
I am wondering, is it okay to have a model which predicts a country to receive a negative number of medals? Isn't that just impossible?
@Dataquestio 2 years ago
This is one of the weaknesses of linear regression. Due to the y-intercept term, you can get predictions that don't make sense in the real world. An easy solution is to replace negative predictions with 0.
@adityakakade9172 1 year ago
I'd rather use statsmodels than this method, which makes things complex.
@oluwamuyiwaakerele4287 2 years ago
I guess another question I have is how to invert a matrix
@Dataquestio 2 years ago
Hi Oluwamuyiwa - there are a few ways to invert a matrix. The easiest to do by hand is Gaussian elimination - en.wikipedia.org/wiki/Gaussian_elimination . That said, there isn't a lot of benefit to knowing how to invert a matrix by hand, so I wouldn't worry too much about it.
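In practice you would let NumPy do the elimination. A quick sanity check that an inverse is correct (the matrix here is arbitrary, chosen only so the inverse comes out to round numbers):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])    # arbitrary invertible 2x2 matrix

A_inv = np.linalg.inv(A)      # NumPy performs the elimination internally

# A matrix multiplied by its inverse should give the identity matrix
check = A @ A_inv
```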
@HIEUHUYNHUC 1 year ago
Sorry teacher, I guess you confused SSR with SSE: R2 = 1 - (SSE/SST) = SSR/SST.
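A quick numeric check of those definitions (with hand-made predictions; note that SSR/SST equals 1 - SSE/SST only for an actual least-squares fit with an intercept, so the two differ slightly here):

```python
import numpy as np

# Hand-made values for illustration
y = np.array([1.0, 3.0, 5.0, 7.0])
predictions = np.array([1.2, 2.8, 5.1, 6.9])

sst = np.sum((y - y.mean()) ** 2)            # total sum of squares
sse = np.sum((y - predictions) ** 2)         # sum of squared errors (residuals)
ssr = np.sum((predictions - y.mean()) ** 2)  # regression sum of squares

# R^2 = 1 - SSE/SST
r2 = 1 - sse / sst
```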
@izzathawari5025 26 days ago
After finishing the tutorial, I want to plot the regression line on the athletes-vs-medals plot. What parameters can I change to make the line better?

predictions = X @ B
# plt.plot(y, predictions, 'ro')
plt.plot(X[["athletes"]], y, 'o')
plt.plot(X[["athletes"]], np.dot(X[["athletes"]], B.loc["athletes"]) + B.loc["intercept"].values)
plt.xlabel("athlete")
plt.ylabel("medals")
plt.show()