Breaking Linear Regression

8,352 views

CodeEmporium

A day ago

Comments: 34
@Brickkzz 2 years ago
This channel is like an alternative universe where Raj Siraj is actually humble, smart and not a scammer
@CodeEmporium 2 years ago
Many thanks for the compliments :D
@NicholasRenotte 2 years ago
😂 preach
@AI_BotBuilder 2 years ago
Funny seeing Nicholas and CodeEmporium, two of my favs, and the one I hate all discussed in one comment space
@kalpeshmore9499 2 years ago
Thank you, the math gives a clear idea behind the algorithm
@theepicturtleman A year ago
Thank you so much for the video, super easy to understand and cleared up a lot of misconceptions I’ve had about the assumptions of linear regression
@EuforynaUderzeniowa 3 months ago
7:23 The peak of the blue line should be above the intersection of the green and dashed lines.
@adityaj9984 A year ago
If I frame this problem again in the form of least sum of squares, can I incorporate probability into it? If yes, then how? I ask because MLE is purely based on probability and incorporates knowledge of the normal distribution, but in the case of least sum of squares I don't see that happening.
@chirchan 2 years ago
Love your work ❤️. Keep posting more stuff
@CodeEmporium 2 years ago
Thanks so much! I shall :) (new video in 2 days)
@RAHUDAS 2 years ago
Can we do the same MLE for logistic regression? Can you make a tutorial on that?
@CodeEmporium 2 years ago
Yep!
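A minimal sketch of what that looks like, on assumed synthetic data with hand-picked hyperparameters (none of this is from the video): logistic regression has no closed-form MLE, so the Bernoulli log-likelihood is maximized iteratively, here with plain gradient ascent.

```python
# Minimal sketch (assumed synthetic data, not the video's code): MLE for
# logistic regression via gradient ascent on the Bernoulli log-likelihood.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 2.0, size=300)
X = np.column_stack([np.ones_like(x), x])          # intercept + feature
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))    # true params: [0.5, 1.5]
y = rng.binomial(1, p_true)

theta = np.zeros(2)
lr = 0.5                                           # assumed learning rate
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ theta))           # predicted P(y = 1)
    grad = X.T @ (y - p) / len(y)                  # gradient of mean log-likelihood
    theta += lr * grad                             # ascend (we are maximizing)

print(theta)  # should land near [0.5, 1.5]
```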
@RAHUDAS 2 years ago
@@CodeEmporium Sorry, I forgot to thank you for the content. It was really great; I'm learning a lot from your tutorials.
@Raj-gc2rc 2 years ago
Please do math for deep learning and explain why it's mostly trial and error ... how can math guide us in achieving state-of-the-art models, if that's possible ... And how much would the theory of statistics help in actually coming up with good models? Right now I don't know how it can be applied. For example: knowing that stochastic gradient descent gives an unbiased estimator of the gradient ... what does knowing this fact tell us?
@blairnicolle2218 6 months ago
Tremendous!
@user-wr4yl7tx3w 2 years ago
Why is it sometimes called ordinary least squares?
@CodeEmporium 2 years ago
Ordinary least squares (OLS) and maximum likelihood estimation (MLE) are two different techniques for finding the parameters of a linear regression. The former minimizes the sum of squared errors, while the latter maximizes the probability of seeing the data. But in the linear regression case, MLE effectively converges to the OLS equation (with just a little math, shown at the end of the video).
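To make the equivalence concrete, here is a minimal sketch on synthetic data (a toy example, not code from the video): the closed-form OLS solution and a numerical maximization of the Gaussian log-likelihood land on the same slope and intercept.

```python
# Minimal sketch (assumed synthetic data): OLS and Gaussian MLE recover the
# same linear-regression parameters.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 2.5 * x + 1.0 + rng.normal(0, 1.5, size=200)   # true slope 2.5, intercept 1.0

# OLS: closed-form normal equations, theta = (X^T X)^{-1} X^T y
X = np.column_stack([np.ones_like(x), x])
theta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# MLE: numerically maximize the Gaussian log-likelihood
# (i.e. minimize its negative) over intercept, slope, and noise std.
def neg_log_likelihood(params):
    b0, b1, sigma = params
    resid = y - (b0 + b1 * x)
    return 0.5 * len(y) * np.log(2 * np.pi * sigma**2) + np.sum(resid**2) / (2 * sigma**2)

theta_mle = minimize(neg_log_likelihood, x0=[0.0, 1.0, 1.0]).x

print("OLS:", theta_ols)        # ~[1.0, 2.5]
print("MLE:", theta_mle[:2])    # same intercept/slope (plus a sigma estimate)
```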
@chirchan 2 years ago
@@CodeEmporium thank you for the explanation
@Septumsempra8818 2 years ago
Have you guys tried the EconML library by Microsoft for causal machine learning?
@CodeEmporium 2 years ago
I have not. But sounds interesting :)
@user-wr4yl7tx3w 2 years ago
An idea: could you consider a video on the generalized method of moments?
@CodeEmporium 2 years ago
Potentially yes. :) I'll put some more thought into this
@n_128 2 years ago
Why is theta^2 a problem? If it is just a constant, the model can learn that constant. Say you have k^2 = theta and we find theta. The only problem I can see is if the theta in theta^2 changes (like x does), but that's not the case.
@CodeEmporium 2 years ago
Good question, and it gets me thinking. The true / best value of theta is a constant (as you correctly point out). But at training time, we don't know its value. In the eyes of the model during training (in particular, the cost function), theta is a variable. So we can take derivatives with respect to these theta values to find the minima / maxima and get an estimate of the true cost. However, as I pointed out in the video, if you take the derivative with respect to the theta-squared terms, you end up with equations that are not linear (in terms of theta) and hence can't be solved with linear algebra. This is why we can't write a definite statement of the form "the best value for theta is X". Instead, we need to resort to estimation techniques like gradient descent to get an estimate of theta. Hope this clears things up.
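As a concrete toy version of this (a sketch with assumed data and learning rate, not the video's code): fitting y = theta^2 * x, the squared-error cost's derivative is cubic in theta, so instead of solving a linear system we iterate gradient descent.

```python
# Minimal sketch (assumed toy data): the model y = theta^2 * x is non-linear
# in theta, so setting d(cost)/d(theta) = 0 gives a cubic equation, not a
# linear system. We therefore *estimate* theta with gradient descent.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 5.0, size=100)
y = 9.0 * x + rng.normal(0.0, 1.0, size=100)   # true theta = 3 (theta^2 = 9)

theta = 1.0   # initial guess (assumed)
lr = 1e-5     # learning rate (assumed)
for _ in range(5000):
    resid = theta**2 * x - y
    grad = np.sum(2.0 * resid * 2.0 * theta * x)   # chain rule on sum(resid^2)
    theta -= lr * grad

print(theta)  # converges near +3 (note -3 is an equally good minimum)
```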
@javiergonzalezarmas8250 A year ago
This is beautiful
@CodeEmporium A year ago
Thanks so much!
@harryfeng4199 2 years ago
Much appreciated. Thank you
@CodeEmporium 2 years ago
You are very welcome :)
@NeoZondix 2 years ago
This is awesome, thanks
@CodeEmporium 2 years ago
You are very welcome :)
@shivamkaushik6637 2 years ago
Damn! So the MSE I was using was just maximum log-likelihood this whole time.
@CodeEmporium 2 years ago
Yep!
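For reference, a short sketch of why this holds, assuming the standard model y_i = theta^T x_i + eps_i with Gaussian noise eps_i ~ N(0, sigma^2):

```latex
% Sketch: Gaussian MLE reduces to minimizing MSE (standard derivation,
% assuming i.i.d. noise \epsilon_i \sim \mathcal{N}(0, \sigma^2)).
\log L(\theta)
  = -\frac{n}{2}\log\!\left(2\pi\sigma^2\right)
    - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - \theta^{\top} x_i\right)^2
% The first term and the 1/(2\sigma^2) factor do not depend on \theta, so
\arg\max_{\theta} \log L(\theta)
  = \arg\min_{\theta} \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \theta^{\top} x_i\right)^2
  = \arg\min_{\theta} \mathrm{MSE}(\theta)
```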
@louisvuittondonvg9040 2 years ago
Like your thumbnails
@CodeEmporium 2 years ago
Thanks so much! I’m trying out new ideas :)