This channel is like an alternative universe where Raj Siraj is actually humble, smart and not a scammer
@CodeEmporium · 2 years ago
Many thanks for the compliments :D
@NicholasRenotte · 2 years ago
😂 preach
@AI_BotBuilder · 2 years ago
Funny seeing Nicholas and CodeEmporium, two of my favs, and the one I hate all discussed in one comment space
@kalpeshmore9499 · 2 years ago
Thank you, the math gives a clear idea of what's behind the algorithm
@theepicturtleman · 1 year ago
Thank you so much for the video, super easy to understand and cleared up a lot of misconceptions I’ve had about the assumptions of linear regression
@EuforynaUderzeniowa · 3 months ago
7:23 The peak of the blue line should be above the intersection of the green line and the dashed line.
@adityaj9984 · 1 year ago
If I frame this problem again as least sum of squares, can I still incorporate probability into it? If yes, how? I ask because MLE is purely probability-based and builds the Normal distribution into the model, but in the least-sum-of-squares framing I didn't see that happening.
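For reference, one standard way to bring probability into the least-squares framing (a sketch, under the same Gaussian-noise assumption the video uses): model each observation as y_i = theta^T x_i + eps_i with eps_i ~ N(0, sigma^2) i.i.d., so the log-likelihood is

\log \mathcal{L}(\theta) = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - \theta^\top x_i\right)^2

and therefore

\arg\max_{\theta} \log \mathcal{L}(\theta) = \arg\min_{\theta} \sum_{i=1}^{n}\left(y_i - \theta^\top x_i\right)^2 .

Least squares on its own says nothing about probability, but it coincides exactly with the MLE once you assume Gaussian noise; the probabilistic content lives entirely in that noise assumption.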
@chirchan · 2 years ago
Love your work ❤️. Keep posting more stuff
@CodeEmporium · 2 years ago
Thanks so much! I shall :) (new video in 2 days)
@RAHUDAS · 2 years ago
Can we do the same MLE approach for logistic regression? Can you make a tutorial on that?
@CodeEmporium · 2 years ago
Yep!
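Until that tutorial exists, a minimal sketch of the idea (the helper names and the plain gradient-descent loop are illustrative, not from any video): for binary labels y in {0, 1}, MLE maximizes a Bernoulli likelihood, which we do by minimizing its negative log. Unlike linear regression, setting the gradient to zero gives no closed form, so the fit is iterative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_likelihood(theta, X, y):
    # Bernoulli likelihood with p_i = sigmoid(x_i . theta)
    p = sigmoid(X @ theta)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def fit_logistic_mle(X, y, lr=0.1, steps=2000):
    # No closed-form solution exists here, so minimize the
    # negative log-likelihood with plain gradient descent.
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ theta) - y) / len(y)  # averaged gradient of the NLL
        theta -= lr * grad
    return theta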
@RAHUDAS · 2 years ago
@@CodeEmporium Sorry, I forgot to thank you for the content. It was really great; I'm learning a lot from your tutorials
@Raj-gc2rc · 2 years ago
Please do the math for deep learning and explain why it's mostly trial and error. How can math guide us toward state-of-the-art models, if that's even possible? And how much would the theory of statistics help in actually coming up with good models? Right now I don't see how it can be applied. For example, we know that stochastic gradient descent is an unbiased estimator of the gradient; what does knowing this fact tell us?
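On the SGD point specifically, a tiny numeric check of that fact (the synthetic data and the plain squared-error loss are made up for illustration): averaged over many random minibatches, the minibatch gradient matches the full-batch gradient. That is what "unbiased" buys you: each cheap minibatch step follows the true gradient on average.

import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 3)), rng.normal(size=1000)
theta = rng.normal(size=3)

def mse_grad(idx):
    # Gradient of the mean squared error over the rows in idx.
    Xb, yb = X[idx], y[idx]
    return 2 * Xb.T @ (Xb @ theta - yb) / len(idx)

full = mse_grad(np.arange(1000))
# Average the gradient over many random size-32 minibatches.
mini = np.mean([mse_grad(rng.integers(0, 1000, 32)) for _ in range(20000)],
               axis=0)
print(np.abs(full - mini).max())  # close to 0: the minibatch gradient is unbiased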
@blairnicolle2218 · 6 months ago
Tremendous!
@user-wr4yl7tx3w · 2 years ago
Why is it sometimes called ordinary least squares?
@CodeEmporium · 2 years ago
Ordinary Least Squares and Maximum Likelihood Estimation are 2 different techniques for finding the parameters of a linear regression. The former minimizes the sum of squared errors, while the latter maximizes the probability of seeing the data. But in the linear regression case, MLE effectively converges to the OLS equation (just doing a lil math, shown at the end of the video)
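A quick way to see that convergence numerically (a sketch on synthetic data; scipy's generic minimizer stands in for "maximize the probability of seeing the data"):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
X = np.column_stack([np.ones(200), rng.normal(size=200)])  # intercept + one feature
y = X @ np.array([1.5, -2.0]) + rng.normal(scale=0.5, size=200)

# OLS: closed form that minimizes the sum of squared errors.
theta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# MLE: maximize the Gaussian log-likelihood, i.e. minimize its negative.
def neg_log_lik(theta, sigma=0.5):
    resid = y - X @ theta
    return 0.5 * len(y) * np.log(2 * np.pi * sigma**2) + np.sum(resid**2) / (2 * sigma**2)

theta_mle = minimize(neg_log_lik, x0=np.zeros(2)).x
print(theta_ols, theta_mle)  # the two estimates agree up to solver tolerance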
@chirchan · 2 years ago
@@CodeEmporium thank you for the explanation
@Septumsempra8818 · 2 years ago
Have you guys tried the EconML library by Microsoft for causal machine learning?
@CodeEmporium · 2 years ago
I have not. But sounds interesting :)
@user-wr4yl7tx3w · 2 years ago
An idea: could you consider a video on the generalized method of moments?
@CodeEmporium · 2 years ago
Potentially yes. :) I'll put some more thought into this
@n_128 · 2 years ago
Why is theta^2 a problem? If it is just a constant, the model can learn that constant. Say you have k^2 = theta and we find theta. The only problem I can see is if the theta in theta^2 changes (like x does), but that's not the case.
@CodeEmporium · 2 years ago
Good question, and it gets me thinking. The true / best value of theta is a constant (as you correctly point out). But at training time, we don't know its value. In the eyes of the model during training (in particular, the cost function), theta is a variable. So we can take derivatives with respect to these theta values to find the minima / maxima and get an estimate of the true cost. However, as I pointed out in the video, if you take the derivative with respect to the theta squared terms, you'll end up with equations that are not linear (in terms of theta) and hence can't be solved with linear algebra. This is why we can't write a definite statement like "The best value for theta is X". Instead, we need to resort to estimation techniques like gradient descent to get an "estimate" of the value of theta. Hope this clears things up!
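To make the last part concrete, a toy sketch (the model y ≈ theta^2 * x is invented here purely to force a theta-squared term; it is not the model from the video): setting the derivative of the squared-error cost to zero gives an equation that is cubic in theta, so instead of solving it with linear algebra we estimate theta iteratively.

import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(1, 5, size=100)
y = 4.0 * x + rng.normal(scale=0.1, size=100)  # true theta = 2, since theta**2 = 4

theta, lr = 1.0, 1e-3  # initial guess and step size
for _ in range(5000):
    # d/d(theta) of mean((y - theta**2 * x)**2); the stationarity
    # condition is cubic in theta, hence no closed-form solution.
    grad = np.mean(-4 * theta * x * (y - theta**2 * x))
    theta -= lr * grad
print(theta)  # converges near 2.0: an estimate, not a closed-form answer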
@javiergonzalezarmas8250 · 1 year ago
This is beautiful
@CodeEmporium · 1 year ago
Thanks so much!
@harryfeng4199 · 2 years ago
Much appreciated. Thank u
@CodeEmporium · 2 years ago
You are very welcome :)
@NeoZondix · 2 years ago
This is awesome, thanks
@CodeEmporium · 2 years ago
You are very welcome :)
@shivamkaushik6637 · 2 years ago
Damn! So the MSE I was using was just maximum log-likelihood this whole time.