Hi Ben. In the 'deriving least squares estimators' videos you ended up with the numerator for beta hat being: summation(xi - xbar)(yi-ybar). But here you've dropped the ybar from the numerator. Does this matter?
@NhatLinhNguyen82 9 years ago
+Matt Sharp In one of the previous videos, he derived algebraically that sum (xi - xbar)(yi - ybar) = sum (xi - xbar) yi = sum xi (yi - ybar), so dropping the ybar doesn't change anything.
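For anyone who wants the one-line algebra behind that identity (this is the standard argument, not a quote from the video): ybar is a constant and the deviations of x from its mean sum to zero, so

\sum_i (x_i - \bar{x})(y_i - \bar{y}) = \sum_i (x_i - \bar{x})\,y_i - \bar{y}\sum_i (x_i - \bar{x}) = \sum_i (x_i - \bar{x})\,y_i .

Applying the same trick to the other factor (using \sum_i (y_i - \bar{y}) = 0) gives \sum_i x_i (y_i - \bar{y}).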
@JamesChengJing 5 years ago
@@NhatLinhNguyen82 which video? Thanks
@HaineGratuite 5 years ago
"deriving least square estimators", part 2 I think @@JamesChengJing
@LordDockerton 9 months ago
@@HaineGratuite thank you!
@mariustu 10 years ago
Ben. Great initiative. One question/comment: should the Sxx you use for the denominator actually be Sxx² (squared)?
@akashp01 2 years ago
He has written the variance as Sxx as far back as Gauss-Markov proof 1. Why does he do it? Who knows. Half of these problems would disappear if people agreed on a common way of doing things. I wish he had stuck to a common convention, reserving Sxx for the standard deviation (or something like that) and Sxx² for the variance; it makes instant sense in a rational world.
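For context, the convention many textbooks use (and which seems to be the one in these videos, though that is my reading rather than something stated in this thread) defines the sums of squares directly:

S_{xx} = \sum_i (x_i - \bar{x})^2, \qquad S_{xy} = \sum_i (x_i - \bar{x})(y_i - \bar{y}),

so the slope estimator is written \hat{\beta} = S_{xy}/S_{xx} with no extra square on the denominator, because S_{xx} already contains the squared deviations.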
@braziliankew 7 years ago
PLEASE SAVE MY LIFE! In a multiple linear regression model with two or more regressors (and an intercept), describe a two-step "partialling out" method by which you can obtain the OLS estimator of the first slope coefficient. State your arguments for why this two-step method gives the same result as running the full multiple regression on all the regressors.
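Not an answer from the video, but here is a minimal numerical sketch of the partialling-out idea (the Frisch-Waugh-Lovell result). The simulated data and variable names are made up purely for illustration; it only uses numpy's least-squares solver:

import numpy as np

rng = np.random.default_rng(0)
n = 500
x2 = rng.normal(size=n)
x1 = 0.5 * x2 + rng.normal(size=n)            # x1 deliberately correlated with x2
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

# Full multiple regression: y on [1, x1, x2]
X = np.column_stack([np.ones(n), x1, x2])
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]

# Step 1: regress x1 on the other regressors ([1, x2]) and keep the residuals
Z = np.column_stack([np.ones(n), x2])
x1_resid = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]

# Step 2: regress y on those residuals; the slope equals the
# multiple-regression coefficient on x1
beta1_twostep = (x1_resid @ y) / (x1_resid @ x1_resid)

print(beta_full[1], beta1_twostep)   # the two numbers agree up to floating-point error

The intuition: the step-1 residuals are the part of x1 that the other regressors cannot explain, so regressing y on them isolates exactly the contribution of x1 that the full regression attributes to its coefficient.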
@braziliankew 7 years ago
PLEASE SAVE MY LIFE! When would you not need to run a multiple regression on several regressors? Show and describe the situations in which a "simple" bivariate regression of the dependent variable on the first regressor gives the same unbiased estimator.
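Again not an answer from the video, but the textbook case is when the first regressor is uncorrelated with the other regressors (in sample, exactly orthogonal after demeaning), or when the other slope coefficients are zero. A quick numerical check of the orthogonal case, in the same made-up numpy style as the sketch above:

import numpy as np

rng = np.random.default_rng(1)
n = 500
x2 = rng.normal(size=n)
raw = rng.normal(size=n)
# Make x1 exactly orthogonal (in sample) to the constant and to x2
Z = np.column_stack([np.ones(n), x2])
x1 = raw - Z @ np.linalg.lstsq(Z, raw, rcond=None)[0]
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

# Multiple regression: y on [1, x1, x2]
beta_full = np.linalg.lstsq(np.column_stack([np.ones(n), x1, x2]), y, rcond=None)[0]
# Simple bivariate regression: y on [1, x1]
beta_simple = np.linalg.lstsq(np.column_stack([np.ones(n), x1]), y, rcond=None)[0]

print(beta_full[1], beta_simple[1])  # identical slopes: omitting x2 causes no bias here

When x1 is orthogonal to the other regressors, partialling them out leaves x1 unchanged, so the bivariate slope and the multiple-regression coefficient coincide.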