Thank you, Ben! In case anybody wonders, the two constraining conditions (3:50) are explained in video 15.
@ireneisme8747 4 years ago
Thanks! But do you know what the difference is between the variance here and the s^2 discussed in video 73?
@kottelkannim4919 4 years ago
@@ireneisme8747 Important catch. I hope I am not misleading you: the difference is the model used, either a one-parameter model (in video 73: alpha) or a two-parameter model (here: alpha and beta). The number of estimated parameters equals the number of constraints. In video 73, one wishes to estimate the variance of X using a sample {x_i}. First, one needs to estimate the expected value of X. Without any constraints, X is estimated by a constant, X_hat = alpha, and the estimator is the sample mean, alpha = (1/N) * sum{x_i}. Once alpha is estimated, there is only one constraint, alpha = constant, when one heads on to estimating var(X). In this video, two constraints on u_hat arise (see video 15, pointed out by Mariyana Angelova), namely:
1. sum{ui_hat} = 0 (the equivalent of the single constraint of video 73)
2. sum{x_i * ui_hat} = 0
So, when estimating var(u_hat), one is obliged to account for 2 constraints.
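The two constraints above can be checked numerically. This is a quick sketch of my own (the variable names are mine, not from the video): fit a simple regression y = alpha + beta*x + u by OLS and verify that the residuals sum to zero and are orthogonal to x.

```python
import numpy as np

# Simulate data from a two-parameter model y = alpha + beta*x + u
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 1.0 + 2.0 * x + rng.normal(size=50)

# OLS estimates for the two-parameter model
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
u_hat = y - (alpha_hat + beta_hat * x)

# Constraint 1: residuals sum to zero (up to floating-point error)
print(round(np.sum(u_hat), 10))
# Constraint 2: residuals are orthogonal to x
print(round(np.sum(x * u_hat), 10))
```

Both sums come out at machine precision of zero, which is exactly why two degrees of freedom are "used up" by the two estimated parameters.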
@Retumn98716 3 years ago
Why did the (x_i - x_bar)^2 term from the previous video drop out completely here?
@LordOfNoobstown 4 years ago
Why alpha-star and beta-star, and not hat?
@1982sadaf 9 years ago
N-k or N-k-1? (k is the number of regressors.) It should be N-k-1, to be consistent with N-2 for the case of one regressor.
@박수민-b8j4z 7 years ago
In this video, Ben defines k as the number of constraints. I think if we define k as the number of regressors, it should be N-k-1, as you said. Just my opinion!
@doanhaibui 7 years ago
He did mention in the last few seconds of the video that in the case of more than one regressor, we do have N-k-1.
@ushmita24 3 months ago
It's n-k-1 when the intercept is not counted in k, and n-k when the intercept is included in the count of k. Here he has 1 intercept + 1 explanatory variable, which gives N-2.
@lameresque 9 years ago
What is all this? So confusing.
@НатальяОрлова-б9б 3 years ago
You need to watch all the videos from the beginning to get what he's talking about.
@metehansert647 3 years ago
@@НатальяОрлова-б9б f*ck that
@ireneisme8747 4 years ago
Can someone kindly tell me the difference between the variance here and the one discussed in video 73?
@JeffreyYS 3 years ago
Basically they discuss two different things. The earlier video discussed the unbiased estimate of the variance of a sample of independent observations, where 1/(n-1) is applied because only 1 degree of freedom is lost. Here it discusses the unbiased estimate of the variance of the error term of a regression function: to get an unbiased estimate, 1/(n-2) is used because 2 degrees of freedom are lost. You may think of the error term as capturing the "relationship" between x and y. Also, please note that sigma^2 in video 73 is not the sigma^2 in this video. The former is the (population) variance of X, while the latter is the (population) variance of u.
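The n-2 claim above is easy to verify by Monte Carlo. Here is a sketch of my own (assumed model y = alpha + beta*x + u, names mine): dividing the residual sum of squares by (n - 2) gives an estimator whose average across many samples matches the true error variance.

```python
import numpy as np

rng = np.random.default_rng(42)
n, sigma2 = 30, 4.0               # true error variance sigma^2 = 4
estimates = []
for _ in range(20000):
    x = rng.normal(size=n)
    u = rng.normal(scale=np.sqrt(sigma2), size=n)
    y = 1.0 + 2.0 * x + u
    # OLS fit of the two-parameter model
    beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    alpha_hat = y.mean() - beta_hat * x.mean()
    rss = np.sum((y - alpha_hat - beta_hat * x) ** 2)
    estimates.append(rss / (n - 2))   # 2 d.o.f. lost: alpha and beta
print(round(np.mean(estimates), 2))   # close to 4.0
```

Dividing by n or by n-1 instead would make the average come out systematically below 4, which is exactly the bias the n-2 correction removes.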