Comments
@watchmanling 12 days ago
The tangency condition is satisfied when f is a real mapping on the data x; does that imply x follows a uniform distribution?
@subhajitsarkar8239 1 month ago
watching this lecture series out of interest lol...😅
@PitsaneMahlako 1 month ago
Very clear🎉
@gialukaraffel2050 2 months ago
Amazing lecture, almost as good as Christou 💪💪🤭
@rumplewang2814 2 months ago
Thank you, Professor, for the wonderful explanation. I really like it ❤❤❤
@WenanZhou 2 months ago
I really like that the professor made it clear that X1, ..., Xn is ONE sample, because I was confused by different versions.
@WenanZhou 2 months ago
Hi, just wondering: does Stats 100B only have 17 lectures? Are those all for this course?
@tanujdeshmukh 3 months ago
Doubt: when we find MSE(theta_hat), how do we know the true value of theta? Isn't that what we are actually trying to find out? So when we say MSE(theta_hat) = bias(theta_hat)^2 + variance(theta_hat), how do we calculate the bias if we do not actually know the theta of the population?
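The decomposition in the question above is a property of the estimator's sampling distribution; the true theta is unknown in practice, but the identity can be checked in a simulation where we choose theta ourselves. A minimal sketch (the exponential model and the values of theta and n are illustrative assumptions, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                  # true parameter: unknown in practice, known here only because we simulate
n, reps = 30, 20_000

# estimator: sample mean of n Exponential observations with mean theta
estimates = rng.exponential(scale=theta, size=(reps, n)).mean(axis=1)

mse = np.mean((estimates - theta) ** 2)
bias_sq = (estimates.mean() - theta) ** 2
variance = estimates.var()   # population-style variance (ddof=0), matching the exact decomposition

# mse equals bias_sq + variance up to floating-point error
```

With ddof=0 the identity mean((x - theta)^2) = (mean(x) - theta)^2 + var(x) holds exactly, so the check passes to machine precision rather than just up to Monte Carlo error.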
@tanujdeshmukh 3 months ago
1:02:21 How do we know that X1, X2, X3, ... come from a Poisson distribution in real life? We just observe certain values from our population, which are numbers; how can we infer that the population is Poisson?
@minhhao5031 3 months ago
Dear Professor Jingyi Jessica Li, thank you very much for spending your precious time making these invaluable, informative lectures publicly open for poor students like me. I learned a lot more from you than from my lecturer at my university. I truly respect you, and I wish you and your family all the best, Professor Li. Much love from Vietnam
@pan19682 3 months ago
Really a very good presentation. It was the first time I understood this tough subject. Thanks a lot, professor 😊
@yangxu575 3 months ago
Thank you very much! Very useful!
@liaoyixu6882 5 months ago
Thanks for such a nice lecture! May I ask some questions about 17:12? 1: I am confused about which cases need a "^": in your first lecture you mentioned that a value estimated from the data needs a "^" on top of it, so why doesn't the residual e have a "^" on top? 2: How do we derive Var(e) = sigma^2 (I - H) and Var(e_i) = sigma^2 (1 - h_{ii})? Thank you!
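On question 2 above: since e_hat = (I - H)Y and I - H is symmetric and idempotent, Var(e_hat) = (I - H) sigma^2 I (I - H)' = sigma^2 (I - H), and the diagonal gives Var(e_i) = sigma^2 (1 - h_{ii}). A quick numerical check of the two matrix properties this argument rests on (the random design matrix is just an illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(10), rng.normal(size=(10, 2))])  # toy design matrix with intercept

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix H = X (X'X)^{-1} X'
M = np.eye(10) - H                      # residual-maker matrix I - H

# M symmetric and idempotent implies Var(e_hat) = M (sigma^2 I) M' = sigma^2 M
sym_ok = bool(np.allclose(M, M.T))
idem_ok = bool(np.allclose(M @ M, M))
```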
@calcifer464 6 months ago
The third equation of the leapfrog method is wrong at 39:21.
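For readers cross-checking the equations at 39:21: a standard reference form of the leapfrog integrator (half-step momentum, full position steps, final half-step momentum) is sketched below. This is a generic textbook version with illustrative values, not a transcription of the slide.

```python
def leapfrog(q, p, grad_U, eps, L):
    """Leapfrog: half momentum step, L full position steps, half momentum step."""
    p -= 0.5 * eps * grad_U(q)      # initial half step for momentum
    for _ in range(L - 1):
        q += eps * p                # full step for position
        p -= eps * grad_U(q)        # full step for momentum
    q += eps * p                    # last full position step
    p -= 0.5 * eps * grad_U(q)      # final half step for momentum
    return q, p

# sanity check on a harmonic oscillator U(q) = q^2 / 2, where
# H = q^2/2 + p^2/2 should be nearly conserved by leapfrog
grad_U = lambda q: q
q1, p1 = leapfrog(1.0, 0.0, grad_U, eps=0.01, L=100)
energy_drift = abs((q1**2 + p1**2) / 2 - 0.5)
```

The near-conservation of the Hamiltonian (energy drift of order eps^2) is a quick way to tell a correct leapfrog transcription from a wrong one.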
@minhhao5031 7 months ago
Dear Professor Li, thank you very much for spending your precious time making these invaluable lectures for poor students like me. I am a third-year undergraduate student majoring in Biotechnology in Vietnam, and I feel very lucky to have found your lectures to learn about Statistical Methods in Computational Biology. May I ask you a question? If possible, would you mind recommending some textbooks to supplement your course, Madam? Once again, thank you so much, Professor Li. I wish you and your family all the best in this year of the Dragon!!!!
@JSBUCLA 7 months ago
I'm glad that you find the class helpful. I think "Introduction to Statistical Learning" (www.statlearning.com) should be a good starting point. I plan to write a book based on the course materials I'm teaching this quarter. Hopefully I can finish writing the book soon.
@JSBUCLA 7 months ago
If you are interested in learning more about genomics, I highly recommend Dr. Shirley Liu's Harvard STAT 115 class (liulab-dfci.github.io/bioinfo-combio/)
@minhhao5031 7 months ago
@@JSBUCLA Thank you very much for your true kindness, Professor <3 I wish you much health and success <3 I really hope to see your book soon. Have a nice day, Professor Li
@jeromedavidson3615 7 months ago
This is what I needed!!! Thank you so much
@CalgaryC 7 months ago
I was reviewing this class content recently but found that some of the episodes have been made private. I was wondering whether these videos could be made public again, as they are really great material for my learning 🎉. The illustrations are wonderful and very comprehensive. Thanks for your sharing nevertheless!
@JSBUCLA 6 months ago
Thanks for the compliment! The unlisted videos are the TA's discussion sections about homework solutions, so we didn't make them publicly available.
@xinglinli9874 7 months ago
Thanks for sharing!!!
@stephenomenal1245 7 months ago
Excellent lecture, bravo!!!
@algorithmo134 9 months ago
At 14:31, this is what I have from a discussion with Professor Jessica Li. To further clarify. Correct: set the decision criterion (what alpha value to reject the null; this does not depend on data) -> do the data analysis (p-value calculation) -> make a decision. Incorrect: set the decision (rejection) -> do the data analysis (p-value calculation) -> set the alpha value so you can always report a rejection decision. This is what we call a biased analysis, because the decision was pre-made and irrelevant to the data.
@StevenLouis-p2h 9 months ago
Thanks so much for your clear explanation of the GLM concept. Here I want to make one comment about the canonical link function of the binomial GLM. We have g(\mu) = log(\mu/(n - \mu)) = log(p/(1 - p)) if y ~ Bin(n, p), with \mu = np, thanks to communication with Jessica.
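The identity in the comment above can be checked directly: with mu = n p, log(mu / (n - mu)) reduces to the logit of p. A one-line numerical confirmation (n and p are arbitrary illustrative values):

```python
import math

n, p = 20, 0.3                      # illustrative binomial parameters
mu = n * p                          # binomial mean

lhs = math.log(mu / (n - mu))       # canonical link applied to mu
rhs = math.log(p / (1 - p))         # logit of p

# lhs and rhs agree to machine precision, since mu/(n - mu) = p/(1 - p)
```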
@shenzixuan-cy4qv 11 months ago
You made a mistake in the definition of a measurable function: a measurable function only needs the inverse images of Borel sets to be measurable.
@quonxinquonyi8570 11 months ago
UCLA has the best math and stats faculty and beats Harvard and MIT hands down in explaining things
@jbm5195 11 months ago
Thanks for this great video. May I know which textbook you follow for this course? Thanks very much
@yufeiruan531 11 months ago
I'm a little bit confused about the Y = X beta + e model in one-way ANOVA. In categorical data, for example (wage, educational level) data, our task looks more like: given wage, predict which educational level/category, which in this model means given Y, predict X. That seems a little odd to me, as normally the model is used to predict Y given X. How should we justify this model? Another small question: for the ordinal vs. nominal predictor setting, is there any difference in their setup to reflect the "ordinal" structure, like 0 < alpha_2 < alpha_3 < ... < alpha_I, etc.? Should we deal with these two settings differently?
@yufeiruan531 11 months ago
Oh I see, that's a categorical predictor variable, not a response variable
@JSBUCLA 11 months ago
You are right. We have a categorical predictor. If you want to code the categorical predictor as nominal, you may check stats.stackexchange.com/questions/33413/continuous-dependent-variable-with-ordinal-independent-variable @@yufeiruan531
@yufeiruan531 11 months ago
I see! Interesting to know. Thanks so much for your timely reply! @@JSBUCLA
@ShenghuiYang 1 year ago
Very clear. Thanks.
@mralan3022 1 year ago
From 1:01:32 onward, for the 2-sample case... I believe it should be Ybar with "m" as the subscript instead of "n". Many books drop the subscripts on Xbar and Ybar anyway...
@Ohrobo 1 year ago
I can't understand why the first derivative of the characteristic function evaluated at t = 0 is 0. Isn't the first derivative of the characteristic function i \times E[e^{itX} X]? How can it be 0 at t = 0?
@JSBUCLA 1 year ago
We talk about the characteristic function of (X-\mu). At t=0, i E[e^{it(X-\mu)} (X-\mu)] = i E[X-\mu] = 0 because E[X-\mu] = 0
@Ohrobo 1 year ago
@@JSBUCLA Oh I understand! I really appreciate it!
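The explanation in this thread can also be checked numerically: the empirical characteristic function of the centered variable X - mu has derivative approximately i E[X - mu] = 0 at t = 0. A small Monte Carlo sketch (the exponential distribution and the step size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=3.0, size=200_000)
xc = x - x.mean()                    # centered sample, playing the role of X - mu

def phi(t):
    """Empirical characteristic function of the centered variable."""
    return np.mean(np.exp(1j * t * xc))

h = 1e-4
deriv0 = (phi(h) - phi(-h)) / (2 * h)   # central-difference derivative at t = 0

# deriv0 is approximately i * E[X - mu] = 0
```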
@grantguo9399 1 year ago
Thank you so much, brilliant lecture! Personally, I think this is better than the MIT OCW statistics one.
@zheshunwu 8 months ago
Totally agree.
@ItsKhabib 2 months ago
Agree
@mralan3022 1 year ago
Good points on why to learn this... 6:35 😀
@ktosoleang9567 1 year ago
Thank you. That is very kind of you to share your wisdom.
@pulakgautam3536 1 year ago
I can't thank you enough for sharing these!
@connorfrankston5548 1 year ago
The Negative Binomial distribution is a natural generalization of the Poisson through a Bayesian approach, since the Gamma distribution is the conjugate prior of the Poisson distribution, yes?
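Yes: if lambda ~ Gamma(shape r, scale theta) and Y | lambda ~ Poisson(lambda), then marginally Y ~ NegBin(r, p) with p = 1 / (1 + theta). A simulation sketch comparing the mixture to a direct negative binomial sampler (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
r, theta = 4.0, 1.5                  # Gamma shape and scale (illustrative)
size = 500_000

# Gamma-Poisson mixture: lambda ~ Gamma(r, theta), Y | lambda ~ Poisson(lambda)
lam = rng.gamma(shape=r, scale=theta, size=size)
y_mix = rng.poisson(lam)

# marginal claim: Y ~ NegBin(r, p) with p = 1 / (1 + theta)
p = 1.0 / (1.0 + theta)
y_nb = rng.negative_binomial(r, p, size=size)

# the two samples should agree in mean (r * theta) and variance (r * theta * (1 + theta))
mean_gap = abs(y_mix.mean() - y_nb.mean())
var_gap = abs(y_mix.var() - y_nb.var())
```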
@connorfrankston5548 1 year ago
Hi Professor Li! I'm curious what your thoughts are on defining a Bayesian prior and posterior on the T-statistic for a permutation test. It seems to me that if we can reasonably model the distribution of the T-statistic, that could enable us to make more flexible inferences beyond the discrete p-values achieved by using the ECDF for a limited number of technical replicates. I would like to try this myself, but do you have any suggestions about limitations or methods for this based on your expertise? Thank you for reading this, and for your lectures!
@ericagao6944 1 year ago
I referred to more than 5 lecture notes from different universities and was still confused. I finally figured it out thanks to this video. Thanks a lot!
@syz911 1 year ago
22:50: This is misleading. You have made an implicit assumption that Var(Xi^2) is finite. The law of large numbers works when the population variance is finite. In this case you need to prove that Var(Xi^2) is finite in order to apply the law of large numbers. Your derivation works for samples from distributions for which the fourth and second moments are finite. In general, Sn^2 is not a consistent estimator of sigma^2 unless Var(Sn^2) approaches zero in the limit.
@piby2 1 year ago
This video is suitable for people who have already done a graduate-level course in MCMC. But then again, why would they listen 🙄 to a YouTube video?
@penguin1780 1 year ago
Hi, we had a student pop on over to our fountain pen forum to ask what pen was being used in this video. Professor, could you please tell me what brand or model your fountain pen is?
@JSBUCLA 1 year ago
The fountain pen's brand is Faber-Castell.
@penguin1780 1 year ago
@@JSBUCLA Thank you so much!
@chenjxing 1 year ago
Great lecture! But the proof didn't cover why we can put the higher moments into the small o(1/n) term. Say the 3rd moment does not exist; how do we fill this gap?
@JSBUCLA 1 year ago
This proof assumes bounded moments, as in www.cs.toronto.edu/~yuvalf/CLT.pdf
@chenjxing 1 year ago
@@JSBUCLA Thank you for the illustration!
@shaaficihussein1678 1 year ago
Where could I kindly get the 16 hidden videos?
@JSBUCLA 1 year ago
These videos were TA's discussion sections covering homework problems, so we will not release them to the public.
@vincentojera2868 1 year ago
Hello professor, kindly assist me with PDF notes on the moment generating function
@chaowang3093 1 year ago
Great lectures; they cleared things up a lot for me. Thanks!!!
@liwang3 1 year ago
Maybe one mistake in the last part: when p = 0.5, the asymptotic variance reaches its maximum value for a given density function (e.g., the uniform density), which implies that the median estimator is actually the worst.
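For reference on the quantity discussed above: the asymptotic variance of the sample p-quantile is p(1 - p) / (n f(q)^2), and at p = 0.5 the numerator p(1 - p) is indeed maximal. A quick simulation checking the formula for the Uniform(0, 1) median, where f(q) = 1 (the sample size and replicate count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 401, 10_000                # illustrative sample size and replicate count

# sample medians of Uniform(0, 1) samples of size n
medians = np.median(rng.uniform(size=(reps, n)), axis=1)

emp_var = medians.var()
asym_var = 0.25 / n                  # p(1 - p) / (n f(q)^2) with p = 0.5, f = 1

# emp_var should be close to asym_var up to Monte Carlo and finite-n error
```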
@paulmonnu4892 1 year ago
Hello ma'am, is it possible to get the unavailable videos? 😪 These lectures are life-saving for my class and ongoing projects.
@blablablerg 1 year ago
Very clear explanation!
@TomHutchinson5 1 year ago
Great advice! I'll be sure to share this with colleagues. One issue I've run into is that the versions of the libraries I use change. Instead of pulling in whatever the latest version of a library is, I try to pin down a specific release. Docker has been helpful for that. I've heard packrat is useful for that issue in R. I'm more in the Python world, but the dangers and concepts are similar.
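A lightweight complement to Docker/packrat on the Python side is to record exact versions and verify them at runtime. A minimal sketch (the `check_pins` helper and the pin dict are made up for illustration; a real project would keep the pins in requirements.txt or a lockfile):

```python
import importlib.metadata

def check_pins(pins):
    """Return {package: (wanted, installed)} for every pin that does not match."""
    mismatches = {}
    for pkg, want in pins.items():
        try:
            have = importlib.metadata.version(pkg)
        except importlib.metadata.PackageNotFoundError:
            have = None                      # not installed counts as a mismatch
        if have != want:
            mismatches[pkg] = (want, have)
    return mismatches

# deliberately impossible pin, so the mismatch is always reported
result = check_pins({"numpy": "0.0.0"})
```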
@underlecht 1 year ago
You are a great lecturer, worth 10000x more views
@georgerochester4658 1 year ago
Excellent presentation, very well done
@amberxv4777 1 year ago
Thank you, your MM explanation finally made me understand it.