Comments
@LilyHuang208 · 1 month ago
Thank you so much for sharing this. I am a PhD student learning Bayesian stats, and of all the online courses I have encountered, yours is the clearest and easiest to understand! I also realise that you are now a professor, congratulations! Thank you for helping so many learners like me 😀
@bnuzhanglei2008 · 27 days ago
Thank you for your comment! Much appreciated.
@zetterik · 9 months ago
Hey, your course is super helpful, thanks a lot! I was just wondering why you specified the mu_raw boundaries in the generated quantities block if you do not use them later in the code. Also, the loo result returned when running the model is the same with and without that part.
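For context, a minimal sketch of the pattern this question seems to describe; all names here (mu, mu_raw, y, N) are hypothetical stand-ins, not the course's actual model. In Stan, bounds declared on a generated quantity act only as a runtime check on the reported value, not as a constraint on sampling, which would also explain why loo comes out unchanged:

```stan
generated quantities {
  // Bounds on a generated quantity only validate the reported value;
  // they do not constrain the sampler, so loo is identical either way
  real<lower=0, upper=1> mu;
  vector[N] log_lik;                 // pointwise log-likelihood for loo

  mu = Phi_approx(mu_raw);           // map the unconstrained mean back to [0, 1]
  for (n in 1:N)
    log_lik[n] = bernoulli_lpmf(y[n] | mu);
}
```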
@赵焕哉 · 2 years ago
Thank you for the lesson. You calculated both LOOIC and WAIC in the code, but only mentioned how to use LOOIC, so I'd like to ask: what do we do with the calculated WAIC, and what is its relationship to LOOIC? Thank you so much!
@bnuzhanglei2008 · 2 years ago
Thanks for your interest! You may want to check this paper: link.springer.com/article/10.1007/s11222-016-9696-4
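For the mechanics, a minimal sketch using the loo package; it assumes a fitted stanfit object named fit whose generated quantities block stores pointwise log-likelihoods under the (assumed) name log_lik:

```r
library(rstan)
library(loo)

# Extract the pointwise log-likelihood; "log_lik" is the assumed name of
# the quantity stored in the model's generated quantities block
ll <- extract_log_lik(fit, parameter_name = "log_lik", merge_chains = FALSE)

# PSIS-LOO: looic is simply -2 * elpd_loo (deviance scale)
loo_fit <- loo(ll, r_eff = relative_eff(exp(ll)))

# WAIC estimates the same out-of-sample predictive accuracy, but without
# the Pareto-k diagnostics that make PSIS-LOO the recommended choice
waic_fit <- waic(ll)

print(loo_fit)
print(waic_fit)
```

In short, both estimate the same expected out-of-sample predictive accuracy; the linked Vehtari, Gelman & Gabry (2017) paper argues that PSIS-LOO is the more reliable of the two.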
@megfan1383 · 2 years ago
May I ask if the summer school is open to the public? Where can I access it? Thank you!
@bnuzhanglei2008 · 2 years ago
Thanks for your interest! This is a semester course, not a summer school.
@wzyjoseph7317 · 2 years ago
37:28 Maybe compute a correlation coefficient between the keywords cognition/cognitive, computational, and reinforcement learning XDXDXD
@manuelmanolo7099 · 2 years ago
Will you be holding additional master's seminars anytime soon? I started my master's this semester at univie and would love to take a course of yours :)
@bnuzhanglei2008 · 2 years ago
Hi Manuel, thanks for your interest! I am not sure, as nothing has been planned yet, but possibly there will be something in the winter semester of 2022.
@pawelsirotkin4142 · 3 years ago
The best lecture on this topic I've ever seen! Thank you very much.
@bnuzhanglei2008 · 3 years ago
Very glad, thank you!
@MW-mg9dm · 3 years ago
Thank you for reminding me in a different way. I also realise that my remarks were very inappropriate; I withdraw them, as mistakes must be corrected. This has nothing to do with how wealthy you are: even if someone were homeless, as long as what they say is right, I would still take it on board. I also apologise to that friend Zhang San.
@omidghasemi1401 · 3 years ago
Hi Lei, thanks for your amazing talks. I have a question: are these new talks similar in content to the ones you uploaded last year? Thanks.
@bnuzhanglei2008 · 3 years ago
Hi Omid, thanks for your interest! They are nearly the same, but this year I have been using the Zoom whiteboard a bit more to draw things when explaining.
@yitinghuang9874 · 3 years ago
Dear Sir, thank you sincerely for your lectures and videos; they have given me many great insights into Bayesian computation! Nevertheless, I have a question about the true parameters. I have seen "true parameter" several times in the info sections of the R scripts you uploaded to GitHub. Do we need to calculate the true parameters in advance to compare models? Thank you again for answering my questions! All the best!
@bnuzhanglei2008 · 3 years ago
Thanks for your interest! No: the true parameters were used during simulations, so I know the "truth". This does not apply to real data.
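To illustrate the point, a minimal parameter-recovery sketch (the numbers are invented for this comment, not taken from the course scripts): simulate data from a known "true" parameter, estimate it, and check that the estimate lands near the truth. With real data there is no theta_true to compare against.

```r
set.seed(1)

# The "true" parameter, known only because we generate the data ourselves
theta_true <- 0.7

# Simulate 20 students each answering 8 questions
n_students  <- 20
n_questions <- 8
correct <- rbinom(n_students, size = n_questions, prob = theta_true)

# Recover the parameter (here the simple maximum-likelihood estimate)
theta_hat <- sum(correct) / (n_students * n_questions)
cat("true:", theta_true, " recovered:", round(theta_hat, 3), "\n")
```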
@APfishing_guitar_statistics · 3 years ago
Is there a lecture 5 in this series? It's not in this playlist. Thanks.
@APfishing_guitar_statistics · 3 years ago
Working through this helpful set of videos. Was there a lecture 2? It is not on the playlist, as far as I can tell.
@bnuzhanglei2008 · 3 years ago
Thanks for your interest! A few lectures this year were not recorded. You may want to check out last year's recordings: kzbin.info/aero/PLfRTb2z8k2x9gNBypgMIj3oNLF8lqM44-
3 years ago
Just finished the last lecture today. Excellent all around; the only issue is that it isn't longer. I wish you had been able to cover things like hierarchical models with different groups or repeated measures; simulation and parameter recovery; and parametric analyses with fMRI. Nonetheless, I definitely got the basics needed to figure these out myself with the user guide, the forums, and the many papers with available code, so thank you for this series, and thank you especially for making it available online for free!
@bnuzhanglei2008 · 3 years ago
Hi Vasco, thank you for your interest and support! And sorry that I overlooked your comment. This is a master's course, so the length is decided in advance, and there is always the challenge of balancing more advanced learners like you against those who are a bit slower. Regarding the content you wished for:
- hierarchical models with different groups or repeated measures: we are working on a tutorial paper;
- simulation and parameter recovery: perhaps see github.com/lei-zhang/COSN_webinar/tree/master/20200506_comp_workflow;
- parametric analyses with fMRI: this was originally included but was cut because the students were not yet familiar with fMRI. Take a look at github.com/lei-zhang/BayesCog_Part2, which has slightly more detail on model-based fMRI.
@m222-d8q · 3 years ago
Thanks for your lectures, which are really amazing! It is very kind of you! But I cannot find the course slides through your link; maybe something is wrong...
@bnuzhanglei2008 · 3 years ago
Thank you for your interest! I will check the links.
3 years ago
I had the same issue. I managed to get them by going into the slides folder on GitHub and browsing an older version of the repo.
@adamhu9445 · 3 years ago
I watched this on Bilibili; I didn't expect to see it on YouTube too ^^
@bnuzhanglei2008 · 3 years ago
Thanks for the support!
@adamhu9445 · 3 years ago
@bnuzhanglei2008 Hi Prof. Zhang, I need to write a paper on computational psychiatry soon. Could you suggest which directions are currently hot and worth writing about?
@mengguojing6556 · 3 years ago
The best I've ever watched in CogSci!
@bnuzhanglei2008 · 3 years ago
thank you!
@zhaoningli4449 · 3 years ago
Can you share the link to the talk that teaches people how to be a good presenter?
@bnuzhanglei2008 · 3 years ago
hi, do you mean this? www.ted.com/talks/bonnie_bassler_on_how_bacteria_communicate?language=en#t-30724
@npt0112 · 3 years ago
It is very clear and easy to understand. Thank you!
@npt0112 · 3 years ago
Thanks a lot!
@李焜耀-l7d · 3 years ago
Is the MCMC Robot something like stochastic gradient descent?
@bnuzhanglei2008 · 3 years ago
Indeed, they are similar, but there is a fundamental difference: one's goal is to optimise (gradient descent), while the other's goal is to approximate the posterior (MCMC). The math is related, though. See here: stats.stackexchange.com/questions/78876/is-the-mcmc-simply-a-probabilistic-gradient-descent
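A minimal sketch of that contrast (a toy example, not from the lecture code): on the same one-dimensional log-density, gradient ascent converges to the single highest point, while Metropolis sampling wanders so that its visit frequencies trace out the whole posterior.

```r
set.seed(42)

# A simple target: log-density of N(2, 1), known up to a constant
log_p  <- function(x) -0.5 * (x - 2)^2
grad_p <- function(x) -(x - 2)          # its gradient

# Gradient ascent: converges to the single mode
x <- -3
for (i in 1:200) x <- x + 0.1 * grad_p(x)
cat("gradient ascent ends at:", round(x, 3), "\n")   # ~2, the peak

# Metropolis: visits values in proportion to the posterior density
n_iter  <- 10000
samples <- numeric(n_iter)
current <- -3
for (i in 1:n_iter) {
  proposal <- current + rnorm(1, 0, 1)               # random step
  if (log(runif(1)) < log_p(proposal) - log_p(current)) {
    current <- proposal                              # "uphill" always accepted,
  }                                                  # "downhill" only sometimes
  samples[i] <- current
}
cat("MCMC mean:", round(mean(samples), 3), " sd:", round(sd(samples), 3), "\n")
# hist(samples) would approximate the full N(2, 1) shape
```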
@李焜耀-l7d · 3 years ago
Professor, is "others are not ruled out" because there is also a 95% confidence interval?
@bnuzhanglei2008 · 3 years ago
Not exactly. It is because the posterior is an entire distribution, so you can quantify the uncertainty.
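A tiny sketch of what "quantify the uncertainty" looks like in practice, assuming samples is any vector of posterior draws (e.g., from the Metropolis toy above):

```r
# The posterior is a whole distribution: values outside the interval
# are not ruled out, just assigned low probability
quantile(samples, c(0.025, 0.975))   # central 95% credible interval
mean(samples > 3)                    # posterior probability that theta > 3
```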
@manuelmanolo7099 · 4 years ago
This is so valuable! Thank you for uploading this on YouTube! :)
@bnuzhanglei2008 · 4 years ago
thanks! very kind
@gavinaustin4474 · 4 years ago
Thanks Lei. Could you also please upload lecture 03?
@bnuzhanglei2008 · 4 years ago
Hi Gavin, thanks! L03 was not recorded, but it's the same as last semester. kzbin.info/www/bejne/i52aZKWqhLOpfqs
@behnamplays · 4 years ago
your lectures are very helpful and informative. thank you!
@bnuzhanglei2008 · 4 years ago
thank you, Behnam! very much appreciated.
@gavinaustin4474 · 4 years ago
Thanks Lei. An excellent lecture series. I've learnt a couple of useful things already.
@bnuzhanglei2008 · 4 years ago
thanks Gavin - very glad!
@wendywen931 · 4 years ago
very good
@bokkieyeung504 · 4 years ago
For my 2nd question: is that related to maximum-likelihood parameter estimation? That is, do we prefer the robot to tell us which value the parameter is most likely to take given the data (on that hill, the place with the highest elevation)? But if the purpose is to delineate the curve, the robot should visit every place with equal chance, while if the purpose is to find the maximum likelihood, the robot should find the highest place... And I have a follow-up question: 3) So with the elevation sensor, do we know, for each sampled parameter value (each randomly visited place), its posterior probability (the elevation of that place)? Is that why MCMC can give us information about the shape of the curve by approximating the product of likelihood and prior? I still can't fully understand MCMC...
@bnuzhanglei2008 · 4 years ago
Cont.: (3) So in this 2D space, the x-axis is the parameter value and the y-axis is the relative density. You are right that this enables MCMC to approximate the shape; see the link I gave below.
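A quick sketch of that "visits approximate the shape" idea, again assuming a vector of draws named samples from the toy N(2, 1) target used earlier in this thread:

```r
# Binning the visited positions recovers the relative density:
# frequently visited values = high posterior density
hist(samples, breaks = 50, freq = FALSE,
     xlab = "parameter value",
     main = "MCMC visits approximate the posterior")
curve(dnorm(x, mean = 2, sd = 1), add = TRUE, lwd = 2)  # true target
```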
@bokkieyeung504 · 4 years ago
Hi, I would like to ask two questions: 1) What do you mean by "compiling" (i.e., the computation that usually takes time but can be skipped with the command rstan_options(auto_write = TRUE), as long as the model specification stays the same)? 2) Why can the MCMC algorithm (i.e., the robot always goes up but only stochastically goes down) delineate the shape of the likelihood?
@bnuzhanglei2008 · 4 years ago
Hi, thanks for the questions. (1) Compiling means translating the Stan code to C++: mc-stan.org/cmdstanr/reference/model-method-compile.html (2) In the simple example I used in class, the robot visits "high" positions more often and "low" positions less often; if you then take summary statistics (e.g., a histogram), those visits approximate the shape. towardsdatascience.com/a-zero-math-introduction-to-markov-chain-monte-carlo-methods-dcba889e0c50?gi=576b8ac62430
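On point (1), a minimal sketch of how the compile step is cached in rstan; the file name model.stan and the data list stan_data are placeholders:

```r
library(rstan)
rstan_options(auto_write = TRUE)             # save the compiled model to disk
options(mc.cores = parallel::detectCores())  # run chains in parallel

# The first call translates the Stan program to C++ and compiles it (slow);
# subsequent calls with an unchanged model.stan reuse the cached binary
fit <- stan(file = "model.stan", data = stan_data,
            chains = 4, iter = 2000)
```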
@bokkieyeung504 · 4 years ago
@bnuzhanglei2008 Thanks a lot; I will continue following this course.
@valeriepu6619 · 4 years ago
Very well explained! Easy to understand. Beginner friendly.
@bnuzhanglei2008 · 4 years ago
ha thanks a lot!!!
@bokkieyeung504 · 4 years ago
I think the difference between the "throwing dice" example and yours is that the former is a simulated situation, like a "population" (if we throw the die many times, the average probability of getting x=1 is 1/6, and the same for x=2,...,6), while your example is a specific situation, like a "sample" (we only recruit 20 students to take the test, and we get the probabilities and the graph from this sample alone). If we turned the dice example into a specific situation, say throwing the die 60 times, we would get x=1 about 10 times, so the exact probability calculation is 10/60: the numerator (10) is the frequency of the outcome x=1, and the denominator (60) is the total number of throws. Just as in your example, for x=5 the numerator (10) is the number of students who answered 5 questions correctly, and the denominator (20) is the total number of students. Is that correct?
@bokkieyeung504 · 4 years ago
Hi, I would like to ask how the values on the y-axis of a probability mass function (PMF) are calculated. If I understand correctly, the y-axis of a PMF gives the probability of each event/outcome on the x-axis. At 16:57 you mentioned "divide the frequency by the total number of questions, i.e., 8", but should it be "divide the frequency by the total number of students, i.e., 20" (if "frequency" means the number of students who answered X questions correctly)? I understand that in the dice example the PMF is a uniform distribution and the y-value for each x is 1/6, where 6 is the total number of possible outcomes. But I don't think the same holds for your 20-student, 8-question example. Suppose that among the 20 students, 10 answered 5 questions correctly, 8 answered 6 correctly, and 2 answered 7 correctly. Then the PMF would have a bar of height 0.5 = 10/20 at x=5, 0.4 = 8/20 at x=6, 0.1 = 2/20 at x=7, and 0 for x=0,1,2,3,4,8. Because only x=5,6,7 occur, their probabilities sum to 1 (0.5+0.4+0.1). Am I correct?
@bnuzhanglei2008 · 4 years ago
Hi Bokkie Yeung, you are definitely correct. That was misspoken, and the denominator should be 20, the total number of students. Sorry about that.
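A one-line check of that arithmetic in R, using the invented counts from the question above:

```r
# Hypothetical data: 10 students scored 5, 8 scored 6, 2 scored 7
scores <- c(rep(5, 10), rep(6, 8), rep(7, 2))

# Empirical PMF: frequency of each score / total number of students (20)
table(scores) / length(scores)
#>   5   6   7
#> 0.5 0.4 0.1
```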
@xurio1785 · 4 years ago
Very helpful. 1:25:19: there should be a note, "Some rows or columns may not sum exactly to their displayed marginals because of rounding error from the original data. Data adapted from Snee (1974)."
@bnuzhanglei2008 · 4 years ago
Thanks for the comment, Xu Rio! I'll add it.
@nikkijuntaoni340 · 4 years ago
I am doing research on the decision-making performance of psychiatric patients; thank you very much for sharing~
@bnuzhanglei2008 · 4 years ago
Thanks for your support :)
@meiyunwu5257 · 4 years ago
nice! thank you
@bnuzhanglei2008 · 4 years ago
Thank you too!
@xurio1785 · 4 years ago
You have my support!
@bnuzhanglei2008 · 4 years ago
thanks!!!
@MrHigashino · 4 years ago
Nice lecture
@bnuzhanglei2008 · 4 years ago
thanks!!!