Great presentation. It gave me a lot of insight, especially given the current developments in score-based models and generation with them.
@leonardohuang 4 years ago
Fantastic talk! Clear, informative and vivid.
@Tryndamereization 2 years ago
Very good presentation. Thank you
@jfndfiunskj5299 4 years ago
Very good. Well done.
@sayarerdi 3 years ago
Could you also share the code for your animations?
@qiuhaowang2063 4 years ago
Helpful talk, thank you! I just don't understand why there is a 'div' in the equation at 16:45. Could you point me to some references?
@k_neklyudov 3 years ago
Remember that div(f) is just a convenient notation for \sum_i df_i/dx_i. If you try to take the Jacobian of the formula for x', you will end up evaluating the determinant of a large matrix, and there you should see that some terms of this determinant are negligible, i.e., o(dt). Hence, in some sense, the diagonal of the matrix (whose entries are 1 + df_i/dx_i*dt) plays the main role, and that is where div appears.
@ivanmedri288 2 years ago
I know this is an old post, but could you spell this out? I am stuck on the same part.
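For anyone stuck on the same step, here is the expansion spelled out (a sketch, assuming the update is x' = x + f(x)dt, as in the comment above):

The Jacobian of the map is dx'/dx = I + (df/dx)dt. Expanding the determinant,

det(I + (df/dx)dt) = 1 + \sum_i (df_i/dx_i)dt + o(dt) = 1 + div(f)dt + o(dt),

since every off-diagonal and cross term carries at least two factors of dt. Taking the logarithm,

log|det(dx'/dx)| = div(f)dt + o(dt),

which is exactly the div term that appears in the change-of-variables formula for the log-density.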
@abhishekmaiti8332 4 years ago
At 34:01, are the \thetas the parameters of the model, or are they data points sampled from a model parameterised by \theta?
@k_neklyudov 3 years ago
The \thetas are the parameters of the model, which could be a regression or a classifier in this case. Hence, here we sample the parameters of a model (say, the weights of a linear regression), and then we average the predictions over all these sampled parameters.
@abhishekmaiti8332 3 years ago
@k_neklyudov I see, thanks a lot for the clarification.
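To make the averaging concrete, here is a minimal Python sketch (the linear-regression model, the variable names, and the assumption that theta_samples already come from an MCMC run are illustrative, not taken from the talk):

import numpy as np

def predict(x, theta):
    # Prediction of a single model: a linear regression with weights theta.
    return x @ theta

def averaged_prediction(x, theta_samples):
    # theta_samples: array of shape (S, D), e.g. draws of the weights from an MCMC run.
    # Averages the predictions over all sampled parameters, approximating the
    # posterior predictive mean E_{p(theta | data)}[predict(x, theta)].
    preds = np.stack([predict(x, theta) for theta in theta_samples])
    return preds.mean(axis=0)

# Usage (illustrative): x has shape (N, D), theta_samples has shape (S, D), and
# averaged_prediction(x, theta_samples) returns the averaged predictions of shape (N,).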
@nadineca3325 4 years ago
Thank you for this informative discussion and amazing presentation!
@PradeepBanerjeeKr 4 years ago
Simply super!
@hakiim.jamaluddin 3 years ago
Super clear, thanks!
@MrDudugeda2 4 years ago
Great talk!
@해위잉 7 months ago
Fantastic! I want to go to Samsung.
@umountable 4 years ago
Maybe it would be useful to motivate a little more before showing slides packed with equations for half an hour straight.
@MNasirAziz 4 years ago
I understand you. But you can look at how he uses the final derived equation by just skipping the derivation.
@k_neklyudov 4 years ago
Hi Stefan! Thanks a lot for your feedback! I totally agree that this talk, taken separately, lacks motivation and practical examples. The reason is that it was designed as a part of Day 5, which is about MCMC. You can also find a discussion of Langevin dynamics in this talk: kzbin.info/www/bejne/h5ClmmV-brN-sMU.