It's a good presentation. Very useful for me! Thanks a lot!
@jeremydy3340 7 months ago
Talk starts at 32:57
@Shintuku 7 months ago
Does this presentation correspond to a paper? It would be nice to have access to the slides/citations; very interesting stuff.
@jeremydy3340 8 months ago
Talk starts at 16:30
@hansbleuer3346 9 months ago
Interesting explanation
@CandidDate 10 months ago
I think a sense of humor in robotics would lead to clownish appeal.
@DhruvMetha 10 months ago
Starts at 15:25
@aennmatyasbarra-hunyor5506 10 months ago
Great one, thank you! I would like to be part of it. One day it will be possible.
@JosephHeck 11 months ago
Content actually starts at 17:30, and the speaker's audio starts at 18:30
@araldjean-charles3924 a year ago
For the initial conditions that work, has anybody looked at how much wiggle room you have? Is there an epsilon-neighborhood of the initial state you can safely start from, and how small is epsilon?
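The question above can be probed empirically: sample perturbed initial states in an epsilon-ball around a known-good one and check whether rollouts still succeed. Below is a minimal, hypothetical sketch; `rollout_succeeds` is a stand-in for whatever success criterion the actual system uses, not anything from the talk.

```python
import numpy as np

def rollout_succeeds(x0):
    # Stand-in success check (hypothetical): a rollout "works" iff the
    # initial state lies inside a known basin of attraction.
    return np.linalg.norm(x0) < 1.0

def estimate_epsilon(x0, n_samples=200, seed=0):
    """Largest tested radius eps such that all sampled perturbations
    of x0 within the eps-ball still lead to a successful rollout."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for eps in np.linspace(0.05, 1.0, 20):
        # Sample points uniformly in the eps-ball around x0.
        deltas = rng.normal(size=(n_samples, x0.size))
        radii = eps * rng.uniform(size=(n_samples, 1)) ** (1.0 / x0.size)
        deltas = deltas / np.linalg.norm(deltas, axis=1, keepdims=True) * radii
        if all(rollout_succeeds(x0 + d) for d in deltas):
            best = eps
        else:
            break
    return best

eps = estimate_epsilon(np.zeros(2))
```

This is only a Monte Carlo lower-bound estimate; a formal answer would need a certified region of attraction (e.g. via a Lyapunov argument).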
@cbasile22 a year ago
Is there any formal course that covers multi-agent RL? I find it confusing so far. Thanks!
@AngeloKrs878 a year ago
At 1:07 the subtitles read "my experience with drugs couldn't be better"
@dbp_patel_1994 a year ago
😂
@bhaskartripathi 2 years ago
I was always confused by MA-MDPs. You made them look very simple, and the mathematical notation was concise and research-paper ready.
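For anyone else new to the formalism the comment praises, a multi-agent MDP is commonly written as the following tuple (standard textbook notation, not taken verbatim from the talk's slides):

```latex
% n agents, shared state space S, per-agent action spaces A_i:
\[
\mathcal{M} = \bigl( \mathcal{S},\ \{\mathcal{A}_i\}_{i=1}^{n},\ P,\ \{R_i\}_{i=1}^{n},\ \gamma \bigr),
\]
% transitions depend on the joint action; each agent has its own reward:
\[
P : \mathcal{S} \times \mathcal{A}_1 \times \cdots \times \mathcal{A}_n \to \Delta(\mathcal{S}),
\qquad
R_i : \mathcal{S} \times \mathcal{A}_1 \times \cdots \times \mathcal{A}_n \to \mathbb{R}.
\]
```

The key difference from a single-agent MDP is that both the transition kernel and each agent's reward depend on the joint action of all agents.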
@keeperofthelight9681 3 years ago
How do you do convolutional LSTMs and other things in JAX? More tutorials please, Matthew Johnson.
@kshitijshekhar1144 2 years ago
Flax is a high-level neural-network library built on top of JAX; check out its documentation. It's a fairly new library, built for flexibility, and you can make your mark there by submitting PRs.
@ImtithalSaeed 3 years ago
The u and a notation confuses me
@georgemu7464 3 years ago
Very insightful
@devjaiswal1685 3 years ago
Thank you sir
@LB-fx1kx 3 years ago
Great work!
@iandanforth 3 years ago
Really enjoyed the presentation. The 'Puzzle' slide is problematic, though: all three devices have 'lots of wiring'; the camera just has smaller wires in a better package.
@syedshahid8316 a year ago
I live in Karachi, Pakistan. I like your
@p.z.8355 3 years ago
How do you linearize the KG without getting into exponential complexity?
@p.z.8355 3 years ago
How do you combine self-supervised learning with declarative knowledge?
@pakistanbtsarmy2625 3 years ago
👌
@ImtithalSaeed 3 years ago
Which book can I refer to?
@f150bc 3 years ago
The Diehold Foundation, along with Suspicious Observers, is pushing a theory of a 12,000-year cycle of supernova and magnetic reversal that brings a catastrophic event. Please debate them on their theories; they have tens of thousands of people following them. I fear the theory might be partially right. You can find them on YouTube under those names. Please look into this. Thanking you in advance, Carl.
@fredxu9826 3 years ago
This is a great talk. Personally I don't have the prerequisites for manifold learning, but the idea behind hybrid message passing is quite profound. Just wondering: if you have a Bayesian GNN where the prior encodes the linear assumption, would that be equivalent to the GNN + PGM model presented here? Or is there a limit to the expressiveness of a Bayesian prior?
@thanasisk 3 years ago
Great talk, thank you for uploading.
@DistortedV12 3 years ago
Amazing work
@wgharbieh 3 years ago
Talk starts at 6:00
@harrysaini7702 4 years ago
Can we get the slides (PPT), please?
@alinouruzi5371 4 years ago
good
@TheRcCrazyFan 4 years ago
Starts at 12:08
@AvindraGoolcharan 4 years ago
Starts around 7:03
@viktoriyat1815 4 years ago
this was amazing, thank you so much for uploading it!!
@Mefaso09 4 years ago
Starts at 11:20
@mitembodiedintelligence8675 2 years ago
Thank you! I have updated the video so that it starts playing from the very beginning! -Ge