Wonderful, more and more🙃. Many thanks, Alfredo!!!
@alfcnz · 3 years ago
Haha, I don't see the end 😭🤣😭🤣
@muhammadharris4470 · 3 months ago
Great resource! The text needs to have a background to be legible.
@alfcnz · 3 months ago
Thanks! 😀 That’s why we provide the slides 😊
@geekyrahuliitm · 3 years ago
Thanks for making these videos publicly available. :-)
@alfcnz · 3 years ago
😇😇😇
@ShihgianLee · 3 years ago
Thank you for uploading this, Alf! This wasn't in the 2020 edition. After watching the 2021 EBM lecture, I feel that everything is about pushing energy down and up. This clarifies it! The interpolation vs. extrapolation in high-dimensional space is interesting. It is like detective work to deduce results in high-dimensional space 😀
@alfcnz · 3 years ago
You're welcome 😊😊😊 Next semester there'll be more material on energy stuff. 🔋🔋🔋
@ShihgianLee · 3 years ago
🥳🥳🥳
@siddhantrai7529 · 3 years ago
Hi Alfredo, just a small doubt at 1:11:30 (at the end of the factor graph part), when Yann mentioned that the algorithm is DP and runs in linear time. The way he explained it, it sounded more like Dijkstra's greedy search, which is O((V + E) log V). As far as I remember, DP-based shortest-path algorithms that explore the network exhaustively, like Bellman–Ford, have O(VE) time complexity. Please do correct me if I am wrong. I know this isn't of much concern here, but it bugged me a bit, so I wanted to clarify. Thank you.
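For reference, the linear-time claim is consistent with treating the lattice as a DAG whose nodes are already in topological (left-to-right) order: each edge then only needs to be relaxed once, with no priority queue as in Dijkstra's algorithm. Below is a minimal sketch of that DP in Python, assuming the edges are supplied in topological order of their source node; it is an illustration of the idea, not the lecture's code.

```python
import math

def trellis_shortest_path(num_nodes, edges, source=0):
    """Shortest path on a trellis / DAG.

    edges: list of (u, v, cost), assumed sorted so that the source node u is
    non-decreasing (i.e. the left-to-right, topological order of the lattice).
    Each edge is relaxed exactly once, so the total cost is O(V + E).
    """
    dist = [math.inf] * num_nodes
    dist[source] = 0.0
    for u, v, cost in edges:
        if dist[u] + cost < dist[v]:
            dist[v] = dist[u] + cost   # standard DP relaxation step
    return dist

# Tiny example: a 4-node chain with one shortcut edge.
print(trellis_shortest_path(4, [(0, 1, 1.0), (0, 2, 3.0), (1, 2, 1.0), (2, 3, 0.5)]))
```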
@666zhang666 · 2 years ago
In the GAN part: how do we know that Gen(z), when we train it to produce a ŷ with the lowest possible energy, will always produce a "wrong" sample (i.e., a sample whose energy we want to push up)? Couldn't it happen that it produces something correct?
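For context, here is a toy sketch of the GAN-as-EBM training loop the question refers to; the architectures and loss are assumptions for illustration, not the lecture's exact formulation. The generator's samples are always used as the "push up" term, and if the generator lands exactly on the data, the two contrastive terms cancel there, so those points simply stop contributing a gradient.

```python
import torch
import torch.nn as nn

E = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # energy E(y)
G = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))   # generator G(z) -> y_hat

opt_E = torch.optim.Adam(E.parameters(), lr=1e-3)
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)

for step in range(1000):
    y = 0.1 * torch.randn(32, 2) + torch.tensor([1.0, -1.0])  # toy "real" samples
    z = torch.randn(32, 8)
    y_hat = G(z)                                               # contrastive samples

    # Energy model: push DOWN energy of data y, push UP energy of generated y_hat.
    loss_E = E(y).mean() - E(y_hat.detach()).mean()
    opt_E.zero_grad(); loss_E.backward(); opt_E.step()

    # Generator: chase low-energy regions. If G(z) produces points identical to
    # the data, the two terms of loss_E cancel on them, so they are only treated
    # as "wrong" samples to the extent that they still differ from the data.
    loss_G = E(G(z)).mean()
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```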
@sebastianpinedaarango8239 · 3 years ago
Thanks for the video, I really like it. Unfortunately, the background sometimes makes the equations hard to read. I would suggest looking for another approach to increase the contrast between the font and the background.
@alfcnz · 3 years ago
Usually students follow along with the PDF version of the slides, so I thought it was not a big problem. But yeah, I've got your point. I've been constantly experimenting with new techniques; some work better than others.
@shrey-jasuja · 1 year ago
I have a doubt. In the previous video, Yann explained that when training EBMs with contrastive methods on joint embedding architectures, we take negative samples that are very different, so that the system learns better. But in graph transformer networks we took the best possible answer as the contrastive sample. So how does that work?
@alfcnz · 1 year ago
You need to add some time stamps or it’s going to be impossible for me to address your question.
@shrey-jasuja · 1 year ago
@alfcnz I am talking about the discussion between 1:39:00 and 1:43:00.
@-mwolf · 2 years ago
At 43:40, what does "averaging the weights over time" mean exactly?
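One common reading of that phrase is keeping a running (e.g., exponentially decaying) average of the parameter vector across optimization steps and using the averaged weights at evaluation time; whether that matches the lecture's exact intent at 43:40 is an assumption here. A minimal sketch:

```python
import copy
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
ema_model = copy.deepcopy(model)    # holds the time-averaged weights
decay = 0.999

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(1000):
    x = torch.randn(32, 10)
    y = x.sum(dim=1, keepdim=True)                       # toy regression target
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

    # After each update, move the averaged weights a small step toward the live ones.
    with torch.no_grad():
        for p_ema, p in zip(ema_model.parameters(), model.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1 - decay)

# At evaluation time one would use ema_model, whose weights are smoother over time.
```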