OUTLINE:
0:00 - Intro
0:55 - Paper Overview
4:30 - Aren't you just adding extra data?
9:35 - Why are you splitting up the AMIGo teacher?
13:10 - How do you train the grounding network?
16:05 - What about causally structured environments?
17:30 - Highlights of the experimental results
20:40 - Why is there so much variance?
22:55 - How much does it matter that we are testing in a video game?
27:00 - How does novelty interface with the goal specification?
30:20 - The fundamental problems of exploration
32:15 - Are these algorithms subject to catastrophic forgetting?
34:45 - What current models could bring language to other environments?
40:30 - What does it take in terms of hardware?
43:00 - What problems did you encounter during the project?
46:40 - Where do we go from here?

Paper: arxiv.org/abs/2202.08938
@drtristanbehrens 2 years ago
This is a fantastic interview! Very inspiring and insightful. Thanks for sharing!
@urfinjus378 2 years ago
Great! A very thoughtful author, a pleasure to listen to. Yannic, I would auto-like your videos if YouTube had that option. I appreciate your efforts and skill in sharing knowledge and ideas. On the paper: if it were obvious that we can get better policies by adding text, then we could just as well say it is obvious we can get that extra data by using big image-captioning models.
@oncedidactic 2 years ago
Wow, great interview again. Nice questions as always, Yannic, and I'm super impressed with the author. He acquitted himself very well on the questions raised in the paper review, and far beyond, into deeper and future questions. It was really interesting to see how they re-aimed the paper to address a more abstract research question while still benefiting from their earlier work on the specific algorithm implementation. This is a fantastic exemplar: how great would it be if even half of the "neat ideas / benchmark chasing" research could be transmogrified into an increment in the "basic research" space? Low-hanging fruit that tastes good *and* is good for you, lol.