My talk at Oxford in June 2024, including new work on OMNI-EPIC
Foundation models create exciting new opportunities in our longstanding quest to produce open-ended and AI-generating algorithms, that is, agents that can truly keep learning forever. In this talk I share some of our recent work harnessing the power of foundation models to make progress in these areas, including leveraging different forms of goal conditioning. I cover our recent research: (1) OMNI-EPIC: Open-endedness via Models of human Notions of Interestingness with Environments Programmed in Code, (2) Video Pre-Training (VPT), and (3) Thought Cloning: Learning to Think while Acting by Imitating Human Thinking.
MIT Talk Motivating and Explaining Open-endedness and AI-Generating Algorithms: • Improving Deep Reinfor...
OMNI: arxiv.org/abs/...
OMNI-EPIC: arxiv.org/abs/...
Thought Cloning: arxiv.org/abs/...
VPT: openai.com/res...
AI-Generating Algorithms: arxiv.org/abs/...