Open-Endedness and AI GAs in the Era of Foundation Models

782 views

Jeff Clune

A day ago

My talk at Oxford in June 2024, including new work on OMNI-EPIC.
Foundation models create exciting new opportunities in our longstanding quests to produce open-ended and AI-generating algorithms, meaning agents that can truly keep learning forever. In this talk I share some of our recent work harnessing the power of foundation models to make progress in these areas, including taking advantage of different forms of goal conditioning. I cover our recent research: (1) OMNI-EPIC: Open-endedness via Models of human Notions of Interestingness with Environments Programmed in Code, (2) Video Pre-Training (VPT), and (3) Thought Cloning: Learning to Think while Acting by Imitating Human Thinking.
MIT Talk Motivating and Explaining Open-endedness and AI-Generating Algorithms: • Improving Deep Reinfor...
OMNI: arxiv.org/abs/...
OMNI-EPIC: arxiv.org/abs/...
Thought Cloning: arxiv.org/abs/...
VPT: openai.com/res...
AI-Generating Algorithms: arxiv.org/abs/...
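For a concrete picture of the OMNI-EPIC loop described above, here is a minimal illustrative sketch, not the authors' implementation: a foundation model proposes the next task as environment code, a model of human notions of interestingness decides whether the proposal is "interestingly new" relative to an archive of past tasks, and an agent then attempts it. The functions `propose_task_code`, `is_interestingly_new`, and `train_agent_on` are hypothetical placeholders standing in for foundation-model calls and reinforcement-learning training.

```python
# Illustrative sketch of an OMNI-EPIC-style open-ended loop (hypothetical,
# not the paper's code). Foundation-model calls and RL training are stubbed out.
from dataclasses import dataclass


@dataclass
class Task:
    description: str      # natural-language description of the task
    env_code: str         # environment "programmed in code" by the foundation model
    solved: bool = False


def propose_task_code(archive):
    """Placeholder: ask a foundation model for a new task, conditioned on the archive."""
    idx = len(archive)
    return Task(description=f"task {idx}", env_code=f"# environment code for task {idx}")


def is_interestingly_new(task, archive):
    """Placeholder: a model of human notions of interestingness filters proposals."""
    return all(task.description != t.description for t in archive)


def train_agent_on(task):
    """Placeholder: train/evaluate an agent on the generated environment."""
    return True


def omni_epic_loop(num_iterations=5):
    archive: list[Task] = []                              # archive of generated tasks
    for _ in range(num_iterations):
        candidate = propose_task_code(archive)            # generate the next task as code
        if not is_interestingly_new(candidate, archive):  # skip boring or duplicate tasks
            continue
        candidate.solved = train_agent_on(candidate)      # attempt the task
        archive.append(candidate)                         # both successes and failures inform future proposals
    return archive


if __name__ == "__main__":
    for t in omni_epic_loop():
        print(t.description, "solved" if t.solved else "unsolved")
```

In the actual system the archive, the interestingness judgements, and the generated environments are far richer, but the core structure, propose, filter by interestingness, attempt, archive, is the idea this sketch tries to convey.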

Comments: 5
@juanilcom 3 months ago
What an amazing talk, Jeff. Coming from an AI background in Computer Vision, it's incredible how divergent the research paths and intuitions can end up being. This is an extremely exciting area. Thanks, kind regards from Argentina.
@warrenhenning8064 3 months ago
15:14 woah.
@Kram1032 3 months ago
yeah, POET is super fun
@Kram1032 3 months ago
The main thing I'm concerned about with OMNI is whether all forms of interestingness can a priori be put into language. You can certainly say of something you observe "oh, that's interesting", but the interestingness judgement itself doesn't technically involve language; language only comes in once you want to communicate that judgement to somebody else.

Language models, as they stand, aren't actually that great at being creative beyond the training set. They are very strong interpolators but not as strong extrapolators. Interestingness also relies quite a bit on memory, but these language models are memory-limited, unable to consider thoughts arbitrarily far back. They also tend to develop a sort of one-track mind, overvaluing events and getting stuck on specific themes, which is clearly counter to what you want here. While the quality of the output is often quite good, diversity is often kinda meh. As a result, I think that in their current form they *would* ultimately run into a plateau of tasks where things become quasi-static again, just as would happen with POET, even if the task space is genuinely far vaster.

I'd love to be proven wrong about that, though: very curious what OMNI-EPIC will do given enough time. Hopefully that sort of longer run is actually happening. Just let it do its thing for a year and evaluate whatever it comes up with by then or something.
@InquilineKea 3 months ago
"interestingly new" as pathology - lol who's the most interesting person you know?