EI Seminar - Jacob Andreas - Good Old-fashioned LLMs (or, Autoformalizing the World)

1,191 views

MIT Embodied Intelligence
1 day ago

Title: Good Old-fashioned LLMs (or, Autoformalizing the World)
Abstract:
Classical formal approaches to artificial intelligence, based on manipulation of symbolic structures, have a number of appealing properties---they generalize (and fail) in predictable ways, provide interpretable traces of behavior, and can be formally verified or manually audited for correctness. Why are they so rarely used in the modern era? One of the major challenges in the development of symbolic AI systems is what McCarthy called the "frame problem": the impossibility of enumerating a set of symbolic rules that fully characterizes the behavior of every system in every circumstance. Modern deep learning approaches avoid this representational challenge, but at the cost of interpretability, robustness, and sample-efficiency. How do we build learning systems that are as flexible as neural models but as understandable and generalizable as symbolic ones? In this talk, I'll describe a recent line of work aimed at automatically building "just-in-time" formal models tailored to be just expressive enough to solve tasks of interest. In this approach, neural sequence models pre-trained on text and code are used to place priors over symbolic model descriptions, which are then verified and refined interactively---yielding symbolic graphics libraries that can be used to solve image understanding problems, or symbolic planning representations for sequential decision-making. Here natural language turns out to play a central role as an intermediate representation linking neural and symbolic computation, and I'll conclude with some very recent work on using symbolic reasoning to improve the coherence and factual accuracy of language models themselves.
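The propose-verify-refine loop the abstract describes can be illustrated with a toy sketch. Everything here is hypothetical scaffolding, not an API from the talk: a real system would sample candidate symbolic model descriptions from an LLM, whereas `CANDIDATES` below simply stubs that output in prior-probability order.

```python
# Toy sketch of "just-in-time" formal modeling: a proposer suggests
# candidate symbolic rules, and a verifier accepts the first one that
# reproduces all observed behavior. All names are illustrative.

CANDIDATES = [
    ("add_one", lambda x: x + 1),  # plausible but wrong symbolic rule
    ("double", lambda x: 2 * x),   # rule consistent with all observations
]

OBSERVATIONS = [(1, 2), (3, 6), (5, 10)]  # (input, output) pairs


def verify(rule):
    """Return True iff the symbolic rule reproduces every observation."""
    return all(rule(x) == y for x, y in OBSERVATIONS)


def autoformalize():
    """Accept the first proposed model that passes verification."""
    for name, rule in CANDIDATES:
        if verify(rule):
            return name
    return None  # no candidate survived; a real system would re-prompt


print(autoformalize())  # -> double
```

The key property the abstract emphasizes survives even in this stub: the accepted model is a symbolic object that can be inspected, audited, and reused, regardless of how the proposals were generated.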
Bio:
Jacob Andreas is an associate professor at MIT in the Department of Electrical Engineering and Computer Science as well as the Computer Science and Artificial Intelligence Laboratory. His research aims to build intelligent systems that can communicate effectively using language and learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill scholar) and his B.S. from Columbia. He has been named a Kavli Fellow by the National Academy of Sciences, and has received the NSF CAREER award, MIT's Junior Bose and Kolokotrones teaching awards, and paper awards at ACL, ICML and NAACL.

Comments: 1
@Shintuku (7 months ago)
Does this presentation correspond to some paper? It would be nice to have access to the slides/citations, very interesting stuff