What's the difference between a large language model and the human brain? And what's wrong with our theories of agency? In this episode, I chat about these questions with Jan Kulveit, who leads the Alignment of Complex Systems research group.
Patreon: / axrpodcast
Ko-fi: ko-fi.com/axrp...
The transcript: axrp.net/episode/2024/05/30/episode-32-understanding-agency-jan-kulveit.html
Topics we discuss, and timestamps:
0:00:47 - What is active inference?
0:15:14 - Preferences in active inference
0:31:33 - Action vs perception in active inference
0:46:07 - Feedback loops
1:01:32 - Active inference vs LLMs
1:12:04 - Hierarchical agency
1:58:28 - The Alignment of Complex Systems group
Website of the Alignment of Complex Systems group (ACS): acsresearch.org
ACS on X/Twitter: x.com/acsresearchorg
Jan on LessWrong: lesswrong.com/users/jan-kulveit
Predictive Minds: Large Language Models as Atypical Active Inference Agents: arxiv.org/abs/2311.10215
Other works we discuss:
Active Inference: The Free Energy Principle in Mind, Brain, and Behavior: / 58275959
Book Review: Surfing Uncertainty: slatestarcodex...
The self-unalignment problem: www.lesswrong....
Mitigating generative agent social dilemmas (aka language models writing contracts for Minecraft): social-dilemma...