The current alignment plan, and how we might improve it | Buck Shlegeris | EAG Bay Area 23

Centre for Effective Altruism

In this session, Buck discusses how he thinks we should try to align artificial general intelligence (AGI) if no further fundamental progress on alignment were made, and then talks about how alignment researchers should try to improve this plan and ensure that whatever plans are available are executed competently.
Buck is the CTO of Redwood Research, a Berkeley-based nonprofit that does technical alignment research. He spent most of the last year researching mechanistic interpretability and related alignment techniques. He previously worked at MIRI and was a fund manager for the EA Infrastructure Fund.
Find out more about EA Global conferences at: www.eaglobal.org
Learn more about effective altruism at: www.effectivealtruism.org