In this session, Buck discusses how he thinks we should try to align artificial general intelligence (AGI) if no further fundamental progress on alignment is made, and then how alignment researchers should try to improve this plan and ensure that whatever plans are available are executed competently.
Buck is the CTO of Redwood Research, a nonprofit based in Berkeley that does technical alignment research. He has spent most of the past year researching mechanistic interpretability and related alignment techniques. He previously worked at MIRI and was a fund manager for the EA Infrastructure Fund.
Find out more about EA Global conferences at: www.eaglobal.org
Learn more about effective altruism at: www.effectivealtruism.org