Katja Grace on the Largest Survey of AI Researchers

1,051 views

Future of Life Institute

2 months ago

Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI from either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at aiimpacts.org/.
Timestamps:
0:20 AI Impacts surveys
18:11 What AI will look like in 20 years
22:43 Experts’ extinction risk predictions
29:35 Opinions on slowing down AI development
31:25 AI “arms races”
34:00 AI risk areas with the most agreement
40:41 Do “high hopes and dire concerns” go hand-in-hand?
42:00 Intelligence explosions
45:37 Discontinuous progress
49:43 Impacts of AI crossing the human-level intelligence threshold
59:39 What does AI learn from human culture?
1:02:59 AI scaling
1:05:04 What should we do?

Comments: 6
@martinnielsen2498 · 1 month ago
Fantastic episode! 👏 Katja Grace’s insights on the potential risks and opportunities of AI are incredibly valuable. Her ability to articulate complex ideas clearly and accessibly makes this a must-watch for anyone interested in the future of technology. Thank you for highlighting such important perspectives and for a thorough discussion that truly illuminates the critical aspects of AI development. Greetings from Sweden 🇸🇪👋
@PauseAI · 2 months ago
The AI Impacts surveys are perhaps the most useful studies that exist for convincing politicians that they need to act urgently. Thank you for that, Katja! Some of the stats from the latest one that we use all the time:
- 86% believe the control problem is real and important
- Average p(doom) is 14 to 18% (depending on how you phrase the question)
As Katja says around 25:20, most people would be shocked by these numbers. It's beyond insanity that we're still allowing these companies to risk all our lives by building increasingly large digital brains.
@blahblahsaurus2458 · 2 months ago
To me, one of the most obvious factors that make AI dangerous is that it can be copied infinitely to run on more hardware. Towards the end of the interview (52:00), you describe a scenario in which your productivity is 10% that of an AI, but say that still doesn't necessarily mean you can't contribute to the economy (at a much reduced salary). Yes, it does mean that. It's an artificial, external constraint to say that your boss can't use as much AI as they want. Maybe they will get price-gouged by the provider of the AI. But since an AI is much cheaper to run than a human (who, beyond the energy requirements of the brain, still needs housing, transportation, healthcare, 16 hours off a day, two days off a week, etc.), it is likely that your boss and the provider will eventually be able to agree on a price slightly below your salary. And even ignoring that: if your boss can't get as much AI as they want, the provider DOES have as much AI as THEY want. If your boss is in such a bad position that they must rely on human labor for the indefinite future, they won't be in business for long.
@nowithinkyouknowyourewrong8675 · 2 months ago
Experts might be our best predictors, but our best might still be junk. Historically, how good have experts been at predicting new things?
@dancingdog2790 · 2 months ago
So much nervous laughter 😞