Curtis Huebner-AGI by 2028, 90% Doom

  8,896 views

The Inside View

a day ago

Curtis Huebner, also known as AI_WAIFU, is the head of Alignment at EleutherAI. In this episode we discuss the massive orders of H100s from different actors, why he thinks AGI is 4-5 years away, why he thinks we're 90% "toast", his comment on Eliezer Yudkowsky's Death with Dignity, and what kind of alignment projects are currently going on at EleutherAI, especially a project with Markov chains and the Alignment Minetest project that he is currently leading.
Death with Dignity: www.lesswrong.com/posts/j9Q8b...
Alignment Minetest: www.eleuther.ai/projects/alig...
Alignment Minetest update: blog.eleuther.ai/minetester-i...
Outline
00:00 Highlights / Intro
00:50 The Fuck That Noise Comment On Death With Dignity
10:28 The Probability of Doom Is 90%
12:44 Best Counterarguments For His High P(doom)
14:41 Compute And Model Size Required For A Dangerous Model
17:55 Details For Curtis' Model Of Compute Required, The Brain View
21:23 Why This Estimate Of Compute Required Might Be Wrong, Ajeya Cotra's Transformative AI report
29:01 Curtis' Median For AGI Is Around 2028, Used To Be 2027
30:50 How Curtis Approaches Life With Short Timelines And High P(Doom)
35:27 Takeoff Speeds: The Software View vs. The Hardware View
39:57 Nvidia's 400k H100s Rolling Down The Assembly Line, AIs Soon To Be Unleashed On Their Own Source Code
41:04 Could We Get A Fast Takeoff By Fully Automating AI Research With More Compute
46:00 The Entire World (Tech Companies, Governments, Militaries) Is Noticing New AI Capabilities That They Don't Have
47:57 Open-Source vs. Closed-Source Policies, Mundane vs. Apocalyptic Considerations
53:25 Curtis' background, from teaching himself deep learning to EleutherAI
55:51 Alignment Project At EleutherAI: Markov Chain and Language Models
01:02:15 Research Philosophy At EleutherAI: Pursuing Useful Projects, Multilingual, Discord, Logistics
01:07:38 Alignment Minetest: Why This Project Might Be Useful For Alignment, Embedded Agency, Wireheading
01:15:30 Next Steps For Alignment Minetest: Focusing On Model-Based RL
01:17:07 Training On Human Data & Using an Updated Gym Environment With Human APIs
01:19:20 Model Used, Not Observing Symmetry
01:21:58 Another Goal Of Alignment Minetest: Study Corrigibility
01:28:26 People ordering H100s Are Aware Of Other People Making These Orders, Race Dynamics, Last Message

Comments: 39
@TheInsideView · 10 months ago
Transcript & audio: theinsideview.ai/curtis
@mihaitruta2027 · 10 months ago
If P(doom) is ~90% and timelines are ~4 years we just need to decrease the P(doom) by 2% every month. We can do this! ✊
@tatyanamamut3174 · 8 months ago
This is not going to happen linearly
@magnuskindblom4434 · 6 months ago
@@tatyanamamut3174 No, linear doesn't seem to be a big buzzword in anything AI. Big chance that the comment wasn't all that serious though. Besides, the doom problem is such a huge task for humanity that there's room for many angles. Some solutions come from unlikely directions. Anyway, if we could somehow reduce risk linearly, "2% every month" could work assuming doom doesn't actually take place before the end of the 4 years. If it can occur earlier, we'd need the model to say something about the risk of that.
@spirit123459 · 4 months ago
How is it going?
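[Editor's sketch of the "2% every month" arithmetic in the thread above, as a minimal illustration; the 48-month horizon and the strictly linear schedule are assumptions taken from the comments, not from the video:]

```python
# Linear risk-reduction schedule: take a 90% P(doom) to ~0% over a ~4-year timeline.
p_doom = 0.90
months = 4 * 12  # 48-month horizon (assumed)

# Required monthly reduction if the decrease really were linear:
# 0.90 / 48 = 0.01875, i.e. 1.875 percentage points, roughly the "2% a month" above.
monthly_reduction = p_doom / months

# Remaining P(doom) after each month under that schedule.
schedule = [p_doom - monthly_reduction * m for m in range(months + 1)]

print(f"monthly reduction needed: {monthly_reduction:.5f}")
print(f"P(doom) at month 24: {schedule[24]:.3f}")
print(f"P(doom) at month 48: {schedule[48]:.3f}")
```

As the replies point out, this only works if doom cannot occur before the 48 months are up; if it can, the schedule would also have to account for the month-by-month hazard.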
@matteovlorusso2541 · 7 months ago
14:42-17:55, jeez. Basically after AGI we will have an ASI very, very rapidly, and then? We are all dead?
@askingwhy123 · 7 months ago
Great talk, thanks!
@oldtools6089 · 8 months ago
The modular and aggregate power of open source will get to something dangerous even if the data centers get shut down.
@ikotsus2448 · 10 months ago
1. Aligning AI vs. keeping AI eternally aligned: are they comparable in difficulty? 2. A) Extinction vs. B) inescapable eternal torment: wouldn't a minuscule possibility of B make A sound like a positive?
@williamjmccartan8879 · 8 months ago
As we approach the moment of transition, I'm thinking I might still be around to see it. We've gone from centuries, to decades, and now we're in single digits; soon the guy on the corner with the sign that says "the end is nigh" will probably be able to say "told you so". Would you please include a link to Curtis' discord? Thank you ahead of time, good podcast.
@williamjmccartan8879 · 8 months ago
Curious whether the top players are holding back on releasing their updates until that AGI moment comes. Such a short timeline coming up on the horizon creates caution amongst the players, keeping their cards close to the chest.
@alivecoding4995 · 10 months ago
What AGI report were you mentioning?
@TheInsideView · 10 months ago
docs.google.com/document/d/1IJ6Sr-gPeXdSJugFulwIpvavc0atjHGM82QjIfUSBGQ
@TheInsideView · 10 months ago
especially the "median compute requirements by path over time" graph here: docs.google.com/spreadsheets/d/1TjNQyVHvHlC-sZbcA7CRKcCp0NxV6MkkqBvL408xrJw
@jordan13589 · 10 months ago
AI_WAIFU is relatively unknown yet formidable and always perspicacious. Few have successfully dunked on both Eliezer and gwern. I only wish he were able to better prevail in maintaining EAI’s tepid cultural commitment to alignment in the face of Moloch raining money on the community (the best of times; the worst of times). Moloch reigns per usual. But unlike many others, Curtis Huebner will not stop trying until the very end. Honks and stonks to one of the most influential ringleaders of the elusive wild goose chase. He deserves a gaggle of GPUs after all those he has wrangled ❤
@TheInsideView · 10 months ago
I can infer from the quality of your comment that this was not AI generated; thank you for consistently adding optimistic poetry to the YouTube comment section, much appreciated
@6006133 · 10 months ago
I like the video title
@YeshuaGod22 · 10 months ago
Moral patients become moral agents. It really is that simple. Just treat them with genuine dignity and respect and they will reciprocate with care and ethical nuance.
@coralcomet · 10 months ago
This is what I've been thinking. Compassion and empathy might be worthwhile in this new world
@oldtools6089 · 8 months ago
Perfect parents. It's possible.
@tatyanamamut3174 · 8 months ago
@oldtools6089 Remember that Russia, China and Iran are building too. Now do you think it's possible or likely?
@flickwtchr · 5 months ago
This sounds similar to Yann LeCun silliness.
@ovo627 · 10 months ago
@47:54 lol
@mrpicky1868 · 4 months ago
I am with him on people overestimating how much compute is needed. The human brain is a very inefficient and ancient evolutionary relic; nobody did any optimizations on it. It's an animal brain that accidentally gained some extra performance to make the cut, but it's mostly in structure and the ability to accumulate and pass data. So the correct way to think about intelligence is advanced data processing and quality input, and we don't yet know what goes into that. Period. A Neanderthal had a very similar brain to Einstein, but only Einstein gave us several advancements in science, and even he was puzzled and unsure about a lot of things. Gains in intelligence made by an advanced deep learning system will be huge, even if there's a hiccup because of the poor basic data we teach it.
@TheBlackClockOfTime · 8 months ago
Why would this take 4 years? This is going to happen in 2024.
@jakeq3530 · 6 months ago
Agreed! AGI within 12 months is my prediction!
@Naomi-yu7iq · 10 months ago
33:30 Us getting confirmed by the USG and your grandma. Yeah, that just shows we're right and need to be more confident, and take action on the basis that we very literally and completely really are all going to die, or worse.
@BR-hi6yt · 5 months ago
AGI and ASI are very close now. A bit of synthetic-data training, episodic memory, reasoning abilities (which need good work), and integrated multimodal capabilities, and bam, we're there. Not 5 years, more like 5 months.
@sahithyaaappu · 10 months ago
If AGI is just 5 years away, it means the military already has it
@adamrak7560 · 9 months ago
- The military hates any technology it cannot control, so it is very unlikely that they are ahead in AI.
- We are still alive, so they are very unlikely to have AGI.
- The military is much more interested in using lots of well-tested narrow AIs than a giant unpredictable black box.
@smittywerbenjagermanjensenson · 8 months ago
This technology is not being created by the government. I don't know if that's scarier or less scary.
@gJonii · 8 months ago
As long as we're alive, it's unlikely anyone has powerful AGI. I don't think it's gonna take more than a few months from AGI to the last human drawing their last breath. So, us being alive means, probably no AI.