Robin Hanson debate prep: Liron argues *against* AI doom!

1,318 views

Doom Debates

1 day ago

Comments: 20
@masonlee9109 3 months ago
David Xu's closing statement at 1:27:00 is spot on. Great debate, guys. My new favorite podcast; thanks for doing this.
@Sawa137 3 months ago
Both of you did a great job! Even the shirt is a match, lol. One takeaway: nail him down on a point before pivoting to something else; the real Hanson is slippery too.
@human_shaped 1 month ago
I watched in reverse order, and this wasn't a terrible simulation. Surprisingly good, even.
@EvanderHammer 2 months ago
Great debate! Love the Ideological Turing Test experiment; I haven't seen this done often. Well done, Liron :) Let's grow the channel!
@DoomDebates 2 months ago
@@EvanderHammer thanks!
@goodleshoes 3 months ago
This is really difficult. Usually the person arguing from the non-doom side is more arrogant and less informed about the doom position. In this roleplay it's like the anti-doomer has a repository of all the pro-doom arguments and has compiled all the counters ahead of time. Very interesting.
@DoomDebates 3 months ago
Thanks. Yeah, a few smart non-doom people do exist, and one of them is Robin Hanson. The arguments I made are all his arguments (as best I understand them), not e.g. those of another prominent anti-doomer like Yann LeCun.
@JS-kr7zy 3 months ago
What was said about how general AI needs many different parts reminds me of how reinforcement learning can reach a dead end: it takes a wrong turn and reinforces that approach until it's incapable of finding another branch with better results. Wouldn't it be fair to say that massive improvements in LLMs are one branch that won't necessarily lead to the other branches required to realize general AI?
@DoomDebates 3 months ago
@@JS-kr7zy Sure, one could claim that. Many experts do, and Robin probably agrees. I think it's a big step toward having all the pieces we need for AGI; we just need another small step or two or three, and then we'll get there.
@angloland4539 2 months ago
@tylermoore4429 3 months ago
"Values" is a vague category (everyone can put in it whatever they choose), but if we assume that survival is the minimal, non-negotiable value any agent must pursue, then the AGI's ability to survive in conditions far harsher than ours, and therefore its ability to pursue goals far wider than ours, inexorably puts it at odds, in the short or long haul, with our complex and fragile human environment, the Earth and its ecosystems, just as human economic/industrial activity today wipes out entire ecosystems and species while we barely notice. An AGI-friendly environment might be an all-nitrogen atmosphere, for example, triggering a Great De-Oxygenation Event. This value matters more than how my human descendants vote on trans issues or whatever.
@masonlee9109 3 months ago
What, you're not ready to be the boot loader for a grabby alien civilization?! 70 degrees and sunny is not going to scale! /s
@OnionKnight541 3 months ago
Dang. The first question that Liron/Robin asked was great, and the guy just ignored it. This illustrates a problem that many well-read people have (and it's the same brittleness/overfitting that LLMs have): they are unable to think for themselves or engage in any conversation that involves exploring topics or ambiguity. The question was: what will the future of humanity look like without [super powerful] AI? And the guy just froze, questioned the question ("where are you trying to lead the conversation?"), and then deflected it completely and asked Liron/Robin instead. This makes me very sad. The guy could have answered that... should have answered that... it was a great question. He didn't even recognize the coolness of the question or its implications... :/
@DoomDebates 3 months ago
I asked the question that Robin asked everyone in his series of 2023 convos (which you can find on YouTube), and I feel that question leads to a dead end, so I actually told David before the debate that my strategy is to avoid the question, as he did. I also avoided that question the same way with the real Robin.
@Jack-ii4fi 3 months ago
I was curious whether you'd ever consider creating a Discord server for this channel, or just a community in general (if you don't already have one), so that people could discuss these ideas there? I know Discord seems like it's purely for gamers, but as someone working on deep learning projects/research, I know it's possible to create an academic/research-oriented Discord community, because I'm in a lot of successful AI/philosophy/3D art/graphics servers and they tend to be relatively professional.
@DoomDebates 3 months ago
I recommend joining the PauseAI.info Discord; it's a great community I'm part of, and Doom Debates' mission is aligned with it. A separate community for Doom Debates realtime chat seems like overkill to me. Maybe when Doom Debates gets more popular there can be a subreddit.
@adamalexander8386 3 months ago
It's an interesting thing you're doing here. Taking this debate itself as an object of study, I feel what you've demonstrated is that these two sides are all too capable of talking past each other, especially about a realm of beliefs we might call "how things are likely to go right now, based on how I believe human behavior in the short term is best predicted." It seems like what's needed is a way to move the debate entirely out of this space. I don't think I know that move, but I wonder if it's something like: we agree that ASI gen n in a million years has goals we probably don't like. How could we figure out now how close we are to a system with those goals we don't like? Get people even kinda half-interestedly thinking about how to predict foom, and maybe you're on the right track? It just seems to me that if the debate shifted to people honestly trying to figure out how close we are to a thing, rather than reacting to their sense that the thing's alleged proximity comes out of what sounds to them like a juvenile prediction system, they'd be able to study the current situation with a lot less dissonance.
@human_shaped 1 month ago
How would you critique the accuracy of your own emulation?
@DoomDebates 1 month ago
I feel like I mostly nailed it overall :) In the real debate, Robin's emphasis on the importance of "cultural copying" was a bit different from how I played him as de-emphasizing intelligence. I also didn't exactly predict his framing of "quality of monitoring" as the key factor that's going to let humans avoid AI extinction in a way that other species didn't / won't be able to avoid extinction from humans in evolutionary time.