David Xu's closing statement at 1:27:00 is spot on. Great debate guys. My new favorite podcast; thanks for doing this.
@Sawa137 · 3 months ago
Both of you guys did a great job! Even the shirt is a match lol. One takeaway: nail him down on a point before pivoting to something else; the real Hanson is slippery too.
@human_shaped · 1 month ago
I watched in reverse order, and this wasn't a terrible simulation. Surprisingly good even.
@EvanderHammer · 2 months ago
Great debate! Love the Ideological Turing Test experiment, haven't seen this often. Well done, Liron :) Let's grow the channel!
@DoomDebates · 2 months ago
@@EvanderHammer thanks!
@goodleshoes · 3 months ago
This is really difficult. Usually the person arguing from the non-doom side is more arrogant and less informed about the doom position. In this roleplay it's like the anti-doomer has the repository of all pro-doom arguments and has compiled all the counters ahead of time. Very interesting.
@DoomDebates · 3 months ago
Thanks. Yeah a few smart non-doom people do exist, and one of them is Robin Hanson. The arguments I made are all his arguments (as best I understand them), not e.g. those of another prominent anti-doomer like Yann LeCun.
@JS-kr7zy · 3 months ago
What was said about how general AI needs many different parts reminds me of how reinforcement learning can reach a dead end: it takes a wrong turn and reinforces that approach until it's incapable of finding another branch with better results. Wouldn't it be fair to say that massive improvements in LLMs are one branch that won't necessarily lead to the other branches required to realize general AI?
@DoomDebates · 3 months ago
@@JS-kr7zy Sure, one could claim that. Many experts do, and Robin probably agrees. I think it's a big step toward having all the pieces we need for AGI; we just need another small step or two or three and then we'll get there.
@angloland4539 · 2 months ago
❤
@tylermoore4429 · 3 months ago
"Values" is a vague category (everyone can put in it whatever they choose). But if we assume that survival is the minimal, non-negotiable value that any agent must pursue, then the AGI's ability to survive in conditions far harsher than ours, and therefore its ability to pursue goals far wider than ours, inexorably puts it at odds, in the short or long haul, with our complex and fragile human environment, the Earth and its ecosystems — just as human economic/industrial activity today wipes out entire ecosystems and species while we barely notice. An AGI-friendly environment may be an all-nitrogen atmosphere, for example, triggering a Great De-Oxygenation Event. This value matters more than how my human descendants vote on trans issues or whatever.
@masonlee9109 · 3 months ago
What, you're not ready to be the boot loader for a grabby alien civilization?! 70 degrees and sunny is not going to scale! /s
@OnionKnight541 · 3 months ago
dang. the first question that Liron / Robin asked was great, and the guy just ignored it. this illustrates a problem that many well-read people have (and it's the same brittleness / overfitting that LLMs have), which is: they are unable to think for themselves or engage in any conversation where there is exploration of topics or ambiguity. the question was: what will the future of humanity look like without [super powerful] AI? and the guy just froze, questioned the question ("where are you trying to lead the conversation?"), and then deflected it completely and asked Liron / Robin. this makes me very sad. the guy could have answered that... should have answered that... it was a great question. he didn't even recognize the coolness of the question / its implications... :/
@DoomDebates · 3 months ago
I asked the question that Robin asked everyone in his series of 2023 convos (which you can find on YouTube). I feel that question leads to a dead end, so I actually told David before the debate that my strategy was to avoid it, as he did, and I also avoided that question the same way with the real Robin.
@Jack-ii4fi · 3 months ago
I was curious whether you'd ever consider creating a Discord server for this channel, or just a community in general (if you don't already have one), so that people could discuss these ideas there? I know Discord seems like it's purely for gamers, but as someone working on deep learning projects/research, I know it's possible to create an academic/research-oriented Discord community — I'm in a lot of successful AI/philosophy/3d art/graphics servers and they tend to be relatively professional.
@DoomDebates · 3 months ago
I recommend joining the PauseAI.info Discord — it's a great community I'm part of, and Doom Debates' mission is aligned with it. A separate community for Doom Debates realtime chat seems like overkill to me. Maybe when Doom Debates gets more popular there can be a subreddit.
@adamalexander8386 · 3 months ago
It's an interesting thing you're doing here. Taking this debate itself as a kind of object of study, I feel like what you've demonstrated is that these two sides are all too capable of talking past each other, especially about a realm of beliefs we might call "how things are likely to go right now, given how I believe human behavior in the short term is best predicted." Seems like what's needed is a way to move the debate entirely out of this space. I don't think I know that move, but I wonder if it's something like: we agree that ASI gen n in a million years has goals we probably don't like. How could we figure out now how close we are to a system with those goals we don't like? Get people even kinda half-interestedly thinking about how to predict foom and maybe you're on the right track? It just seems to me that if the debate shifted to people honestly trying to figure out how close we were to a thing, rather than reacting to their sense that that thing's alleged proximity is coming out of what sounds to them like a juvenile prediction system, they'd be able to study the current situation with a lot less dissonance.
@human_shaped · 1 month ago
How would you critique the accuracy of your own emulation?
@DoomDebates · 1 month ago
I feel like I mostly nailed it overall :) In the real debate, Robin's emphasis on the importance of "cultural copying" was a bit different from how I played him as de-emphasizing intelligence. I also didn't exactly predict his framing of "quality of monitoring" as the key factor that's going to let humans avoid AI extinction in a way that other species didn't / won't be able to avoid extinction from humans in evolutionary time.