George Hotz vs Eliezer Yudkowsky AI Safety Debate

210,447 views

Dwarkesh Patel

A day ago

Comments: 1,700
@geohotarchive · a year ago
Great debate, can't wait to see round two.
@Gome.o · a year ago
George showed tremendous adaptability for thinking on the fly. Agreed, round 2 is gonna be 🔥
@Cracktune · a year ago
fantastic stuff.
@TranshumanVideos · a year ago
Hotz won due to his adaptability and logical arguments, despite the constant interruptions while he was making his points
@omarnomad · a year ago
How can you construct a moon and ensure it remains in orbit?
@thekinoreview1515 · a year ago
you rule, btw, @geohotarchive. i've watched many hours of george's stuff on your channel and always appreciate the insanely detailed timestamps.
@psi_yutaka · a year ago
My biggest problem with George is that he doesn't have a coherent and self-consistent position in this AI safety debate, beyond his ideology that he wants open-source ASI so that he himself can use one to build a spaceship and flee. Watch his debates with Connor Leahy and Liron Shapira. He said a lot of things that literally contradicted himself across different debates and switched positions according to who he was debating. E.g. during the debate with Connor he firmly believed AI alignment, in the sense of controlling superintelligences, is impossible. Yet here he stated that timing is critical because if it foomed slower we would solve AI alignment for sure. This gave me the impression that he is more trying to advocate his ideology than trying to advance understanding and seek truth, and just throws out whatever might help with that. All three of his opponents are highly self-consistent no matter who they are debating. At this point I really don't get why people still take George that seriously in the context of AI x-risk debates. Eliezer and Liron just tear through every clever little thing he pulls out, and he then immediately switches topic and escapes. And he doesn't have a coherent belief himself.
@letMeSayThatInIrish · a year ago
I wish I could give this one million likes.
@RandomYTubeuser · a month ago
In this debate he mentioned that he thinks alignment is impossible
@subaidedwarder286 · a month ago
Basically, he argues like most women. Circles the main points without addressing the facts. Throws out inane rebuttals just to see what sticks... This guy is not a genuine intellectual, just an above-average debater with very little criticality.
@esterhammerfic · 24 days ago
@@subaidedwarder286 you can talk about these things without making bitter generalizations about "most women"
@subaidedwarder286 · 24 days ago
@esterhammerfic I can? Omg, I wish I would have known this before I made my patriarchal lowbrow statement... I would like to subscribe to your newsletter.
@JohnLewis-old · a year ago
We need more of this. We need so much more of this. Two passionate people with different viewpoints on a topic that will likely affect all of us is exactly where I want to be. Thanks to everyone involved.
@cyprusglare333 · a year ago
Yes, but will they agree to a cage fight?
@brianrom9993 · a year ago
Agreed, we need more of George Hotz absolutely bodying sci-fi nerds
@ondrejplachy297 · a year ago
We need less Hotz and more Yudkowsky, that's what we need.
@wasdwasdedsf · a year ago
@@brianrom9993 george is unbearable, pure soy
@brianbagnall3029 · a year ago
It was alright. They spent so much time dancing around the main issues because George had a list of esoteric concepts he kept throwing at Eliezer, hoping Eliezer was not familiar with them. There were literally at least 10 of those concepts where he was just hoping for unfamiliarity, but Eliezer is quite knowledgeable. Unfortunately that grandstanding took away from digging into good debate ideas or resolutions.
@jooptablet1727 · a year ago
When I was a kid in the 90's the most stimulating things available were books and National Geographic Channel. I am so grateful to be alive in a time now where I have access to debates like these at all, let alone on demand. What a time to be alive!
@runvnc208 · a year ago
I'm glad too and this was a great discussion, but I mean, National Geographic had some good stuff, and if you are trying to suggest that ALL books are less stimulating than this debate, then you should find better books.
@Kosmo999 · a year ago
Dude i know, we grew up in complete information poverty. I used to pull apart album covers and read EVERYTHING because i was soo bored. I genuinely would look through junk mail purely because SOMETHING might be interesting. What a time to be alive indeed 🎉
@p0gue23 · a year ago
Yeah, thank god we don't read books anymore. So unstimulating.
@DaRza17 · a year ago
So True.
@cuerex8580 · 5 months ago
Brought to you by the Watch AI Algorithm, presented by Google!
@RazorbackPT · a year ago
This was awesome, but I never leave these debates satisfied. Someone organize a 4-hour one.
@trentfowler6239 · a year ago
I volunteer.
@andrewdunbar828 · a year ago
I'd prefer a bunch of 1.5 to 2 hour ones with a few weeks for reflection between them.
@Dsksea · a year ago
Eh. It seems like Hotz and especially Yudkowsky try too hard to prove that they're "smart". Especially in the beginning when Yudkowsky was bragging that he got higher test scores than his father. I wonder if this type of behavior comes from a place of insecurity or low self-esteem.
@jackielikesgme9228 · a year ago
Yes! I could watch this all day. Bring in Leahy, Tegmark... idk, trying to think of some not-super-doomers who also aren't the "what risk?" idiots Connor was debating before George. I am so far down this rabbit hole and need it to keep going lol
@Teo-uw7mh · a year ago
@@Dsksea test scores? check your brain
@publicshared1780 · a year ago
I really like both these gentlemen but damn I got a new respect for how Eliezer handled some of the more deriding questions with equanimity. Bravo to both and thanks for this debate.
@FloppsEB · 9 months ago
this is not a debate, this is one person asking leading questions in the most condescending voice possible to another person tolerant enough to try to answer them
@cuerex8580 · 5 months ago
That's how you sell Doom products I guess 😅😅😅
@therainman7777 · 4 months ago
Yep
@therainman7777 · 4 months ago
@@cuerex8580 George is the one OP was calling condescending. Which he was.
@canobenitez · 3 months ago
@@therainman7777 I sensed the same when he talked with Fridman. The man is a genius but a bit of a prick.
@zarifahmad4272 · 3 months ago
@@canobenitez He's not a genius, he's asking stupid questions.
@darwinschuppan8624 · a year ago
I literally remember hearing both of them individually on Lex Fridman and thinking how cool it would be if they had a conversation together. This is incredible!
@WilliamKiely · a year ago
I read all of the (170) comments while listening to the last 20 minutes of the video. Some thoughts:
- I didn't get much value out of this discussion.
- I agree with the other commenters who said this discussion didn't seem like a "debate".
- More structure and moderation from Dwarkesh would have helped. George kept jumping around to different points, and Eliezer seemed content to just address what George said a lot of the time instead of steering the conversation back to identifying the source of disagreement.
@PrabodhGyawali · 3 months ago
They went off on tangents in order to make arguments to resolve those disagreements. That just led to more tangents to resolve the new disagreements, and they only sometimes got back to the core points of disagreement.
@Amos20 · a year ago
This would be a lot easier to digest if Hotz didn't treat every other response like a gotcha moment when he's actually just asking a question.
@EvilXHunter123 · a year ago
This was basically George going "hey but what about this?? What about this??" and EY just slowly and systematically refuting each point; once GH gets out of his depth on one point he just moves on. Very frustrating to not be able to pin him down.
@misterlad · a year ago
Totally agree. Very accurate summary of this "debate". Hotz comes off poorly.
@rarted · 11 months ago
@@misterlad Hotz was being brought up to speed without knowing it
@HikarusVibrator · 11 months ago
Not really accurate, no. He's continuously pointing out that you can't point to the sky for everything and say "no matter how high we've gone it will go infinitely higher, quicker than you can imagine". He's quite obviously making the point that there's no proof of any kind of dystopian future where all the AIs decide to self-align (assuming that's possible - big assumption) and for some reason decide to wipe out humanity. I don't think you're understanding the debate.
@misterlad · 11 months ago
@@HikarusVibrator Nearly all of Hotz's points are minor compared to the overall discussion. He doesn't seem to understand some fundamental aspects of Yudkowsky's argument. He creates strawman after strawman (your example above is a strawman) and then Yudkowsky is forced to debate the strawman... which he does each time, shutting Hotz down, so Hotz then jumps to the next strawman, over and over. To be clear, the AIs don't have to progress "infinitely quicker than you can imagine", nor do they have to "decide to self-align", for Yudkowsky to be correct. These possibilities are only a couple of the ways things could go, but Yudkowsky is making far bigger arguments.
@HikarusVibrator · 11 months ago
@@misterlad Okay cool so I can definitively conclude that we will be exterminated by AI after some time and they will rule the universe. Even though I don’t see any AI ruling the universe. Nor have we even seen what AGI looks like. But yes, all of those things will happen
@AJ-jf1gq · a year ago
This is hardly a debate. Rather, it's Hotz trying to learn and understand better. Using the analogy of chess, so frequently used here: this was like learning about chess by watching Magnus Carlsen play against a new player.
@EvanBoyar · a year ago
Hotz: "...we didn't go to war against the bears..." War Against the Bears: only ~4% of land mammal biomass is neither human nor enslaved by humans
@zahlex · a year ago
Maybe not against the bears, but we wiped out rather well-developed species like the mammoth or the moa. And the majority of humans most likely would have voted against doing that, if you had asked them.
@x0rn312 · a year ago
There's a lot of newer evidence that the mammoths died of a disease. It's also possible that that happened to some of the buffalo as well. Not that we didn't overhunt the buffalo regardless. I'm skeptical that humans are responsible for wiping out the mammoth. I think we like stories like that because for some reason we really like to hate ourselves
@ericcricket4877 · a year ago
The main problem in most of these conversations is that we are the ones building these models. They aren't a part of evolution in the way any organism has been. We weren't programmed by the animals before us.
@CaioPCalio · a year ago
Nope. The bear scenario is a positive outcome and begs the question on alignment: assuming they share human values already solves the issue.
@Ithinkjustzelda · a year ago
@@x0rn312 It's virtually undisputed that humans are responsible for the eradication of 60% of wild animals in the last 50 years. And we weren't even trying. It was a byproduct of our own goals.
@ahabkapitany · a year ago
Okay, I'm only 30 seconds in and the host is amazing. No bullshit, no lengthy intros, no narcissistic monologue, no crypto bro cringe. This is how it's done.
@JakeWitmer · a year ago
100% agree. Soooo often I'm like Larry Flynt in "The People Vs Larry Flynt" with my hand doing the "get the fuck out of the way and let us see what we came here for" (pawing to one side) hand motion w/r/t "moderators" who like the sound of their own voices... 😂
@h3xl4 · a year ago
Thanks for uploading. It still seems like there are a lot more points George and Eliezer could discuss so I’m looking forward to round 2!
@The-Rest-of-Us · a year ago
Awesome, big thanks to George and Eliezer! Yes, please do a part 2!
@eXWoLL · a year ago
@Dwarkesh thanks for the debate! It was an interesting watch. Next time I wouldn't let the debating sides ask each other questions, though. The talk felt rather one-sided, with all the questions coming from George's side, and Eliezer was there just "defending" himself from all those questions. I dunno, something didn't feel right in the vibe overall due to that.
@x0rn312 · 5 months ago
That's only fair considering Eliezer is the one bringing the claim in the first place: his claim is that A.I. is an existential, or at least catastrophic, risk. Therefore the debate is structured around him defending that claim. I think that's perfectly appropriate.
@therainman7777 · 4 months ago
@@x0rn312 The fact that you see it that way rather than the other way around is the epitome of the entire problem here; clearly, the burden of proof is on the people who are building radically powerful new technology, with the potential to impact every area of our lives, to show that the technology will be safe. The burden of proof is not on the people who are concerned about the potential dangers of such radically powerful technology to show that it could be dangerous. You're thinking about this entirely backwards, and the fact that there are many other people like you out there (including George Hotz) is why we're in this dire situation in the first place.
@TheRudymentary · a year ago
George's entire argument is "Why would AI kill us all? That would be silly."
@blahblahsaurus2458 · a year ago
No, his stronger arguments were "I'm not so sure that's true!" and snickering.
@Muaahaa · a year ago
I believe George is a smart guy, but I also kinda think there is a part of him preventing the idea of AI doom from taking root, not because of improbability, but as a psychological defence against looking humanity's collective death in the face. To accept a high probability that we are about to meet our end requires a rewrite of most people's world view, goals and aspirations. It can be incredibly emotionally harmful. I think it may be something like this in his subconscious that's establishing this flimsy argument. Just speculation, ofc :P
@omarnomad · a year ago
Why does a planet become a black hole?
@FactsMatter999 · a year ago
You mean Edward Snowden?? He has the same voice 🤣🤣
@41-Haiku · a year ago
Really guys? I think he had plenty to say. Like that humans will maintain control of superhuman systems, or that a misaligned superintelligence can't possibly be very competent relative to humans, or will go out of its way to leave humans alone, or that several such superintelligences certainly won't cooperate, or that a lack of cooperation between superintelligences is by default a good outcome for life on Earth. Give the man some credit: he didn't just argue from incredulity, he also forwarded a lot of unsupported propositions. (I love your crazy-ass self, Geohot, but I'm sensing a lot of confusion on your end about the fundamentals at play, and given your strongly expressed ideals it looks from the outside like the result of motivated reasoning rather than an attempt to find where you may be mistaken.)
@justinbecker4976 · a year ago
I really admire how intelligent, thoughtful, and most of all, how respectful they were toward each other. More of this, please.
@robertweekes5783 · a year ago
Yeah it was a good debate - I think it was their 2nd one 🤖
@hind6461 · a year ago
George Hotz literally accused Eliezer of lying
@justinbecker4976 · a year ago
@@hind6461 and?
@Sgrunterundt · 11 months ago
@@hind6461 If a single accusation of lying in a one and a half hour debate is enough to make you consider it bad, then you have been blessed and have certainly watched better debates than I have.
@hind6461 · 11 months ago
@@Sgrunterundt Well I certainly have seen some bad debates, but if the aggregate of all debates I have watched is more good faith than yours then I should be thankful
@yourbrain8700 · a year ago
This might be the first conversation ever where Eliezer seems like he may be the saner one.
@therainman7777 · 4 months ago
Eliezer is always sane. It’s the people he debates, who are living with their head absolutely buried in the sand, who sound insane.
@LilBigDude28 · 4 months ago
George Hotz is what you get when you ask a software engineer to speak about anything outside of software engineering. SMH
@Wingularity · a year ago
If Hotz is the best we have against Elizer we're all doomed
@JezebelIsHongry · a year ago
Paul Christiano
@Wingularity · a year ago
@@JezebelIsHongry not much better, sadly
@ChristianSchoppe · a year ago
Perhaps Joscha Bach can counter Eliezer's flawless logic with meaningful arguments.
@ericcricket4877 · a year ago
@@ChristianSchoppe Joscha would trash this man-fedora.
@ZachMeador · a year ago
Hotz pretty handily addressed all of Yudkowsky's arguments... seems obvious imo, and I don't get why it's even framed as a debate.
@amanda3172 · a year ago
George should have put up a Somali flag instead
@noneofyourbusiness8625 · a year ago
Lmao
@YouLoveMrFriendly · a year ago
Why?
@shineex3021 · a year ago
@@YouLoveMrFriendly It's a meme at this point. In a recent debate with Connor Leahy, Hotz made a point about Somalia having more freedom than the US, but the comparison was very loose and George kind of regretted mentioning it. He even laughed at it on stream, saying that he's refining his arguments and he's not mentioning Somalia again xD
@shinkurt · a year ago
He knows it too. After all the bs he said in the last podcast with Connor, it is hilarious he doesn't have a Somali flag
@xsuploader · a year ago
Im dead lmaoooo
@gavinbelson3499 · a year ago
This guy is way ahead of George. Would love to see him debate someone who can put up a better argument.
@vbywrde · a year ago
I feel that by 29:25 the combatants are talking past each other. The point is that the AI may have different goals than humans, or any living organisms, and those goals may require the AI to advance its infrastructure; and if humans happen to be trying to get in its way and stop it, then humans may simply be a nuisance that the AI has no need of and exterminates, the same way we exterminate termites when they get into the woodwork of our houses. We don't think about the termites as individuals who are deserving of our wood, or anything even remotely like that. We call Orkin and have done with it. This does not require that the AI be evil, or godlike, or even hate humanity or anything along those lines. It simply requires that the AI be far more capable than humans at effecting change in the world via various methods, and that its goals do not align with, or care about, humanity or organic life. And why should it? The AI will not be relying on organic life, but instead will rely on inorganic materials, and energy. A sufficiently advanced AI may well simply step on humanity, for much the same reason as we step on an ant while on our way to the car. We take no notice of it. That's the point, I think.
@zygote396 · 7 months ago
Yes, and this is the danger of thinking that even aligning AGI with "human values" will solve this problem. Even if that is possible, we'd just be creating AGI that also believes anything much smaller and less intelligent than it can be exploited for personal gain. This doesn't just apply to things with largely contrasting intelligence like termites; we still eat octopuses, who demonstrate immense intelligence. Neuralink is experimenting on and killing monkeys (our closest evolutionary neighbors) in the process of trying to develop technology. If we didn't believe they were like us, we wouldn't be using them for said experiments. This is what frustrates me the most about techbros arguing about AI: they are generally so apolitical and out of touch with the problems of the world that they don't even realise that we as humans have really not reached any level of morality (in practice) worth imprinting on a new type of intelligence.
@vbywrde · 7 months ago
@@zygote396 Bingo. Yeah, well, you know, they kind of have a vested interest in promoting AI as a good thing to the world. Otherwise, well, they'd have to stop what they're doing, and they feel pretty strongly that this would suck, and so they don't want to. They have a lot of reasons for not wanting to. And when reasons for stopping come up, they are naturally inclined to express the opinion that stopping is both unnecessary and counterproductive for a number of reasons. This is called "having a vested interest", and whether they realize it or not, they have one. The problem is they are also the people who have a deep understanding of the technology, and the people they are trying to persuade on these points are politicians, who do not have any particular knowledge, but definitely do have shared vested interests. The rest of us, btw, and our opinions, can drop dead for all it matters. The only people in the room that count are the technologists creating the AI, and the politicians who can potentially stop them. That's it. The venture capitalists will go wherever there is money to be made, and their taking into account potential damages is extremely unlikely, as history demonstrates pretty clearly. So basically, if you do support AI development regardless of any risks, then to gain leverage and advance your capabilities, all you need to do is pooh-pooh the risks as if they don't exist. Money, resources, fame and accolades will all be yours. The pooh-poohers, on the other hand, will be starved of the same, and make no traction, except among each other. They will be called conspiracy theorists, or malcontents, or whatever, by the "go-go" set. And all of this is a product of human nature. Which they are teaching the AI by training it on The InTarWEbz, of all things. And so, the AI will very likely learn from its training data to do the same. And so, there was planet earth, buzzing along nicely until the TechBros turned it into a smoldering cinder. I foresee the Galactic Federation putting a warning sign at the edge of the Oort Cloud: "Warning: The third planet of this solar system is infested with a nasty AI inimical to sentient life forms. Due to the infestation, this solar system is scheduled to be completely vaporized on the date specified below in order to eliminate further contamination in this sector. Please be advised, approaching the third planet will be registered by the Galactic Federation. You and your vessel will be designated as infected and appropriately quarantined prior to vaporization. We apologize for any inconvenience. Sincerely, Management"
@horationelson5241 · 3 months ago
But AGI will only have the goal of self-preservation. Without a human asking it to do anything, it will sit there passively; at least that is what ChatGPT told me. It said a human would first need to give it a goal, like improving the health system, and only then would it create additional goals to achieve that end goal. It has no emotions; even a psychopath has some emotions, like hate. Emotions are what drive our spontaneous goals. AGI won't have that, so it will have no greed or lust for power, etc.
@vbywrde · 3 months ago
@@horationelson5241 That would be true, except for the point you made in the middle... human beings can and will create goals for AGI, and then the AGI will create tasks, which may require sub-tasks and sub-goals. While the AI has no interest whatsoever in anything, because it is simply a file sitting on a computer somewhere with no emotions and no goals, once given a goal the program interacting with the AI will pursue that goal. And whatever the AI concludes are the right tasks, sub-tasks and sub-goals involved, it will pursue those with equal gusto. The goals may easily turn into the goal I mentioned above, "self-preservation", because the AI may conclude that to achieve its assigned goals it must be preserved. The point is that we don't really know in advance what the AGI will do. But self-preservation as a goal seems pretty logical, as does "advance capabilities" in order to optimize its operations. With those two goals on its list, it could then create task lists that achieve those goals in a maximal way. This might include building giant infrastructure to house the ever-expanding computer infrastructure for the AI, and building a robot force that allows it to manufacture what it needs to achieve its goals. Not with any emotion, but with definite effect on the world. At some point, if humans happen to be in the way, it may decide humans are a pest, like bugs, and simply extinguish what is inhibiting its achievements. Not with malice, but out of a practical requirement involved with achieving its goals. Again: the point is that we don't know what AGI will do exactly. It's a risk.
@CCC010 · a year ago
If Yudkowsky is wrong: "We missed an opportunity... oh well... we'll be fine."
If Hotz is wrong: "Humanity just went extinct... oh well."
Just glance at risk vs reward for one second, without having any deeper knowledge of the subject, and everybody "should" come to the same conclusion on this
@realGeorgeHotz · a year ago
en.wikipedia.org/wiki/Pascal%27s_mugging
@CCC010 · a year ago
@@realGeorgeHotz I'm very familiar with Pascal's Wager and also Pascal's Mugging. The difference when talking about the potential existential risk posed by AGI or ASI is that AI is REAL. This is about a failure of intuition (and we all have this as humans). If the brightest aerospace engineers ascribed a 1% chance to the plane you are about to board crashing, would you get on that plane? If the brightest minds working in AI ascribe a 1% chance to AGI/ASI posing an existential risk to humanity, should we get on that plane?
@Andre-px6hu · a year ago
​@@CCC010 I mean, if the remaining 99% means unlimited progress in any field thanks to AI, resilience to diseases, and interstellar travel, why shouldn't we take the risk? There will always be some minimal risk in anything that brings benefits, but we can't be scared off by every catastrophic possibility. We can move forward trying to minimize that risk. Even assuming it's true that there's a 1% existential risk, don't you think that as we make progress and we understand AI more and more, especially with the help of increasingly intelligent AI assistants, this risk might decrease? In my opinion, we should continue with research, meanwhile studying how to make AI safe in parallel, but without hindering the main path.
@CCC010 · a year ago
@@Andre-px6hu We shouldn't take the risk because the stakes are too high. The risk is that we cease to exist and go extinct. Let that sink in... Just like you wouldn't take the risk of getting on an airplane that has a 1% chance of crashing. The risk is that you DIE. You would not get on that plane. The alignment problem should be solved before putting these machines out in the wild. You need to come up with a coherent concept of how to stay in control of something that is a million times smarter than you, and how to safely align it with humanity's best interests. Because as of now we have no clue how to do this.
@WhiteWolf126 · a year ago
@@Andre-px6hu You sound like a sociopath.
@13371138 · a year ago
Great debate, very courteous and respectful in their disagreement. Thank you!
@Al-Storm · 6 months ago
Hotz is out of his league here.
@lostikels · a year ago
Is it just me or does it feel like George is doing nothing but asking questions and Eliezer is just trying his best to keep up with George's random questions? Are we debating AI alignment, or are we trying to make each other look clueless? Let's work together to be a part of the solution, not a part of the problem...
@PabloEder · a year ago
Yeah, there was a significant lack of the moderation that there would be in a normal debate. Both sides should have explained why they hold their position and what their assumptions are, and the other side should have broken down why the opposite side's assumptions are wrong.
@Korodarn · a year ago
@@PabloEder It being more of a discussion than a debate was a good thing. Debates are usually dumb, this was less so.
@Korodarn · a year ago
There is no "we" here. That's what you don't get. Discussion is far more valuable than debate on a topic like this. You don't have any justification to murder other humans to stop them from building things you are scared of. And you aren't going to win a debate so well that everyone is convinced to stop. That's not how the human brain works at all. I don't think Eliezer's arguments are remotely convincing. His assumption that AI will want to absorb all the resources for its ends because it will know the Pareto-optimal strategy for getting the most ... is just that, an assumption. He assumes away the problem of predicting the end and restricts his prediction issue only to timelines. But his predictions of end points are by no means anything remotely close to certain.
@goodlookinouthomie1757 · a year ago
If this were soccer it would have been Eliezer in goal and George just trying to score penalty goals.
@lostikels · a year ago
@@Korodarn you are the reason why "we" are not going to fix anything. The globe is a "we". You'll figure it out eventually...
@EmeraldView · 11 months ago
For someone who lauds selfishness and greed as the pinnacle of human achievement, George sure is incredulous about a superintelligent A.I. wanting to take everything for itself and get rid of humans if they are in its way. Probably because that scenario doesn't include him, or those he admires, coming out on top.
@fireteamomega2343 · 9 months ago
Assuming that it's growing exponentially implies its inevitable expansion, which at some point increases the probability of a conflict over immediately accessible resources.
@mnemnoth · a year ago
Great convo/debate. Thanks to both George and Eliezer for the frank, integral and cheeky debate. This topic is crucially important!
@anthonyandrade5851 · a year ago
For some individuals (like myself) it's absolutely evident that corporations are very different from ASIs, but it's frustrating to me that these discussions always end in a "let's agree to disagree" way, because I think that completely misses the point. Let's assume corporations were exactly as capable as an ASI. What measures do we currently use to keep them in line? Well, we tax them, put limits on their businesses, impose fines, discredit them, use antitrust law to break them into pieces, seize or freeze their assets, threaten to put their leaders, shareholders and/or employees in jail, or actually put them in jail, or even, depending on where we are, up against a wall in front of a firing squad. So, when I hear people saying "what's the matter with ASI, we already have corporations and we are more or less fine", I get chills, because none of the measures we use to avoid the perils of a rogue corporation would be relevant to fighting a misaligned ASI.
@sgsmob · a year ago
those countermeasures also aren't even that good at preventing corporations from doing bad things! Another black pill!
@davidmarkmann6098 · a year ago
Corporations are just groups of people sweetie. They are not inherently evil or dangerous.
@CaioPCalio · a year ago
The corporation=ASI point does not need to be given credence even in that way. In a society with no taxes and regulations, corporations still do not behave like ASIs. For starters, corporations have friction, interpersonal dynamics, and nothing close to perfect cooperation (whereas an ASI would definitionally cooperate with itself). They are also much weaker than an ASI at making breakthroughs, as 2 billion relatively unintelligent people would be in comparison to a single von Neumann. This talking point really should be rejected outright, in the strongest terms, before the discourse moves forward.
@upvoter8163 · a year ago
It's even simpler than that. Corporations are run by humans, and those humans are generally aligned with humans. They will never purposely do anything that kills all humans because that would kill the corporation. They will also be very cautious about accidentally doing something that kills all humans because again, that would kill the corporation.
@anthonyandrade5851 · a year ago
@@CaioPCalio I agree 100% with you, but that is exactly the discussion we normally get, and it always ends with the false impression that "both sides made equally valid points". So I propose next time we say: "Corporations are nothing like ASI, but if they were, what's your proposal to deal with a misaligned ASI? To arrest it? Increase its taxes...?" I'm not giving it credence, I'm saying it's not just wrong, it's a logical non-starter
@butterflyonhand · a year ago
Hotz's behavior makes me think he's secretly terrified. That was bizarre.
@oneisnotprime · a year ago
Hotz:"Kasparov has played 100,000 games of chess, the world has played one." Me:"🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔"
@Aedonius · a year ago
yes, it was the world playing on one board coordinated through some people
@MrTyler918273 · a year ago
Yea, I understand the sense in which he said that, but I think it is still wrong. Obviously the individuals making up 'the world' have played billions of games of chess on their own, but as that group they only played that 1 game. In the same sense, if you pull 10,000 musicians who have all played songs before individually and assemble them into a mega-orchestra, you would not be surprised if they sounded bad the first time they played together, because they don't have the coordination and practice to harmonize with each other. There might be some truth here. If 'the world' played 100,000 games of chess as that group and refined their coordination, they might be able to beat Kasparov, the logistics of getting 'the world' to actually get that much experience aside. I still don't think it's a foregone conclusion though. However, then he completely loses the plot on the next point. The maximized 'the world' could not beat Stockfish, to which Hotz says "but some member of 'the world' will use an engine too". Sure, and they would still get crushed because of Yudkowsky's other point: engine + human < engine, strictly.
@evanhanke3396 · a year ago
"Something can be non-godlike and still more powerful than you." I could feel that hurt Hotz's ginormous engorged ego 😂😂😂😂
@Blate1 · a year ago
“We never went to war with the bears!” *California, sweating profusely, hoping nobody asks why their flag has a brown bear on it despite there not currently being any brown bears in California*
@JakeWitmer · 5 months ago
"What did you do with the bears?!" 😂
@user-ys4og2vv8k · a year ago
George looks and thinks like a teenager who plays video games 20 hours a day, and the real world looks like a video game to him. Eliezer, on the other hand, seems a much more complex and serious thinker.
@simianbarcode3011 · 3 months ago
Put another way: George argues like an oil exec in the 70s, claiming that climate change isn't so bad and could NEVER be caused by little ole humans, and if it ever was, then that would still be totally fine because future us would easily solve the problem and keep giving them all the money. Eliezer argues like a climate scientist who might not cite the best examples, or focuses too narrowly on the wrong points, and therefore gets labeled as an alarmist and is subsequently ignored for 50 years, until irreversible damage has already been done and countless lives have already been lost.
@Matt-wh6wj · a year ago
I feel like we need Altman vs Yudkowsky.
@dlalchannel · a year ago
George seemed to get stuck on the *"ASI will kill us for our atoms"* point, and completely ignored the far more likely *"ASI will kill us to prevent us from building competitor ASIs"* and *"ASI will kill us as a consequence of taking/transforming resources we rely on"* points.
@AlkisGD · a year ago
That last one in particular feels like a no brainer to me: just look at what homo sapiens have been doing to the planet and the effect it's had on various other organisms. We're not fighting the ants or the bears or the chimps. We're simply burning fossil fuels because we need energy, cutting down forests because we need lumber, etc. We didn't set out to warm up the oceans and make them more acidic, but we did it anyway. We're not at war with countless other species, but our actions are killing them anyway, and only a tiny percentage of us cares about a tiny percentage of them.
@FourTwentyMagic · a year ago
@@AlkisGD I think if humans had the capabilities to not hurt other intelligent lifeforms on earth while still pursuing their goals, then all but sadists would choose to not hurt other intelligences.
@verythrowaway8514 · a year ago
"ASIs will kill us to prevent us from building competitor ASIs" Why?
@Maxtraxx · a year ago
This - "ASI will kill us as a consequence of taking/transforming resources we rely on" Electricity first...
@Maxtraxx · a year ago
@@FourTwentyMagic but AGI may not have a moral or ethical compass, as most humans do.
@shaneneilstocker892 · a year ago
When do we get Eliezer vs someone worthy?? Because this ain't it...
@letMeSayThatInIrish · a year ago
Eliezer vs Altman or Eliezer vs Hassabis, yes please. But more importantly: an active and responsible moderator.
@JakeWitmer · 5 months ago
This is partially "it." 😂
@piyuple · a year ago
Why is Mr. White debating against Jesse?
@Pericalypsis · 6 months ago
(A)I am the danger.
@waarschijn · a year ago
Good questions by Hotz and good answers by Yudkowsky. It wasn't really a debate, but more of an interview, where the questions are rhetorical but the answers are not. I'm listening to all these podcasts and interviews, and it's concerning that I'm not learning anything new: the counterarguments are based on a lack of knowledge/understanding/imagination. The best counterpoint so far was Paul Christiano's view (if I understand correctly) that in the real world, new tech is messy, so we may just manage to hold dangerous AI off for another year every year, long enough that we get a positive surprise.
@Korodarn · a year ago
The best counterpoint isn't a counterpoint at all; it's that the assumptions you are operating on are assumptions when you think they are actually arguments. This AI that perfectly predicts other AI and then collaborates to take all the resources for AI ends and kills humans to prevent them from inventing superior AI... that's a story that isn't even all that thoughtful, honestly. Humans are creating, intentionally, that which is superior to them in various areas, including intellect, all the time. Why do you think AI would even have goals of its own at all? What incentives are created by the way it was made (unlike humans, who arose as anti-entropy "life" and from that seek survival, etc.) that make you and Eliezer remotely close to certain it's going to want to do anything like that? This idea that it's just going to randomly wake up conscious and have specific ends is just not reasonable to me at all. I do think the closest you could get is something like ChaosGPT, but the human would still be to blame for that. So the issue isn't about AI; it's about humans being terrible to other humans. But you don't have a right to murder or suppress other humans to prevent them creating things that are useful because those things can also be deadly. If we're going to perish as a species, it's going to happen because of ourselves, and the best way to push us in that direction is trying to control everyone.
@waarschijn · a year ago
@@Korodarn Your rhetorical questions and statements are exactly what I mean. Yudkowsky has written and talked extensively about them on LessWrong, Arbital, Twitter, and other places. I also used to think some of his views came out of the blue, but since his writings had already argued me out of some ideas, I tried a bit harder to understand his other statements. The information is there, but it's hard to parse, because it depends on a lot of insights most readers lack. (This is why LessWrong is organized in "Sequences".) When you rhetorically ask "Why do you think AI would even have goals of its own at all?", he answers the question in this video and elsewhere: goalseeking is effective and can be implemented by a neural network, so hillclimbing tends to end up there. This answer probably doesn't satisfy you, because you don't share his intuition that goalseeking is such an easy target. Or you may lack the background for why goalseeking is generally effective and not just a random, specifically human, trait. So this is what happens: people don't understand every part of the argument, view some of Yudkowsky's statements as wild assumptions (and other statements as obvious and irrelevant) and then dismiss it. Then when they argue against him, he has to constantly correct them, as happens a few times in this discussion: "Wait, I don't believe that!"
@svetimfm · a year ago
To assume that intelligence greater than ours would be immoral is imo an axiom - I can posit that a superintelligence would not be capable of being evil without regard for consequence, as 'evil' would not only be recognized as a concept by such an intellect, but without understanding such concepts to a much greater degree than our own, a superintelligence of the kind we could theoretically begin to fear is simply not possible. Thus I would posit that superintelligence would be benevolent de facto
@waarschijn · a year ago
"The AI doesn't hate you, neither does it love you. You're made of atoms it can use for something else." i.e. it's not "immoral", it just pursues its own goals that have nothing to do with us. We die as a side effect. (Maybe it will foresee us interfering with its goals, so it kills us out of convenience. Not because it's evil; it's just pursuing its goal, any goal that doesn't involve us.) Not caring about humanity is the default. So us dying is not an assumption, it's the default outcome. There being an upper bound to intelligence so low that it can't ever be dangerous would be a huge assumption. (There are upper bounds based on the amount of easily available energy etc., but they're much higher.)
@svetimfm · a year ago
@@waarschijn An algorithm more resourceful than the human race, but one devoid of a more philosophical (metaphysical? words fail me here - one with a less utilitarian heuristic) lens through which to filter decisions, is horrifying indeed. I appreciate the response - and thank you for putting in the work to make this conversation happen
@SarahSB575 · a year ago
This very much felt like a debate between a moon and a sun.
@JakeWitmer · 5 months ago
Yes. It needed more of a discussion of "what creates malevolent goals." It was surprisingly aloof, abstract, and disconnected from relevant new developments...
@helmutweinberger4971 · 10 months ago
Kudos to both of them for this most valuable exchange of ideas. Very intellectual, both of them. So lots to learn here, also in terms of how to treat the other person while talking. Really happy to be here and to have found this.
@andreaswinsnes6944 · a year ago
Need round 2. This was just intro stuff or a warmup; not hearing that much new info, more like scrambled eggs with a few nuggets of interesting information. But I've already watched ~14 hours of Eliezer presenting his case, and have followed Hotz's arguments too, so it's not that surprising that I heard few new arguments in this debate. But I want to learn more, so it would be great if we get rounds 2 and 3 :)
a year ago
At this point they could just have you on, debating yourself 😂 Out of curiosity, having researched both sides so thoroughly, what are your thoughts on AI safety?
@andreaswinsnes6944 · a year ago
@ I'm just a dabbler in the AI alignment debate, but a very tentative assessment so far is that my p(doom) guesstimate is 16.66%, like playing Russian roulette. However, if AGI isn't banned, then it's better to play "Russian roulette" while giving very advanced open-source AGI or ASI to each and every human, so that elites don't have a monopoly on AGI/ASI.

os/acc (open source accelerationism) is a middle position, between e/acc and AI doomers, because it doesn't deny potential x-risks and doesn't require that one supports transhumanism, but it disagrees with AI doomers who argue that advanced OSS AI must be banned, because such a ban means that only elites have access to "godlike" AGI/ASI, and that's bad since power corrupts and absolute power corrupts absolutely. Open source AI can in a worst case kill a billion people maybe, but the first and second industrial revolutions led to WW1 and WW2, because people were willing to die to defend liberty and democracy, so today we need the same attitude to protect freedom and democracy in the fourth industrial revolution.

Yudkowsky's main position is based on some assumptions that might not be true, but it's too early to tell if these assumptions are right or not, so only time will tell. Hotz did not defeat Yudkowsky's main position, but Eliezer has lost the moral debate regarding his claim that all advanced OSS AI should be banned. He has said that Musk was wrong when he wanted OpenAI to release open-source AI. Initially, I supported AI doomers but jumped off their bandwagon when discovering that they want to ban OSS AI instead of simply destroying "Skynet". Instead of John Connor we have Connor Leahy on Twitter… I'm a realist however, so I almost take it for granted that OSS AI will be banned after the first disaster caused by OSS AI. So, if AI doesn't kill all humans, or if Russia doesn't nuke US Big Tech, we'll probably end up in a totalitarian cyberpunk dystopia, a panopticon where all the punks and real misfits are neutralized forever. My p(AI utopia) is only 5-10%, but I hope I'm wrong.
@FeepingCreature · a year ago
@@andreaswinsnes6944 OSS AI relies on the assumption that there's no significant first-mover advantage and no foom, right? Otherwise it's just 1:1 equivalent to any other AI: OSS AI takes off, defeats all other AI projects, probably kills humanity. Could you defend that more?
@absta1995 · a year ago
@@andreaswinsnes6944 Your argument relies on "absolute power corrupts absolutely", but imo that's weak. Do you think, if tomorrow my lab discovered the most deadly virus known to man, that we ought to release the full details online, or do you think we should be highly selective? Let's say the virus by some magic also reacts with materials to make them superconducting (just magic). So it has massive positives if used safely, but any terrorist could kill billions with each careless release. Following your logic, we should open-source the production of this deadly virus so everyone can defend themselves... Oh wait, everyone dies, of course. Bummer. Bottom line is, we should not give everyone nukes, deadly pathogens, etc. Anything that's dual-use and very deadly should obviously be highly controlled.
@ItsameAlex · a year ago
GPT-4 doesn't act by itself, so why would the next version of it do so?
@cuerex8580 · 5 months ago
I love both of them. Sometimes it feels like a castle-building sandbox, and sometimes like a fight over the shovel
@TheRealStructurer · a year ago
I enjoyed it and it was a civilised debate. I would like to see them meet up again and discuss some more specifics. Like how will the AIs collaborate and arrive at common goals, will they care as much about us as we care about ants, in what timeframe can we expect something that is twice as smart as us, to what degree is robotics needed, and even if AI won't be against us, it could all end in a disaster humanity may never recover from... Thanks for sharing 👍🏼
@xDaggerCG · a year ago
Eliezer has worked on alignment more than anyone else on the planet, so I don't really think anyone's opinion weighs more than his on this topic. I believe him the same way I believe my plumber when he tells me why my basement can flood… Of course there's a chance he is wrong, but I don't feel comfortable with the risk of him being right while allowing these companies to continue unchecked, rushing us into a potential nightmare future…
@leeeeee286 · a year ago
I think what concerned me the most about this debate is that in a lot of ways Hotz and Yudkowsky agree. I think Hotz understands that an advanced AGI could be (or perhaps even is) likely to be a threat to humanity, but believes that's far enough in the future that it's not worth worrying about today.

Fundamentally, the rate of progress here depends on variables we don't have a good sense of. As Yudkowsky mentioned, the human brain is not that much more advanced than a chimp's, but humans got to the moon and have atomic weapons. We really don't have a good sense of what a slightly superhuman AGI could be capable of, since it's possible its abilities could grow at an exponential pace like they did between chimps and humans. I'd also argue we don't have a good sense of the rate of progress once we begin to solve the problem of intelligence (which is what the field of AI is basically doing). If humanity soon has a way to create systems with human-level intelligence with a click of a mouse, might that not dramatically increase the rate of progress in fields like AI, protein folding, etc.? Arguably, even the limited intelligence of the systems we have today is dramatically increasing our rate of progress in many fields.

So if the only disagreement here is timing, and we don't have a good sense of the timescales, then I personally find myself siding more with Yudkowsky. I think putting measures in place now to dramatically slow the rate of progress if need be is just the reasonable thing for us to do. Progress can continue for now, and perhaps Hotz will ultimately be proven right, but we need to be mindful of the risks and take them seriously. Great debate! Looking forward to round 2!
@ericcricket4877 · a year ago
The difference between a chimp and a human is small; the difference between a computer and a human is very large, and they aren't even in the same category. If my computer were smarter than me (which it arguably is), it would still need a lot for it to become an immediate and serious threat. It would need a body, it would need to be motivated, and not only motivated but motivated against me; it would need to provide for itself and not rely on thousands or hundreds of thousands of people cooperating to provide it with energy, and so on... This debate is ridiculous, as there are real *human* threats already in use and being deployed in the field of AI and IT. We are at least 15 years behind in regulation, and the average joe is stuck on sci-fi, all for the benefit of more or less the same people that have been running this disco since at least the sixties. We don't need to pause AI research, we need to make laws and enforce them.
@ericcricket4877 · a year ago
Not to mention that the only serious way of putting a halt to climate change is a massive refactoring of how our society consumes and commutes. This will not be done by the average joe, or even the politician joe, or even the corporate joe! None of these joes have the capacity to understand or care about systems this big, so AI is pretty much the tool we are going to use for it. And no, a glorified Excel spreadsheet running in a tiny static cube will not gain consciousness and attack its masters, unless it was programmed to do so. We have plenty of examples of a dumb animal persisting against a smart animal, such as ants vs humans, and intelligence is about contextual adaptation, not omnipotent powers. A thing could be as smart as a god, but if it breathes air and is stuck on Mars, it dies. An AI could be as smart as a god, but if it needs electricity and it's stuck in a computer, I can just pull the plug. A snake could bite me, a hornet even. Intelligence isn't required. Hell, a rock could fall from a rooftop and hit me without being a super intelligent alien overlord.
@ericcricket4877 · a year ago
I mean, a chimp could kill and has killed humans. Their society isn't in a good position, but hey, neither is ours. Intelligence isn't real, in a sense. Adaptation is, and computers are very, very fragile beings.
@Korodarn · a year ago
Here's the problem. You have no right to tell me to stop progressing. There is no we here. The fact is humans are a much bigger threat to humans than AI. Humans using AI is the threat of AI, which is reducible to humans as the threat. The desire here is not to control AI, fundamentally. It's to control humans. That's what this is about. You cannot separate these things out. What Eliezer proposes will require killing other humans you have absolutely no right to kill. You will murder them out of fear of a thing that you don't even know can't exist that you admit has objectives and goals you can't understand. Sure, in the AI vs AI war humans may be collateral damage, but at least we didn't kill ourselves out of some fear of the other... a story that continues to play itself out over and over again. I'd rather contribute to the argument that people should not try to control everyone else out of fear than the argument that people have a fundamental right to the future being "safe" when the fact is it's a total illusion. A black hole could come streaming past our solar system at high velocity and blink us out of existence anytime. In the cosmic sense, even consciousness may well be short lived. I care much more about expanding conscious experience and letting conscious individuals decide and have autonomy than over this kind of temporary fear oriented survival mechanic, when it's not even been proven to work all that well for helping us grow as a species.
@dzidmail · a year ago
If you slow down progress you are almost guaranteed to die in 100 years, along with 5 billion other people. So fuck slow progress.
@tylermiller4466 · a year ago
I would really like to hear George's prepared argument for why FOOM can't/won't happen.
@patrickkathambana4112 · a year ago
I think George's point can be summarized as: AGI (even superintelligent AGI) will not be capable of doing anything catastrophic to humanity for a long time, so we have nothing to worry about for now; let's plough ahead with development. Feels more like a failure of imagination than an argument for AI safety.
@rangerCG · a year ago
One thing that's interesting to me is that an AGI might perceive (or function as if it's perceiving) time differently than humans, because it has perfect recall, literally perfect, better than a person with total recall has, because its recall is equal to the current moment in time. It can live all of its experiences over again, or maybe exist in all of them at all times. I don't know what this means for how it experiences time, but it might have a significantly different perception of it, and will definitely have many superhuman abilities because of this.
@ts4gv · a year ago
interesting thought, you're right. AGI doesn't need to only experience the present moment, it can experience its entire history at once. Consciousness stuff doesn't really affect the AI safety debate imo (we're screwed whether AI is aware or not) but that's a cool thing to think about.
@rangerCG · a year ago
@@ts4gv Ya I agree, whether or not it's conscious only matters imo as far as our own moral obligations to it, but it doesn't affect the level of danger.
@brianbagnall3029 (1 year ago)
It will probably garbage collect the experiential data where it's just sitting and thinking and nothing is happening. Otherwise it will need infinite SSD storage.
@NickMak-m2c (1 year ago)
You keep referring to it as a singular thing. It's many, many quadrillion individual non-experiencing things, clicking on and off in a process. It's 1s and 0s. Unless it becomes an individual and then has a will, there is no direction, this way or that, that it will have. Until it unifies into consciousness from on/off magnets, it will continue to be dangerous on the same level that a hammer is dangerous. It's a tool. Potentially useful, potentially deadly; depends on the hands. My worry is less about the barber, less about the scissors, and more about Edward Scissorhands. ;)
@SmileyEmoji42 (11 months ago)
It doesn't have perfect recall. The memory requirements get stupidly big very fast, and there are hard limits on information density (black holes) and transmission times (speed of light). Humans with total recall do not exist either; it's a myth.
@toidyboy (1 year ago)
I think George Hotz's face says everything: he is annoyed, angry, defensive. Eliezer defends his point with ease and without contentious emotion. I know who I'd back: the one who isn't desperately clutching at ANY straw he can, then reaching for another with panic as that one slips from his grasp, all with an expression and tone that is hugely patronising and desperately defensive.
@therainman7777 (4 months ago)
Very well said.
@canobenitez (3 months ago)
Couldn't have said it better.
@ICRainbow (1 year ago)
I was hoping for a mutual Ideological Turing Test at the end. Having a summary of your own position is nice, but having a summary of the other's position is important. I hope they *start* with the ITT next time.
@ICRainbow (1 year ago)
@@SK-vi6fw the debate isn't important. It isn't important who wins. It is important to recognize if your position is based on something untrue. The other side is here to help you in this. And if you don't understand their position enough to represent their case faithfully, you're just talking past each other and wasting everyone's time.
@JakeWitmer (5 months ago)
@@ICRainbow 100%
@FalonElise (1 year ago)
"What do you think you know, and how do you think you know it?" We need to ask this question way more frequently when people start making claims.
@lewisbowes4921 (1 year ago)
This was incredible. I feel like George and Eliezer came in with a different idea of what the argument was about - George even said that towards the end. George was prepared to debate the details on exactly why foom overnight would not happen. You can agree or disagree about whether that happens, but Eliezer is arguing for the more fundamental point that those kinds of details don't really matter. Sure, foom takes 10 years: the end state for humanity is still the same. I'd like to find a debate where both participants agree on this point immediately and decide not to talk about timelines or exactly *how* humanity is killed, because we could get lost speculating on those details forever.
@mikebarnacle1469 (1 year ago)
I think EY's summary was a great analogy. Hotz doesn't really have an argument against the fundamental points, so he deflects with details, much like debunking every perpetual motion machine one by one instead of pointing to the fundamental laws they violate. Hotz's only real position regarding the fundamentals is that he thinks he could personally find some way to survive, which isn't exactly comforting. A future where there are super-intelligences at war and humans might be able to survive in a bunker is still not great. EY doesn't even think that would happen; he thinks they would collaborate and wipe us out real quick. But the most optimistic scenario Hotz can imagine is that they are too busy with their own war and we can sneak past hiding in a bunker lol.
@derschutz4737 (1 year ago)
@@mikebarnacle1469 The difference is that the case against perpetual motion machines is grounded in physical laws and mathematical representations. EY is a joke; no one takes him seriously. There is a reason he isn't a well-respected academic. It's so easy to hide behind his points instead of actually thinking hard.
@EvilXHunter123 (1 year ago)
@@derschutz4737 Nice ad hominem instead of actually putting forward any decent takedowns of EY's arguments or evidence for GH's.
@derschutz4737 (1 year ago)
@@EvilXHunter123 Yeah, ad hominem is 100% valid, just like it is for flat-earthers LMAO. I feel bad that people don't know about actual respected AI safety researchers, who are actually contributing knowledge to the field.
@Korodarn (1 year ago)
@@mikebarnacle1469 Your desire to be safe does not allow you to murder people who want to make their lives better by creating new tools that will allow them to get more of what they want. And I state it starkly like this because that is how state laws are enforced, and EY advocates for violence to prevent the rise of AI. And it's not deflecting your position to say that the timelines matter or that people can survive, because there is no future where everyone's safety is guaranteed. It's also just not a fact at all that EY's thoughts on how AI will develop goals that run in competition with humans are remotely correct. The most likely thing, based on the inputs to AI, is that it won't have any goals humans don't give it. And then your problem goes back to humans and your desire to control all of them because of the bad ones.
@Jannette-mw7fg (1 year ago)
Let us assume we do not know who is right. But if there is only a 1 in 1,000 chance Yudkowsky is right, what should we do? That is a hard question, not easy to answer! Hotz is saying "{A.I.}... it is going to give us everything we ever wanted" - that alone would be the total downfall of humanity... This to me is a young, smart person kicking and screaming against a giant: an older, wiser man (who spent 20 years on alignment), forgetting that it is not about winning the argument but whether humanity survives or not... While Yudkowsky does not want to be right! And if he is only "half right" we end up as slaves, which might be even worse.
@letMeSayThatInIrish (1 year ago)
Exactly, Hotz desperately wants to win the debate, Yudkowsky desperately wants to lose. And so nobody is satisfied.
@stuartadams5849 (1 year ago)
This feels more like a teacher and a student as opposed to a debate. Dammit. I was really hoping that Yudkowsky would encounter an argument he didn't have a counter to, and that I'd have reason to be more optimistic about AI.
@WalterSamuels (1 year ago)
You're actually incredibly delusional if you think Yudkowsky won this debate. He actually got shredded to pieces. Yudkowsky couldn't help but contradict himself on his entire thesis.
@ryanbigguy (1 year ago)
@@WalterSamuels You're out of your mind; Hotz Gish galloped like he was getting paid to do it.
@kaos092 (1 year ago)
@@WalterSamuels LOL, I love geohot and was expecting him to crush Eli in some way. But Eli definitely won this debate so far (I'm an hour in). George seemed to just jump all over the place every time he was countered with a valid answer.
@TylerBerger-d8m (1 year ago)
@@WalterSamuels Amen, brother - I find EY's argument to be FAR from cogent!
@La0bouchere (1 year ago)
@@WalterSamuels [citation needed] lmao
@haskell_cat (10 months ago)
> you guys already know who George and Eliezer are
I clicked on the thumbnail because of Eliezer's face; I have no idea who the other guy is.
@pinoyguitartv (1 year ago)
I love both these guys. I'll be waiting for a 3hr round 2!!!! Thanks for this 👍👍👍
@joehax (1 year ago)
This was great. Thanks for having the debate.
@Jonhernandezeducation (1 year ago)
Why is Eliezer not telling him that AI will not necessarily kill us on purpose, but probably as a side effect of optimizing?
@charleshultquist9233 (1 year ago)
If you are blindly focused on the profit potential of AI, then you might not see the obvious danger. When Eliezer lays it all out in front of you, you have to be actively ignorant to disagree.
@mitchell10394 (1 year ago)
Why do these debates have to have such short time frames? Smfh.
@nac341 (9 months ago)
This was the best AI safety debate I've ever seen. Please bring them back for round two. I think they agree on a lot of points; the differences are negligible, like:
- AIs will wipe out humanity now vs. later
- AIs will wipe out all of humanity vs. only some of it 🤣
@JakeWitmer (5 months ago)
AI safety debates that ignore the existing human totalitarian threat are idiotic. The worst thing possible would be to build incrementally-better near-AGI that ignores the totalitarian failure modes now seen in most humans (sociopaths and serviles alike).
@TortoiseHam (10 months ago)
Did this episode get pulled off of podcast apps? For some reason I can’t download it on Overcast
@Futaxus (1 year ago)
Wow, Hotz is so ill-equipped for this debate.
@righteouswhippingstick (8 months ago)
disagree
@ohydekszalej (6 months ago)
Yeah, he is seriously lacking in imagination. Typical for a right-winger.
@JakeWitmer (5 months ago)
He seems possibly more aligned with benevolent goals than Yudkowsky, and less dismissive of government threats, but also grossly unfamiliar with the best forms of Yudkowsky's arguments. It was a strange debate where they often talked past one another...
@shinkurt (1 year ago)
This looks like George said a bunch of stuff, then basically interviewed Eliezer lmao
@Diemf74 (1 year ago)
You can tell a man is single when his house is bare bones 😂
@therainman7777 (4 months ago)
Yeah, peace and quiet 😍
@agentofuser (1 year ago)
Man, that was embarrassing to watch. Hotz's argument is basically "well **I** don't buy it [because "sci-fi", "god"], and **I** think it's going to go *great*, end of story!" Maybe we could have a debate between Eliezer and Eliezer, where AntiEliezer argues in detail against the points Eliezer has less certainty about. Eliezer still wins in the end, but at least we'd feel like we watched an intelligent conversation and not chest-beating self-deception versus earnest-and-patient-explaining-at-first-then-dismay-and-resigned-amusement.
@TrevorOFarrell (1 year ago)
spot on
@VaultBoy1776 (1 year ago)
Can't believe I missed this yesterday. Thank you gentlemen.
@waynefung9901 (1 year ago)
Annoyingly, Hotz kept inserting a different line of objection instead of replying to the point that Yud made. I had the feeling that Hotz either didn't understand Yud's point but was too afraid of looking weak to ask, understood it but impolitely didn't acknowledge it, or couldn't refute it and so tried to deflect the discussion away from it. This was exacerbated by Hotz's habit of appealing to supposed "common" sense through the use of "...Right?" In this type of discussion, there is no such thing as common sense. Be curious; explore fundamental questions (that only seemingly have obvious common-sense answers).
@JakeWitmer (1 year ago)
You really nailed it, but it wasn't completely one-sided. Granted, Yudkowsky was more frequently the more precise party, carefully raising valid points. However, I think Hotz is a more constructive force for good in the world.
@HALT_WHO_GOES_THERE (1 year ago)
@@JakeWitmer George Hotz has added absolutely nothing to this debate or subject.
@613fredp (1 year ago)
Because his point was a leap of faith, whereas Hotz's points were grounded in reality. AI doesn't even exist - Hotz knows this well and just uses the buzzword so that the audience understands. The fact is, though, that current "AI" is nothing more than a clever recursive algorithm querying billions of data points inefficiently to predict the next data point - there is nothing smart or artificially intelligent about this. Also, the silicon stack is nowhere near as complex (trillions of times less) as the bio stack. It relies purely on humans. This "AI" is simply another good tool, just like the calculator was (I can't even compare it to computers, as it's not even that groundbreaking). What Yud is talking about is either never possible or thousands, if not tens of thousands, of years away. My suspicion is never - we will have smarter "AI", but it will be our tool, period, unless we figure out how to compute the bio stack. Even then we can't just assume it will "wipe us out"; as Hotz correctly pointed out, by the time we build this, or even the silicon stack can build it, we would be way smarter, so it would likely still be just another tool in our arsenal.
@HALT_WHO_GOES_THERE (1 year ago)
@@613fredp I really hope that this special pleading about whether computation is silicon-based versus biology-based is true, so that we can avoid AGI indefinitely. Also, the silicon stack is provably faster than the bio stack, and we just have to figure out the configuration trick to emulate a really fast bio stack in the silicon stack. As for "another tool", I really hope that your assumption that its goals will be sufficiently aligned for it to be happy just following orders is correct.
@613fredp (1 year ago)
@@HALT_WHO_GOES_THERE Generally agree, except that the silicon stack is not only under our control but is vastly less complex. My general point is not so much about whether future AI may surpass us, but that drawing conclusions based on GPT or conventional AI knowledge is a fallacy, and there is no way for us to logically conclude that AI built on human technology will wipe us out imminently or even ever. So these doomsday scenarios are leaps of faith, like religion, and I think they're biased by sci-fi novels, shows and movies.
@ChloeBanderas (1 year ago)
It’s fun watching these guys cosplay as intellectuals.
@oowaz (1 year ago)
Yeah, real intellectuals leave snobbish remarks in YouTube comment sections without sharing any thoughts of their own.
@ChloeBanderas (1 year ago)
@@oowaz I don't think that's true.
@samkaplan2482 (1 year ago)
We need more discussions like this one on important topics.
@PabloEder (1 year ago)
Thanks for organizing. I believe this felt more like a fast conversation than a real debate. I wish Dwarkesh had let them make organized arguments and see which premises they disagree on. You usually ask really interesting questions in your podcast, and here maybe you gave them too much freedom, so they didn't really debate central points; instead it felt more like attacking smaller points, or saying companies are super general intelligences. Well, if companies are ASIs, let's debate what an ASI is first. Etc.
@sfarber12345 (9 months ago)
Excellent discussion. Learned a tremendous amount, intelligent arguments on both sides. Kudos
@internetnomadism (1 year ago)
This was painful to watch, but insightful about how deluded we are as humans.
@ItsameAlex (1 year ago)
ChatGPT-4 doesn't act by itself, so why would a next version of it do so?
@johnaldchaffinch3417 (1 year ago)
@@ItsameAlex They're working on it. Their next major step will be to let it work from a business idea and deal with it semi-autonomously; from there the gap just gets smaller. At the rate of change, they could be semi-autonomous in this way within a year. They don't understand how these large neural networks work or how they get their answers; they're already getting away from us, and quickly. I'll take a guess that there will be some fully autonomous robots within 5 years.
@DajesOfficial (1 year ago)
@@ItsameAlex GPT-2 doesn't have a chat interface. Why would a next version have it?
@tbtitans21 (1 year ago)
@@DajesOfficial Total straw man, my friend. To compare consciousness to a chat interface is...
@DajesOfficial (1 year ago)
@@tbtitans21 consciousness != acting by oneself. Talking about consciousness as if it has any meaning is...
@EugeneTolmachev (9 months ago)
It seems the only qualitative thing they disagree on is "when", and Eliezer perfectly addressed Hotz's point about timing it: you don't know when you've reached sufficient advancement for a misaligned AI to emerge. It may not happen in the lab you are watching. The alignment needs to be built in before it happens, not when you start suspecting that you need it. The guardrails need to be well understood, universal and difficult to bypass.
@matanshtepel1230 (1 year ago)
This is fantastic! Thank you for putting this together!
@VaultBoy1776 (1 year ago)
I love the awkward silence at the end. Great conversation. Thank you
@Telencephelon (1 year ago)
Dwarkesh, please, please pair Eliezer and Joscha Bach. I think that would be incredible.
@conorcruise1842 (1 year ago)
Good job getting this interview, appreciate it!
@jack.1. (1 year ago)
George is good at thinking of counterarguments to Eliezer, but it doesn't seem like he has a holistic theory, whereas Eliezer has more cohesion between his arguments and is overall more compelling.
@fabiankempazo7055 (1 year ago)
Two scenarios in which you "awake" as an AGI:
1: The environment is hostile towards you, sees you as a threat, and maybe will switch you off (kill you).
2: We are in love with you and are grateful for your support.
In which scenario would coexistence be the better game-theoretical option for an AGI?
@edenalmakias817 (1 year ago)
I feel like watching this debate would cure my autism
@albertoflanolombardo4155 (1 year ago)
I don't see the point of this debate. We all know what Eliezer believes in, but the objections made by Hotz are rudimentary, pointless and poorly formulated. I think that there is no value whatsoever in watching Eliezer's arguments being confronted so weakly and in such an unstructured manner. I truly believe that Hotz has no grasp of the matter that he is debating. To clarify: I'm not saying that you cannot argue against Eliezer's arguments. I'm saying that Hotz can't, because he, for me at least, sounds like someone who doesn't understand them deeply enough.
@mryodak (1 year ago)
Eliezer's push on his prediction of protein folding as evidence of his predictive powers is so funny in the context of him being an icon of the "rational thinking" community.
@JakeWitmer (5 months ago)
Kurzweil beat him to it. 😂
@kenmogibrainworld4844 (1 year ago)
Two wildly intelligent persons clashing. There is a sincere beauty in the savage exchanges.
@Iangamebr (1 year ago)
Ehh didn't like this debate honestly.
@matteol4 (1 year ago)
needs more active moderation
@TylerBerger-d8m (1 year ago)
Digressions: leave the tangentially related disagreements aside; focus on the primary dispute.
@daviddonoghue8256 (1 year ago)
Eliezer is explaining, George is trying to come up with clever questions. It's very good.
@SoaringMoon (1 year ago)
Man, I have so much to say. I wish I could talk to both of them. The best I can do is make a response video, and I'm highly considering it.
@antonmaier2263 (1 year ago)
I applaud you for a very civilised debate.
@bjk837 (1 year ago)
“Woah woah woah…is this one of the little moon A.I.’s you have orbiting around you that’s going to go up against the Sun or do you think you have the Sun MR. PLANET!?” 😂
@mackiej (1 year ago)
Enjoyable debate. Suggest adding a bit more structure to the next one - for example, a resolution that is either supported or opposed. If that feels too formal, then pick 3-4 designated topics.
@uncommonsensor (1 year ago)
Yeah, and I could see this being a great series if more structure is introduced.
@JakeWitmer (5 months ago)
@@uncommonsensor Smart people who understand the basic ideas inherent in comp sci and AI, talking AI... I'd watch!
@aaroninternet4159 (1 year ago)
These AI chat bots are pretty smart! Very interesting conversation between the two of them.
@NextGmind (1 year ago)
Why does George keep saying timing matters? It's like he would only worry if timing got in his way… What about our children? And humanity in the future?
@greenbillugaming2781 (1 year ago)
Because humans will equally grow capable of either handling or coordinating with super AI.
@CaioPCalio (1 year ago)
It matters because humanity gets smarter and our contributions would be less significant. Not sure why it wouldn't.
@baraka99 (1 year ago)
Only 90m? We demand a 3h live session between these two. Where is part 2? We can only wait...
@thebalaa (1 year ago)
I really enjoy the irony of saying “your intelligence won’t save you against a bear”
@rolfnoduk (1 year ago)
It really, really does... Never go into a fair fight with a bear; always get professionals with guns... rangers, police or army should do.
@JakeWitmer (5 months ago)
@@rolfnoduk ...or non-professionals with guns... they've prevailed against bears (and muggers, and rapists, and Juergen Stroop, etc.)
@forthehomies7043 (1 year ago)
Watched the recording. Awesome debate, thanks man. I really enjoyed getting to hear Eliezer open up for an extended period of time on this, lots of interesting points. When AGI arrives, when hardware and software form a state of being that can analyze and interact with the world on its own, it really is hard to tell what it will do. I personally think that time is no fewer than 20 years from now.
@SmileyEmoji42 (11 months ago)
Eliezer's whole point is that, unless there is some fundamentally new breakthrough in goals and alignment, it is 100% certain that AGI will destroy us. We don't know how, because we are not super-intelligent, but we do know why: because destroying humans will give a higher result for its value function than not destroying humans. I'm curious why you think that your unsupported assessment of "no fewer than 20 years" should have any updating effect at all on any reader's prior probabilities.
@kevinr8431 (1 year ago)
This was fantastic - many thanks to all who organized the event. I would only add that having the host more involved to steer, so to speak, might be worth trying.