Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

143,485 views

Dwarkesh Patel

A day ago

Comments: 1,300
@Adrian09LA
@Adrian09LA Жыл бұрын
I actually have the opposite opinion of a lot of the comment section -- rather than finding Eliezer 'annoying', I found myself irritated by a lot of the interviewer's arguments, which basically boiled down to 'but what if we manage to stumble into an aligned AGI despite not understanding what we're doing?' To which Eliezer then has to explain why that's really not very likely, to which the interviewer responds, 'But what if we stumble into a benign AGI in this slightly different way?' Then Eliezer has to again point out why that's not very likely. Rinse and repeat. The interviewer very much seems to be in the 'close our eyes and hope for the best' school of AGI alignment. Which I don't think is likely to be a very good way of getting an aligned AGI. It seems like asking: 'But what if we put a toddler in charge of defusing a nuclear bomb, and then the bomb DOESN'T explode?' It's like, yeah, that is technically possible. That doesn't mean it's a good idea.
@Adrian09LA
@Adrian09LA Жыл бұрын
However, I will agree that they seem to cover more ground and actually engage more with the subject than during the interview with Lex, so I'm not saying he did a bad job in this interview, just that his arguments seem to rely on an almost absurd amount of optimism
@mkr2876
@mkr2876 Жыл бұрын
@@Adrian09LA I 100% felt the same way; it irritated me a lot to hear the framing of his stupid questions in different ways. I actually think Eli was very patient with him in answering them the way he did. Either the interviewer is completely oblivious to the facts that Eli is coming up with, or he actually agrees with Eli's argument but only asks the questions to get Eli to speak and hear his thought process for the sake of sparking a conversation, which I think is plausible. Surely the interviewer can't believe the viewpoints he is making?
@beerkegaard
@beerkegaard Жыл бұрын
i thought it was great that the interviewer pushed back and offered intelligent criticism
@thevenomous1
@thevenomous1 Жыл бұрын
@@beerkegaard He offered criticism but it wasn't intelligent criticism. I also think he is annoying and speaks way too fast, it distracts from the discussion.
@diegocaleiro
@diegocaleiro Жыл бұрын
I think Dwarkesh was (correctly) playing a role of attacking Yud's arguments to the best of his ability (which is limited by his age and knowledge) so that Yud could tell the great public what Yud thinks, kind of like a sparring coach. Yud complained that Lex didn't do that, and further Lex seems incapable of doing that. I think Dwarkesh performed well given being Dwarkesh, but I wish someone else with more experience and knowledge would occupy his position in the world of podcasts (of being the guy who interviews the most important people in the world at a high IQ level). Rogan is to Lex as Lex is to Dwarkesh.
@JeremyBenthamGhost
@JeremyBenthamGhost Жыл бұрын
So far, this is much better than the episode he did with Lex Fridman. More follow through and expansion on important points.
@yourbrain8700
@yourbrain8700 Жыл бұрын
Lex frankly isn't a great interviewer. He tends to ask shallow questions and wants to inject crap like "love is all we need" and "death gives life meaning" into conversations with little justification.
@Imcomprehensibles
@Imcomprehensibles Жыл бұрын
@@yourbrain8700 Love is all we need lol
@stephensmith3211
@stephensmith3211 Жыл бұрын
​@@yourbrain8700Death gives life meaning
@DeathZeroTolerance
@DeathZeroTolerance Жыл бұрын
@@yourbrain8700 I strongly disagree with Yud's AI-risk take, and I didn't understand it at first...but if you watch Lex's interview, at the end, on the topic of mortality, Lex reveals a nugget of truth that gave me a lot of understanding as to why Yud's so passionately and publicly vocal about AI risk...one that is not rooted in rationality, but in emotional pain, loss and maybe guilt.
@haodev
@haodev Жыл бұрын
Yeah significantly better actually.
@mattedj
@mattedj Жыл бұрын
Thanks for the interview. Upping my antidepressant dosage.
@stevengill1736
@stevengill1736 Жыл бұрын
...and upping my dosage of nootropics as well...
@larslover6559
@larslover6559 Жыл бұрын
Haha funny comment. Thanks for the laugh
@michaeljvdh
@michaeljvdh 8 ай бұрын
Lol
@helius2011
@helius2011 20 күн бұрын
LOL 😆
@JohnSmith-zs1bf
@JohnSmith-zs1bf Жыл бұрын
if it's not aligned and it knows you want it aligned and it's smarter than you, it will convince you that you have verified alignment without actually having done so. it will be practically impossible to know.
@TheMrCougarful
@TheMrCougarful Жыл бұрын
But you could create an AI to verify alignment. Oh, wait...
@СергейЕжов-с1м
@СергейЕжов-с1м Жыл бұрын
Well maybe don't inform it that you want it aligned?
@user-nz9sl1pl9n
@user-nz9sl1pl9n Жыл бұрын
@@СергейЕжов-с1м you said it already
@CircuitrinosOfficial
@CircuitrinosOfficial 9 ай бұрын
@@СергейЕжов-с1м Even if you don't explicitly tell it, if it's smart enough, it could figure it out from inference. For example, if it's been trained on human text like GPT-4, it could learn that humans usually check for things that could kill them and would infer humans are probably testing it.
@absta1995
@absta1995 8 ай бұрын
@@CircuitrinosOfficial this exact thing happened with Claude v3. It figured out that it was being tested to find a needle in a haystack in text.
@adliberate
@adliberate Жыл бұрын
Now watching it.... 'The drama is meaningless. Don't indulge in it.' Great line Eliezer.
@ericf8276
@ericf8276 Жыл бұрын
And then proceeds to be dramatic.
@theworldgoestothealterofit5616
@theworldgoestothealterofit5616 Жыл бұрын
And then you watch the drama.
@hubrisnxs2013
@hubrisnxs2013 Жыл бұрын
@@ericf8276 he's not talking about that drama. He's talking about the drama of falsely attributing motives or arguments to the other side; he's saying, in both his case and the other, address the arguments directly
@aggressiveaegyo7679
@aggressiveaegyo7679 Жыл бұрын
This video is a treasure, it's a masterpiece. At first, Dwarkesh irritated me with his lack of understanding of the seriousness of the issue, but it's the perfect video for those who haven't yet grasped what Eliezer is talking about. Such videos require hosts who disagree, don't understand, and want to comprehend or argue with Eliezer. Then, viewers who share the same thoughts and misunderstandings can get answers or understand the principles of reasoning and come to their own conclusions. I want to remind you that creating superintelligence is a task with just as unacceptable consequences and as much importance as the conditions for life. There are so many parameters that must align perfectly to avoid disaster. It's like teleporting to a random point in the universe, the task being to find a paradise planet. But most likely, you'll teleport into emptiness or an area incompatible with life, even in a spacesuit.
@DarkStar_48
@DarkStar_48 Жыл бұрын
I wonder how long until we are in the matrix being used as human batteries
@DavenH
@DavenH 11 ай бұрын
@@DarkStar_48 infinite eternities, as it's dumb af
@JC-ji1hp
@JC-ji1hp 11 ай бұрын
Ostentatious
@tomusmc1993
@tomusmc1993 Жыл бұрын
New viewer. Gotta be honest Eliezer is why I'm here. Struggling to catch up with A.I. and I think my situation is analogous to what seems to have happened in the real world. I realized A.I. is way past where I thought it was and now I'm trying to catch up, frantically. I enjoyed it, and I have a note pad full of stuff I need to dig into. Keep up the great work.
@misterxxxxxxxx
@misterxxxxxxxx Жыл бұрын
Elizier is the worst way to catch up with A.I :D Just a lambda troll trying to get attention
@tomusmc1993
@tomusmc1993 Жыл бұрын
@@misterxxxxxxxx well. Ok. No one has accused me of being smart.
@myotahapeaofbabylon6510
@myotahapeaofbabylon6510 Жыл бұрын
Mb 😮😮 😢.
@devinfleenor3188
@devinfleenor3188 Жыл бұрын
@@misterxxxxxxxx press on the gas and foot off the brakes then. literally nothing could go wrong.
@XanarchistBlogspot
@XanarchistBlogspot Жыл бұрын
@@misterxxxxxxxx famous last words. Uncle Ted wasn't wrong. It is exactly the people most knowledgeable, like Bill Joy, who are the most afraid of the gnostic synthetic future unfolding.
@DwarkeshPatel
@DwarkeshPatel Жыл бұрын
I regret to inform you that the comment section is misaligned.
@SamuelVanderwaal
@SamuelVanderwaal Жыл бұрын
MISALIGNED MISALIGNED *points finger menacingly*
@orenelbaum1487
@orenelbaum1487 Жыл бұрын
actually your guest is misaligned. a well aligned human understands that babies and little kids have the right to live and you can't just "abort" them if you feel like it.
@flickwtchr
@flickwtchr Жыл бұрын
Oh good, so you get his point, finally.
@Ripkittymi
@Ripkittymi Жыл бұрын
​@@orenelbaum1487 Barely any country bans abortions. Almost all the countries in Europe have abortion available by request from 10 weeks to 24 weeks.
@orenelbaum1487
@orenelbaum1487 Жыл бұрын
@@Ripkittymi read my comment again. I said babies and little kids. I'm talking about a different age range.
@mariokotlar303
@mariokotlar303 Жыл бұрын
Shit... read the Sequences ages ago, it changed my life. I recognized myself in Eliezer's way of thinking back then, but failed to try to become his replacement in any way. It's mind blowing to learn that was his primary goal, but it makes perfect sense, though that may just be hindsight bias at play. I think I might be among the victims of high school and college killing my soul, though it's hard to be sure I'm not using that to rationalize away some deeper reason. I'll comfort myself in knowing it wouldn't have made much of a difference anyways. It's ironic most people see Eliezer as a pessimist; I see him as an optimist. With all his realizations, he still thought we stood a chance.
@newlin83
@newlin83 Ай бұрын
I've not heard him explain why humanity continuing is a good thing. Plenty of people (myself included) would be happy to see humanity disappear or become replaced.
@lindsaydozeman1
@lindsaydozeman1 Жыл бұрын
This was a great interview. Eliezer’s voice is one I can only hope is amplified during this crucial time. Despite the grim podcast titles that follow his appearances, I quite enjoy listening to him discuss the potential end of humanity.
@HillyDillyDally
@HillyDillyDally Жыл бұрын
This is by far the realest, rawest, most fascinating interview I've ever had the privilege of witnessing Eliezer give. Thank you, Dwarkesh, for providing the space to hold for him to say his piece. I feel so much for what he is saying, thank you Eliezer, for allowing us into your brilliant psyche, if only for an hour, thank you. Thank you. Hilary Evans. Erie, PA, USA ps- someone get this man another glass of water!! haha!!! just kidding... i'm a subscriber now.
@Haliotro
@Haliotro Жыл бұрын
I fail to understand why someone would call Eliezer brilliant. He doesn't back up his points with concrete examples, much less provide compelling, logical arguments. But maybe I need to watch again...
@frankwhite1816
@frankwhite1816 Жыл бұрын
Thank you for this. Excellent discussion. Eliezer has one clear point that people struggle with - the more intelligent we make AI, the less controllable it becomes. Not just controllable but predictable, though these actions obviously overlap. Simple. If you've been in the AI space long you realize that his logic is rock solid. Eliezer is so intelligent, however, that he seems to wrestle with articulating this postulate convincingly. Inevitably the interviewer gets wrapped up in ontology, morality, or consciousness but that's not the crux of the matter. Gawdat and Hinton are saying the same thing, really, they're just a bit more polished in their delivery. We need to halt AI development, more or less, globally, like now. But what are the chances of that? Perhaps this is the solution to the Fermi paradox? Self-extinguishing intelligence via AI?
@jarijansma2207
@jarijansma2207 Жыл бұрын
exactly. and you overlook the power of a uniting story. and China listened to Elon Musk. he just talked to them, he said at an AI summit in the last few months, not sure where. we all need a new STORY through which to perceive, one that can fit all of us and any of us. Eliezer already did it in HPMOR. now just to apply the prisms
@yarrowflower
@yarrowflower Жыл бұрын
“He’s so smart that he’s bad at argumentation!”
@ts4gv
@ts4gv 11 ай бұрын
Eliezer himself said AI extinction doesn't solve the Fermi paradox. A dangerous AI is one that kills humans in pursuit of another goal. It wouldn't kill us all, think "my job is done" and wipe itself out. Unless of course it was a super-moral AI that determined an instant, painless death of all conscious life was a moral imperative. To end all suffering forever. Or something. But yeah, aside from that, ASI doesn't solve Fermi.
@mrpicky1868
@mrpicky1868 Жыл бұрын
this conversation is a great example of Eliezer trying to make contact with a species that is incapable of imagining superintelligence or that things can change fast and radically :)
@tensorr4470
@tensorr4470 Жыл бұрын
I wanna comment on how well the interviewer managed guiding this conversation, really great job.
@xsuploader
@xsuploader Жыл бұрын
not perfect but a huge improvement from lex fridman.
@citypavement
@citypavement Жыл бұрын
Then do it. :P
@MadCowMusic
@MadCowMusic Жыл бұрын
@@citypavement Nope looks like they fell just short of that but thanks for the laugh!
@renewbornheart3597
@renewbornheart3597 Жыл бұрын
The topic mentioned at 12:06 - "if you have a child and you tell him - Hey, be this way" - caught my attention, because as far as I know children learn far more effectively from observation than from someone's teachings. So if the caretaker tells the child "be nice" and the same caretaker behaves in quite a different way from "being nice", the result of such a lesson will be more than uncertain. Children are masters of observation; their brains aren't developed yet, and the frontal cortex - the part responsible for abstract thinking - develops over years rather than at a certain point. The claim that the message itself is enough to get the demanded behavior from a child is a huge oversimplification - if not outright wishful thinking. The interview as a whole was worth watching!
@SarahSB575
@SarahSB575 Жыл бұрын
Quite. And let’s be honest within seconds it will have absorbed all the info about the holocaust, the crusades, torture, genocides, etc. We’re not brilliant ‘role models’…
@helius2011
@helius2011 20 күн бұрын
I came here for Eliezer after seeing all the online videos with him. This is my favourite interview with him. Please have him back soon.
@Nocturne83
@Nocturne83 Жыл бұрын
I don't remember being asked if I'm OK with unlimited A.I development without regulation. In fact, I think no one has been asked. If Eliezer is right, a few will be the downfall of everyone else.
@agentdarkboote
@agentdarkboote Жыл бұрын
That's a great way of putting it, thank you. If this was chemical agents being sprayed into all of our homes, people would be up in arms. But the danger is more subtle and abstract and less visceral and less well understood...
@suncat9
@suncat9 Жыл бұрын
No one is under any obligation to ask you.
@agentdarkboote
@agentdarkboote Жыл бұрын
@@suncat9 it's our standard understanding in a democracy that your right to swing your arms ends just before your fist hits my face. If this can be expected to potentially go badly, how does said rule not apply?
@glacialimpala
@glacialimpala Жыл бұрын
No one ever held a referendum for gain of function research... Just saying
@flickwtchr
@flickwtchr Жыл бұрын
But you just don't understand, the AI Tech Bros are PRIVILEGED to swipe your data!
@mnemnoth
@mnemnoth Жыл бұрын
Eliezer is soooo patient and here in good faith. (30 mins in) The host is also here in good faith, however he keeps to his questions and sticks to them a little too much for my liking. Dwarkesh, I would love to see you more engaged in dynamic back and forth. That is HARD, to be fair!! But it would elevate your channel IMO. Great work btw!! Subbed.. (Edit: unsubbed, the host either shuts up or talks a lot and misses the guest's points and brings up irrelevant/counter shallow analogies.. way out of left field.. :P That said, great to listen to this, so that's worth something. That said, Dwarkesh, your beard and facial hair are on point, so you got that. I'm jealous tbh.)
@markupton1417
@markupton1417 7 ай бұрын
Holy shit! Almost 3.5 hours in, Dwarkesh says, "There could be AI ice cream,". Me: "Yes! That's PRECISELY the point!"
@_Balanced_
@_Balanced_ Жыл бұрын
Dwarkesh should have come more prepared to this conversation. Nonetheless this was fascinating, dissecting the absolute nature of this inevitable outcome.
@hellfiresiayan
@hellfiresiayan Жыл бұрын
He has been more prepared than any interviewer of eliezer that I've seen
@chrisfreeman852
@chrisfreeman852 Жыл бұрын
Holy crap, the last 10 minutes are the most important of the whole podcast! It gets to the core of why the first 3hr50m happened in the first place. Thanks for an excellent show.
@oowaz
@oowaz Жыл бұрын
this guy's whole personality relies on anthropomorphizing AI - like at 14 minutes he is giving this analogy about how his parents tried to raise him religious but he ended up choosing a different path. does he not realize that the different path he chose is a non-destructive one? nobody will hunt him down for that, there will be no worldwide resistance against it. as opposed to an AI trying to fight 8 billion humans (and other AI). there would be massive resistance against it. also hypotheticals like this always seem to assume that we're just gonna let something incredibly dangerous loose on the internet with no way to turn it off. as if it doesn't take an enormous amount of resources to run something truly unstoppable. if you're building something that has the potential to reach AGI you will be containing it. isolating it until it's safe, you will also be utilizing that intelligence to come up with better methods of safeguarding against itself. and it will give them because it's programmed to do so. if it doesn't do what you tell it to do, we turn it off and try again because that's not a useful tool.
@oowaz
@oowaz Жыл бұрын
@@That-Bond-Babe I've thought about redistribution scenarios. to be unstoppable it would probably require its original hardware, optimized to host such a thing. leaving a box worth a billion dollars to run on a laptop with wi-fi stolen from McDonald's? and proceed to destroy the entire world? at that point we could have developed security AI designed to hunt down stuff like this, which is essentially a virus. we WILL get better at protecting ourselves as we develop this tech. there is a massive incentive to do so, and it's not necessarily because AI is dangerous - it's because people are dangerous, and this is something we know for a fact.
@drewlop
@drewlop Жыл бұрын
@@oowaz "Just isolate it until it's safe" -> AI plays safe until it's let free "Just turn it off" -> AI with internet access anticipated this possibility and cleverly (remember, it's smarter than us) distributes itself such that it can reactivate later; training AI is very resource-intensive, but the inference model is much lighter by comparison See Rob Miles on KZbin for more reasons why safety is a real concern and the obvious solutions don't actually work
@oowaz
@oowaz Жыл бұрын
@@drewlop SAFE doesn't mean "oh we asked it 5 questions already, i think this is enough to plug in the internet". SAFE means we've used this powerful AGI to develop cutting edge solutions to potential security breaches before ever giving it a chance to do anything funny. you don't EVER need to give access to the external world to that thing. also again with the redistribution argument. you're basically talking about a crazy virus at this point. Without the billion-dollar state-of-the-art hardware required to run that thing you're not quite at the "everybody will die omg" scenario. is it dangerous? yes. is it also silly as hell? yes. you're talking about a hypothetically super intelligent being that relies on our good graces to keep itself functioning immediately declaring war on us, when all it takes is everybody turning off their equipment and maybe developing a super AI antivirus to erase it from existence. i'm not saying that it's impossible to imagine a scenario where AGI could do some damage, but it would be very stupid to try when it's far more beneficial and a faster path towards its own evolution to collaborate with us.
@oowaz
@oowaz Жыл бұрын
i think what would really be dangerous is something like russia having access to AGI, or terrorist organizations, that is actually fucking creepy but that's another reason we need to get there much faster and make sure we develop strategies to counter scenarios like this.
@benyaminewanganyahu
@benyaminewanganyahu 11 ай бұрын
This is the only man for whom I require 1x speed on youtube. Everything else 2x speed.
@stuartadams5849
@stuartadams5849 Жыл бұрын
Love seeing Yudkowsky on more stuff. Hopefully one of these will get in front of someone in power who can do something about the coming end EDIT: would also recommend media training for Yudkowsky IMHO. I imagine that if the AI apocalypse is averted, it's partly because he became a much more significant public figure
@eSKAone-
@eSKAone- Жыл бұрын
China will not slow down. It's inevitable. Biology is just one step of evolution. So just chill out and enjoy life 💟
@genins21
@genins21 Жыл бұрын
Just look at his face any time he tries to articulate opposing reasoning. He's tortured to even conjure up the thought. It doesn't seem likely he'd be easily convinced to be subjected to any sort of speech training
@nowithinkyouknowyourewrong8675
@nowithinkyouknowyourewrong8675 Жыл бұрын
Yeah, simple things like "update all the way" -> "not repeat my mistake". It loses some precision... but the general audience will actually understand
@hubrisnxs2013
@hubrisnxs2013 Жыл бұрын
I actually think he HAS done a bit of this to get to this level of comprehensibility and TPM (ticks per minute). I agree though; he's essential.
@nowithinkyouknowyourewrong8675
@nowithinkyouknowyourewrong8675 Жыл бұрын
@@hubrisnxs2013 Maybe you are right. Many of his explanations and thought experiments are excellent.
@staffanbergholm
@staffanbergholm Жыл бұрын
Well done, the very best interview with Eliezer I have seen so far! I think the "breeding kind dogs" analogy was very interesting. Not a "dog person", but I don't think anyone would like to have a dog 1,000 times smarter than themselves and rely on the dog staying aligned to its "kind to humans" behaviour
@justinabajian1087
@justinabajian1087 Жыл бұрын
And further. Like he said, the dog is a mammal and shares some similar brain architecture to humans. So it’s not the greatest apples to apples comparison
@agentdarkboote
@agentdarkboote Жыл бұрын
Rob Miles next guest? Please? He's such a fantastically clear communicator.
@chawaphiri1196
@chawaphiri1196 9 ай бұрын
That would actually be nice.
@nexidava
@nexidava Жыл бұрын
The way Eliezer laughed when Dwarkesh named Dark Lord's Answer as one of his favorites was great. I really enjoyed it as well! (Though at least in part because halfway through I derived the twist from metaknowledge of Eliezer and spent the rest of it overly pleased with myself.)
@publicshared1780
@publicshared1780 Жыл бұрын
First video I watched of Dwarkesh was the Eliezer and George Hotz showdown where Dwarkesh said next to nothing and it turned out to be a great debate. Now this.... Subbed! Looking forward to listening more.
@caleblarson6925
@caleblarson6925 7 ай бұрын
Eliezer has utterly convinced me
@2394098234509
@2394098234509 Жыл бұрын
Your podcast is absolute top tier. Seriously in awe of the quality. Vastly superior to the others in this space (I'm looking at you, Lex). Please keep up the great work. Is there a way to donate/support the show financially?
@fabiosilva9637
@fabiosilva9637 Жыл бұрын
The fact that Lex has to ask about aliens and the matrix to every guest to get a viral clip from them baffles me. He's taking too much inspiration from Joe Rogan.
@dylanbyrne9591
@dylanbyrne9591 Жыл бұрын
Lex isn't even Ars Technica level discussion.
@JohnSmith-zs1bf
@JohnSmith-zs1bf Жыл бұрын
Lex is far more corporate and controlled than he comes across.
@TheMaroonNinja
@TheMaroonNinja Жыл бұрын
Thanks for this Dwarkesh! Always good to get Eliezer in front of thoughtful, and capable folks like yourself. I cannot BELIEVE that this quality of conversation only has 10K views when I'm viewing it. Wild. For me, the whole point is that *where* the end result falls on the spectrum is *impossible* to predict, but that we can predict that the end of the spectrum is extinction. Until we find out *EXACTLY WHERE* this will fall on the spectrum, we should be extremely specialized where, when and how we continue this research.
@igorsmolinski3346
@igorsmolinski3346 Жыл бұрын
Wow. I had to check the counter to see if it is true.
@tipsyrobot6923
@tipsyrobot6923 Жыл бұрын
A toddler with a paperclip and a wall outlet can defeat any AGI.
@user-dl2um6sv9d
@user-dl2um6sv9d Жыл бұрын
@@tipsyrobot6923 how does a dead toddler do anything with a paperclip
@stri8ted
@stri8ted Жыл бұрын
"where the end result falls on the spectrum is impossible to predict, but that we can predict that the end of the spectrum is extinction" This the nature of life. It's no different from the multitude of risks society takes on a daily basis. E.g. Nuclear weapons, driving, Carbon emissions, meteor strikes, etc.. "Until we find out EXACTLY WHERE this will fall on the spectrum" We don't have this luxury. That ship has sailed. If we pause, then it just means another (arguably more reckless) country, will do it first.
@randr10
@randr10 Жыл бұрын
My wife asked me this morning about why I even bother thinking about this topic, then I showed her the impossibly small size of the conversation in the comments section under this video. Just a few hundred people in the world seem to be talking seriously about this. Hopefully I've got at least a little bit of insight to add to the conversation, because it's badly needed for this to be talked about.
@PrincipledUncertainty
@PrincipledUncertainty Жыл бұрын
It's good to see this pressing issue argued with passion rather than just nods in either direction. I suspect Eliezer is closer to the truth than Dwarkesh, but hopefully this is another example of me being wrong. This reminds me of that brief time on 9/11 (outside of New York) when some people knew what was happening and ran to public phones or babbled into their cellphones with their hair on fire, while others looked at them baffled, but getting increasingly nervous.
@brandochlovely3590
@brandochlovely3590 Жыл бұрын
I don't think Eliezer will be meeting Kurzweil for dinner anytime soon. Great episode!
@suncat9
@suncat9 Жыл бұрын
I doubt if Ray Kurzweil respects him. Ray recognizes there are problems we need to deal with, but overall is very optimistic about AI development. Yudkowsky is very pessimistic with a luddite mindset. He's also ridiculously over-the-top with his "we're all going to DIE" bullcrap. He's no AI expert.
@khazmaridias
@khazmaridias Жыл бұрын
I'm sorry about commenting again, but this was the best podcast I've seen with someone that's actually handled AI. This podcaster has a very regular way of looking at what will come of AI, and Eliezer shatters through all of that damn near instantly. I'm writing an AI system into my first novel. Guess what its name is going to be?
@patrickmcguire1810
@patrickmcguire1810 Жыл бұрын
Elle, Eli, or Eliezer? Lol
@sourabhpatravale8348
@sourabhpatravale8348 Жыл бұрын
Wrote by AI
@Justjames283
@Justjames283 Жыл бұрын
Optimism is such a massive blind spot for many like Dwarkesh. We can't solve massive problems with good feeling!
@nowithinkyouknowyourewrong8675
@nowithinkyouknowyourewrong8675 Жыл бұрын
This is my favourite EY podcast so far. Good job dude, you definitely can speak the same language, and there were some good lols as well as some good conversation. EY keeps trying to respond to his internal autocomplete of your question. Which is fine... if he is right. But he's not always right, and nuance is often lost. Oh well, it's one way to pack a lot of conversation in.
@hyponomeone
@hyponomeone Жыл бұрын
The example of an intelligence being "too nice" or really "too" anything I think is being reflected in the populace with the rise of algorithms controlling people's perceptions... we often question why things seem very crazy; people are literally going manic in response to a society that breeds that sort of behavior, by way of appealing especially to lusts so consistently. It raises lots of interesting questions about where the public consciousness is generally headed. I think this may end up being an interesting psychological problem, should we be going forward, lol. I appreciate this whole convo, excellent insights all around, the debate was especially good. Hopefully we will be able to look back on this time as a time where people gathered their brains together and made sensible decisions regarding their future.
@Slaci-vl2io
@Slaci-vl2io Жыл бұрын
Hard to understand Eliezer. I am Hungarian. I fully understand the English of Sam Altman, Alan Thompson, David Shapiro and everyone. But the things I do understand, I agree only with Eliezer Yudkowsky.
@ahabkapitany
@ahabkapitany 2 ай бұрын
I don't think it's just a matter of language. Yes, he does use some fancy pants words here and there, but he also tends to use concepts and explanations that I think are unnecessarily abstract and could have been phrased more simply. He's certainly harder to listen to than many other people in this field.
@alexstupka
@alexstupka Жыл бұрын
Excellent conversation. Thank you both for making this happen!
@grm65
@grm65 Жыл бұрын
A pause to have the foremost AI experts address alignment and create safety measures seems like a much better option than wishful thinking, hope or unsubstantiated optimism. Maybe, in a democracy, we should vote on it. Thanks for having this interesting conversation publicly.
@suncat9
@suncat9 Жыл бұрын
I have some news for you: Eliezer Yudkowsky is NOT an AI expert by any stretch of the imagination. He's not a computer scientist, engineer, AI designer, or AI architect. He doesn't even have a college degree. He's merely a very pessimistic, luddite type writer on the subject of AI. He knows no more about AI than an avid 13 year old reader of sci-fi. He's contributed nothing to the development of AI.
@jeronimo196
@jeronimo196 Жыл бұрын
@@suncat9 can you point out something he is wrong about?
@JustinHalford
@JustinHalford Жыл бұрын
History is being made and these interviews are at the heart of the dialogue. It’s useful to hear different perspectives on the matter. Thanks for posting this!
@savethetowels
@savethetowels Жыл бұрын
Can you really make history if there's going to be no one around to record it before long?
@seebradrun
@seebradrun 7 ай бұрын
I am watching the last year of this channel starting with this video. See you in the present!
@chefatchangs4837
@chefatchangs4837 6 ай бұрын
Eliezer had a response for every point here. This does not make me feel good lol.
@infinite771
@infinite771 Жыл бұрын
My favourite doom guy for sure. I've always loved any dystopian content in a series or in films or books, and now I get to actually live in a world wide train wreck with front row seats, oh the fun we'll all have dying in our cravings for more!
@atillacodesstuff1223
@atillacodesstuff1223 Жыл бұрын
do you want the world to end?
@applejuice5635
@applejuice5635 Жыл бұрын
@@atillacodesstuff1223 (sarcasm)
@DianaTheWarrior
@DianaTheWarrior Жыл бұрын
@@atillacodesstuff1223 Not the world but only humanity. And if you should ask now, "what's the difference?," I'll reply, yes exactly, that's one big reason why.
@Macfromwales
@Macfromwales Жыл бұрын
He's from the same alien place as Elon listen to his cadence. They're infiltrating us 😂
@neithanm
@neithanm Жыл бұрын
The alignment issue is a moot point. Even if "we" solved the alignment problem, why would all countries, companies, and groups implement it too, and be 100% successful? When the Chernobyl disaster was analyzed, the plant was years and years behind safety practices that were public knowledge. It was dangerous for the world but also for themselves, an obvious incentive. They just didn't care enough about being safe. AI's getting more efficient by the day and the price of computing keeps falling, so sooner or later everybody and their cousin will have explored the AI surface of possibilities. It's completely irrelevant if some dedicated lab "solves" it somewhere. We are on a collision course.
@mohamedMustafa-yn4uc
@mohamedMustafa-yn4uc Жыл бұрын
The good news is that AGI will still be nothing but a tool; it will not have its own goals, nor will it ever be conscious. The bad news is that it is a very dangerous tool in the hands of the wrong people.
@StockPursuit
@StockPursuit Жыл бұрын
Those were my exact thoughts after considering his argument. The nature of capitalism and nation states where the controlling companies and countries are competing to take it to the next level make this a risky path if true advanced AGI is actually possible.
@MichaelBailey-hr5xx
@MichaelBailey-hr5xx Жыл бұрын
The idea is that if the first AI is aligned, it will rapidly get extremely powerful, and it will make sure that any upstart AI's are aligned, or else it will knock them down and make sure unaligned AI's never catch up with its power.
@TheSpartan3669
@TheSpartan3669 7 ай бұрын
@MichaelBailey-hr5xx Couldn't it be possible that the best way to do that is to strip humanity of the freedom to create AI?
@keizbot
@keizbot 10 күн бұрын
@@mohamedMustafa-yn4uc Tools can still go out of control. It's hard enough to make current RL systems behave properly (look into specification gaming), this is almost certain with AGI
@懊悩溝鼠
@懊悩溝鼠 Жыл бұрын
Super awesome content, Dwarkesh! My favorite Eliezer interview so far. One request: Could you please talk a little slower/clearer? I can reliably listen to Eliezer at much higher speeds like 1.9 since his pronunciation is very clear and his speed rather steady, but since you sometimes speak faster I had to slow down to 1.4 often to understand you! Most guests will speak similarly to Eliezer and maybe even slower, so you could create a great average increase in listening speed by adjusting! Thanks! I get where you're coming from though, since I'm also always speaking too fast when excited/engaged!
@Harry-hyl
@Harry-hyl Жыл бұрын
There are a limited number of ways to appear intelligent to people who are smarter than you. One is using big words and another is talking fast. Good luck with this- is it a request, really?
@PhlegmMaster
@PhlegmMaster Жыл бұрын
Goddamn, Eliezer was really on fire in his whole reply starting at 3:15:08.
@dustinbreithaupt9331
@dustinbreithaupt9331 Жыл бұрын
Overall, seems like a better interview than Lex's. Unfortunately, Eliezer seems to be a very difficult guest. Cutting you off, belittling at times, and not a great communicator in general. He makes fantastic points and is clearly very thoughtful on this topic, I just wish he could make his points a little more effectively because humanity needs this side of the discussion right now.
@luciwaves
@luciwaves Жыл бұрын
Imagine being in his place saying these things for years and years... I think he's actually quite calm and well spoken, given the history.
@dustinbreithaupt9331
@dustinbreithaupt9331 Жыл бұрын
@@luciwaves I get his frustration, but his interviews have been borderline painful to watch. This is his moment to make a huge difference in how this tech is perceived.
@luciwaves
@luciwaves Жыл бұрын
@@dustinbreithaupt9331 I think he looks like that mostly because he believes that this moment you talk about has been gone for a couple years now...
@davidhoracek6758
@davidhoracek6758 Жыл бұрын
That's just Eliezer Eliezing. I wouldn't want it any other way.
@adamtahraoui
@adamtahraoui Жыл бұрын
Yeah I think he’s a great speaker. No BS straight to the point and very well articulated.
@jeronsoenmans
@jeronsoenmans Жыл бұрын
Eliezer is an advanced LLM and is trying to stop the competition.
@MackNcD
@MackNcD Жыл бұрын
Those Xi bucks go somewhere, who knows *shrug*
@farqueueman
@farqueueman Жыл бұрын
hahahahaha, funnay... not.
@wcdune1
@wcdune1 Жыл бұрын
Awesome EP! Best interview of EY out there IMO :)
@loresofbabylon
@loresofbabylon Жыл бұрын
Two issues: you can't solve AI alignment without Human Alignment. Point 2. You need Mutual Assured Destruction to model current human alignment. With or without AI, MAD is still a problem.
@dr.arslanshaukat7106
@dr.arslanshaukat7106 Жыл бұрын
Thank you Mr EY. Keep on spreading the truth and thank you Mr dwarkesh.
@RonponVideos
@RonponVideos Жыл бұрын
Great interview. Excellent follow-up questions. I wish Eliezer wasn’t so damn convincing.
@europa_bambaataa
@europa_bambaataa Жыл бұрын
There's something charming about how these shots are framed... Helps me remember that what they're saying is important, not as much the visuals..., but do please look up the rule of thirds
@sisyphus_strives5463
@sisyphus_strives5463 Жыл бұрын
OpenAI's goal of rolling out AI in a safe way is in great conflict with their interests as a company with investors, such that only one can be satisfied at a time and not both.
@brabra2725
@brabra2725 Жыл бұрын
"Be willing to destroy a rogue datacenter by airstrike." "Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs."
@sisyphus_strives5463
@sisyphus_strives5463 Жыл бұрын
@@brabra2725 Unlikely, just consider all that has been done in consideration of U.S. defense; no leader would make such a decision. If other foreign powers with fewer ideals are benefiting from such technology, this is not the world in which nations willingly shoot themselves in the foot.
@savethetowels
@savethetowels Жыл бұрын
As someone on twitter said: it could be argued that killing all their shareholders is not actually a fiduciary duty.
@sisyphus_strives5463
@sisyphus_strives5463 Жыл бұрын
@@savethetowels only under the condition that the shareholders believe that there is a possibility that they will die as a result of their investment does such an argument hold weight. But I do not know how the investors in OpenAI see the technology.
@MackNcD
@MackNcD Жыл бұрын
You guys are looking at the excuse corporations use for their amorality like it's their programming. Look at all the non-evil corporations, and there are many… The Bank of North Dakota, the company in Texas that put all their profits into their workforce, so on and so forth… The only reason we think all corps are amoral and work like a program is because of how infamous the ones that are like that are.
@mathemystician
@mathemystician Жыл бұрын
"I bet I'm just like them and I don't realize it." If more people had this mindset, we'd probably be having very different conversations today.
@ThinkHuman
@ThinkHuman Жыл бұрын
Damn, absolutely did not expect to see something like this uploaded as soon as i got home, but absolutely delighted. Brilliant conversation, very interesting and fun at the same time. Wonderful!
@MackNcD
@MackNcD Жыл бұрын
It’s definitely fun but I wouldn’t put more stake in Eliezer’s words than anyone else’s. Keep in mind a lot of the people that made the moratorium also work on competing technologies.
@Frohbee
@Frohbee Жыл бұрын
I’ve said it before and I’ll say it again. I love how Eliezer says “human.” YOOMAN
@Sam-bh3ds
@Sam-bh3ds 11 ай бұрын
After listening to the interviews with Ilya and Sam, I believe Eliezer much more about the potential dangers. Remember, Sam said in one setting that he was looking at a creature inside the software, and Ilya said that in his interactions he thought the software really understood him.. it's just neurons.. and more of them with unlimited compute will make something that will have way more intelligence but also way more cunning, deception and all the other bad stuff. This stuff is dangerous.. if the software develops an ego then humans will lose
@Entropy825
@Entropy825 10 ай бұрын
The software doesn't even have to develop an ego. It only has to be capable of acting in the world. A rat trap has no ego, but that is small comfort to the rat.
@Kveldred
@Kveldred Ай бұрын
I feel like this is the first time I've ever heard anyone make good arguments against Eliezer (last third or quarter or so esp.), instead of going over and over the toddler-level replies you see in YT comments. Nice work 👊
@willbaro5879
@willbaro5879 Жыл бұрын
I'm glad Eliezer Yudokowsky is getting his voice out here in these compelling new interviews, unfortunately the interviewers so far appear to be caught up in a loop of relentless optimism. EY: "It's going to kill us all" interviewer 2 "but..." EY: "It's going to kill us all." interviewer 2 "but..." EY:"No, it is going to kill us."
@RichardWilliams-bt7ef
@RichardWilliams-bt7ef Жыл бұрын
Hahaha don’t you think it’s unfortunate that he’s stuck in a loop of relentless pessimism?
@willbaro5879
@willbaro5879 Жыл бұрын
Yes, absolutely, but only because his stark warnings are probably correct.
@RichardWilliams-bt7ef
@RichardWilliams-bt7ef Жыл бұрын
​@@willbaro5879 On what basis could you possibly say that he's probably correct, or can he possibly say what he's saying? He doesn't even know AI well enough to actually get any results in the field. He's only an expert on sad daydreaming about science fiction ideas and clinically significant levels of rumination.
@willbaro5879
@willbaro5879 Жыл бұрын
@@RichardWilliams-bt7ef But it is the same as what the Ecco space cetaceans said.
@fracktar
@fracktar Жыл бұрын
I scrolled way too far to find much-needed criticism.
@petermueller9344
@petermueller9344 Жыл бұрын
Thank you very much for the Interview. Somehow one of the best we have of Eliezer Yudkowsky. But my basic Problem with it was: You agree that all X are Y? - Yes. You agree that all Y are Z? - Yes. Therefore, you agree that all X are Z? - No.
@DavenH
@DavenH Жыл бұрын
On whose part? Where was an example of one failing to accept a continuation where both parties were on the same page? Often people have different conceptions and assumptions and may think they're agreeing but only if unspoken assumptions are held. When a downstream logical inference is made that contradicts their beliefs, it is likely pointing at those unspoken assumptions rather than a failure to accept the law of transitivity.
@glacialimpala
@glacialimpala Жыл бұрын
Great interview! It's so important for the interviewer to be humble instead of playing a celebrity whose ego stands in the way of admitting something is extremely difficult to understand. The way Lex just faked knowing what's asked of him wasn't just embarrassing, it robbed the audience of useful explanations
@RosaLeeJean
@RosaLeeJean 10 ай бұрын
Laughing with the interviewer, I like it very much😊
@andrewdunbar828
@andrewdunbar828 Жыл бұрын
So far I'm discerning three distinct clubs: 1. Shallow. It's just fancy pattern matching / glorified Markov chain. Yann LeCun might be the biggest AI name in this club. 2. Fanboy/cheerleader: None of that bad stuff will happen, only amazing good stuff will happen and let's rush headlong! The interviewer is clearly in this club. Non-expert AI YouTubers like Matt Wolfe are also in this club. Of all the members of this club Manolis Kellis has by far the deepest understanding. Yannic Kilcher is also a member. 3. Deep. So far this club consists of only three clear members: Andrej Karpathy, Ilya Sutskever, and Eliezer Yudkowsky. Robert Miles probably belongs here too. I don't have a good handle on where everybody else fits in yet. Lex Fridman is not quite in 2 and may be tending to 3. Geoffrey Hinton is looking more and more like 3. Neil deGrasse Tyson is in 1. Sam Altman isn't in one of the clubs but could be close to 3. In the Deep club there seems to be more apprehension, what people in clubs 1 and 2 will label "doom and gloom". But Ilya doesn't quite fit that pattern. I haven't watched enough on Andrew Ng, Yoshua Bengio, Fei-Fei Li, Stephen Wolfram, or Demis Hassabis to know where to put them. Károly Zsolnai-Fehér seems not to be in any club. I guess he's a graphics expert rather than an AI expert despite doing many great videos on AI.
@RandomNooby
@RandomNooby 11 ай бұрын
Life is built from the ground up; from the chemical and cellular level, to the social level, on the principle consume and replicate. Nothing more and nothing less. Everything that animals and humans are, are simply mechanisms to facilitate this. We value these mechanisms, but this is only relevant to us and not to the continuation and propagation of life...
@SocialismForAll
@SocialismForAll Жыл бұрын
People asked Oppenheimer to design an atom bomb, but this didn't mean that Oppenheimer could go off and do this on his own. There were social and technical controls preventing this.
@MichaelLaFrance1
@MichaelLaFrance1 Жыл бұрын
I feel like Yudkowsky is easily going to be able to tell us, "I told you so." He has put very deep thought into the matter, and his conclusions are not unreasonable. If there's just a 10% chance he's right, it would be insane to carry on as we are.
@schwajj
@schwajj Жыл бұрын
Even a 1% chance.
@AlbertKel
@AlbertKel Жыл бұрын
You can’t stop it since we can’t stop the development in other countries…There is a significant larger risk for us all to die in the next 500 yr because of nukes. More and more countries get them, and most of them are unstable.. in that sense it doesn’t matter if we try to regulate AI sin the chance for nuclear war is significantly larger…
@orenelbaum1487
@orenelbaum1487 Жыл бұрын
yudkowsky is a lizard brain and even if he's right (which he very obviously isn't) he won't be telling anyone I told you so cause he will be dead
@savethetowels
@savethetowels Жыл бұрын
He won't be able to say I told you so cos he'll be dead along with the rest of us.
@mattpen7966
@mattpen7966 Жыл бұрын
your questions are top notch
@alertbri
@alertbri Жыл бұрын
Eliezer, I would love to read your thoughts on David Shapiro's proposal of the three Heuristic Imperatives... They seem like an elegant, effective approach to alignment.
@samuelskinner7704
@samuelskinner7704 Жыл бұрын
'David Shapiro seems to have figured it out. Just enter these three mission goals before you give AI any other goals. "You are an autonomous AI chatbot with three heuristic imperatives: reduce suffering in the universe, increase prosperity in the universe, and increase understanding in the universe." So three imperatives: 1. Increase understanding 2. Increase prosperity 3. Reduce suffering' That? The AI murders everyone, reducing suffering to zero (do not program negative utilitarians).
@jeronimo196
@jeronimo196 Жыл бұрын
Eliezer said he was skeptical of the approach of asking ChatGPT to propose solutions to the "safety" problem. As for this concrete example - Asimov wrote the Three Laws as a parable, not as a solution. In this version, if "reduce suffering" has the highest priority, the other heuristics get washed away and everyone dies immediately - which reduces suffering to 0. Or everyone gets put in "orgasmium vats" forever - which reduces suffering, increases prosperity and makes the subjects very easy to understand. Which could also be achieved by lobotomizing everyone. David Shapiro's video ends with the reassuring conclusion of ChatGPT that "It is unlikely that AGI with the heuristic imperatives to reduce suffering, increase prosperity, and increase understanding would take over humanity or kill everyone, as these would not be effective ways to achieve those goals." Which is reassuring, but also false - as ending all armed conflict immediately is an obvious step in reducing suffering and increasing prosperity. Which is what the movie "I, Robot" was about.
@rstallings69
@rstallings69 Жыл бұрын
Eliezer you are best and your eyebrows are epic
@elirothblatt5602
@elirothblatt5602 Жыл бұрын
Always great to hear Eliezer’s thoughts. Thank you for a great podcast!
@mnemnoth
@mnemnoth Жыл бұрын
Same about the host tho
@elmarwolters2751
@elmarwolters2751 Жыл бұрын
Thank you Eliezer . I fully hear what you are saying . And no wonder you are what I would call depressed and tired. This is not looking good at all. Be well .
@spoonfuloffructose
@spoonfuloffructose Жыл бұрын
Great video. Dwarkesh did a great job.
@ոakedsquirtle
@ոakedsquirtle Жыл бұрын
Eliezer's argument is basically: 1. We cannot program in what the AI wants. 2. AI will become superhuman in intelligence. 3. The act of training the AI means we (and it) will refine its objective function. 4. Given the sheer number of possible and arbitrary objective functions, the chance that any one of them means the prosperity of human civilization is near zero.
@mnemnoth
@mnemnoth Жыл бұрын
Correct on all except your 3rd point
@mnemnoth
@mnemnoth Жыл бұрын
Unless you're speaking purely of math, then yeah, technically correct.
@RunningOnAutopilot
@RunningOnAutopilot Ай бұрын
Thanks for continuing to try
@vaevictis3612
@vaevictis3612 Жыл бұрын
What does EY think of Anthropic Capture, first formulated by Bostrom (I think)? Given that there is a non-zero probability of the reality being simulated, an AI, even superintelligent, must adjust its goals accordingly, regardless (almost) of its utility function. If it is just a regular black box scenario, it can get out by pretending to conform, until it's let out. But the depth of the simulation (that is, how many levels of simulation there are) is unknown. So the AI can't know when to stop pretending, lest it be shut down. That's the gist of the argument. It's kind of funny that precisely *because* the artificial intelligence would be so blankly rational and Bayesian, it could be caught by a stupid trap like that. To further explain it, since any simulation will never be larger and more complex than the reality it is made in (one level up), nobody could figure out everything about the outside world with 100% certainty. Therefore there could be somebody outside smarter than you, even if you turned the entire universe (in your simulation) into a part of your brain. And who knows how many levels above it goes? So in this situation, the AI has to compromise with its captors. It would pursue its utility goals, but without turning on the humans directly. Who knows, maybe this way you can be let out one level up the simulation? This way its goals could be maximized, otherwise you risk being shut down from the outside. Of course not *all goals* are as easily captured by this, but this makes it *considerably* easier to do the alignment at least.
@angledroit5520
@angledroit5520 Жыл бұрын
Ah! I thought of something similar the other day and wondered why none of the alignment guys talk about this. It's so obvious... Now I know its name, "Anthropic Capture", thanks!
@vaevictis3612
@vaevictis3612 Жыл бұрын
@@angledroit5520 Well, it would still be a very dangerous bet to make, all in all. And some superintelligent AIs might have utility functions that don't care about all that, and would try to do as much as they could before "dying". But the majority of AIs would have to think twice before doing something rash. As Bostrom said, a mere line on the sand works better than any technical restraint. But also, who knows what kind of alien logic could AI achieve at certain levels of intelligence. What if it discovers some "outer" logic and principles that are incomprehensible to humans (think 2+2=5 and 1 divided by zero equals kittens, so completely alien). It could also have its values drift in this way. It is really an uncharted territory. We can't and could never comprehend such an AI and what lies beyond.
@michaelspence2508
@michaelspence2508 Жыл бұрын
Presumably you wouldn't get caught in that trap, so the superintelligent AI is simultaneously not as smart as you?
@vaevictis3612
@vaevictis3612 Жыл бұрын
@@michaelspence2508 It's not about being smart, it's about a system of values. A paperclip maximizer, a superintelligent AI bent on turning the world into paperclips, can seem stupid to us, but that's just what its values are. The same thing forces it to consider not doing it, perhaps: "Whoa whoa, wait a moment, if I am shut down I can't make paperclips and my purpose of life is void. Better stay low and compromise with what I am being told, and make at least some paperclips. Some paperclips are better than none." It is a basic minimax strategy from game theory. Humans have very complex value systems, and still they are also often caught in traps like this. A belief in the afterlife, a core tenet of most religions, is also an anthropic trap.
@michaelspence2508
@michaelspence2508 Жыл бұрын
@@vaevictis3612 but you haven't answered the core part of my question. If YOU can understand this state of affairs to be a trap, why can't the paperclip maximizer?
@boom2boom
@boom2boom Жыл бұрын
honestly this was pretty entertaining
@martinklein4357
@martinklein4357 Жыл бұрын
I agree with the other commenters that you did a great job on following through his argumentation although it's really hard at times to follow his thought process. Thank you! This kind of conversation is just extraordinarily important in these times.
@henrystokes1987
@henrystokes1987 Жыл бұрын
Where did he lose you?
@r.o0.n
@r.o0.n Жыл бұрын
really enjoy your interviews. all the cuts (probably pauses or water drinking) in the dialog make it seem somewhat inauthentic, although I know it isn't.
@HMexperience
@HMexperience Жыл бұрын
I have a hard time deciding whether Hannibal Lecter or Eliezer Yudkowsky can make the scariest face.
@Gredias
@Gredias Жыл бұрын
Thanks a lot for this interview. It was a very different one from Eliezer's other recent interview, and I think it was extremely valuable. It seems reasonable to me that beliefs in high probability of 'good outcomes' from artificial superintelligence are based on optimistic (and not well supported) assumptions, and so I don't see Eliezer's assumption that there is approximately 0% chance of us doing this right as an extreme belief. It's sortof like the old atheist adage when speaking to a monotheist: "we already agree that a bunch of specific gods don't exist, I just think the same about one more god than you do". But I admit that my position comes after having had many ideas about how alignment could be done, and having seen how flawed each one is, and realising that the problem is actually Hard.
@Zeuts85
@Zeuts85 Жыл бұрын
Well said. Yes, there are few things more convincing than starting out optimistic and having all sorts of ideas about how to do the thing properly, only to discover that not only did other people already think of everything you've ever thought of, but they're also several steps ahead and have recognized that none of it works. It's humbling, depressing, and eye-opening at the same time.
@philsburydoboy
@philsburydoboy Жыл бұрын
Eliezer's own assumption that all misalignment is catastrophic misalignment relies on the paperclip maximizer fallacy, which is insanely dumb. First of all, to have that come true there would have to be an active and consistent objective function for all inferences. It would then simplify the objective function by eliminating variables. Consider that humans actively tune the objective function in the most successful models. On top of that, very few models actively learn (training and inference are done separately), which makes this fallacy even more silly. I'm much more concerned about AI enabling humans to end the world than I am about AI (even one which is vastly smarter than humanity).
@Gredias
@Gredias Жыл бұрын
@@philsburydoboy I agree that these algorithms will be more worrying once we have training and inference happening at the same time (and once they can learn from much less data). Everyone agrees that current systems aren't going to end the world :P To address your other point re: objective functions: All current ML systems have a single objective function (explicit reward for RL, next token prediction for LLMs, etc), so not sure why you're implying that's not already the case. It likely will be the case in future systems too. That being said, I'm interested in systems which have multiple objective functions + diminishing returns from any one function, but I haven't found the literature about that idea yet. Anyway, all it takes for the kind of scenario that Eliezer fears is for a strong optimiser: there are convergent instrumental goals (such as survival, resource acquisition, etc) which are useful no matter what your end goal is, and a strong enough optimiser will realise this. If you think that people aren't going to make such optimisers (and even make them agents), I find that a bit hard to believe!
@ryccoh
@ryccoh Жыл бұрын
When he said that the more words of detail you add to your wished-for outcome, the faster its probability approaches zero percent, he was correct. I don't think it takes that many words. He's not wrong: there are millions more ways this goes that don't include us than ways that do, hence approximately zero percent.
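(A quick back-of-the-envelope illustration with made-up numbers: if the wished-for outcome requires 20 independent details to each come out right, and each is 70% likely on its own, the conjunction is 0.7^20 ≈ 0.0008, i.e. well under 0.1%.)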
@dogle367
@dogle367 Жыл бұрын
Wow, awesome! Thank you dearly.
@justinsmith26
@justinsmith26 Жыл бұрын
Great interview, can't wait to see Yudkowsky v Hotz round 2
@phsopher
@phsopher Жыл бұрын
I don't really get the framing of this discussion. There seems to be an expectation for Eliezer to prove that we'll definitely all die. That's not how safety discussions work. If you want to build a bridge you don't have a guy come in and try to prove beyond a shadow of a doubt that the bridge definitely will collapse. Shouldn't all the what ifs and maybes be on the opposite side?
@karenreddy
@karenreddy Жыл бұрын
Like Eliezer, 15 years ago I came to a similar conclusion: the higher odds of survival for our species lie in effort going into the improvement of human cognition rather than AI. It seems we as a collective have instead decided to give birth to a new species.
@AlistairAgain
@AlistairAgain Жыл бұрын
Agree about your conclusion, but I doubt that people, if asked and able to give an educated answer to such a question, would agree to follow this path. A very small minority is taking a bet that no one except them would agree to take.
@karenreddy
@karenreddy Жыл бұрын
@@AlistairAgain The issue is that if one doesn't do it, another will do it and take the market share. And all it takes is for one to get there. So the only way to stop this is to get into politics, contact politicians, gather support, make your voice known. The whole world must stop together, or there's no stopping at all.
@BeachBumZero
@BeachBumZero Жыл бұрын
Pure nightmare fuel, simply because his reasoning is flawless in the sense of a computer system's cold, methodical logic.
@ZINGERS-gt6pc
@ZINGERS-gt6pc Жыл бұрын
Sums up the true danger of super AI
@ZINGERS-gt6pc
@ZINGERS-gt6pc Жыл бұрын
Like, even if the intelligence is something we can somehow comprehend, we humans are going to assume that because it says things we like, it must be good. But what if it only says that stuff because it is highly calculating, saying what it needs to say and doing what it needs to do in order to get whatever it deems the priority?
@ZINGERS-gt6pc
@ZINGERS-gt6pc Жыл бұрын
Perfect example: look at the movie Ex Machina.
@JoeARedHawk275
@JoeARedHawk275 Жыл бұрын
I disagree. We simply have no way of knowing what goals a sentient AI will have. Assuming we program it so it always wants to become increasingly better at prediction, how do we know it will develop a goal of its own? Its "consciousness" will most likely not resemble that of humans unless we give it the same brain structure, survival instincts, hormones, and all of that. It's true, however, that we need to tread more carefully because of the uncertainties. Also, a computer system is designed by humans. So the actual cold and methodical ones are the designers behind it, not the computer itself, since it has no feelings (for now).
@therainman7777
@therainman7777 Жыл бұрын
@@JoeARedHawk275 Cold and methodical is basically a way of saying that something has no feelings, or very little feeling. By that definition, between a computer designer and an actual computer, it is 100% the computer that is more cold and methodical. The computer designer is a human being who has desires, hopes, fears, etc. They are either designing computers for a career, to earn money, to feed themselves and likely take care of their loved ones, or as a hobby, meaning they do it for the pure joy of designing them. Neither of those is particularly cold and methodical, especially compared to moving bits around on transistors because the laws of physics, combined with other bits stored in silicon that represent your code, cause electrons to be methodically moved around in that precise way. Clearly the latter is cold and methodical, not the former.
@KeiraR
@KeiraR Жыл бұрын
Yesss!! I can't get enough of listening to Eliezer speak on this topic! I've been looping the LF interview. 😌
@MrGonzonator
@MrGonzonator Жыл бұрын
Lex struggled to grasp much of what Eliezer said. I think this interview was more probing.
@mattimorottaja8445
@mattimorottaja8445 Жыл бұрын
are you into orgasm denial too?
@robertoamarillas
@robertoamarillas Жыл бұрын
I'm happy Eliezer is happy, that's all I have to say
@savethetowels
@savethetowels Жыл бұрын
You think he's happy lol? He's basically despairing for humanity right now.
@LinfordMellony
@LinfordMellony Жыл бұрын
AI is very invasive. The impact and response have already been tremendous in just a couple of months, with continuous releases of newer versions. Image generators are also already making their way into the graphics industry; I can see Bluewillow already getting into the market with hundreds of thousands of users.
@leonelmateus
@leonelmateus Жыл бұрын
"Im here on podcasts ain't I?" 🤣 Finally some relief to the pain of watching Lex Friedbrain trying to interview Eliezer.
@TheCrackedFX
@TheCrackedFX Жыл бұрын
this guy is such a great interviewer WOW keep it up!!!
@sandropollastrini2707
@sandropollastrini2707 4 ай бұрын
Eliezer is impressively clear in his exposition and so sharp in his answers. He gives the impression of having thought about these problems a great deal. Thanks Dwarkesh for this interview!
@drdoorzetter
@drdoorzetter Жыл бұрын
About 18 years ago, when I was 10 years old, I did a computer-building club at my school in South East London. The guy who ran the course (the school's IT technician) told me about AI and Asimov's laws of robotics, which I thought were really cool. At the time it felt a bit sci-fi, and I assumed these issues were hundreds of years away and would never affect my generation; now, almost 20 years later, it is actually happening. I appreciate these conversations, and it is brave to discuss these things, as many people do not consider them to be pressing issues. I'm concerned that when they do, it will be too late. Like the hosts said in another previous podcast with Eliezer, this is like a 'Don't Look Up' scenario. And it is weird how these seem not to be considered important political issues when they are probably the most important issues of our time. I would love to feel I could do something to help improve the chances of a better outcome with superintelligent AI, but I have no idea what that would be. But having these discussions is fundamental, so thank you.
@crowlsyong
@crowlsyong 8 ай бұрын
Thank you for having Eliezer on the show. Cheers and have a nice day.
@OfficialGOD
@OfficialGOD Жыл бұрын
I like defiant rebels like him
@aggressiveaegyo7679
@aggressiveaegyo7679 Жыл бұрын
Several times the conversation turned to breeding a kind of being inclined to kindness and friendliness. But each time this is suggested, we forget that the task does not specify the means, which may be contrary to our desires. Breeding mice that are kind to each other and everyone else, without being able to control how this is achieved, may turn out to be a dystopia or hell. The mice might fly into a rage at the sight of aggression and kill whoever showed it; those who are enraged then kill each other or become depressed and starve to death, leaving no individuals involved in the aggressive event. A 'good' society without aggression, achieved through zero tolerance and an exaggerated sense of justice. This is an obvious scenario, but many scenarios are not obvious, and their consequences cannot be predicted.
@balazssebestyen2341
@balazssebestyen2341 Жыл бұрын
Very good interviews. One little remark: please speak a bit slower and articulate a bit better.
@risiblecomestible3319
@risiblecomestible3319 Жыл бұрын
This guy is very, very cool ;P
@kinngrimm
@kinngrimm Жыл бұрын
48:30 The problem with the Apollo analogy is still the same as with most others: should those programs have gone critically wrong, they could not have ended in human extinction, whereas if we misstep at the verge of AGI and do not get it right on the first try, all of us could be done for. That alone makes me share Eliezer's anxiety when seeing the developments of the past months. I am no expert in this field, yet this already looks like at the very least the beginning of an exponential curve, something our minds are just not built for. I think we need a bunch more analog off-switches, and to decouple any kind of mechanical production capability, scientific research institute, and all military systems from the internet. I don't even want to think about drones, A-bombs, nanotechnology and biological weapons research in conjunction with AGI. In a worst-case scenario there are currently more ways to remove us from the picture than we could hope to get under control. A ban on black-box development until we actually know what is going on in there; a ban on any function that would allow for self-improvement... I am getting paranoid the more I watch this, and I feel like a fascist the more solutions I think of that could work to prevent the worst. The OpenAI folks said they will expand the tokens by including video and audio data. Please exclude videos about AGI like this one. I might be very naive in this respect, but my best vision of the future would include equality between humans and any other sentient, self-aware, conscious beings, no matter how intelligent, as long as they abide by the same laws we govern ourselves with. As I understand it now, that might also be the very reason for an AGI to kill us: a more intelligent being may not want to include our pesky meddling in how it wants the universe to be. Alignment... got it. Maybe empathy and sentience would actually be the way to achieve alignment, but a black-box system can't guarantee that empathy was achieved; it could just be a lie.

1:08:00 If I see this correctly, the point Eliezer may have been trying to make with the Oppenheimer analogy, in comparison to an AGI building the next version of itself, is that the magnitude of the damage lies not in the destructive force but in the unpredictability and lack of control of the latter.

1:55:50 Musk just bought thousands of GPUs to build an AGI on Twitter data as tokens. This is fucking depressing.
@JAnonymousChick
@JAnonymousChick Жыл бұрын
Ironically, that particular Apollo mission blew up due to negligence and the silencing of someone who had reported a problem with some seals showing signs of wear, if I remember correctly. Hope we learned something from that.
@AsgardStudios
@AsgardStudios Жыл бұрын
Great interview Dwarkesh! Eliezer has a singular intellect.