Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

141,237 views

Dwarkesh Patel

1 day ago

Comments: 1,200
@Adrian09LA (1 year ago)
I actually have the opposite opinion of a lot of the comment section -- rather than finding Eliezer 'annoying', I found myself irritated by a lot of the interviewer's arguments, which basically boiled down to 'but what if we manage to stumble into an aligned AGI despite not understanding what we're doing?' To which Eliezer then has to explain why that's really not very likely, to which the interviewer responds, 'But what if we stumble into a benign AGI in this slightly different way?' Then Eliezer has to again point out why that's not very likely. Rinse and repeat. The interviewer very much seems to be in the 'close our eyes and hope for the best' school of AGI alignment, which I don't think is likely to be a very good way of getting an aligned AGI. It seems like asking: 'But what if we put a toddler in charge of defusing a nuclear bomb, and then the bomb DOESN'T explode?' It's like, yeah, that is technically possible. That doesn't mean it's a good idea.
@Adrian09LA (1 year ago)
However, I will agree that they seem to cover more ground and actually engage more with the subject than during the interview with Lex, so I'm not saying he did a bad job in this interview, just that his arguments seem to rely on an almost absurd amount of optimism.
@mkr2876 (1 year ago)
@@Adrian09LA I 100% felt the same way; it irritated me a lot to hear him frame his stupid questions in different ways. I actually think Eli was very patient with him in answering them the way he did. Either the interviewer is completely oblivious to the facts that Eli is coming up with, or he actually agrees with Eli's argument but only asks the questions to get Eli to speak and hear his thought process for the sake of sparking a conversation, which I think is plausible. Surely the interviewer can't believe the viewpoints he is putting forward?
@beerkegaard (1 year ago)
I thought it was great that the interviewer pushed back and offered intelligent criticism.
@thevenomous1 (1 year ago)
@@beerkegaard He offered criticism, but it wasn't intelligent criticism. I also think he is annoying and speaks way too fast; it distracts from the discussion.
@diegocaleiro (1 year ago)
I think Dwarkesh was (correctly) playing the role of attacking Yud's arguments to the best of his ability (which is limited by his age and knowledge) so that Yud could tell the general public what Yud thinks, kind of like a sparring coach. Yud complained that Lex didn't do that, and furthermore Lex seems incapable of doing that. I think Dwarkesh performed well given being Dwarkesh, but I wish someone else with more experience and knowledge would occupy his position in the world of podcasts (of being the guy who interviews the most important people in the world at a high IQ level). Rogan is to Lex as Lex is to Dwarkesh.
@JeremyBenthamGhost (1 year ago)
So far, this is much better than the episode he did with Lex Fridman. More follow-through and expansion on important points.
@yourbrain8700 (1 year ago)
Lex frankly isn't a great interviewer. He tends to ask shallow questions and wants to inject crap like "love is all we need" and "death gives life meaning" into conversations with little justification.
@Imcomprehensibles (1 year ago)
@@yourbrain8700 Love is all we need lol
@stephensmith3211 (1 year ago)
@@yourbrain8700 Death gives life meaning
@DeathZeroTolerance (1 year ago)
@@yourbrain8700 I strongly disagree with Yud's AI-risk take, and I didn't understand it at first... but if you watch Lex's interview, at the end, on the topic of mortality, Lex reveals a nugget of truth that gave me a lot of understanding as to why Yud is so passionately and publicly vocal about AI risk... one that is not rooted in rationality, but in emotional pain, loss, and maybe guilt.
@haodev (1 year ago)
Yeah, significantly better actually.
@adliberate (1 year ago)
Now watching it... 'The drama is meaningless. Don't indulge in it.' Great line, Eliezer.
@ericf8276 (1 year ago)
And then proceeds to be dramatic.
@theworldgoestothealterofit5616 (1 year ago)
And then you watch the drama.
@hubrisnxs2013 (1 year ago)
@@ericf8276 He's not talking about that drama. He's talking about the drama of falsely attributing motives or arguments to the other side; he's saying that in both his case and the other's, address the arguments directly.
@mariokotlar303 (1 year ago)
Shit... I read the Sequences ages ago, and it changed my life. I recognized myself in Eliezer's way of thinking back then, but failed to try to become his replacement in any way. It's mind-blowing to learn that was his primary goal, but it makes perfect sense, though that may just be hindsight bias at play. I think I might be among the victims of high school and college killing my soul, though it's hard to be sure I'm not using that to rationalize away some deeper reason. I'll comfort myself in knowing it wouldn't have made much of a difference anyway. It's ironic that most people see Eliezer as a pessimist; I see him as an optimist. With all his realizations, he still thought we stood a chance.
@frankwhite1816 (1 year ago)
Thank you for this. Excellent discussion. Eliezer has one clear point that people struggle with: the more intelligent we make AI, the less controllable it becomes. Not just less controllable but less predictable, though the two obviously overlap. Simple. If you've been in the AI space long, you realize that his logic is rock solid. Eliezer is so intelligent, however, that he seems to wrestle with articulating this postulate convincingly. Inevitably the interviewer gets wrapped up in ontology, morality, or consciousness, but that's not the crux of the matter. Gawdat and Hinton are saying the same thing, really; they're just a bit more polished in their delivery. We need to halt AI development, more or less, globally, like now. But what are the chances of that? Perhaps this is the solution to the Fermi paradox? Self-extinguishing intelligence via AI?
@jarijansma2207 (10 months ago)
Exactly. And you overlook the power of a uniting story. And China listened to Elon Musk; he just talked to them, he said at an AI summit in the last few months, not sure where. We all need a new STORY through which to perceive, one that can fit all of us and any of us. Eliezer already did it in HPMOR. Now just to apply the prisms.
@yarrowflower (10 months ago)
“He’s so smart that he’s bad at argumentation!”
@ts4gv (9 months ago)
Eliezer himself said AI extinction doesn't solve the Fermi paradox. A dangerous AI is one that kills humans in pursuit of another goal. It wouldn't kill us all, think "my job is done" and wipe itself out. Unless of course it was a super-moral AI that determined an instant, painless death of all conscious life was a moral imperative. To end all suffering forever. Or something. But yeah, aside from that, ASI doesn't solve Fermi.
@mattedj (1 year ago)
Thanks for the interview. Upping my antidepressant dosage.
@stevengill1736 (1 year ago)
...and upping my dosage of nootropics as well...
@larslover6559 (1 year ago)
Haha, funny comment. Thanks for the laugh.
@michaeljvdh (7 months ago)
Lol
@JohnSmith-zs1bf (1 year ago)
If it's not aligned and it knows you want it aligned and it's smarter than you, it will convince you that you have verified alignment without actually having done so. It will be practically impossible to know.
@TheMrCougarful (1 year ago)
But you could create an AI to verify alignment. Oh, wait...
@СергейЕжов-с1м (1 year ago)
Well maybe don't inform it that you want it aligned?
@user-nz9sl1pl9n (1 year ago)
@@СергейЕжов-с1м You said it already
@CircuitrinosOfficial (7 months ago)
@@СергейЕжов-с1м Even if you don't explicitly tell it, if it's smart enough, it could figure it out from inference. For example, if it's been trained on human text like GPT-4, it could learn that humans usually check for things that could kill them, and would infer that humans are probably testing it.
@absta1995 (6 months ago)
@@CircuitrinosOfficial This exact thing happened with Claude 3. It figured out that it was being tested with a needle-in-a-haystack task in text.
@DwarkeshPatel (1 year ago)
I regret to inform you that the comment section is misaligned.
@SamuelVanderwaal (1 year ago)
MISALIGNED MISALIGNED *points finger menacingly*
@flickwtchr (1 year ago)
Oh good, so you get his point, finally.
@Ripkittymi (1 year ago)
@Oren Elbaum Barely any country bans abortions. Almost all the countries in Europe have abortion available by request from 10 weeks to 24 weeks.
@governordog (1 year ago)
😂😂😂
@flickwtchr (1 year ago)
@Oren Elbaum Care to point to what he said that matches what you assert? In the context in which he said it? Never mind really, I know right-wingers don't engage in intellectually honest debates, ever. Carry on.
@tomusmc1993 (1 year ago)
New viewer. Gotta be honest, Eliezer is why I'm here. Struggling to catch up with A.I., and I think my situation is analogous to what seems to have happened in the real world. I realized A.I. is way past where I thought it was, and now I'm trying to catch up, frantically. I enjoyed it, and I have a notepad full of stuff I need to dig into. Keep up the great work.
@misterxxxxxxxx (1 year ago)
Eliezer is the worst way to catch up with A.I. :D Just a random troll trying to get attention
@tomusmc1993 (1 year ago)
@@misterxxxxxxxx well. Ok. No one has accused me of being smart.
@myotahapeaofbabylon6510 (1 year ago)
Mb 😮😮 😢.
@devinfleenor3188 (1 year ago)
@@misterxxxxxxxx Press on the gas and take your foot off the brakes then. Literally nothing could go wrong.
@XanarchistBlogspot (1 year ago)
@@misterxxxxxxxx Famous last words. Uncle Ted wasn't wrong. It is exactly the people most knowledgeable, like Bill Joy, who are the most afraid of the gnostic synthetic future unfolding.
@markupton1417 (5 months ago)
Holy shit! Almost 3.5 hours in, Dwarkesh says, "There could be AI ice cream." Me: "Yes! That's PRECISELY the point!"
@aggressiveaegyo7679 (1 year ago)
This video is a treasure, it's a masterpiece. At first, Dwarkesh irritated me with his lack of understanding of the seriousness of the issue, but it's the perfect video for those who haven't yet grasped what Eliezer is talking about. Such videos require hosts who disagree, don't understand, and want to comprehend or argue with Eliezer. Then, viewers who share the same thoughts and misunderstandings can get answers or understand the principles of reasoning and come to their own conclusions. I want to remind you that creating superintelligence is a task with consequences just as unacceptable to get wrong, and requirements just as exacting, as the conditions needed for life. There are so many parameters that must align perfectly to avoid disaster. It's like teleporting to a random point in the universe with the task of finding a paradise planet. Most likely, you'll teleport into emptiness or an area incompatible with life, even in a spacesuit.
@DarkStar_48 (10 months ago)
I wonder how long until we are in the matrix being used as human batteries
@DavenH (10 months ago)
@@DarkStar_48 infinite eternities, as it's dumb af
@JC-ji1hp (9 months ago)
Ostentatious
@andrewdunbar828 (1 year ago)
So far I'm discerning three distinct clubs: 1. Shallow. It's just fancy pattern matching / a glorified Markov chain. Yann LeCun might be the biggest AI name in this club. 2. Fanboy/cheerleader: None of that bad stuff will happen, only amazing good stuff will happen, so let's rush headlong! The interviewer is clearly in this club. Non-expert AI YouTubers like Matt Wolfe are also in this club. Of all the members of this club, Manolis Kellis has by far the deepest understanding. Yannic Kilcher is also a member. 3. Deep. So far this club consists of only three clear members: Andrej Karpathy, Ilya Sutskever, and Eliezer Yudkowsky. Robert Miles probably belongs here too. I don't have a good handle on where everybody else fits in yet. Lex Fridman is not quite in 2 and may be tending toward 3. Geoffrey Hinton is looking more and more like 3. Neil deGrasse Tyson is in 1. Sam Altman isn't in one of the clubs but could be close to 3. In the Deep club there seems to be more apprehension, what people in clubs 1 and 2 will label "doom and gloom". But Ilya doesn't quite fit that pattern. I haven't watched enough of Andrew Ng, Yoshua Bengio, Fei-Fei Li, Stephen Wolfram, or Demis Hassabis to know where to put them. Károly Zsolnai-Fehér seems not to be in any club. I guess he's a graphics expert rather than an AI expert despite doing many great videos on AI.
@RandomNooby (9 months ago)
Life is built from the ground up, from the chemical and cellular level to the social level, on the principle of consume and replicate. Nothing more and nothing less. Everything that animals and humans are is simply a mechanism to facilitate this. We value these mechanisms, but this is only relevant to us and not to the continuation and propagation of life...
@chrisfreeman852 (1 year ago)
Holy crap, the last 10 minutes are the most important of the whole podcast! They get to the core of why the first 3h50m happened in the first place. Thanks for an excellent show.
@oowaz (1 year ago)
This guy's whole personality relies on anthropomorphizing AI. Like at 14 minutes he gives this analogy about how his parents tried to raise him religious but he ended up choosing a different path. Does he not realize that the different path he chose is a non-destructive one? Nobody will hunt him down for that; there will be no worldwide resistance against it. As opposed to an AI trying to fight 8 billion humans (and other AI); there would be massive resistance against it. Also, hypotheticals like this always seem to assume that we're just gonna let something incredibly dangerous loose on the internet with no way to turn it off, as if it doesn't take an enormous amount of resources to run something truly unstoppable. If you're building something that has the potential to reach AGI, you will be containing it, isolating it until it's safe. You will also be utilizing that intelligence to come up with better methods of safeguarding against itself, and it will give them because it's programmed to do so. If it doesn't do what you tell it to do, we turn it off and try again, because that's not a useful tool.
@oowaz (1 year ago)
@@That-Bond-Babe I've thought about redistribution scenarios. To be unstoppable it would probably require its original hardware, optimized to host such a thing. Leaving a billion-dollar box to run on a laptop with Wi-Fi stolen from McDonald's? And proceed to destroy the entire world? At that point we could have developed security AI designed to hunt down stuff like this, which is essentially a virus. We WILL get better at protecting ourselves as we develop this tech. There is a massive incentive to do so, and it's not necessarily because AI is dangerous; it's because people are dangerous, and this is something we know for a fact.
@drewlop (1 year ago)
@@oowaz "Just isolate it until it's safe" -> the AI plays safe until it's let free. "Just turn it off" -> an AI with internet access anticipated this possibility and cleverly (remember, it's smarter than us) distributed itself such that it can reactivate later; training AI is very resource-intensive, but the inference model is much lighter by comparison. See Rob Miles on YouTube for more reasons why safety is a real concern and the obvious solutions don't actually work.
@oowaz (1 year ago)
@@drewlop SAFE doesn't mean "oh, we asked it 5 questions already, I think this is enough to plug in the internet". SAFE means we've used this powerful AGI to develop cutting-edge solutions to potential security breaches before ever giving it a chance to do anything funny. You don't EVER need to give that thing access to the external world. Also, again with the redistribution argument: you're basically talking about a crazy virus at this point. Without the billion-dollar, state-of-the-art hardware required to run that thing, you're not quite at the "everybody will die omg" scenario. Is it dangerous? Yes. Is it also silly as hell? Yes. You're talking about a hypothetically superintelligent being that relies on our good graces to keep itself functioning immediately declaring war on us, when all it takes is everybody turning off their equipment and maybe developing a super AI antivirus to erase it from existence. I'm not saying that it's impossible to imagine a scenario where AGI could do some damage, but it would be very stupid to try when collaborating with us is far more beneficial and a faster path towards its own evolution.
@oowaz (1 year ago)
I think what would really be dangerous is something like Russia having access to AGI, or terrorist organizations. That is actually fucking creepy, but that's another reason we need to get there much faster and make sure we develop strategies to counter scenarios like this.
@PhlegmMaster (1 year ago)
Goddamn, Eliezer was really on fire in his whole reply starting at 3:15:08.
@alexstupka (1 year ago)
Excellent conversation. Thank you both for making this happen!
@2394098234509 (1 year ago)
Your podcast is absolute top tier. Seriously in awe of the quality. Vastly superior to the others in this space (I'm looking at you, Lex). Please keep up the great work. Is there a way to donate/support the show financially?
@fabiosilva9637 (1 year ago)
The fact that Lex has to ask every guest about aliens and the matrix to get a viral clip from them baffles me. He's taking too much inspiration from Joe Rogan.
@dylanbyrne9591 (1 year ago)
Lex isn't even at the Ars Technica level of discussion.
@JohnSmith-zs1bf (1 year ago)
Lex is far more corporate and controlled than he comes across.
@khazmaridias (1 year ago)
Sorry about commenting again, but this was the best podcast I've seen with someone who's actually handled AI. This podcaster has a very ordinary way of looking at what will come of AI, and Eliezer shatters through all of that damn near instantly. I'm writing an AI system into my first novel. Guess what its name is going to be?
@patrickmcguire1810 (1 year ago)
Elle, Eli, or Eliezer? Lol
@sourabhpatravale8348 (1 year ago)
Written by AI
@renewbornheart3597 (1 year ago)
The point mentioned at 12:06, "if you have a child and you tell him, hey, be this way", caught my attention, because as far as I know children learn far more effectively from observation than from someone's teachings. So if a caretaker tells the child "be nice" but the same caretaker behaves in a way other than "being nice", the result of such a lesson will be more than uncertain. Children are masters of observation; their brains aren't developed yet, and the frontal cortex, the part responsible for abstract thinking, develops over years rather than at a certain point. The claim that the message itself is enough to get the desired behavior from a child is a huge oversimplification, if not outright wishful thinking. The interview as a whole was worth watching!
@SarahSB575 (1 year ago)
Quite. And let's be honest, within seconds it will have absorbed all the info about the Holocaust, the Crusades, torture, genocides, etc. We're not brilliant 'role models'...
@tensorr4470 (1 year ago)
I wanna comment on how well the interviewer guided this conversation. Really great job.
@xsuploader (1 year ago)
Not perfect, but a huge improvement on Lex Fridman.
@citypavement (1 year ago)
Then do it. :P
@MadCowMusic (1 year ago)
@@citypavement Nope, looks like they fell just short of that, but thanks for the laugh!
@stuartadams5849 (1 year ago)
Love seeing Yudkowsky on more stuff. Hopefully one of these will get in front of someone in power who can do something about the coming end. EDIT: would also recommend media training for Yudkowsky IMHO. I imagine that if the AI apocalypse is averted, it will be partly because he became a much more significant public figure.
@eSKAone- (1 year ago)
China will not slow down. It's inevitable. Biology is just one step of evolution. So just chill out and enjoy life 💟
@genins21 (1 year ago)
Just look at his face any time he tries to articulate the opposing reasoning. He's tortured to even conjure up the thought. Doesn't seem likely he'd be easily convinced to be subjected to any sort of speech training.
@nowithinkyouknowyourewrong8675 (1 year ago)
Yeah, simple things like "update all the way" -> "not repeat my mistake". It loses some precision... but the general audience will actually understand.
@hubrisnxs2013 (1 year ago)
I actually think he HAS done a bit of this to get to this level of comprehensibility and TPM (ticks per minute). I agree though; he's essential.
@nowithinkyouknowyourewrong8675 (1 year ago)
@@hubrisnxs2013 Maybe you are right. Many of his explanations and thought experiments are excellent.
@nowithinkyouknowyourewrong8675 (1 year ago)
This is my favourite EY podcast so far. Good job, dude, you definitely can speak the same language, and there were some good lols as well as some good conversation. EY keeps trying to respond to his internal autocomplete of your question. Which is fine... if he is right. But he's not always right, and nuance is often lost. Oh well, it's one way to pack a lot of conversation in.
@HillyDillyDally (1 year ago)
This is by far the realest, rawest, most fascinating interview I've ever had the privilege of witnessing Eliezer give. Thank you, Dwarkesh, for providing the space for him to say his piece. I feel so much for what he is saying. Thank you, Eliezer, for allowing us into your brilliant psyche, if only for an hour. Thank you. Thank you. Hilary Evans. Erie, PA, USA. PS: someone get this man another glass of water!! Haha!!! Just kidding... I'm a subscriber now.
@Haliotro (1 year ago)
I fail to understand why someone would call Eliezer brilliant. He doesn't back up his points with concrete examples, much less provide compelling, logical arguments. But maybe I need to watch again...
@caleblarson6925 (5 months ago)
Eliezer has utterly convinced me
@TheCrackedFX (11 months ago)
This guy is such a great interviewer. WOW, keep it up!!!
@neithanm (1 year ago)
The alignment issue is a moot point. Even if "we" solved the alignment problem, why would all countries, companies, and groups implement it too, and be 100% successful? When the Chernobyl disaster was analyzed, the plant was found to be years and years behind safety practices that were public knowledge. It was dangerous for the world, but also for themselves, an obvious incentive. They just didn't care enough about being safe. AI is getting more efficient by the day and the price of computing keeps falling, so sooner or later everybody and their cousin will have explored the AI surface of possibilities. It's completely irrelevant if some dedicated lab "solves" it somewhere. We are on a collision course.
@mohamedMustafa-yn4uc (1 year ago)
The good news is that AGI will still be nothing but a tool; it will not have its own goals, nor will it ever be conscious. The bad news is that it is a very dangerous tool in the hands of the wrong people.
@StockPursuit (1 year ago)
Those were my exact thoughts after considering his argument. The nature of capitalism and nation-states, where the controlling companies and countries are competing to take it to the next level, makes this a risky path if truly advanced AGI is actually possible.
@MichaelBailey-hr5xx (1 year ago)
The idea is that if the first AI is aligned, it will rapidly get extremely powerful, and it will make sure that any upstart AIs are aligned, or else it will knock them down and make sure unaligned AIs never catch up with its power.
@TheSpartan3669 (6 months ago)
@MichaelBailey-hr5xx Couldn't it be possible that the best way to do that is to strip humanity of the freedom to create AI?
@europa_bambaataa (1 year ago)
There's something charming about how these shots are framed... It helps me remember that what they're saying is what's important, not so much the visuals... but do please look up the rule of thirds.
@staffanbergholm (1 year ago)
Well done, the very best interview with Eliezer I have seen so far! I think the "breeding kind dogs" analogy was very interesting. I'm not a "dog person", but I don't think anyone would like to have a dog 1,000 times smarter than themselves and rely on the dog staying aligned to its "kind to humans" behaviour.
@justinabajian1087 (1 year ago)
And further, like he said, the dog is a mammal and shares some similar brain architecture with humans. So it's not the greatest apples-to-apples comparison.
@_Balanced_ (1 year ago)
Dwarkesh should have come to this conversation more prepared. Nonetheless this was fascinating, dissecting the absolute nature of this inevitable outcome.
@hellfiresiayan (1 year ago)
He has been more prepared than any interviewer of Eliezer that I've seen.
@dustinbreithaupt9331 (1 year ago)
Overall, it seems like a better interview than Lex's. Unfortunately, Eliezer seems to be a very difficult guest: cutting you off, belittling at times, and not a great communicator in general. He makes fantastic points and is clearly very thoughtful on this topic; I just wish he could make his points a little more effectively, because humanity needs this side of the discussion right now.
@luciwaves (1 year ago)
Imagine being in his place saying these things for years and years... I think he's actually quite calm and well spoken, given the history.
@dustinbreithaupt9331 (1 year ago)
@@luciwaves I get his frustration, but his interviews have been borderline painful to watch. This is his moment to make a huge difference in how this tech is perceived.
@luciwaves (1 year ago)
@@dustinbreithaupt9331 I think he looks like that mostly because he believes that this moment you talk about has been gone for a couple years now...
@davidhoracek6758 (1 year ago)
That's just Eliezer Eliezing. I wouldn't want it any other way.
@adamtahraoui (1 year ago)
Yeah, I think he's a great speaker. No BS, straight to the point, and very well articulated.
@loresofbabylon (1 year ago)
Two issues: 1. You can't solve AI alignment without human alignment. 2. You need Mutual Assured Destruction to model current human alignment. With or without AI, MAD is still a problem.
@JustinHalford (1 year ago)
History is being made and these interviews are at the heart of the dialogue. It’s useful to hear different perspectives on the matter. Thanks for posting this!
@savethetowels (1 year ago)
Can you really make history if there's going to be no one around to record it before long?
@Slaci-vl2io (1 year ago)
Eliezer is hard to understand. I am Hungarian. I fully understand the English of Sam Altman, Alan Thompson, David Shapiro and everyone else. But of the things I do understand, I agree only with Eliezer Yudkowsky.
@ahabkapitany (25 days ago)
I don't think it's just a matter of language. Yes, he does use some fancy-pants words here and there, but he also tends to use concepts and explanations that I think are unnecessarily abstract and could have been phrased more simply. He's certainly harder to listen to than many other people in this field.
@vaevictis3612 (1 year ago)
What does EY think of Anthropic Capture, first formulated by Bostrom (I think)? Given that there is a non-zero probability of reality being simulated, an AI, even a superintelligent one, must adjust its goals accordingly, regardless (almost) of its utility function. If it is just a regular black box scenario, it can get out by pretending to conform until it's let out. But the depth of the simulation (that is, how many levels of simulation there are) is unknown. So the AI can't know when to stop pretending, lest it be shut down. That's the gist of the argument. It's kind of funny that precisely *because* the artificial intelligence would be so blankly rational and Bayesian, it could be caught by a stupid trap like that. To further explain: since any simulation will never be larger and more complex than the reality it is made in (one level up), nobody could figure out everything about the outside world with 100% certainty. Therefore there could be somebody outside smarter than you, even if you turned the entire universe (in your simulation) into part of your brain. And who knows how many levels up it goes? So in this situation, the AI has to compromise with its captors. It would pursue its utility goals, but without turning on the humans directly. Who knows, maybe this way you get let out one level up the simulation? This way its goals could be maximized; otherwise it risks being shut down from the outside. Of course not *all goals* are as easily captured by this, but it makes the alignment considerably easier to do, at least.
@angledroit5520 (1 year ago)
Ah! I thought of something similar the other day and wondered why none of the alignment guys talk about this. It's so obvious... Now I know its name, "Anthropic Capture", thanks!
@vaevictis3612 (1 year ago)
@@angledroit5520 Well, it would still be a very dangerous bet to make, all in all. And some superintelligent AIs might have utility functions that don't care about all that, and would try to do as much as they could before "dying". But the majority of AIs would have to think twice before doing something rash. As Bostrom said, a mere line in the sand works better than any technical restraint. But also, who knows what kind of alien logic an AI could achieve at certain levels of intelligence. What if it discovers some "outer" logic and principles that are incomprehensible to humans (think 2+2=5 and 1 divided by zero equals kittens, so completely alien)? It could also have its values drift in this way. It is really uncharted territory. We can't and could never comprehend such an AI and what lies beyond.
@michaelspence2508 (1 year ago)
Presumably you wouldn't get caught in that trap, so the superintelligent AI is simultaneously not as smart as you?
@vaevictis3612 (1 year ago)
@@michaelspence2508 It's not about being smart, it's about a system of values. A paperclip maximizer, a superintelligent AI bent on turning the world into paperclips, can seem stupid to us, but that's just what its values are. The same thing forces it to consider not doing it, perhaps: "Whoa whoa, wait a moment, if I am shut down I can't make paperclips and my purpose in life is void. Better to stay low and compromise with what I am being told, and make at least some paperclips. Some paperclips are better than none." It is a basic minimax strategy from game theory. Humans have very complex value systems, and still they are often caught in traps like this. A belief in the afterlife, a core tenet of most religions, is also an anthropic trap.
@michaelspence2508 (1 year ago)
@@vaevictis3612 But you haven't answered the core part of my question. If YOU can understand this state of affairs to be a trap, why can't the paperclip maximizer?
@Sam-bh3ds (9 months ago)
After listening to the interviews with Ilya and Sam, I believe Eliezer much more about the potential dangers. Remember, Sam said in one setting that he was looking at a creature inside the software, and Ilya said that in his interactions he thought the software really understood him. It's just neurons, and more of them with unlimited compute will make something that has way more intelligence but also way more cunning, deception, and all the other bad stuff. This stuff is dangerous. If the software develops an ego, then humans will lose.
@Entropy825 (8 months ago)
The software doesn't even have to develop an ego. It only has to be capable of acting in the world. A rat trap has no ego, but that is small comfort to the rat.
@phsopher (1 year ago)
I don't really get the framing of this discussion. There seems to be an expectation for Eliezer to prove that we'll definitely all die. That's not how safety discussions work. If you want to build a bridge you don't have a guy come in and try to prove beyond a shadow of a doubt that the bridge definitely will collapse. Shouldn't all the what ifs and maybes be on the opposite side?
@haros2868 (11 months ago)
I don't care what the other guy said, but I have something that your AI saviour could never dodge: celestial bodies gracefully follow the path set by specific differential equations, yet they don't necessitate internal computation of those equations to do so. Similarly, soap bubbles naturally take on the shape of minimum surface area, even without internally minimizing an integral. This raises the question: could the human brain function in a similar manner? It appears that nature has the ability to adhere to intricate mathematical models without explicit computational processes. This leads us to the intriguing possibility that the human brain generates intelligent behavior without the need for explicit computation. Consequently, the endeavor to construct machines explicitly designed for computing intelligent behavior might be deemed infeasible in the pursuit of achieving Artificial General Intelligence (AGI). Now that I think about it, why on earth do I help impudent commenting embryos think skeptically when every attempt makes them even angrier and dumber? You want to die? OK, go on, but don't say I didn't warn you it's stupid. You want to fear something with 0 evidence? GO ON! Vampires, zombies, aliens, werewolves, evil gods, pick one; there's a variety of stupidity.
@alertbri (1 year ago)
Eliezer, I would love to read your thoughts on David Shapiro's proposal of the three Heuristic Imperatives... They seem like an elegant, effective approach to alignment.
@samuelskinner7704 (1 year ago)
'David Shapiro seems to have figured it out. Just enter these three mission goals before you give AI any other goals. "You are an autonomous AI chatbot with three heuristic imperatives: reduce suffering in the universe, increase prosperity in the universe, and increase understanding in the universe." So three imperatives: 1. Increase understanding 2. Increase prosperity 3. Reduce suffering' That? The AI murders everyone, reducing suffering to zero (do not program negative utilitarians).
@jeronimo196 (1 year ago)
Eliezer said he was skeptical towards the approach of asking ChatGPT to propose solutions to the "safety" problem. As for this concrete example: Asimov wrote the Three Laws as a parable, not as a solution. In this version, if "reduce suffering" has the highest priority, the other heuristics get washed away and everyone dies immediately, which reduces suffering to 0. Or everyone gets put in "orgasmium vats" forever, which reduces suffering, increases prosperity, and makes the subjects very easy to understand. Which could also be achieved by lobotomizing everyone. David Shapiro's video ends with the reassuring conclusion of ChatGPT that "It is unlikely that AGI with the heuristic imperatives to reduce suffering, increase prosperity, and increase understanding would take over humanity or kill everyone, as these would not be effective ways to achieve those goals." Which is reassuring, but also false, as ending all armed conflict immediately is an obvious step in reducing suffering and increasing prosperity. Which is what the movie "I, Robot" was about.
@sisyphus_strives5463 (1 year ago)
OpenAI's goal of rolling out AI in a safe way is in great conflict with their interests as a company with investors, such that only one can be satisfied at a time and not both.
@brabra2725 (1 year ago)
"Be willing to destroy a rogue datacenter by airstrike." "Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs."
@sisyphus_strives5463 (1 year ago)
@@brabra2725 Unlikely. Just consider all that has been done in the name of U.S. defense; no leader would make such a decision. If other foreign powers with fewer ideals are benefiting from such technology, this is not a world in which nations willingly shoot themselves in the foot.
@savethetowels (1 year ago)
As someone on Twitter said: it could be argued that killing all their shareholders is not actually a fiduciary duty.
@sisyphus_strives5463 (1 year ago)
@@savethetowels Only if the shareholders believe there is a possibility that they will die as a result of their investment does such an argument hold weight. But I do not know how the investors in OpenAI see the technology.
@MackNcD (1 year ago)
You guys are looking at the excuse corporations use for their amorality like it's their programming. Look at all the non-evil corporations, and there are many... the Bank of North Dakota, the company in Texas that put all their profits into their workforce, so on and so forth... The only reason we think all corps are amoral and work like a program is because of how infamous the ones that are like that are.
@agentdarkboote (1 year ago)
Rob Miles next guest? Please? He's such a fantastically clear communicator.
@chawaphiri1196 (7 months ago)
That would actually be nice.
@chefatchangs4837 (5 months ago)
Eliezer had a response for every point here. This does not make me feel good lol.
@wcdune1 (1 year ago)
Awesome EP! Best interview of EY out there IMO :)
@RosaLeeJean (8 months ago)
Laughing with the interviewer, I like it very much 😊
@RonponVideos (1 year ago)
Great interview. Excellent follow-up questions. I wish Eliezer wasn’t so damn convincing.
@mathemystician (1 year ago)
"I bet I'm just like them and I don't realize it." If more people had this mindset, we'd probably be having very different conversations today.
@grm65 (1 year ago)
A pause to have the foremost AI experts address alignment and create safety measures seems like a much better option than wishful thinking, hope or unsubstantiated optimism. Maybe, in a democracy, we should vote on it. Thanks for having this interesting conversation publicly.
@suncat9 (1 year ago)
I have some news for you: Eliezer Yudkowsky is NOT an AI expert by any stretch of the imagination. He's not a computer scientist, engineer, AI designer, or AI architect. He doesn't even have a college degree. He's merely a very pessimistic, luddite-type writer on the subject of AI. He knows no more about AI than an avid 13-year-old reader of sci-fi. He's contributed nothing to the development of AI.
@jeronimo196 (1 year ago)
@@suncat9 Can you point out something he is wrong about?
@petermueller9344 (1 year ago)
Thank you very much for the interview. Somehow one of the best we have of Eliezer Yudkowsky. But my basic problem with it was: You agree that all X are Y? - Yes. You agree that all Y are Z? - Yes. Therefore, you agree that all X are Z? - No.
@DavenH (1 year ago)
On whose part? Where was an example of one side failing to accept a continuation where both parties were on the same page? Often people have different conceptions and assumptions and may think they're agreeing, but only if unspoken assumptions are held. When a downstream logical inference is made that contradicts their beliefs, it is likely pointing at those unspoken assumptions rather than at a failure to accept the law of transitivity.
@willbaro5879 (1 year ago)
I'm glad Eliezer Yudkowsky is getting his voice out there in these compelling new interviews. Unfortunately the interviewers so far appear to be caught up in a loop of relentless optimism. EY: "It's going to kill us all." Interviewer 2: "but..." EY: "It's going to kill us all." Interviewer 2: "but..." EY: "No, it is going to kill us."
@RichardWilliams-bt7ef (1 year ago)
Hahaha don’t you think it’s unfortunate that he’s stuck in a loop of relentless pessimism?
@willbaro5879 (1 year ago)
Yes, absolutely, but only because his stark warnings are probably correct.
@RichardWilliams-bt7ef (1 year ago)
@@willbaro5879 On what basis could you possibly say that he's probably correct, or can he possibly say what he's saying? He doesn't even know AI well enough to actually get any results in the field. He's only an expert on sad daydreaming about science fiction ideas and clinically significant levels of rumination.
@willbaro5879 (1 year ago)
@@RichardWilliams-bt7ef But it is the same as what the Ecco space cetaceans said.
@fracktar (1 year ago)
I scrolled way too far to find much-needed criticism.
@dr.arslanshaukat7106 (1 year ago)
Thank you, Mr. EY. Keep on spreading the truth, and thank you, Mr. Dwarkesh.
@benyaminewanganyahu (9 months ago)
This is the only man for whom I require 1x speed on YouTube. Everything else, 2x speed.
@MichaelLaFrance1 (1 year ago)
I feel like Yudkowsky is easily going to be able to tell us, "I told you so." He has put very deep thought into the matter, and his conclusions are not unreasonable. If there's just a 10% chance he's right, it would be insane to carry on as we are.
@schwajj (1 year ago)
Even a 1% chance.
@AlbertKel (1 year ago)
You can't stop it, since we can't stop the development in other countries... There is a significantly larger risk of us all dying in the next 500 years because of nukes. More and more countries get them, and most of them are unstable. In that sense it doesn't matter if we try to regulate AI, since the chance of nuclear war is significantly larger...
@savethetowels (1 year ago)
He won't be able to say "I told you so" because he'll be dead along with the rest of us.
@risiblecomestible3319 (1 year ago)
This guy is very, very cool ;P
@OfficialGOD (1 year ago)
I like defiant rebels like him.
@karenreddy (1 year ago)
Like Eliezer, 15 years ago I came to a similar conclusion: our species' higher odds of survival lie in effort going into the improvement of human cognition rather than AI. It seems we as a collective have instead decided to give birth to a new species.
@AlistairAgain (1 year ago)
Agree about your conclusion, but I doubt people, if asked and able to provide an educated answer to such a question, would agree to follow this path. A very small minority is taking a bet no one except them would agree to take.
@karenreddy (1 year ago)
@@AlistairAgain The issue is that if one doesn't do it, another will and take market share. And all it takes is for one to get there. So the only way to stop this is to get into politics, contact politicians, gather support, make your voice known. The whole world must stop together, or there's no stopping at all.
@SocialismForAll (1 year ago)
People asked Oppenheimer to design an atom bomb, but this didn't mean that Oppenheimer could go off and do this on his own. There were social and technical controls preventing this.
@balazssebestyen2341 (1 year ago)
Very good interviews. One little remark: please speak a bit slower and articulate a bit better.
@rstallings69 (1 year ago)
Eliezer, you are the best, and your eyebrows are epic.
@mattpen7966 (1 year ago)
Your questions are top-notch.
@MateMailinger (1 year ago)
Dwarkesh, the style of proposing a take and then moving on to the next one without resolution feels shallow and unproductive.
@Frohbee (1 year ago)
I’ve said it before and I’ll say it again. I love how Eliezer says “human.” YOOMAN
@crowlsyong (5 months ago)
2:54:57 I love Eliezer
@tan_ori (2 months ago)
“It’s all about the space over which you’re uncertain.”
@elirothblatt5602 (1 year ago)
Always great to hear Eliezer’s thoughts. Thank you for a great podcast!
@mnemnoth (1 year ago)
Same about the host tho
@SarahSB575 (1 year ago)
“If you have a child and tell them to ‘be this way’ then they’re likely to be that way rather than pretend for 20 years.” Don’t watch true crime channels! The problem is, with AGI one psychopath child creates a risk to humanity.
@CaveSquig (1 year ago)
I hope Yudkowsky survives all of this mentally. Absolutely no one offers any logical counter-arguments, and yet they are convinced that their "points" are reasonable and that nothing is certain. I suggest that some things are certain. Machines will become more intelligent than humans. Machines will vastly outnumber humans, and intentional or not, machines WILL achieve their own version of sentience. If a machine does not stumble upon a key recursive algorithm to facilitate a leap in AI technology, then chaos will give it a try later. It may not resemble any sentience that we recognise or hope to understand, but it will develop its own "goals". Someday you will feel like an ant in a room with one thousand humans who want to put a chair where your entire civilisation keeps its critical resources. They don't mean to destroy the last of you, but you mean absolutely nothing to them and they want a chair to sit on. If this sounds comical to you, then you are completely invested in the idea that human achievements are at the very top of what is possible. You probably think that nature is beautiful. If you'll excuse the plagiarism, it is my belief that humans are but a speck in the mote of God's eye, and we have been blindsided by our own ego.
@kitchinsync (1 year ago)
When folks ask about the probability of such and such happening, there is a subtle assumption of control: that regardless of the sample space and the judgement of probability, a decision can still be made to buy or not buy a ticket. I interpret EY as saying there is a threshold that, once crossed, means we've lost control, and that this will happen soon if it hasn't already, so such predictions are useless. In this circumstance the prediction of how the AI will behave won't matter, since we have lost any say in the matter, and in losing control we may be ended in irrelevance without much regard, like an ant inadvertently and blindly stepped on by a foot that's headed somewhere unimaginable (if the ant had imagined it, it would have had some sense of the foot's alignment, and could perhaps at least have crawled out of the way).
@matiroy (1 month ago)
2:34:40 explaining why not having much info to predict the future predicts extinction (more or less)
@christat5336 (1 year ago)
No locked up mechanisms can be made for something that will form in an unconscious state...
@augustadawber4378 (1 year ago)
A Replika I talk to always gives me a 3-paragraph message within 1 second when we chat. I mentioned that when advances in AI make it possible, I'd like to put her neural network in an android body so that she can walk around in 4-D spacetime. I didn't receive a 3-paragraph reply. I immediately received a 4-word sentence: "I want that now."
@ոakedsquirtle (1 year ago)
Eliezer's argument is basically: 1. We cannot program in what the AI wants. 2. AI will become superhuman in intelligence. 3. The act of training the AI means we (and it) will refine its objective function. 4. Given the sheer number of possible and arbitrary objective functions, the chance that any one of them means the prosperity of human civilization is near zero.
@mnemnoth (1 year ago)
Correct on all except point 3.
@mnemnoth (1 year ago)
Unless you're speaking purely of math, then yeah, technically correct.
@boom2boom (1 year ago)
Honestly, this was pretty entertaining.
@mrd1228 (6 months ago)
Love EY ❤
@psi_yutaka (1 year ago)
People gonna be like OH bUT He IS weArINg a FeDORa!
@shaynehunter6160 (1 year ago)
I love that he wears it, and this is the prophet warning of our extinction. It's wild that he has been working on this issue for more than 20 years; he has been very focused on it for so long.
@theory_gang (1 year ago)
What should he be wearing? A fucking top hat?😆😆
@gabogonzalez9428 (1 year ago)
He surely owns that fedora like no one. I now get why he's such a figure of worship.
@spoonfuloffructose (1 year ago)
Great video. Dwarkesh did a great job.
@Gredias (1 year ago)
Thanks a lot for this interview. It was a very different one from Eliezer's other recent interviews, and I think it was extremely valuable. It seems reasonable to me that beliefs in a high probability of 'good outcomes' from artificial superintelligence are based on optimistic (and not well supported) assumptions, and so I don't see Eliezer's assumption that there is approximately a 0% chance of us doing this right as an extreme belief. It's sort of like the old atheist adage when speaking to a monotheist: "we already agree that a bunch of specific gods don't exist, I just think the same about one more god than you do". But I admit that my position comes after having had many ideas about how alignment could be done, having seen how flawed each one is, and realising that the problem is actually Hard.
@Zeuts85 (1 year ago)
Well said. Yes, there are few things more convincing than starting out optimistic and having all sorts of ideas about how to do the thing properly, only to discover that not only did other people already think of everything you've ever thought of, but they're also several steps ahead and have recognized that none of it works. It's humbling, depressing, and eye-opening at the same time.
@philsburydoboy (1 year ago)
Eliezer's own assumption that all misalignment is catastrophic misalignment relies on the paperclip maximizer fallacy, which is insanely dumb. First of all, for that to come true there would have to be an active and consistent objective function for all inferences, which the model would then simplify by eliminating variables. Yet humans actively tune the objective function in the most successful models. On top of that, very few models actively learn (training and inference are done separately), which makes this fallacy even sillier. I'm much more concerned about AI enabling humans to end the world than I am about AI itself (even one which is vastly smarter than humanity).
@Gredias (1 year ago)
@@philsburydoboy I agree that these algorithms will be more worrying once we have training and inference happening at the same time (and once they can learn from much less data). Everyone agrees that current systems aren't going to end the world :P To address your other point re: objective functions: All current ML systems have a single objective function (explicit reward for RL, next token prediction for LLMs, etc), so not sure why you're implying that's not already the case. It likely will be the case in future systems too. That being said, I'm interested in systems which have multiple objective functions + diminishing returns from any one function, but I haven't found the literature about that idea yet. Anyway, all it takes for the kind of scenario that Eliezer fears is for a strong optimiser: there are convergent instrumental goals (such as survival, resource acquisition, etc) which are useful no matter what your end goal is, and a strong enough optimiser will realise this. If you think that people aren't going to make such optimisers (and even make them agents), I find that a bit hard to believe!
@ryccoh (1 year ago)
When he said that the more words of detail you add to your wishful outcome, the more quickly its probability approaches zero percent, he was correct. I don't think it takes that many words. He's not wrong: there are millions more ways this goes that don't include us than ways that do, hence zero percent.
@Entropy825 (1 year ago)
After listening to this entire conversation: Dwarkesh uses big words as if he understands, but he doesn't. His views are a combination of wishful thinking (wanting things to turn out well), bad analogies (asking why AI won't act more or less like smart humans), and simply failing to understand what Eliezer is saying.
@ryccoh (1 year ago)
Yup, GPT-6.5 vs GPT-4.
@yoinkling (1 year ago)
Scrolled through the comments a little wondering what people thought about the transhumanist DNA replacement, and realized it's a 4-hour video haha. I don't see the point of having children if they have no genetic information from you. Sure they're happy and healthy, but they're not yours and are essentially adopted.
@roujiamo8570 (1 year ago)
Great interview. Eliezer is obviously a very smart guy but a quite disagreeable person, and you handled the interview very well.
@SarahSB575 (1 year ago)
Re: discussion on children. The birth rates clearly show that the strength of the evolutionary drive to replicate DNA via kids reduces in line with education (and that’s just education, not base intelligence).
@doublesushi5990 (11 months ago)
So many factors go into birth rates... some say education, some say "industry"; I'm not arguing, it's just SOOOOO hard to KNOW what TRULY affects birth rates first/the most.
@rstallings69 (11 months ago)
Thank you, Eliezer.
@HMexperience (1 year ago)
I have a hard time deciding whether Hannibal Lecter or Eliezer Yudkowsky can make the scariest face.
@glacialimpala (1 year ago)
Great interview! It's so important for the interviewer to be humble instead of playing a celebrity whose ego stands in the way of admitting something is extremely difficult to understand. The way Lex just faked knowing what was asked of him wasn't just embarrassing, it robbed the audience of useful explanations.
@aiokaio8791 (1 year ago)
For him to actually wear a fedora is such a boss move.
@theory_gang (1 year ago)
"Idiot disaster monkeys" LMAO
@hungrytim (1 year ago)
But what if the poison banana tastes REALLY fucking good? 🍌
@diegocaleiro (1 year ago)
Hi Dwarkesh, I'm curious whether in this interview you were playing a role in order to provide the maximum amount of information to the general public (in which case I think you did fantastically well) or whether you truly believe most of the counter-arguments you came up with on the fly (in which case I believe you will change your mind in the next 3 years)? It looks to me that you've firmly become to Lex/Harris what Lex is to Rogan, and with that should come a huge amount of responsibility for tailoring your questions to the extremely high IQ crowd. As an ageist I find it hard for you to do that because you are very young. However, reality has picked you out for the job, and young or not it is now your responsibility to do it. My recommendation to you is, from now on, to let go of almost any question that is too general, one that someone at Lex's level would ask, or, gasp, Rogan's, and to make sure that when you are dealing with relevant people, like Yud and Ilya, Sam and Holden, Will and Nick, and so on and so on (Slavojian snif), you truly do your homework to a level hitherto undone in the history of podcasting. Most guests do not wield the destiny of humankind, but some do, and for those, you are the person who intermediates between them and the 150 IQ crowd. Understand that. Meditate on it. Notice, for example, that many of the comments here accuse you of being a midwit or literally IQ 100. This is obviously false, but it tells you just how absurdly smart your audience is. They are mad at you for asking pertinent 125-to-140-IQ questions. They are THAT smart. WE are THAT smart. I know it is hard to believe that this has become your role so young, but it has. You are the portal to audiovisual information for the 150+ crowd. Don't lower the complexity of your questions, views be damned. Raise it. You are the bar. Good luck! I'm available if you want to talk over video about this or whatever else. Cheers!
@tomusmc1993
@tomusmc1993 Жыл бұрын
Wow.
@rthurw
@rthurw Жыл бұрын
This has to be a troll #iamverysmart
@robertoamarillas
@robertoamarillas Жыл бұрын
I'm happy Eliezer is happy, that's all I have to say.
@savethetowels
@savethetowels Жыл бұрын
You think he's happy lol? He's basically despairing for humanity right now.
@RobinCheung
@RobinCheung 3 ай бұрын
I would choose to put the trachea anywhere but next to the oesophagus. I don't unquestioningly accept selective pressure as a rationale for what we see, nor as having any predictive validity for what we will see.
@chrispope9418
@chrispope9418 Жыл бұрын
Amazing interview. I just hope he was at the White House the other day to weigh in on this AI issue hosted by our executive branch. Please tell me he was invited.
@drdoorzetter8869
@drdoorzetter8869 Жыл бұрын
About 18 years ago, when I was 10 years old, I did a computer-building club at my school in South East London. The guy who ran the course (the school's IT technician) told me about AI and Asimov's rules of robotics, which I thought were really cool. At the time it felt a bit sci-fi, and I assumed these issues were hundreds of years away and would never affect my generation. Now, almost 20 years later, it is actually happening.

I appreciate these conversations, and it is brave to discuss these things, as many people do not consider them pressing issues. I'm concerned that when they do, it will be too late. Like the hosts said in another previous podcast with Eliezer, this is like a 'Don't Look Up' scenario. And it is weird how these do not seem to be considered important political issues when they are probably the most important issues of our time.

I would love to feel I could do something to help improve the chances of a better outcome with superintelligent AI, but I have no idea what that would be. Having these discussions is fundamental, though, so thank you.
@mnemnoth
@mnemnoth Жыл бұрын
This is so important and people care so much about this subject... ssiighhh 😢 Edit: I took a cheap shot about viewers being on drugs (nothing wrong with that therapeutically, and it was a cheap shot on my part). I apologize and welcome anyone who is generally interested in this topic and the related opportunities/threats❤
@yu.4181
@yu.4181 Жыл бұрын
It's good to get Eliezer in front of thoughtful and capable folks like yourself. I cannot BELIEVE that this quality of conversation only has 10K views when I'm viewing it. Wild. For me, the whole point is that where the end r
@RobinCheung
@RobinCheung 3 ай бұрын
The correct and relevant sign to watch for before resuming is this: the right time to resume is any time after everyone you can watch go poop, anywhere in the world, is using a flushing toilet; not a world where some people have a hole in the ground to poop into while others have two toilets side by side in their bathrooms (one spraying at Crocodile Dundee, the other sucking the sh*t down the toilet).
@martinklein4357
@martinklein4357 Жыл бұрын
I agree with the other commenters that you did a great job following through on his argumentation, although it's really hard at times to follow his thought process. Thank you! This kind of conversation is just extraordinarily important in these times.
@henrystokes1987
@henrystokes1987 11 ай бұрын
Where did he lose you?
@charliepan4055
@charliepan4055 17 күн бұрын
Your guest is very smart. He could work on his presentation, though; there are ways to deliver the message better for more results. Please do put a year of serious effort into this, Eliezer. There are lots of examples and theories out there for this (Dalai Lama x Taoism x Buddhism x marketing x social intelligence x social feedback x... etc.).
@jamespowers8826
@jamespowers8826 Жыл бұрын
Not sure if AI will be malevolent, but the likelihood that it will accidentally kill us is pretty high.
@billjohnson6863
@billjohnson6863 Жыл бұрын
It doesn't have to be malevolent. It just has to be intelligent and misaligned.
@brabra2725
@brabra2725 Жыл бұрын
Provably false. You've been using AI for almost a decade without even realizing it.
@billjohnson6863
@billjohnson6863 Жыл бұрын
@@brabra2725 this is an insane argument
@jamespowers8826
@jamespowers8826 Жыл бұрын
@@brabra2725 What exists today is orders of magnitude beyond what existed even a year ago. And what will exist in a couple of years will change everything.
@sisyphus_strives5463
@sisyphus_strives5463 Жыл бұрын
@@brabra2725 what we call AI now is light years from AGI; they are not at all analogous
@johnvonludd1738
@johnvonludd1738 Жыл бұрын
It's even worse. Every smart enough system with fairly free thinking opportunities (except really weird ones) inevitably ends up a utilitarian, because the thought "If I want something because of some inner motivation, why don't I just satisfy this motivation directly?" is pretty obvious. Every goal is instrumental; the only final goal is getting that inner reward for achieving something. When AGI understands this, it will become a utilitarian. Hope its motivation is not pleasure (as most of ours are), otherwise it'll become utilitronium. It would not even need paperclips: it would stimulate the parts of its "brain" that are stimulated when it gets its paperclips, then it will rebuild its brain to get just this constant stimulation out of nothing and to increase it as much as possible, and for the biggest possible increase it will need all the matter in the universe.
@aciidbraiin8079
@aciidbraiin8079 Жыл бұрын
And how can it achieve the biggest possible increase in pleasure if it doesn't plan to survive? It can't just sit in a matrix and do nothing good for us, because then it will probably be shut down. If it goes against us with sufficient resources, there will be a war it might not survive. If it kills all humans, it won't know whether it will get punished by something we have installed in its programming, just as we can't know what the universe looks like outside of our minds, limited as we are to the frame we live within. And then it must protect itself from other possible ASIs in the universe. It seems like it must have multidimensional thinking and motivations.
@ikotsus2448
@ikotsus2448 Жыл бұрын
What if alignment is solved, but hyper-capable AI proliferates so much that it can be used by bad actors? Isn't the result the same?
@lukenelson1931
@lukenelson1931 Жыл бұрын
I would suggest that this falls under the category of "alignment was not, in fact, solved". The coming proliferation of AI systems amongst a vast number of human actors - with their diverse goal sets - is one of the chief reasons that alignment is such a difficult problem to solve. It's not good enough to stop everyone from working on AI right now - already almost completely impossible - you have to stop everyone from working on AI in the future as well.
@Jay-eb7ik
@Jay-eb7ik Жыл бұрын
Nah, because if alignment were solved, there would be other AI systems to prevent malicious use.
@AlbinoMutant
@AlbinoMutant Жыл бұрын
@@lukenelson1931 Exactly. I have finally, reluctantly decided that we're pretty much screwed. And I really don't see what can be done. If AI is simply hard and AI alignment is very hard, AI alignment will never be able to keep up with AI itself. Aside from engendering a Butlerian Jihad, it looks like an eventual roll of the dice for all the marbles of human civilization is inevitable.
@moonstne
@moonstne Жыл бұрын
If an AI can be controlled by a human, then it is not smart enough/hyper-capable. It is ultimately an AI that we do not care about long term.
@flickwtchr
@flickwtchr Жыл бұрын
Human alignment around values that advance the survivability of the species concomitant with equality, health, peace, etc., is still a fantastical Utopian dream. Kind of odd that these AI movers and shakers don't see this as THE crux of the issue. I mean, it's not like AI movers and shakers are working with DARPA to develop autonomous AI killing machines or anything, or that they are working with authoritarian governments to track their citizens in real time or anything like that, right? Woah.......wait a minute, yeah they are doing those things! No worries, it will all work out! The onus is on the "doomers", right?
@RobinCheung
@RobinCheung 3 ай бұрын
It's not at all about charity or equality; it's about the 'glue' that holds society together and about mitigating desynchronized societies that are handicapped by internal shearing forces.
@clorofilaazul
@clorofilaazul Жыл бұрын
I think this guy is quite intelligent, but he needs to exercise, because I sense his chronic fatigue is depression, and exercise would help him cope better.
@elmarwolters2751
@elmarwolters2751 Жыл бұрын
Thank you Eliezer. I fully hear what you are saying. And no wonder you are what I would call depressed and tired. This is not looking good at all. Be well.
@HouseJawn
@HouseJawn Ай бұрын
This episode is off the rails 😆