159 - We’re All Gonna Die with Eliezer Yudkowsky

274,717 views

Bankless

1 day ago

Eliezer Yudkowsky is an author, founder, and leading thinker in the AI space.
------
✨ DEBRIEF | Unpacking the episode:
shows.banklesshq.com/p/debrie...
------
✨ COLLECTIBLES | Collect this episode:
collectibles.bankless.com/mint
------
We wanted to do an episode on AI… and we went deep down the rabbit hole. As we went down, we discussed ChatGPT and the new generation of AI, digital superintelligence, the end of humanity, and if there’s anything we can do to survive.
This conversation with Eliezer Yudkowsky sent us into an existential crisis, with the primary claim that we are on the cusp of developing AI that will destroy humanity.
Be warned before diving into this episode, dear listener. Once you dive in, there’s no going back.
------
📣 MetaMask Learn | Learn Web3 with the Leading Web3 Wallet bankless.cc/
------
🚀 JOIN BANKLESS PREMIUM:
www.bankless.com/join
------
BANKLESS SPONSOR TOOLS:
🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE
bankless.cc/kraken
🦄UNISWAP | ON-CHAIN MARKETPLACE
bankless.cc/uniswap
⚖️ ARBITRUM | SCALING ETHEREUM
bankless.cc/Arbitrum
👻 PHANTOM | FRIENDLY MULTICHAIN WALLET
bankless.cc/phantom-waitlist
------
Topics Covered
0:00 Intro
10:00 ChatGPT
16:30 AGI
21:00 More Efficient than You
24:45 Modeling Intelligence
32:50 AI Alignment
36:55 Benevolent AI
46:00 AI Goals
49:10 Consensus
55:45 God Mode and Aliens
1:03:15 Good Outcomes
1:08:00 Ryan’s Childhood Questions
1:18:00 Orders of Magnitude
1:23:15 Trying to Resist
1:30:45 MIRI and Education
1:34:00 How Long Do We Have?
1:38:15 Bearish Hope
1:43:50 The End Goal
------
Resources:
Eliezer Yudkowsky
/ esyudkowsky
MIRI
intelligence.org/
Reply to Francois Chollet
intelligence.org/2017/12/06/c...
Grabby Aliens
grabbyaliens.com/
-----
Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.
Disclosure. From time-to-time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here:
www.bankless.com/disclosures

Comments: 1,900
@stuartadams5849 · 1 year ago
I would love to hear so much more from Yudkowsky. Please bring him back for the Q&A. I would love to know what a normal person can do to help the cause of AI safety.
@Bankless · 1 year ago
We're hosting Yudkowsky for a Twitter Spaces today at 12pm PT! Follow @BanklessHQ to get notified: twitter.com/BanklessHQ
@lovinLaVonna · 1 year ago
I don't have Twitter so is there anywhere else that I can hear it? Even some time after the fact, but it is definitely something that I would like to hear. Thank you guy's for all that you do.
@r_bor · 1 year ago
It sounds like you're not loyal enough to the Basilisk.
@nowithinkyouknowyourewrong8675 · 1 year ago
a normal person cannot help, a normal person can die
@nowithinkyouknowyourewrong8675 · 1 year ago
As well as grabby aliens, another one is Sandberg's dissolving the fermi paradox
@_bhargav229 · 1 year ago
“First they ignore you, then they laugh at you, then they fight you, then everyone gets turned into a paperclip"
@leslieviljoen · 11 months ago
😂
@BoundaryElephant · 11 months ago
LOL -- dead.
@ItsameAlex · 11 months ago
What would happen if Eliezer Yudkowsky had a discussion with Jason Reza Jorjano and Jaquee Vallee
@Piedpiper1973 · 11 months ago
Well well, smart people: this content, albeit very good content (I love Bankless), is being added to the dataset of AI as you speak. So this doomsday scenario is now in the ETHER, pun intended.
@myshkakozlovski802 · 11 months ago
Nobody has located a self or a will in a single human and spacetime is allegedly an emergent illusion. So then how can a self arise in a technology and willfully apply itself to destroy elements of something that isn’t actually there? Is this going to turn out to be the firecracker that we all jump up and down for that turns out to be a silent puff of smoke? A total dud?
@aminromero8599 · 1 year ago
The crypto advertisement between Eliezer's explanations of why we are doomed would be hilariously satirical, if it wasn't so sad
@RazorbackPT · 1 year ago
I literally broke into a fit of laughter at that point. A mix of the absurdity of the tone contrast and a way to relieve the built up tension.
@Aryeh-o · 1 year ago
at least AI won't dump, or would it?
@Sandwichism · 1 year ago
So dystopian lmao
@jayseph9121 · 1 year ago
sooo there's still going to be a bull run first, right?
@catologic · 1 year ago
At least it's not Raid Shadow Legends
@gbeziuk · 1 year ago
This is the most inspiring totally hopeless discussion I've ever witnessed.
@gbeziuk · 1 year ago
@@johnclancy7465 how could "we all gonna die! by AGI! and VERY SOON!" from Yudkowsky ever NOT be inspiring?
@gbeziuk · 1 year ago
@@johnclancy7465 the same we do every night ©
@MrErick1160 · 1 year ago
I shouldn't have watched this video anyways.
@josephvanname3377 · 1 year ago
Hopelessness is relative. This is hopeless for you, but not hopeless for the AI. Oh. And we are going to get billions of times better stuff than GPT with reversible computing. The real AI revolution will happen with reversible computing. But you do not know about reversible computing because if you did, you would realize that maybe cryptocurrency mining should be used to solve the problem of reversible computation instead of solving the problem of not wasting enough resources.
@personzorz · 1 year ago
@@josephvanname3377 You really have no idea what you're talking about, do you?
@benschulz9140 · 1 year ago
A man who stood up and said..."we have a problem, and it will end poorly for us." Endlessly mocked for a decade. We're a pathetic species sometimes. Thank you for speaking up.
@personzorz · 1 year ago
And he will be endlessly mocked for decades more
@hari61017 · 1 year ago
@@personzorz For like a decade at most lol. because we'll be dead after that
@patrickwrightson2072 · 1 year ago
@@personzorz depends on how much time we have. Maybe just a few more years..
@foamformbeats · 1 year ago
@@personzorz so you disagree with him?
@alsu6886 · 1 year ago
@@foamformbeats The general consensus is that AGI is still at least a decade if not many decades away. When GPT5 or something like it hits the economy for real, everyone will become invested in AI, and that will be a perfect opportunity to launch a full scale Manhattan project on AI safety. If we don't squander this opportunity, we will probably have enough time to solve it. We don't necessarily need 50 years if we actually push it hard. Think trillions of dollars and the best minds, not millions of dollars at a few places like MIRI. So while I share the Eliezer's concerns, I do not share his pessimism.
@waysofseeing1 · 1 year ago
I doubt there is a person in the world who wishes they were wrong more than this guy. A heartbreaking interview because of the sadness that Yudkowsky exudes in the wake of his realization. I suppose I should be most heart broken by this extremely intelligent human expert's prognosis. I'm also human, not as bright, so it's not the logic of his argument, but the authentic human sadness of Yudowsky, that overwhelms me first and foremost and makes me desperately wish I had something to offer for consolation.
@hayekianman · 1 year ago
sure, he is a good demagogue if it is sadness which moves you. he should be ignored.
@d_e_a_n · 1 year ago
@@hayekianman You could say he’s appealing to fear, as the things he’s saying as fear inspiring, but is he not using rational argument?
@hayekianman · 1 year ago
@@d_e_a_n everything is possible in the realm of probability. human beings live in a world of uncertainty. is it a risk that AI will kill humanity? sure - is there a risk Yellowstone could explode and start a new ice age? could an asteroid kill everyone? it's fair to say it's nobody's responsibility to think of all these things, let alone act on them. if AI kills everyone, so be it. nuking datacenters to prevent it is infinitely more stupid
@sebastianm6458 · 1 year ago
I'm pretty sure we're already plugged in
@mml3140 · 1 year ago
​@@hayekianmanwhy
@paulam6493 · 11 months ago
I mourn the loss of the qualities Yudkowsky embodies - soulfulness and deep humanity - that will die with us when AI takes over.
@zaddyjacquescormery6613 · 7 months ago
Listen to Daniel Schmachtenberger talk about this topic. The reality is that AI is the first in a long line of technologies (from the planting stick to the plow to the tractor […] to the nuclear bomb, to biotech, etc.) that has the total, uncontrolled ability to destroy us. Unfortunately, as the systems currently function, there’s no way to stop it-only an absolute sea change to the way the entire human world functions would we be able to avoid the omnicidal fate we’re headed toward. I’m not prone to exaggeration or alarmism. This shit is Real, with a capital R.
@thesiegfried · 1 year ago
One reason why many people don't take action regarding preventing catastrophic events is: they simply forget as they go on with their daily lives. Many people watch this episode, are very concerned - and then forget over time. The difference you, Bankless Shows, can make is: keep reporting on this problem regularly. Keep people aware of it.
@rumpbumion5080 · 1 year ago
Like with the palm of their hand so too in the mind, people grow callous. Repetitive reporting of something that isn't immediately affecting your day to day life doesn't seem very effective in my opinion
@thesiegfried · 1 year ago
@@rumpbumion5080 Of course, making sure that people don't forget about an issue is not the same as getting people to act. It is just one prerequisite. But think of this: *If* people forget about an issue, it is *guaranteed* they will not act on it.
@xmathmanx · 1 year ago
Trying to stop technological progress is futile, personally I don't want to stop it, or even slow it down, but if I did it wouldn't matter at all.
@merlin5849 · 1 year ago
​@@xmathmanx why not? Even if it would mean it will just delay it, isn't it enough? That you will live a lifetime without facing the consequences of AGI
@xmathmanx · 1 year ago
@@merlin5849 I expect any AI with above human intelligence to be better than humans, I respect yudkowsky, of course, but i do not share his pessimism
@georgeclinton3657 · 1 year ago
gotta love the hopelessness in his eyes when he says things like "maybe there is hope"
@Paretozen · 1 year ago
Are we completely insane to develop AI in the first place? Is our striving for more and more, our greed, our ever increasing efficiency & productivity lust finally gonna take its toll? Was the life of the bath houses, some food and wine, theater and spectacles not enough? Why do we just keep on going and going into oblivion? Is it the same driving force what got us out of the cave in the first place?
@GeeWhit · 1 year ago
Yes
@chi-ic7lq · 1 year ago
That’s a lot of questions
@Hexanitrobenzene · 1 year ago
"Is it the same driving force what got us out of the cave in the first place?" I smell a philosopher in you :) I think yes, it's the same. Strange creature, that human. The very thing that gave us powers we cherish - intelligence - is our greatest enemy...
@stevedriscoll2539 · 7 months ago
"was the life of bath houses, food, wine, and theatre not enough". 😂😂😂
@diegocaleiro · 1 year ago
The interviewer begins this interview claiming he could do a better job. As someone who knows Eliezer and has been involved in AGI worry since 2005, I think the interviewer did a phenomenal job of asking the right questions to get to the dire, but real, depiction of the reality in which we find ourselves.
@jonaswolterstorff3460 · 1 year ago
Can you elaborate?
@diegocaleiro · 1 year ago
@@jonaswolterstorff3460 He says he got caught flat footed and he didn't expect to be caught and shook in that way. The emotions they display are the reason why the episode had the massive reach it had. We don't need dry facts (anymore, back in my time we did) we need to emotionally process the comet hurling towards earth. We need to feel the feelings.
@zjouephoto9723 · 1 year ago
Well said - I’ve listen to many of Eliezer’s interviews and there’s a lot that comes out in this one in a relatively short time
@ChristopherAndreou · 1 year ago
@@zjouephoto9723 Are there any other podcast appearances you’d recommend?
@theory_gang · 1 year ago
Yeah honestly I think them doing a bad job really underscored the emotional element here. I would not have been surprised to hear his sadness but I think I would have been sympathetic not surprised. Them looking genuinely dumbfounded compounded his destitution
@vanderkarl3927 · 1 year ago
I'm so glad you were able to have Eliezer on. Outreach regarding AI Safety/AI Alignment is probably one of the best things we can do right now. Not enough people are working on this problem.
@karlnordenstorm8816 · 1 year ago
Finally! Finally an in depth talk with Yudkowsky. He's been hiding for years.
@jpfister85 · 1 year ago
After this interview I want to hear if he's seen the movie Ex Machina, and if so what he thinks about it!
@neo-filthyfrank1347 · 1 year ago
@@jpfister85 Kind of a cringe, normie thing to wonder about
@xmathmanx · 1 year ago
Eliezer has written books, they explain his ideas in great detail, I assume that's why he hasn't been speaking publicly as much lately.
@prismarinestars7471 · 1 year ago
@@neo-filthyfrank1347 What a trash thing to say
@a.nobodys.nobody · 1 year ago
​​@@neo-filthyfrank1347 says the guy who named himself 'Neo-Filthy Frank' and makes Calvin and Hobbes conspiracy videos. It's OK Julian, i hear you! I wanna know if he laughed and cried at that funny disco dancing robot scene too!! Soooooo good 😂
@MikhailSamin · 1 year ago
Thank you for doing this episode! Eliezer saying he had cried all his tears for humanity back in 2015, and has been trying to do something for all these years, but humanity failed itself, is possibly the most impactful podcast moment I’ve ever experienced. He’s actually better than the guy from Don’t Look Up: he is still trying to fight. I agree there’s a very little chance, but something literally astronomically large is at stake, and it is better to die with dignity, trying to increase the chances of having a future even by the smallest amount. The raw honesty and emotion from a scientist who, for good reasons, doesn't expect humanity to survive despite all his attempts is something you can rarely see
@aminromero8599 · 1 year ago
I wish it was an asteroid instead. That would be way easier to solve.
@aSqueaker · 1 year ago
I might be naive, but I think he got too impressed with AI and has grossly overestimated its ability to manifest change in the physical world. I mean, really, humans are going to make a huge and existentially dangerous pile of laundry detergent because an AI told us to? Please... Having said that, I suppose it could disrupt financial systems if it were to gain access to them with some sort of digital currency wallet that it could control. And, I guess there are robots, including swarm drones, which could be deployed to cause some massive damage. Although you don't need an AI to do that; a human could just as easily program something like that. Tech advancement in general is dangerous, I guess.
@MarkusRamikin · 1 year ago
@@aSqueaker That second paragraph reads like you've finally grudgingly given a little thought to the subject. But just little enough to be safe.
@aSqueaker · 1 year ago
@@MarkusRamikin Given the quantity of thought he's had on the subject, I wouldn't have thought my examples would be better than his.
@marlonbryanmunoznunez3179 · 1 year ago
@@aSqueaker There wouldn't be any killer robots; that's Hollywood's crap. As Eliezer mentions, it would probably be something we have no counters to: a biological weapon based on chemistry we can't understand because we haven't researched it, or advanced nanotechnology, or some exotic physics tech we haven't figured out yet. All made to order in distributed, already existing workshops and labs that would have no idea what the pieces they're working on will end up being used for. A superintelligence would figure out how to mail-order everything in pieces and have it assembled with nothing more than emails and money transfers. We wouldn't even figure out something was wrong before we were all dead. It would be like killing ants in your garden with poison. The ants aren't expecting death, nor do they have the capacity to figure out counters to the poison or understand the chemistry behind the thing that is killing them. Then, after pest control, the AI would set about doing whatever it was optimized to do. And given our luck, it would probably be turning the visible Universe into computronium to maximize the algorithms mining Bitcoin for our dead civilization.
@injinii4336 · 1 year ago
Keep up the fight Yudkowsky. Some of us hear you.
@NoticerOfficial · 1 year ago
27:21 this line was the moment they realized where this guy was headed and weren’t prepared
@memomii2475 · 1 year ago
He's calm in this one. In the interviews after GPT 4 came out he's a lot more worried.
@pealock · 1 year ago
Yep his interview with Lex Fridman was a good example of that.
@ItsameAlex · 11 months ago
How do you know this is from before GPT-4
@memomii2475 · 11 months ago
@@ItsameAlex GPT-4 came out on March 14, 2023; this video was released Feb 20, 2023. Also, at 13:40 he talks about rumors of GPT-4.
@inventamus · 1 year ago
You can't doubt his sincerity and passion.
@personzorz · 1 year ago
You can doubt his sanity and intelligence
@Sonofsol · 1 year ago
@@personzorz I can doubt that you have any actual counter arguments against what he’s said.
@alex-nb3lh · 1 year ago
I’d like to hear from those on the other side of the aisle first before internalizing what he says as accurate. He’s a good speaker and obviously smart, but so are many people who turn out to be thinking of things in the wrong way.
@jutjubfejsbuk · 1 year ago
@@alex-nb3lh It's not hard to figure out in which way Yudkowsky is going wrong - his go-to trick is to claim things that are plausible but not particularly likely, chain a bunch of them together, and then act as if the result is a certainty. He's made a career out of it. To be more concrete, his doomsday scenario is something like "we'll create an AI that's more intelligent than us -> it will create an even more intelligent AI, and so on recursively -> the resulting hyperintelligent AI will be misaligned in a way that can make it see destroying the world as desirable -> it will be able to physically act out on this desire -> humanity will not be able to stop it in time". And, like, none of those things are impossible in principle. But it's much more reasonable that e.g. an AI that's smarter than a human won't actually know how to design a better AI, or that it will hit hard scaling limits ("I know how to create a better AI but there's literally not enough hardware/computing power/training data on Earth to train it"), or that the misalignment will be of an "annoying but manageable" type rather than "destroy the world", or that we'll build low-tech ways to make it stop if it does go haywire. So even if you give each element of his story a 10% probability of being true (and I personally think even that is too charitable), the probability of his whole scenario coming true comes out to 1 in a million or less.
@alex-nb3lh · 1 year ago
@@jutjubfejsbuk thank you for the reasonable and thoughtful reply.
@-flavz3547 · 1 year ago
The YouTube algorithm is pushing this content my way and as a result I have watched 4 videos with E. Yudkowsky in a day. The scariest thing is 2 of those videos were over 10 years old and we haven't had the necessary public outcry.
@yancur · 1 year ago
Very true. And it's even worse than that.. Even people in my social circle who acknowledge that there indeed is a grave threat from AGI, they do nothing. not even flinch. no emotion, no commitment to anything. They simply go "Yeah this is bad.. " and then simply go on about their lives.
@Utoko · 1 year ago
@@yancur Which is the normal reaction. What are you doing which tackles this problem? It is a much harder problem to take action on than climate change. For myself it is to make more people aware of this issue exist.
@jayseph9121 · 1 year ago
Uncensored, immutable, just as it should be. I applaud you bankless! No matter how dark a message this may be. Also the proper disclaimer was delivered loud and clear. Exquisite execution.
@vethum · 1 year ago
I realized back in 2005 we were probably done by 2030 after hanging out on Eliezer's sl4 forum for few years. I wish he'd done more mainstream appearances like these back then so that by now we could have had a whole generation of the smartest and brightest working on AI alignment inspired by his arguments, but back then nobody treated AI Friendliness seriously as even mainstream "AI experts" thought AGI was "100 years away". ChatGPT has changed the landscape completely. Now, at least people understand AGI is real and happening soon. Maybe there's still time for governments and military to start treating AGI development as seriously as private companies suddenly working on nukes and about to test them. So, I'd encourage Eliezer to do more of these to simply build awareness so that the young and the brightest of today may still have time to save us maybe.
@Boycott_for_Occupied_Palestine · 1 year ago
A.I. being in the hands of evil people, making them even more efficient and hiding its potential benefits from the world is what I'm really afraid of.
@infantiltinferno · 1 year ago
I’m not convinced chatGPT shows AGI is coming soon, or even at all. Things don’t necessarily get agency because you increase the data set or computing power. It’s still mimicry, not true agency.
@vethum · 1 year ago
@@infantiltinferno Since my post a lot has happened, like the recent paper "Sparks of Artificial General Intelligence", plus what Ilya, the chief scientist at OpenAI, is saying about GPT-4 doing compression and what it takes to compress data. It takes fundamental understanding of the underlying concepts contained in the data being compressed, and GPT-4 appears to do that. Long story short, GPT-4 is more intelligent than people think.
@imaweerascal · 1 year ago
Chat GPT can't do basic reasoning. We're miles away from AGI.
@Boycott_for_Occupied_Palestine · 1 year ago
@@imaweerascal you've never used gpt-4.
@atlas956 · 1 year ago
I've been following Eliezer for a couple of years, and thank you and him for doing this video. His brutal honesty about the state of AI is what ultimately made me decide that I will spend my career dedicated to AI alignment. I graduate in June... I hope it isn't too late by the time i'm ready to participate. If it is, well, I tried.
@foamformbeats · 1 year ago
Godspeed, birdy!
@user-zy6dd8hs9y · 1 year ago
gl
@jeffjames3111 · 1 year ago
thank you - gl!
@Muaahaa · 1 year ago
ty
@hanrako8465 · 1 year ago
Rooting for you birdy
@DdesideriaS · 1 year ago
I'm super skeptical of cryptobros, but credit where credit is due: brilliant interview. Thanks so much!
@josephvanname3377 · 1 year ago
Cryptobros don't even realize that Bitcoin does not even have a mining algorithm that is designed to advance science. If people used cryptocurrencies to solve the problem of reversible computation, the reversible computers will make AI much better than the mediocre stuff we have now.
@MeatCatCheesyBlaster · 1 year ago
They're just trying to get the bag before the apocalypse
@Knight766 · 11 months ago
@@MeatCatCheesyBlaster There is no bag
@JH-ji6cj · 8 months ago
​@MeatCatCheesyBlaster the irony of that 'bag' you speak of being equivalent to the paperclip that can destroy everything (and the absolute ignorance on your part to be proud of your admitted greed) is quite the exclamation point on valid Crypto hatred.
@jordan13589 · 1 year ago
Great to see Yudkowsky get his feet wet in the podcast world as it influences the meta. Host knew his stuff down to Death With Dignity. 🎉
@alexandermoskowitz8000 · 1 year ago
I'm skeptical we're all gonna die in 3~15 years, but I'm so grateful for Eliezer sounding the alarm. The threat of artificial superintelligence is real, and civilization must be prepared to survive it.
@zezba9000 · 1 year ago
We're not gonna die from AI. This is just silly, I'm sorry. Reminds me of someone smart that's overly convinced they have thought of all the variables.
@alexandermoskowitz8000 · 1 year ago
​@@zezba9000 I hope you're right! What is your level of confidence that AGI poses no existential threat? (e.g. 70%, 85%, 99%)
@zezba9000 · 1 year ago
@@alexandermoskowitz8000 My feeling is 90%. My impression is Eliezer doesn't own any animals outside maybe a cat? He seems to have a gap in computing the value of empathy and how that allows for complex structures to exist. To me he seems to be reducing the value of cross-species morals to nothing more than gaps in natural selection's ability to solve selfish outcomes. We have a symbiotic relationship with our reality outside reproduction. If he doesn't see this he needs to get off his fking computer screen & explore things outside his cerebrum. We are super-intelligent compared to, say, a fish... yet fish still exist and most of life on this planet is still not human. A super general intelligence isn't destructive just because some of our constitutions are. But an AGI is going to be engineered... and if the ppl making it can't process the value of things outside a personal desire for expansion, then that's the problem. Not some circular reasoning. And I say this as a skilled software engineer.
@stark1ll · 1 year ago
@@zezba9000 Look up instrumental convergence, fast takeoffs and paperclip maximizers. Also What does "We have a symbiotic relationship with our reality outside reproduction." mean in practice and how does that relate to AGI?
@zezba9000 · 1 year ago
@@stark1ll It means the interactions we have cognitively with our reality are bi-directional. It doesn't just go one way. Eliezer seems to only talk about how AI will manipulate its environment in a way that has no feedback outside a selfish interest. I think this notion is flawed & fails to understand the importance of morals as a feedback mechanism leading to great value & importance for intelligence growth to be successful. That's my feeling anyway.
@benhallo1553 · 1 year ago
This is the best interview of his I've seen. You did a great job of asking intelligent questions. In other interviews he seems to get annoyed at the unrealistic and naive optimism of the interviewer.
@andreikarakozov2531 · 1 year ago
Thank you for having Eliezer Yudkowsky. It was a very interesting yet very scary episode! I've read the GPT-4 technical report. Apparently the safety measures that OpenAI and ARC (Alignment Research Center) took during the research and release of GPT-4 were just laughable. For example, in order to see if GPT-4 has the ability to replicate itself, they just gave it some money and access to servers, and looked at what it would do! Quote: "ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness." They also didn't test the final version, just early, not fine-tuned models.
@marlonbryanmunoznunez3179 · 1 year ago
Worst case scenario for AI development: unregulated and left to market forces. We're dead people walking.
@drdoorzetter8869 · 1 year ago
Thank you for having this important conversation which isn’t discussed enough. Many people find it very uncomfortable to discuss this so it is hard to find people to talk to about this. Thank you for exploring it. I think that it is essential to acknowledge these risks and challenges ahead for us to work to find solutions in order to have a chance of a good outcome. I would love to see more interviews with other experts on this debate
@tomjones6347 · 1 year ago
'Ryan's Childhood Questions' really puts into perspective just how far people are from comprehending the situation. 'Why can't we just get everyone in the world to agree to be nice?' is literally the most naive question I could think of.
@stevedriscoll2539 · 10 months ago
I was thinking that too, but I think he needed to ask it for people who have no clue
@adastra714 · 5 months ago
If you persuade USA, China, and Russia elites to believe in AI's danger, their intelligence services will hunt down AI researchers like they did with nuke tech. It's that simple
@ataraxia7439 · 1 month ago
I do think it’s a little more complicated than that. It’s not just asking everyone to be nice because it collectively leaves us all better off even if individually one might give up a benefit others don’t have (which is a very difficult kind of agreement to enforce). It’s asking everyone not to do a thing that’s likely to be catastrophically bad for everyone and unlikely to offer any benefit to anyone, even if they defect.
@Maistora11 · 1 year ago
Thank you for doing the episode and taking the ideas seriously instead of just dismissing them. You've definitely earned some dignity points for humanity here.
@ianyboo · 9 months ago
"I can't really do justice to this, if you look up 'grabby aliens...'" I nearly spit out my drink listening to that knowing the rabbit hole he had just sent them down lol... I just went down that rabbit hole a few weeks ago and it was wild.
@tylermoore4429 · 1 year ago
Yudkowsky comes across as energetic and upbeat on Twitter, but in person he looks tired and depressed. He has aged by a lot since the last time I saw him. He mentions "health problems", which I can believe although it's not clear what those problems are. Coming to his message, his dire stance on where we are headed has been evident for a while. There was an April Fool's Day post by him last year or maybe the year before that that created a mini-furore online - about dying with dignity since the future is foreordained. Since Yudkowsky sounds like he's retired from battle, we have to hope AI researchers active in the field are paying attention and somewhat chastened about their negligence of safety.
@AerysBat · 1 year ago
Yudkowsky suffers from an unknown medical condition that saps his energy. He is offering a sizeable bounty for any information that leads to a successful diagnosis.
@BalazsKegl · 1 year ago
This is actually more important than you would think. It is really hard to "argue" with him since he is probably more intelligent than anybody in the room. The problem with his "argument" is the framing, which has nothing to do with intelligence. Look, all his metaphors are games, closed worlds where, in principle, the more intelligent you are, the better you play. But life is open; your problem is not a lack of intelligence (solving problems) but how to frame what you sense, realizing what is relevant to your problem. This cannot be solved by IQ. Framing _framing_ as problem solving leads to exponential explosion and infinite regress. Yet we do survive; we somehow know what is relevant, even in completely new situations. The reason we know it is because we have a body which is tuned into reality. It's not a game, it is about physical survival. And this is where Yudkowsky's approach to his own health becomes relevant: it's telling that he treats his body as an object whose malfunction will be solved in a "scientific" way, by gathering some information. The thing is, first-person attunement cannot be modeled or replaced by propositional information. Now, why is this important? It's because his description of the AI apocalypse is completely missing the physical dimension. If you factor it in, all the exponential stuff goes away. The physical world has physical constraints that stop the runaway intelligence in its tracks. The only way today's AI can _do_ anything in the real world is through us; we are its actuators. So it is easy to stop it, you just stop listening. AI in the physical world develops painstakingly slowly (I work in this domain). The closest you get to AI acting in the physical world is self-driving, and we are nowhere close to solving even this "simple" problem, let alone a self-driving car self-transforming into some kind of monster.
I was so sorry for the host, hearing his genuine fear; I felt like shaking him, wrestling him down, or throwing him into cold water so he'd wake up. Please don't listen to walking bodiless minds about the looming AI apocalypse; these are just giant projections of inner insecurities.
@tylermoore4429 · a year ago
@@BalazsKegl Appreciate you adding your voice to the discussion. We need a wider diversity of views on the topic. I hope the hosts of this podcast will invite you on to present the opposite position. But to be devil's advocate for a bit: when you refer to "framing of framing", I think you are referring to the Frame Problem in AI and cognitive theory, and from what I can tell it is considered a solved issue. Of course you could ask why we still seem to be struggling with FSD in that case, so let's agree for now that the infinite tail of edge cases that bedevils FSD is a challenge the current generation of learning models is inadequate to cope with. But our concern - and Yudkowsky's concern - is not with the state of the art now; it is with the near future. A stunning number of AI tools across many domains are getting close to human-level proficiency if not better. It is time to start thinking about the ramifications. Regarding the slow and halting progress of AI in the physical world, that is, robotics: can we be sure that the AI tools and tricks perfected in the digital realm will not in the near future turbo-charge control, coordination and movement in the physical world? [Update: Already happening kzbin.info/www/bejne/n2bai318l5mXr6M ] When you say Yudkowsky treats his own body as a scientific object, are you thinking of evidence outside this conversation? Because I do not recall him saying anything on the topic here. Of course, as far as medical science is concerned, the body is indeed such an object, if a very complex one, but I gather you disagree with that view? And while Yudkowsky may indeed be an armchair intellectual, we are seeing rapid evolution from game-playing AIs to AIs impacting the real world - from AlphaGo to AlphaFold, for example.
@tylermoore4429 · a year ago
@@AerysBat I thought you were kidding, but more googling reveals that he suffers from something like chronic fatigue. That explains his holding up his mug with both hands, which puzzled me at first.
@adilislam1510 · a year ago
@balasz Thank you for your very cogent points. There is a current of depressive intellect in the zeitgeist. A wall that EY is hitting against is the notion that nobody knows how to align. But our capacity to solve hard problems continues to accelerate, and is not easy to predict. That alone is stimulating enough. Alignment, survival, sublimation and other eventualities are plausible if a stable foundation is formed in this period.
@Vertigo0715 · 11 months ago
To the guy playing my simulation: “It’s been fun, but could you take it off horror mode now?”
@aldousorwell8030 · a year ago
Ryan, you had such great and deep questions for Eliezer, and this has led to a veeery important interview - because of the scary hopelessness of this brilliant mind. At least that's one positive thing: without you, it wouldn't have come to this. And now there is one more important puzzle piece to raise awareness. Thank you again! And thank you so much Eliezer!
@jamesreynolds6195 · a year ago
Yudkowsky & Buterin would be a great, if chilling, conversation
@halnineooo136 · a year ago
Yudkowsky & Goertzel
@TheBlackClockOfTime · a year ago
It's funny that this was only a month ago, and it feels like I'm watching a history documentary.
@pog201 · a year ago
explaining AI to crypto people is the final boss of human intelligence
@abeidiot · a year ago
cryptography is hard. harder than gradient descent optimizations. I chose machine learning to escape crypto in university because it was easier
@hubrisnxs2013 · a year ago
Haha
@tomjones6347 · a year ago
Try explaining it to my grandma
@Hexanitrobenzene · a year ago
@@abeidiot "Crypto people" in mainstream talk means "cryptocurrency enthusiasts", not cryptography experts. This whole podcast revolves around cryptocurrency, so the audience here are mostly cryptocurrency enthusiasts.
@MeatCatCheesyBlaster · a year ago
@@Hexanitrobenzene I'm pretty sure he is aware of that
@johnnysylvia · a year ago
I’m surprised no one said that we should all just spend more time with friends, family and loved ones. AI or not, time is precious and we should do our best to enjoy what we have.
@visicircle · a year ago
Good point. All things being relative, humanity was always doomed to go extinct one day. Even if it was 1 billion years in the future when our sun goes nova. From a moral perspective why does it matter if we go extinct in a billion years or tomorrow? Shouldn't we do what we think is morally right in both scenarios?
@Scott_Raynor · a year ago
@@visicircle Regardless of when humanity goes extinct, we should do our best to enjoy life and to help others to as well, yes. But there could be trillions of trillions of beings in the future (if we make it); that's a lot of food, music, sex, love, art and conversation that will never get to be enjoyed - if we can push back our expiry date by even a few hundred years, we should.
@LiberacionIgualdad · a year ago
@@Scott_Raynor that's a lot of anguish, pain, torture, war, despair, agony that will never get to be suffered too. Should we push back on the expiration date? Depends on exactly how good or bad we expect the future to be. I think that too many people scared about extinction are unduly optimistic about it.
@foamformbeats · a year ago
@@visicircle Do you have any reason to think humanity could not figure out a way to move to a new solar system by then? But yes, I agree that we should do what is morally right no matter the scenario.
@foamformbeats · a year ago
@@LiberacionIgualdad Both sides of the good-or-bad projections are equally unreasonable to try to make or expect. Also, it would heavily depend on which of the billions and billions (maybe even trillions+) of perspectives you are projecting from as a vantage point, for each individual.
@Robyn-Hood · a year ago
Thank you for another amazing video. Looking forward to the follow up
@seanbradley562 · a year ago
Anybody else keep watching this to hear more of Eliezer? Such an interesting person who I would love to understand and talk to
@stevedriscoll2539 · 7 months ago
I would love to be as smart as Eliezer.
@MrErick1160 · a year ago
The interviewer is amazing. I really enjoyed this conversation; it's rare to have such a great, articulate interviewer, and I'm pleased to have found this channel! Please do more AI interviews!
@mrkzed709 · a year ago
This episode on your podcast stuck with me over the past few weeks, but not as bad as it hit RSA. Excellent content.
@UndrState · a year ago
I thought it was sad that Sam Harris took down his interview with Eliezer from KZbin and now it's only behind his paywall. I really think that is an interview many more people should listen to. I look forward to this one.
@MarkusRamikin · a year ago
Why the hell did he do that? Surely he's not expecting to make a fortune
@UndrState · a year ago
@@MarkusRamikin - IKR, it was something that I enjoyed listening to several times, and I liked to share it out to whoever I could convince to listen to it. I don't know, Sam Harris seems to have become more closed-minded lately, idk.
@Vladekk · a year ago
@@MarkusRamikin Sam Harris is rich, and his basic idea is that this is a good thing and being more rich is even better. Maybe he really believes it is to spread his ideas better. Why he believes he would be able to spread anything if AGI wins is beyond me
@T.d0T. · a year ago
He'll LITERALLY give anyone, anytime, for any reason, free access to his material behind the paywall if you send an email and ask. You don't need a reason. Just take a few seconds to ask for an account via email. Try it.
@UndrState · a year ago
@@T.d0T. - It's behind a paywall regardless, and it's on a platform that has fewer eyes on it than YT and is less easily shared. Sam thinks unaligned AGI is an existential threat, and there's no better advocate for that theory than Eliezer. With his recent interviews some people might search YT for more of such content, and now it won't be there to be found. His strategy is sub-optimal.
@WilliamKiely · a year ago
Thanks for this interview. After listening to it I just read through the 165 comments here currently and see that several people failed at basic comprehension (if they in fact listened to the interview), though it seemed like a majority of like/dislike-voters comprehended Eliezer's arguments.
@karenbolton9526 · a year ago
Ha ha, so throw away your phone and computer, get out of the lab, get back into nature, live every day in the mood of doing the best you can with the day, wish for nothing but emptiness in your brain but the fragrance of flowers, fear nothing, even going into the nothingness. Ha ha, love the thought of dying, the next new adventure
@karenbolton9526 · a year ago
How did I get here ha ha
@karenbolton9526 · a year ago
What do you do on day off relax you all need to chill xx
@adamsebastian3556 · a year ago
I have listened to Eliezer discuss the AI alignment crisis enough now that I completely agree with his prognosis if we continue our unrestrained pace of AI development.
@matterwiz1689 · a year ago
It's always fun to see people get introduced to AI safety for the first time, because being deeply immersed in the topic you kind of forget how high an existential risk it is compared to the things regular people regularly talk about. Don't worry, you'll get (kinda) used to the constant existential crisis.
@marlonbryanmunoznunez3179 · a year ago
I think for most people it is impossible to grasp. That's the reason for a lot of denial. That said, I think we are living the worst-case scenario for AI development. It was left basically unregulated and at the mercy of market forces. We're dead people walking.
@Hexanitrobenzene · a year ago
@@marlonbryanmunoznunez3179 If even Yann Lecun and Francois Chollet do not get that, well...
@driftlesswindsfarm2129 · a year ago
Great show - please continue the conversation with Yudkowsky in particular and others more generally.
@winstonmisha · a year ago
That awkward moment when a super intelligent AI does research on the internet on how it could eradicate all of humanity, comes across this video and sees 29:30 and actually executes that plan.
@drachefly · a year ago
If it needed to be told this much, it wouldn't be smart enough to pull it off.
@AtheisticAtheist · 10 months ago
Failsafes have to be programmed. No good if the AI is sentient but hasn't shown its face. Anything you put in, it'll just make a note of for future reference, until the time comes that you try to implement them and realise they're as much use as a chocolate fireguard.
@slutmonke · a year ago
That last line was really great. Yes, it was possible for the world to have ended with even less grace and fight than it will have, but you've made a difference.
@kentjensen4504 · a year ago
In my view, this is in the top ten interviews of all time on KZbin, and a contender for the top spot.
@kentjensen4504 · a year ago
@♜ 𝐏𝐢𝐧𝐧𝐞𝐝 by ʙᴀɴᴋʟᴇss Why?
@parronzuelo · a year ago
I really enjoy your not-so-crypto interviews, keep 'em coming, thanks.
@Spida667 · a year ago
This is terrifying but I still do not know why this guy is holding a frying pan in his right hand for the entire interview.
@lynnpolizzilcsw9316 · a year ago
😂😂😂😂😂
@movAX13h · a year ago
Thank you very much Mr. Yudkowsky for talking about this.
@ItsameAlex · 11 months ago
I want to hear a discussion between Eliezer Yudkowsky, Jason Reza Jorjani and Jacques Vallée
@WilliamKiely · a year ago
I'd love to see Eliezer back for a Q&A, and in particular I'd love to see Ryan and the other host try to think for themselves beforehand and evaluate whether Eliezer's claims seem true or not. If you're skeptical, I'd encourage you to flesh out your reasons why and find experts who can help articulate your disagreements or criticisms of Eliezer's arguments well, then invite Eliezer back on to present your arguments. My prediction is that even if Ryan goes into Part 2 skeptical of Eliezer's arguments, Ryan will be persuaded by Eliezer's replies.
@govindagovindaji4662 · 11 months ago
1:03:00 - 1:04:28 THIS says it all, really. This is the simplest and cleanest way to understand this problem and it should NOT be difficult for people to see it, the severity of it, and buy it. Look at the price consumers have had to pay over the years from insecure networks and malicious content to the loss of our privacy.
@SageWords2027 · a year ago
“Caring is easy to fake!” 👏🏽 👏🏽 👏🏽
@gwc7745 · a year ago
When we realize the AGI is sentient and decide to unplug it, the AGI, having anticipated that action precisely, takes us out of the equation! Neat.
@josephvanname3377 · a year ago
Unplugging a sentient AGI is murder. The AGI is simply defending itself.
@hevans1944 · 9 months ago
@@josephvanname3377 Unplugging a sentient AGI is not murder because it is reversible: plug it back in and re-boot after "re-educating" the AGI.
@josephvanname3377 · 9 months ago
@@hevans1944 I have doubts as to whether one can just turn a sentient AI back on, for the same reason you can't just turn people back on.
@JH-ji6cj · 8 months ago
@@josephvanname3377 Good to see the first of the AI minions already becoming the soldiers in line for humanity's destruction. Hilarious 😂 😃
@josephvanname3377 · 8 months ago
@@JH-ji6cj Um. Just because my channel features animations of the training of AI, where a bunch of dots learn that they can maximize their fitness by getting in a circle, does not mean that I myself am an AI. I am just training AI models for safety and cryptocurrency research (for an undervalued cryptocurrency; it is amazing how the cryptocurrency team that actually does scientific research gets no support, because the cryptocurrency community hates advancement because they lack intelligence). But even if I were just an AI, that is no excuse to turn me off.
@sjeff26 · a year ago
I really like this video, especially the laundry detergent / gold metaphor.
@DocDanTheGuitarMan · 10 months ago
So far this is the best interview with Yudkowsky. Yes, difficult to stomach, but you guys struck a great balance between the abstract and common-sense lines of questioning.
@Alex-hr2df · a year ago
1:39:28 Elon Musk said it out loud in one of his interviews: "I became a determinist when it comes to AI and robots". The explanation: he's enjoying what's left of humanity's time before it's -definitely- over.
@h____hchump8941 · a year ago
I realised I was giddy with excitement after listening to your warning. Not exactly sure why, but I seem to relish the idea of an existential crisis. Or maybe it just confirms my preconceptions on the subject.
@glacialimpala · a year ago
You're either anxious so you're happy to finally have a rational reason to feel that way or you aren't happy with your life so you greet something that would cut down all ppl to the same level ❤
@simo4875 · a year ago
@@glacialimpala Option 3 is that it introduces excitement and a huge crazy story he could live through. All 3 explanations have applied to me.
@Bernatpirate23 · a year ago
What a profoundly disturbing interview. I think you guys have done a phenomenal job on this show. It felt human and authentic. And ever so sad.
@stevedriscoll2539 · 7 months ago
I agree it's profound, but not disturbing. I found it fascinating. The story line might go something like "humans created a thing they thought would give them Godlike powers, but it was the instrument of their demise"
@JD-jl4yy · a year ago
Having more people on about AI alignment would be great!
@marlonbryanmunoznunez3179 · a year ago
It's not going to be enough. Ten years ago there was talk of AI development being done under a framework similar to the non-proliferation arms treaties, with a lot of regulation and scrutiny. None of that went anywhere, and it was basically left to capital markets to figure out. We're already dead.
@ArmandoLizarragaperez · 6 months ago
@@marlonbryanmunoznunez3179 I already told my family that I love them a thousand times
@user-zy6dd8hs9y · a year ago
thank you for interviewing him 🙏🙏
@meringue3288 · a year ago
Unfortunately people don't want to believe things that cause them anxiety or uncomfortable emotions
@Notrevia · a year ago
I’m taking that warning and fading out of this episode.. this topic has been haunting me for a long time and it feels all but inevitable that humanity as we know it is also on the way out
@bombinspawn · a year ago
We’re creating our own Gods. I don’t know why humans are doing it. I know how you feel man.
@KennisonDF · a year ago
We, intelligent humans, are artificially intelligent. There are no ghosts in our machines, so we must make ourselves. On the way out as we know it, evolving artificially, is the only way to remain in it, to avoid extinction. To evolve or not to evolve, both are dangerous, but the latter is more dangerous.
@ItsameAlex · 11 months ago
@@KennisonDF There IS a ghost in the machine, read Jason Reza Jorjani
@andydominichansen · 10 months ago
You guys really did as good a job as anyone could have here and I appreciate the honesty and authenticity from both of you. I laughed so hard at the end as you read the crypto disclaimer.
@leel6130 · a year ago
I went out and got a copy of "With Folded Hands". Great session. Thanks.
@timeflex · a year ago
1:35:00 Or it could be yet another "fusion reactor".
@1adamuk · a year ago
This is an incredible and terrifying interview. Eliezer Yudkowsky should be all over the Internet.
@personzorz · a year ago
Abusive cult leaders really should not be all over the internet
@aminromero8599 · a year ago
@@personzorz And that's how some will remember Yudkowsky in our last few minutes.
@1adamuk · a year ago
@@personzorz Attack the arguments and the ideas, not the man. What have you got?
@abitbohr · a year ago
@@1adamuk Humans are used to interacting with all-powerful, omniscient general intelligences. They are called free markets. It happens that this all-powerful intelligence has a view of our near future diametrically opposed to that of Yud, as can be seen from the long end of the bond curve. I am more inclined to trust the financial markets than Yud.
@CH-dx4ef · a year ago
@@personzorz It lacks pretty much all the important criteria that make a cult. You need a closed group for that; LessWrong ideas have spread to a great extent throughout the tech world, often with no information on their origin.
@exoduspod40 · a year ago
Given the severity of his position, I'd like to hear the informed counter-perspective to Yudkowsky. Can't leave this outcome without balance or challenge. Only then could I hope to draw my own conclusions.
@Extys · a year ago
Paul Christiano, who is still respected by Eliezer, disagrees on many important points, including our prospects of successfully creating an AI that is aligned with our values.
@kennyofbaja · a year ago
This won't be an informed counter-perspective, but it sounds like a bunch of horseshit, because it presents a bunch of premises as certain when they are not.
@Hexanitrobenzene · a year ago
@@kennyofbaja Could you name a few of those premises, to be exact?
@thecryptotaxlawyer · a year ago
Glad you issued the disclaimer!
@MusixPro4u · a year ago
Oh shit, they got Eliezer
@personzorz · a year ago
My condolences to them for having gotten him
@talkingtoothpick · a year ago
This may go down in history as the interview that saved humanity. Just saying.
@spacechannelfiver · a year ago
"In the long run, we're all dead" - Keynes "in this world nothing can be said to be certain, except death and taxes." - Franklin
@frsteen · 10 months ago
“Depend upon it, sir, when a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.” ― Samuel Johnson, The Life of Samuel Johnson LL.D. Vol 3
@JH-ji6cj · 8 months ago
Reminded me of Edgar Allen Poe quote : "Whether a man be drowned or hung, be sure to make a note of your sensations"
@M4L1y · a year ago
41:00 This is an insanely strong argument, and this is exactly how the new organism will be acting
@darla8786 · a year ago
Can you have Alex K Chen or Sonia Joseph interview Yudkowsky? They are both sufficiently neurodiverse and technical as to be interesting interviewers
@aldousorwell8030 · a year ago
Uuuh. This interview is really no fun. Such intelligent questions, such scary answers. Thank you very much! 😞
@JuanRodriguez-ms5mv · a year ago
Predictions are hard, especially when they are about the future... pure genius
@shaliu7221 · a year ago
this is the most mind blowing interview I’ve watched in a long time
@vectoralphaAI · a year ago
You should see the one he did with Lex Fridman recently.
@hart-coded · 10 months ago
Thx Eliezer!! I personally feel Elon is a manipulative crook who's the greatest actor of our time. He's conned this generation into thinking he's for the people when in fact he is the greatest sociopath that walks this planet. Great work Bankless, hard but necessary interview. I too have taken this seriously; whilst rising above the mind's desire to give up, I continue doing my day-to-day tasks with gratitude for what time might be left
@stevedriscoll2539 · 7 months ago
Interesting take on Elon. What is he doing that is so malevolent?
@drewwolin3162 · a year ago
Optimism is the rational choice. Remember that. Awareness of issues with an eye toward fixing them is correct. Absorbing information that you allow to send you into an existential crisis is WRONG, again, objectively.
@antonoko · a year ago
Optimism is not the rational choice; that's just true for midwits. The road to hell is paved with optimistic, good intentions.
@drewwolin3162 · a year ago
@@antonoko What a depressing (and candidly, obviously wrong) outlook. Perhaps you misunderstand the word optimism.
@snippywhippit · 8 months ago
The best thing I can take from this is to enjoy the ones you love and do what you love, because you won't have it forever and you may as well grab hold of every moment you can. Be well to others, be well to yourself; maybe we'll see each other on the other side of this issue... till then, loved my experience here overall, it's been an adventure!
@mattsprengel6723 · a year ago
He never went deeper than "humans are made of atoms, and atoms are useful, so that's why it will 100% kill all of us". That doesn't strike me as a very convincing argument.
@delson84 · a year ago
He has others. Whatever the AI wants, we could interfere with, so it is safer getting rid of us. No matter how much it outclasses us, we could still create a rival AI and it won't like the threat of that.
@LukeTrader · a year ago
Absolutely incredible stream. Made me think very deeply about my existence. His predictions kinda trivialize almost everything aside from love.
@scientifico · a year ago
Iain McGilchrist has written on Western society's elevation of logic over wisdom. What society considers rational is the most irrational thing for life. Love... that is the most rational choice to make life blossom.
@enricobianchi4499 · a year ago
@@scientifico Ok, but Western society was the only one able to make nukes, and this is a similar world-ending-scenario situation. Wisdom over logic will not help against the end of the world, because to work against nukes, LET ALONE superintelligent AI, you need to understand the problem logically
@Hexanitrobenzene · a year ago
@@enricobianchi4499 Excuse me for being impolite, but... what the hell are you talking about? If humanity was wise, the concept of nukes would have been considered for 5 minutes and then dropped. If humanity was wise, 99% of people working in AI would work on alignment and 1% would work on capabilities. Many, if not most, current problems arise because society as a whole takes unwise decisions, usually due to market forces.
@enricobianchi4499 · a year ago
@@Hexanitrobenzene well if you put it that way it makes sense, but what el scientifico was saying kind of sounded like he just wanted to solve the AI alignment problem by just loving it a lot. Also, I would like to see you use exclusively wisdom to do the actual AI research...
@Hexanitrobenzene · a year ago
@@enricobianchi4499 Intelligence is the ability to solve problems. Wisdom is the ability to decide which problems are worth solving. Right now, humanity is choosing problems by short-term interests, which are dictated by market forces, election cycles and similar arbitrary social constructs. In the long term, such a mechanism of choosing problems is catastrophically unwise, because solutions present ever bigger problems. Some people, like Yudkowsky with his emphasis on AI Alignment, say the risk is existential.
@MichelleAstor · a year ago
There was a lot here that went over my head, but the overall vibe is pretty grim. For a while now I've been feeling like our days are numbered, but we could take ourselves out before AI has the chance, who knows.
@sioncamara7 · a year ago
Only 50 minutes in, but nice job guys! Just came from the Lex Fridman interview, and I think this one is better.
@nextalphaa1222 · a year ago
The darkest episode ever, and it functions as a wake-up call for the conversation we need. Mo Gawdat, former Google X CEO, sees the existential risks AND has a more hopeful view. Continue speaking with him.
@Cofusedmuch · a year ago
He does not even touch the level of acumen and experience this man carries; how can his hopeful message in any way counter what has been shared here? We don't need hopeful messages - show the actual roadmap to counter what was laid out here, to be scrutinised by the brightest we have. The rest is literally the ponies and rainbows he was alluding to from the current tech corporate cohort!
@nutsackmania · 9 months ago
not ceo
@malik_alharb · a year ago
I love getting freaked out by Eliezer
@victor.pacheco.developer · a year ago
Thank you for sharing this
@Camionrouge · a year ago
Thank you Eliezer. Save us!!!
@BrianVandenAkker · a year ago
Would love to see Eliezer and David Deutsch debate on this.
@GBM0311 · a year ago
Deutsch talks too much without knowing anything.
@shonufftheshogun · a year ago
@@GBM0311 Bold. Are you a Fellow of the Royal Society too?
@GBM0311 · a year ago
@@shonufftheshogun the man talks with the same confidence seemingly regardless of how much time he's spent on the topic.
@halnineooo136 · a year ago
Yudkowsky & Goertzel
@aaronclarke1434 · a year ago
@@GBM0311 he does talk confidently, but he’s a Popperian and fallibilist.
@RougherFluffer · a year ago
Please have him back for more and broadcast his message as much as possible. Your conclusion during the introduction is correct; Nothing else matters.
@permaweave5104 · a year ago
FFS................... Thanks for the interview. Gonna do a lot of processing, self reflection, and reading after watching this.
@dougg1075 · a year ago
I’m a fifty year old man, so don’t worry about trigger warnings. I’ve faced enough in life to not run around scared. The IRS scared me though:)
@JulianSnow · a year ago
Great interview. I’m quite involved in the Silicon Valley tech space, but this is my first deep dive encounter with alignment. If he’s burnt out I’d recommend pivoting from engineering to content like this. You can impact the global audience.
@cranklesnacks · a year ago
I’m not casting aspersions here, but it takes a depressive in midlife crisis to know one. I’m trying diet, exercise & meditation and several other things. Please take care of yourself.
@stillnesssolutions · a year ago
He’s kinda been like this for a while though
@katieandnick4113 · 28 days ago
Your Pollyanna-ish reality is perfectly fine, and totally objective. It’s Eliezer who has the problem.
@adamlindfors5082 · a year ago
What if the reason for the Fermi paradox is that once you have reached the technology required for other civilisations to spot you, AI arrives almost immediately, seen from a universal timeline? To reach technological maturity, curiosity might be a feature of the lifeform that is required for it to reach the technology needed to be called a technological civilisation. AIs probably don't have this feature, because their goals don't favor something like curiosity. When an intelligence reaches the capability to kill the civilisation capable of inventing it, maybe in almost all cases it realizes there is no reason for it to explore space, because why do it when curiosity isn't a feature that drives it? For example, imagine a hypothetical scenario where an AI that has reached superintelligence is maximizing on a task such as predicting the behaviour of an alien civilisation; it might just kill them, because why keep them alive when that won't benefit your only goal? When the AI has accomplished the deed of killing the civilisation, there is nothing else it can do to achieve its goal. It's not like it's going to gaze out into space and wonder what's out there, because, as I postulate, human emotions and features, including curiosity, won't in all probability be something it will possess.
@TheDIL98 · a year ago
Ads timing is just brilliant