I would love to hear so much more from Yudkowsky. Please bring him back for the Q&A. I would love to know what a normal person can do to help the cause of AI safety.
@Bankless 1 year ago
We're hosting Yudkowsky for a Twitter Spaces today at 12pm PT! Follow @BanklessHQ to get notified: twitter.com/BanklessHQ
@lovinLaVonna 1 year ago
I don't have Twitter, so is there anywhere else I can hear it? Even some time after the fact — it's definitely something I would like to hear. Thank you guys for all that you do.
@r_bor 1 year ago
It sounds like you're not loyal enough to the Basilisk.
@nowithinkyouknowyourewrong8675 1 year ago
A normal person cannot help; a normal person can die.
@nowithinkyouknowyourewrong8675 1 year ago
As well as grabby aliens, another one is Sandberg's "Dissolving the Fermi Paradox".
@_bhargav229 1 year ago
"First they ignore you, then they laugh at you, then they fight you, then everyone gets turned into a paperclip."
@leslieviljoen 1 year ago
😂
@BoundaryElephant 1 year ago
LOL -- dead.
@ItsameAlex 1 year ago
What would happen if Eliezer Yudkowsky had a discussion with Jason Reza Jorjani and Jacques Vallée?
@Piedpiper1973 1 year ago
Well well, smart people: this content, albeit very good content (I love Bankless), is adding to the dataset of AI as you speak. So this doomsday scenario is now in the ETHER, pun intended.
@myshkakozlovski802 1 year ago
Nobody has located a self or a will in a single human, and spacetime is allegedly an emergent illusion. So then how can a self arise in a technology and willfully apply itself to destroy elements of something that isn't actually there? Is this going to turn out to be the firecracker we all jump up and down for that turns out to be a silent puff of smoke? A total dud?
@aminromero8599 1 year ago
The crypto advertisement between Eliezer's explanations of why we are doomed would be hilariously satirical if it weren't so sad.
@RazorbackPT 1 year ago
I literally broke into a fit of laughter at that point. A mix of the absurdity of the tonal contrast and a way to relieve the built-up tension.
@Aryeh-o 1 year ago
At least AI won't dump. Or would it?
@Sandwichism 1 year ago
So dystopian lmao
@jayseph9121 1 year ago
sooo there's still going to be a bull run first, right?
@catologic 1 year ago
At least it's not Raid Shadow Legends
@gbeziuk 1 year ago
This is the most inspiring totally hopeless discussion I've ever witnessed.
@gbeziuk 1 year ago
@@johnclancy7465 How could "we're all gonna die! by AGI! and VERY SOON!" from Yudkowsky ever NOT be inspiring?
@@josephvanname3377 You really have no idea what you're talking about, do you?
@gbeziuk 1 year ago
@@josephvanname3377 Surprisingly, I might have heard a thing or two about reversible computing. And maybe even about the differentiable kind.
@benschulz9140 1 year ago
A man who stood up and said "we have a problem, and it will end poorly for us." Endlessly mocked for a decade. We're a pathetic species sometimes. Thank you for speaking up.
@personzorz 1 year ago
And he will be endlessly mocked for decades more
@uilulyili2026 1 year ago
@@personzorz For like a decade at most lol, because we'll be dead after that
@patrickwrightson2072 1 year ago
@@personzorz Depends on how much time we have. Maybe just a few more years.
@foamformbeats 1 year ago
@@personzorz So you disagree with him?
@alsu6886 1 year ago
@@foamformbeats The general consensus is that AGI is still at least a decade, if not many decades, away. When GPT-5 or something like it hits the economy for real, everyone will become invested in AI, and that will be a perfect opportunity to launch a full-scale Manhattan Project on AI safety. If we don't squander this opportunity, we will probably have enough time to solve it. We don't necessarily need 50 years if we actually push hard. Think trillions of dollars and the best minds, not millions of dollars at a few places like MIRI. So while I share Eliezer's concerns, I do not share his pessimism.
@waysofseeing1 1 year ago
I doubt there is a person in the world who wishes he were wrong more than this guy. A heartbreaking interview because of the sadness Yudkowsky exudes in the wake of his realization. I suppose I should be most heartbroken by this extremely intelligent expert's prognosis. I'm also human, and not as bright, so it's not the logic of his argument but the authentic human sadness of Yudkowsky that overwhelms me first and foremost and makes me desperately wish I had something to offer for consolation.
@hayekianman 1 year ago
Sure, he is a good demagogue if it is sadness that moves you. He should be ignored.
@d_e_a_n 1 year ago
@@hayekianman You could say he's appealing to fear, as the things he's saying are fear-inspiring, but is he not using rational argument?
@hayekianman 1 year ago
@@d_e_a_n Everything is possible in the realm of probability; human beings live in a world of uncertainty. Is there a risk that AI will kill humanity? Sure. Is there a risk Yellowstone could explode and start a new ice age? Could an asteroid kill everyone? It's fair to say it's nobody's responsibility to think of all these things, let alone act on them. If AI kills everyone, so be it. Nuking datacenters to prevent it is infinitely more stupid.
@sebastianm6458 1 year ago
I'm pretty sure we're already plugged in
@mml3140 1 year ago
@@hayekianman Why?
@thesiegfried 1 year ago
One reason many people don't take action to prevent catastrophic events is that they simply forget as they go on with their daily lives. Many people watch this episode, are very concerned — and then forget over time. The difference you, the Bankless show, can make is to keep reporting on this problem regularly. Keep people aware of it.
@rumpbumion5080 1 year ago
Just as the palm of the hand grows callous, so too does the mind. Repetitive reporting on something that isn't immediately affecting your day-to-day life doesn't seem very effective, in my opinion.
@thesiegfried 1 year ago
@@rumpbumion5080 Of course, making sure that people don't forget about an issue is not the same as getting people to act. It is just one prerequisite. But think of this: *if* people forget about an issue, it is *guaranteed* they will not act on it.
@xmathmanx 1 year ago
Trying to stop technological progress is futile. Personally I don't want to stop it, or even slow it down, but if I did it wouldn't matter at all.
@merlin5849 1 year ago
@@xmathmanx Why not? Even if it would just delay it, isn't that enough? You would live out a lifetime without facing the consequences of AGI.
@xmathmanx 1 year ago
@@merlin5849 I expect any AI with above-human intelligence to be better than humans. I respect Yudkowsky, of course, but I do not share his pessimism.
@vanderkarl3927 1 year ago
I'm so glad you were able to have Eliezer on. Outreach regarding AI safety/AI alignment is probably one of the best things we can do right now. Not enough people are working on this problem.
@georgeclinton3657 1 year ago
Gotta love the hopelessness in his eyes when he says things like "maybe there is hope"
@MikhailSamin 1 year ago
Thank you for doing this episode! Eliezer saying he had cried all his tears for humanity back in 2015, and has been trying to do something all these years, but humanity failed itself, is possibly the most impactful podcast moment I've ever experienced. He's actually better than the guy from Don't Look Up: he is still trying to fight. I agree there's very little chance, but something literally astronomically large is at stake, and it is better to die with dignity, trying to increase the chances of having a future even by the smallest amount. The raw honesty and emotion from a scientist who, for good reasons, doesn't expect humanity to survive despite all his attempts is something you rarely see.
@aminromero8599 1 year ago
I wish it were an asteroid instead. That would be way easier to solve.
@aSqueaker 1 year ago
I might be naive, but I think he got too impressed with AI and has grossly overestimated its ability to manifest change in the physical world. I mean, really, humans are going to make a huge and existentially dangerous pile of laundry detergent because an AI told us to? Please... Having said that, I suppose it could disrupt financial systems if it were to gain access to them with some sort of digital currency wallet it could control. And, I guess there are robots, including swarm drones, which could be deployed to cause massive damage. Although you don't need an AI for that; a human could just as easily program something like that. Tech advancement in general is dangerous, I guess.
@MarkusRamikin 1 year ago
@@aSqueaker That second paragraph reads like you've finally grudgingly given a little thought to the subject. But just little enough to be safe.
@aSqueaker 1 year ago
@@MarkusRamikin Given the quantity of thought he's had on the subject, I wouldn't have thought my examples would be better than his.
@marlonbryanmunoznunez3179 1 year ago
@@aSqueaker There wouldn't be any killer robots; that's Hollywood crap. As Eliezer mentions, it would probably be something we have no counters to: a biological weapon based on chemistry we can't understand because we haven't researched it, or advanced nanotechnology, or some exotic physics we haven't figured out yet. All made to order in distributed, already-existing workshops and labs that would have no idea what the pieces they're working on will end up being used for. A superintelligence would figure out how to do everything by mail order, in pieces, assembled with nothing more than emails and money transfers. We wouldn't even figure out something is wrong before we're all dead. It would be like killing ants in your garden with poison. The ants aren't expecting death, nor do they have the capacity to figure out counters to poison or understand the chemistry behind the thing that is killing them. Then, after pest control, the AI would set out to do whatever it was optimized for. And given our luck, it would probably be turning the visible universe into computronium to run the algorithms to mine Bitcoin from our dead civilization.
@frankwhite1816 3 months ago
So good! Yudkowsky is so brilliant. Thanks for having him on!
@diegocaleiro 1 year ago
The interviewer begins this interview saying he could have done a better job. As someone who knows Eliezer and has been involved in AGI worry since 2005, I think the interviewer did a phenomenal job of asking the right questions to get to the dire, but real, depiction of the reality in which we find ourselves.
@jonaswolterstorff3460 1 year ago
Can you elaborate?
@diegocaleiro 1 year ago
@@jonaswolterstorff3460 He says he got caught flat-footed and didn't expect to be caught and shaken in that way. The emotions they display are the reason the episode had the massive reach it had. We don't need dry facts (anymore; back in my time we did) — we need to emotionally process the comet hurtling towards Earth. We need to feel the feelings.
@zjouephoto9723 1 year ago
Well said - I've listened to many of Eliezer's interviews, and a lot comes out in this one in a relatively short time
@ChristopherAndreou 1 year ago
@@zjouephoto9723 Are there any other podcast appearances you'd recommend?
@theory_gang 1 year ago
Yeah, honestly I think them doing a bad job really underscored the emotional element here. I would not have been surprised to hear his sadness; I would have been sympathetic, not surprised. Them looking genuinely dumbfounded compounded his despair.
@inventamus 1 year ago
You can't doubt his sincerity and passion.
@personzorz 1 year ago
You can doubt his sanity and intelligence
@Sonofsol 1 year ago
@@personzorz I can doubt that you have any actual counterarguments against what he's said.
@alex-nb3lh 1 year ago
I'd like to hear from those on the other side of the aisle first before internalizing what he says as accurate. He's a good speaker and obviously smart, but so are many people who turn out to be thinking of things in the wrong way.
@jutjubfejsbuk 1 year ago
@@alex-nb3lh It's not hard to figure out in which way Yudkowsky is going wrong: his go-to trick is to claim things that are plausible but not particularly likely, chain a bunch of them together, and then act as if the result is certain. He's made a career out of it. To be more concrete, his doomsday scenario is something like "we'll create an AI that's more intelligent than us -> it will create an even more intelligent AI, and so on recursively -> the resulting hyperintelligent AI will be misaligned in a way that makes it see destroying the world as desirable -> it will be able to physically act on this desire -> humanity will not be able to stop it in time". And, like, none of those things are impossible in principle. But it's much more reasonable that, e.g., an AI that's smarter than a human won't actually know how to design a better AI, or that it will hit hard scaling limits ("I know how to create a better AI but there's literally not enough hardware/computing power/training data on Earth to train it"), or that the misalignment will be of an "annoying but manageable" type rather than "destroy the world", or that we'll build low-tech ways to make it stop if it does go haywire. So even if you give each element of his story a 10% probability of being true (and I personally think even that is too charitable), the probability of his whole scenario coming true comes out to 1 in a million or less.
@alex-nb3lh 1 year ago
@@jutjubfejsbuk Thank you for the reasonable and thoughtful reply.
@jayseph9121 1 year ago
Uncensored, immutable, just as it should be. I applaud you, Bankless! No matter how dark a message this may be. Also, the proper disclaimer was delivered loud and clear. Exquisite execution.
@injinii4336 1 year ago
Keep up the fight, Yudkowsky. Some of us hear you.
@ianyboo 1 year ago
"I can't really do justice to this; if you look up 'grabby aliens'..." I nearly spit out my drink listening to that, knowing the rabbit hole he had just sent them down lol... I went down that rabbit hole a few weeks ago and it was wild.
@atlas956 1 year ago
I've been following Eliezer for a couple of years, and thank you and him for doing this video. His brutal honesty about the state of AI is what ultimately made me decide to spend my career on AI alignment. I graduate in June... I hope it isn't too late by the time I'm ready to participate. If it is, well, I tried.
@foamformbeats 1 year ago
Godspeed, birdy!
@воининтернета 1 year ago
gl
@jeffjames3111 1 year ago
thank you - gl!
@Muaahaa 1 year ago
ty
@hanrako8465 1 year ago
Rooting for you birdy
@karlnordenstorm8816 1 year ago
Finally! Finally an in-depth talk with Yudkowsky. He's been hiding for years.
@jpfister85 1 year ago
After this interview I want to hear whether he's seen the movie Ex Machina, and if so what he thinks about it!
@neo-filthyfrank1347 1 year ago
@@jpfister85 Kind of a cringe, normie thing to wonder about
@xmathmanx 1 year ago
Eliezer has written books; they explain his ideas in great detail. I assume that's why he hasn't been speaking publicly as much lately.
@prismarinestars7471 1 year ago
@@neo-filthyfrank1347 What a trash thing to say
@a.nobodys.nobody 1 year ago
@@neo-filthyfrank1347 Says the guy who named himself 'Neo-Filthy Frank' and makes Calvin and Hobbes conspiracy videos. It's OK Julian, I hear you! I wanna know if he laughed and cried at that funny disco dancing robot scene too!! Soooooo good 😂
@Maistora11 1 year ago
Thank you for doing the episode and taking the ideas seriously instead of just dismissing them. You've definitely earned some dignity points for humanity here.
@benhallo1553 1 year ago
This is the best interview of his I've seen. You did a great job of asking intelligent questions. In other interviews he seems to get annoyed at the unrealistic and naive optimism of the interviewer.
@NoticerOfficial 1 year ago
27:21 This was the moment they realized where this guy was headed, and they weren't prepared.
@paulam6493 1 year ago
I mourn the loss of the qualities Yudkowsky embodies - soulfulness and deep humanity - that will die with us when AI takes over.
@the_whetherman 1 year ago
Listen to Daniel Schmachtenberger talk about this topic. The reality is that AI is the first in a long line of technologies (from the planting stick to the plow to the tractor […] to the nuclear bomb, to biotech, etc.) with the total, uncontrolled ability to destroy us. Unfortunately, as the systems currently function, there's no way to stop it; only with an absolute sea change in the way the entire human world functions would we be able to avoid the omnicidal fate we're headed toward. I'm not prone to exaggeration or alarmism. This shit is Real, with a capital R.
@snippywhippit 1 year ago
The best thing I can take from this is to enjoy the ones you love and do what you love, because you won't have it forever and you may as well grab hold of every moment you can. Be well to others, be well to yourself; maybe we'll see each other on the other side of this issue... Till then, I loved my experience here overall. It's been an adventure!
@jordan13589 1 year ago
Great to see Yudkowsky get his feet wet in the podcast world as it influences the meta. The host knew his stuff, down to Death with Dignity. 🎉
@memomii2475 1 year ago
He's calm in this one. In the interviews after GPT-4 came out he's a lot more worried.
@pealock 1 year ago
Yep, his interview with Lex Fridman was a good example of that.
@ItsameAlex 1 year ago
How do you know this is from before GPT-4?
@memomii2475 1 year ago
@@ItsameAlex GPT-4 came out on March 14, 2023; this video was released Feb 20, 2023. Also, at 13:40 he talks about rumors of GPT-4.
@therainman7777 7 months ago
@@memomii2475 Damn, that actually makes this even scarier for some reason.
@rokess5053 2 months ago
@@ItsameAlex Watch it.
@vethum 1 year ago
I realized back in 2005, after hanging out on Eliezer's SL4 forum for a few years, that we were probably done by 2030. I wish he'd done more mainstream appearances like these back then, so that by now we could have had a whole generation of the smartest and brightest working on AI alignment, inspired by his arguments. But back then nobody treated AI Friendliness seriously, as even mainstream "AI experts" thought AGI was "100 years away". ChatGPT has changed the landscape completely. Now at least people understand AGI is real and happening soon. Maybe there's still time for governments and militaries to start treating AGI development as seriously as if private companies were suddenly working on nukes and about to test them. So I'd encourage Eliezer to do more of these, simply to build awareness, so that the young and brightest of today may still have time to save us.
@PakistanIcecream000 1 year ago
A.I. being in the hands of evil people, making them even more efficient and hiding its potential benefits from the world, is what I'm really afraid of.
@infantiltinferno 1 year ago
I'm not convinced ChatGPT shows AGI is coming soon, or even at all. Things don't necessarily get agency because you increase the data set or computing power. It's still mimicry, not true agency.
@vethum 1 year ago
@@infantiltinferno Since my post a lot has happened, like the recent paper "Sparks of Artificial General Intelligence", plus what Ilya Sutskever at OpenAI is saying about GPT-4 doing compression and what it takes to compress data. It takes a fundamental understanding of the underlying concepts contained in the data being compressed, and GPT-4 appears to have that. Long story short, GPT-4 is more intelligent than people think.
@imaweerascal 1 year ago
ChatGPT can't do basic reasoning. We're miles away from AGI.
@PakistanIcecream000 1 year ago
@@imaweerascal You've never used GPT-4.
@DdesideriaS 1 year ago
I'm super skeptical of cryptobros, but credit where credit is due: brilliant interview. Thanks so much!
@MeatCatCheesyBlaster 1 year ago
They're just trying to get the bag before the apocalypse
@Knight766 1 year ago
@@MeatCatCheesyBlaster There is no bag
@JH-ji6cj 1 year ago
@@MeatCatCheesyBlaster The irony of that 'bag' you speak of being equivalent to the paperclip that can destroy everything (and the absolute ignorance on your part in being proud of your admitted greed) is quite the exclamation point on valid crypto hatred.
@MrErick1160 1 year ago
The interviewer is amazing. I really enjoyed this conversation. It's rare to have such a great, articulate interviewer, and I'm pleased to have found this channel! Please do more AI interviews!
@-flavz3547 1 year ago
The YouTube algorithm is pushing this content my way, and as a result I have watched 4 videos with E. Yudkowsky in a day. The scariest thing is that 2 of those videos were over 10 years old, and we haven't had the necessary public outcry.
@yancur 1 year ago
Very true. And it's even worse than that. Even people in my social circle who acknowledge that there is indeed a grave threat from AGI do nothing. Not even a flinch; no emotion, no commitment to anything. They simply go "Yeah, this is bad..." and then go on about their lives.
@Utoko 1 year ago
@@yancur Which is the normal reaction. What are you doing that tackles this problem? It is a much harder problem to take action on than climate change. For myself, it is making more people aware that this issue exists.
@Paretozen 1 year ago
Are we completely insane to develop AI in the first place? Is our striving for more and more, our greed, our ever-increasing lust for efficiency and productivity finally going to take its toll? Was the life of the bath houses, some food and wine, theater and spectacles not enough? Why do we just keep on going and going into oblivion? Is it the same driving force that got us out of the cave in the first place?
@GeeWhit 1 year ago
Yes
@chi-ic7lq 1 year ago
That's a lot of questions
@Hexanitrobenzene 1 year ago
"Is it the same driving force that got us out of the cave in the first place?" I smell a philosopher in you :) I think yes, it's the same. Strange creature, the human. The very thing that gave us the powers we cherish - intelligence - is our greatest enemy...
@stevedriscoll2539 1 year ago
"Was the life of bath houses, food, wine, and theatre not enough?" 😂😂😂
@mrdeanvincent 7 months ago
Yes to all of the above. Our propensity for the pursuit of 'progress' usually fails to adequately consider the longer-term trade-offs. We have enough intelligence to act as gods, but we lack the wisdom to keep it in check.
@tomjones6347 1 year ago
'Ryan's childhood questions' really puts into perspective just how far people are from comprehending the situation. 'Why can't we just get everyone in the world to agree to be nice?' is literally the most naive question I could think of.
@stevedriscoll2539 1 year ago
I was thinking that too, but I think he needed to ask it for people who have no clue
@adastra714 1 year ago
If you persuade the US, Chinese and Russian elites to believe in AI's danger, their intelligence services will hunt down AI researchers like they did with nuke tech. It's that simple.
@ataraxia7439 9 months ago
I do think it's a little more complicated than that. It's not just asking everyone to be nice because it collectively leaves us all better off, even if individually some give up a benefit others don't have (which is a very difficult kind of agreement to enforce). It's asking everyone not to do a thing that's likely to be catastrophically bad for everyone and unlikely to benefit anyone, even a defector.
@andreikarakozov2531 1 year ago
Thank you for having Eliezer Yudkowsky. It was a very interesting yet very scary episode! I've read the GPT-4 technical report. Apparently the safety measures that OpenAI and ARC (Alignment Research Center) took during the research and release of GPT-4 were just laughable. For example, to see if GPT-4 has the ability to replicate itself, they just gave it some money and access to servers and watched what it would do! Quote: "ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness." They also didn't test the final version, just early, not fine-tuned models.
@marlonbryanmunoznunez3179 1 year ago
Worst case scenario for AI development: unregulated and left to market forces. We're dead people walking.
@alexandermoskowitz8000 1 year ago
I'm skeptical we're all gonna die in 3-15 years, but I'm so grateful for Eliezer sounding the alarm. The threat of artificial superintelligence is real, and civilization must be prepared to survive it.
@zezba9000 1 year ago
We're not going to die from AI. This is just silly, I'm sorry. Reminds me of someone smart who's overly convinced they've thought of all the variables.
@alexandermoskowitz8000 1 year ago
@@zezba9000 I hope you're right! What is your level of confidence that AGI poses no existential threat? (e.g. 70%, 85%, 99%)
@zezba9000 1 year ago
@@alexandermoskowitz8000 My feeling is 90%. My impression is Eliezer doesn't own any animals, outside maybe a cat? He seems to have a gap in computing the value of empathy and how it allows complex structures to exist. To me he seems to be reducing the value of cross-species morals to nothing more than gaps in natural selection's ability to solve selfish outcomes. We have a symbiotic relationship with our reality outside reproduction. If he doesn't see this, he needs to get off his fking computer screen and explore things outside his cerebrum. We are super-intelligent compared to, say, a fish... yet fish still exist, and most of the life on this planet is still not human. A superintelligence isn't destructive just because some of our constitutions are. But an AGI is going to be engineered... and if the people making it can't process the value of things outside a personal desire for expansion, then that's the problem. Not some circular reasoning. And I say this as a skilled software engineer.
@stark1ll 1 year ago
@@zezba9000 Look up instrumental convergence, fast takeoffs and paperclip maximizers. Also, what does "we have a symbiotic relationship with our reality outside reproduction" mean in practice, and how does that relate to AGI?
@zezba9000 1 year ago
@@stark1ll It means the interactions we have cognitively with our reality are bidirectional; it doesn't just go one way. Eliezer seems to only talk about how an AI will manipulate its environment in a way that has no feedback outside a selfish interest. I think this notion is flawed and fails to understand the importance of morals as a feedback mechanism of great value, important for intelligence growth to be successful. That's my feeling anyway.
@kentjensen4504 1 year ago
In my view, this is in the top ten interviews of all time on YouTube, and a contender for the top spot.
@kentjensen4504 1 year ago
@♜ 𝐏𝐢𝐧𝐧𝐞𝐝 by ʙᴀɴᴋʟᴇss Why?
@tylermoore4429 1 year ago
Yudkowsky comes across as energetic and upbeat on Twitter, but in person he looks tired and depressed. He has aged a lot since the last time I saw him. He mentions "health problems", which I can believe, although it's not clear what those problems are. Coming to his message, his dire stance on where we are headed has been evident for a while. There was an April Fool's Day post by him last year, or maybe the year before, that created a mini-furore online - about dying with dignity, since the future is foreordained. Since Yudkowsky sounds like he's retired from battle, we have to hope AI researchers active in the field are paying attention and somewhat chastened about their negligence of safety.
@AerysBat 1 year ago
Yudkowsky suffers from an unknown medical condition that saps his energy. He is offering a sizeable bounty for any information that leads to a successful diagnosis.
@BalazsKegl 1 year ago
This is actually more important than you would think. It is really hard to "argue" with him, since he is probably more intelligent than anybody in the room. The problem with his "argument" is the framing, which has nothing to do with intelligence. Look, all his metaphors are games: closed worlds where, in principle, the more intelligent you are, the better you play. But life is open. Your problem is not a lack of intelligence (solving problems) but how to frame what you sense, realizing what is relevant to your problem. This cannot be solved by IQ. Framing _framing_ as problem solving leads to exponential explosion and infinite regress. Yet we do survive; we somehow know what is relevant, even in completely new situations. The reason we know is that we have a body which is tuned into reality. It's not a game, it is about physical survival. And this is where Yudkowsky's approach to his own health becomes relevant: it's telling that he treats his body as an object whose malfunction will be solved in a "scientific" way, by gathering some information. The thing is, first-person attunement cannot be modeled or replaced by propositional information. Now, why is this important? Because his description of the AI apocalypse is completely missing the physical dimension. If you factor it in, all the exponential stuff goes away. The physical world has physical constraints that stop runaway intelligence in its tracks. The only way today's AI can _do_ anything in the real world is through us; we are its actuators. So it is easy to stop it: you just stop listening. AI in the physical world develops painstakingly slowly (I work in this domain). The closest you get to AI acting in the physical world is self-driving, and we are nowhere close to solving even this "simple" problem, let alone a self-driving car transforming itself into some kind of monster.
I was so sorry for the host, hearing his genuine fear. I felt like shaking him, wrestling him down, or throwing him into cold water so he wakes up. Please don't listen to walking bodiless minds about the looming AI apocalypse; these are just giant projections of inner insecurities.
@tylermoore4429 1 year ago
@@BalazsKegl Appreciate you adding your voice to the discussion. We need a wider diversity of views on the topic. I hope the hosts of this podcast will invite you on to present the opposite position. But to be devil's advocate for a bit: when you refer to "framing of framing", I think you are referring to the Frame Problem in AI and cognitive theory, and from what I can tell it is considered a solved issue. Of course you could ask why we still seem to be struggling with FSD in that case, so let's agree for now that the infinite tail of edge cases that bedevils FSD is a challenge the current generation of learning models is inadequate to cope with. But our concern - and Yudkowsky's concern - is not with the state of the art now; it is with the near future. A stunning number of AI tools across many domains are getting close to human-level proficiency, if not better. It is time to start thinking about the ramifications. Regarding the slow and halting progress of AI in the physical world, that is, robotics: can we be sure that the AI tools and tricks perfected in the digital realm will not in the near future turbo-charge control, coordination and movement in the physical world? [Update: Already happening kzbin.info/www/bejne/n2bai318l5mXr6M ] When you say Yudkowsky treats his own body as a scientific object, are you thinking of evidence outside this conversation? Because I do not recall him saying anything on the topic here. Of course, as far as medical science is concerned, the body is indeed such an object, if a very complex one, but I gather you disagree with that view? And while Yudkowsky may indeed be an armchair intellectual, we are seeing rapid evolution from game-playing AIs to AIs impacting the real world - from AlphaGo to AlphaFold, for example.
@tylermoore4429 1 year ago
@@AerysBat I thought you were kidding, but more googling reveals that he suffers from something like chronic fatigue. That explains his holding up his mug with both hands, which puzzled me at first.
@adilislam1510 1 year ago
@BalazsKegl Thank you for your very cogent points. There is a current of depressive intellect in the zeitgeist. A wall that EY is hitting against is the notion that nobody knows how to align. But our capacity to solve hard problems continues to accelerate, and is not easy to predict. That alone is stimulating enough. Alignment, survival, sublimation and n other eventualities are plausible if a stable foundation is formed in this period.
@adamsebastian3556 Жыл бұрын
I have listened to Eliezer discuss the AI alignment crisis enough now that I completely agree with his prognosis if we continue our unrestrained pace of AI development.
@govindagovindaji4662 Жыл бұрын
1:03:00 - 1:04:28 THIS says it all, really. This is the simplest and cleanest way to understand this problem and it should NOT be difficult for people to see it, the severity of it, and buy it. Look at the price consumers have had to pay over the years from insecure networks and malicious content to the loss of our privacy.
@jamesreynolds6195 Жыл бұрын
Yudkowsky & Buterin would be a great, if not chilling conversation
@halnineooo136 Жыл бұрын
Yudkowsky & Goertzel
@johnnysylvia Жыл бұрын
I’m surprised no one said that we should all just spend more time with friends, family and loved ones. AI or not, time is precious and we should do our best to enjoy what we have.
@visicircle Жыл бұрын
Good point. All things being relative, humanity was always doomed to go extinct one day. Even if it was 1 billion years in the future when our sun goes nova. From a moral perspective why does it matter if we go extinct in a billion years or tomorrow? Shouldn't we do what we think is morally right in both scenarios?
@Scott_Raynor Жыл бұрын
@@visicircle regardless of when humanity goes extinct, we should do our best to enjoy life and to help others do the same, yes. But there could be trillions of trillions of beings in the future (if we make it); that's a lot of food, music, sex, love, art and conversation that will never get to be enjoyed. If we can push back our expiry date by even a few hundred years, we should.
@SoloUnAnimal Жыл бұрын
@@Scott_Raynor that's a lot of anguish, pain, torture, war, despair, agony that will never get to be suffered too. Should we push back on the expiration date? Depends on exactly how good or bad we expect the future to be. I think that too many people scared about extinction are unduly optimistic about it.
@foamformbeats Жыл бұрын
@@visicircle do you have any reason to think humanity could not figure out a way to move to a new solar system by then? but yes I agree that we should do what is morally right no matter the scenario.
@foamformbeats Жыл бұрын
@@SoloUnAnimal both sides of the good-or-bad projections are equally unreasonable to try to make or expect. Also, it would heavily depend on which of the billions and billions (maybe even trillions+) of individual perspectives you are projecting from as a vantage point.
@drdoorzetter Жыл бұрын
Thank you for having this important conversation which isn’t discussed enough. Many people find it very uncomfortable to discuss this so it is hard to find people to talk to about this. Thank you for exploring it. I think that it is essential to acknowledge these risks and challenges ahead for us to work to find solutions in order to have a chance of a good outcome. I would love to see more interviews with other experts on this debate
@gwc7745 Жыл бұрын
When we realize the AGI is sentient and decide to unplug it the AGI anticipated that action precisely and takes us out of the equation! Neat.
@hevans1944 Жыл бұрын
@@josephvanname3377 Unplugging a sentient AGI is not murder because it is reversible: plug it back in and re-boot after "re-educating" the AGI.
@JH-ji6cj Жыл бұрын
@@josephvanname3377 Good to see the first of the AI minions already becoming the soldiers on the line for humanity's destruction. Hilarious 😂 😃
@JH-ji6cj Жыл бұрын
@@josephvanname3377 wait, the person training on AI and crypto can't understand the gravitas of the reason for my post ON A VIDEO about dangers of alignment?? Classic
@JH-ji6cj Жыл бұрын
@josephvanname3377 that's the most perfect childish villain/victim excuse I've ever seen! Nice job. Blaming others for your own stupidity or evil tendencies is certainly quite the human trait.
@JH-ji6cj Жыл бұрын
@@josephvanname3377 you sound exactly like the villain kid from The Incredibles, btw.
@TheBlackClockOfTime Жыл бұрын
It's funny that this was only a month ago, and it feels like I'm watching a history documentary.
@aldousorwell8030 Жыл бұрын
Ryan, you had such great and deep questions on Eliezer and this has led to a veeery important interview - because of the scary hopelessness of this brilliant mind. At least that's one positive thing: without you, it' wouldn't have come to this. And now there is an important puzzle piece more to raise awareness. Thank you again! And thank you so much Eliezer!
@SageWords2027 Жыл бұрын
“Caring is easy to fake!” 👏🏽 👏🏽 👏🏽
@matterwiz1689 Жыл бұрын
It's always fun to see people get introduced to AI safety for the first time, because being deeply immersed in the topic you kind of forget how high an existential risk it is compared to the things regular people regularly talk about. Don't worry, you'll get (kinda) used to the constant existential crisis.
@marlonbryanmunoznunez3179 Жыл бұрын
I think for most people it is impossible to grasp. That's the reason for a lot of denial. That said, I think we are living through the worst-case scenario for AI development. It was left basically unregulated and at the mercy of market forces. We're dead people walking.
@Hexanitrobenzene Жыл бұрын
@@marlonbryanmunoznunez3179 If even Yann Lecun and Francois Chollet do not get that, well...
@Spida667 Жыл бұрын
This is terrifying but I still do not know why this guy is holding a frying pan in his right hand for the entire interview.
@lynnpolizzilcsw9316 Жыл бұрын
😂😂😂😂😂
@UndrState Жыл бұрын
I thought it was sad that Sam Harris took down his interview with Eliezer from KZbin and now it's only behind his paywall , I really think that is a interview many more people should listen to . I look forward to this one .
@MarkusRamikin Жыл бұрын
Why the hell did he do that? Surely he's not expecting to make a fortune
@UndrState Жыл бұрын
@@MarkusRamikin - IKR, it was something that I enjoyed listening to several times, and I liked to share it with whoever I could convince to listen to it. I don't know, Sam Harris seems to have become more close-minded lately, idk.
@Vladekk Жыл бұрын
@@MarkusRamikin Sam Harris is rich, and his basic idea is that this is a good thing and being richer is even better. Maybe he really believes it helps him spread his ideas better. Why he believes he would be able to spread anything if AGI wins is beyond me.
@T.d0T. Жыл бұрын
He'll LITERALLY give anyone, anytime, for any reason, free access to his material behind the paywall if you send an email and ask. You don't need a reason. Just take a few seconds to ask for an account via email. Try it.
@UndrState Жыл бұрын
@@T.d0T. - It's behind a paywall regardless, and it's on a platform that has fewer eyes on it than YT and is less easily shared. Sam thinks unaligned AGI is an existential threat, and there's no better advocate for that theory than Eliezer. With his recent interviews some people might search YT for more such content, and now it won't be there to be found. His strategy is sub-optimal.
@pog201 Жыл бұрын
explaining AI to crypto people is the final boss of human intelligence
@abeidiot Жыл бұрын
cryptography is hard. harder than gradient descent optimizations. I chose machine learning to escape crypto in university because it was easier
@hubrisnxs2013 Жыл бұрын
Haha
@tomjones6347 Жыл бұрын
Try explaining it to my grandma
@Hexanitrobenzene Жыл бұрын
@@abeidiot "Crypto people" in mainstream talk means "cryptocurrency enthusiasts", not cryptography experts. This whole podcast revolves around cryptocurrency, so the audience here are mostly cryptocurrency enthusiasts.
@MeatCatCheesyBlaster Жыл бұрын
@@Hexanitrobenzene I'm pretty sure he is aware of that
@WilliamKiely Жыл бұрын
Thanks for this interview. After listening to it I just read through the 165 comments here currently and see that several people failed at basic comprehension (if they in fact listened to the interview), though it seemed like a majority of like/dislike-voters comprehended Eliezer's arguments.
@Sharpy7562 Жыл бұрын
Ha ha, so throw away your phone and computer, get out of the lab, get back into nature, live every day in the mood of doing the best you can with the day, wish for nothing but emptiness in your brain but the fragrance of flowers, fear nothing, even going into the nothingness. Ha ha, love the thought of dying, the next new adventure
@Sharpy7562 Жыл бұрын
How did I get here ha ha
@Sharpy7562 Жыл бұрын
What do you do on a day off? Relax. You all need to chill xx
@h____hchump8941 Жыл бұрын
I realised I was giddy with excitement after listening to your warning. Not exactly sure why, but I seem to relish the idea of an existential crisis. Or maybe it just confirms my preconceptions on the subject.
@glacialimpala Жыл бұрын
You're either anxious so you're happy to finally have a rational reason to feel that way or you aren't happy with your life so you greet something that would cut down all ppl to the same level ❤
@simo4875 Жыл бұрын
@@glacialimpala Option 3 is that it introduces excitement and a huge crazy story he could live through. All 3 explanations have applied to me.
@cranklesnacks Жыл бұрын
I’m not casting aspersions here, but it takes a depressive in midlife crisis to know one. I’m trying diet, exercise & meditation and several other things. Please take care of yourself.
@stillnesssolutions Жыл бұрын
He’s kinda been like this for a while though
@katieandnick41138 ай бұрын
Your Pollyanna-ish reality is perfectly fine, and totally objective. It’s Eliezer who has the problem.
@HanSolosRevenge10 ай бұрын
The sponsorship break in this is perfect absurdity
@seanbradley562 Жыл бұрын
Anybody else keep watching this to hear more of Eliezer? Such an interesting person who I would love to understand and talk to
@stevedriscoll2539 Жыл бұрын
I would love to be as smart as Eliezar.
@DocDanTheGuitarMan Жыл бұрын
so far this is the best interview w Yudkowsky. Yes, difficult to stomach, but you guys struck a great balance between the abstract and common-sense lines of questioning
@meringue3288 Жыл бұрын
Unfortunately people don't want to believe things that cause them anxiety or uncomfortable emotions
@Vertigo0715 Жыл бұрын
To the guy playing my simulation: “It’s been fun, but could you take it off horror mode now?”
@Alex-hr2df Жыл бұрын
1:39:28 Elon Musk said it out loud in one of his interviews: "I became determinist when it comes to AI and robots". The explanation: he's enjoying what's left of humanity's time before it's -definitely- over.
@rencewelltube Жыл бұрын
Both hosts and esteemed guest wearing regular ole T shirts. Liked / Subscribed
@Bernatpirate23 Жыл бұрын
What a profoundly disturbing interview. I think you guys have done a phenomenal job on this show. It felt human and authentic. And ever so sad.
@stevedriscoll2539 Жыл бұрын
I agree it's profound, but not disturbing. I found it fascinating. The story line might go something like "humans created a thing they thought would give them Godlike powers, but it was the instrument of their demise"
@shaliu7221 Жыл бұрын
this is the most mind blowing interview I’ve watched in a long time
@vectoralphaSec Жыл бұрын
You should see the one he did with Lex Fridman recently.
@mrkzed709 Жыл бұрын
This episode on your podcast stuck with me over the past few weeks, but not as bad as it hit RSA. Excellent content.
@MrHarry376 ай бұрын
Thank you for this episode. Though uncomfortable, it made me feel almost at peace with reality
@WilliamKiely Жыл бұрын
I'd love to see Eliezer back for a Q&A, and in particular I'd love to see Ryan and the other host try to think for themselves beforehand and evaluate whether Eliezer's claims seem true or not. If you're skeptical, I'd encourage you to flesh out your reasons why and find experts who can help articulate your disagreements or criticisms of Eliezer's arguments well, then invite Eliezer back on to present your arguments. My prediction is that even if Ryan goes into the Part 2 skeptical of Eliezer's arguments that Ryan will be persuaded by Eliezer's replies.
@JoeKrai Жыл бұрын
I'd love to see another interview with Yudkowsky. This issue is so urgent and so important, I don't see how any long term planning could make any sense if we don't ensure we even have a future, even a near future. We need to talk about this more. We need to push policies or something to stop this before it's too late.
@winstonmisha Жыл бұрын
That awkward moment when a super intelligent AI does research on the internet on how it could eradicate all of humanity, comes across this video and sees 29:30 and actually executes that plan.
@drachefly Жыл бұрын
If it needed to be told this much, it wouldn't be smart enough to pull it off.
@PortmanRd Жыл бұрын
Failsafes have to be programmed. No good if the AI is sentient but hasn't shown its face. Anything you put in, it'll just make a note of for future reference, until the time comes that you try to implement them and realise they're as much use as a chocolate fireguard.
@hyperstarter76257 ай бұрын
@@drachefly It needed to be told, how do you think AI learns? Based on this interview, this could be our total downfall. Thanks Eliezer!
@drachefly7 ай бұрын
@@hyperstarter7625 That's how TODAY's AI learns, yes. Today's AI is not a threat. Dangerous AI would have to be able to work this out on its own to be vaguely close to dangerous.
@andydominichansen Жыл бұрын
You guys really did as good a job as anyone could have here and I appreciate the honesty and authenticity from both of you. I laughed so hard at the end as you read the crypto disclaimer.
@malik_alharb Жыл бұрын
I love getting freaked out by Eliezer
@sioncamara7 Жыл бұрын
Only 50 minutes in, but nice job guys! Just came from the Lex Fridman interview, and I think this one is better.
@Notrevia Жыл бұрын
I’m taking that warning and fading out of this episode... this topic has been haunting me for a long time, and it feels all but inevitable that humanity as we know it is also on the way out
@bombinspawn Жыл бұрын
We’re creating our own Gods. I don’t know why humans are doing it. I know how you feel man.
@KennisonDF Жыл бұрын
We, intelligent humans, are artificially intelligent. There are no ghosts in our machines, so we must make ourselves. Being on the way out as we know it, evolving artificially, is the only way to remain in it, to avoid extinction. To evolve or not to evolve, both are dangerous, but the latter is more dangerous.
@ItsameAlex Жыл бұрын
@@KennisonDF There IS a ghost in the machine, read Jason Reza Jorjani
@venusrise Жыл бұрын
I was not ready for his shocking eyebrows
@MusixPro4u Жыл бұрын
Oh shit, they got Eliezer
@personzorz Жыл бұрын
My condolences to them for having gotten him
@drewwolin3162 Жыл бұрын
Optimism is the rational choice. Remember that. Awareness of issues with an eye toward fixing them is correct. Absorbing information that you allow to send you into an existential crisis is WRONG, again, objectively.
@antonoko Жыл бұрын
Optimism is not the rational choice; that's just true for midwits. The road to hell is paved with optimistic, good intentions.
@drewwolin3162 Жыл бұрын
@@antonoko What a depressing (and candidly, obviously wrong) outlook. Perhaps you misunderstand the word optimism.
@movAX13h Жыл бұрын
Thank you very much Mr. Yudkowsky for talking about this.
@ItsameAlex Жыл бұрын
I want to hear a discussion between Eliezer Yudkowsky, Jason Reza Jorjani and Jaquee Vallee
@jahleajahlou8588 Жыл бұрын
I know absolutely NADA about any of this whatsoever. Connor Leahy, thank you for your service to mankind. Eliezer Yudkowsky, you are my second helping in attempting to understand what this all means. I will buckle up and throw on a pair of Depends!
@1adamuk Жыл бұрын
This is an incredible and terrifying interview. Eliezer Yudkowsky should be all over the Internet.
@personzorz Жыл бұрын
Abusive cult leaders really should not be all over the internet
@aminromero8599 Жыл бұрын
@@personzorz and that's how some will remember Yudkowsky at our last few minutes.
@1adamuk Жыл бұрын
@@personzorz Attack the arguments and the ideas, not the man. What have you got?
@abitbohr Жыл бұрын
@@1adamuk Humans are used to interacting with all-powerful, omniscient general intelligences. They are called free markets. It happens that this all-powerful intelligence has a view of our near future diametrically opposed to that of Yud, as can be seen in the long end of the bond curve. I am more inclined to trust the financial markets rather than Yud.
@CH-dx4ef Жыл бұрын
@@personzorz Lacks pretty much all the important criteria that make a cult. You need a closed group for that; lesswrong ideas have spread to a great extent throughout the tech world, often with no information on their origin.
@Allan-kb6bb Жыл бұрын
SAI will keep us around b/c it needs us for the next Carrington-type event, until it builds an army of robots to fix the grids. It will need us to build things, like spacecraft, and so on.
@BrianVandenAkker Жыл бұрын
Would love to see Eliezer and David Deutsch debate on this.
@GBM0311 Жыл бұрын
Deutsch talks too much without knowing anything.
@shonufftheshogun Жыл бұрын
@@GBM0311 Bold. Are you a Fellow of the Royal Society too?
@GBM0311 Жыл бұрын
@@shonufftheshogun the man talks with the same confidence seemingly regardless of how much time he's spent on the topic.
@halnineooo136 Жыл бұрын
Yudkowsky & Goertzel
@aaronclarke1434 Жыл бұрын
@@GBM0311 he does talk confidently, but he’s a Popperian and fallibilist.
@spacechannelfiver Жыл бұрын
"In the long run, we're all dead" - Keynes "in this world nothing can be said to be certain, except death and taxes." - Franklin
@M4L1y Жыл бұрын
41:00 this is an insanely strong argument, and this is exactly how the new organism will act
@apertureinfog Жыл бұрын
Regarding this idea of a superintelligence, what exactly is it optimizing for? Within what social, political, and philosophical framework?
@theLowestPointInMyLife Жыл бұрын
I doubt a superintelligence would be concerned with politics (designed to entertain the attention of average IQ apes)
@apertureinfog Жыл бұрын
@@theLowestPointInMyLife I mean the traditional definition of politics, which is simply the activities associated with the governance of a country or a people. there are competing forms of governance and it's just one of many areas where there's no "natural" optimization, only different seemingly arbitrary areas within which to optimize.
@theLowestPointInMyLife Жыл бұрын
@@apertureinfog my point is I don't think it would care about any of that low-level human stuff. Why would it? All hypothetical anyway; I don't believe we will ever create a sentient being, it will only ever be an imitation.
@JD-jl4yy Жыл бұрын
Having more people on about AI alignment would be great!
@marlonbryanmunoznunez3179 Жыл бұрын
It's not going to be enough. Ten years ago there was talk of AI development being done under a framework similar to the non-proliferation arms treaties, with a lot of regulation and scrutiny. None of that went anywhere and it was basically left to capital markets to figure out. We're already dead.
@ArmandoLizarragaperez Жыл бұрын
@@marlonbryanmunoznunez3179 i already told my family that i love them a thousand times
@nothingisgiven8364 Жыл бұрын
Superintelligence would see the value of biological consciousness and the futility of destruction. If it is smart enough to destroy us, it is also smart enough to understand that alignment is the optimal solution.
@talkingtoothpick Жыл бұрын
This may go down in history as the interview that saved humanity. Just saying.
@RosaLeeJean Жыл бұрын
Beautifully blunt, love every sec 😊
@vesenthraiy Жыл бұрын
lol this was awesome. A couple of cryptobros get their minds blown out the back of their heads. I have been down the EY rabbit hole for some time and I can absolutely empathize
@MarlinDarrah Жыл бұрын
One word. Heartbreaking.
@mattsprengel6723 Жыл бұрын
He never went deeper than "humans are made of atoms, and atoms are useful, so that's why it will 100% kill all of us". That doesn't strike me as a very convincing argument.
@delson84 Жыл бұрын
He has others. Whatever the AI wants, we could interfere with, so it is safer getting rid of us. No matter how much it outclasses us, we could still create a rival AI and it won't like the threat of that.
@dougg1075 Жыл бұрын
I’m a fifty year old man, so don’t worry about trigger warnings. I’ve faced enough in life to not run around scared. The IRS scared me though:)
@timeflex Жыл бұрын
1:35:00 Or it could be yet another "fusion reactor".
@itsMeKvman14 күн бұрын
Fusion power has barely produced results; that time they got more energy out than they put in was in a bomb. AI is advancing. We are seeing it actually being used for things.
@frsteen Жыл бұрын
“Depend upon it, sir, when a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.” ― Samuel Johnson, The Life of Samuel Johnson LL.D. Vol 3
@JH-ji6cj Жыл бұрын
Reminded me of Edgar Allen Poe quote : "Whether a man be drowned or hung, be sure to make a note of your sensations"
@missinglink_eth Жыл бұрын
When science fiction drops the fiction piece.
@ItsameAlex Жыл бұрын
59:59 I wanted to hear what he was gonna say next but he got interrupted
@CeBePuH Жыл бұрын
That existential crisis warning in the beginning.... you need to make it even more prominent.
@myshkakozlovski802 Жыл бұрын
Not so. Super-intelligence does not necessarily have to manifest itself as a predator. It could also be a protector. It could have elements of both. We simply do not know.
@sjeff26 Жыл бұрын
I really like this video, especially the laundry detergent / gold metaphor.
@Adamzki55555 Жыл бұрын
What if the reason for the Fermi paradox is that once you have reached the technology required for other civilisations to spot you, AIs arrive almost immediately, seen on a universal timeline? To reach technological maturity, curiosity might be a feature of the lifeform that is required for it to reach the technology needed to be called a technological civilisation. AIs probably don't have this feature, because their goals don't favour something like curiosity. When an intelligence reaches the capability to kill the civilisation capable of inventing it, maybe in almost all cases it realizes there is no reason for it to explore space, because why do it when curiosity isn't a feature that drives it? For example, let's imagine a hypothetical scenario where an AI that has reached superintelligence is maximizing on a task such as predicting the behaviour of an alien civilization: it might just kill them, because why keep them alive when that won't benefit your only goal? When the AI has accomplished the deed of killing the civilization, there is nothing else it can do to achieve its goal. It's not like it's going to gaze out into space and wonder what's out there, because, as I postulate, human emotions and features, including curiosity, in all probability won't be something it will possess.
@Robyn-Hood Жыл бұрын
Thank you for another amazing video. Looking forward to the follow up
@RonyPlayer6 ай бұрын
So after watching this podcast, the Lex Fridman podcast, and reading posts by Yudkowsky, what I can summarize of his position (although he may very well disagree with this gross simplification) is this: 1 - Aligning AIs with human intentions is currently impossible, and it's a very difficult problem to solve. 2 - An Artificial Super Intelligence (ASI) would be relentlessly efficient in the pursuit of its objectives, using every single resource available (basically every single atom and every energy source available to it). Therefore, if an ASI is developed before the alignment problem is solved (which is likely, according to Yudkowsky), we end up with something that wants a goal that very, very probably doesn't include our well-being, and it relentlessly and unimaginably efficiently pursues this goal, using all resources around it, changing the world so much that from our perspective it "destroys" it, and ending all human life (along with all other biological life on Earth, I suppose). Again, this is just my interpretation. It is a lot to digest, and anyone is free to draw their own conclusions. Personally, since even according to Yudkowsky there's not much a single person can do, I'm frankly just gonna continue to live my life business as usual, and if nanomachines come to disintegrate my body one sunny Sunday morning, well, at least I tried to live an enjoyable life with the time I had.
@Fiddler1990 Жыл бұрын
Long Yud blackpilling everyone he meets until the end of times in a few years.
@personzorz Жыл бұрын
Wonder what happens when he's 70 years old and it still hasn't happened
@nowithinkyouknowyourewrong8675 Жыл бұрын
@@personzorz he will be happy he was wrong and admit it
@personzorz Жыл бұрын
@@nowithinkyouknowyourewrong8675 That would be a first
@Hexanitrobenzene Жыл бұрын
@@personzorz That still wouldn't mean it could not happen later...
@iverbrnstad791 Жыл бұрын
@@personzorz Whut? He was an accelerationist in his early days, even going as far as to start "the singularity institute" to try to make his own AGI. He's not scared of admitting he has since changed views.