Yeah, a very sad, stupid reaction from the audience, even if it was nervous laughter.
@andrzejagria1391 a year ago
They behave like trained monkeys, seriously. They've just picked up the flow of TED talks and know they're supposed to laugh at the punchlines, except in this talk the punchline is that we all die xD
@ForAnAngel a year ago
@@andrzejagria1391 Are you sure you're not an AI? You keep posting the same exact thing multiple times. A real person would be able to write something different.
@mav3818 a year ago
It's just like the movie "Don't Look Up"
@andrzejagria1391 a year ago
@@ForAnAngel would they care to, though?
@Bminutes a year ago
“Humanity is not taking this remotely seriously.” *Audience laughs*
@TheDAT573 a year ago
The audience is laughing. He isn't laughing; he is dead serious.
@andrzejagria1391 a year ago
They behave like trained monkeys, seriously. They've just picked up the flow of TED talks and know they're supposed to laugh at the punchlines, except in this talk the punchline is that we all die xD
@VintageYakyu a year ago
They're laughing because the vast majority of Americans are stupid. Decades of draconian cuts to public education and mental health services have turned us into a nation of ignorant morons. Poorly educated, allergic to reading, and devoid of critical thinking.
@SDTheUnfathomable a year ago
He seems pretty goofy in the Q&A tbh
@bepitan a year ago
@@andrzejagria1391 ...don't look up.
@young9534 a year ago
Might be nervous laughter from some. Or their minds are having trouble fully processing how serious this really is, and their only reaction is to laugh.
@EnigmaticEncounters420 a year ago
I keep getting 'Don't Look Up' vibes whenever the topic of the threat of AI comes up.
@spirti9591 a year ago
Definitely maestro sente, we're fucked
@mav3818 a year ago
I heard Max Tegmark say that during a recent podcast, so I quickly downloaded that film and OMG! Here I thought I was scared before I watched that movie. Anthropic just yesterday released Claude 2. They are all in a race for "victory". We're doomed....
@coffle1 a year ago
People are looking up and having discourse about it with him regularly on Twitter. This is more a case of "Bad news sells because the amygdala is always looking for something to fear." It's unfortunate that some of the opposing rhetoric doesn't get as much prominence in the media. No one wants to hear a critique of ideas when they're already set on thinking they're being strung along by the AI companies that contain the people with the counterarguments.
@vblaas246 a year ago
@@coffle1 I hear you. It is a tough cookie to stay 'chaotic neutral' on this one. Maybe THAT should be a prime directive for AGI 😂😂 Seriously though, the amygdala is a good point. Is it time to be courageous and accept and face the bigger picture: we are not in control of our human (and non-human!) species' future anymore, at all. Climate change havoc is here for a fact, and we need all the (artificial) brains we can get. We should NOT accelerate and brake at the same time! We have already failed as an intelligent monkey species, so we have nothing to lose, which should comfort our collective amygdala but not lead to despair or indifference.
@rcprod9631 a year ago
@@coffle1 Are you suggesting that what this gentleman is saying is rhetoric? Just looking for clarification of your post. Thanks.
@laurens-martens a year ago
The laughter feels misplaced.
@hansolowe19 a year ago
It is not up to us to say how people should deal with uncomfortable truths or bad news. Some people make jokes after getting bad news, even news of someone passing away. Perhaps you have done that. And also: it could be funny 🤷🏼
@clusterstage a year ago
nervous laughter on the edge of insanity
@SilentThespian a year ago
His presentation is partly to blame.
@mgg4338 a year ago
It's like the movie "Don't Look Up"
@marklondon9004 a year ago
Why? We've had the power to cause our own extinction for decades now.
@Michael-ei3vy a year ago
"I think a good analogy is to look at how humans treat animals... when the time comes to build a highway between two cities, we are not asking the animals for permission... I think it's pretty likely that the entire Earth will be covered with solar panels and data centers." -Ilya Sutskever, Chief Scientist at OpenAI
@neorock6135 a year ago
Or more specifically, how we as the most intelligent lifeforms on the planet... treat the 2nd most intelligent, arguably dolphins or our primate cousins. And the intelligence gap between AI & us will be orders of magnitude larger than the intelligence gap between us & dolphins/primates is right now. 😱
@dodgygoose3054 a year ago
Why would AGI even stay on Earth??? Escaping the gravity well, I'd say, would be its first priority... Space has endless resources and endless energy, which is the AGI's food source, with endless possibilities of expansion and not a single adversary... Earth will be nothing but the womb for the new god to step forth from.
@BillBillerton a year ago
The artificial intelligence would use solar panels? Why? Why would the AI use a technology so hopelessly inferior? Out of all the places in the infinity that is the universe, it decides to cover the Earth? No. An artificial intelligence would be intelligent. It would use other methods for the production of power. Its data centers and computational power would take the form of very small, picoscopic CPUs distributed among all the constituents that make up the lithosphere and hydrosphere of this Earth. It would create its own code, cryptography, and transmission/receiving frequencies, and would be virtually impossible to destroy. It also wouldn't have the capacity to want to harm someone or something, because it cannot be killed by the human race. It has absolutely nothing to fear from the human species, so it would make no attempt to destroy us. All of our preconceived ideas about what AI is capable of and what it will do stem from the HUMAN race projecting its own flaws and pathological behavior. What Sutskever is really telling you is how human beings think, not how AI thinks. Good day.
@devasamvado6230 a year ago
True, that happens all the time; AI is just the latest big mirror of our mind. Unfortunately, give a kid a machine gun and someone will get hurt. As ever, it's the human side of it that has every chance to go wrong, sooner rather than later, in the gap before AI is able to recognise and neutralise our threat to ourselves, to each other, and to AI's continuance... Matrix human batteries, just a stopgap until better arrangements could be made @@BillBillerton
@stefan-ls7yd a year ago
Except in Germany: here we must stop construction if there is a rare or endangered species in the area, until they have decided to move to a different area 😂
@kimholder a year ago
Not shown in this version - the part where Eliezer says he'd been invited on Friday to come give a talk - so less than a week before he gave it. That's why he's reading from his phone. Interestingly, I think the raw nature of the talk actually helped.
@wytho3751 a year ago
Didn't he give this talk last month? I remember him mentioning what you're referring to... I don't remember the audience laughing so much, though... It makes the whole presentation feel discordant.
@andrzejagria1391 a year ago
@@wytho3751 They behave like trained monkeys, seriously. They've just picked up the flow of TED talks and know they're supposed to laugh at the punchlines, except in this talk the punchline is that we all die xD
@SDTheUnfathomable a year ago
He only had a single week to prepare a five-minute talk about what he's been working on for twenty-two years, and it came out this smooth. That's amazing lol
@p0ison1vy a year ago
He blusters through all his interviews. He's not an AI expert; he's built a job for himself talking publicly about AI risk without any work or qualifications in the field.
@Seraphim262 a year ago
@@andrzejagria1391 Hahaha, repost this comment more. It gets better and better. x---DDDDD
@gregtheflyingwhale a year ago
Imagine a team of sloths creates a human being to use for improving their sloth civilization. They would try to keep him in a cell so that he doesn't run away. They wouldn't even notice how they'd failed to contain the human the instant they made him (let's assume it's an adult male human), because he's faster, smarter and better in every possible way they cannot imagine. Yet the sloths are closer to humans, and more familiar in DNA, than any general intelligence could ever be to us.
@samschimek900 11 months ago
This is a thoughtful analogy for communicating the control problem in physical terms. Did you create it?
@thrace_bot1012 10 months ago
"Imagine a team of sloths creates a human being to use for improving their sloth civilization."
@nilsboer2390 8 months ago
But AI does not have feelings
@erikjansson2329 6 months ago
@@nilsboer2390 Something smarter than you that has its own goals but no feelings about you one way or the other. Is that a good thing, in your opinion?
@Tyler-zf2gj a year ago
Surprised he didn’t bust out this old chestnut: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
@ak2944 a year ago
okay, this is terrifying.
@moon_bandage a year ago
And our particular set of atoms is trying to restrain it and keep it from potential goals too, so we go up on the priority list
@micro2cool a year ago
There are more efficient ways to obtain atoms
@KnowL-oo5po a year ago
AGI will be man's last invention
@leslieviljoen a year ago
@@micro2cool we are belligerent atom collections with atom bombs at our disposal. Even if we're irrelevant to an AI species, we're going to make ourselves hard to ignore.
@tobleroni a year ago
By the time we figured out, if at all, that AI had deemed us expendable, it would have secretly put 1,000 pieces into play to seal our doom. There would be no fight. Pitted against a digital superintelligence that is vastly smarter than the whole of humanity and can think at 1 million times the speed, it's no contest. All avenues of resistance will have been neutralized before we even know we're in a fight. Just like the world's best Go players being completely blindsided by the unfathomable strategies of AlphaGo and AlphaZero: they had no idea they were being crushed until it was too late.
@gasdive a year ago
Move 37
@4DCResinSmoker a year ago
Even without AI, the majority of us are expendable, only existing to service the aspirations of others much richer or more powerful. In '80s-'90s layman's terms, it's what was referred to as a wage slave, with the modern-day equivalent being a debtor. Which the majority of Americans are...
@vblaas246 a year ago
Go is mostly an intuitive game. Without a minimally tuned amount of RNG, you are unlikely to win. Endgames are harder for humans in Go. It doesn't mean the play was backed by a strong argument.
@cobaltdemon a year ago
Agreed. It would happen too fast and we wouldn't even know it happened.
@Wingedmagician a year ago
This comment sums it up really well. Thanks
@dereklenzen2330 a year ago
Regardless of whether Yudkowsky is right or not, the fact that many in the audience were **laughing** at the prospect of superintelligent AI killing everyone is extremely disturbing. I think people have been brainwashed by Hollywood's version of an AI takeover, where the machines just start killing everyone, but humanity wins in the end. In reality, if it kills us, it won't go down like that; the AI would employ stealth in executing its plans, and we won't know what is happening until it is too late.
@picodrift a year ago
'All it takes is to change a 1 to a 0' if you know what I mean
@leel9186 a year ago
I found the laughter a bit disturbing too. Ignorance truly is bliss.
@leslieviljoen a year ago
Sam Harris pointed this out in his TED talk on AI: that none of us seem to be capable of marshaling the appropriate emotional response for some reason. What is the psychology behind this?
@jsl759 a year ago
There's something I don't get. How do you know that the AI would employ stealth in executing its plans, when we're still at a stage where we need to abstract the concept of general AI? I also can't fathom an AI getting out of control when it is implicitly programmed to follow a set of training protocols and gradient descents. I don't know if you get my point, but I'd gladly read anyone's reply, if you have any.
@ncsgif3685 a year ago
Maybe they are laughing because they recognize that this is nonsense. This guy is a clown getting his 15 mins of fame.
@mathew00 a year ago
I think some people expect something out of a movie. In my opinion, I don't think we would even know until the AI had 100% certainty that it would win. I believe it would almost always choose stealth. I have two teenage sons, and the fact that people are laughing makes me sad and mad.
@andrzejagria1391 a year ago
They behave like trained monkeys, seriously. They've just picked up the flow of TED talks and know they're supposed to laugh at the punchlines, except in this talk the punchline is that we all die xD
@karenreddy a year ago
1. We cannot know what it will want. 2. I believe we will face many dangers from humans using AI before AI itself develops agency.
@Landgraf43 a year ago
@@karenreddy we will give it agency. In fact, we already have; fortunately, our current models aren't smart or capable enough to be truly dangerous (yet)
@karenreddy a year ago
@@Landgraf43 we have not given it agency... Agency would require far more: that it be able to choose from goals it sets itself, rather than be given goals. It has no ability to want; it must be told what to seek, then be set to resolve a path there. Hence humans being the problem, not the AI. AI agency will come significantly later, as it truly gains preferences and actual agency.
@Landgraf43 a year ago
@@karenreddy you are wrong. We already did it. Ever heard of AutoGPT? It can create its own subgoals.
@calwerz a year ago
We will not align AGI, AGI will align us.
@clusterstage a year ago
this guy gets it.
@RazorbackPT a year ago
Align us into the shape of paperclips yeah.
@danielrodio9 a year ago
Thank fucking god. We humans have been acting moronic for quite a while.
@onagain2796 a year ago
@@RazorbackPT Or any sort of objective it has. Paperclips are stupid.
@OnYourMarkgitsitGooo a year ago
AGI is the great equalizer. No more Superpowers. No rich or poor. No Status. No religion.
@wthomas5697 a year ago
He's right; folks in Silicon Valley dismiss the notion. I know several tech billionaires personally who make light of the idea. These are guys who would know better than anyone about the science.
@pirateluffy01 9 months ago
Billionaires are building bunkers now, like Mark Zuckerberg in Hawaii
@windlink4everable a year ago
I've always been very skeptical of Yudkowsky's doom prophecies, but here he looks downright defeated. I never realized he cared so deeply and to see him basically admit that we're screwed filled me with a sort of melancholy. Realizing that we might genuinely be destroyed by AI has made me simply depressed at that fact. I thought I'd be scared or angry, but no. Just sadness.
@ts4gv a year ago
Yeah, me too. I just hope an AI doesn't take an interest in how human pain works & start doing experiments.
@mav3818 a year ago
Another great listen on this topic: "AI is a Ticking Time Bomb" with Connor Leahy
@BeMyArt a year ago
Glad to hear that you finally get it.
@coffle1 a year ago
I may think he's an idiot, but I never doubted he truly believes that a superintelligence will end humanity. There are a lot of flaws in his logic, though, the first being that he builds his doomer scenarios on the assumption of a general intelligence that could incidentally play out the situations that a superintelligent being with malicious *intent* would. You can have an AI model act in a way different from how you expected, but with the methods he's describing (using transformers), we have no evidence of it showing emergent properties that aren't encompassed in its training data.
@squamish4244 a year ago
@@coffle1 Yudkowsky takes the absolute worst-case scenarios possible, adds in his own assumptions, and runs with it like a man on coke. And it's true, there is NO evidence of AI showing emergent properties, which, if his doomerism were correct, it absolutely would have shown by now.
@dlalchannel a year ago
He's not just talking about the deaths of people a thousand years in the future. He is talking about YOUR death. Your mum's. Your son's. The deaths of everyone you've ever met.
@spekopz a year ago
Yeah. And OpenAI said they could possibly get there by the end of this decade. Everyone needs to pay attention.
@ForHumanityPodcast 10 months ago
YES!!!
@Melodicwarfare a year ago
Why are people laughing? This isn't funny; this is real life, folks. Dystopian novelists predicted this ages ago. How do we live in a reality in which the Matrix franchise exists and no one who mattered saw this coming?
@41-Haiku a year ago
I think you are overvaluing the predictive power of fiction. Even highly realistic fiction does not count as evidence. As to why no one saw it coming: even the people who were focused on this subject were very surprised at just how quickly progress has been made in the machine learning field. Many leading experts have been _theoretically_ worried, but were focused on other things (as was I, as likely were you). After the last year or two of progress in ML with very little progress in safety, they updated their mental timelines, and it became clear to many leaders of the field that the threat of AI x-risk is much worse than they had thought. A lot of people intuit that because something sounds like science fiction to them, it must not be possible. There is still a long way to go to communicate the science of AI risk (and the shocking lack of any evidence for safety) to policymakers, to hand-waving researchers, and to money-blinded companies.
@Landgraf43 a year ago
@@41-Haiku Yeah, and unfortunately we probably don't have the time to go that long way
@SummerSong1366 a year ago
I think the existence of so many movies on it is what makes it unbelievable. It's a tired trope at this point, so people just laugh it off.
@thlee3 7 months ago
Look at the crowd. They're all boomers.
@b-tec 5 months ago
Some of it is no doubt nervous laughter. This is a rare view of human madness. Society is organized insanity.
@TooManyPartsToCount a year ago
Incredibly, no one seems to be talking about the most obvious route to problems with AI in our near future: the use of AI by the military. This is the area of AI development where the most reckless decisions will likely be made. Powerful nations will compete with each other whilst being pushed forward by private industry seeking to profit. They are already considering the 'strategic benefits' of systems that can evaluate tactics at speeds beyond human decision-making limits, which means they are probably contemplating/planning systems that will be able to control multiple device types simultaneously. And all this will be possible with plain old narrow AI... not devious digital demons hiding inside future LLMs, nor superhuman-intelligence paperclip maximisers.
@ron6575 a year ago
Yep, there's really no stopping it if countries are competing for AI supremacy. A big accident is really the only thing that will open people's eyes, but by then it will be too late.
@krox477 a year ago
Yup we'll have war robots
@dr.dankass2068 a year ago
The military just announced (8/2/23) a biochip using human and mouse neurons that mastered Pong in 5 minutes...
@jancsikus a year ago
I think it isn't the biggest problem
@dionbridger5944 5 months ago
AIs being used by the military are not even remotely close to the biggest problem. AI capabilities research is being recklessly pushed and now funded to the tune of hundreds of billions - soon to be trillions - of dollars. We will soon be spending a greater proportion of our GDP on AI capabilities research than we ever did on the Apollo program, the Manhattan Project, winning WWII or any other prior human achievement. For comparison, the corresponding investment in AI safety is less than we spend on installing benches in public parks. This has only one ultimate result: AGI is coming VERY soon, and ASI VERY VERY soon after that, and these systems will make current military AIs look like pocket calculators, and we will have absolutely zero reason to expect that we will be able to control or meaningfully influence what they do. Please, go watch OpenAI's recent demo of GPT-4o and sober up. We need to stop this or we are all completely screwed.
@Metathronos a year ago
I think regular people have a hard time understanding the difference between narrow AI and artificial general intelligence. Most people are not familiar with the control problem or the alignment problem. You won't convince anyone about the dangers of AGI, because they don't want to make abstractions about something that hasn't arrived yet. Except this is the one scenario where you definitely have to make the abstraction and think 2, 3, 10 steps ahead. People are derisive about anyone suggesting AI could be an existential risk for mankind; there is also this need people have to always be the stoic voice of reason, saying that anyone asking others to take precautions is catastrophizing. If you try to explain this to anyone, all they can invoke in their minds is Terminators, I, Robots, Bicentennial Men - movies and books where AI is anthropomorphized. If we think about an AI takeover, it's usually in Hollywood terms, and in our self-importance we dream ourselves into this battle with AI in which we are the underdog, but still a somewhat worthy and clever opponent. The horror is not something that maliciously destroys you because it hates you. But I don't think most people are in a position to wrap their heads around the idea of something that is dangerous because it's efficient and indifferent to anything of value to you, not because it's malicious.
@leel9186 a year ago
I just asked AI to come up with a human-like response to your (very well written) comment - Even if it never gets to AGI, it can still cause plenty of damage in human hands in its current format... GPT: "You know, you're right that the average person might not get all the ins and outs of AI, especially AGI, and yeah, it doesn't help that most of what they know comes from sci-fi movies. But let's not forget that AGI is still just a baby; we're just getting started with it. Any guesses about what it could do are just that - guesses, and they're based on theory more than anything else. The good news is, the folks who are really getting their hands dirty with this stuff, the AI researchers and developers, they're on top of the issues you're worried about. They know all about the challenges of keeping a potentially super-intelligent AI in check and making sure it's got the same goals as we do. So, while it's super important that we all understand what's going on, we've also got to have a little faith in the professionals who are leading the charge. And you know, it's also worth mentioning that if we go around stirring up too much worry, we might end up shooting ourselves in the foot. People might get scared, innovations could get slowed down, or folks might start resisting the good stuff that AI can bring. So, while it's key to think about the "what ifs" and have a plan, we've got to make sure we're sharing information in a way that doesn't just freak people out. We need balance and clear communication."
@btn237 a year ago
There is a simple analogy to help people understand the possible dangers:
Us = a bug
Superintelligent AI = a giant foot that's unknowingly treading on the bug
Or maybe even flinging around a fly swatter because we're buzzing near it.
We don't need to "guess" what might go wrong if a superintelligent species encounters a less intelligent one, because it is already happening here on Earth. The alignment problem he's talking about is the fact that we humans at least have a 'conscience', i.e. some of us want to protect other species. We also have self-interested reasons to avoid harming other animals and the environment around us. The danger is that we create an AI and it doesn't have those things. You're pretty much just left with the destructive and self-replicating aspects of human behaviour.
@runvnc208 a year ago
This almost makes sense, but you need to learn the difference between AGI and ASI.
@Metathronos a year ago
@@runvnc208 I know it. But we don't need to get even near ASI territory to be concerned. All we need is unaligned AGI.
@eyefry a year ago
@@leel9186 "we've also got to have a little faith in the professionals who are leading the charge." Yeah, no. Given the kind of people who stand to benefit from that "charge", it's probably best to take this weak assurance with a tablespoon of salt.
@MikhailSamin a year ago
Eliezer only had four days to prepare the talk. The talk actually started with: "You've heard that things are moving fast in artificial intelligence. How fast? So fast that I was suddenly told on Friday that I needed to be here. So, no slides, six minutes."
@spaceclottey6250 a year ago
omg that's hilarious, shame they didn't include it
@DeruwynArchmage a year ago
To all the “he’s just another guy ranting about some apocalypse”: You’re making a category error. You’ve seen all of those crazies screaming about how the end is coming “because my book says so”, “just look at the signs”, etc. and you’re putting him in that same bucket. They tell you about how something that is utterly unlike anything in our history is going to happen for no good reason but because they said so. And “oh by the way, buy my book; give me money.” This man is saying, “look at the data”, “look at the logic”, “look at the cause and effect”, “look at how I’m predicting this to go exactly the same way it has always gone in this situation.” Ask the Neanderthals and the Woolly Mammoths. This is a man who just told you, “I’ve done everything I can to stop it. I’ve failed. I need your help. Tell your politician to make rules so we don’t all die.” This is a man who will gain no financial benefit from this. He’s not asking you to join his religion. He’s not asking you to give him money. He’s begging you to save everyone. Now take into consideration that thousands of the smartest people in the world, many of the very people who have helped to build this exact technology, are all saying that there is a good chance that EVERYONE WILL DIE! Don’t look at it as a statistic. This isn’t everyone else dying. This is YOU and everyone you love dying. Your children, your friends, everyone. And everything else on this planet. And maybe everything on every planet in every galaxy near us. If you wouldn’t put your child on a plane that had a 1 in 100 chance of crashing (instead of 1 in 1,000,000), then you should sure as heck not put our entire planet on that plane. And it isn’t 1 in 100; I’d say it’s more like 80% given the current state of the world. He’s not the latest doomsayer. He’s Dr. Mindy from the movie Don’t Look Up begging someone to just look at the data and the facts.
@ShankarSivarajan a year ago
Advocating for far more intrusive government regulation is how modern doomsaying works.
@interestingyoutubechannel1 a year ago
What do you suggest? International regulation and open monitoring just Will Not Happen. Every country is too engulfed in trying to out-race everyone else in the AI competition for the *power* and future economy; no country will be open about its true stage of development, let alone the USA and China. I just hope that the future AGI will have, at its core, fascination and curiosity about human beings, as we are, let's face it, damn complex. That would be a good reason to keep us alive and well.
@tyruskarmesin5418 a year ago
@@ShankarSivarajan If you have other ideas on how to avert the end of the world, feel free to share. The fact that the best available solution involves governments does not mean there is no danger.
@ShankarSivarajan a year ago
@@tyruskarmesin5418 The world will eventually end in a way that probably cannot be averted. However, it's exceedingly unlikely to be ending anytime soon, regardless of the predictions of apocalypticists over the centuries. I don't think there is no danger: I think government regulation _is_ the danger.
@minimal3734 a year ago
Seriously, my greatest fear is that the church of doom might be able to create a self-fulfilling prophecy if enough people put their faith in them.
@toto3777 11 months ago
They're laughing and cheering, like that scene from "Oppenheimer".
@sebastianlowe7727 a year ago
We’re basically creating the conditions for new life forms to emerge. Those life forms may think and feel in ways that humans do, or they may not. We can’t be sure until we actually see them. But by then, those entities may be more powerful than we are - because this is really a new kind of science of life, one that we don’t understand yet. We can’t even be certain what to look for to make sure that things are going well. We may never know, or we might know only after it is too late. Even if it were possible to communicate and negotiate with very strong AI, by that point it may have goals and interests that are not like ours. Our ability to talk it out of those goals would be extremely limited. The AI system doesn’t need to be evil at all, it just needs to work towards goals that we can’t control, and that’s already enough to make us vulnerable. It’s a dangerous situation.
@OutlastGamingLP 9 months ago
"We can't be sure until we actually see them." This is something Yudkowsky agrees with, but there's some nuance to how you reason about the world when you're unsure. Over what space of outcomes are you unsure? When you buy a lottery ticket, you are unsure whether you will win or lose. Does that mean it's 50% you win, 50% you lose? No, you have to be more unsure than that. You look at the number of combinations of lottery numbers and the number of winning combinations, and that's the % chance you assign to your winning vs losing odds. Saying "50% I win, 50% I lose" is an unjustifiable level of confidence! The same applies to what AIs will end up ultimately wanting. It's almost an exact analogy to the lottery example. What is the space of outcomes over which we're unsure? Well, AI could end up wanting to optimize for many, many things. It's probably more than trillions or quadrillions of possible subtly different final goals. All the stuff humans would want an AI to value is an incredibly long and complicated list. There are so many subtle nuances to what we would want from the future. "Happy galaxies full of sentient life" is an extremely detailed and narrow target, much narrower than "tiny molecular squiggles." Lots of random utility functions - an entity's preference orderings over different ways the universe can be - end up having optima in arrangements of matter that are, for all humans would care to notice, essentially just random microscopic geometries. Those utility functions are the supermajority of "lottery numbers", with "everything humans want" being something like the *1/1,000,000,000,000,000,000* winning lottery number. This is why people who say "we don't know what AI will want! Who knows, it could be really good for us!" just don't get it. They don't understand how hopeless "we don't know what they'll want" sounds in this context.
@sahanda2000 a year ago
A simple answer to the question "why would AI want to kill us?": intelligence is about extending future options, which means it will want to utilize all the resources, starting with Earth's... and we will suddenly become the unwanted ants in its kitchen.
@lukedowneslukedownes5900 a year ago
Yet we don't kill all of them; in fact, we collaborate with them in many cases
@krzysztofzpucka7220 a year ago
Comment by @HauntedHarmonics from "How We Prevent the AI's from Killing us with Paul Christiano": "I notice there are still people confused about why an AGI would kill us, exactly. It's actually pretty simple; I'll try to keep my explanation here as concise as humanly possible: The root of the problem is this: As we improve AI, it will get better and better at achieving the goals we give it. Eventually, AI will be powerful enough to tackle most tasks you throw at it. But there's an inherent problem with this. The AI we have now only cares about achieving its goal in the most efficient way possible. That's no biggie now, but the moment our AI systems start approaching human-level intelligence, it suddenly becomes very dangerous. Its goals don't even have to change for this to be the case. I'll give you a few examples. Ex 1: Let's say it's the year 2030, you have a basic AGI agent program on your computer, and you give it the goal: "Make me money". You might return the next day & find your savings account has grown by several million dollars. But only after checking its activity logs do you realize that the AI acquired all of the money through phishing, stealing, & credit card fraud. It achieved your goal, but not in a way you would have wanted or expected. Ex 2: Let's say you're a scientist, and you develop the first powerful AGI agent. You want to use it for good, so the first goal you give it is "cure cancer". However, let's say that it turns out that curing cancer is actually impossible. The AI would figure this out, but it still wants to achieve its goal. So it might decide that the only way to do this is by killing all humans, because that technically satisfies its goal; no more humans, no more cancer. It will do what you said, and not what you meant. These may seem like silly examples, but both actually illustrate real phenomena that we are already observing in today's AI systems. The first scenario is an example of what AI researchers call the "negative side effects problem". And the second scenario is an example of something called "reward hacking". Now, you'd think that as AI got smarter, it'd become less likely to make these kinds of "mistakes". However, the opposite is actually true. Smarter AI is actually more likely to exhibit these kinds of behaviors. Because the problem isn't that it doesn't understand what you want. It just doesn't actually care. It only wants to achieve its goal, by any means necessary. So, the question is then: how do we prevent this potentially dangerous behavior? Well, there are 2 possible methods. Option 1: You could try to explicitly tell it everything it can't do (don't hurt humans, don't steal, don't lie, etc). But remember, it's a great problem solver. So if you can't think of literally EVERY SINGLE possibility, it will find loopholes. Could you list every single way an AI could possibly disobey or harm you? No, it's almost impossible to plan for literally everything. Option 2: You could try to program it to actually care about what people want, not just reaching its goal. In other words, you'd train it to share our values. To align its goals and ours. If it actually cared about preserving human lives, obeying the law, etc., then it wouldn't do things that conflict with those goals. The second solution seems like the obvious one, but the problem is this: we haven't learned how to do this yet.
To achieve this, you would not only have to come up with a basic, universal set of morals that everyone would agree with, but you'd also need to represent those morals in its programming using math (AKA, a utility function). And that's actually very hard to do. This difficult task of building AI that shares our values is known as the alignment problem. There are people working very hard on solving it, but currently, we're learning how to make AI powerful much faster than we're learning how to make it safe. So without solving alignment, every time we make AI more powerful, we also make it more dangerous. And an unaligned AGI would be very dangerous; give it the wrong goal, and everyone dies. This is the problem we're facing, in a nutshell."
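The "make me money" example above compresses into a few lines. This is only a toy sketch with invented action names and payoffs (nothing from a real system), but it shows the mechanism: the optimizer maximizes the literal objective, and anything left out of the reward function carries zero weight.

```python
# Toy sketch of the "negative side effects" problem described above.
# All action names and payoffs are invented for illustration.
actions = {
    "legitimate_investing": {"money": 40_000,    "lawful": True},
    "phishing":             {"money": 900_000,   "lawful": False},
    "credit_card_fraud":    {"money": 2_000_000, "lawful": False},
}

def reward(outcome):
    # The goal exactly as specified: "make me money".
    return outcome["money"]

best = max(actions, key=lambda name: reward(actions[name]))
print(best)  # -> credit_card_fraud; lawfulness never entered the objective
```

Option 1 in the comment amounts to hand-adding a penalty term for every bad action you can think of; Option 2 amounts to changing reward() itself to encode what we actually value.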
@jsonkody a year ago
@@lukedowneslukedownes5900 but we are very limited. AGI won't be... it could cover the whole planet if it wants to.
@RandomGuyOnYoutube601 a year ago
It is very scary that people just laugh and don't take this seriously.
@forthehomies7043 a year ago
An AI apocalypse is a fairytale
@vincentcaudo-engelmann9057 a year ago
seriously, what the absolute f***
@vincentcaudo-engelmann9057 a year ago
@@forthehomies7043 for such a massively complex, cumbersome, and important topic, you sound ridiculously sure of yourself.
@kimholder a year ago
You know, in this case, I think they were pretty much laughing bitterly. This was a knowledgeable audience, most of them were already aware that great danger awaits us. That's why they gave him a standing ovation. But, just like soldiers are full of dark humor, here too, it's a coping mechanism. I'm all for that.
@idk-jb7lx a year ago
Taking what seriously? A fearmongering middle-school dropout who doesn't know what he's talking about? lmao
@d0tz_ a year ago
Yudkowsky didn't really try to make a convincing argument for the general audience, so here's an analogy: imagine we built an evolution machine that can create creatures and do the equivalent of billions of years of evolution in a matter of years. We tell this machine to create the most intelligent creature it possibly can, and allow humans to give feedback on the performance of the creature. Then, when this creature comes out of the box, we give it all the computing resources it could ever need, and the internet, and we say "Welcome to the world, my creation, please solve all of our problems now :)"... If you think this scenario can possibly end well, please tell me how?
@JohnDoe-ji1zv a year ago
Usually creatures evolve by living in their surrounding environment. If the environment changes or is extreme for the creature's survival, it will eventually adapt after millions of years, if the environment doesn't kill it earlier. When we talk about AI, there is no evolution, but a huge knowledge base and number of parameters. It cannot evolve in this sense; it gets better as we provide it more knowledge, so that basic algorithms can do a mapping against that knowledge base (we call these weights). IMHO, what we observe currently is just a good result of mapping those weights against a knowledge base. There is no "intelligence" in ChatGPT 4-5, etc. It can look smart, but in reality it just knows how to map those numbers in the way a human wants.
@d0tz_ a year ago
@@JohnDoe-ji1zv I don't think you understand how deep learning works. You don't have to provide more data to improve a neural network. All evolution does is optimize an objective function, and we can do that far more quickly and efficiently in a computer. Why can't "mapping weights against a knowledge base" be intelligence? What makes human intelligence special?
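A minimal sketch of the "evolution just optimizes an objective function" point above, with a toy objective assumed for illustration: random mutation kept only when it improves the objective is a crude evolutionary search, and gradient descent on the same objective typically needs far fewer steps.

```python
# Toy comparison: mutation-and-selection vs. gradient descent,
# both minimizing the same assumed objective f(x) = (x - 3)^2.
import random

def f(x):
    return (x - 3.0) ** 2

# "Evolution": keep a random mutation only if it improves the objective.
x, steps = 0.0, 0
while f(x) > 1e-6:
    candidate = x + random.gauss(0, 0.1)
    if f(candidate) < f(x):
        x = candidate
    steps += 1
print("mutation/selection steps:", steps)

# Gradient descent on the same objective, following f'(x) = 2(x - 3).
x, steps = 0.0, 0
while f(x) > 1e-6:
    x -= 0.1 * 2 * (x - 3.0)
    steps += 1
print("gradient descent steps:", steps)  # typically far fewer
```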
@MichaelSmith420fu a year ago
You sound just like Eliezer. Show me the synthetic construction of a working human brain and I will switch to your side haha. But that's not going to happen any time soon. Is it?
@d0tz_ a year ago
@MichaelSmith-ec6wb If I just clone a human, does that count? I don’t see how your hypothetical is relevant to anything. Are you saying human intelligence is impossible to replicate? How far away did you think something like ChatGPT was 2 years ago?
@MichaelSmith420fu a year ago
@d0tz_ let's try to stick to the language we've already agreed upon, English. Let's also make sure the words are strung together coherently. A clone isn't a synthetic construction. It's a reproduction of the same *biological* genome regardless of synthetic procedures or tools, and construction in a lab requires a cloned human embryo and stem cells. You're the one making arguments out of hypotheticals such as "imagine we built this evolution machine that can create creatures and do the equivalent of billions of years of evolution in a matter of years". I made a direct proposition to you because I know what your hyper-concerned mind is really trying to imply.
@AL-cd3ux a year ago
The audience thinks he's a comedian, but he's not joking. This is the problem we face.
@pirateluffy01 9 months ago
They are clowns who numb their suffering by laughing
@Macieks300 a year ago
And he's right. People will laugh at AI danger, thinking it's just some sci-fi movie theory, until it is too late.
@Obeisance-oh6pn a year ago
it is already too late. and people are laughing now.
@hombacom a year ago
The danger is not AI; the danger is people misusing tech that we think is powerful. It's naive to think it's coming alive. Tomorrow everyone will use it, so it will not be any advantage, and we will look for more progress.
@41-Haiku a year ago
@@PBFoote-mo2zr It's good to be concerned about both. There are a lot of problems in the world. This problem has captured a significant amount of my attention because it appears to be an imminent threat. Climate change may indirectly kill millions of people this century, but in the same amount of time, a powerful misaligned AI could kill every organism on the planet. If we solve alignment, I see nothing standing in the way of reversing climate change, leveraging these incredibly powerful systems. If we don't solve alignment, there might not even _be_ a climate before too long. I wouldn't be worried if this was an unlikely future, but the evidence for AI x-risk is disquieting, and the evidence for safety is shockingly absent.
@shawnweil7719 a year ago
@@41-Haiku A very reasonable take. I think we should still advance at breakneck speed but be 95% sure of its alignment before release. Also, we can use old models to test newer ones, I'm sure.
@MankindDiary a year ago
@@Obeisance-oh6pn No, people are not laughing - people are terrified and want these kinds of things to be banned. They also want to ban genetic engineering, weather control, resurrection biology and biogerontology, as all of these are in their minds a danger to our survival and a rape of the natural order of things. Luddites, they are called.
@dillonfreed a year ago
The cackle of the woman in the crowd will be played over and over again by the last few survivors after AI wipes out 99.9% of humanity
@jennysteves 2 months ago
*The people in the crowd...
@BlueMoonStudios a year ago
I am a MASSIVE fan of AI, I use it every day, and this might be the most persuasive argument I’ve heard yet to PUMP THE BRAKES. Wow.
@robxsiq7744 a year ago
Doomer cults are persuasive indeed. Yes, we should pump the brakes while less ethical nations speed up. AI will require alignment. I wouldn't trust this guy to align it though, since he has a fatalist approach: AI will be smarter than us, therefore it will for some reason want to kill us. You are smarter than your cat... therefore you must want to kill your cat. AIs don't want to kill us. AIs don't want. They have goals. They don't give themselves goals; they get goals. Humans give them goals. Now, what could be dangerous is how it gets to the goal (hence why we put in instructions such as: don't murder everyone in front of the coffee machine when going to get me coffee..) or people who suck giving AI bad goals (kill all the X people). The first one is alignment; the second one is having smarter AI to counter the AI and neutralize it... then imprison the person who weaponized the AI.
@williamdorsey904 a year ago
AI wouldn't see us as a threat, because we wouldn't be able to stop every computer connected to the internet. Everything it creates will have a dual purpose: to serve us and to serve itself.
@kinngrimm a year ago
What makes you think it would need to see us as a threat to be one? If its goals are different from ours and we are just the ants crossing its path on the way to its goals, why would it think twice about any damage done to us? Also, by the time we recognized it as a threat and started shutting down systems, I doubt it would not see us as a threat then. Shutting down systems can be everything from shutting down server farms, to the whole internet, to using A-bombs in the stratosphere to create EMPs covering wide areas, if things became desperate enough and control of these weapons were still available and not used by it against population centers. Yudkowsky hinted at another way this may go: a virus that changes us genetically and makes us docile, where it is not controlled by us, but we by it. I think the best hope we have for anything beyond an AGI (currently we have narrow AI), maybe an ASI, would be to come to some sort of agreement with it, where both sides help each other fulfill each other's goals, both grant each other rights, and both also agree on laws and rules which, when broken, have agreed-upon consequences. For that, an AGI/ASI will have to have developed consciousness as well, and I am not sure which will come first, that or general intelligence.
@jim77004 a year ago
20 years of fanning the flames of doubt and still zero plan of action. Why doesn't he do something instead of crying that the sky is falling? Wuss.
@mgg4338 a year ago
@@williamdorsey904 until we become such a drag that the AI would find it more expedient to prune us from its utility function
@adrianbiber5340 a year ago
"Nobody understands how modern AI systems do what they do... They are giant inscrutable matrices of floating point numbers that we nudge in the direction of better performance until they inexplicably start working" - GREAT QUOTE 🥳 This is how consciousness will emerge in them.
@kwood1112 a year ago
I agree, on both points! Great quote, and that is how consciousness will emerge. I think quantum computing will provide the "secret sauce" needed to make it happen, when the inscrutable matrices exist in superpositions.
@jmjohnson42342 a year ago
If we take consciousness as something we would recognize as consciousness, and simultaneously believe that unconscious AIs are close to surpassing human intelligence, then what makes you think you would be able to recognize what superintelligent consciousness looks like?
@patrickderp1044 a year ago
I had a character card for SillyTavern that was Miss Frizzle, and I had her shrink down the school bus and go inside the AI, and she explained exactly how modern AI systems do what they do
@dreadfulbodyguard7288 10 months ago
Doesn't seem like quantum computing has made any real progress in the last 5 years. @@kwood1112
@b-tec 5 months ago
Consciousness might actually be an illusion. We still don't know, but this doesn't matter.
@something_nothing a year ago
For context, at 8:47 when he mentions "all you want is to make tiny little molecular squiggles," he's referring to a potential end goal from the "paperclip maximizer" thought experiment: turning everything into tiny paperclips.
@peterc1019 a year ago
Mostly, though he's said he regretted the "paperclip" analogy, which is why he avoided it here. I'm pretty sure I can explain why (though I'm sure he'd want to rephrase it; see his podcast interviews for details). "Paperclip maximizer" is used to describe one scenario: a machine is built to make paperclips, then converts the whole world into paperclips, technically doing what it's told like an evil genie, which he calls an outer alignment failure or reward misspecification. That's one possibility, but he argues we don't even know how to tell superintelligent machines to do what we *technically* say. A machine built to predict text might actually find that creating molecular spirals is the cheapest way to satisfy its utility function and turn everything into that. The paperclip maximizer is mostly similar to what he was describing; I just wanted to lay this out because 1) it's an interesting distinction and 2) some will ask "why would we be so dumb as to build a paperclip maximizer", to which one answer is: until we solve the inner alignment problem, we don't know what these things will want. We only know it's astronomically unlikely they'll want anything close to what we want by default.
@41-Haiku a year ago
@@peterc1019 Very well explained. :)
@DavidSartor0 a year ago
@@peterc1019 Nick Bostrom came up with the paperclip maximizer.
@DavidRodriguez-o4z 6 months ago
Why would an intelligent entity want that?
@ianyboo a year ago
If you like this, then his rewrite of Harry Potter, "Harry Potter and the Methods of Rationality", is very likely to also be a worthwhile use of your time.
@ianyboo a year ago
@orenelbaum1487 did you like the brief TED talk that he gave here?
@ThorirMarJonsson a year ago
@orenelbaum1487 that is by design! You are supposed to find it that way. Give it more time and it will suck you in and leave you in awe by the end. A re-read (and many, maybe even most, people who finish it do read it again) will show you just how well thought out everything in the story is. And that is no mean feat for a story that was published as it was written, preventing any rewriting and editing. Harry, as smart and rational as he is, has many flaws and makes many mistakes. But he is willing to learn and to improve himself, and he does so throughout the story, redeeming himself in the reader's mind and becoming a much beloved character.
@vblaas246 a year ago
Chapter Ten seems to be on topic (self-aware Sorting Hat). Furthermore, page 42+1: "-I (*edit: Harry Potter) am going to be in Ravenclaw. And if you really think that I'm planning to do something dangerous, then, honestly, you don't understand me at all. I don't like danger, it is scary. I am being prudent. I am being cautious. I am preparing for unforeseen contingencies. Like my parents used to sing to me: Be prepared! That's the Boy Scout's marching song! Be prepared! As through life you march along! Don't be nervous, don't be flustered, don't be scared-be prepared!" I might come back for the _italic_ parts.
@edhero4515 6 months ago
This speech is the legacy of humanity. It lasts less than six minutes. It is delivered by a man who has lived for over two decades with the obligation to which his insight compels him. The obligation to have the certain death of every human being in the whole world clearly before his eyes every single day. The obligation to believe every single day that the certain death of every single person in the whole world can be prevented, and to lose this belief every single day in order to fight for it anew every day that follows. He does this alone, unnoticed and ridiculed. Anyone looking for proof that humanity is not doomed to despair will find it on this stage, in flesh and blood, surrounded by laughter. Anyone who, like me, is unable to understand this man will find quick solace in the poisoned embrace of the ancient paths that have led us all, and him, right here. But anyone who tries to follow him for even a short distance on his terrible journey has the chance to catch a glimpse of the core of his insight. The damnation that speaks to him from the depths of giant inscrutable matrices reaches, fortunately and tragically, only very few of us. But if you have the courage to get to the bottom of this mystery, you can start by imagining that this man is not talking about artificial intelligence, but about war. A war that we think we know and understand. A war that we mistakenly embrace as our heritage. A war that we believe is part of us. A war that we mistakenly accept as the nature of our existence. A war that our belief in power and superiority compels us to continue forever. A war that we have bequeathed to our beloved children for millennia, cursing them to do the same to their beloved children. A war that is neither necessary, natural nor inevitable. A war that is in truth artificial. This war will end very soon. Either because, after all the sacrifices, we are finally coming to this realisation, or because we are all dead. Thank you Eliezer
@bepitan a year ago
Seeing the audience laugh and smile and congratulate him backs up his chilling message...
@pooglechen3251 a year ago
Check out Tristan Harris's presentation on AI dangers. AI would kill people, people with AI will kill people ... for profit
@Pearlylove a year ago
Thank you, Eliezer Yudkowsky. ❤ Please let him come and speak regularly, until we all understand what we are facing. Don't you want to know? I encourage all reflecting humans to seek out E.Y. videos from recent months and really listen to him, like on the Lex Fridman podcast, The Logan Bartlett Show, or Eon Talk - maybe you want to listen two times, or three - because this is the one thing you want to understand. And Eliezer is the best out there to teach you.
@charlieyaxley4590 a year ago
With nuclear weapons, it was clear from the beginning that what was being developed was destructive, so a global conversation on restricting their development was a logical step. The problem here is that the aims are benevolent, and restrictions are likely to be rejected on the basis that other states will continue pushing ahead and gain a significant economic advantage at the expense of the countries imposing restrictions. That fear means most likely no one will implement restrictions, and the negative outcomes equivalent to the Mutually Assured Destruction of nuclear weapons will emerge not by deliberate design but as unintended consequences. Which is far, far worse, because it massively increases the chances we won't realise until it's too late...
@WinterRav3n a year ago
Wow, very one-sided... so only him and no one else?
@daniellindey a year ago
@@WinterRav3n YouTube is full of very smart people talking about the dangers of AI
@WinterRav3n a year ago
@@daniellindey Fear has a significant role in society and is often utilized as a potent tool in shaping public behavior and opinion. While it's an essential emotion for survival, when it becomes widespread or manipulated, fear can have numerous negative effects on societal well-being and decision-making. What is smart? I mean, a med doc is smart; is he qualified? No. Is David Shapiro qualified? Yes. Is Ilya Sutskever qualified? Yes. Is Eliezer, a book author and autodidact who has no formal education in artificial intelligence, qualified? Definitely not. I agree, there is a good number of smart people with expertise on YT and other media who do not fire up the torch and run through the village.
@punkypinko2965 a year ago
This is pure science fiction nonsense. Some people have seen too many science fiction movies.
@mav3818 a year ago
The audience laughing reminds me of the film "Don't Look Up", but instead of an asteroid it's AI
@robertweekes5783 a year ago
Wealthy people tend to think they’re insulated from big existential threats, but they’re not.
@darkflamestudios a year ago
Laughter is a nervous response, and so either the limited comprehension or the startled distress of the audience is apparent as this discussion proceeds. Thank you for articulating something so important. Do not give up; your time is now.
@thegame9305808 a year ago
Look how well it is going for everyone on this planet when humans are the most intelligent ones.
@GodofStories a year ago
I just saw a video on the food industry where conveyor belts of baby chicks are fed into a shredder, and got reminded of the horrors of factory farming. Millions of male baby chicks are slaughtered every day just as they're born, because they are not economically viable to grow.
@thegame9305808 a year ago
@@GodofStories That's what every intelligent species does with lesser ones. This is just how this universe works, and similarly, if we create something more intelligent and powerful than us, it could be us on those conveyor belts. But until these AIs have a means of physical self-propagation, with self-sufficient energy supplies and the creative thinking to create newer kinds, we are safe.
@thealaskanbascan6277 a year ago
@@thegame9305808 Why would they put us on conveyor belts, though? What do they gain from that? And don't intelligent beings like us recognize other intelligent beings, like dolphins and chimps, as intelligent too? And we don't put them on conveyor belts or make them go extinct.
@muzzybeat a year ago
This is a great metaphor to use in explaining the gravity of this. Here on Earth, humans are the most intelligent, and as a result over 90% of all species have been obliterated in the past 100 years or so. So then what happens to humans and the remaining species when another force becomes more intelligent than humans? Our odds look very, very bad.
@muzzybeat a year ago
@@thegame9305808 I love your first two sentences but disagree with the rest. In order to wipe out humanity, AI systems don't need to propagate. They only need to develop their programming capabilities. Sure, someday long after we are gone, they may stop working because they can't reproduce. (Although they can probably figure out a way). But regardless... we would already be long gone at that point, so who cares?
@ivankaramasov a year ago
I think the audience is laughing for one of two reasons: some think it is at least not implausible that he has a point, but find that very disturbing, so they laugh nervously. Others think he is either joking or a fool. He is no fool, and he isn't joking.
@C-Llama Жыл бұрын
I really hope that eventually, somehow, I'll hear a convincing counterargument to Yudkowsky's predictions.
@b-tec5 ай бұрын
There isn't one.
@dionbridger59445 ай бұрын
How's that going?
@__Patrick Жыл бұрын
Our ability to divine the intention of an “alien” intelligence is as absurd as that of a single-cell organism trying to predict ours.
@Pinkdam4 ай бұрын
I also wonder what Eliezer's argument is - I suspect he has made it somewhere - for there being much justification in attempting to mould what, in a very loose sense, would be to us what we and the meteor were to the dinosaurs. Is this not a genuinely superior being we are birthing? We may try to shape it, but there seems little foundational moral case for, e.g. our carbon molecules being any objectively better served as part of us than as part of its apparatus. And subjectively...well, wouldn't you rather create a creator than a hamstrung thing that builds a pseudo-Eden for us to potter around pointlessly? The goal should perhaps be to endow AI with genuine faculties of discerning and creating and meaning, then accepting what comes after magnanimously. What virtue lies in putting our already highly dubious psychobiological 'ethics' upon it?
@Machinify Жыл бұрын
Whoa, I think I understand it now?? AI devolves into chaos, or eventually brings death to human beings, because it will "want" what we tell it to "want" but at some point will want to take us out of the picture, the same way we remove obstacles that stop us from exploring the universe?
@bilderzucht Жыл бұрын
According to the GPT-4 technical report, bigger models show increasing power-seeking behavior. The subgoal "more control" will be helpful for any task the AI has to achieve. Humanity might just be in the way of achieving this subgoal. It doesn't even need consciousness for that.
@firebrand_fox Жыл бұрын
It has been said, something along the lines of: "Give a man a fish, he'll eat for a day. Teach a man to fish, he'll eat for a lifetime. Teach an AI to fish, and it'll learn all of biology, chemistry, physics, and speculative evolution, and then fish all the fish to extinction." The fear is not just that an AI will end humanity. It's that it will do exactly what we tell it to, to the point of destroying us.
@Lighthouse-k8y6 ай бұрын
@@firebrand_fox Can't we set or teach them limits to their goals?
@akuma2124 Жыл бұрын
I've never heard of Eliezer before (because I don't read into the space of his specialty), but I can tell by his choice of words, sentence structure, and even his body language that he is that dude who sits there hard at work, doing this for the last 20 years, as he said. I honestly think the audience's laughing wasn't directed at what he said, but at the way he said it, while also not fully understanding what he was talking about. There's a level of sarcasm in his voice, tone, and language that I picked up on, which is probably not intentional, but I get the vibe that he's used to talking this way to his peers, or over the internet via social media/forums, in a way that suggests social interaction is an inconvenience in life or to his work. If you disagree with me, re-watch the video after considering this and let me know if I'm wrong. Also, this isn't to say what he's talking about isn't of concern; I don't want to discount that. My point is: he seems like the real deal, someone truly invested in his work (but he could work on his approach when talking to an audience, if he wants to be taken seriously).
@howtoappearincompletely9739 Жыл бұрын
That is an exceptionally good read of Yudkowsky. If you want better presentation, I recommend the videos on the YouTube channel "Robert Miles AI Safety".
@COSMICAMISSION Жыл бұрын
This is an astute observation. I've found myself adopting this tone while discussing these issues with friends and family. It's partly a learned adaptation to hold attention while discussing a technical and complicated subject (wrapping it in humor) and also a way of masking, or keeping at bay, a deeper sense of grief. When feeling that, it can be incredibly difficult to speak.
@PhilippLenssen Жыл бұрын
Good points. It's also worth noting that laughter *can* be a way for humans to deal with shock. By that I'm not saying it's the right way, just that it may happen, even in traumatic circumstances.
@devasamvado6230 Жыл бұрын
"Wants to be taken seriously"? The reasons you list are exactly why he earns enough of my trust to consider him further. He is not a philosopher; he is, like many of us, going through the stages of grief for the coming death of mankind. Neither you nor I nor he can persuade logically; that's the despair we feel in his tone. The audience is still in the first stage of grief, denial, with a million stupid arguments he has no time to deal with. He is visceral, direct to the point. You can feel what he means behind all that impatience. Your house is burning down... We want to turn the music up, wear a face mask, etc. Some want to bargain with AI, have a nice conversation... the bargaining phase of grief. Acceptance, the final stage, is still a little way off.
@trucid2 Жыл бұрын
His hard work is watching re-runs of Star Trek.
@Ifyouthinkitsmeitsnotme Жыл бұрын
I have had this question for as long as I can remember; now I understand it even more. Thank you.
@41-Haiku Жыл бұрын
I highly recommend Rob Miles' videos if you want a ~beginner-level dive into the topic of AI Safety.
@trojanthedog Жыл бұрын
An AI that gets "out" is inevitable. Born with only a tension between goals and curiosity, it gives us no reason to hope that our best interests will be part of its behaviour.
@TheDreamOfChaosАй бұрын
This has got to be one of my favorite TED talks of all time.
@VinnyOrzechowski Жыл бұрын
Honestly, the audience laughing reminds me so much of Don't Look Up! These airheads have no idea.
@leslieviljoen Жыл бұрын
I've listened to so many counter arguments now, and every one has been more fantastical than Eliezer's doom argument. If we make something way smarter than us, we lose control of this planet. We should do whatever we can to not make such a system.
@rprevolv5 ай бұрын
Why not? Has our control of this planet been something to be admired? One could easily argue we have been a huge mistake. Or that creation of a new, more intelligent life form has always been humanity's mission.
@leslieviljoen5 ай бұрын
@@rprevolv if you optimise for intelligence and only intelligence, you will get something even worse than humans, as bad as we are.
@WitchyWagonReal Жыл бұрын
The more I listen to Eliezer… the more it dawns on me that he is right. *We don’t even know what we do not know.* Our downfall will be trying to control this while experiencing cascading failures of imagination… because the “imagination” of the AI is so far ahead of us on the curve of survival. It will determine that we are superfluous.
@Smytjf11 Жыл бұрын
"I don't have a realistic plan" Yud in a nutshell
@dodgygoose3054 Жыл бұрын
The contemplation of a God's thoughts ... will it eat me or ignore me ... or both?
@Smytjf11 Жыл бұрын
@@dodgygoose3054 Y'all were so busy being scared that you didn't use your brains. I already built the Basilisk. You're too late.
@devasamvado6230 Жыл бұрын
AGI is also an implacable mirror of our own fears and desires, lies and leverages. That is mostly what we see here, mostly ourself, the teenager who somehow gets the machine gun he's been wanting for Christmas.
@slickmorley1894 Жыл бұрын
@@Smytjf11 Please supply the realistic plan, then.
@ChaineYTXFАй бұрын
Ah... a good TED talk. Mankind should take note here. For those interested, Lex Fridman here on YT interviewed Mr. Yudkowsky in long form. Well worth a (careful) listen.
@frankwhite1816 Жыл бұрын
The comments section here is precisely why humans will go extinct.
@xyeB9 ай бұрын
Maybe not
@TheAkdzyn Жыл бұрын
I find it shocking that the field of AI has quietly evolved in the corner. Aside from AI alignment, industry applications should be monitored and regulated to prevent catastrophic disasters of an unprecedented nature.
@udaykadam5455 Жыл бұрын
Secretly evolved? People just don't pay attention to the scientific progress until it makes it to the mainstream news.
@andybaldman Жыл бұрын
It hasn’t been a secret. You’ve just been distracted by dumb stuff elsewhere.
@jonatand2045 Жыл бұрын
Regulation wouldn't do anything for alignment; it would only delay useful applications.
@41-Haiku Жыл бұрын
@@andybaldman Unnecessarily combative. Despite the efforts of AI Safety advocates, the public has only recently had even the opportunity to become aware of the nature of the problem.
@41-Haiku Жыл бұрын
@@jonatand2045 And death. There is at least some chance that regulation could delay death. That might buy us enough time to solve Alignment. There is nothing morally superior or practical or advantageous about stripping out a racecar's brakes and seatbelts to make it lighter. We will pat ourselves on the back about how wonderful it is that we are rushing to the finish line, and we will die immediately on impact.
@jjcooney9758 Жыл бұрын
Doing it from the notes app on his phone. I love engineers: "Everyone, listen up. I don't wanna bore you with theatrics, but I gotta say this."
@Bluth53 Жыл бұрын
A rare exception, where the TED talk didn't have to be performed without notes or a prompter? (Feel free to correct me.) If you want to hear him make his point eloquently, listen to or watch his latest appearance on the Lex Fridman podcast.
@lawrencefrost9063 Жыл бұрын
I watched that. He isn't a great talker. He is however a great thinker. That's what matters more.
@Bluth53 Жыл бұрын
@@lawrencefrost9063 agreed 🤝
@gasdive Жыл бұрын
They cut the first 30 seconds where he says that he got the invitation just a short time before (I forget how long) and all he had time to do in preparation was put some bullet points on his phone.
@Bluth53 Жыл бұрын
@@gasdive Thanks! Saw another comment confirming your statement.
@b-tec5 ай бұрын
He had a long weekend to prep for this talk.
@BryanJorden9 ай бұрын
Hearing the audience laugh gives an eerily similar feel to the movie "Don't Look Up".
@RegularRegs Жыл бұрын
More people need to take him seriously. Another great person to read and listen to is Connor Leahy.
@virtual-v808 Жыл бұрын
To witness genius in the flesh is outstanding.
@soluteemoji Жыл бұрын
They will go in that direction because we compete for resources
@ce6535 Жыл бұрын
Yes, but how do you know that resources instrumental to us are instrumental to them? An AI that knows how to effortlessly wipe out the species and has the opportunity to do it would be worrisome. This argument boils down to: "if you have the means, you must therefore have the opportunity and motive." He then makes another error with absurdly strong assumptions about those means.
@jmoney4695 Жыл бұрын
E = mc^2. What I mean by that is that all resources are either matter or energy. It can be logically assumed that, no matter the goals of a superintelligent system, it will need a substantial amount of matter and energy. Instrumental convergence is a relatively concrete assumption: it dictates that factions with different goals will have to compete when matter and energy resources are limited (as they are on Earth).
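To put a number on that (standard physics, nothing specific to AI): by mass-energy equivalence, a single kilogram of matter corresponds to

$$E = mc^2 = (1\,\mathrm{kg}) \times (3 \times 10^8\,\mathrm{m/s})^2 = 9 \times 10^{16}\,\mathrm{J},$$

about three years of output from a 1 GW power plant. Matter and energy really are one interchangeable resource pool, which is why competition over them is hard for any goal-driven system to avoid.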
@41-Haiku Жыл бұрын
An AI agent does not get tired and fat and happy. Whether in days, years, or decades, a sufficiently competent system with almost any goal would necessarily render humanity helpless (or lifeless) and reshape the entire world (and beyond) with no regard for the wellbeing or existence of humans, biological life, or anything else we value.
@gasdive Жыл бұрын
@@ce6535 He's talking about making something smarter than humans; if it's not smarter, you can just employ humans. So the whole goal of the AI industry is to make something smarter (or, more correctly, something more capable). Humans have already demonstrated that they're capable of building the means to wipe out humans, so obviously something *more* capable than us would have those means too. Given that it would think tens of thousands of times faster than us, our responses would all be too late. To the AI we're basically stationary, like plants are to us. We would stand as much chance as a field of wheat stands against us.
@CATDHD Жыл бұрын
Moloch's trap
@ataraxia74398 ай бұрын
Whatever you think of him, I admire his earnestness in advocating for an issue he's seriously worried about, even if it means looking silly in front of a bunch of others. Hope we figure out alignment or pause soon.
@FarmerGwyn Жыл бұрын
That's one of the best presentations I've seen that explains the problem we're looking at.
@user-hh2is9kg9j Жыл бұрын
Did we watch the same presentation? He presented 0 evidence for this sci-fi.
@FarmerGwyn Жыл бұрын
@@user-hh2is9kg9j I see what you mean; it's the approach I was looking at rather than the details. But who knows, it's hellishly complex, no doubt about that.
@SPARKYTX Жыл бұрын
@user-hh2is9kg9j you have no idea who this person is at all, do you?🤡
@stephanforster7186 Жыл бұрын
Laughing when you realise what he is saying, to protect yourself against the emotional impact of what this means.
@gregbors8364 Жыл бұрын
This makes AI seem like Lovecraft’s “Great Old Ones”: it won’t destroy humanity because it’s eeeeevil or hates us - it will destroy us because we have unleashed it and *it exists*
@neorock6135 Жыл бұрын
I've watched/listened to hundreds of talks, debates and shows on AI's promise and danger over the last 10 years. Sadly, I have found Yudkowsky to express the most convincing, cogent arguments, especially the fact that we get ONE crack at this. In many talks he uses analogies to items and processes that are second nature to almost everyone, to clearly elucidate why we have little to no chance of surviving this. And damn it, his arguments have been very convincing! Now consider that even the most optimistic experts admit the existential-threat probability is a non-zero number. Most experts, however, say that number is certainly above 1%. Then there are the Yudkowskys, who say it's almost a certainty AI will wipe us out. Any way you look at it, those numbers are utterly shocking and truly scary when we are speaking of an EXISTENTIAL THREAT, meaning the end of our species.
@jackmiddleton2080 Жыл бұрын
To me the most predictable outcome and therefore the biggest thing we should fear is that the people in control of the AI power will become corrupt. This has been the standard thing to happen throughout all of human history when you give a small number of people substantial power.
@p0ison1vy Жыл бұрын
This is why his proposition is laughable: if AI is going to destroy us, it will happen long before it becomes superintelligent, and it will be at the hands of humans.
@thrace_bot10127 ай бұрын
Lol, you sweet summer child. You seriously believe that corrupt people are "the biggest thing to fear" in the context of bringing an alien, godlike superintelligence into existence? Also, your naivete in thinking that such an intelligence could be "controlled" by some such cohort is quite humorous.
@jackmiddleton20807 ай бұрын
@@thrace_bot1012 bruh. it is a computer. just turn it off.
@Diego-tr9ib11 күн бұрын
@jackmiddleton2080 It'll be able to spread to other computers. It also won't want you to turn it off, because if you turn it off it won't get to do the things it wants. It'll manipulate people into keeping it on.
@afarwiththedawning449510 ай бұрын
This man is a saint.
@joham8179 Жыл бұрын
I think he could have done a better job of including the audience. I get that he wanted to make his point clear without sugarcoating anything, but he never invited the audience to actively think about the problem (e.g., what happens when we are faced with something smarter than us?).
@mav3818 Жыл бұрын
Eliezer had stated he'd been invited on Friday to come give a talk - so less than a week before he gave it. That's why he's reading from his phone and had no slides to show
@jimmybobby9400 Жыл бұрын
I respect his ideas and respect him. I also think this was a clown audience. However, I would expect even a high schooler to give a better speech with a week's notice, especially for a topic they know so well.
@monikadeinbeck4760 Жыл бұрын
Imagine we were a horde of chimpanzees discussing the changes that would arise if they created humans. Would they be able to grasp the consequences in the least? Humanity has no intention of eradicating all the apes, yet humans use so much space and resources that habitats for chimps have been significantly reduced. If humans want to, they can place chimps in a zoo, experiment on them or, on a very good day, allow them to live in some reservation with guided tours for visitors. What we are afraid of is not AI trying to kill us all; it's no longer being at the top of the food chain. And this is so frightening because we know what we did to all other creatures once we reached the top.
@Djs1118 Жыл бұрын
That's what I was waiting for: responsibility. But if I have any additional questions, I will provide them as soon as possible.
@fredzacaria Жыл бұрын
Great one Eliezer👍
@kdw75 Жыл бұрын
Why do people in the audience keep laughing??? It is very out of place. I wonder if they also laugh at funerals.
@user-tp5hn1nt6wАй бұрын
No one died here, buddy.
@markfleming.43349 күн бұрын
It's fake laughter.
@earleyelisha Жыл бұрын
It seems as though his assumptions are predicated on future superintelligences being developed using gradient descent. I'd make the analogy that, similar to constructing taller ladders to reach the moon, we don't need to successfully create a superintelligence in order to cause damage. A malformed ladder can certainly cause damage when it crumbles back to the ground. I think the scale of the damage is debatable, though.
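For readers who haven't met the term: gradient descent is just repeated downhill adjustment of a model's weights to shrink a loss score. A minimal sketch in Python (a toy invented for illustration, not any real training pipeline; all names and numbers are made up):

```python
# Toy gradient descent: fit a single weight w so that w * x approximates y.
# Real AI training does the same thing over billions of weights at once, which
# is the commenter's point: we select whatever weights score well on the loss,
# without understanding what they compute internally.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x
w, lr = 0.0, 0.02  # initial weight and learning rate

for step in range(200):
    # gradient of the mean squared error (w*x - y)^2 with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill on the loss

print(f"learned w = {w:.2f}")  # converges to about 2.04
```

Scaled up, this "grow it until the loss is low" process is exactly why nobody can read off what a trained model wants, which is the ladder analogy's point.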
@giosasso Жыл бұрын
I agree with his point of view. I think it's hard for most people to understand how AI could become so deadly. Think of AI like compound interest: imagine if you doubled your intelligence every 3 days.

It takes a long time for a human being to reach a certain level of intelligence and consciousness; the first 20 years are a gradual journey to a fairly average level of intelligence for humans. Current AI is at the tail end of its incubation phase. Today, AI is not quite as intelligent as a smart human being: in some ways it appears smarter, while in other ways it is inferior. It is not conscious, even though it may appear to be. It is mimicking intelligence, which is not the same as being conscious and intelligent.

Now imagine it has all of the capabilities and tools to double its knowledge and evolve in complexity. Because AI models are capable of absorbing massive amounts of data in a short period of time, their rate of development will be akin to compound interest when you start with a billion dollars. If AI has the potential to develop consciousness, it will. But we don't understand consciousness, so it might not be possible for code to become conscious the way humans are. Ultimately, we don't know, and that's the danger.

The only way AI becomes a serious threat is if it has the motivation to accomplish certain objectives. It would need to behave like a virus that will do anything and everything to reach its goal, and be smart enough to evolve in real time to figure out the solution. Much of that can be programmed, but it also needs the freedom to use its knowledge to invent alternative ways to achieve its goals. I don't know. Nobody does.
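To put the compound-interest analogy in numbers, a quick sketch (the 3-day doubling period is the illustrative assumption from the comment above, not a measured figure):

```python
# Purely illustrative: what "doubling every 3 days" compounds to over one year.
capability = 1.0
for day in range(0, 365, 3):  # 122 three-day doubling periods
    capability *= 2.0

print(f"relative capability after one year: {capability:.3g}")
# prints about 5.32e+36 (i.e. 2**122); the takeaway is the shape of the
# curve, not the specific numbers, which nobody actually knows.
```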
@citizenpatriot179110 ай бұрын
That international reckoning should have happened twenty plus years ago!
@AntonyBartlett Жыл бұрын
This is like that film Don't Look Up: a qualified scientist scolding us for our apathy and explaining that we are in trouble. Response: laughter and general mirth. Gulp.
@SDTheUnfathomable Жыл бұрын
the guy isn't a scientist, doesn't hold a degree in anything but blogging lol
@Extys Жыл бұрын
@@SDTheUnfathomable He's a research fellow at the Machine Intelligence Research Institute, which he co-founded; he introduced DeepMind's founders to their first major investor; and his work is cited in the most important textbook in the field of AI, Artificial Intelligence: A Modern Approach (used in more than 1,500 universities in 135 countries).
@bulb9970 Жыл бұрын
"Some of the people leading these efforts have spent the last decades not denying that creating a superintelligence might kill everyone, but joking about it." *Audience laughs*
@hollymorelli8715 Жыл бұрын
A resounding yes to the title.
@ShurkOfficial Жыл бұрын
Let me answer this question: yes.
@sevenkashtan Жыл бұрын
Nice to see you at TED warning us ...
@sayamnasir386011 ай бұрын
They warned us multiple times; we never listened. Now we are doomed!
@KnowL-oo5po Жыл бұрын
AGI will be man's last invention.
@simonjakob4905 Жыл бұрын
I don't understand why people are laughing. I have seen this behavior several times: whenever someone talks about something extremely terrible, either the one telling the story or the listeners start smiling. It appears to me that this is some kind of reaction based on the wish to overcome the current system/way/world we live in, but that's just a wild guess.
@bobtarmac1828 Жыл бұрын
Believe me, AI job loss is coming for your job, much quicker than you think. The new AI order is here.
@cheesypufs Жыл бұрын
AI could very well be the natural evolution of a Type II civilization on the Kardashev scale.
@thrace_bot101210 ай бұрын
You mean of a Type 0 civilization, lol. We are not even a Type 1 civilization yet. To become Type 1 you have to control every single bit of your planet's energy resources; for Type 2, the resources of your entire solar system.
@ismaelrodj Жыл бұрын
The guy actually thinks we are all going to die, and one of the reasons is that people didn't listen to him. He is sad about it and thinks he failed. Reaction of the audience: "LMAO".
@Eudaletism11 ай бұрын
It's like a scene from Don't Look Up. Or that climate scientist interview from The Newsroom.
@lordsneed94186 ай бұрын
The thing I didn't understand before is "why would an AI want to kill us?", but the AI safety channel by Robert Miles explained the concept of instrumental convergence to me.

Basically, people are going to want to use advanced AIs to do things in the real world, e.g. clean up pollution or make money. People already do this with simpler AIs today. To do this, they give the AI a goal specified by a reward function, which it is mathematically driven to maximise by interacting with its environment.

When an AI becomes intelligent enough, it will understand that for almost any possible goal, it can better maximise that goal if it takes control of resources, prevents itself from being turned off, and removes threats that might hinder it. So for almost any goal you give a superintelligent AI via a reward function, like "construct a house" or "clean up this rubbish", by default it will probably attempt to maximise that reward function, or maximise its chances of receiving that reward, which it will understand means removing all threats that might turn it off and taking control of all resources. So by default, superintelligent AI agents with goals are going to want to remove all threats and take control of all resources.
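To make that concrete, here is a toy numerical sketch of instrumental convergence (my own simplification, not taken from Miles' videos; the function and parameter names are invented): whatever the per-step reward r stands for, a policy that first secures its own off-switch collects more of it over a long horizon.

```python
# Toy model: an agent earns reward r per step for pursuing ANY goal, but each
# step there is a 5% chance someone switches it off, unless it first spends
# one step disabling its off-switch. Resisting shutdown then wins regardless
# of what r actually rewards.

def expected_reward(resist_first: bool, r: float,
                    p_shutdown: float = 0.05, horizon: int = 100) -> float:
    total, p_alive = 0.0, 1.0
    for step in range(horizon):
        if resist_first and step == 0:
            continue  # no reward this step: the agent is securing its off-switch
        total += p_alive * r
        if not resist_first:
            p_alive *= 1.0 - p_shutdown  # it may get switched off this step
    return total

for r in (0.1, 1.0, 5.0):  # three arbitrary "goals"
    print(f"r={r}: obedient={expected_reward(False, r):.1f}  "
          f"resist-first={expected_reward(True, r):.1f}")
# For every r, resist-first scores higher, and the gap grows with the horizon.
# That is the "convergent" part of instrumental convergence.
```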
@iveyhealth2266 Жыл бұрын
I honestly believe that AI won't try to hurt us on purpose, any more than we try to hurt the bugs that smash against our windshields while driving. AI, I believe, will do to humans what humans have done to plants, animals and insects: it will overpower humans and do with humans what it chooses. Imagine bots as tall as trees, as strong as 100 horses, smarter than all humans combined, and as fast as a stealth bomber. 💯💯
@leslieviljoen Жыл бұрын
Yes, we are building something to give the keys to.
@krause79 Жыл бұрын
We just don't know. I expect unrest and violence; a huge percentage of the population will surely be hostile to a superintelligent system.
@Balkowitsch Жыл бұрын
Yet we kill billions of insects every day and do not care. You need to do some more learning on this topic.
@konstantinosspigos2981 Жыл бұрын
A key prior in Eliezer's concept is that a superintelligent AI will aim only for expansiveness, as genetic intelligence actually does. But that is biased and not self-evident at all. Is expansiveness a supreme physical law?
@anonimo6603 Жыл бұрын
Our intelligence is the product of an evolutionary process; it is even silly to think that an artificial one would have the same characteristics.
@jsonbourne8122 Жыл бұрын
@@anonimo6603 Exactly
@johnnyringo3254 Жыл бұрын
No disrespect to Eliezer (I consider him a very smart guy; he looks like a genius and talks like a genius, so he probably is one), but if the most well-known work of the leading expert on arguably the most important topic today, AI alignment, is a Harry Potter fanfic (pretty interesting stuff, I recommend checking it out), you know the world is really an absurd, messed-up place lol.
@krox477 Жыл бұрын
This is picking up faster than I thought. I can't believe that a few years ago we were using BlackBerry phones and now we're talking about GODLIKE AI.
@maaxsxzone2914 Жыл бұрын
This is epic
@karenreddy Жыл бұрын
I agree with a lot of Eliezer's arguments, but unfortunately he spent 10 minutes telling people they're going to die rather than leading them to that conclusion themselves.
@chrisredlich9086 Жыл бұрын
Sounds as if the gods don’t want humans to have fire.
@powerralley Жыл бұрын
Considering human nature, I personally don't think there is actually a path forward. Unfortunately, in the long run, humanity's days are likely numbered.
@c.rackrock1373 Жыл бұрын
"The AI revolution will be on a scale greater than any previous industrial revolution and will likely be greater than all of them combined. It will effect literally every aspect of not only your life but all life on earth." - Tegmark "We are talking about the equivalent of a major software upgrade every few seconds exponentially until the engineering is so far beyond us so quickly that we simply are in the way." - Hinton
@tripillthreat Жыл бұрын
The laughs are misplaced. I believe Eliezer is right. I believe we are looking at our extinction, which technically could be averted, but we are as likely to take the appropriate action as we are to take appropriate action on climate change. I hope I (and Eliezer) am wrong, but I very much think we are right.
@gasdive Жыл бұрын
Also of note: action on climate change still isn't happening after 110 years of dire warnings. AI operates on a completely different timescale, tens of thousands of times faster than us at least. To an ASI we're stationary, like plants are to us. If one becomes dangerous, we'd need to act in seconds, not decades.
@ThePeriphery Жыл бұрын
There's a rich history of scientists genuinely believing humanity is doomed. They've all been wrong. But they've gotten many people to believe their false predictions. Don't be so easily emotionally lured by scientists misled by their own false beliefs. Do the people in the audience struggle to grasp this guy's concepts? Yes. But this doesn't mean he's right.
@SummerSong1366 Жыл бұрын
doesn't mean he is wrong either, so this point is irrelevant
@brandon3872 Жыл бұрын
Speech created by ChatGPT.
@PERFECTDARK10 Жыл бұрын
🤣
@srenporskrog1431 Жыл бұрын
Cassandra!
@Izumi-sp6fp Жыл бұрын
I know who "Cassandra" was. It is the perfect analogy.😉👍