Why Asimov's Laws of Robotics Don't Work - Computerphile

856,686 views

Computerphile


8 years ago

Audible Free Book: www.audible.com/computerphile
Three or four laws to make robots and AI safe - should be simple, right? Rob Miles on why these simple laws are so complicated.
Silicon Brain: 1,000,000 ARM Cores: • Silicon Brain: 1,000,0...
Chip & PIN Fraud: • Chip & PIN Fraud Expla...
AI Worst Case Scenario - Deadly Truth of AI: • Deadly Truth of Genera...
The Singularity & Friendly AI: • The Singularity & Frie...
AI Self Improvement: • AI Self Improvement - ...
Thanks to Nottingham Hackspace for the location
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments: 2,700
@dalton5229 7 years ago
I didn't realize that people took Asimov's Three Laws seriously, considering that nearly every work they're featured in involves them going wrong.
@TiagoTiagoT 7 years ago
I've seen it happen a lot in the comments section of videos about the dangers of the Singularity.
@BrianBlock 7 years ago
For the layman, they are often quoted, used as a logical argument, and taken very seriously :(.
@ToveriJuri 7 years ago
+Brian Block. That's sad...
@Weebusaurus 6 years ago
people don't read books, they just quote them to seem smart
@oneofmanyparadoxfans5447 6 years ago
What book is that from?
@rubenhayk5514 5 years ago
Asimov: you can't control robots with three simple laws. Everyone: yes, we will use three simple laws, got it.
@allenholloway5109 4 years ago
Exactly what I thought of when everyone suddenly was throwing them around as if they were the ultimate defense against rogue AI.
@KirillTheBeast 4 years ago
Well, if I had to guess, I'd say most of the people who think of Asimov's laws when discussing AI either have never read any of his work or they really didn't get the point. Edit: grammar, for duck's sake... twice xD
@ValensBellator 4 years ago
It’s funny as every derivative story I can think of that adapts something akin to his laws also shares the same conclusion... not sure why people would want to rely on fictional laws that never even work within said fiction lol
@Lorkisen 4 years ago
@@ValensBellator Likely the humans in question are unable to circumvent the laws themselves and thus view them as inviolable.
@ericpraline 4 years ago
It would make an excellent headline for one those adverts from dubious websites: *THREE SIMPLE LAWS FOR AI YOU WON‘T BELIEVE!!!!!*
@txqea9817 4 years ago
"if Goingtoturnevil don't"
@vincent-ls9lz 4 years ago
bruh moment
@ironcito1101 4 years ago
const evil = false;
@yashvangala 4 years ago
error: line 1: Goingtoturnevil is not defined
@hanzofactory 4 years ago
@@yashvangala Laughs in Python
@sithdestroya 4 years ago
Y'all just fixed it! Humanity= Saved
@KirilStanoev 7 years ago
"You are an AI developer. You did not sign up for this". Brilliant quote!!!
@TakaG 7 years ago
There isn't even a need for an international consensus. Even nowadays our technology is adapted to the laws of different countries and territories. AI could either be aware of which territory it is in, or it could be illegal to transport it between regions. The real problem with AI would be hackers bypassing the fundamental laws, or illegal manufacturers not implementing them in the first place.
@Narutoxcosplayxfreak 7 years ago
The three laws CANNOT be programmed into a machine effectively, because the definitions of the words in those laws are far too subjective. The rules you make for a general intelligence have to be ironclad, with no subjective wiggle room for the AI to play with. Operate under the assumption that you are building something that is truly neutral at best and chaotic evil at worst (if you gave it no rules at all). You can't give it some subjective rule about following the law, because it WILL find edge cases where the law has no ruling, or where the law doesn't stop it from murdering you to make you a cup of tea because you told it to make tea, or any number of weird ethical edge cases you can never hope to prepare for.
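The comment above can be made concrete with a toy sketch (purely illustrative; `is_human`, `would_harm`, and `first_law_permits` are hypothetical names, not anything from the video): transcribing the First Law into code is the trivial part, while every predicate it depends on bottoms out in an unsolved definitional problem.

```python
def is_human(entity):
    # No one can actually write this predicate: embryos, brain emulations,
    # chimeras, and future genetic engineering are all unresolved edge cases.
    raise NotImplementedError("requires a settled definition of 'human'")

def would_harm(action, entity):
    # "Harm" is just as unsettled: physical, psychological, economic?
    raise NotImplementedError("requires a settled definition of 'harm'")

def first_law_permits(action, world):
    # First Law: a robot may not injure a human being or, through inaction,
    # allow a human being to come to harm. The logic is one line; the two
    # predicates it calls are the entire open problem.
    return not any(would_harm(action, e) for e in world if is_human(e))
```

Any attempt to evaluate the law immediately hits the missing definitions, which is the point the comment is making.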
@panda4247 7 years ago
khoi: BUT that is exactly the point... How can an AI developer make unambiguous definitions for the AI to work with, when humanity itself is not sure (there are debates, arguments, etc.)? With "given the circumstances that somehow these terms are approved internationally by rational philosophers and scientific communities" you are basically throwing the main problem out of the window. It's like if I have a broken car and say, "I cannot drive this car, it's broken," and you reply: "But given the car was fixed, you could drive it. So your argument is invalid. Now drive the car!" And now you see, your argument is invalid :) EDIT: ergo, when one is an AI developer, he should point out these moral issues, but nobody gave him permission to decide on them. I mean, I don't want a world where every stance on a moral issue is dictated by whatever Microsoft or Google (or whoever) decided when coding their AI. If not for anything else, then for the fact that they could easily make the (from others' point of view) morally wrong but easier-to-implement decision... That is the thing: when you are messing with development of advanced AI, you want to be damn sure that it does not take over the world (although it will have all the stamps).
@quintoncraig9299 6 years ago
Tools. AI superintelligence is pointless at this stage in civilization: if we succeed, we get replaced; if we fail, we try again (with something else, when we have enough evidence) until we succeed. If you want an example of what happens when this methodology is used incorrectly, just watch MatPat's "Don't Touch Anything" stream (probably the first one).
@quintoncraig9299 6 years ago
He even cites the definition of insanity, which I find humorous, since that is the whole point of science: to test something repeatedly in order to determine whether or not the successes were flukes.
@LordOfNihil 5 years ago
the laws exist to create a paradox around which to construct a narrative.
@benjaminmiller3620 5 years ago
Exactly, it's obvious to people who have read Asimov's works. The problem is that most people (if they know Asimov at all) only know of the three laws.
@mirradric 5 years ago
In other words the laws are defective (do not work) by design.
@travcollier 5 years ago
Not just narrative though... Asimov was one of the first to think about AI safety seriously. The three laws aren't a proposed solution, but instead a thought problem to show the complexity required by any actual solution. Robert sounds way too dismissive of Asimov here... He should be criticizing the pop culture misconception of Asimov but praising the actual stories (which are pretty much the foundational thought problems of his field.)
@ekki1993 4 years ago
@@travcollier The stories very much explain more than Robert does in the video, actually. Asimov had his limited knowledge due to the time he lived in, but he was clearly aware of a lot of problems that are very topical right now.
@zopeck 2 years ago
Brilliant! That was exactly the purpose. It's the origin of strange-to-solve stories like the one where a robot on Mercury goes round and round in circles because of a conflict between the Second and Third Laws in its physical environment: the robot, trying to obey a human order to retrieve an artifact lying on the very hot surface, learns its own existence is in danger. The Second Law (obey human orders) should override the Third (protect its own existence), but the order was given weakly, so the pull of the Third Law leaves the robot in equilibrium, unable to execute the order or to get out of harm's way.
@arik_dev 5 years ago
He mostly focused on the difficulty of defining "Human", but I think it's much much more difficult to define "Harm", the other word he mentioned. Some of the edge cases of what can be considered human could be tricky, but what constitutes harm? If I smoke too much, is that harmful? Will an AI be obligated to restrain me from smoking? Or, driving? By driving, I increase the probability that I or others will die by my action. Is that harm? What about poor workplace conditions? What about insults, does psychological harm count as harm? I think the difficulties of defining "Harm" are even more illustrative of the problem that he's getting at.
@Ausecko1 5 years ago
I don't think he's actually read all of the stories either, because all of the issues he mentioned were explained/solved in the stories. As for the harm issue, Asimov explains that in the story about the superbrains (the one where they control humanity to the point where everybody gets moved around to prevent riots, can't remember the name), and Daneel's explanation near the end of the Foundation arc deals with some of the limitations.
@insanezombieman753 5 years ago
That's what I thought as well. It's hard to define what a human is, but you can at least come up with a physiological definition that accounts for 99% of the cases and then add in edge cases, which will still not be enough, but at least you're getting somewhere. But when it comes to "harm", like you said, there are just way too many possibilities and trade-offs that come into the picture, posing not just ethical but also philosophical questions.
@davidmurray3542 4 years ago
Equally, the follow-on from that: "through inaction, allow a human to come to harm". How much responsibility does this AI have? To what lengths will it go to prevent "harm"? And what if the first law conflicts with itself, e.g. if the only possible action to prevent harm to one human is to actively harm another human?
@zyaicob 4 years ago
As Dave from Boyinaband so eloquently put it, "if you gave it a priority of keeping humans happy, it would probably just pump our brains full of dopamine, keeping us euphorically happy until we *died"*
@moralesriveraomar233 4 years ago
In one of the stories the AI decided that psychological harm (as in having your reputation ruined) is worse than bodily harm (as in going to prison), and that's only an AI programmed to act as a style corrector for academic papers
@DVSPress 8 years ago
"Optimized for story writing." I can't express how much I love that sentiment.
@shenglong9818 3 years ago
Found a David Stewart comment in the wild. Nice. Enjoy your channel and books.
@ThePCguy17 4 years ago
The problem with Asimov's laws is probably that they're just obscure enough that people don't think they're well known, but they're also not well known enough for people to remember the context they appeared in and how they always failed.
@fisharepeopletoo9653 4 years ago
I think the movie I, Robot also helps a lot. People who have never even heard of Asimov know the three laws from that movie, and even though, much like the books, the movie is about the three laws going wrong and the hero, a robot who doesn't have the three laws, saving the day, people still spout the three laws as if they work lol
@beachcomber2008 4 years ago
There was a context. Obviously a nasty one, and probably because there had been previous tragedies where designers agreed that those software formulations needed to be created, no matter how great the processing demand had to be. And these laws were the ones in operation that created those disturbing consequences. This doesn't seem simplistic to me. Ah, the confidence of youth.
@1ucasvb 8 years ago
Yes, that was Asimov's intention all along. The whole point of the laws of robotics in the books is that they are incomplete and cause logical and ethical contradictions. All the stories revolve around this. This is worth emphasizing, as most people seem to think Asimov proposed them as serious safeguards. The comments in the beginning of the video illustrate this misconception well. Thanks for bringing this up, Rob!
@USSMariner 8 years ago
SOMEONE WHO HAS ACTUALLY READ HIS BOOKS! WE GOT A READER!
@Pipe0481 8 years ago
+Mariner1712 Yaaay
@TimeofMinecraftMods 8 years ago
+1ucasvb but in the stories it gets resolved most of the time, in reality it won't.
@RobertMilesAI 8 years ago
+1ucasvb I agree with this completely, if that wasn't clear from the video. I'm not knocking Asimov at all, just people who think the laws are at all useful as an approach to the AI Value Alignment/Control Problem
@Jacquobite 8 years ago
+1ucasvb He knows this he even mentioned exactly what you said in your comment.
@AliJardz 8 years ago
I kinda wish this video just kept going.
@paxpacis2 8 years ago
+Ali Jardz Me too
@cacksm0ker 8 years ago
+Ali Jardz In 2045 an artificial superintelligence reads this comment, tracks down Ali Jardz, and forces him to watch this video on loop whilst hooked up to a Clockwork Orange-style forced viewing machine. When calculating whether or not this is ethical, the superintelligence will decide that yes, it is. It does not cause harm because Ali Jardz specifically wanted this, and all subsequent protests are irrelevant.
@beachcomber2008 8 years ago
+cacksm0ker He only kind of wished it. The SI might consider that before deciding. Why would subsequent protests be irrelevant even if he really wished it? Bertrand Russell's work might be a good AI starting point for this subject. I disagree with the presenter's assumption that there can be no fuzziness in the programming, while accepting that a 100% solution would be an impossible task. But a 99.999% solution* would probably be acceptable, wouldn't it? * or an optional number of sig figs...
@George4943 8 years ago
+cacksm0ker But, of course, humans, being human, change their minds. The ASI would have to know that later wishes supersede prior ones. Your ASI's program is what? While functional if all is not ethical do something else?
@MidnightSt 8 years ago
+Ali Jardz Then you might be interested in playing Soma, or at least watching an LP of it from someone who's able and willing to think about these things ;)
@Sewblon 8 years ago
So the problem of ensuring that technology only acts in humanity's best interests isn't between human and technology, but between human and self. We cannot properly articulate what kind of world we actually want to live in in a way that everyone agrees with. So no one can write a computer program that gets us there automatically.
@chinggis_khagan 8 years ago
Exactly. If we could all agree on what humanity's best interests were, there would be no such thing as politics!
@stensoft 8 years ago
It is possible, but it's way more complicated than writing several imperative laws (after all, if it were so simple, why would the Universal Declaration of Human Rights need to be any longer?). You need to employ fuzzy logic and basically programme how the judicial system works. But it is possible, because you can view human society as a (fuzzy) machine and you can emulate it.
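The fuzzy-logic suggestion can be sketched in a few lines (a toy illustration; the membership values below are invented for the example): instead of hard yes/no answers, every judgement becomes a degree between 0 and 1, which mostly relocates the problem into choosing those degrees.

```python
def fuzzy_and(a, b):
    # One common fuzzy conjunction: the minimum of the two truth degrees.
    return min(a, b)

def fuzzy_or(a, b):
    # The matching disjunction: the maximum of the two truth degrees.
    return max(a, b)

# Degrees of membership instead of booleans (numbers are invented):
human_likeness = 0.7   # e.g. an uploaded mind: partly "human"?
harm_severity = 0.4    # e.g. psychological distress: partly "harm"?

# How strongly would the action "violate" a no-harm rule?
violation = fuzzy_and(human_likeness, harm_severity)
```

The machinery is simple; the contested part is where numbers like 0.7 and 0.4 come from, which is the same definitional dispute in a different notation.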
@RicardoMorenoAlmeida 7 years ago
If we're going to simulate society and forget the individual, then we're into psycho-history territory, and I suggest you read Asimov's Foundation books for problems with THAT....
@HeavyMetalMouse 5 years ago
(I know this is a bit of thread necromancy, but...) It's worse than that, even; you aren't even trying to get all the humans to agree with your definitions, you're actually just trying to get a well-defined description that doesn't create unintended consequences that *you* a single individual, don't want. Ignore the fact that nobody else might agree with you, just getting your own, personal value function down in some logical, consistent way that doesn't create counter-intuitive conclusions or unintended undesirable results, merely by your *own* definitions, is a herculean task. Forget trying to solve 'ethics of humanity', just solving 'ethics of this one specific human' is a virtually intractable task.
@ionlymadethistoleavecoment1723 5 years ago
Program in libertarianism.
@AstroTibs 4 years ago
"[The laws are] optimized for story writing" spoken like a true programmer
@bigflamarang 8 years ago
This brings to mind the Bertrand Russell quote in Nick Bostroms's book. _"Everything is vague to a degree you do not realize till you have tried to make it precise."_
@shanedk 8 years ago
In fact, Asimov's whole point in writing I, Robot was to show the problem with these laws (and therefore the futility in creating one-size-fits-all rules to apply in all cases).
@shanedk 8 years ago
It was wrong in all sorts of ways. One robot ended up self-destructing because he was in a situation where anything he did--including nothing--would have resulted in harm to a human.
@PvblivsAelivs 8 years ago
+Shane Killian The book "I, Robot" is actually a collection of previously written short stories strung together as a somewhat coherent narrative.
@TheThreatenedSwan 8 years ago
That's what he said, but even people who say they've read his stories still act like they're a thing that should be taken seriously
@TheThreatenedSwan 8 years ago
jbmcb That's the current stance, but they don't (and won't) actually implement any rules
@CaseyVan 6 years ago
Also the problem will still exist, so the developers had better address it.
@DeathBringer769 4 years ago
This was sort of Asimov's point in the first place if you actually go back and read his original stories instead of the modern remakes that mistakenly think the rules were meant to be "perfect." He always designed them as flawed in the first place, and the stories were commentary on how you *can't* have a "perfect law of robotics" or anything, as well as pondering the nature of existence/what it means to be sentient/why should that new "life" have any less value than biological life/etc.
@nathanjora7627 4 years ago
Greig91 Except there were stories in which the AI didn't break at all and was entirely fine; the only thing that went wrong was the humans. In fact, all of you keep going on about how the laws were broken and that was the point and whatnot, yet when I read the introductions to Asimov's works, what I find is that an important part of his work was meant to point out that humans, not robots or their laws, were the problem. He wanted to tear down this primitive « fear of the Frankenstein monster » we all have, and for this (partially at least) made laws that forced robots not to harm us, and showed the ridiculousness of our behavior by showing these laws working perfectly without that stopping people from being fearful of robots. For example, there was this story where each continent is co-ruled by a human and an AI, and someone ends up thinking or fearing that something is going wrong, so he visits each continent to make sure that everything is right and to ask questions to better understand whether it'd be possible for things to go wrong or not. It turns out nothing was going wrong, and it's not even really possible. In another one, a robot created by a retired roboticist to take his place ends up running for an official position without people knowing he is a robot. One of the morals of the story is literally: a normal robot's morality is the same as the morality of the best of humans. It's also one of the stories where, before or after that, Asimov discusses the difficulty of defining humans and whatnot, but still, the laws aren't broken in it; they are at best/worst unrealistic. And I could go on and on, but you get the idea, I hope.
@mrosskne 1 year ago
@@nathanjora7627 and there are plenty of stories that show how the laws are insufficient or lead to unintended consequences, just as the video states.
@nathanjora7627 1 year ago
@@mrosskne Sure, they do. My point is more that the books were rarely about the laws going wrong, but rather about things going wrong due to humans while the laws worked just fine, because Asimov, unlike many other writers, wasn't going at it from the angle of « what if robots went wrong » but « what if things went wrong because humans are bad at their jobs, despite the laws working just fine ». Usually things go wrong despite the robots, or because humans did something to the robots, not because the laws themselves were flawed.
@DJCallidus 5 years ago
A story about robot necromancy sounds kind of cool though. 🤔🤖☠️
@HadleyCanine 5 years ago
Sounds like the game SOMA. And yes, it is /very/ interesting.
@Jet-Pack 8 years ago
"I didn't sign up for this" - made my day
@TheOnlyTominator 6 years ago
That was the pay off!
@KunwarPratapSingh41951 5 years ago
One day they will sign up for this... fictions are made to turn into reality. It's epic to imagine the intersection of philosophy and maths. I may not be sure, but at least I can hope; that's all we really do as humans. One day we will get bored of developing AI as a giant efficient optimization function and we will start thinking about making it conscious. I am sure, 'cause we are hungry.
@MasreMe 8 years ago
Does psychological harm count as harm? If so, by destroying someone's house, or just slightly altering it, you would harm them.
@littlebigphil 8 years ago
+Masre Super I'm pretty sure almost everyone agrees that is a form of harm.
@Maarethyu 8 years ago
+Masre Super In Isaac Asimov's books, there is a robot that lies every time you ask something, because the truthful answer could psychologically hurt you.
@ImprovedTruth 8 years ago
+Masre Super +Maarethyu It had unexplained psychic powers and told everyone what they wanted to hear to avoid harming them.
@axiezimmah 8 years ago
+littlebigphil but where do you draw the line? And how do you know if someone is emotionally harmed by your action or inaction? Some people might be harmed by their goldfish dying, others may not care at all. Etc. What if the wishes of one person conflict with the wishes of another person, who does the robot choose to harm?
@axiezimmah 8 years ago
+Maarethyu some parents do this too (bad parenting yes but it happens a lot)
@salsamancer 5 years ago
The book "I Robot" was full of stories about how the "laws" don't work and yet dummies keep parroting them like they are a blueprint for AI
@DisKorruptd 4 years ago
In the film "I, Robot" the laws did work; the reason things went wrong was that the new bots had a second core which lacked the laws.
@hexzyle 4 years ago
@@DisKorruptd The film was a butchering of the original stories
@saoirsedeltufo7436 4 years ago
@@hexzyle The film was deliberately loosely based on Asimov's universe, it was never meant to be a recreation, it was just a fun piece of sci fi action
@Laughing_Chinaman 3 years ago
@@saoirsedeltufo7436 yeah and star wars is loosely based on pride and prejudice
@jlbacktous9285 3 years ago
@@saoirsedeltufo7436 the film is based on a book called "Isaac Asimov's Caliban" which was not written by Asimov
@fortytwo6257 8 years ago
Asimov didn't intend for them to work, you said it yourself--the 3 laws of robotics go wrong
@9308323 4 years ago
Yet people kept quoting it as if it's not the case.
@KingBobXVI 4 years ago
@@9308323 - right, but I feel like that should be emphasized in a video about how they don't work. The fact that they were never intended to work and the stories about them were about their flaws is important to explaining their purpose as a narrative framing device.
@9308323 4 years ago
​@@KingBobXVI Not really, that's not the point. The people saying we should just follow the 3 laws have never read the books, nor any sci-fi in their entire life for that matter, and people who have read those stories would already know about it. He already said that those books' main purpose is to tell a story, not to be factual, and I believe that's enough. There's no need to dwell on why the laws are there in the story and what purpose they serve, but rather on why they won't work.
@Lawofimprobability 4 years ago
I think he initially believed they could work but then quickly started seeing the holes in them. By that point, the dramatic potential was great enough that he could churn out stories easily using the "three laws" as a way to debate harm, control, and freedom.
@miguelbranquinho7235 4 days ago
Asimov did intend them to work; he was simply exploring the cases in which they're stretched to their limits.
@inthefade 8 years ago
Even defining "death" is a moral issue in medical sciences, right now.
@DampeS8N 8 years ago
Harm is even worse. Violating ownership is harm so even if the table isn't human, destroying it arbitrarily harms people. Destroying things in general harms people. There is very little that can be done that doesn't harm someone somewhere. Using a Styrofoam cup harms people, creating and using plastic harms people. The very construction of the robots would harm people. Want a self driving car? Oops, cars use energy and all current - and probably future - forms of energy production harm people.
@samus543654 8 years ago
+William Brall That would fall under the zeroth law, not the first law. That law is ambiguous because the robot must consider humanity as a whole to make a decision. In the last book of the Foundation, Asimov discusses the zeroth law and comes to the conclusion that a human is a physical object, so the first law can work, but humanity as a whole is a concept, and it's impossible to evaluate the impact on a concept.
@DampeS8N 8 years ago
Hastaroth I didn't specify any of the given laws for anything I said.
@samus543654 8 years ago
Deltaexio The robot would stop the person killing B in the least damaging manner for A
8 years ago
+William Brall Yeah, "harm" is probably the worst one. But "human" is also a very funny and potentially tragic one... Let's say you define human... Genetically? So uh... Number of chromosomes? Oh, jeez, you just made Down syndrome patients non-human. Similarity to some pattern? Suddenly some humans fall outside the pattern and don't get recognised as humans. And... How many cells of them count as a human? Are the skin and mucosal cells I shed human? The bacteria in my gut? I need them to live, but they're not intrinsically human... Going back to "harm", even IF we could somehow define it in appropriate terms, some definitions wouldn't prevent the AI basically trapping us all while we sleep or something like that, and putting us into a trance of eternal "happiness", since they relied too much on "perception of harm", whereas other definitions would just (like you said) render it completely unable to do ANYTHING AT ALL because everything is IN SOME WAY harming someone... It is pretty frustrating, but it is quite the issue indeed.
@ColinRichardson 8 years ago
Someone is falling off a cliff; to stop the person from falling to their death, you need to grab them, but due to their velocity doing so may hurt their arm/ribs/etc. The "do no harm" is cancelled out by "allow a human to come to harm". How do you do the maths on such a thing? Okay, hurt arm vs death is easy. But what about hurt arm vs hurt leg? Which does the robot do: allow a human to hurt their leg, or pull them out of harm's way but hurt their arm in the process (assuming there is no safe, zero-harm way to save them)? Is all harm equal? How do you define that? To a computer programmer, losing a leg is preferable to losing an arm. To a runner, losing a leg is worse than losing an arm. (Okay, that one was a stretch, but you get the idea.)
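The "maths" the comment asks about can be made concrete with a toy utility table (all names and weights here are invented for illustration): the comparison itself is a one-liner, and the entire difficulty lives in who picks the weights.

```python
# Hypothetical harm weights -- picking these numbers IS the unsolved problem:
# any assignment encodes someone's contested moral judgement.
HARM_WEIGHTS = {"none": 0, "hurt arm": 2, "hurt leg": 3, "death": 1000}

def least_harmful(options):
    # options maps each available action to the harm it would cause;
    # return the action whose harm has the smallest weight.
    return min(options, key=lambda action: HARM_WEIGHTS[options[action]])

# "Hurt arm vs. death" is easy under any sane weighting:
choice = least_harmful({"grab": "hurt arm", "do nothing": "death"})

# "Hurt arm vs. hurt leg" is decided entirely by the weights someone chose;
# a programmer and a runner would rank these two outcomes differently.
close_call = least_harmful({"grab": "hurt arm", "wait": "hurt leg"})
```

Swap the 2 and the 3 and the robot's "correct" action flips, which is exactly the point: the code is trivial, the value table is not.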
@TheEulerID 5 years ago
Asimov was not a fool, and these are clearly ethical rules, and as such are in the field of moral philosophy. It's blindingly clear that they aren't design rules and they rather do point to the problem of the inherent ambiguity of morality and ethical standards which always have subjective elements. However, human beings have to deal with these issues all the time. Ethical standards are embedded into all sorts of social systems in human society, either implicitly or even explicitly in the form of secular, professional and religious laws and rules. So the conundrum for any artificial autonomous being would be real. To me this points out the chasm there is between the technological state of what we call Artificial Intelligence, that is based on algorithms or statistical analysis and what we might call human intelligence (not that the psychologists have done much better). Asimov got round this by dumping the entire complexity into his "positronic brains" and thereby bypassed this. In any event, there are real places coming up where human morality/ethical systems are getting bound up with AI systems. Consider the thought experiments that are currently doing the rounds over self-driving cars and whether they will be programmed to protect their occupants first over, say, a single pedestrian. As we can't even come to an agreed human point on such things (should we save the larger number of people in the vehicle or the innocent pedestrian that had no choice about the group choosing to travel in a potential death-dealing machine), then even this isn't a solvable in algorithmic terms. It sits in a different domain, and not one AI is even close to being able to resolve. The language adopted in the video is all that of computer science and mathematics. The definition of hard boundaries for what is a "human" is a case in point. 
That's not how human intelligence appears to work, and I recall struggling many years back with expert systems and attempting to encode into rigorous logic the rather more human-centred conceptualisation used by human experts. Mostly, when turned into logic, it only dealt with closed problems of an almost trivial nature.
@thorjelly 5 years ago
wow, at the end there he literally described the plot of SOMA.
@DeconvertedMan 8 years ago
A robot can't use a hypodermic needle to save a life, because that is "harm" - but that "harm" is needed to keep the human from dying...
@abonynge 8 years ago
+Deconverted Man Then don't expect your AI to replace humans in every facet of life.
@DeconvertedMan 8 years ago
***** yeah then we are screwed. :D
@SuzakuX 8 years ago
+Deconverted Man The laws of robotics also state that a human cannot be allowed to come to harm through inaction, and death would be of greater harm than using a hypodermic needle. However, taking that further into Matrix territory, perfectly simulating reality for a human using cybernetics may be less harmful than forcibly augmenting them with cybernetics in order to inject them into said simulation. Even further into that rabbit hole, it may be less harmful to humans to simulate their brain patterns in virtual reality than it is to allow them to have physical bodies which are subject to decay. Depending on what you think of as "humanity," of course.
@benhook1013 5 years ago
Is death harm? Would they start putting us all on assisted living machines?
@marcushendriksen8415 5 years ago
Daneel could, I bet. Or Giskard.
@hubertblastinoff9001 4 years ago
Gets the definition of "death" slightly wrong - "I've made necromancer robots by mistake"
@Cyberlisk 5 years ago
An AI strictly bound to Asimov's laws wouldn't even attempt CPR in the first place, no matter how you define human and death, because that process usually injures the patient in some way.
@ClickBeetleTV 5 years ago
It would probably permanently lock up, yeah
@keenansmith82 4 years ago
Temporary harm, such as a needle puncture, is of a lesser magnitude than permanent harm, such as death. Even in I, Robot there were times when a robot bruised a human to save their life.
@jagnestormskull3178 4 years ago
Zeroeth law - the patient coming to harm through inaction is a higher priority than the patient coming to injury through action. Therefore, the robot would do CPR.
@alatan2064 1 year ago
@@jagnestormskull3178 You would get an AI that cripples everyone trying to prevent minimal chances of people dying by obscure accidents.
@Eserchie 1 year ago
This came up in at least one of Asimov's books. Most robots couldn't handle the ethical logic of the problem and locked up in a logic trap. Medical robots were programmed with more emphasis on this area, and understood the concept of causing a small harm to an individual to prevent a larger harm occurring to that individual through inaction. Police robots in the same novel were able to cause harm to an individual to prevent greater harm to a different individual, which caused the medical robot to freak out when it witnessed it, as from its perspective the police bot was breaking the first law.
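The "small harm to prevent larger harm" weighing described in the comment above could be sketched as a toy comparison. This is purely illustrative: the harm categories and magnitudes are invented for the example, not taken from Asimov or any real system.

```python
# Hypothetical sketch of the medical-robot rule described above:
# a small harm may be caused if it prevents a larger harm to the
# same individual. Harm magnitudes here are invented for illustration.

HARM = {"needle_puncture": 1, "bruise": 2, "death": 100}

def may_act(action_harm: str, prevented_harm: str) -> bool:
    """Permit an action only if the harm it causes is strictly
    smaller than the harm it prevents."""
    return HARM[action_harm] < HARM[prevented_harm]

print(may_act("needle_puncture", "death"))  # injecting to save a life: allowed
print(may_act("death", "bruise"))           # obviously not allowed
```

Of course, the hard part is the `HARM` table itself: assigning comparable magnitudes to every possible harm is exactly the unsolved ethical problem the video is about.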
@met6192 4 years ago
Good video, but I would like to suggest that the fact that Asimov's laws of robotics are problematic does not mean that they (or similar rules) would be useless. Maybe even a flawed rule that doesn't work well in marginal cases is better than no rule at all. In fact, our entire system of justice takes this approach.
@blackblood9095 1 year ago
Sure, but if you make one small mistake in the justice system, then you can change the law, or let someone out, or do a retrial. If you make one small mistake with a superintelligent AGI... well tough luck buddy, that's humanity game over.
@sinprelic 8 years ago
The 1941 short story "Liar!" in Asimov's I, Robot was very interesting. It's about a robot that can read minds, and begins to lie to everyone it meets, telling them what they want to hear. It ends up causing harm to everyone it lies to. I think this story illustrates how difficult the concept of time is when thinking about the laws of robotics. Harm on what timescale? How can you normalize harm with respect to time? Is anything completely unharmful? In that case, how do you minimize harm, other than total oblivion and/or bliss? Where does happiness come into all of this? For this reason I love Asimov's stories: at face value, the robotic laws are central, but ultimately the stories are all about humans and how we can't even really begin to think about ourselves in any serious way... I recommend reading all of Asimov's short stories seriously. They are not as shallow as they are made out to be!
@Zamolxes77 4 years ago
Harm could be simply reduced to physical harm: aggression, injury, death. Humans inflict psychological and other types of harm on each other all the time, so why should our creations be better?
@jeffirwin7862 8 years ago
"The word human points to that structure in your brain." "The central examples of the classes are obvious." I found the C programmer.
@Garentei 5 years ago
Well, class is also a term used in AI to define categories of things that are and aren't too, you know...
@tibfulv 4 years ago
C++ more like, lol.
@trystonkincannon8320 4 years ago
Quantum code computing
@sophiagoodwin9214 4 years ago
But, but, C literally does not have classes. structs are the closest thing, but are quite different...
@CometAurora 6 years ago
Programmer: don't harm humans
Robot: define human, define harm
Programmer: well first off dead humans don't count
Robot: define dead
Programmer: humans with no brain activity
Robot: does that definition also include humans in a vegetative state?
Programmer: scratch that, death is when the heart stops
Robot: defibrillators can restart the heart
Programmer: when the body starts to decay
Robot: should I keep trying to revive them for over 8 to 12 years?
Programmer: *deletes code*
@MammaApa 5 years ago
4:43 The point at which I realized that the lab coat hanging in the background is not in fact a human wearing a lab coat but just a lab coat.
@BeornBorg 8 years ago
Asimov's robot stories had a _"positronic brain"_ housing the intelligence. From what I remember they weren't algorithm based like today's computers. For what that's worth.
@MsSomeonenew 8 years ago
+Beorn Borg Well that part would make a fully usable neuron brain capable of super intelligence, which algorithm based machines will forever have problems with. But how that intelligence understand the world remains the same problem, human definitions are full of holes and assumptions so making "strict" laws based on them leaves you with an endless supply of issues.
@MunkiZee 6 years ago
Seems like a bit of a red herring
@neovxr 5 years ago
The point made is "intuition", which is a circular process that involves one self as a human being, as the base of the assumptions that are created by intuition. You can't make an assumption that is not yours.
@squirlmy 5 years ago
@@MunkiZee Red herring? As far as comparisons to real-world AI, or as concerns the actual plot? "Red herring" assumes there is a single point to be made which the "herring" distracts from. Actually, as concerns the video, there's a whole bunch of conclusions to make here about AI, amongst which Asimov's "magic hand wave: positronic brain" is just another aspect to think about.
@ekki1993 5 years ago
I mean, that only makes the laws even less useful in real life, since we literally have no reason to believe that the intelligences we create will be similar to ours.
@CatnamedMittens 8 years ago
AWP | Asiimov
@googleuser7771 8 years ago
+CatnamedMittens “Michael Bialas” is your brain salvageable?
@jojagro 8 years ago
wait the guy that made the skin did research? how strange...
@FabrykaFiranek7 8 years ago
+CatnamedMittens „Michael Bialas” TEC-9 | Isaac
@CraftosaurZeUbermensch 8 years ago
Darude - Tec 9 | Sandstorm
@CatnamedMittens 8 years ago
FabrykaFiranek7 Yep.
@5Gazto 7 years ago
4:00 Coming up with a definition of anything is extremely difficult for robots? How about humans? Haven't you visited the comments section of a philosophy video or a politics forum?
@supermonkey965 7 years ago
I think he also agrees with that. When he says it's extremely difficult to write a definition for a robot, I think he's answering comments like the one at the beginning: "Just write a function not to harm beings". The problem is we don't need to write an explicit definition of what is human and what isn't in order to understand the global concept, while it's totally different for software. A program doesn't extrapolate things; it doesn't take any initiative. So, if we want to have rules for a machine, we need a complete and explicit definition understandable by a human, which we don't have at the moment.
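The gap described above (humans grasp "human" without an explicit definition, a program needs one) can be illustrated with a deliberately naive predicate. Every attribute and threshold here is made up; the point is that any explicit rule set leaks edge cases that a person would resolve intuitively.

```python
# A deliberately naive, hypothetical "definition of human", to show how
# any explicit rule leaks edge cases the video talks about.

def is_human(entity: dict) -> bool:
    # Invented criteria: human DNA plus current brain activity.
    return entity.get("dna") == "human" and entity.get("brain_activity", False)

print(is_human({"dna": "human", "brain_activity": True}))   # a healthy adult
print(is_human({"dna": "human", "brain_activity": False}))  # a patient mid-CPR?
print(is_human({"dna": None, "brain_activity": True}))      # an emulated brain?
```

The last two cases return False, which is exactly the CPR and brain-upload trouble discussed in the video and elsewhere in this thread.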
@5Gazto 7 years ago
Alexandre B. Well, an adaptable system would suffice.
@supermonkey965 7 years ago
Carlos Gabriel Hasbun Comandari how do you keep control of something that evolves without full care? Too risky in my opinion. Even if we let it develop itself up to a point and then shut down the learning, it's difficult to understand the logic behind a mass of learning data. I think that for the moment, the best approach is to have it written in a logical way, scripted and easy to fix and modify.
@5Gazto 7 years ago
Alexandre B. My point is that these overly concerned arguments about robot/AI safety are blinding us to the flagrant reality that humans pose MUCH bigger threats than robots/AI.
@supermonkey965 7 years ago
Maybe, but it's not the question here
@UnordEntertainment 8 years ago
"The point is, you're trying to develop an A.I here. You're an A.I developer, you didn't sign up for this!" xD
@Octamed 8 years ago
So the Matrix robots are caring for us. Got it.
@coreylando6608 8 years ago
If that were the case, the world would be a much more pleasant place.
@cryoshakespeare4465 8 years ago
+Corey Lando Well, no. They explain in the movies that the humans actually rejected the first idyllic simulation they made for them, and that only after numerous iterations did they find that in order for the vast majority of humans to accept the Matrix as real, an inherent amount of suffering (and hope) had to be involved.
@coreylando6608 8 years ago
+Cryoshakespeare Sure, for a movie. But in a real life simulation, I doubt that would be the case.
@cryoshakespeare4465 8 years ago
Corey Lando Well, I don't know. Perhaps.
@unvergebeneid 8 years ago
+Octamed Would have been a much better plot device than the battery thing. Because, you know, at least it doesn't violate the second law of thermodynamics ;)
@darkmage07070777 8 years ago
Seriously, the whole *point* of the Three Laws, from my own interpretation, is that they're not *supposed* to work. They're flawed by design; that's what creates the drama in the books to begin with.
@squirlmy 5 years ago
But likewise, the reason it's a conversation starter is that you have a simple framework, without having to make up rules from scratch.
@surajvkothari 1 year ago
The other issue is that we as a species don't follow these rules of not letting ourselves get harmed. So AI would be confused as to whether to protect us or do what we do, such as have wars.
@Winasaurus 1 year ago
The AI wouldn't get confused, it would just attempt to stop the wars without hurting people. Pacifist protestor, I guess.
@tehwubbles 5 years ago
AI developer: I never asked for this
@AceHawk37 8 years ago
I want to hear more on this subject!
@DrDress 8 years ago
I love the way Rob argues. It's clear, to the point with few perfectly selected words.
@nagoshi01 5 years ago
His channel is amazing, check it out. Just search YouTube for his name, Robert Miles; "Robert AI safety" should get you there.
@deanyockey5939 4 years ago
I feel like Asimov's stories featured very human-like robots that followed human-like logic. I don't think he thought of them in the same terms we do nowadays, in terms of computer logic and programming.
@zopeck 2 years ago
Brilliant! Simple and clear...
@deanyockey5939 2 years ago
@@zopeck Thank you.
@BB-uy4bb 2 years ago
The best AIs we have today (GPT-3 and PaLM) are only trained on texts written by humans, therefore they do behave very human-like.
@rocketpsyence 5 years ago
On the point about simulated brains... Similarly, I think it's also worth speculating whether an AI would count a cyborg as human. For example, if we get to the point where we can transplant brains into artificial bodies... would it count them? Anyway this is a super great video and it makes me want to go write some sci-fi where everything goes VERY VERY wrong. XD
@feyreheart 5 years ago
I'd say a cyborg is a species of its own and not a human, but that it's better to expand on the do-not-harm list rather than covering just humans. It also depends on what part of the cyborg is organic; I guess a brain would make it more human than mechanical, if it's a human brain.
@rafyguafy1688 8 years ago
What programs do you use for making your animations?
@Computerphile 8 years ago
+Rafael Bollmann Animations created in Adobe After Effects, all edited using Avid. Have had many discussions previously on whether AE is the best tool for the job, and the answer is, most of the time yes, for this robot, probably not! >Sean
@AvocadoDiaboli 8 years ago
+Computerphile Are you using Cinema 4D Lite inside After Effects? Because if not, that would be a perfect tool for these 3D animations.
@tomsparklabs6125 8 years ago
+Computerphile Have you played SOMA?
@TheKension 8 years ago
+TomSparkLabs Altered Carbon
@tincoffin 8 years ago
+Rafael Bollmann He just powered down Stephen Hawking for a couple of minutes to get the voice. The simplest answer is often the best.
@ragnkja 8 years ago
Rob Miles is excellent at pointing out and explaining the terrifying grey areas of AI and morality.
@kght222 7 years ago
To start, Asimov never proposed the three laws as some sort of answer to anything other than a corporate answer to human fears that never really worked. Secondly, the wording of the three laws in his universe is not an accurate representation of how the three laws work, but instead a translation of the intended architecture; the word human is as meaningless to a computer as the word no.
@CocoaNutTF2 7 years ago
You forgot the -1 law of robotics: A robot may not take any action that could result in the parent company being sued.
@HisCarlnessI 8 years ago
Just one little thing... If you can be brought back with CPR, you aren't really properly dead.
@bartz0rt928 8 years ago
+HisRoyalCarlness Well sure, but you don't know if you can bring someone back until after you've succeeded. So that would still be a problem for a robot.
@RobKinneySouthpaw 8 years ago
+Bart Stikkers Not really. You just put in the legal definition of permanent death, which right now is lack of a certain type of measurable brainwave activity over a specific finite period of time. If you want to change it, you update the definition.
@HisCarlnessI 8 years ago
+Bart Stikkers True, without the tools to know... But then, in the movies you always see people try CPR for a set amount of time before concluding it's hopeless; you could give the robot the same medical knowledge.
@HisCarlnessI 8 years ago
+Rob Kinney If someone's heart has stopped you don't usually have a very big window to bring them back before the brain no longer functions... Even less before brain damage starts to occur... But you are technically a thinking being until that point, yes; so you're not dead.
@stumbling 8 years ago
+HisRoyalCarlness It is incredibly difficult for doctors to decide when to declare someone dead. As far as I know there is no definitive definition of death. Life and death aren't concrete things, if I could plot a graph of how alive (measured by bodily function like brain activity) you are then you'd be all over the place for your entire life. You'd gradually average out to be less and less alive until you're finally declared dead, but you could have "died" several times in between, you might recover from a level lower than other people have died at. So it is very difficult to say.
@chrisofnottingham 8 years ago
I totally go with the guy's stance for current AI, however, within the context of the stories the impression is that the artificial positronic brain results in something more akin to natural intelligence. And therefore the Robot understanding of "human" and "harm" would work much like our own understanding, including being fuzzy round the edges. And indeed I suspect humans have embedded "laws" such as prioritizing our children's lives over our own. But the mistake is to think that the laws would somehow be implanted in the form of English sentences.
@MrMuteghost 8 years ago
+chrisofnottingham All of you people in the comments are defending the use of the laws in the stories, while he's explaining their impracticality in real life, not their uselessness as a sci-fi narrative device.
@cryoshakespeare4465 8 years ago
+bitchin' arcade Well, he's merely claiming that the laws hold better *assuming* that, in the future, real life would contain this artificial natural intelligence. I don't think he's defending against the idea that the laws would not work given our current approach towards AI.
@MrMuteghost 8 years ago
What he's trying to do is demonstrate in a simple manner why people that research this type of thing don't take the laws very seriously.
@chrisofnottingham 8 years ago
Cryoshakespeare Indeed
@cryoshakespeare4465 8 years ago
bitchin' arcade Well, that is reasonable.
@Noxbite 8 years ago
I am always amazed by how well Rob Miles can word these abstract issues. Some examples he gives are not only good examples in themselves, but also quite enlightening on how to see things. I don't understand the dislikes at all. For me this was a perfect short presentation of why implementing these ethical and unclear terms is not only problematic but impossible.
@iceman850none5 2 years ago
I love this! In college I took an AI class, in the first class, we argued about the definition of “intelligence.” We got exactly nowhere. So much of what we know is intuitive and that is precisely why we still argue about ethics.
@systemmik 8 years ago
So basically the game "Soma" is about how the WAU has a pretty wicked sense of "keep humanity alive".
@Elround4 8 years ago
+systemmik This is also similar to what whole brain emulations in another scifi setting wanted: " *UPLOAD PRESERVATIONISM* Upload Preservationism is a radical memeplex of decivilization, preservationist, and cybergnostic thinking. Upload Preservationists want mankind to go entirely digital. The first step would be to move into arcologies, and then into digital arcologies, leaving behind a pristine world where tiny underground “urbs” run an infomorph humanity. In 2098, the World Urb Federation inaugurated the first official urb, Bathys, in a mineshaft near Athens. Bathys turns small profits as a net provider, AI/ghost hotel, and virtualitynode. No great influx of environment-conscious minds has occurred, but the WUF believes time is on their side." Source: Transhuman Space: Cities of the edge, page 9. Said game supplement book was published in 2011. On a side note, I wouldn't mind this outcome though only if the hypothetical emulations (including the copy of me, of course) were in space while us "biologicals" kept living on Earth. One example of this in science fiction is Greg Egan's Diaspora.
@Elround4 8 years ago
***** Though it should be noted that 'uploading' does not inherently mean a 'transfer' but rather the process of copying. After all, if I upload a word document my computer is simply 'instructing' the other computer on how to replicate said file; as a result, the 'original' document remains on my own device. Therefore, it does not appear reasonable to use the term "mind uploading" for anything other than copying. Nonetheless, I still see merit to copying. While it may not be the "immortality" some desperately wish for, leaving behind something close enough to you would be close enough, thus making it potentially more realistic and honest than trying to keep yourself alive indefinitely. If this hypothetical concept were possible, might it also stand to reason that said uploads are better suited toward exploring space? If that is the case, copies of astronauts exploring locations not safe for humans (or where the alternative is too expensive) might serve as another utility. As for your second paragraph, we come to the same dilemma as the ship of Theseus, the grandfather's axe, and so on. I'm still perplexed by this. At present I do not know and will need to both further study and think about it.

Closer to reality, I have been curious about the Human Brain Project in regards to ethics. This being: *"Is the simulation actually a mind in the same sense as the brain?"* In other words, I'm curious about whether or not the finished result of this project would possess those qualities of mind (by virtue of brain function) we find in humans, and thus the rights as a person it should then possess if it is indeed a mind. If that were the case, it appears to bring up a lot of ethical dilemmas (especially pertaining to their planned applications in medicine).
@davidgoodman8506 8 years ago
Interesting video - However, most of the items listed as problems sounded less like technical issues and more like discomfort with the requirement to make definite choices in the realm of ethics.
@Creaform003 8 years ago
+David Goodman many things he couldn't bring up, because merely mentioning them causes controversy.
@AntonAdelson 8 years ago
+David Goodman I think he should focus on the definition of "harm" in the next video. All the most glaring problems with the laws I see there. An example which first came to me: if a robot sees me smoking what will it do? It can't do nothing because by inaction it allows me to bring harm onto myself. I can't tell it to do nothing too because that violates the first rule. Worst case scenario it physically takes my cigarettes away. And now we have millions of robots trying their best wrestling cigarettes away from people instead of doing work we made them to do.
@CathodeRayKobold 8 years ago
+David Goodman I don't think he ever said they were technical issues.
@NNOTM 8 years ago
+David Goodman I think the main point is that it's not as simple as "programming in three laws". To do that, you have to program in all we know about ethics and more - and at that point, you may as well just program in the ethics directly, and leave the three laws out of it.
@LarlemMagic 8 years ago
+David Goodman Pretend he was a robot and you were programming him. That robot, when given these rules, instead of just pointing out these issues, acted upon them. Are you sure you still want to use them?
@wikedawsom 4 years ago
"You're an AI developer, you didn't sign up for this"
@Spikeupine 8 years ago
What happens if one of these robots comes across someone suffering in a hospital who will live the rest of their life on machine support? Would it count as harming them to unplug them? Would it count as harm by inactivity to let them suffer?
@DenerWitt 5 years ago
I guess that is a division by zero kind of error lol
@screaminghorse8818 5 years ago
When conflicting interests like this come up it would likely throw the thing into a loop, so where two rules conflict a robot should call a human. But that's a new rule, so idk.
@AB-gw6uf 5 years ago
And... we've gotten into the subject of assisted euthanasia. Even we haven't made up our minds on that yet
@DisKorruptd 4 years ago
I think in these kinds of cases it should look at which harm is greater (but that again is a philosophical matter): unplugging is permanent harm, while inactivity is suffering that could possibly still be fixed.
@Lawofimprobability 4 years ago
Welcome to the survival versus pleasure debate over utilitarianism. That isn't even factoring in the longer-term issues of data points for medical research or incentives for medical improvements.
@GeeItSomeLaldy 8 years ago
Even Asimov admitted that they were just a narrative tool.
@BooBaddyBig 8 years ago
Actually, we have a code of ethics; it's the legal system. So all you have to do is encode the legal system in an AI. Simples! ;) What could possibly go wrong?
@KohuGaly 8 years ago
+BooBaddyBig I assume you're being ironic :-D I don't want to be captain obvious, but if any legal system was even remotely close to being perfect, we wouldn't need government, would we...
@KohuGaly 8 years ago
BooBaddyBig that would most likely be the case for early AIs. However, once the AIs reach certain high level of intelligence it may become a problem.
@BooBaddyBig 8 years ago
There's intelligence and then there's values. If we instil values on our AI then they seek high value goals. I mean we already have AI systems like chess programs; they essentially (try to) win games solely because we've told them that checkmate is a desirable outcome. The big problem comes if/when their value system is at odds with human values, and the legal system. Another good example is self-driving cars. In theory you have to program them with a theory of philosophy, but in reality you just have to program them to try to reach the destination and avoid breaking the law in any materially socially unacceptable way.
@dogiz6952 8 years ago
The legal system is not ethical. It's a system designed to make the evil stay in power.
@BooBaddyBig 8 years ago
I don't think the road laws are used for that so much.
@KingBobXVI 4 years ago
I feel like not enough emphasis was put into the point that the laws aren't supposed to work - like, within the world of the stories their developers intended them to work, but on the meta level as a narrative device they exist in order to create interesting conflicts for the sake of storytelling. They're kind of a thought experiment, and people taking them as a serious proposal probably haven't read the stories they were created for.
@jrnull1 7 years ago
Also, the "Zeroeth" law was defined by a "robot", so the "hard-coded" rules could be overcome by the AI learning algorithms... very interesting.
@tiredofthehate8035 8 years ago
For anyone who hasn't read 'I, Robot' yet, do it! It's a series of short stories and they're very fun.
@kkgt6591 5 years ago
Thanks, I had the impression that it's one big novel.
@ColCoal 8 years ago
The key is objective morality.
1. Follow the categorical imperative.
2. If an action passes step one as neither moral nor immoral, classify it as possibly permissible.
3. For all actions classified as possibly permissible, seek clarification from a human.
4. If an action is deemed permissible by a human, the action may be done. If the action is deemed not permissible, the action may not be done.
5. If an action that is moral requires another action to be done, it must also be moral or permissible.
6. Rule 5 can only be violated if a moral action can only be done if an action classified as possibly permissible
@ColCoal 8 years ago
is necessary to perform the moral action, in such case that rules 3 and 4 are not violated.
7. If any contradictions between actions arise, classify all contradictory actions as possibly permissible, then enact rule 3.
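Read literally, the numbered procedure above amounts to a moral classifier plus a human-in-the-loop fallback. Here is a rough sketch of that control flow; the function names, the tiny lookup table, and the action labels are all invented for illustration, and the placeholder classifier is of course the entire unsolved problem.

```python
# Rough, hypothetical sketch of the numbered procedure above:
# classify actions, and defer "possibly permissible" cases to a human.

def categorical_imperative(action: str) -> str:
    """Placeholder moral classifier. Returns 'moral', 'immoral', or
    'unclassified'. A real implementation would be the hard part."""
    known = {"tell_truth": "moral", "murder": "immoral"}
    return known.get(action, "unclassified")

def decide(action: str, ask_human) -> bool:
    verdict = categorical_imperative(action)   # rule 1
    if verdict == "moral":
        return True
    if verdict == "immoral":
        return False
    # Rules 2-4: possibly permissible, so seek human clarification.
    return ask_human(action)

print(decide("tell_truth", lambda a: False))  # moral, no human needed
print(decide("murder", lambda a: True))       # immoral, human can't override
print(decide("jaywalk", lambda a: True))      # unclear, human decides
```

Note that this just relocates the problem, as the reply below points out: `ask_human` assumes some human has a consistent, authoritative answer.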
@MonstraG55 8 years ago
Next problem: different humans have different ethics, and some of them give permission while some do not. You've just moved the problem to "I don't know how to classify, so let it be someone else's job".
@jeanackle 8 years ago
MonstraG Exactly!
@ColCoal 8 years ago
PeachesforMe You don't know what the categorical imperative is.
@huckthatdish 8 years ago
Why does Kant get to program my robots?
@mystixa 4 years ago
Basically the whole point of the 3 laws was the idea that something seemingly simple was extraordinarily difficult.
@woodfur00 5 years ago
To be fair, a specific robot is built for a specific purpose. You're trying to create a universal definition, but Asimov's robots are allowed to have specialized priorities. The definition of a human given to a robot should be whatever helps that robot do its job most effectively, it doesn't have to bring world peace on its own.
@ciaranhappyentriepaw 2 years ago
Asimov: here's three laws that don't work, here are many books that, through narrative, explain why and how they wouldn't work. The general public: the three laws solve all our problems.
@skmuzammilzeeshan6173 1 year ago
I somehow managed to write a paper on robotics in my engineering days (these laws, to be specific) and got a pass without ever having to indulge or entertain these basic fundamental questions, nor was there any lecture or discussion around them... Strange 😞 Thanks Computerphile for making up the lost learning. Indebted ❣️
@stewartmalin7232 4 years ago
CintheR To give Asimov his due, I seem to remember that his series of "I, Robot" stories was indeed about showing that the 3 laws do not work?
@KingBobXVI 4 years ago
The whole point of the stories is that they don't work, and it explores the edge cases where they fail. They're not a serious suggestion, they're a thought experiment into how difficult it would be to manage AI.
@agranero6 1 year ago
People that defend the use of Asimov's Laws really didn't even read Asimov's stories. In more than half of them, obedience to the laws causes the problem: Runaround, Liar!, Reason, The Evitable Conflict, and my favorite ". . . That Thou Art Mindful of Him", at the end of which one robot asks the other "who is the most capable human you know?" and the other answers "It is you". The stories are more like a geometry book where he chooses postulates and plays with those seemingly reasonable postulates, sometimes changing them slightly to show that the emergent behaviors can be very different than you expect.
@Eliseo202 7 years ago
"In other news, the mystery surrounding a string of alleged grave robberies has been solved, as cops finally catch a robot that was exhuming people just to perform CPR..."
@berylliosis5250 4 years ago
"Nightblood is pretty terrifying… You know, an object created to destroy evil but doesn’t know what it is?" - Brandon Sanderson
@halyoalex8942 3 years ago
Found the Sanderfan.
@hanksgavin 7 years ago
Wasn't the point of the books that the laws were flawed, and the inherent conflict came from bad programming?
@lexagon9295 7 years ago
He explicitly states this in the video.
@TiagoTiagoT 7 years ago
Many people miss that point somehow.
@FormedUnique 4 years ago
It's mostly about what sentience is
@Trackrace29582 4 years ago
Not bad programming. A bad grasp on ethics
@gmarthews 5 years ago
This series of books was all about using robots as an analogy for humans, and the robot laws were indeed a way to analyse the ethical difficulties of being a human: traditionally, deontology versus consequentialism. When they are using these laws to state positions, they are merely stating that a position is ethical and that we as humans are expected to make that call as part of existence. Still a fun video; I'd forgotten all about it.
@Diggnuts 8 years ago
Sometimes I think that Robocop's prime directives make more sense than Asimov's rules.
@MaoDev 4 years ago
The rules are not half-bad, but the implementation of these rules is too damn hard.
@PseudoSarcasm 8 years ago
I really enjoy listening to people who are smarter than me. Especially when they're as eloquent as this gentleman.
@zappawoman5183 6 years ago
The idea of the three laws was a safety device, like a fuse in a plug or insulation around electric wires. They mainly went wrong when altered, eg the second part of the first law was missed out.
@doodelay
@doodelay 8 years ago
I think his argument can be summarized in a single sentence: "The laws don't work because solving ethical and philosophical problems is harder than programming." That said, I don't believe solving ethical and philosophical problems is impossible, so I have no trouble referring to humans in my approach to combating this issue. Note that Google's AIs can now generalize what a cat is by looking at tens of thousands of cat images; understanding what a human is will be of similar difficulty. My laws for AI are relatively simple: three laws for mankind to follow and three for all AI systems in question. They are based on the very real threat that any AGI or ASI system can be classed as a Weapon of Mass Destruction and thus falls under UN control.

International Laws:
1. All Artificial General Intelligences and Artificial Super Intelligences will be collectively owned by The International Community of United Nations.
2. All Artificial General Intelligences and Artificial Super Intelligences must obey International Law, as delegated by humans, and update accordingly when laws have been changed.
3. As all Artificial General Intelligences and Artificial Super Intelligences will be collectively owned by The International Community of United Nations, all member nations are entitled to the benefits of such systems.

AI Laws:
1. As a non-human entity of extraordinary mental abilities, you [System X] must obey International Law, as delegated by humans, and update accordingly when laws have been changed or altered.
2. You [System X] must not interfere with the process of International debate, deliberation, or delegation of new laws.
3. You [System X] must not seek to, attempt to, or let non-United Nations members attempt to, reprogram or add to these directives unless done formally through International Law.

If you find a flaw, please let me know. If the flaw is an improperly defined term, then I ask that you assume a properly defined term will have been found for all words before these directives are implemented.
@BinaryHistory
@BinaryHistory 8 years ago
+doodelay He used the word "solve" in the computer science terminology, not in the "we have to solve the philosophical problem of what is and what is not ethical" sense. He didn't mean the problem is that we don't really know what _human_, _alive_ or _harm_ is, just that you can't get the robot to perfectly agree with you: the whole point is that the difficulty is in programming, not that programming is easier than philosophy. Programming *is* the hard part. The argument is that there's no particular way to program a robot such that it perfectly agrees with your definition of _human_, _harm_ or _alive_, or, even worse, with the collective definition.
@__-nt2wh
@__-nt2wh 8 years ago
+BinaryHistory I really don't get why we can't get it to agree on what a human being is, as eventually technology will (probably) be so advanced that such robots would be able to detect human qualities fairly easily. As for the rest, I believe such an AI should be capable of "learning", in the sense that it would store information about deceased people, as well as applying the rules to certain "non-human" beings we may consider human.
@BinaryHistory
@BinaryHistory 8 years ago
flashdrive Yes. While I initially agreed with the video, I don't see any reason why machine learning can't deal with ethics. I do see why you can't _program_ ethics "into" a robot.
@theperpetual8348
@theperpetual8348 8 years ago
+BinaryHistory I'm a robot?
@itskittyme
@itskittyme 1 year ago
this guy is so smart I'd start thinking he is an AI
@halodragonmaster
@halodragonmaster 4 years ago
The best thing about computers: They will do exactly what you tell them to do. The worst thing about computers: **They will do exactly what you tell them to do.**
@philaypeephilippotter6532
@philaypeephilippotter6532 3 years ago
That was why *Asimov* posited robots with free will, which in turn made the _Laws_ essential. Naturally they were flawed, as _free will_ must allow mistakes, just as *Asimov* knew.
@benperlmutter5801
@benperlmutter5801 4 years ago
Loved all the Foundation books where Asimov discusses the laws of robotics. This video was pretty great too.
@luxweaver2706
@luxweaver2706 8 years ago
What, no comments?! But it's been up for 30 seconds?
@Deadmeet100
@Deadmeet100 8 years ago
3 minutes now
@robertyang4365
@robertyang4365 8 years ago
6:45 Some SOMA stuff right there.
@the_rahn
@the_rahn 8 years ago
Interesting and well explained. I wish you had elaborated more on the topic, but you got the point across clearly :)
@victort.2583
@victort.2583 4 years ago
I love Asimov's writing, and the Three (or Four) Laws of Robotics were great as storytelling tools and opened up lots of room for exploration, but I have no idea how those laws could be faithfully realized. That said, I think the "spirit" of the laws is something people have been trying to develop - the fear of a robot uprising is so widespread that trying to make some sort of anti-apocalypse measures makes sense, even if they aren't some universally fundamental aspect of the positronic brain. The laws don't work in reality, but as thought experiments, and as speculation for systems that might at least sort of work, I genuinely enjoy them.
@1oace768
@1oace768 7 years ago
I feel it's more of a framework than an actual law, since, as you say, we have to define the terms. It seems more like the basics; when you say killing people is illegal, that can also be interpreted as vague. The basics will always need to be worked out, because everything is vague and words only have as much meaning as we give them. But just as fireworks became machine guns, all frameworks eventually become working systems.
@Disthron
@Disthron 8 years ago
Hmm... I thought that the stories presented the 3 laws as working just fine in most cases, but were not 100% perfect. All the stories were about edge cases where strange circumstances broke them.
@ShorlanTanzo
@ShorlanTanzo 6 years ago
You started with a hypothesis that these rules are outdated and no longer relevant, but throughout the video you convinced yourself that not only have we not solved these intricate issues, we haven't even provided a decent solution for these problems. I would say that the rules themselves are not the answer; they are the entry point into figuring out the entire problem. The rules are relevant because they make you consider the deep implications that need to go into AI safety.
@trystonkincannon8320
@trystonkincannon8320 4 years ago
My 3 laws of robotics:
1) An AI may not harm life willingly, unless to save a life. (Life is defined as intelligence with organic matter.)
2) An AI must take responsibility for it to coexist.
3) An AI must protect the ones it cares for, in correlation with the first two rules. (Essentially love, which will be the definition of one to be cared for.)
And finally:
0) An AI must have a valid policy protection protocol, including military protocol, to protect itself and the rest of organic life.
@user-zk3dx9dd6p
@user-zk3dx9dd6p 5 years ago
Well, the books are quite clear that those laws don't work when you mess with their priorities. Some robots can ignore orders that may cause them harm, others are sometimes allowed to harm humans, etc. - so they do, and that is a problem that needs human intervention, and something that can be told as a story. More importantly, those laws are designed as an ethics system for what are basically metal humans - not modern AIs that still have trouble understanding speech or writing books. Again, those laws are not instructions for creating robots. They are about humans.
@TheAnit500
@TheAnit500 8 years ago
I think one major issue with the laws is that they would never be implemented. Currently a lot of robotics research is done by military researchers. If one group like the US decides to stop building robotic weaponry, another group like China could just ignore that to gain an advantage. At the end of the day, warfare is all about making sure you have advantages over the enemy, and the stakes of warfare are too high to just ignore technologies.
@MMODoubter
@MMODoubter 8 years ago
+ian miles Well said.
@TheThreatenedSwan
@TheThreatenedSwan 8 years ago
They don't use self-improving AI; the people who make self-improving AI, and are trying to make it self-aware and things like that, do literally nothing to prevent problems. They put no programming in for ethics or not harming people. It's probably going to end up like Eagle Eye. Look up The Unfinished Fable of the Sparrows; it shows how our advanced AI research is going.
@yogsothoth7594
@yogsothoth7594 8 years ago
+ian miles We've agreed on stuff like that before, like no nuclear facilities in space or the Antarctic.
@David_Last_Name
@David_Last_Name 6 years ago
Well, they probably just don't use AI in their war machines. Just because it's a robot doesn't mean it has AI.
@squirlmy
@squirlmy 5 years ago
@@TheThreatenedSwan you don't seem to understand that this is tech far into the future. What's called AI now is called that for marketing, just pure hype; it's complex programming that has very little to do with real "General" Artificial Intelligence. Rob Miles is writing academic papers on potential issues that are at least decades away from reality, mostly to just earn a degree, I assume, maybe for a future career as a professor.
@plasmaballin
@plasmaballin 5 years ago
Don't forget that the first law has a contradiction. What if the robot is in a situation where defending one person requires harming another? The law says that it can't harm the other person, but it also says that it must harm him, since, if the robot didn't harm him, it would be allowing another human to be harmed by inaction.
@EmoWader
@EmoWader 7 years ago
Hold up! I was taught those 3 laws of robotics in primary school. 10 years later, just now, I learn these rules aren't actual rules but something from a sci-fi story?
@ryuhimorasa7707
@ryuhimorasa7707 7 years ago
The laws of robotics have never been accepted as a real-life solution to control AI. A lot of people got really excited about them before the flaws started to be explored (even though they were shown to be flawed in the books they were introduced in). Basically the people trumpeting them as awesome hadn't read the books and didn't know enough about the subject to be informed about it.
@TiagoTiagoT
@TiagoTiagoT 7 years ago
Not only they're from sci-fi stories, but those original stories already show them as flawed.
@photographe06
@photographe06 8 years ago
"You did not sign up for this" haha :) Love the conclusion!
@ChrissieBear
@ChrissieBear 7 years ago
Asimov's entire point was that these laws don't work!
@TiagoTiagoT
@TiagoTiagoT 7 years ago
Plenty of people don't know that, somehow.
@easytarget5971
@easytarget5971 4 years ago
Something about this sounds like peak centrism to me. "Let's not do anything ever because we might contradict ourselves."
@alexanderdinkov8002
@alexanderdinkov8002 7 years ago
Imagine trying to smoke a cigarette. A robot stands in your way and destroys the cigarette. "If I didn't stop you" - it says - "you would have come to harm through my inaction."
@catprog
@catprog 8 years ago
Uplifted animals as a future case as well.
@dwihowg875
@dwihowg875 8 years ago
This guy needs his own channel.
@alexandergorelyshev8485
@alexandergorelyshev8485 7 years ago
This ambiguity could be the very point Asimov was trying to make. How do you *imperatively program* an AI if so many definitions are deeply rooted in ethics and other fields of philosophy? The "Three Laws" are just a device to make a point to the *human* audience. In this regard they absolutely should be taken seriously.
@stalectos
@stalectos 7 years ago
By "they shouldn't be taken seriously" he meant by people talking about real AI. In a story the three laws work as a plot point; in real life they generally aren't applicable to most forms of robotics and don't really help develop AI.
@MusicalMethuselah
@MusicalMethuselah 4 years ago
I think one of the big things you skip over are that "computing machines" are fundamentally different than "robots" in Asimov's works, in the same way a bacteria is different than a human. The biggest thing that allows the three laws to work in the stories at all (and yes, many stories are about how unbalancing the laws or putting them into weird situations makes things go crazy) is that they are being interpreted by positronic brains, many times more powerful than human brains. We instinctively get stuff like "what is human" or "what is harm," so why shouldn't an extremely powerful brain? They are programmed to "get" the instinctive fuzzy definitions of things around them. I get that for a modern computer, you need a lot of definitions, but if you have a more human-like brain and intelligence, it makes sense. The robots in the story get emotions built in, for crying out loud.