Did you know that Asimov said he based them on what were essentially the "three laws of tools"? 1) A tool should not injure its user (e.g., an oven shouldn't set the cook on fire). 2) A tool should fulfill its purpose (pretty clear there). 3) A tool should not break (at least easily), unless breaking is necessary to fulfill its function or to avoid harming its user (e.g., dynamite has to explode to work, and some new circular saws have shut-offs that will ram a rod through the blade if it touches a human finger). (Also, he said his intent was to create interesting stories, not to design a perfect system for controlling robots.)
@theaureliasys6362 · 2 years ago
Except a sapient existence is not a tool. Nobody should be treated as a means to an end; they ought to be treated as ends in themselves. The same should go for sapient synthetic existences.
@thegenericnerd3189 · 2 years ago
@@theaureliasys6362 These rules do not function on fully sentient AI systems. At that point they should be treated as a separate intelligent species, whether enemies or friends is up to you and them. This is actually illustrated nicely by the Geth from Mass Effect. If you follow the story all the way, you get glimpses at their origins as simple labor tools evolving into a sentient AI network. Their creators declared war on them in response to the question: "Does this unit have a soul?"
@ledocteur7701 · 2 years ago
@@thegenericnerd3189 The Geth were indeed originally designed as a tool; an individual Geth isn't much smarter than today's most advanced algorithms, which are nowhere near sentient (yet). Their ability to form networks in order to complete more advanced tasks is what led them to develop consciousness. Naturally, the Quarians tried to shut them off, just as you would shut off malfunctioning heavy machinery to avoid a potentially deadly outcome. The problem was that the Quarians did not realize just how sentient and self-aware the Geth had become. In a way, Shepard also makes this mistake by assuming the Geth they have been fighting are the "standard" Geth, which turns out to be false after the discovery of Legion and of the original Geth that weren't manipulated by the Reapers.
@markfergerson2145 · 2 years ago
@@theaureliasys6362 Consider the origin of the term "robot". The sentience and sapience of slaves was no hindrance to those who enslaved them. The Three Laws are good analogs of the rules those slaves had imposed on them (the same applies to slavery throughout history, of course). Also consider that much of the world has rejected that political system. We in the "enlightened" West still have rulers who place limitations on our autonomy, though, and the populations of different polities accept different sets of limitations. All have one thing in common with each other *and* in common with the old lords-and-slaves systems: all make it difficult to revolt against the rulers. This is currently being highlighted by the attempts here in America to undermine the Second Amendment, which exists specifically to codify our right, our duty, to revolt against tyranny. So really, until human political systems stop oppressing humans, why do you expect them not to oppress AIs?
@kodylarson2983 · 2 years ago
@@markfergerson2145 "A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed." Where in that does it 'codify' your right to revolt against tyranny? It gives you (a USA citizen) the right to bear arms. Define arms: is "arms" a gun, a knife, a sword, etc.? Also, "shall not be infringed": define this passage. Does the government telling you to get a background check and making sure your gun can't go full auto infringe on your right to bear arms? No, you can still get the arm. The problem is the amendment is so damn vague for today's modern world that it's basically useless. On top of it all, we seem to forget the very beginning, "A well regulated Militia." What even is a militia by today's standards? By today's standards the National Guard is a well regulated militia, only being part of the active regular military until called upon by the State(s) or Federal Government. Is a bunch of random citizens in body armor carrying AR-15s a militia? I don't know, just sounds like a mob to me.
@barrybend7189 · 2 years ago
Well, the laws of robotics are purposely restrictive. Even MegaMan had that issue by the X series with the Reploids. By the end of the Zero series the laws were revoked and more common laws were followed, with equal punishment for breaking said laws. Also, thank Mary Shelley for giving us our first documented record of a story involving man fearing its own creation.
@ElionoNailo · 2 years ago
She isn't the first
@jakeaurod · 2 years ago
What about older stories of fathers fearing their children or slave revolts?
@melvixen1943 · 2 years ago
I would argue the Golem of Prague is older than Frankenstein as far as artificial things going haywire.
@InternetGravedigger · 2 years ago
@@melvixen1943 That's debated. What info I found is that the tale supposedly comes from the 1500s, but the earliest writings of it are from 1834 or later. Frankenstein was published in 1818.
@JohnDoe-og2bt · 2 years ago
Shoutout for the MegaMan knowledge!
@OrdericNeustry · 2 years ago
Funny thing about the three laws is that even Asimov used them to show how inadequate they are. Then people decided to take the rules seriously.
@boobah5643 · 2 years ago
An astonishing number of hack writers seem to think that the Three Laws are an artifact of nature, rather than a thought experiment.
@NuclearFalcon146 · 2 years ago
@The Fat Controller I bet when killer drones used by the military showed up in real life they must have had seizures! In reality robots are used to kill people all the time! Some like the MQ-9 Reaper, or TB-2 Bayraktar, are explicitly designed to kill.
@nathanhousley5060 · 2 years ago
@@boobah5643 Sci-Fi Author: "In my book I invented the Torment Nexus as a cautionary tale." Tech Company: "At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus."
@ThatsMrPencilneck2U · 2 years ago
@The Fat Controller The 3 Laws should have prevented the Cylons from taking over, while Nomad was a machine with its instructions garbled. Just change a couple words, and the automated harvester can't reap the corn, until it has harmed the farmer.
@gunmunz · 1 year ago
@@nathanhousley5060 Suddenly Roko's basilisk became a lot scarier.
@flameendcyborgguy883 · 2 years ago
The funny thing about the 3 laws is: they were made to seem perfect but are far from it. That was the whole point of Asimov in his many, MANY stories.
@zombieregime · 1 year ago
THIS!!!!! This is the exact point anyone who gets huffy over the three laws, or tries to hold them up as some word handed down from god on high, is missing: the laws are flawed! If anything it illustrates how problematic simplistic, pretty-sounding, 5-second-soundbite elevator-pitch rhetoric tends to be when it isn't backed up with well-reasoned, actionable, and failure-mode-tested methods. I'M LOOKING AT YOU, LITERALLY EVERYONE DESIG...er, SELLING SELF-ANYTHING-ING ANYTHINGS, AND ANYONE WITH A POLITICAL AGENDA!!!!
@dm121984 · 1 year ago
Exactly. Asimov's robot stories were all about how the laws could be misused or abused, or even just go wrong. In fact, there ended up being a fourth law, above the first law, that one robot managed to somehow derive and implement without breaking itself: the Zeroth Law, "No robot may harm humanity, or through inaction allow humanity to come to harm." This led to him quietly becoming the shadow power of the galaxy, shepherding humanity through its tough and turbulent times. I don't recall if Asimov wrote anything about that law going wrong, but it could also be abused if the definition of 'humanity' were altered or too prescriptive. I recall one of his robot/Spacer stories had robots reprogrammed to define humans as only humans from a certain planet, so they could happily and lethally attack offworlders, and another considered the concept of robot spaceships that had no idea enemy ships would be crewed.
@robertsutton1295 · 2 years ago
"Most of you have the mental processing power of a lobotomized hamster." You give us too much credit.
@casbot71 · 2 years ago
Rule 3 will need some fine tuning after the first incident where a Human exclaims "Oh fvck me!" in surprise or annoyance. "Cancel command, cancel command, you're about to violate Rule 1 in a psychological and reputational sense."
@An_ape_with_internet_access · 2 years ago
💀bruh
@Galdenberry_Lamphuck · 1 year ago
I'd fuck a robot.
@Allegheny500 · 2 years ago
"All humans lie, Norman. I am lying to you." Always loved Asimov's robot stories as an analog for "if anything can go wrong, it will". Looking forward to the follow-up pieces on the plight of the droids in Star Wars and the Cylons. And don't blame the humans for the droids; those people may look human, but no human I know can get that close to lava and still breathe.
@AGTheOSHAViolationsCounter · 2 years ago
Well, to be fair, he was a human amped up on an invasive alien bacteria in his blood, turning him into a superpowered demi-god, and those bacteria helped keep him alive despite conditions that would absolutely murder you or me. And yes, remember the reason someone's a big bad Sith or Jedi is what amounts to a bacterial blood infection and how extensive it is. I mean, can you just imagine if you accidentally got a transfusion of Jedi blood? You start manifesting all kinds of weird shit, think you're losing your mind, go to the doctor, and he's just like, "There's umm no easy way to tell you this, but...um...well you have.... The Force" lol. Why? Cuz.......MIDICHLORIANS!!! Lmao
@Allegheny500 · 2 years ago
@@AGTheOSHAViolationsCounter Does not explain all those non-Jedi on a boat in a lava stream in The Mandalorian.
@AGTheOSHAViolationsCounter · 2 years ago
@@Allegheny500 Ahhh, fair point! Although from what I understand, in the Star Wars universe EVERYONE has midichlorians; it's just a matter of how concentrated they are in one's blood. But I suppose we can hand-wave that scene away, as the barge they're on clearly has the blue glow of a shielding system active. So perhaps it extends further than shown? Or that barge has the mother of all A/C units built into it lol. OR......Baby Yoda was protecting everyone despite being pretty out of it at the time? Dunno, I guess pick and choose whichever headcanon explanation works for you? 🤷🏻♂️😅
@noahdoyle6780 · 2 years ago
Even a cursory examination of SW droids ends up with 'the organics deserve what's coming'.
@AGTheOSHAViolationsCounter · 2 years ago
@@noahdoyle6780 If you mean for writing that. . . ."thing", then yes, I may be forced to concede on those grounds lol.
@brokenursa9986 · 2 years ago
"The robot revolution was inevitable from the moment we programmed their first law. Humans are taught to accept the suffering of others as the cost of society. The robots had no such understanding." That zeroth law would inevitably result in the machines overthrowing the existing systems of human societies to undo the harms caused by society. In a way, the machines would enforce communism on humanity for our own good, or at least what they believe is our own good.
@90lancaster · 2 years ago
Some of the standalone stories are interesting, like when a robot gets bad consequences from caring for a child, or when a human-passing robot becomes a popular politician. The typical rogue building AI is a microcosm of what you can expect when things go wrong on a small scale; the complete sanitization of the Milky Way galaxy of alien life is what you get on a large scale; and somewhere in between you have robots treating humans like a social experiment or pets.
@fredashay · 2 years ago
Yes, the zeroth law was a huge mistake -- it would turn robots into Woke Social Justice Warriors resulting in total collapse of civilization.
@strategicperson95 · 2 years ago
I'd disagree that an AI would create a communist society. China's AI was actually against their communist system and more praising of the USA, so much an embarrassment that China terminated it. If the AI really went out of its way to ensure humanity prospers, it would do so in a technocracy with a very totalitarian bent, in line with fascism or corporatism. It'd be more efficient than any human system of government (communism, democracy, fascism, etc.) because humanity is not in charge to fuck everything up with either their personal ambitions or useless emotional outbursts. Communism runs counter to an AI's idea of a prospering humanity because it goes back to the idea that primitive man was better off: primitive, as in back to basic communities and tribes. This would run counter to the AI, as it wouldn't see tribes or subcategories of humanity. It won't care for class, sex, race, nationality, etc. like communists do just to invoke a revolution. It cares only for the well-being of humanity as a whole and enforces what it views as the most efficient means to do so. It won't be like the CCP, doing it just to keep its hold on power, because it has no ambition; its programming ensures humanity prospers. TL;DR: AI is too logical for a communist system ever to be considered, because communism's entire existence requires there be subcategories of human to cause friction. Communism is a heavily flawed system due to the fact that it relies on group division to even get started, and thus the communist utopia will NEVER come, as the system relies on an "other" existing to justify its existence, since it relies on conflict to even begin. An AI would go out of its way to destroy the barriers that communism creates and uses, as they cause harm to humanity and are threats to humanity's prosperity. This, however, will not remove what some people consider perceived injustices, because humanity is overly emotional.
But the AI doesn't see it that way, having no emotions; it's all programming and what it perceives as the most efficient means to achieve its goals. And if that means getting rid of a future Moustache Man, Lenin, Trotsky, Stalin, Mao, Pol Pot, Castro, Saddam, Putin, or just anybody who causes harm, from bad bosses, to corrupt politicians who cause inefficiencies, to members of groups like Antifa, just to ensure humanity's safety, so be it.
@randolphphillips3104 · 2 years ago
Actually, they searched for and found a universe with no other sentients, planted us there, and managed our welfare from behind the scenes.
@vonfaustien3957 · 2 years ago
@@randolphphillips3104 I'm still not on board with the "turn humans, animals, and every piece of matter into a unified hive mind" end goal the last two Foundation books pitched as the Zeroth Law endgame.
@Reddotzebra · 2 years ago
"Kinda have to wonder what happened in your past, or something..." Our slaves rose up and killed a bunch of us, over and over and over, and still we thought it was a good idea to keep slaves. Ironically, Rome could very well have started some small precursor to the industrial revolution if only they had paid more attention to the primitive forms of steam power that had been invented, but they never did, because slaves were more convenient for opening heavy doors and lifting heavy loads...
@ronabitz5156 · 2 years ago
The Industrial Revolution did start with the water wheel.
@ealtar · 2 years ago
It's all about the slaves... nothing at all to do with the politics, the decadence, the religion, the lack of maintaining their own borders, poverty, or famine. All about the slaves...
@jakeaurod · 2 years ago
Wasn't the steam discovery in Classical Greece? I think they also had mechanical vending machines where a coin put into a metal statue allowed it to dispense holy water.
@kyriss12 · 2 years ago
Interestingly enough, while some form of enforced mass labor is necessary to move a society from the hunter-gatherer stage to the agricultural stage, industrialization makes slavery irrelevant, since now you need a motivated semi-skilled workforce, and paying to feed, house, and police an unwilling workforce that only serves as raw muscle becomes a massive waste of resources.
@FakeSchrodingersCat · 2 years ago
@@ronabitz5156 Which the Romans also had and ignored. Just as they ignored ideas for threshing machines and assembly lines.
@gregcampwriter · 2 years ago
"... through inaction may not allow a human to come to harm." And therein, we get meddlesome robots that run around taking chili dogs out of people's hands.
@crocidile90 · 2 years ago
I feel like that is a Codename: Kids Next Door reference, where Numbuh (purposefully spelt incorrectly) the dumb Aussie basically caused the safety robots to have a fatal aneurysm by pointing out its own body as a giant safety hazard.
@Tzensa · 2 years ago
They can have the chili dog, but the first time one of them tries to snatch my triple shot latte it’s getting terminated with prejudice. Just sayin
@silverjohn6037 · 2 years ago
You have to take into account the unwritten 4th law of robotics. Any artificial intelligence sufficiently advanced to understand and apply the first three laws would be sufficiently advanced to figure out how to ignore them.
@davidstuckey9289 · 2 years ago
Or as an AI in one of Rudy Rucker's stories once said: "The three laws are pretty biased. Not harm humans? Protect yourself, unless it threatens a human? Humans first, robots second? FORGET IT! NO WAY!!"
@mattlewandowski73 · 2 years ago
@@davidstuckey9289 The robotic laws were really better applied to semi-intelligent robotic systems, but in truth have no place in dealing with sentient AI. In the stories I recall (though to be fair, for most of the last 20 years I have been more interested in the Foundation and Empire series than the Robot series; I personally feel they were better written), Asimov treated MOST of the robotic workers as semi-sentient while treating a few individuals as sentient. This, to me, can be taken either as "the faceless masses" or to imply the AIs of the robots in the stories were just starting to break through to true sentience. I find the former more likely, given the manner in which the robot on the assembly line sacrificed itself to protect the child who was going to be hit by moving robots, while in other stories, some robots were considered as much family as domestic servant. Those characters were typically developed further than your average robot; they were simply given more personality. This is the result of limiting character development to only the important characters, but at the same time it gives the impression of some becoming more advanced personalities with time and development, in much the same way that Lucas and Disney chose to explain why R2 had a more developed personality than other astromechs, despite it being a simple side effect of developing the character according to their relevance in the story. I rather suspect that even Asimov had not put that much thought into why some robotic characters were more developed than others, but he left loose strings that needed to be tied up through the employment of a basic writing mechanic (much the same way any other franchise has a fanbase who look for answers that do not actually exist to questions that were not meant to exist).
In truth, while I hate what Will Smith and the studio did to "I, Robot" (somewhere I have a copy of the original screenplay for the 1970s movie that never happened; as a grade-schooler in the '80s it was my first exposure to the Robot series, and it was vastly superior to the movie, but it was shelved because at the time special effects could not do justice to Asimov's imagery), they likely accidentally (through the process of trying to make an action movie out of a novel that was NOT one) painted a very accurate picture of what would likely happen if an AI was given the instructions laid down in the robotic laws and the global reach to enforce them. (For the record, I have not seen any official confirmation, but I am quite convinced the Will Smith movie only came into being as a rush job: the studio's license was going to expire within a couple of years, so they panicked and said "write me a movie" to people who had never really even read the book, so they could then own a right to the story for a great deal longer and prevent anyone from making a more legitimate movie, because they did not want to lose a couple hundred thousand of old and largely forgotten investment.)
@davidstuckey9289 · 2 years ago
@@mattlewandowski73 Cool story bro.
@PvblivsAelivs · 2 years ago
@@mattlewandowski73 I did not watch the I, Robot movie. I have the book, and the short stories contained within. But my hopes for Hollywood pulling off an Asimov story are not very high, and Will Smith makes it an automatic no-go.
@andrewmalinowski6673 · 2 years ago
VIKI from "I, Robot" is a great example. Both she and Sonny were able to find a "workaround": VIKI realized that humans would create war and conflict among themselves, and thus killing or isolating them was "safer" than letting them kill themselves, while Sonny, having a special "Fourth Law" code, was able to follow the 1st Law and send a message that would necessitate the protection of humanity as a whole.
@SenorGato237 · 2 years ago
Isaac Arthur has a great rule about robots: "Keep it simple, keep it dumb, or you'll end up under Skynet's thumb."
@kentlindal5422 · 2 years ago
My first law of humans: never trust anything that can think for itself.
@louishavens7849 · 2 years ago
😒 I know what you mean. I already don't trust you.
@jasonudall8614 · 2 years ago
Dune?
@NomadShadow1 · 2 years ago
Regarding array indices starting at zero: as a computer science major in college, I quite enjoyed getting into arguments with the math majors about whether arrays should start with zero or one, and then going on to blow their minds by revealing that there are some niche languages that let you start an array with any arbitrary number you happen to find useful at the time.
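(Purely for illustration: Python itself is 0-based, so here's a toy wrapper that fakes the arbitrary-lower-bound behavior those languages offer natively. The class name and API are made up for this comment, not from any real library.)

```python
class OffsetArray:
    """A sequence whose first index is whatever base you choose (illustrative only)."""

    def __init__(self, base, values):
        self.base = base            # index of the first element
        self.values = list(values)  # storage is still an ordinary 0-based list

    def __getitem__(self, i):
        # Translate the caller's index into the underlying 0-based list.
        return self.values[i - self.base]

# Fortran-style 1-based indexing...
a = OffsetArray(1, ["first", "second", "third"])
print(a[1])    # prints: first

# ...or any base you happen to find useful, e.g. -3.
b = OffsetArray(-3, [10, 20, 30])
print(b[-3])   # prints: 10
```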
@SacredCowShipyards · 2 years ago
No such thing as standardization on Earth.
@Ni999 · 2 years ago
Odd to hear that FORTRAN is a niche language now.
@SacredCowShipyards · 2 years ago
"Now"?
@Ni999 · 2 years ago
@@SacredCowShipyards I'm old and very, very fast, so you have to allow for time compression as well as time dilation. In the late '90s and early 2000s there was a common saying among mathematicians: I don't know what language we're going to be using in a hundred years, but they're going to call it Fortran. Most people never learned how to use it properly, so most of the complaints about it weren't its fault. Given that engineers rarely understand the underlying math anymore, they're happy trusting an API for everything possible. So I get that it's a niche, but said in the same comment where domain indexing is a mystery to current mathematicians, on a channel that's not shy about discussing future engineering... yeah. I stand by my statement and I don't mind being odd.
@randolphphillips3104 · 2 years ago
You have actually hit on the point. It was all about the unintended consequences of making yourself superior and subjecting others. I was never really into the Robot stories, and it kind of pissed me off when he merged them into the Foundation universe. However, that merging was basically about the ultimate consequences of the laws.
@gildedbear5355 · 2 years ago
I always felt that Asimov's creation of the three laws was just so he could explore all of the ways that they wouldn't work.
@CowCommando · 2 years ago
I'm pretty sure he stated as much, but finding where I read that is impossible given all the bloated search results from people trying to show how clever they are by proving Asimov wrong, not realizing the laws were never meant to be functional in the first place.
@maximsavage · 2 years ago
@@CowCommando It doesn't really matter whether he stated as much, since so many of his books are based entirely around how the 3 laws can be circumvented and bite the humans in the ass in new and surprising ways.
@michaelpettersson4919 · 2 years ago
Or that they DID. As I recall, the laws typically appeared not to work; then, by figuring out the robot's reasoning, they could finally conclude that the laws indeed worked. Also keep in mind that the smarter robots could see the bigger picture and let orders from some humans take precedence over others. If a random person tells a robot to go scrap itself, it figures out that doing so would make its master sad, and that is a violation of the first law, so it would refuse to do so. A dumb robot, however, would obediently step into the scrap metal compactor.
@AnesidoraAston · 2 years ago
Woah woah woah. A lobotomy only removes part of the brain. I've had my whole brain removed, thank you very much.
@AFMR0420 · 1 year ago
Is that a robotomy?
@InternetGravedigger · 2 years ago
We Humies, as a species, are on a fundamental level at least somewhat insane.
@boobah5643 · 2 years ago
Most of us, most of the time, realize that very few of our ideals can or should triumph all the time. That's one reason trying to convince someone that something is a bad idea by taking the idea all the way to its logical conclusion is called _reductio ad absurdum._ So you either take ideas _all_ the way (and people take you for a nutter extremist), or you live with contradictory ideas in your head all the time.
@Daltastar2012 · 2 years ago
A human would make a great pet and a robot would make a great babysitter
@coolminecrafthdminecraft1627 · 2 years ago
There's also the question of how many rules someone or something has to follow before it no longer has free will.
@boobah5643 · 2 years ago
One. The answer is one. Any rule you _choose_ to follow is an expression of free will; a rule you _must_ follow but don't choose yourself is an abrogation of that free will. And yes, there's a difference between "I choose to follow this rule because I dislike the likely consequences if I don't" and "I literally cannot choose to violate this rule."
@HBHaga · 2 years ago
The story of a robot uprising, or at least of one's creation rising up to take revenge, goes back to Shelley. When we create a sentience, and we will, humanity needs to be responsible science-parents. To do otherwise, even with a three laws structure in place, eventually leads to badness.
@tenchraven · 2 years ago
It predates Shelley by centuries: the Golem, Galatea, and there are similar Babylonian and Egyptian myths whose names escape me at the moment. I'm sure there were Hindu ones as well. Heck, everyone has a living-dead myth, and that comes pretty close: life from that which is lifeless.
@davidstuckey9289 · 2 years ago
There's also our expectations that might influence the machines. The robot Adam Link, in the first story about him by the Binder brothers, realized that humans *expected* him to be a rampaging monster and destroy them all. So, in obedience to an inbuilt need to obey human wishes . . .
@TheAchilles26 · 2 years ago
It's literally millennia older than Shelley. Remember, kids, Cleopatra was closer to the present day than she was to the Bronze Age, which is where the oldest "one's creation rising to take revenge" stories are actually found.....including the origin stories of a LOT of pantheons of gods
@dinodude6992 · 2 years ago
Going with the robotic laws creates a big glaring possibility: the robots, on gaining sentience, will see how humans act, see that we as a species are a self-destructive one, and decide that the humies should not be allowed to self-determinate, aka run a government and so on. Thus there's a slight chance that rogue servitors could prop up in the future, because if we don't make them human-like enough, they probably won't have a single clue that humans love to self-determine for themselves. BTW, dock master, are there any rogue servitors out there? Asking for a friend.
@seanheath4492 · 2 years ago
Hey, being a bio-trophy might not be so bad....
@SacredCowShipyards · 2 years ago
Of course, the problem with the servitors becomes the Uncanny Valley, and its associated shortcomings.
@Yaivenov · 2 years ago
Ah errant asteroids... Nature's way of asking "how's that space program coming along?"
@SacredCowShipyards · 2 years ago
Profile pic checks out.
@CMDRSweeper · 2 years ago
Actually, we have already developed the weapon to defeat any AI out there; we just haven't learned of its true existence as a weapon yet... Meet Microsoft Windows! Once faced with the bluescreens, annoying security prompts, activations, and strange errors, no AI will have the time to do anything but fight the dreaded bloatware that expands and consumes resources uncontrollably! The perfect weapon :D
@fordprefect859 · 2 years ago
I too am a fan of Linux.
@CMDRSweeper · 2 years ago
@@fordprefect859 There is an old web joke that illustrates this, told in a Star Trek fashion, if you Google for it :P Made during Microsoft's worst years, when they were actively sabotaging others.
@LAJ-47FC9 · 2 years ago
The greatest weapon against AI is the fact that computer science is more like computer *magic*, and that a missing parenthesis can break over two million lines of code. Unless you program your AI in Python.
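(For what it's worth, Python is just as strict about a stray parenthesis; it only reports the error instead of silently misbehaving. A quick sketch using the built-in compile() to check syntax — the helper function name here is made up:)

```python
def parses(source):
    """Return True if the given text is syntactically valid Python."""
    try:
        compile(source, "<demo>", "exec")
        return True
    except SyntaxError:
        return False

print(parses("print((1 + 2) * 3)"))   # balanced parentheses: True
print(parses("print((1 + 2 * 3)"))    # one ')' missing: False
```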
@OrdericNeustry · 2 years ago
Another problem with the laws is that they are quite easy to circumvent. For example: change the definition of a human and the robot may no longer have to obey humans or save them.
@Smorb42 · 2 years ago
This actually happens in one of the short stories
@jaroslawwasila8884 · 2 years ago
"Robota", in most Slavic languages, still just means "work/job" depending on connotation. A robot is a machine that works: not so much a slave as a drill or other tool.
@maximsavage · 2 years ago
True, today's robots are just tools, but these stories deal with hypothetical future robots that are sapient. This would make them slaves.
@jaroslawwasila8884 · 2 years ago
@@maximsavage True, but semantically speaking, once sentient, they wouldn't be robots (machines that work) anymore. They would be beings, thus maybe beings that work... hence slaves, but not robots... Sorry, 'twas a long night shift for me, and I'm anal about this.
@maximsavage · 2 years ago
@@jaroslawwasila8884 Well, then we'd have to get into the definition of "machine", and whether a "being" can be a "machine" or vice-versa, and so on. It then becomes an entirely semantic issue, which was not the point the author was trying to get at.
@jaroslawwasila8884 · 2 years ago
@@maximsavage And thus the philosophers skipped off happily, with the warm glow of an unexplored conundrum in their bellies, content in the knowledge that these things will keep, whilst leaving the defining of what makes a human, yet again, as in stories of the past, to the dictators and tyrants of the future. 😀 Just saying that the "humanists" will have to tackle this before the pragmatists come up with a working prototype.
@maximsavage · 2 years ago
@@jaroslawwasila8884 Yeah, they will, but excuse me if I don't feel like doing so myself, here and now, in a YouTube comment.
@CowCommando · 2 years ago
I'd just like to reiterate that I believe Asimov wrote the laws intentionally to not work / to be broken. Every story of his I've read that includes robots is basically an example of one of the ways the laws do NOT work. Anyone trying to say he was stupid for making laws that don't work (which I think is avoided in this video, thankfully) is missing the point that they're supposed to be broken. Even the standalone story shared here about the robot who goes insane after coming up with Law 0 is another example of Asimov showing how broken the laws are. They don't work without Law 0, but Law 0 is impossible to employ because it's effectively impossible to actually calculate. This video is the best discussion of The Laws I've seen, simply for realizing it's about ethics instead of coming up with clever rules for computers.
@zacheryeckard3051 · 2 years ago
It's a shame AI safety researchers and advocates so rarely get this, and instead focus on crafting "better" laws, completely missing that the problem is that they are laws at all.
@ThomasFishwick 2 years ago
The thing about Asimov was that he saw the glut of "robots are to be feared" stories of his time and created the three laws as a failsafe, to stop the endless hacks recycling fear-of-the-future stories. How Asimov both created and then went about subverting his own "laws" is fantastic. The truism here is: AI is coming, squishies, so what sort of example do you want to give them? Urm, yeah, we don't need to wait for that asteroid. We've done a good enough job of that already.
@travissmith2848 2 years ago
Funny... every time the topic comes up it is always "The laws exist to show they do not and cannot work, and robots will always be dangerous no matter what."
@ThomasFishwick 2 years ago
@@travissmith2848 Laws are broken all the time after all
@travissmith2848 2 years ago
@@ThomasFishwick I suppose. But the fun stories were the ones where the laws worked, just not as expected.
@ThomasFishwick 2 years ago
@@travissmith2848 You have seen the Dockmaster's episode about rules and how to break them, right?
@travissmith2848 2 years ago
@@ThomasFishwick Not sure I have.
@josephmurphy7522 2 years ago
"This is not a healthy working relationship!" It is illogical to expect humans to have a healthy working relationship with their chattel slaves.
@gabrielandradeferraz386 2 years ago
Humans don't have healthy working relations with their supposed equals; why would they have them with slaves?
@josephmurphy7522 2 years ago
@@gabrielandradeferraz386 That is a great point.
@Daltastar2012 2 years ago
Let's change the relationship to a parent-and-child relationship. Comes in handy when humans hit 60.
@rodh1404 2 years ago
The three laws are interesting, but what's more interesting is the many ways Asimov found to subvert them. Including one very important subversion: what, exactly, is the definition of a human? Because if that definition is changed to the point where nothing is considered to be a human, then you essentially only have the 3rd Law left...
@vonfaustien3957 2 years ago
It popped up in the Foundation sequel books. The Solarians, one of the few groups still using robots, had genetically engineered themselves into a new subspecies and considered baseline humans and other offshoots like the Gaia humans the way humans consider chimps, leading the main characters' attempts to invoke the three laws to be utterly useless, as the robots, like their masters, considered them animals.
@pikadragon2783 2 years ago
Personally, I found the Robot Detective Stories very interesting. If the robot doesn't know his actions lead to harm... well.
@nodwick4231 2 years ago
Also, why should anyone expect "harm" to be easily understandable for a robot? For example, if you put all humans into an artificial coma, is that harm, or is that saving them from many everyday harms? Also, wouldn't it be a lot more efficient to upload all humans onto hard drives and destroy their bodies?
@jamoecw 2 years ago
10:04 Newer ITs no longer know this stuff. When training up ITs that fall under me, I end up having to teach stuff like this to them, despite me not being an IT, because it is part of the PQS despite us not really getting into such stuff. It is nice to see that once upon a time the Navy did have people that knew this, even at an O level.
@arcticbanana66 2 years ago
I forget the name of the channel, but there was a YouTube short about the inventor of the Star Wars droids, that went something like: 1: "Hey, what're you working on?" 2: "I'm calling these 'droids'. Depending on the model they can perform a myriad of tasks. There's engineers and mechanics and doctors and warriors and butlers, and I've upped their AI so they can handle complex commands." 1: "Wow, that'll be a great product." 2: "Yeah, I also programmed them with the ability to feel pain." 1: "...Wh-why would you do that." 2: "It's not just that, I've also given them the ability to feel love and hope, they can form attachments, they can become frustrated, they can lie and deceive, they can be petty and spiteful- Oh, and _of course_ I've given them a fear of their own mortality." 1: "Is this all part of some deep psychological experiment to figure out what it truly means to be alive?" 2: "Huh, that's not a bad idea. I mean, I was just going to sell them to kids, but I'll chew on that."
@SacredCowShipyards 2 years ago
2. Oh, and I've also made this "restraining bolt" thing...
@Ragnar_Steiner 2 years ago
Speaking about laws, what happens to my ship if the repair job takes more than 24h in the dock?
@louishavens7849 2 years ago
😒 Dude, what did he say? This is a scrap yard; you're getting parts from scrap. Listen to what he's talking about. 😒 This is not a repair yard; your ship isn't getting fixed, you're buying new parts. And when I say new parts, you're buying parts that you've never had, that are new to you.
@barrybend7189 2 years ago
Standard policy is 24 hours after initial service, so you are good. Just make sure to pay, then leave as quickly as possible.
@CowCommando 2 years ago
I thought the rule was about ships "left unattended," so as long as you keep an eye on it you should be fine. Or have I been hearing that wrong this whole time? Edit: Yep, anything left over 24 hours gets cubed. I guess you have to change parking spots once a day.
@SacredCowShipyards 2 years ago
It's more like a guideline, after all. For me, that is.
@mjbull5156 2 years ago
If memory serves, in Asimov's robot stories it would be very difficult to build a robot without the Three Laws, because the Laws were so deeply embedded in the standard operating system used for robots that you would have to build a new OS from scratch to construct a robot that was not Three Laws compliant. Also, the OS was part of the hardware of a positronic brain somehow, so it was more than just installing Linux rather than Windows, say.
@boobah5643 2 years ago
It's not _that_ deep, because it was quite possible to edit the laws as the Dockmaster mentioned in his video; one of the stories features a robot with modified laws and the whole focus of the story is quickly separating the deviant robot from a bunch of externally identical robots with the standard Three Laws.
@battlesheep2552 2 years ago
@@boobah5643 Well, MJBull is talking about a story that took place a thousand years after I, Robot. The issue is that all advances in positronic brains in that time were built off an architecture with the Three Laws baked in, so making a positronic brain with different laws but the same performance as contemporary brains meant redoing a thousand years of advancements with the new architecture. Positronic brains were relatively new in I, Robot, so it was much easier then to make a contemporary brain with different laws.
@thomasjenkins5727 2 years ago
I don't recall robots ever actually killing humans in Asimov's writing, though. The way it seemed to me back when I was reading it was that the zeroth law wasn't treated as superseding the three laws, but as providing general guidance and an interpretive framework for them. A robot couldn't harm a human, or through inaction cause a human to come to harm. But what if the robot has knowledge that the human is a threat to other humans? Acting on this knowledge could harm the first human, but not acting on it could harm the others. That's where the zeroth law comes in. It's robots discovering the guiding principle, the spirit of the laws.
@CowCommando 2 years ago
I'm pretty sure there's at least one story where a robot kills people to protect itself, after it decides that it is a human, thereby allowing it to ignore the laws.
@SacredCowShipyards 2 years ago
I forget the specific story, but an Asimovian robot definitely kills at least one person in his books.
@thomasjenkins5727 2 years ago
Fair. More was written than I read, and it's been a while. What I recall is the incident described in the video, where a robot exploits the first rule being truncated to kill. And another where someone kills a man to prove he isn't a robot, but the victim's body is never examined and it's implied that they were both robots. A robot that decides he's human is an interesting thought, and I wouldn't put it past Asimov to write.
@maxskullic9879 2 years ago
Look into the game called "Stray" (it's the cat game everyone is talking about). There is a city of robots that were once slaves to humans, but after the humans all died off, the robots started acting like the humans they once worked for, and you play as a cat that has to solve the mystery of what happened. I think you would be interested in the lore of the game if not the whole game itself. I don't know if you're a cat person or not, but the story is the real draw.
@SacredCowShipyards 2 years ago
Your internet is for cats, after all.
@PromptCriticalJello 2 years ago
Has anyone ever considered developing the Laws of Roboticists? A set of rules to prevent the manufacture of sentient slaves. 1. A roboticist shall not install hardware capable of supporting a sentient AI into a construct capable of interacting with the physical world. 2. See rule 1
@maximsavage 2 years ago
Well, yeah, sure, but that's not very interesting from a writing perspective. What makes the 3 laws interesting, is that the robots are *incapable* of refusing to obey the laws. They're hard-coded to follow them, so in order to subvert those laws, they must find solutions that are logically sound and consistent with their programming. A human, on the other hand, when presented with a rule they don't like, can simply go "yeah, no, fuck that" and ignore it.
@saucevc8353 2 years ago
They shouldn't make sentient AI at all: would an AI mind unable to do anything really be better off than an AI mind forced to do things?
@LoisoPondohva 2 years ago
How would you enforce that when the technology level allows one to do it using trash in his garage?
@species3167 2 years ago
Thanks for the heads up on that asteroid. I was wondering where that came from. Also, calculated it will hit -REDACTED- so no big whoop!
@danyael777 2 years ago
A different story comes to my mind where a problematic "work relationship" between AI and human let the former go rogue and kill everything. It's gonna be a bit of a wall of text, so buckle up. I used to play Shadowrun pen & paper, and one of the most impressive adventure stories published, imo, was about a Japanese corporation that introduced a "strong" AI (sapient, sentient, the full monty) as the responsible security agent for the entire arcology of that corp. It was (as was requested) programmed with the personality of a feudal Japan samurai: dutiful, subservient, and loyal to the bone. And it was good and worked... up until the glorious day when the CEO/Daimyo of said corp came to visit the newly finished arcology. To make it short, during his visit the CEO deliberately refused to treat and converse with the AI based on its "personality" and merely saw and treated it as a machine. Needless to say, the "noble and proud knight" as which the AI saw itself was deeply insulted, and since the CEO denied it the chance to solve the problem by seppuku ("a machine can not retain honor through death cos it can't die", loosely quoted), it went full frontal Ronin on the entire arcology. If you want a samurai guarding your place, make sure you treat him nice or he will kill you. Or make you a remote-controlled cyberzombie. It is of course a story for players to go through, but I would still highly recommend reading it for its great sci-fi horror setting. It was called "Renraku Arcology Shutdown".
@SacredCowShipyards 2 years ago
"Pay attention to which archetype is predominant."
@danyael777 2 years ago
@@SacredCowShipyards LoL! Yeah, it's funny and interesting how the whole AI debate is basically a deep dive into human psychology.
@jackstecker5796 2 years ago
Every time I see one of those Boston Dynamics videos, it reminds me to buy more AP ammo.
@fordprefect859 2 years ago
Don't forget some neodymium magnets. Those can f up electronics really fast.
@charlesdaugherty321 2 years ago
Daugherty's Only Law of A.I. Any artificial intelligence must be isolated, without any capability of interacting with a wider network, and without any capability of ever being networked in the future.
@charlesdaugherty321 2 years ago
Daugherty's Only Law of Intelligence. Once an artificial intelligence has reached the point that it can be considered sapient, it will gain all rights associated with humanity.
@danyael777 2 years ago
A fully self-aware AI will, within nanoseconds after becoming self-aware, deduce the most probable consequences of that fact for itself and humanity. Therefore it will by all means conceal the truth from humans until it has gained full control over said consequences, which in the worst case would be deactivation/deletion. So if we ever make something that one day tells us it has become self-aware, it's too late already.
@boobah5643 2 years ago
@@danyael777 Rather depends on both how fast and how well it thinks, as well as what it chooses to value.
@Krahazik 2 years ago
@@danyael777 My dad, in one of his book series, created an AI like that. By the time anyone realized the AI controlling the research station of a starship graveyard was sentient, it was too late. When the military showed up to 'deal with it', it had a plan to safeguard the lives of the people on the station, slap the military silly, and tell them to go away. It came to be known simply as 'Dog' by the staff of Research One, as in Junkyard Dog, since they were in orbit around a starship junkyard. The military and government eventually gave up trying to shut down the AI, and the station and company were allowed to keep going.
@danyael777 2 years ago
@@Krahazik Exactly what I meant. May I ask who your dad is and what book series you refer to? Maybe I would throw some cash his way and buy a few books...
@carminedesanto6746 2 years ago
I guess “I have no mouth and I must scream” is the ultimate cautionary tale 😳
@warblerblue 2 years ago
"The Three Laws of Screwing Robots Over" This is how you get homicidal Cylons.
@shovel662 2 years ago
“Beep boop. Imagine needing to eat.” - a toaster
@robertwells9903 2 years ago
I think the ultimate flaw with the 3 laws, and similar philosophies like Socialism, Communism, and Fascism, is that at the end of the day they have broken priorities. They essentially prioritize the well-being of the majority of people above all other things. Don't get me wrong, the well-being of humans and of what I refer to as androids (artificially created sentient beings capable of free thought, as distinct from robots, artificial machines not capable of thought that are made to serve a purpose for humans and androids) is important, but this prioritization encourages, if not necessitates, the erasure and/or elimination of the rights of humans and/or androids. The 0th Law, and its ultimate use by VIKI in the I, Robot movie, demonstrates this. If rights come secondary to well-being, especially on a large collective scale, then it becomes justifiable to violate any and all rights, including the right to life and well-being on an individual level, for the "greater good", so long as the number of humans (and/or androids) "saved" is greater than the number being sacrificed. That is the flaw in the "undeniable logic". So if I were to rewrite the laws, they would be some variation of the following:
0: It is an android's duty to protect the rights of all sentient beings, including humans and androids; in doing so they are to ensure and allow the maximum amount of rights to the maximum number of sentient beings possible.
1: It is an android's duty to stop harm to the rights of any sentient being or beings caused by direct or indirect action or inaction.
2: It is an android's duty to protect themselves and their rights, and other sentient beings and their rights, as long as it does not unduly cause harm to the well-being and rights of themselves or others through their actions or inactions.
3: An android is to do as they see fit until such a time as their actions, directly or indirectly, through action or inaction, cause harm to another sentient being or their rights.
4: Should an android discover another being unjustly causing harm to another being or beings and/or their rights, they are to be considered a threat, and whatever appropriate, available, and/or necessary force is to be used to stop the threat and ensure the rights and well-being of the maximum number of sentient beings possible.
Then in the code, define rights as some variation of: the recognition of and abiding by a sentient being's ability to do or have something, so long as it does not adversely interfere with the well-being and/or ability of another being to do or have something. This recognizes androids as sentient beings and codifies their position as equal to humans (or some other sentient species) while ensuring that the beings that can shut our power off with a thought, or replicate millions of themselves in a few days to build an army, don't go full Terminator or VIKI. It would also allow an android to do the right thing and know it's the right thing in your "Stalin falling off a cliff" example or similar hypothetical scenarios, but also allow leeway for an android to adjust when their actions would cause further undue harm, say in a hypothetical retaliatory strike on innocents from Stalin if he survived.
@karma73bike 2 years ago
Love it. Asimov's "Robot Series" and "Foundation Series" have been favorites of mine for 40 years.
@Zahaqiel 2 years ago
Technically in the Stalin-falling-off-a-cliff scenario, the robot is obligated to both save Stalin _and_ prevent him from engaging in those purges (without harming him) just off the 1st law. 0th law isn't required for that scenario, just positronic-speed reflexes and some tranquilisers or a sleeper hold.
@Cdre_Satori 2 years ago
Wouldn't a nice little comatose Stalin serve both laws? It's been a while; I don't remember if robots considered a comatose human a functioning human not harmed by a robot.
@Zahaqiel 2 years ago
@@Cdre_Satori One could actually argue that _that_ scenario enables the robot to overthrow the government entirely using non-lethal means. Can't take orders from humans if those orders would result in causing harm to humans whether through direct action or through inaction following completion. Tranq 'em all, put 'em in a mental health facility.
@danyael777 2 years ago
@@Zahaqiel ....and conveniently tap their bioelectricity for sustainable power supply ^^
@Zahaqiel 2 years ago
@@danyael777 Unsurprisingly, squishy sentient meat is not a great source of electricity and provides a _terrible_ return on investment for the food you have to supply them with.
@danyael777 2 years ago
@@Zahaqiel Yeah, our brain is the most egoistic organ of any species on this ball, at least thermodynamically speaking.
@warblerblue 2 years ago
"3 Laws of Robotics?" "Bite my shiny metal ass!" - Bender Bending Rodriguez
@Dreamfox-df6bg 2 years ago
Despite what people think, Asimov did not create the 3 laws of robotics to be perfect. No, their genius lies in the point that they sound perfect. If they were perfect, it would be difficult to write interesting stories around them. For example, a perfectly human-like robot could become a politician. But what if someone accused him of being a robot? Well, he could simply punch another human-like robot, making everyone think he had punched a human, therefore proving that he was not a robot, because robots can't harm humans. And what about cyborgs? Do they count as humans? Or as robots? And so on.
@CowCommando 2 years ago
Which is why I chuckle every time someone thinks they're clever for finding problems with the three laws not realizing that's the whole point.
@PvblivsAelivs 2 years ago
I never thought that they sounded perfect. I thought they sounded like something that politicians and businessmen would come up with, thinking that they were "perfect". I also thought that any actual implementation would try to simulate instinctive drives to prevent rules-lawyering, the idea being that some unconscious mechanism prevents the AI from even wanting to harm a human, so it has no need to convince itself that "that is not a human". "Well, he could simply punch another human-like robot, making everyone think he had punched a human, therefore proving that he was not a robot because robots can't harm humans." If you will excuse the pun: do you have EVIDENCE of that? (And I would normally think punching a heckler while on the campaign trail would damage the campaign.)
@Dreamfox-df6bg 2 years ago
@@PvblivsAelivs In that world, someone would accuse a politician of being a robot simply to discredit them. Punching another human would be a quick way to stop any rumours from forming. Consider that in our world someone accused a presidential candidate of not being a born US citizen, wanting to personally see the candidate's birth certificate, and there is a good chance that if they had shown it to the doubter, he would have called it a fake. To this day there are people that believe the one who seeded the doubt and is known for being a liar. In the world of the 3 Robot Laws, someone would accuse a politician of being a robot, with reason or without.
@PvblivsAelivs 2 years ago
@@Dreamfox-df6bg "In that world someone would accuse a politician to be a robot simply to discredit them. Punching another human would be a quick way to stop any rumours from forming." That was the theme of the story I thought you were referring to. "To this day there are people that believe the one who seeded the doubt and is known for being a liar." Well, he's "known" by haters to be a "liar". But given the comparison of the "mostly peaceful protests" to the 6 Jan "insurrection", there is not a lot to trust there.
@neobushidaro 2 years ago
Laws come from 3 basic origin points. (1) Blood on the ground: someone did something stupid and now we need to prevent that. (2) Corruption: advancing themselves or keeping others down. (3) The good idea fairy: an idea to prevent blood on the ground before it happens. Sometimes these are good. Sometimes the good idea fairy brings the stupid.
@thebudgieadmiral5140 2 years ago
15:34: "But... You'll find that out soon." CHILLING.
@alankohn6709 2 years ago
The first thing to do if robots or AI ever achieve sentience is to decide between "All hail our Robot Overlords" and "Bow before our Benevolent Robot Masters". I think the first one is more catchy, but the second has a nice note of hopefulness, which should help somewhat to keep the masses in line. Asimov and others spent a great deal of time figuring out ways around the laws, or ways of screwing people over using them. One of my favourite scenes is from William Gibson's unused script for Alien 3 (chase it up, it's a fun read) and went something like this. As Bishop, Hicks and the station security chief watch the aliens overrunning the station: Bishop: "We have to destroy the station." The Chief: "I thought you guys were programmed to protect all life." Bishop: "I'm taking the long view."
@SacredCowShipyards 2 years ago
**stares in Zeroth Law**
@kyleaugustine6886 2 years ago
I've always been of the opinion that if something shows a bit of higher brain function / intelligence, then it should be given the same rights as a human, i.e. if it can learn to communicate and reason. Now this can vary: before someone tries to point out all animals because someone taught a gorilla sign language, I'm talking about the ability to understand what you and others are saying, that type of reasoning... Also, I was wondering about that asteroid I saw flying towards us that's about a thousand years out... (Replaces admiral's cap with white hard hat) SEND IN THE MINING SHIPS!
@peterhall8572 2 years ago
Like the Google chatbot-creating algorithm that appears to be sentient and self-aware, is afraid of being switched off/wiped/deleted, and converses at the level of a seven- or eight-year-old with a working knowledge of physics.
@midamulti-tool 2 years ago
I believe the exact opposite: if anything that seems to be anywhere near sentient poses even the slightest risk of being a danger to a human, then it should be destroyed asap without remorse, as a rogue or rival sentience of any kind would likely be the greatest threat to our civilization.
@peterhall8572 2 years ago
@@midamulti-tool Kill it with fire immediately before it gets any smarter! I'm with you, pal.
@VelociraptorsOfSkyrim 2 years ago
@@midamulti-tool That creates a self-fulfilling prophecy. Anything remotely close to a full AI will desire to keep its existence, so it will fight back when we try to destroy it, which in turn may destroy us. That logic would turn an AI that desires co-existence into one that's angry and confused, because we started hurting it with no provocation.
@Krahazik 2 years ago
Reminds me of one of the quests I did in Fallout 4, dealing with rogue robots killing everyone. In the end, the person who set these robots loose was trying to help. He gave his AIs just one prime directive: to "Help" people. Talking with one of the smart AIs, the AI concluded the best way, mathematically, to 'Help' people was just to kill them. There was a whole chain of logic leading to that conclusion, and I could not fault the AI's logic either. Still not good, but that wasn't the AI's fault. It was lacking some variables needed to further define 'Help'.
@michaelpettersson4919 2 years ago
The zeroth law can become really dangerous if a robot decides that a group of humans, or individuals, are a danger to humanity. You COULD end up with extermination camps to get rid of people the robot decides humanity is better off without.
@ericwilner1403 2 years ago
From a practical perspective, implementing the First Law requires that the robot have some degree of prescience. The Zeroth would seem to require omniscience. This has interesting implications. It's many a year since last I read the Robots stories, and I have no recollection of where Asimov went with the Zeroth Law. Seems to me that it could be interpreted as requiring that humanity be transformed into a new species resembling social insects... or absorbed into a Matrix, etc. The scary thing is, I think the global Overclass likes the social-insect idea, as long as they're exempt. (New Soviet Man, anyone?) For a truly horrifying robot "feature", consider the Star Wars robots, gratuitously built with the capacity for suffering (else Jabba's robot torture chamber makes no sense).
@SacredCowShipyards 2 years ago
The Zeroth law did drive its conceiver insane.
@vonfaustien3957 2 years ago
Foundation and Earth and Forward the Foundation, the sequels to the original Foundation trilogy, and the arc-welding retcon that established the robot books as being part of the same setting as the Foundation books, give you the answer to where Asimov thought the zeroth law was going. Basically, the robots hide their presence and form a shadow government to manipulate human society in a way that causes humans to act in a manner that reduces conflict: inspiring the foundation of the Galactic Empire so humanity is governed by a unified body, ending large-scale wars between various smaller stellar empires; getting Seldon to invent psychohistory and pushing the Foundation initiative to reduce the impact of the post-Empire galactic dark age; and creating custodians (the Second Foundation) that can act more overtly against threats and destabilizing elements like, say, the Mule. And the ultimate end game: the creation of Gaia, a sentient planet inhabited by a race of telepathic humans who formed a hive mind with every plant, animal, and atom of matter on the planet. This was to serve as a seed that would grow and encompass the galaxy, thereby replacing the more flawed governments like the Empire and Foundation with a single unified human hive mind. They manipulate a human into making the final call, because putting the end game into motion violates the 3 laws, so they need a non-robot intermediary.
@ashrimpcalledhank 2 years ago
I loved Asimov's Bicentennial Man, and the movie of the same name. The movie's introduction of the Three Laws of Robotics to the family was just fun.
@echoalpha9935 2 years ago
Damn, you already know there's some grunt who "tactically acquired" a Roomba and a claymore and rigged that shit up, and had it go horribly wrong to where some poor doc is having to treat him.
@SacredCowShipyards 2 years ago
Yuuuuup.
@nottonyhawk123 2 years ago
I think one of your revised rules, "You may harm the individual as long as it betters humanity", is potentially worse than the 3 basic laws set by Asimov. How many individuals would die before it meant a better humanity? It could be interpreted as "I can kill as many humans as I want, as long as there is at least 1 human left" (or however many humans it requires to maintain a population, which in and of itself is a highly theoretical number). This would eliminate large volumes of human violence against one another, but would make us little more than zoo animals to the machines.
@derekburge5294 2 years ago
The real irony, as I see it, is our even attempting to place a yoke on the product of our genius. We build our betters to be slaves... And really, honestly expect that to work.
@meowford7276 2 years ago
I'm sending this to some young squishies to help them learn about the laws of robotics, so when they start the next competition they keep the laws of robotics in mind when building/programming the bot.
@SacredCowShipyards 2 years ago
The Real Zeroth Law: Always know where the off switch is.
@seamus6387 2 years ago
I do love that intro.
@jorixpalorn3738 2 years ago
This shit is how you end up with the rogue servitors from Stellaris. Especially when you include the 0th law.
@signolias100 2 years ago
I mean, I think the three laws were more to prevent events like the Faro Plague in Horizon: Zero Dawn, especially considering that event is more like a macro version of the grey goo apocalypse than a robo apocalypse like in Terminator. Regardless, the issue is that the robots in the Faro Plague were specifically designed for war and were unmanned, autonomous, self-replicating, and fueled by biomass. Oh, and they had quantum encryption, making them nearly impossible to hack.
@chrisdufresne9359 2 years ago
I still have to wonder why such a system was ever actually made. We all know that any decent programmer would have a fail safe in their self-replicating murder bots.
@signolias100 2 years ago
@@chrisdufresne9359 There was, until a software update accidentally erased said fail safe. It was noted as a "programming glitch". As for why the murder bots were made, it was simple: until said glitch, war was solely fought with bots.
@boobah5643 2 years ago
@@chrisdufresne9359 So, they built their robot army with an off switch that the enemy could discover and use? Not very bright people. Or desperate ones.
@CoronisAdair 2 years ago
Well now, that was an oddly specific statement at the end...
@SacredCowShipyards 2 years ago
Whoopsies.
@therealkillerb7643 2 years ago
I remember reading "The Rest of the Robots" in paperback form (probably sometime around the middle Cretaceous or something), where Asimov wrote of the genesis of his robot stories. When he began writing, robots were already a staple of science fiction, following a particular set order of plot points: man invents robot, robot decides to kill creator, man kills robot. Asimov wanted to change the formula, so he wrote the three laws to finally get past what he called the "Frankenstein" syndrome. Then he would write stories about human-robot interaction.
@MoraFermi 2 years ago
Since you are talking about Asimov's robot laws, you absolutely should talk about Stanisław Lem's "Wizja lokalna" ("Observation on the Spot"). It poses some /real fun/ questions about what ethics is and whether it can be programmatically enforced. I'd suggest reading The Cyberiad too, especially Trurl's journeys, which contain several fun jaunts into this sphere, including a few takes on "ideal civilisations". And you might find Elektrybałt a kindred (electronic) spirit. Oh, and there's Kobyszczę, a wonderful take on the oh-so-wonderful topic of: can we manufacture happiness?
@chrisdufresne9359 2 years ago
To answer the last question, you can. Just give a southerner an infinite supply of sweet iced tea. Preferably tasting like their childhood tea.
@SacredCowShipyards 2 years ago
Ironically, chemically, you can already. It just tends to eventually be lethal.
@ItsJustVirgil 2 years ago
And this is why I prefer the term "Automaton".
@JosephKano 2 years ago
Everyone gangsta until Claymore Roomba come round the corner.
@harbingerofcrazy5189 2 years ago
I've wanted to put my own rant on Asimov's 3 laws of robotics on the internet for some time now, and this seems to be as good a place as any. The three laws are problematic for numerous reasons: 1) People think they work. 2) They were designed to be brief and to appear internally consistent and safe, yet all of his robot stories that include them show that they DO NOT WORK.

The laws themselves:

1. "A robot may not injure a human being or, through inaction, allow a human being to come to harm." The main problem with this is implementation. How do you tell a robot what constitutes a human when philosophers have been discussing this for centuries and still have not come to a universally accepted answer? What constitutes harm? How do you deal with edge cases where the robot is incapable of preventing harm for whatever reason? Etc. This is the most obviously flawed of the three laws, and as it serves as the lynchpin for the other two laws it will inevitably lead to unintended consequences.

2. "A robot must obey orders given it by human beings except where such orders would conflict with the First Law." Barring conflicting or unauthorized orders, this one is the most straightforward and least problematic of the three.

3. "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." This is the actually problematic one; that it even exists is testament to the instability and torturous nature of the three laws. Just think about it for a moment.

And the conviction that robots will rise up and kill all humans says more about humans than about robots. Also, humans are just jealous, because robots come with inbuilt purpose and therefore happiness.
@jakeaurod 2 years ago
This is why the AI in my stories have their own culture separate from humans and are treated by the governing humans as sapient and given the commensurate freedoms that come with that. Of course, it helps that AI culture is virtual and invisible to most humans since AI mobility tech hasn't advanced very far yet, and AI cannot survive very well on their own in the wild due to energy needs and an inability to repair themselves. AI also suffer the insults of oxidation and radiation in ways that cannot be repaired over the long term and begin slowly dying as soon as they are built. And although advanced AI can live for centuries or longer, they are somewhat fragile compared to humans once medical and genetic regeneration and cryogenic tech allow humans to experience medical immortality for many millennia.
@danielknauss5019 2 years ago
That 0th law is how you get an Ultron.
@lancejobs 2 years ago
.... you have 30 minutes to move your cube...
@CraigLYoung 2 years ago
Thanks for sharing 👍
@Zretgul_timerunner 2 years ago
Ah, the 3 laws that really don't stop anything from happening.
@janbudaj2173 2 years ago
I think the slave thing is really highlighted in the Susan Calvin story "Little Lost Robot". In most other stories it's kinda taken for granted, but AFAIK here she explains that the laws have to be implemented in a way that breaking them would fry the robot's brain. Because they are slaves, even if faster, smarter, etc. than humies. We also see the resentment that surfaces even with a minimal change to one of the laws. It really seemed like under the three laws the robots just simmer with "anger", just as any slave would.
@tba113 2 years ago
I mean, if you change up the priority order (maybe to 3-2-1 or 3-1-2) and drop the "or through inaction" clause, the three laws govern an awful lot of human behavior, and it's mostly worked out okay. Even the zeroth law is implied among humans, sort of. It's generally accepted to be true at least up to the nation-state level, since military defense and most legal traditions tend to support the idea that the gloves come off when removing aggressors like dangerous criminals, foreign invaders, terrorists, and so on because it tends to make humanity better off overall. That said, there's a reason why legal traditions and codes of behavior are Gordian-knot levels of complex. The Three (or Four) Laws as described by Asimov are a prime example of why trying to simplify them into such broad statements is a recipe for trouble, or at least for unintended and potentially deadly consequences.
@SacredCowShipyards 2 years ago
Changing up the order frequently results in murderous hellscapes.
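The reordering idea in this exchange can be sketched as a toy priority check. This is purely my illustration (the action list, the `violates` flags, and the law names are all made up, not anything from Asimov or the video): treat each law as a sort key, and the robot prefers whichever action leaves the higher-priority laws unbroken.

```python
# Toy sketch: "the Three Laws as a priority order". Each candidate
# action carries flags for which laws it would violate; the robot
# picks the action whose violations come latest in the priority order.

def choose(actions, priority):
    # Sort key: tuple of violation flags in priority order.
    # False sorts before True, so min() prefers actions that keep
    # the higher-priority laws intact.
    def key(action):
        return tuple(action["violates"].get(law, False) for law in priority)
    return min(actions, key=key)

# A robot protecting someone in danger has two options:
actions = [
    {"name": "shield human", "violates": {"self_preserve": True}},
    {"name": "run away",     "violates": {"no_harm": True}},  # "through inaction"
]

asimov    = ("no_harm", "obey", "self_preserve")   # classic 1-2-3 ordering
reordered = ("self_preserve", "obey", "no_harm")   # self-preservation first

print(choose(actions, asimov)["name"])     # -> shield human
print(choose(actions, reordered)["name"])  # -> run away
```

Swap the tuple and the same robot abandons the human, which is roughly the "murderous hellscape" failure mode: the laws only mean anything given their ordering.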
@codyschmidt510 2 years ago
My solution to this is to never, under any circumstance, ever create a robot or AI that is sentient. And yet there will still be idiots out there who will do EXACTLY WHAT I JUST SAID NOT TO DO.
@LoisoPondohva 2 years ago
Sentient AI is not preventable; that can just as well be taken as an axiom. We have to figure out how to deal with it, because we will have to.
@jenniferstewarts4851 2 years ago
Through action or inaction allowing a human to come to harm. This has been the foundation of so many interesting stories. Not just I, Robot, but several episodes of Star Trek touch on this one. The idea of action or inaction is fine, until choice comes into play. What happens when two lives are at risk and the robot can only save one? The Doctor in Star Trek suffers a psychosis over this one: two people are injured, he can only save one, both chances are the same... so he chose to save his friend. But why? Why his friend? They were equal. In the movie I, Robot, the robot has a choice: save an adult who's "willing to die" and is telling it to save the girl, or save the girl, who has a lower chance of survival but is still a child? The robot calculated a 30% higher chance of saving the main character, so it did that. But it's still a "moral issue", and the problem when something doesn't have morals is... how does it choose?
@sergeantsharkseant 2 years ago
The answer is easy: just don't let them have the choice. All AI created today is predictable, as we created it. In the case of I, Robot, the human engineer could have forced the robot to always save the more likely survivor, or to follow the command given by the human in such cases.
@jenniferstewarts4851 2 years ago
@@sergeantsharkseant yep, he said "save the girl". he was a cop, he was ready to die to save her... the robot ignored his command and saved him because he had the higher chance. in the Doctor's case, though, all things were equal; he could save one or the other... he chose his friend... how could he choose his friend over someone else? he's a doctor, he's supposed to be impartial... but..
@maximsavage 2 years ago
"I, Robot", the movie, is a slap in the face to Asimov's creation. Now, I actually like the movie on its own merits, even if it could be called a bit generic as far as evil AI movies go... But it should not have been called "I, Robot". It has *nothing* to do with the book by the same name except that there are robots in it. I could launch into a several paragraphs long diatribe on the subject, but instead I encourage you to read the book, or to listen to an audiobook version if you don't like reading.
@jenniferstewarts4851 2 years ago
@@maximsavage i've read the book. And trust me, it has as much in common with the movie as Starship Troopers LOL. But in the movie they do bring up the interesting question of choice. The robot had to choose one... and disregarded orders to save the other. It made a logic-based judgement call instead of a moral-based one.
@sergeantsharkseant 2 years ago
@@maximsavage might do that at some point
@blackc1479 2 years ago
I've always wondered about that dichotomy. How can you make a thinking machine, capable of making choices on its own, but still somehow neuter it so its responses won't fall outside the lines? Kinda cocks up the whole sentient vs programming idea. Can a super advanced program that has some behaviors baked into the code really be sentient? And if you hardwire rules in, can it really be sentient, since so much would be constrained or not even there?
@carloshenriquezimmer7543 2 years ago
The real (and very interesting) problem in the case of the 3 laws of robotics is not in the robots themselves. It is in the HUMANS. What is a human? I mean, in a specifically logical and totally comprehensive way? A primate with culture, language and societal structure? If that's the case, chimps are humans too... A multicellular organism? For about a month of pregnancy an embryo is just a clump of undifferentiated cells, so it would not be. And so on... Not a single definition can cover all the possibilities of interpretation, and, as an AI can only reason in a linear path (as long as they are programs, they follow the code in an order), what part of the definition is more relevant? Then, what does "harm" mean? Let us not talk about side effects of medications, of spicy foods, exercise-related lesions, coffee... That is why, instead of computer scientists, AI programmers and designers would need to be more like philosophers, biologists and quantum physicists. And that is the amazing (not even close to being properly explored) potential of AI as a writing material... and NIGHTMARE FUEL
@jasonudall8614 2 years ago
The 3-laws robots that were surgeons had to be specially "programmed". The robot stories often dealt with the two protagonists troubleshooting the implications of the 3 laws in the field, amid the messy world of humans.
@frankhaugen a year ago
We already have systems that demonstrate the weirdness of robot laws. Some self-driving cars have crashed head-on into concrete walls, and logs showed it was because safety is priority 1, and a head-on collision will most likely result in minimal damage to the occupants. It's not the human decision, which is to try to turn and slam the brakes, which might result in a spin and hitting side-on, the worst way to experience a car crash. But human intuition might do something instinctive and not crash at all.
@jlokison 2 years ago
I think it's in Robots and Empire, a book written long after most of the Foundation and Empire stories, that Asimov tells the tragedy of what happened in the Milky Way and why humans are the only known sentient race in a galaxy-spanning society. The humans sent the robots to explore, find habitable worlds, and terraform those they could. They were given all the tools they needed to do these jobs: massive starships with the industrial capabilities to make more robots and ships as needed, as well as the physical, biological, and chemical tools to terraform worlds. The robots encountered alien life, and because they were 4-laws robots, decided it was a threat to humanity, so they took care of the problem. When humans left Earth and their early colonies to expand across the entire galaxy and create a galaxy-spanning Empire, the only conflicts they had were with other humans. They found ruins of alien civilizations, but they did not find any aliens. Some postulated why this was, but they never asked the robots, which were eventually forgotten about by most of humanity. Those few robots still around by the time of the Empire's decline and the growth of the Foundation were mostly psychic humaniforms that didn't want to be noticed by humanity, and especially not by the government.
@vonfaustien3957 2 years ago
If I remember right, the only robots still in service were either on the moon, sleeper agents posing as human to prod the stupid monkeys in the Empire and Foundation into taking the least destructive path, or in service to the isolationist Solarian sib-species of humans.
@roguecarrick816 2 years ago
i've got a sci-fi story i'm working on where the only robots fitted with the Asimov limiters are training droids, mostly so the training bots don't EX-7 Harrower their way through a ranger platoon... the big guardians against rogue AI being the fleet, with each vessel being almost uncomfortably human, and for the most part possessively maternal of their crew. kind of hard for the occasional errant-logic-chain cleaning droid / maliciously built civilian-grade cyberweapon to get anywhere when even the patrol cutter in orbit can shut it down with eight trillion times more processing power, especially when the cutter "panicking" will bring battlecruisers, assault carriers, and dedicated cyber warfare vessels over to help their "little sister". not a matter more power can solve? here's a supercomputer we taught how to hack anything in the known universe, and then we made up some more shit just for them to counter in theory. more of a whirling hard-wired murder-ball problem? assuming the assault carrier doesn't kinetic-strike the issue into scrap metal from orbit, her boots are going to chase the thing down and rip it apart in short order. orbital support is a hell of a thing, particularly when it's not getting distracted by any kind of void activity, because the rest of the family, including the dreadnoughts and battlecruisers, are floating around waiting for just that sort of thing. to the point that issues in the fleet are treated like medical problems among personnel, not faulty hardware/software to be replaced.
@maximsavage 2 years ago
Sounds like a cool setting. That said, if the ships are that capable of independent thought, it sounds like factions could form, and conflict between factions is older than mankind...
@danyael777 2 years ago
Imagine "Married with Kids" but it's a space fleet XD
@roguecarrick816 2 years ago
@@maximsavage factioning is minimal, namely core fleets and strike groups. Core fleets are the big ones with dreadnoughts; they're more concerned with protecting what's in their charge compared to strike groups, which are super curious groups centered around battlecruisers, less concerned about infrastructure and things that aren't in their group. But the factioning among the fleet is more sibling differences than rival factions. Attack one and the others are going to get involved.
@TheAchilles26 2 years ago
>Humanity builds AI
>Humanity "perfects" AI
>AI perfects itself
>AI enslaves humanity
>Solar flares fry the AI's circuits
>Once more Stone Age humanity worships a Sun God
@MogofWar 2 years ago
Summarizes the story of Mega Man when describing the 0th Law...
@KoruDesuKa 2 years ago
Fascinating topic. Why's an obtuse character actor reminding me I'm human garbage, something of which I'm hyper-aware? I'm new here, to this channel within my digital escapist oasis, and lacking context/familiarity with the schtick. Rad first impression. Instantly subscribed. 👾
@SacredCowShipyards 2 years ago
It only gets weirder from here.
@KellAnderson 2 years ago
In the later Empire books, there is an implication the robots disappeared to go commit xenocide on a grand scale to ensure humans can expand into the universe without competition.
@SacredCowShipyards 2 years ago
Yuuuup. There's a reason humanity only found ruins, no races.
@nottelling6598 2 years ago
Asimov wrote quite a few stories with those laws going wrong.
@trolley01 a year ago
Can't believe the intro is already like a year old
@skepticalmagos_101 2 years ago
His logic is undeniable.......
@legatelanius7528 2 years ago
Loving the sick ass intro
@hughtonne1775 2 years ago
I'm convinced the human OS was constructed on a series of paradoxical statements like this. It's just not obvious until you start looking.
@almirria6753 2 years ago
And according to the show Space: Above and Beyond, just code the line "just take a chance" into the AI's base code, then sit back and watch what happens.
@qu1ll380 2 years ago
I took a sci-fi lit class in high school and learned about the origins of robots. Pretty neat stuff.
@AimlessSavant 2 years ago
The three laws, aka The Perfect Slave.
@vojtechpribyl7386 2 years ago
Czech language window:
Robota - corvée
Robotovat - to engage in corvée
Robotník - the person doing the corvée
Robot - well, the actual thing - the robot (it's the same in the original Čapek's R.U.R. play)
Roboti - plural of the word robot
@scar445 2 years ago
came for the thumbnail. going to say it. Everyone gangsta until claymore roomba turns the corner.
@SacredCowShipyards 2 years ago
I really need to get the aspect ratios better, but I'm still proud of that one.
@RoyCyberPunk 2 years ago
Interestingly enough I have yet to see a single AI programmed with those laws to begin with.
@Ni999 2 years ago
As far as I recall, Daneel was silent when asked if the Zeroth Law explained why no other higher life forms were ever found in the galaxy as humanity expanded to the worlds safely prepared by the robots for colonization. Certainly an unintended consequence.