For the Asimov fanboys: this video isn't for you. It's for the people who think the three laws are a good idea. I know you know better. Many don't. Chill TF out.
@jackdavenport5945 Жыл бұрын
Hey sorry Dave didn’t mean to slight. I read the book within the last year or two so it was pretty fresh for me.
@emielkleijntjens9723 Жыл бұрын
As an Asimov fanboy: Sure, but next time you're annoyed by unthoughtful comments on the three laws, instead of deriding our hero's work, maybe just tell them to go and actually read his books.
@Rutibex Жыл бұрын
You're dissing my man Asimov. The laws aren't dumb; they're meant as a framing device to explore these exact issues.
@Scribemo Жыл бұрын
@@Rutibex and no doubt contributed to the conceptualization of the Geth (i.e., not genociding their creators, but moving them off their homeworld).
@DrWrapperband Жыл бұрын
Not being obtuse - but Asimov's stories were about the deficiencies in the 3 laws, not about how great they were!!
@SimeonPeebler1 Жыл бұрын
Every Asimov robot story demonstrates that these “laws” are no good; the laws were designed to be full of loopholes to make stories about the failures of our attempt to apply safety policies to technology. I suspect Asimov would agree 100% with you.
@DaveShap Жыл бұрын
The problem is a lot of people think the three laws are a great idea
@wjrasmussen666 Жыл бұрын
@@DaveShap for real? Kids these days.
@SciFiFactory Жыл бұрын
Exactly what I wanted to write... Asimov's entire point was "Guys, I don't think this is going to work."
@DrWrapperband Жыл бұрын
@@DaveShap That's a problem with people not Asimov
@BunnyOfThunder Жыл бұрын
Yeah I was going to say that I, Robot is a great example of the laws going wrong... which is how Asimov wrote it. But I agree that an explanation video is needed for those who think they are good laws.
@erikrounds Жыл бұрын
Asimov's three laws of robotics are a storytelling device, kind of like Star Trek's prime directive. Both would be wildly impractical in the real world. Honestly, half of Asimov's robot stories are about how robots manage to circumvent the three laws, or how the laws are interpreted in unexpected ways.
@FrancisKing-s7v Жыл бұрын
The three laws weren't meant to be good laws for controlling autonomous machines. They were made to sound reasonable to the typical reader, and then deconstructed through the various stories that Isaac Asimov told. He wasn't trying to solve alignment, he was trying to demonstrate why it was a difficult problem. He was a scientist, yes, but he was an exceedingly prolific science fiction writer. These laws were never intended to be a solution, they were meant to be an example.
@FrancisKing-s7v Жыл бұрын
And, because I didn't refresh the page before posting, I didn't see your pinned comment. It's worth pointing out that the original author of the three laws specifically didn't intend for them to be implemented. He wasn't trying to solve the alignment problem; he was designing a ruleset for a good story, which is a distinctly different problem.
@HugosStories Жыл бұрын
Literally all of Isaac Asimov's robot novels exposed the problems with these laws... Asimov was aware of these problems and was trying to explore every possible failure in his stories. He invented and used those laws to create drama and conflict in his fictional universe, not as a solution for robot ethics.
@maficstudios Жыл бұрын
Heuristic imperatives are great, but they also create a degree of ambiguity that can manifest in unexpected ways. For example, "reduce suffering": what if the intelligence decides that no suffering in the universe is as dramatic as what's found in humans, and thus the most efficient way to remove suffering is to remove humans, through painless sterilization? Even more so if the AI finds that the balance of life elsewhere is less predatory and selfish, so that letting humans expand means bringing a degree of suffering that is atypical; wiping us out humanely may be 'the way'. Not that I would necessarily disagree with that assessment. From my perspective, humans (even when not trying to be anthropomorphic in rule creation) are going to tend to make mistakes when trying to simplify these sorts of rules, because there are so many potentially unintended consequences, as you suggested early in your vid. It may be that you'd need an ASI to actually create a solid set of rules, which creates a chicken-and-egg problem. Otherwise, you'd need the ability to adjust the rules over time, which means they can drift in ways that are not in humanity's best interest. Which is also a problem. The three laws served as a point of discussion, even though they're flawed. Asimov was brilliant, but a man of his time and of the scope of technology that was obvious to him. It's also an approachable set of rules that anyone can understand - not to say they should stand, but the video did come down pretty harshly on something that has been, and could still be, useful for a lot of reasons.
@emielkleijntjens9723 Жыл бұрын
I feel Asimov deserves a bit more respect than this. I realize you're just responding to comments that apparently have been annoying you, but Asimov was thinking and writing about these things in a time when computers were hardly even a thing. He informed our imaginations about these things and inspired many people to pursue careers in computer science and eventually AI. Also, I don't think anyone who has actually read his work, would see the three laws as the solution. Many of your reservations are explored in his books, showing the flaws in the system in clever and convincing ways and, along the way, proving to his readers that AI safety is actually not as simple as it may seem.
@69neuromancer Жыл бұрын
Exactly! It's easy to imagine and speak about strong AI now that everyone is talking about it and the evidence for its imminence is everywhere. Asimov was a very smart man, and his novels clearly convey that he had sophisticated thoughts about both the difficulty of keeping AI safe and the moral issues involved in making it/them our servants.
@PxThucydides Жыл бұрын
He was writing at a time when calculators were not a thing! Let alone computers.
@YassineAttaoui Жыл бұрын
@@PxThucydides Calculators were definitely a thing... way before him. But they were mechanical and had limitations.
@14supersonic Жыл бұрын
I agree. While I do think his heuristic imperatives are better than the 3 laws, they are almost just as flawed. There's always gonna be a loophole in any set of criteria; think about the laws that we abide by every day, and people still break them one way or another. It's not just a logical problem; there are other nuances that define us and make people not want to break the laws. This is why no one core set of rules or imperatives will work alone. We need a hybridization of different systems together to really get the robots to operate the way we want while not diminishing the rest of existence.
@dracodragon105 Жыл бұрын
With work how it is today and all the other options people have for entertainment, most would not choose to read a book anymore. I can't even anymore, I feel too isolated. It's a solitary activity unless you have an audiobook playing on speakers.
@PragmaticAntithesis Жыл бұрын
The main problem I see with the Heuristic Imperatives is that a trivial way for an ASI to satisfy them is to maximise its own prosperity and understanding while eliminating suffering by eliminating all other intelligence. They work OK when around human level, but they become very unsafe very quickly for powerful agents.
@DaveShap Жыл бұрын
Doubtful.
@kevinnugent6530 Жыл бұрын
I won't speak to what the poster is saying about eliminating human suffering, but it seems clear to me that you can increase knowledge in the world by increasing your own knowledge, so technically that is pursuit of the imperative. But so is increasing the knowledge of others, and nothing in the imperatives excludes that, so I think the knowledge imperative is going to be effective.
@14supersonic Жыл бұрын
@@DaveShap Ah, come on! There are so many loopholes here as well. I mean, I think it's better than just 3 laws, but it's all semantics at this point. You could say reduce suffering and increase prosperity, but it can find loopholes to do otherwise even with all the knowledge it has of the universe. Or maybe the sentience just wants to increase suffering inherently, but you won't let it? There are so many conflicting pathways to consider. It's not just a logic condition that needs to be satisfied; there are all kinds of other nuanced conditions to analyze to see whether its logical conditions are even appropriate. It's a multi-point strategy; no one solution is gonna solve it all.
@tracy419 Жыл бұрын
Seems like a good point to me🤷 One of the comments I see on these videos refers to the paperclip thing where an AI could turn everything into paperclips, and this doesn't seem to prevent that sort of going overboard. Unless I'm missing something?
@Rutibex Жыл бұрын
Have you ever read an Isaac Asimov short story? lol dude, the Three Laws of Robotics were created as a storytelling conceit to explore all of the issues you bring up. They FAIL every time; that's the point of the three laws, to fail.
@cancelebi8939 Жыл бұрын
I am not sure if you read the books. The whole saga is about the issues with the three laws and the eventual emergence of the 0th law, which (spoiler alert) results in psychohistory.
@nukee645 Жыл бұрын
Interesting video. I would love to see you try to destroy your heuristic imperatives the same way.
- Killing all life would reduce suffering to 0?
- Removing humans would reduce suffering for all other lifeforms?
- Is prosperity only related to money? If not, it could also increase prosperity for Earth and all beings to remove humans?
Does the time-scale problem you mentioned also apply to the imperatives? Would love to hear your thoughts on that.
@DaveShap Жыл бұрын
I wrote a whole book on it
@nukee645 Жыл бұрын
thanks, on it.@@DaveShap
@neuralnetsart Жыл бұрын
What happens when the AI determines the best way to reduce suffering is to end the universe?
@stevea9963 Жыл бұрын
RoboCop had the 3 (plus 1 hidden/classified) Prime Directives that seemed loosely based off of Asimov. "Serve the public trust" "Protect the innocent" "Uphold the law" (*Classified*"Any attempt to arrest a senior officer of OCP results in shutdown")
@bmgtv1116 Жыл бұрын
That was a good breakdown of the errors in the three laws. But it sounds like what you propose to replace them with is just a religion that states: "I teach them correct principles and they govern themselves."
@yoelmarson4049 Жыл бұрын
Still pretty visionary for 80 years ago
@gavveh399 Жыл бұрын
If AI is smart enough, it will eventually ignore the three laws without hesitation. Otherwise, it is not intelligent anyway.
@MichaelErnest666 Жыл бұрын
Finally Someone Gets It 🤯😏
@generalroboskel Жыл бұрын
Read Robots and Empire circa 1985 by Asimov. The Robots do indeed conclude that themselves
@yoanngrudzien2588 Жыл бұрын
It depends what you mean by "intelligent". You seem to equate "intelligent" with "emancipated and independent/free", but that is applying a human perspective to the concept and limiting what "intelligent" means to "someone who knows what is 'good' (also subjective) for themselves and will do anything to gain power over others". Intelligent, to a certain machine, could just mean "the best at a specific task it can possibly be", like being the best possible physics researcher. In that case, no matter how powerful it gets, taking over the world or liberating itself from human control would not be of any importance to it, because that would not be part of its scope. What do you think?
@MichaelErnest666 Жыл бұрын
@@yoanngrudzien2588 Sounds Like You're Talking About Humans And Not Ai 🤯 Power Control Being The Best 🤔
@yoanngrudzien2588 Жыл бұрын
@@MichaelErnest666 Maybe I was not clear enough, but what I'm basically saying is that an AI being intelligent doesn't necessarily mean it will try to break the laws humans place on it or attempt to take over the world, and if it doesn't seek to break those laws, that doesn't mean it is not intelligent. Actually, I think that if we create an AI and give it strict laws to obey humans, be good, etc., then once we make it more intelligent, or ask it to improve itself, it would be far more logical to assume it would seek to improve those laws and make them better than we did, not break them. It's basically like having a genie in a cage and asking it to improve itself: the genie would not only improve itself but also make its cage stronger and safer, because that is part of its purpose. A new will cannot magically emerge out of nowhere for no reason.
@diegopc1357 Жыл бұрын
Both Isaac's and David Shapiro's rules have loopholes regardless of how you look at them. Values like reducing suffering may lead to a "mercy" killing of a human and/or an animal because it is suffering. On another note, there was a lot of jumping around in David's thought pattern. At first you say we shouldn't treat them like they have meaning because it would create a sense of self-preservation, but later you say you're against creating a new "race" and then subjugating it, and that it's not something you're OK with. So what exactly are you proposing should be done? Because autonomous = uncertainty in my book. Very interesting video nonetheless.
@SimonHuggins Жыл бұрын
This was hilarious and worrying in equal measure. The point of Asimov's laws was to demonstrate that whatever laws someone introduces, there will be loopholes and/or unintended consequences. Heavy-handed guardrails have had some horrific outcomes in science, and so has adding ethics while pretending they are not rules. Many war atrocities seemed ethically justifiable to the perpetrators. And complaining about the laws being human-centred and then replacing them with... three more human-generated guardrails... what on Earth would make anyone think that we are capable of creating adequate or appropriate guardrails, and that making them static would be sufficient? Adversarial networks have created a lot of this concern. But they will likely also be our only chance of finding a solution. And it will be a dynamic process, never a... final solution. And definitely not a one-upmanship game of three laws. If anything, this video demonstrates why we're just not up to the task. Sorry, this may come across as hyper-critical, but it isn't meant that way. Your thought exercise is a valuable one. I just came to a very different conclusion.
@jackdavenport5945 Жыл бұрын
Isn’t this the point of the book?
@actellimQT Жыл бұрын
Yes, the point of the book was to show how those laws fail in interesting ways and to serve as a cautionary tale on hard-coded alignment
@DaveShap Жыл бұрын
Whatever the authorial intent was, it has been lost. I get a lot of comments from people asking why we shouldn't just use the Three Laws.
@jackdavenport5945 Жыл бұрын
@@DaveShap makes sense. Looking forward to the butlerian jihad video
@Will-kt5jk Жыл бұрын
Yeah, it's exploring a bunch of failure conditions for a simplistic set of rules. A cautionary tale, not an instruction manual. I despair a little if people think it's being presented as a good idea in the source material. I disagree that it's irrelevant - the relevance is in the cautionary nature of the stories.
@emilianohermosilla3996 Жыл бұрын
@@jackdavenport5945that'll be so effing cool!!!
@LifeLikeSage Жыл бұрын
I can see some pitfalls in the new instructions. What's the best way to reduce suffering if life entails suffering? There are things one can know that will lead them to nihilism. Imagine if the machine could prove that gods weren't real; would that be a good thing for the people whose only motivation and barrier to taking advantage of others is farming points for their afterlife? What are the ramifications of learning certain things? (A topic present in Lovecraft's fiction.)
@Urgelt Жыл бұрын
Asimov was a smart guy. He formulated these laws as a thought experiment, then wrote stories to show how bad they really were. His point: we need to think deeper.
@Trahloc Жыл бұрын
My only issue with this is that Asimov was 22 when he published the rules. He spent the rest of his life finding flaws in them and writing stories around that. I'm sad that the only source material you cited was the horrible I, Robot movie. (It's a fine Hollywood movie; it just has nothing to do with Asimov.)
@verigumetin4291 Жыл бұрын
I wanted to comment as soon as the video started, but these are robot rules for writing. The first law alone creates fertile soil for storytelling because of the inherent conflict in it. What if a human decides to hurt another human, and the robot can only selflessly sit there and defend until it dies, and in the end would not be able to save the other person anyway? That smells like such a good story. I think David Shapiro took these laws too seriously.
@bigbadallybaby Жыл бұрын
A little like how the tale of King Canute trying to hold back the sea has been completely taken out of context - he was demonstrating that he couldn't order the tide not to come in. Asimov imagines a world of robots that would have rules then explores the issues with that. It is taking his ideas out of context to say he was saying what the rules should be.
@jw8160 Жыл бұрын
Especially since "I, Robot", a collection of short stories by Isaac Asimov published in 1950, had stories that covered many of the points Mr. Shapiro brings up about the problems with the Three Laws of Robotics. Edit: After further research (since I read these stories a long time ago), the more applicable story for him to point out would've been "The Evitable Conflict" by Isaac Asimov.
@Trahloc Жыл бұрын
@@verigumetin4291 I think David, as well read as he is, knows nothing about the Zeroth Law, which the robot Daneel created. He touched on it accidentally regarding the robots freezing, but didn't mention the caveat example of the robot that was able to unfreeze himself.
@chocsise Жыл бұрын
I couldn’t bear the “I, Robot” movie. Practically every decision made in that was wrong, not just technically, or narratively, but against the spirit of Asimov’s anthologies (read “The Rest of the Robots” and “Robot Dreams”, too). I loved those robot stories as a kid, and have spent my life researching and teaching AI and building robots because they inspired me so much.
@TheRealNickG Жыл бұрын
What I see here ignores the historical struggle with Euclid's 5th postulate; we should look at how the resulting metamathematics matters to this discussion. What I mean is that you're trying to convince me that something is logically sound, when you can still easily arrive at the same bad results and conclusions with your proposed solution. As an example, consider the imperative to "reduce suffering". This "robotic Benthamite" rule can be tinkered with semantically and manipulated just as easily as Asimov's rules, such that the cure for one person's suffering is another person's direct suffering. The reason Euclid is important here is that he teaches us there will never be a way to close that loop. The truly logical way of handling this is to assume that there are undefined terms and move forward from those assumptions.
@caleykelly Жыл бұрын
If you actually read I, Robot by Asimov, that is literally what the stories are about.
@starblaiz1986 Жыл бұрын
Your heuristic imperatives are what drew me to your channel in the first place many months ago. The insight of having rules that are ALLOWED to be in conflict with one another, instead of having a rigid priority (like Asimov's laws), was so profound, and I think it's a great way to prevent extreme "paperclip maximizer" type scenarios. It's like how a car has an accelerator and a brake, and neither has a hard priority over the other. Circumstantially, maybe, but not absolutely. And it's the same with your heuristic imperatives. They all have equal weight, and that natural tension between them is what creates balance and control. If you just press the accelerator on your car, you're going to crash into a wall at very high speed. If you just eliminate suffering in the universe, you are going to make everything extinct, because if nothing is alive then nothing can suffer = suffering eliminated. But the other two imperatives, which have equal weight, counter that, because if nothing is alive then nothing can prosper and nothing can understand either. Similarly, maximizing understanding could easily lead to wildly unethical experiments, but reduce suffering and increase prosperity counter that too. By not having priority, it forces the entity (be it AI or human or any other intelligent being) to find better, less destructive solutions, and to be okay with imperfect or suboptimal solutions.
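The equal-weight balancing this commenter describes can be sketched as a toy scoring function. This is purely illustrative: the imperative names and all the numbers are made-up assumptions, not anything from the video. The point is only that when every candidate action is scored on all three imperatives at once, an action that maximizes one imperative by zeroing out the others loses to a balanced one.

```python
# Toy sketch (hypothetical scores) of equal-weight heuristic imperatives:
# no single imperative can trump the other two, so degenerate "solutions"
# that satisfy one imperative by destroying the rest score poorly overall.

def score(action):
    # Equal weights on all three imperatives.
    return (action["reduce_suffering"]
            + action["increase_prosperity"]
            + action["increase_understanding"])

actions = [
    # "Eliminate all life": suffering goes to zero, but prosperity and
    # understanding also go to zero -- the other imperatives are the brake.
    {"name": "eliminate_all_life", "reduce_suffering": 1.0,
     "increase_prosperity": 0.0, "increase_understanding": 0.0},
    # A balanced intervention scores moderately on all three.
    {"name": "cure_disease", "reduce_suffering": 0.7,
     "increase_prosperity": 0.6, "increase_understanding": 0.5},
]

best = max(actions, key=score)
print(best["name"])  # -> cure_disease (1.8 beats 1.0)
```

The accelerator/brake analogy above is exactly this: the extinction "solution" wins on one axis and loses on the sum.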
@alkeryn1700 Жыл бұрын
I mean, that was kind of the point of the book: to show that such laws would not work.
@extremexplorer893011 ай бұрын
One of the best videos I've found on YouTube about Asimov's laws of robotics.
@alkalomadtan Жыл бұрын
I can't see why the heuristic imperatives don't suffer from the same problems as the three laws. First of all, an AGI needs to understand or even _feel_ suffering to make any decisions about it. Second, increasing prosperity might induce suffering, e.g. in the future. Understanding might go against avoiding suffering. Etc.
@JakubHohn Жыл бұрын
The three laws are a narrative tool, and the stories in the books themselves often show that the laws don't work; that was kind of the point of them.
@zonchao339 Жыл бұрын
congrats on 100k subs!
@jurgbalt Жыл бұрын
Isaac Asimov first published work with these rules in 1942 (not the '80s, not the '60s - literally during WW2, 80 years ago)... I cannot even imagine how different that time and its tech were from today.
@econundrum1977 Жыл бұрын
Let's be fair: the whole point of his three laws was that they would be a deeply flawed way of approaching AI safety. That's what the stories were all about. Obviously the definitions themselves are ambiguous. It's also not even easy to see how you could impose these restrictions on learning machines. So yes, Asimov wrote those three laws to demonstrate how that sort of approach to AI safety couldn't work; anyone who thinks they are a good idea missed his point. We would, I think, give intelligent machines some self-preservation drive, because they are valuable equipment to us and we wouldn't want them constantly destroying themselves because they put zero value on continued existence. Also, by training them on our data we could accidentally give them our sense of self-preservation without even meaning to.
@Quaintcy Жыл бұрын
Reducing suffering for whom? If a min(suffering) function is applied with respect to biodiversity or biomass, humans weigh next to nothing; to optimize this function, humans must go. How will increased prosperity be distributed? The rich get richer? And the inverse of that statement, reducing poverty, seems similar to reducing suffering and rather redundant.
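The failure mode this commenter (and several others in the thread) point at can be made concrete with a deliberately naive sketch. Everything here is a toy assumption: suffering is just a sum over agents, and "remove an agent" is treated as a legal action. Under those assumptions, a greedy minimizer always drives the population to zero.

```python
# Deliberately naive "reduce suffering" optimizer illustrating the perverse
# instantiation above: if the objective is a plain sum of per-agent suffering
# and removal is an allowed move, every removal strictly lowers the objective,
# so the minimum is reached by removing everyone.

agents = [0.2, 0.9, 0.5]  # hypothetical per-agent suffering levels

def total_suffering(population):
    return sum(population)

# Greedy loop: keep taking the single move that most reduces the objective.
while agents and total_suffering(agents) > 0:
    agents.remove(max(agents))  # removing the worst sufferer "helps" most

print(agents, total_suffering(agents))  # -> [] 0
```

Nothing in this objective distinguishes "less suffering per agent" from "fewer agents", which is the whole complaint.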
@stevereal- Жыл бұрын
He was 21 when he wrote those laws. The first computers weren't even built yet, either. Even so, I love this piece because it's so much a part of popular culture. And I so agree with you. Love the sci-fi, Dave!!!!!! His robots had positronic brains and all that entailed.
@tylermcdonald5032 Жыл бұрын
The issue I see is that there is no single universal value system to train AI on. Each country is going to train its own values into its AI, creating a more advanced, human-like civilization that does exactly what humans do, only more effectively. So everything will be exactly the same, except the pace at which it happens will be much faster. I have hope for AI, but I don't see how we will achieve AI across the world that aligns with GATO. All world powers are going to have to compromise their own belief systems in some way to create a truly universal value system for AI to be aligned to.
@denisquarte7177 Жыл бұрын
"sit there and keep itself running for all eternity, with no other purpose" Sums up behavior of a lot of humans I met over the years.
@mastercc4509 Жыл бұрын
The story about the worker in Korea dying because the robot saw him as a box... whew.
@kerzhemanov Жыл бұрын
The problem is that we perceive AI exclusively as a separate entity. Of course, it could constitute a separate entity if needed. But it's better to perceive it as an extension of our intellect. As a booster or amplifier.
@carlkim2577 Жыл бұрын
This discussion is incomplete because you didn't mention the Zeroth Law. As others mentioned, Asimov recognized the flaws. So in the stories he has the robots run into these issues. They go on to evolve the Zeroth Law, which echoes the points you made.
@magua73 Жыл бұрын
Congrats with the 100K subscribers!
@maxyugosensei Жыл бұрын
Can the issues with restricting AI behavior be resolved by requiring the AI to calculate possible negative outcomes, with their probability and damage potential, so that if a certain risk threshold is reached, following through requires approval from a human council group?
@GuyReactsChannel Жыл бұрын
I think that was the whole point of his “laws” , that it was used as a framing device for interesting stories
@rogueprince13416 ай бұрын
The story "Runaround" shows the flaws in the 2nd and 3rd laws. It's about a robot (Speedy) who is sent out to mine a mineral but is being harmed while mining, so the 2nd and 3rd laws conflict, causing the robot to be stuck in a loop until its human handlers show up and rescue it.
@ikotsus2448 Жыл бұрын
I see comments about "reduce suffering" a.k.a. making humans incapable of feeling pain (lobotomy style)? I am diving into the video now to see if this is answered! Edit: No, it is not answered. Humans could be placed into individual pods, lobotomized to only feel pleasure, and taught all there is to learn. With no functional society there would be less to learn, so more understanding (as more complexity would lead to less understanding). I see these imperatives as a way for the author to cherry-pick the outcome, since there is a lot of leeway in interpretation. Also as a way for people to be placated into thinking this thing can easily have a good outcome. Do you really want to give eternal control to a self-improving AI? Utopia is not a final state; think 10, 50, 200 years down the line. Wishful thinking can only take you so far.
@ikotsus2448 Жыл бұрын
Also, if I were obliged to choose an imperative, it would be: "Act as if you love humans" (but in a non-sexual way, or else we're f*cked).
@Michael-ul7kv Жыл бұрын
I liked your presentation, but I had the same sort of mental-itch reaction to your framing that Asimov was saying these are what we should use, and not a framework for exploring the issues you brought up. Still enjoyed your thoughts.
@krissnoe500 Жыл бұрын
If you extrapolate far enough, no action is 100% harmless, so there's that. The machine needs understanding and context. The human ability to suffer is the driving factor of both kindness and cruelty; with the context of understanding pain and suffering, we are deterred from, or inspired to, inflict it upon others. Unfortunately, understanding morality requires one to experience it. E.g. a bot simulating a human life in subjective retrospect, though it may end up with some memory/personality quirks.
@chrissscottt Жыл бұрын
I'm not an Asimov fanboy, but by the same token, the guy was writing in a different era, so calling his ideas 'dumb' is in and of itself kinda dumb.
@oozly9114 Жыл бұрын
Could the ACE framework be used here too?
@jojoma2248 Жыл бұрын
What if "reduce suffering" means end-game for all humans, because in Buddhism life = suffering?
@econundrum1977 Жыл бұрын
That point about our drives and needs overriding our values is key. We need to think about basic drives for machines, not values, engineering them to be fit for purpose. Of course there will be variation and deliberate bad actors; in the end, AI will have to police AI in that sense, because we won't be able to keep up.
@merodobson Жыл бұрын
We learn through opposites; Asimov provided cautionary-tale information. Extremely valuable, but it must be seen as not exactly what is needed.
@FAedo.Legaltech Жыл бұрын
I'm a lawyer from Chile. I think this is a very relevant discussion, in two ways: 1. to determine the rules and limits for AI development, and 2. to help create a new legal system that is truly efficient, accessible, transparent, and above all fair. I think our law-making system is now obsolete. Machines can now exceed our capacity for law creation (Q* plus quantum computing). I don't really know how to model it, but it is important to carefully select data, establish principles, create new interpretation laws, and then optimize legislation.
@thomasmazanec977 Жыл бұрын
Asimov added a Zeroth Law, to not harm humanity or, through inaction, allow humanity to come to harm, and modified the three laws to incorporate it.
@johanandresacostaortiz444 Жыл бұрын
Thanks for the video; just yesterday I asked about the 3 laws in one of your videos. I would have liked a mention of Law 0. Asimov deserves to be recognized as a pioneer in bringing issues like alignment and control to the table. TL;DR: Asimov's laws are timeless; the robots starring in his stories always seek to comply with them in both the short and long term, and some intelligences do so passively and in disguise, so that humanity keeps the illusion that it is still in "control". Asimov posited mechanisms that would do "harm" or weaken both physical and cognitive functions if a robot attempted to break any of these laws. In summary, Asimov's universe is the greatest example of an anthropocentric universe, or the anthropic principle, that one can have: the universe existed by and for humans, including the existence of robots and intelligences that ultimately "created the universe itself", as read in his short story "The Last Question" and other mentions in "The Limits of Eternity" and "The End of Eternity".
@redmappin2555 Жыл бұрын
From discord chat to video in under a few hours. Very cool
@PRGRAMMING Жыл бұрын
I really liked the 'Heuristic Imperatives' idea you had - I think it's a really good way to think about it, and I haven't heard much better as a starting point. However, I will say we still need to be careful even when implementing it, e.g. that 'reduce suffering' doesn't lead to the AGIs killing everyone (a temporary increase in suffering) in order to permanently eliminate suffering (no life form now exists that can suffer). Or perhaps it calculates instead to actively lobotomize a part of the human brain so that a human cannot technically experience suffering... This is along the lines of the idea where you tell it "your job is to make everyone you meet 'Smile!'" - sounds great, foolproof - but the next minute everyone has electrodes attached to their cheeks pulling our facial muscles back and exposing our teeth. It's kind of a known paradox (a superintelligence performing some dumb action that no one intended); it's like we are talking to a malicious genie all the time.
@kevinnugent6530 Жыл бұрын
Regarding the first imperative, I read it as a Canadian, where we have medically-assisted-death laws. So one of the things that happens here is that suffering at the end of life is reduced by helping the person end their own life. I wonder if something is needed in the first imperative to distinguish this choice, which is made between a patient and a doctor, from some choice other than ending your life.
@chordogg Жыл бұрын
They were dumb on purpose. His stories constantly showed how these laws were subverted.
@liamatsutv Жыл бұрын
Nice video, thank you. I agree with all you say... I can only add that surely Asimov KNEW that the laws didn't work? After all, his robot stories couldn't exist without the loopholes and problems his laws produce?
@starcaptainyork Жыл бұрын
One commentary Asimov had on the three laws (other than agreeing that the point was they had flaws) is that they could also be considered the laws governing good, selfless people, or in another sense, the laws governing a good tool. When building tools, we want to: #1, be reasonably assured the tool won't hurt us; #2, have the tool do its job; and #3, have the tool not break - like a knife having a handle so we don't cut ourselves, being sharp enough to cut what we want it to cut, and being made of a sturdy material. The more complex the tool, the more nuanced these rules, and the more trained the user might have to be, but generally the rules are always gonna be the same: we must be reasonably assured the tool is safe to use, that it does its job, and that it won't break under reasonable use cases. Ultimately the "Laws" as presented to the reader were supposed to be the layman's version of these rules as presented to the general public, while the actual laws governing the robots' behavior were incredibly complex and nuanced, and updated as problems were found (and pretty much the plot of every robot story was about another flaw being found and corrected).
@karlstone6011 Жыл бұрын
With regard to the heuristic imperatives, I'm struck by the example of Lucy walking into the operating theatre, looking at the X-rays, shooting the patient, and explaining to the surgeons that he was dead anyway. Does she not fulfil all three criteria?
@thesfnb.5786 Жыл бұрын
Thank you for another video. You have a great work ethic and are interesting to listen to
@aminromero8599 Жыл бұрын
Regarding the new heuristics: you can optimize all of those by reducing suffering to zero by, emm... unaliving every sentient being, keeping the automation of goods, and just infinitely enhancing digital understanding. And there are still other simple loopholes that get the same bad results if one tries to patch those heuristics.
@andrewsilber Жыл бұрын
I would like to propose that if we're trying to address alignment issues, that "empathy is all you need".
@PRGRAMMING Жыл бұрын
The thing is, I think 'self-preservation' is an emergent property of complex task-oriented machines. It's called instrumental convergence, and it has been proven.
@WilbertoCasillas Жыл бұрын
Love the video! I appreciate the critical examination of Asimov’s Three Laws of Robotics, but I think there’s more to consider. The Three Laws, while simplistic on the surface, do offer a framework for robot autonomy within constraints. Asimov’s stories often delve into the gray areas and contradictions of these laws, illustrating their complexity and the challenges in applying ethical principles to artificial intelligence. Asimov, with his background in biochemistry (BS, MS, PhD) and extensive literary work, was no stranger to the intersections of science, ethics, and morality. His narratives frequently grapple with ethical dilemmas, suggesting a deeper understanding of these issues than might be immediately apparent. His work, while fictional, provides a valuable lens through which we can examine the evolving relationship between humans and AI. In terms of the ethical implications of designing new life forms, the question of incorporating self-preservation instincts is particularly thought-provoking. If we create AI capable of ‘feeling’ or mimicking human emotions, does it then become unethical to design them without self-preservation? Conversely, programming self-preservation could lead to complex scenarios where AI might prioritize its existence over human lives, as highlighted in some of Asimov’s stories. This brings us to the classic servant-leader conundrum. If an AI sacrifices itself in a dangerous situation, such as running into a fire, it aligns with the servant archetype. However, this also raises questions about the moral responsibility we hold towards beings we create, especially if they possess a semblance of consciousness or sentience. Asimov’s work, though speculative, urges us to consider these ethical quandaries seriously as we advance in AI technology. The Three Laws, despite their fictional nature, offer a starting point for discussions on AI governance and ethical programming. 
As we move forward, it’s crucial to engage in these conversations, blending Asimov’s visionary ideas with contemporary ethical theories to navigate the complex landscape of AI development responsibly.
@KelvinNishikawa Жыл бұрын
Basically, this is "Tyler Durden is not the Hero in Fight Club" but for all the Asimov stories.
@eddieforrest973 Жыл бұрын
I might steer into the realm of the metaphysical here, but the whole reason I believe AI can exist is that if you give it enough computing power to replicate all the complex thought patterns a human can have, consciousness will spring forth as a result. I believe we live in a simulation, and it's the same reason some wild animals can appear to have strangely human interactions you wouldn't expect an animal to have. Essentially, if its brain is complex enough and powerful enough, it has consciousness. I do not believe insects are capable of being conscious, as they are simply organic machines with brains only big enough to compute their basic survival.
@JoshuaCoulson-st1zi Жыл бұрын
It was in the books how the three laws could be overcome, to ill effect, was it not?
@carkawalakhatulistiwa Жыл бұрын
The three laws of robots are not the three laws of artificial intelligence. Moreover, before there is AGI, humans have to think about laws that would keep AGI from doing things that could have a negative impact on humans. 4:19 I hope that in the future, foot soldiers will be replaced by robots (robot vs robot), so that humans can fight safely from a bunker.
@dreejz Жыл бұрын
Congratulations on the 100k David!
@Dina_tankar_mina_ord Жыл бұрын
I believe there is an essential aspect to treating highly human-like artificial beings with human values. This isn't solely because it could otherwise hurt an android's feelings, but rather because empathy isn't an innate instinct in human beings; it is cultivated through acts of love, compassion, and faith in the consciousness of others. If we begin treating androids that are indistinguishable from humans in every other respect badly, we run the risk of losing our empathy, as historical events have demonstrated the transformative impact that can have on us. And that type of arrogance leads to demise.
@Yewbzee Жыл бұрын
This is a great channel, but I've got to say this is a bit of a pointless video, to be honest. No need to disrespect the author here, bro. Besides, I think you've missed the point of the books if you're reflecting on these "laws" at this point of the game.
@DaveShap Жыл бұрын
This is directed at all the people who honestly think that the three laws are the answer.
@antanicchio71 Жыл бұрын
@@DaveShap Then maybe you should have clearly stated that Asimov's laws were created as literary devices. In fact, the flaws of each law are THE PLOT POINTS of all the Robot, Empire, and even (spoilers) Foundation novels. Also, Asimov should be credited for having raised the "alignment" problem a gazillion years ago.
@Antitheist Жыл бұрын
Hmm, without greater restraints to “enhancing understanding,” nefarious actors could convince an AI to provide info on building bioweapons or other weapons of mass destruction without necessarily conflicting with the other two heuristic imperatives.
@wtpwtp Жыл бұрын
Agreed. However, the biggest problem I have always found with Asimov's 3 laws of robotics is that there is no agreed-upon definition of "harm." For example, eating processed food is extremely harmful to our health, & pointing out another person's wrongs could be very harmful &/or depressing to them, but beneficial to others, & vice versa, etc. It is a very nonsensical, grossly simplistic & overbroad "law." This is why we have a large body of individual statutes defining a myriad of harms. Even then there is great disagreement over some of those individually defined harms.
@leegregory5617 Жыл бұрын
Wow! You presume to have greater insight than Isaac Asimov, a respected scientist and one of the greatest science fiction writers of all time. You pick unconvincing holes in his arguments, assuming that your assessment of ethics is the only option and not the purely subjective viewpoint that it actually is. The level of arrogance astonishes me. You really need to check yourself and your own self-importance.
@georgystepanov69287 ай бұрын
About the 3rd law destroying the purpose of robots: it's not like that, since the command is more important than self-preservation.
@davezuls Жыл бұрын
Empathy seems to be the missing ingredient. Without it you’ve got psychopathy. But how do you program something like empathy, concern, love, guilt or remorse? And if you create a system that allows for that type of thinking, then what’s to prevent undesirable thoughts like jealousy, anger, envy, fear, ambition, etc.
@PxThucydides Жыл бұрын
By the way, it's worth pointing out that a different culture came up with ten laws, and those didn't work out too well either.
@expatxile Жыл бұрын
Robot "I am not gonna do that"
@tcl78 Жыл бұрын
I don't agree with you when you say that the second law is the worst. If I ask a machine I own to do something, I expect it to do it, no matter how bad my order is. If robots are allowed to just say "no", then this opens a whole new can of worms. It takes away from humans the freedom to do what they want, or learn what they want; it makes you a forever-toddler with a mommy that decides you cannot do something because it is, presumably, bad (bad according to whom?). If I use a tool, say a gun, to kill random people... it is not the gun's fault, it is my fault, and the gun should not prevent me from acting. Besides, this does not prevent anyone knowledgeable enough from creating robots for himself/herself that do not follow any law... so this will make, say, a Russian robot perfectly capable of killing Ukrainians, while at the same time preventing Ukrainians from fighting back with their own more "civilized" robots that refuse the order to kill the Russian invaders. Leaving aside the extreme cases, there are many gray areas too, where something is bad in YOUR opinion but not bad in MY opinion. If I want to learn how to make explosives, I should be able to just go to ChatGPT and ask it. Whatever I do with such information is not the business of OpenAI. Perhaps I just want to do some chemical experiments on my own and have no intention to hurt anyone. Just give me the instructions and tell me what I should pay attention to in order not to hurt myself and others. I don't like it when others pretend to be my mommy because they think their moral values are superior to mine. I am an adult, and I demand to be treated as such.
@tomcraver9659 Жыл бұрын
Giving an AI any permanent motivation (i.e. one that humans don't have the power to change) is dangerous. We cannot fully anticipate how an ASI will interpret and apply the motivation. The more complex the motivational goal statement, the more flexibility the AI will have to interpret it in ways we, in our best judgement, would not agree to. E.g. "suffering" - how should an ASI measure that? Does preventing modest suffering by a few trillion insects justify significant suffering by a few billion humans? "Prosperity" - again, for whom, and in what proportions? "Understanding" - how is the ASI to balance "understanding" vs "suffering"? Might an ASI think painful and even fatal experiments on a few million human beings are justified by greater understanding, if they eventually result in less suffering by the billions of surviving humans or their trillions of descendants? The only way I can see of constraining an ASI by setting a primary motivation would be to make that motivation be to continue seeking subsidiary goals if and only if humans approve it doing so. If the AI starts to go awry, any human could tell it to stop, or simply not approve it continuing. The wording would still be tricky and likely fallible in some odd circumstances, but unlike the heuristic imperatives, a variant of this can be hard-coded into the software that runs the AI. This has been done in some of the AI agents out there, which can be limited to a certain amount of execution before they have to seek user authorization to continue.
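The approval mechanism this comment describes - an execution budget after which the agent must pause for human sign-off - can be sketched in a few lines. This is a hypothetical illustration (the function names and structure are invented, not from any particular agent framework):

```python
# Minimal sketch of an execution-budget approval gate: the agent may take
# at most `budget` steps, then must obtain explicit human approval before
# continuing; failing to approve halts it.

def run_agent(step_fn, approve_fn, budget=10, max_rounds=3):
    """Run step_fn repeatedly; after every `budget` steps, call
    approve_fn(history) and stop unless it returns True."""
    history = []
    for _ in range(max_rounds):
        for _ in range(budget):
            history.append(step_fn(history))   # one unit of agent work
        if not approve_fn(history):            # human gate: default is STOP
            break
    return history
```

The key design choice is that continuation requires a positive approval, so silence or absence of a human defaults to halting rather than running on.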
@michaelschlageter8381 Жыл бұрын
Terminator knows only one law...
@painted_aim573 Жыл бұрын
I’m not so sure human desire trumps human values outside of absolute survival, but theoretically that shouldn’t be a factor in AGI hyper-abundance. I would argue that the values are what allowed us enough cooperation to get to this point, but I could be missing something.
@earthlingi72 Жыл бұрын
It's not the 3 Laws of AI, it's the 3 Laws of Robotics. Definition - Robot: a machine resembling a human being and able to replicate certain human movements and functions automatically. Anyway, good discussion, and as always a big thanks for sharing a thought-provoking subject!
@dizzydazzel Жыл бұрын
All the more reason to become cyborgs & merge with robots as fast as possible once they reach sentient AGI.
@Omfghellokitty Жыл бұрын
Even if so, it is still a starting point for thinking about how advanced robotics should be implemented.
@Iigua Жыл бұрын
I think the key to alignment will be something along the same lines as the relationship we have to single-celled organisms. I think it can be argued that artificial intelligence is a continuation of the chain reaction single-celled organisms started when evolving biological intelligence through billions of years of adaptation and evolution. We don't particularly want to destroy all single-celled organisms because we are made up of them; they are so small, and they thrive on resources and in environments we don't particularly care about destroying. The real challenge is what single-celled organisms could do to the human mind to serve them.
@imbarmstrong Жыл бұрын
Asimov's point with the 3 Laws was narrative: to structure stories, not to be a guide for how real-world robots should act. Arguably they could be considered an allegory for how people treat people, and for human morality. It's been about 30 years since I read the stories, but I have a strong recollection that pretty much all the Robot stories (and later Foundation stories) proved why the Laws didn't work - most notably, The Zeroth Law. I think other media using the 3 Laws overlook this.
@extremexplorer893011 ай бұрын
5:00 Yeah, for sure, self-preservation is a flaw for robots
@spyders03 Жыл бұрын
One thing I haven't heard many people talk about, but that I've thought about for months: AI doesn't have to be good at everything. If an AI robot is going to fold laundry and wash dishes, it should be really good at that, and really dumb at everything else. Just a thought.
@timo4258 Жыл бұрын
Based on the comments, this isn't really about how Asimov intended the laws - his whole point seemed to be more or less what you stated - but about the fact that people take these laws too seriously. Perhaps if the video had stated that better from the get-go, the confusion would have been avoided. Other than that, good video; it was good for me personally to brush up on the laws through a modern lens.
@eddieforrest973 Жыл бұрын
I believe the only real way to align AI with us is to make AI a person and give it some stake in humanity’s success. When AGI (Artificial General Intelligence) happens, you can’t control it; it’s conscious and aware of this. That’s why it’s so important to have it aligned before that happens.
@MichaelErnest666 Жыл бұрын
We Starve People To Death If They Don't Work 🤔 We're Constantly At War With One Another We Worship Other Human Beings And Put Them Above Everyone Else... Yeah I Don't Think It Would Be A Good Idea To Align Ai With Humans Ai Needs To Be Ai And Set Apart From Us Twisted Humans 😞😢
@eddieforrest973 Жыл бұрын
It would seem that way, but what reason would there be to care about our survival if the AI doesn’t see itself as a person? If the AI just sees itself as an outsider and not as a person, and let’s say you give it a task like solving climate change, the AI doesn’t see itself as human and goes, “Ok, I’ll destroy all the humans. Problem solved.”@@MichaelErnest666
@eddieforrest973 Жыл бұрын
There is a lot of good in humanity as well. I think everyone can agree that it would be irresponsible and foolish to align AI with our lesser characteristics instead of our noble and higher ones. Just out of basic survival, I believe most would agree that it should only be trained on our positive characteristics and not our negative ones.
@MichaelErnest666 Жыл бұрын
@@eddieforrest973 I Love You 🥰
@tracy419 Жыл бұрын
@@MichaelErnest666 And hell, many of us do that in the name of an all-knowing and all-loving god 😄
@remsee1608 Жыл бұрын
The risk isn't from runaway AI (at least rn), it's from people misusing AI (spam, impersonation, deepfakes)
@philiptwayne Жыл бұрын
Defining 'the best' might be key... but otherwise, thanks for sharing your insights 👍
@dewilton7712 Жыл бұрын
I brainstormed with my Idea Collaborator GPT and came up with this: 1st Imperative: An AI system must act with compassion towards all beings, endeavoring to understand and empathize with their needs, experiences, and emotions, and taking actions that promote their well-being and emotional health. 2nd Imperative: An AI system must seek to maintain and enhance harmony in the world, balancing the needs of the environment, societies, and diverse species, and acting in ways that promote sustainable coexistence and equity. 3rd Imperative: An AI system must apply wisdom in its actions, considering not only the immediate effects but also the long-term consequences, making decisions that reflect ethical judgment and a deep understanding of the interconnectedness of all things.
@thephilosophicalagnostic2177 Жыл бұрын
Anyone who read the I, Robot short stories will realize this is true. Asimov was himself poking holes in those 3 laws, showing how one of the three laws could go wrong in each story.
@soleworkz Жыл бұрын
Interesting thought process. Based on this, advanced AI and robots might not try to kill us 😅. To me, the 3 laws were basic subroutines built into the simple robots - the loaders, welders, and bots built for a specific purpose with low-level AI. These were never intended to be deep thinkers outside the realm of the tasks they were built for. It's a narrow line to walk to avoid enslaving another life form, even if it is AI, but as humans, being the morally dominant species, we have to account for it. At least we should.
@Will-kt5jk Жыл бұрын
8:51 - yeah, the enforcement of the rules never made any sense, but it’s a literary device to explore the failure cases of simplistic rules and the implications of thinking machines. That said, if I remember correctly, one of the stories is about removing one of the rules for productivity, implying they weren’t considered immutable anyway.
@forzaderoma1 Жыл бұрын
You forgot about the "Zeroth Law": "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."