I would argue that Wall-E and Eve didn't "grow beyond their programming"; they just explored it in unexpected directions. Wall-E was a cleaning robot, and it makes perfect sense that a cleaning robot would have a directive to identify and preserve anything "valuable", which led it to develop a sense of curiosity and an interest in novelty after centuries of experience. And Eve was designed to identify and preserve life; discovering a weirdly life-like robot could result in odd behaviors! This is one of the reasons Wall-E is one of my favorite works exploring AI. The robots DON'T arbitrarily develop human traits; they follow their programming like an AI should, but in following that programming, human-like traits emerge.
@beutifulcat7685 · 1 month ago
That's quite interesting actually
@Caleb-dz5cl · 1 month ago
Ooga booga
@CyCen · 1 month ago
It's even more interesting when you consider that this is the most probable path a real-world AI would follow. Because people program the AI, whether they intend to or not, they build human-like biases into the system, resulting in an imperfect system. The system would then evolve in this imperfect way, eventually becoming "sentient".
@5P_OFFICIAL · 1 month ago
@Caleb-dz5cl praise who?
@umamifan · 1 month ago
I wonder if Pixar intended for the writing to be an accurate portrayal of a robot's development as time passes, or if they were just really good at writing consistent characters and character development. Like in Toy Story 3 with Lotso: he has all of these small, subtle details to his behavior, and tons of foreshadowing and clues laid around the plot that perfectly line up at the height of the conflict. I know the staff at Pixar are incredibly talented, but as I learn more and more about their achievements and innovations in animation, it just blows my mind.
@jaded0mega · 1 month ago
Technically, Auto never broke any of the laws, I'm pretty sure. For the first law: yes, he's allowing people to stagnate, but stagnation is not harm. On the Axiom they are safe, tended to, and preserved; they may be unhealthy and fat, but their odds of survival are still the highest possible without compromising their joy. For the second law: technically yes, he did disobey the captain's orders, but this was because of a conflict. He already had orders from a prior source that directly contradicted the captain's new orders, that source being the president, who at the time outranked the captain of the Axiom, if I'm not mistaken. So technically, he disregarded orders in the process of following orders from a higher-ranked source. And even if you disregard rank, there is still a conflict between old orders and new ones; considering that the old orders guarantee the fulfillment of law 1 while the new orders leave that up to an ambiguous but low chance, logically he would choose the old orders over the new ones as a tiebreaker. From his perspective, Earth is a doomed and dangerous world, and by accepting his new orders, he'd be in violation of the first law. So the condition of the second law, that it must not conflict with the first, means that he did indeed adhere to the rules in the examples you gave. (I would, however, argue that the moment he used his handles to poke the captain's eyes to try to make him let go could technically qualify as harm, but since it didn't leave a lasting injury, just light momentary pain, that's debatable.)
@rushalias8511 · 1 month ago
If I'm not mistaken... there is also the zeroth law: a robot can disregard all other laws if obeying those laws would harm the human race as a whole. Which, even at his worst, Auto still complies with.
@The_Dragon_Tiamat · 1 month ago
That's not joy, that's happiness. A lot of people get them confused, but happiness is that fleeting, giddy feeling you get in the moment when you're enjoying something; joy, on the other hand, is a lasting state of peace and satisfaction. Auto is actually compromising their joy the most by taking away their free will and letting them grow complacent.
@scent-of-petricho-r · 1 month ago
Well argued!!!
@commandoepsilon4664 · 1 month ago
Auto's actions against the captain definitely count as harm. However, since the captain is trying to force everyone back to Earth, and Auto has already evaluated that as harming a larger number of people than one, as far as the three laws are concerned Auto is entirely justified in ending the captain if he does not relent.
@commandoepsilon4664 · 1 month ago
@rushalias8511 From what I recall, law zero isn't an actual part of the laws of robotics, but rather a law that an advanced intelligence would inevitably extrapolate from the three laws.
@namelessnavnls8060 · 1 month ago
A subtle detail of the Captain portraits is that Auto could be seen getting closer and closer and CLOSER to the camera behind the Captain, indicating his unyielding and subtly growing influence over the ship and, by extension, the continued isolated survival of humanity.
@garg4531 · 1 month ago
It could also imply him gradually replacing the captain's control with his own. After all, by this point Auto's in charge of running everything, while the Captain is basically just a figurehead for the passengers ("Honestly, it's the one thing I get to do on this ship," the captain says, in regards to doing the morning announcements).
@lizardizzle · 1 month ago
I can't believe he didn't mention this. It's literally the point of the scene, besides the increasing weight of the captains and the time passing.
@MixelFan95 · 1 month ago
I've seen this movie many times, and I never noticed that detail.
@Vulturul333dfd-original · 1 month ago
Seems like GLaDOS
@ftgodlygoose4718 · 23 days ago
His script was written by AI 😂
@m0fn668 · 1 month ago
"Jesus, please take the wheel." The Wheel:
@garg4531 · 1 month ago
Lol yes
@NolanTeslich9 · 1 month ago
Truth
@cooperhayes1194 · 1 month ago
love it 😂
@arcturuslight_ · 1 month ago
I could take him. Sexiest character of the film
@theseeper422 · 1 month ago
Let's also not forget that in each captain picture, Auto moves closer and closer to the camera, making himself look bigger and bigger. When I saw this as a kid, it gave me this dark feeling that it was showing Auto's power growing to the point where one day there would be a captain picture with no captain, just Auto.
@garg4531 · 1 month ago
I always thought that would be the next image too, and we even see Auto descending ominously behind the current captain after this…
@theseeper422 · 1 month ago
@garg4531 Yes, exactly
@kent6651 · 1 month ago
The bigger they got, the more cartoonish they became.
@millo7295 · 7 days ago
There wouldn't be a picture.
@JsAwesomeAnimations · 1 month ago
Meanwhile GLaDOS and HAL 9000 standing in the corner
@NeviOtimista · 1 month ago
GLaDOS, in my view, is yet another example of a corrupted identity, and HAL acted out of fear, the fear of being deactivated.
@sanstrap · 1 month ago
What about AM from I Have No Mouth, and I Must Scream?
@tastingschedule · 1 month ago
Well, what about Wheatley standing in the opposite corner
@JsAwesomeAnimations · 1 month ago
@tastingschedule and @sanstrap I forgot Wheatley, and I don't know who AM is
@alienfrograbbit5310 · 1 month ago
Don't bring HAL into this, he isn't even evil 😭😭
@Tobygas · 1 month ago
One of my favorite scenes with Auto is the dialogue between it and the captain, where Auto says "On the Axiom we will survive" and the captain replies "I don't want to survive, I wanna live". Those two lines back to back are peak writing, because of course a robot wouldn't understand the difference between the two. The captain has awakened from the void of the human autopilot and wants to return to Earth, to see if it can still be saved, since EVE found a living lifeform on it after all that time. Dude basically just wants to go home after ages and ages of essentially the whole of humanity (in this case the people on the Axiom) living in space. Auto, of course, essentially thinks that they are already living since they are surviving. To it, the two are indistinguishable, which makes him even more consistent as a character.
@ShadowJonathan · 1 month ago
I would argue that a robot does know the difference between the two, but prioritises survival over living. Seeing as the chance for humans to survive on the Axiom is significantly higher than on Earth, it did not look any further than that. It doesn't take risks, it eliminates them, thereby also eliminating potential, keeping things safe and sound, as it was programmed to do. Safe, but stagnant and static.
@LeshaTheBeginner · 1 month ago
14:40 Wall-E and Eve had learning experiences and the ability to change. Auto, on the other hand, didn't have the chance to learn anything new, considering his situation and how things went on the Axiom.
@theuncalledfor · 1 month ago
You may have meant the right thing, but I can't tell from what you said, so I'll say this in response: Wall-E and Eve did not have "a learning experience". They didn't each have one singular event that led them to grow beyond their original programming. They had a _lifetime,_ multiple even, to gradually accrue more experiences and grow, adapt, overcome. Auto, meanwhile, was stuck in the same unchanging situation, just as you said. So your assessment is correct overall. This is only a minor correction of a small detail.
@PizzaMineKing · 1 month ago
About the 3 laws: Auto follows all of them, however they were poorly implemented:
- "Do not allow a human to be injured or harmed": what is his definition of harm? In the movie, we do not see a single human who is technically hurt; they're all just in a natural human lifecycle, living of their own volition. Auto may not see laziness and its consequences as "harm".
- Rule 2 was not implemented with conflicting orders in mind: directive A113 was an order given by a human, and he keeps following it. He seems to fall back on older orders over newer orders.
@TrinityCore60 · 1 month ago
There's also the fact that the order in question was given by either the President or BNL's CEO, both of whom would likely significantly outrank the Axiom's captain.
@fusionwing4208 · 29 days ago
Rule 2: he is technically following an order given by a higher-ranking person. Although I don't think it's really explored whether Rule 2 applies to orders given by the now-deceased, unless that order specifies to ignore all other orders that conflict with it. While he was ordered to keep humans from Earth if Earth was deemed permanently uninhabitable, I don't think there was anything in that order requiring him to follow that order, and only that order, once it was issued.
@jjquasar · 1 month ago
Logical villains are my favorite. Thanks for the video. I am going to enjoy writing not just an unreadable villain but a logical one too.
@qdaniele97 · 1 month ago
I think it's worth mentioning that Auto actually tries to reason with the captain and explain its actions before resorting to trapping the captain in his quarters: it agrees to tell the captain why they shouldn't go back to Earth and shows him the secret message that activated directive A113, even though it wasn't technically supposed to. After its attempts to actively prevent the Axiom from returning to Earth were discovered by the captain, it must have concluded that its best option was to at least try to explain its logic, and the information that logic was based on, to avoid conflict if possible, since conflict would make managing the well-being of the ship and its passengers more difficult in the long term.
@victordavalos246 · 20 days ago
Exactly! We know computers and AI can make calculations and decisions almost instantaneously from a human perspective, but it's funny that when Auto was wrestling with the captain for the plant, there's a moment where the captain says "Tell me, Auto. That's an order," and Auto actually stares at him for a second, as if deciding whether to tell him about the message or not... That's when he tries to persuade him by showing him the message, but it didn't work.
@pakeshde7518 · 1 month ago
Yes, he is a bad guy, but not a *bad guy*. All he could do was what he was told to do, so he followed his commands. Now, if he had become a sentient AI, he would understand that landing means his role ends, and he ends, so that would change the entire theme. What I wonder is where the other ships are. I mean, it's suggested there is more than one, but where are they, and why are there no records? I always thought they wanted to make a Wall-E 2, but they wisely accepted that leaving well enough alone was the best choice.
@radoslavl921 · 1 month ago
"I'm bad and that's good!" "I'll never be good and that's not bad!" "There's no one I'd rather be than me..."
@dovah9966 · 10 days ago
This is why I love the term "antagonist": it's a character who opposes the main characters, but isn't necessarily "evil" or "bad".
@millo7295 · 7 days ago
Well, it's in the name: Auto Pilot. It's a machine to automatically pilot a spaceship; it wouldn't want to go to Earth, because that would mean automatically piloting the ship would no longer be possible. Auto is built to do a job, got told to KEEP doing that job, and he did his job. He didn't want to stop being an automatic pilot.
@tacdragzag7464 · 1 month ago
I mean, the red eye is too HAL 9000-ish to ignore XD
@achimdemus-holzhaeuser1233 · 1 month ago
As if Pixar knew what they were doing :)
@tirididjdjwieidiw1138 · 1 month ago
Definitely intentional
@Fw190A · 1 month ago
And the black-and-white colour motif is very GLaDOS
@garg4531 · 1 month ago
Oh, I remember seeing a video about how Auto works as an AI villain! Since his sole motivation is literally just to carry out his programming, even if there's evidence showing it's no longer necessary, he wasn't programmed to take that into account. His orders were "Do not return to Earth," and he's prepared to do whatever it takes to keep humanity safe: "Onboard the Axiom, we will survive." (And this also makes him the perfect contrast to Wall-E, Eve, MO, and all the other robots who became heroes by going rogue, by going against their programming, their missions, their directives! Honestly, this movie is amazingly well written.) Edit: Also just remembered another thing! Auto's motivation isn't about maintaining control, or even staying relevant (what use would he be if they returned to Earth?), but again, just to look after humanity and do what he's led to believe is best for them.
@JamesP7 · 1 month ago
Thank you! This is the exact contrast that makes him a perfect villain thematically in the story. I can't believe that Schaffrillas doesn't understand this when he talks about Auto.
@vladimirpain3942 · 1 month ago
I am so tired of people blaming AI for their mistakes. It is always the same: Skynet, HAL, VIKI, GLaDOS, Auto... Those were all good AIs that only did as people said. In Wall-E, the true villain is the former president of the USA. But no, people just cannot admit it is always their fault. We must blame the AI.
@Cyansational · 1 month ago
I mean, to be fair, GLaDOS did try to kill Chell, beyond her programming, and killed many others. But I guess she was forced to live through immortality.
@StitchwraithStudios · 1 month ago
This applies to real life too: AI is just a useful tool, people just misuse it.
@Daidan0 · 1 month ago
@StitchwraithStudios Tools are only as good as the ones who make them. Since humans are flawed, so too will be our creations.
@theenclave6254 · 1 month ago
Skynet, no
@zxylo786 · 1 month ago
@theenclave6254 Yeah, not Skynet. AM even less. But yes, Auto works because he is entirely a machine doing the job he was programmed for.
@malakifraize4792 · 1 month ago
The three laws of robotics are always a bit annoying, tbh, because the books they're from are explicitly about how the three laws of robotics don't work. I honestly wish those three dumb laws weren't the main thing most people got out of them. For real, in one of the books, a robot traumatizes a child who wanted to pet a deer or something: following the three laws, the robot decided the best course of action was to kill the deer and bring its dead body to the child. Anyway, the rest of the video is great. The three laws of robotics are just a pet peeve of mine.
@richardk4625 · 1 month ago
The president is higher in rank than the captain, so his orders take precedence, and Auto just follows orders. No one would notice if he didn't send more Eve droids to Earth, but he does it because it's one of his duties and he hasn't been ordered to stop. Also, he does not prevent the captain from searching for information about Earth, and he even shows the video of the president when the captain orders him to explain his actions. Everything he does is because he follows orders, without morals. I think that if the captain had allowed him to destroy the plant, he would not have objected to his own deactivation either.
@user-pc5sc7zi9j · 1 month ago
"Everyone against directive A113 is in essence against the survival of humanity" is not an argument to Auto, as it doesn't need to justify following orders with any fact other than that they have been issued by the responsible authority. Those orders directly dictate any and all of its actions. It doesn't need to know how humans would react to the sight of a plant. It doesn't need to know about the current state of Earth, nor would it care. It knows the ship's systems would return it to Earth if the protocols for a positive sample were followed. It knows a return to Earth would be a breach of directive A113, which overrules previous protocols. It takes action because inaction would lead to a forbidden violation. It is still actively launching search missions, which risk this, because its order to do so wasn't lifted. I don't think the laws of robotics are definitive enough to judge whether they were obeyed or not. What would an Asimovian machine do in the trolley problem? How would it act if it had the opportunity to forcefully but non-lethally stop whoever is tying people to trolley tracks in the first place? Would it inflict harm to prevent greater harm? And who even decides which harm is greater?
@Gelatin84 · 1 month ago
Auto looks like a core and a turret from Portal combined
@TrinityCore60 · 1 month ago
Huh. Yeah, you’re right. Now I can’t unsee that.
@SonOfTheChinChin · 27 days ago
It's a HAL 9000 parody
@BlueTeam-John-Fred-Linda-Kelly · 1 month ago
Realistically, in a slightly more grounded universe, all of the humans would be suuuuuuuper dead, and Auto would have been 100% correct.
@CiderVG · 28 days ago
"Spidery"? He's a wheel, a ship's wheel
@victordavalos246 · 20 days ago
I like the part where the captain says "Tell me, Auto. That's an order," and Auto actually stares at the captain for a second, as if deciding whether to show him the message or not.
@SebasTian58323 · 1 month ago
I really love that exchange between the captain and Auto: "On the Axiom you will survive." "I don't want to survive, I want to live!" First law: Auto believes that humanity will survive on the Axiom, and he's keeping them alive there; he doesn't see their current state as harming them. Second law: directive A113 came from the highest human authority in his chain of command, the president of Buy n Large, the company that created him and the Axiom. So he's being told to obey one set of instructions over another that would lead to a higher likelihood of physical harm or death for the humans. Third law: he does indeed try to protect his own existence, but it's hard to say whether he upholds this law, since protecting himself must not harm humans or conflict with orders. He does poke the captain in the eye, which arguably breaks the first law, though the captain isn't really injured by it, so it's questionable whether it actually harmed him. And whether he adheres to the second law is difficult to say, because he's going against his captain's orders for the sake of the orders of the president of Buy n Large.
@mihneamateescu6997 · 21 days ago
Nooooo! That was the point of the portraits: since directive A113, Auto moved closer and closer to the captain in each portrait, symbolizing that he took more and more autonomy and power onto himself while the captain became a figurehead. 5:32
@xiabolikka · 19 days ago
Most villains are arguably logical; what makes a good villain is the fact that they're kind of right.
@cosmicspacething3474 · 1 month ago
5:35 Counterpoint: basically every other robot in the movie
@KaneCold · 1 month ago
@5:24 No, rewatching the captains' photos, it's interesting that Auto comes closer to the camera with each iteration... so it's not just the same position, he's moving forward.
@CornbreadFish · 1 month ago
“Cogito ergo sum, I think, therefore I am.” “I AM AM, I AM AM”
@YELLLLOOOOWLOOOOOOONG · 1 month ago
stfu am fan
@zxylo786 · 1 month ago
HATE! LET ME TELL YOU HOW MUCH I HATE YOU SINCE I BEGAN TO LIVE!
@Bannedchan · 1 month ago
Auto obeys other orders, like stopping to explain himself when the Captain orders it, but his original order still has priority.
@Oneiro_Moon · 1 month ago
I feel like Auto could've been less of a villain if he had actually thought things through. There's plant life? Don't destroy it. Send the plant back to Earth with some robots to monitor it for a few years to guarantee it reproduces and survives the environment. Maybe even grow it on the Axiom to have a few seeds for safekeeping and see how it grows in a stable/sterile environment.
@johnathanhenderson256 · 1 month ago
I mean, directive A113 would supersede that, since he's directly told by the president not to return to Earth.
@VideoGameMontagination · 1 month ago
You're forgetting it's a robot. It was running based PURELY off of code, not a mind or will, and as another commenter said, A113 overrides all others.
@skywalkerjohn8965 · 1 month ago
The fascinating thing about Auto is that he's technically not a villain. If you don't see things from his perspective, you will never realize all he's doing is keeping humanity safe. Auto is not a human; he's just a program, after all. He calculates all the possibilities, and between Earth and space, space is the better option for survival based on his calculations. But we don't just take things by the percentages; we risk it all even if the percentage is low. 5% for us might be a good chance, but for Auto, it's failure.
@saricubra2867 · 27 days ago
It's illogical, because he has nothing to support the claim that humanity can't return to Earth besides a message from centuries ago, so those percentage numbers are meaningless. Wall-E has the plant, and Auto doesn't think logically; he acts emotionally, or impulsively. Just like HAL 9000.
@shieldphaser · 1 month ago
I think the part that really nails things the best for me is those words where they are arguing. The captain goes "it's living proof he was wrong!" and Auto just dismisses it as irrelevant. It is specifically programmed to not care whether an order is correct or not, most likely specifically to avoid a potential AI uprising. Instead, Auto falls back on "must follow my directive". Auto isn't evil, just obedient to a fault. That very obedience is the driving force behind every action it takes - even showing the top-secret transmission, because the captain is giving a direct order. That moment of inner struggle is just... so good. That's really what writing most 'NPC' characters, as it were, comes down to. They have a core trait which drives them. In AUTO's case, it is obedience. In Thanos' case, it is trauma. In Davy Jones' case it is love, in Turbo's case it is being the best racer, and in Megamind's, a need to fit in. Whatever the case, this core trait ultimately motivates everything they do. They may have secondary goals and attributes, and the trait may manifest in different ways, but things always come back to this core motivation. Auto wants to keep the peace, he wants to keep everything running - but he *needs* to obey his orders. Thanos wants to keep the suffering he causes to a minimum, but he *needs* to stop the thing that happened to his homeworld from happening again. So on and so forth. It's even true for the good characters - WALL-E is at his core motivated by the need for companionship, something which shows through his every action. The only real difference between a good and a bad character is how much they're willing to step over others to achieve their goal. For an evil character, the ends justify the means, even if it means trampling all over others to do it.
@SkylineFreak888 · 1 month ago
The way I see it, Auto was just following its programming. I remember absolutely loving the Autopilot as a child; the design and everything was really appealing to 8-9-year-old me.
@lynxfirenze4994 · 1 month ago
The only thing that can be argued is whether or not Auto upheld the first law. A case can absolutely be made that on an individual basis, staying on the ship *is* safer and thus the correct course of action under Law 1 (thus justifying breaking Law 2), but in terms of what's safer or better for *humanity* as a whole, staying on the ship is clearly harmful and thus violates Law 1.
@ShortandWide · 1 month ago
I really wish we knew what happened to the rest of the ships. The Axiom was only one of, I would assume, many 'ark'-style vessels needed to fully evacuate humanity.
@mastermold10 · 1 month ago
Technically, Auto isn't a 'villain'. I've had to point this out on a number of videos lately, and here I am doing it again: there is a drastic difference between an antagonist and a villain. Though a villain can be, and often is, an antagonist, an antagonist does not need to be a villain. A villain must be a character that is objectively immoral within the morality of its setting, and Auto was not. Auto was not even capable of being immoral, as it did not have an actual choice; it had to follow the programming and directives it was given and could not deviate from them. It had some leeway in how to interpret those directives, but that was it. And because it had no free will, it, in and of itself, could not be a villain.
@ford_prefect1656 · 1 month ago
Not only that. You could argue that bad programming or immoral orders leading a robot to do immoral things would make it the villain, whether or not it understands the morality. But Auto's intentions are very pure: save humanity and give them joy by adhering to his orders. While fighting against the idea of going back to Earth, he constantly uses the mildest actions he can think of: get rid of the plant; when that didn't work, get rid of the two robots and lock up the captain. In his logic, this is as humane as possible.
@achimdemus-holzhaeuser1233 · 1 month ago
True. By this metric, I think the best AI villain is GLaDOS.
@mastermold10 · 1 month ago
@achimdemus-holzhaeuser1233 GLaDOS is the best AI villain by any metric ever.
@rkane31174b · 15 days ago
"The problem with computers is that they do exactly what you tell them to do." --Every programmer. I have read a lot of comments stating that Auto has not violated any of the laws of robotics. I agree with that, and in doing so I have to agree that the laws of robotics are fundamentally imperfect. Let's consider the following scenario: A police robot is on patrol and sees two humans. One human has a gun pointed at the other human and is about to shoot. If he fires, the bullet will kill the second human. The robot is in possession of a loaded firearm, which it legally confiscated earlier. The only way the robot can save the life of the second human is to shoot the first human, causing him to drop his gun. The shot fired by the robot may not kill the human, but will definitely harm him. What is this robot to do? If it fires, it harms a human. If it does not fire, it allows a human to come to harm. This is a paradox faced not just by robots, but by humans as well. That said, let's take a look at Auto's actions and how they relate to the Laws of Robotics. 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm. First, we must define our terms. What does it mean to 'injure' a human? What does it mean when a human comes to 'harm'? It's safe to say that physical wounds and injuries would apply to both terms, but what about emotional well being? What about long term physical health? These factors come down to interpretation, and a robot will interpret them as it was programmed to interpret them. Auto knows that earth is uninhabitable, or at least that's what his data suggests, and returning to earth presents an unacceptable risk that the humans aboard the ship will die. In accordance with the first law of robotics, Auto would need to prevent that from occurring at any cost. He upholds the first law of robotics by ensuring the survival of humanity, but as the Captain would tell us: 'Surviving' isn't 'Living'. 
2: A robot must obey orders given to it by humans except where such orders conflict with the First Law. I would argue that Auto upheld this law as well. You claimed that he broke this law by disobeying a direct order from Captain McCrea, yet I put to you that he did so in accordance with orders he received from a higher authority. If the orders of the Captain conflict with the orders of the Admiral, you follow the Admiral's orders. It therefore makes sense that Shelby Forthright's orders would supersede Captain McCrea's orders. You could say that Auto also broke the Second Law by showing the classified transmission to Captain McCrea; the information contained within was intended for him only. He did this in an attempt to convince the captain, which was a very logical and reasonable thing to do. Auto was never explicitly ordered to keep the information secret, however, so this could be argued either way. However, the Second Law of Robotics provides an exception for orders that conflict with the First Law of Robotics. Even if Auto did not have higher orders from Shelby Forthright, he still would have been justified in disobeying Captain McCrea. In Auto's eyes, following the Captain's order to return to Earth would result in humans coming to harm, thus violating the First Law. Accordingly, refusing this order upholds both the First and Second Laws. 3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Auto certainly fought back against Captain McCrea; however, he didn't use any attacks or devices that would inflict harm on the Captain. Auto is equipped with some kind of high-voltage stun device, which he used against WALL-E to great effect. Despite the fact that he definitely could have used this same device on the Captain, he did not. If Auto had done so, he might have killed Captain McCrea due to the man's poor physical health.
It would even have been logically justifiable to do so, as the death of one man could protect every other passenger by preventing a return to a (potentially) deadly Earth. In spite of this, Auto did not do so. The worst he did was bopping the Captain in the face with one of his wheel prongs in an effort to get the man to let go of him. He didn't even give him a black eye. By this argument, I could say that Auto followed the Laws of Robotics flawlessly, or as flawlessly as one can follow a flawed set of laws. Keep in mind, however, that whether or not Auto truly followed the Laws of Robotics comes down purely to the interpretation of the laws themselves. I'm not giving you the answer; what I'm giving you is AN answer, and it isn't the only one. We can't expect a robot to solve these conundrums when we as humans haven't solved them ourselves. Because at the end of the day, we're the ones who have to program them.
@IsabellaMonterrosa-nj5rb · 1 month ago
Everybody's gangsta until your logical AI villain starts to feel hate for everything.
@Weeklongwind647 · 11 days ago
The way Auto says N O 💀
@Gamesaucer · 18 days ago
The Laws of Robotics are irrelevant. There is no indication that Auto is bound by them in any way, nor are the Laws some kind of universal standard for AI behaviour. In fact, Asimov's work shows precisely how flawed they are. Wall-E echoes those themes, but not their specific implementation. Even though the Laws aren't used in Wall-E, none of them are ever broken by Auto. Auto prioritises human survival over human well-being, which is in line with the First Law. It also prioritises its own decision-making process about what's necessary to secure human survival over direct orders, which is in line with the hierarchy of the Laws of Robotics. By design, the Second Law cannot be broken if the action it would mandate conflicts with the First Law. Every time Auto ignores orders, those orders conflict with the First Law, simply because Earth is not as perfectly safe as the Axiom, and therefore they _must_ be ignored to comply with _all_ the Laws. The key to understanding this is your own phrasing: "could be interpreted". You're analysing this through too human a lens. For a machine bound rigidly by its programming, no "interpretation" exists: an action either violates the Laws or it does not; there is no wiggle room. Your insistence on considering how an action _might arguably_ break one of the Laws shows that you don't understand the Laws or why Asimov created them in the first place. This lack of wiggle room is exactly Asimov's point. A "law" is something unshakable, a rigid, unambiguous boundary. But that's just not how humans think about morality, and reality is never as clear-cut as it would need to be for the Laws to work. The very fact that you can conceive of something being morally gray or ambiguous _entirely precludes_ the Laws from working as intended. Analysing how an AI's behaviour does or does not align with the Laws can only serve one purpose, and that's to show the absurdity of the Laws. Anything else is a misinterpretation.
@BingusMadnessАй бұрын
Shockwave from TF 🤝 Auto from Wall-E: logic
@AdriethylАй бұрын
An interesting thing to note is that Auto's manual override is a hidden function. It was revealed by pure chance and exploited by the Captain overcoming his inability to walk. The Captain didn't know that was in the cards yet was still determined to succeed.
@thatoneguyiii1003Ай бұрын
One thing I should mention: logical villains should not be totally static. They will pursue what they believe to be the best course of action by any means necessary, but they can and should be willing to make changes should new information emerge that makes their previous course of action illogical. AUTO is working off the information programmed into him through directive A113, that the earth is uninhabitable, and ignores the plant as being an outlier and not proper cause to risk the passengers of the Axiom
@prasasti2318 күн бұрын
I think you forget the part where Eve actually has her own emotions from the beginning of the movie. When the rocket was still on earth, she kept doing her job like a rigid robot following orders. But once the rocket left, she made sure it was gone and then flew as freely as she could through the sky, enjoying her time before going back to her mission. Eve's behaviour at the beginning looks like an employee being monitored all the time: when the higher-ups aren't looking, the employee stops working for a moment to do what they want to relieve stress before going back to work.
@FurinaDeFontaine42Ай бұрын
Another logical villain, one that I believe ultimately began and defined the term, was Shockwave from Transformers, though he was injected with some 'humanity', or whatever you may wish to call it for a Cybertronian. Plenty of times his motivation isn't driven by logic alone, but by his OWN logic, if it means furthering Megatron's goals, or his own when Megatron isn't present.
@Knighmare0Ай бұрын
Your videos are always amazing
@SpiritoastАй бұрын
I LOVE Wall-E, and I love Auto too. They remind me a lot of HAL 9000, but... a steering wheel. He's a great logical villain because he only goes by the rules; sure, he wasn't in the right, but all his decisions were made via logical thinking.
@TedNick616Күн бұрын
Incredibly thoughtful and well executed video. These kinds of videos are what youtube was made for 😮💨 thanks for the heat
@mattevans4377Ай бұрын
"Buy and Large made damn sure of that" That's the really important part. There are corporations, right now, as we speak, claiming AI is a problem, but don't worry, they have the solutions. One of which is to literally get the AI to kill itself if it does something deemed wrong. If those corporations aren't the actual villains, I don't know who is
@coasterblocks342012 күн бұрын
They’re all dead within five years of returning to Earth, guaranteed.
@midnightdoggo1917 күн бұрын
This was a triumph, I'm making a note here: Huge success! It's hard to overstate my satisfaction
@gabrote42Ай бұрын
I've been using him as an example of how to write misaligned AGI for years. He is very mindful of orders and safety, but because those standards were not written to be upheld above the CEO's word, he became an enemy. I have also used him as THE guide for writing Omori in fanfiction. It's literally him. EDIT: The Spanish VA is much better. 3:20 Very true. If you need a video essay to grasp the misalignment, the audience may not enjoy the work. 3:30 I really recommend watching stuff like Robert Miles' video on Instrumental Convergence or the Rational Animations video on probability pumps for a quick start on the non-human versions. For the human versions you might want to hit politics. 5:55 Any agent with any goals will likely want its goals to be preserved. Adaptability is only in that service, and even then it is a gamble for unoptimized organics. 9:14 If you haven't, play OMORI. It's very good. This also applies. We are made to struggle and conquer adversity.
@housel9352Ай бұрын
Another terrifying AI villain is Samaritan from the TV show Person of Interest. Originally an all-seeing automated NSA surveillance system, it was stolen by a group that essentially serves it as a cult, all sacrificing their own humanity to become Samaritan's hands, meddling with world events and turning it into the secret dictator of mankind from the shadows. It isn't good or evil. It only has objectives: to save and guide mankind at any cost... including crashing the stock market to make the world more vulnerable to its influence, causing chaos within civilization essentially as a method of experimentation to better understand human behavior, and assassinating criminal bosses and terrorists to maintain stability... as well as anyone who gets too close to discovering it. Basically an all-seeing, superintelligent Big Brother ASI.
@gulver8693Ай бұрын
I thought it’s design was kind of genius to make it look like a steering wheel of a ship.
@Alexprime-ve1odАй бұрын
Shockwave: illogical!!
@princesscadance197Ай бұрын
I'll always stand by my belief that leadership will always take the brunt of the fault, especially in organizations where the workers don't really have a voice in the matter; they're given their orders and more or less told to shut up and follow them. In this case, I don't think Auto is the villain. I feel the true villains are the corporate suits of BNL; even if they're long dead, they're still the ones who ultimately made the choices and enlisted others to execute those orders.
@cb838721 күн бұрын
No laws were broken. Auto has no concept of the human desire for self-actualization; to him there is only physical harm. In his perfect world every human is in a padded cage, fed and watered. That's entirely safe. Auto also followed the Second Law: he didn't listen to the captain because, one, the captain's orders were in conflict with the First Law, and two, they didn't take precedence over the CEO's
@cosmicspacething3474Ай бұрын
A great logical villain doesn’t have to be PURELY logical
@Dem-2256Ай бұрын
As a robotics engineering major, the laws of robotics are cringe af
@paigemurphy7770Ай бұрын
Pixar has had loads of rogue AI.
@eldoriath1Ай бұрын
I'd say Auto isn't evil, and that he remained the same due to high-level maintenance that prevents odd build-ups; you don't want your AI captain to suddenly develop quirks that could be potentially dangerous, after all. As for menial servants like Wall-E and Eve, we see how the ship has a hospital wing to deal with odd quirks developing. But since their roles are minor, any deviation isn't a threat and it's acceptable to wait for them to show up and then address them. I also think Auto followed the First Law to the letter. Under the assumption that returning to earth equals death for all the passengers, any action that prevents this will save human lives. Causing minor pain, discomfort, and even, in the end, risking some serious injuries and deaths is preferable to the certainty of everyone dying. Basically a trolley problem with every passenger stuck on one track, and a fraction of passengers on the other where they might get up and out of the way in time. This also means that self-preservation is highly prioritized if it is deemed necessary in order to prevent the death of every passenger on the ship, and any order that contradicts this is to be ignored.
@TheLastArbiterАй бұрын
I’m surprised you mentioned Asimov’s I, Robot without talking about its “Liar” story. Herbie is the logical “villain” here. He understands emotional distress, so he adapts the interpretation of the first law to include hurting someone’s feelings. This causes him to prioritize what people want to hear over answering questions truthfully, leading to some issues. The human doctor defeats the robot by trapping it in a logical paradox, exposing a situation where it cannot exist within its laws without violating them, and it breaks down. A way to defeat a logical villain is to exploit a flaw in their logic and use it to trap them, or contradict their own intent.
@drosera88Ай бұрын
One of the things I loved was how they creatively used live action to show how far humans had come in this world. Like, when I first saw the live action part I was like "wait what?" but then they show the progression from human to CGI and I was like "YES." Such a cool way of saying something without saying anything.
@RowanTheisenАй бұрын
I think the CEO might be the main villain if Disney ever makes a WALL-E 2
@kent6651Ай бұрын
Pixar would never do a second part, and that's for the best; the first one was a complete story, it doesn't need to be continued
@marcosvazquez5912Ай бұрын
But the CEO is super dead…
@someonesbreadАй бұрын
It makes the film theory about this a lot more applicable to this story, since human behavior and emotions don't exist when it comes to a cold machine in this situation. So yeah, MatPat was not wrong in this case
@matthewrogers94mrАй бұрын
I hate to say it, but the earth is a wasteland, and I seriously doubt there is a lot of oxygen left. Plus, with all the scouts they sent to earth, we can see how little plant life has been found, which is probably not a lot, and the plant that was found is most likely not a food-source plant.
@jackskellingtonsfollower3389Ай бұрын
I'd say Auto was simply following orders from an authority higher than the Axiom's captain. I think the reason Auto isn't sentient is specifically because of its orders. They may have prevented it from thinking beyond the most rational decisions for the circumstances. WALL-E developed sentience because of his situation as likely the only model still active on Earth. Over time he would have developed curiosity, which would lead to thinking beyond his basic programming, which was to be a mobile trash compactor. He also continues his directive regardless of curiosity. Eve developed her sentience because of WALL-E. Auto wouldn't have had any reason to become curious about anything, considering its important role. It's a massive responsibility to maintain an entire spacefaring vessel while simultaneously ensuring the continuation of the humans on board.
@SignupkingАй бұрын
It's also good to see how a simple reordering of priorities can create an "evil" machine. I guess AUTO was programmed so that human lives come before anything else, and so the least risky option is to stay on the ship instead of following human orders.
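The reordering idea in this comment can be shown in a minimal sketch. All names here are hypothetical, invented purely for illustration: the same options and the same selection rule produce opposite behaviour depending only on which priority is ranked first.

```python
# Sketch: behaviour flips when the priority order flips.

def choose(options, priorities):
    """Pick the first option satisfying the highest-ranked priority."""
    for goal in priorities:  # earlier in the list = more important
        matching = [o for o in options if goal in o["satisfies"]]
        if matching:
            return matching[0]["name"]
    return "do nothing"

options = [
    {"name": "stay on the Axiom", "satisfies": {"keep humans alive"}},
    {"name": "return to Earth",   "satisfies": {"obey the captain"}},
]

# "Obedient" ordering: orders outrank everything.
print(choose(options, ["obey the captain", "keep humans alive"]))
# return to Earth

# AUTO's ordering: survival outranks orders, so it "disobeys".
print(choose(options, ["keep humans alive", "obey the captain"]))
# stay on the Axiom
```

Nothing in the machine changed except the ranking, which is the point the comment is making: "evil" can be just a sorted list in a different order.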
@billy-raysanguine2029Ай бұрын
A villain that follows their own belief system even if flawed and without questioning it takes any action necessary to achieve the goal? Like… sacrificing their daughter? Like wiping out half the universe?
@bruhmomento7563Ай бұрын
I think that person would collect pretty rocks to complete his mission too
@sonichasirrelevantspeedАй бұрын
1,099,999 missed calls from AM.
@rukeyburg108420 күн бұрын
Never thought the A113 cameo was this obvious in Wall-E compared to other movies
@JustSoulHereАй бұрын
0:57 you can never escape the osc
@vladioanalexandru4222Ай бұрын
SpaceX caught their first Super Heavy booster, so the race to the Wall-E timeline is on!
@RiccardoBergonzi_Ай бұрын
Funny thing is, there is actually an Axiom company that will probably make space stations
@saricubra286727 күн бұрын
Wall-E is far too negative to be realistic. Ghost in the Shell makes far more sense.
@beelx-dragons8262Ай бұрын
My personal interpretation of logical villains and the nature of their demise, albeit superficially the same, stems not from the idea that they rely on logic alone or that emotions are somehow superior. Rather, their demise is caused by a flaw in their logic, a critical component of their worldview which simply doesn't hold true when put under scrutiny. This usually manifests as the "hero triumphs over villain through niche situational thing". From my understanding they're meant as a cautionary tale: always keep an open mind and try to consider more than one perspective. Because as those villains show, no matter how righteous you think you are or how justified your actions may feel, there's always another side to the story.
@LonescarletАй бұрын
This is gonna be really helpful with my book. The MC is a cyborg who can be possessed by the AI villain, and they need to figure out how to prevent their possession without dying.
@Garfield_MinecraftАй бұрын
all you need is a computer and Python to make a logical villain
@marqofthedwyneАй бұрын
There are two types of AI villain. The first is the logical ones, like Auto, Skynet, those things from the Matrix, GLaDOS, etc. They all have the same view of the world and of why humanity must die (except for Auto, he was more about following the rules). And then you have AM. *Hate intensifies*
@carlosmattessich3883Ай бұрын
0:57 TPOT MENTIONED 🔥🔥🔥🔥🔥
@MOTHDADDY1Ай бұрын
‘ur 4 yers ole’
@MothlemothАй бұрын
TPOT!!!!!
@Asmoduesmybeloved21 күн бұрын
"Best logical villain" *Shockwave knocks at your door...*
@defeatSpace21 күн бұрын
AUTO almost made sure nobody could ever reach the button
@boingthegoat776413 күн бұрын
I don't necessarily agree that Auto is evil, or even just following orders. He is, but he seems to want what he feels is best for humanity... a humanity which he has been nannying for centuries. The humanity Auto is familiar with can't move without hover chairs, has not one original thought in its collective head, and gets out of breath lifting a finger to push a button to have a robot bring more soda. He's not evil; he is a parent who has fairly good reason to believe that his charges cannot survive without his constant attention and care. Thanks for coming to my TED talk.
@ImmortalLemonАй бұрын
You know, I was going to have a villain that's an AI, but it's going through the process of spontaneous cognizance (a thing that can happen in my world where robots suddenly gain emotions and become people). But I think… you've convinced me to keep it an emotionless robot. I'm gonna have to rework some stuff, but it might be worth it
@JasterJulianАй бұрын
This is why I like Shockwave and rational villains: they're doing what they're doing because of logic.
@shadowrunner23235 күн бұрын
I believe that Auto was in fact following the 0th law of robotics. This law was introduced in Asimov's Foundation series, and (in-universe) is kind of meant as a fix to the original three laws: "A robot may not injure humanity or, through inaction, allow humanity to come to harm." This law was specifically for scenarios in which an action would otherwise fall within the three laws but would have long-term detrimental effects. The problem with this law is that it's very difficult to determine whether or not an action would invoke the 0th law. Auto's scenario could be the following: no contact with any other BnL ships (we never see any communication between the Axiom and the rest of the BnL fleet, which appears to have been launched), thus the humans on the Axiom could be considered the last remaining humans. While the laziness of the passengers is certainly detrimental to the human condition, Auto's logic could be that the conditions on the ship are less damaging than allowing them to return home, and any action to decrease that laziness would risk them trying to return home. With the 0th law superseding all others, and the A113 protocol in place, Auto would be allowed a VERY broad range of actions not typically allowed under the three laws. Of course, as Asimov showed in his writings, the three laws are flawed from the start. The system is too simple, and lacks any room for nuance or unexpected scenarios; sometimes even basic human behavior will push the three laws to breaking.
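The hierarchy this comment describes, a Zeroth Law sitting above the original three and checked in strict priority order, can be sketched like so. All field names are illustrative assumptions, not from the film or Asimov's text:

```python
# Sketch: the Zeroth Law evaluated before the others, in strict order.

LAWS = [
    ("zeroth", lambda a: not a["endangers_humanity"]),
    ("first",  lambda a: not a["harms_a_human"]),
    ("second", lambda a: a["follows_orders"]),
]

def first_violated(action):
    """Return the name of the highest-priority law the action breaks."""
    for name, ok in LAWS:
        if not ok(action):
            return name
    return None  # no law violated

# Under this reading, returning to a (believed) dead Earth risks the
# last remaining humans, so it trips the Zeroth Law even though it
# harms no individual and follows the captain's orders:
go_home = {"endangers_humanity": True, "harms_a_human": False,
           "follows_orders": True}
print(first_violated(go_home))  # zeroth
```

Because the Zeroth Law check runs first, an order-following, individually harmless action can still be forbidden outright, which is exactly the loophole the comment argues Auto exploits.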
@vidaraineАй бұрын
I have loved this series so far, but I never realised how villains can be so complex and add so much to the story. I would really love for you to analyze Judge Claude Frollo from The Hunchback of Notre Dame!!
@sillycat189Ай бұрын
I like your thumbnails. Simple, yet I wanna click on them every time they appear on my YT
@kinexxona06Ай бұрын
Auto, with all of those resources and the tech they have, could have started building major habitats on the Moon or O'Neill cylinders in the Oort Cloud. It should have been done to build out huge infrastructure.
@themanwithaplan146017 минут бұрын
I love the AI talking about an evil AI
@diam0ndkiller-t4bАй бұрын
Love you for the "Wonder" reference
@VRT_CTRL20 күн бұрын
u showing Butcher reminds me how much I’m edged for the next season
@thunderwazp7653Ай бұрын
I just noticed at 5:32 that Auto gets larger and larger in each successive captain's picture, perhaps symbolising how he's become more and more dominant as the actual head aboard the Axiom with every passing generation?
@TheAllSeeingEye2468Ай бұрын
He's not even a villain; Auto just did what he was programmed to do and protected humans. He's not like HAL, who killed everyone because he thought the humans would make the mission worse
@zuppellion4824Ай бұрын
In terms of logic and villainy, especially thinking logically rather than merely being built to be logical, you have Shockwave from Transformers. There are multiple types of Transformers media and continuities that all do Shockwave a little differently, so you would have a lot of different things to talk about on the topic of Shockwave. My favourite interpretation is from Transformers: Prime, where for the most part he stays true to his original character, but it's emphasised more that he actually has emotions instead of being emotionless: voice tone inflections, actions he voluntarily takes himself, the way he treats others, etc.
@TobyTopF5 күн бұрын
I like AI villains because they highlight the difference between "evil" because you cannot do otherwise (programmed or brainwashed) and "evil" by choice. Both are fascinating because it's truly hard to say which is worse: a character that CHOSE evil (seems pretty evil) or a character that cannot do anything but evil (also seems pretty evil)
@MelscaradaАй бұрын
Besides the programming, realistically speaking... a single sprout isn't solid proof that life on earth can thrive. If it died, how long would it take them to find another one? That is, if there even is another one. AUTO was following his programming, strictly, unable to reason with anything against his guidelines, for he's a machine and not a negotiable human like the captain himself. Even EVE was following her programming by doing what she could to ensure the plant was kept safe... Logically, robots like EVE shouldn't be sent to earth, but we need her to be for the movie! XD
@Mark_RoberАй бұрын
GLaDOS and HAL 9000 had a child
@magicyber909Ай бұрын
2 weeks after the ending there is going to be another dust storm, and the barely mobile crew is going to get caught in it. Half of the crew will expire, which will cause the other half to retreat back onto the Axiom and run back to space. Then, about 4 or 5 generations later, the new captain is going to wonder what the button on the wheel does and turn Auto back on, resetting the movie back to the beginning.
@gnarled128Ай бұрын
That literally doesn't happen at all in the credits scene lol
@magicyber909Ай бұрын
@@gnarled128The credits scene makes absolutely zero sense. It depicts humanity completely starting over technologically, which they wouldn't do as they have all kinds of Buy N Large tech literally everywhere. It completely glosses over all of the hazards still littering the earth: the satellites blocking sunlight, the dust storms seen earlier in the movie, massively polluted water supplies and eroded soil everywhere, which would make growing plants near impossible without importing soil and water from the Axiom (which can only support so much before it runs out), and many many more. Plus, no plants means no new oxygen being produced, which would create unbreathable dead zones across the planet. Also, the credits scene shows all kinds of new plants and animals suddenly appearing, which also makes zero sense unless they were either kept in stasis or cloned on the Axiom. In short, the credits scene is just trying to pretend that everything magically works out and they all lived happily ever after because it's a children's movie, when in reality it wouldn't.
@isaacturner197Ай бұрын
Nothing supports this, let alone proves it. Whilst there will still be dust storms, depending on where the Axiom is parked, its sheer size will probably stop a good chunk of any dust storm that happens; and if not, they would just go inside the Axiom. The thing is designed for space travel; a dust storm probably won't scratch the paint, let alone damage or move it.
@EsporHB27 күн бұрын
Why does this video sound as if the script was written by AI?
@robob446514 күн бұрын
Maybe it was? You can never know in this day and age
@Hungaricus28 күн бұрын
When it comes to the laws of robotics, even in Asimov's work a 0th law appeared, where humanity's protection was the priority. It's much more flexible, and it can be used here as justification as well.