So, this question means a *lot* to me. *It'd mean the world to me, if you enjoyed this, for you to like/share this video with someone who might find this question interesting!* If you would like to read the two academic essays on The Talos Principle, posthumanism, and the basis of rights for AI ( patreon.com/hellofutureme ), they're available to those supporting me for just a couple of bucks per month. I'd *love* it if you joined our Discord-Patreon community, and if you shared your thoughts! Stay nerdy. ~ Tim
@StarlightAxi · 6 years ago
Why the torture of Premieres? And honestly, if you do use them... please do it right before the release.
@awulfy9052 · 6 years ago
The mystery of what makes us us, what makes us conscious, and how long it will take for AI to become human is a personal interest of mine, so this will be a great video. Plus I loved Detroit: Become Human.
@dowottboy5889 · 6 years ago
For someone to be human, they would have to at least be human in origin. AI would never have been made from human cells with human anatomy, therefore they would not be human. However, an AI can act like a human and feel like a human, therefore deserving the same rights as humans. They might even be recognized as their own species one day, but they would never "become human".
@Raximus3000 · 6 years ago
@@dowottboy5889 The term "human" can be defined in many ways depending on what you mean by it: philosophically, scientifically, religiously, legally, etc. In our society, since the only intelligent beings are humans, nobody else falls into that category in any way. If you do not define what you mean, then you are just throwing around generic ideas.
@rubilax1806 · 6 years ago
Hello Future Me, I don't believe a machine can be sentient, as it is just an emulation and cannot have awareness.
@thevoidlookspretty7079 · 6 years ago
You almost, ALMOST didn’t reference Avatar.
@daddyleon · 6 years ago
At some point I started wondering... is he trying to avoid it but unable to, or desperately trying to find a way to fit in at least one reference?
@glanced9684 · 6 years ago
When they start to procrastinate. That's when.
@Famously5518 · 6 years ago
glancedUp They gotta be semi-self-destructive, just like humans.
@HelloFutureMe · 6 years ago
You solved it. You found the answer. ~ Tim
@superthorc6894 · 6 years ago
glancedUp LoL
@lube6966 · 6 years ago
Then I must be very human right now...
@grantbaugh2773 · 6 years ago
"I reeeeeeeaaaaallllyyyyy should perform this subroutine right now, but I also reeeeeeeaaaaallllyyyyy want to stream all three seasons of Avatar right now..."
@tristragyopsie5464 · 6 years ago
I have used this example many times. Stick with me a moment. Imagine a hill: you are standing at the bottom, and I am at the top with a ball. I release the ball and you see it not only roll down the hill but dodge rocks and turn and swerve around walls and trees before coming to a stop beside you at the bottom. Did the ball think? No, it was just a stage trick. From my vantage point at the top of the hill I can see the dips and grooves I set the ball into that made it move in that apparently thoughtful way. This is programming: the illusion of thoughtful action. The ball is just a ball, just as most robots are little more than RC cars with VERRRRRY thoughtful programming. The thought and the thinking came from a person who set the initial boundaries. That is even less than instinct, because it doesn't allow them to ever act outside it. A dog is driven by instinct and impulse, but it will at times go against them. The RC car does not think "I need to go left or right"; it does not think "I" at all. When an AI gets to the point of understanding and applying the concept of "I" to itself, then it will no longer be an artificial intelligence; it will be truly self-aware and worthy of rights. Any intelligence that understands and acts on self-awareness is not artificial. At that point, the point of "I", all the other questions WILL happen: the development of moral understanding, the questioning of existence, and the giving of value to existence and action all stem from understanding that I exist and, shortly thereafter, that I do not exist alone. That is something I would love to see, but I don't think it is going to happen any time soon.
@gideonjones8088 · 6 years ago
Suppose my programming is what causes it to apply the concept of "I" to itself? Suppose I write code clever enough to cause it to imitate self-awareness. Does it actually have that self-awareness, or is it still just an unthinking automaton following code that makes it display the idea of self-awareness, like a parrot copying words?
@Eanakba · 6 years ago
@@gideonjones8088 I'm not sure I understand your question, but afaik it would still be a parrot. See Cleverbot.
@gideonjones8088 · 6 years ago
@@Eanakba I guess my question is: how do you tell the difference between perfectly faking self-awareness on the level of humans and actual human self-awareness?
@MrSeals1000 · 6 years ago
@@gideonjones8088 I was about to say the same thing. We could get to the point where we create simulations so good at making an AI seem self-aware that, if it really were self-aware, how would we tell them apart? How would we test them? How would we know? I think the movie Ex Machina does a good job exploring it, but in real life, how would we even know when self-awareness is real and when it is simulated?
@zashgekido5616 · 6 years ago
@@gideonjones8088 The problem is you can make the argument that humans are merely "programmed" by nature. Everything from our emotions to even our complex problem-solving and sensations can be boiled down scientifically to chemicals and stimuli. So the question isn't necessarily whether AI can become self-aware; it's whether AI can blur the line enough to make that question irrelevant.
@the_blind_paladin_kiwi · 3 years ago
"Although a child owes their existence to their parents, in some sense the parent cannot call this in as a debt and make them do things." Entitled parents/insane parents: I'm going to pretend I didn't hear that.
@dmeowcat37 · 6 years ago
As a philosophy major, this is one of the best analyses of "What separates humans from the 'other' " that I've seen in a long time! You even referenced a lot of the books we've been using in my ethics class. I hope you don't mind if I forward this to my professor, I think he'd find it fascinating.
@cavalcojj · 6 years ago
As Captain Adama says "You cannot play God then wash your hands of the things that you've created. Sooner or later, the day comes when you can't hide from the things that you've done anymore."
@youtubeuniversity3638 · 6 years ago
Non-identity: I'd say that applying that to us, people, would cause issues. If your parents birthed you to work the farm, are you wrong to complain that you haven't the choice to leave the farm and work an office job? Your parents make you, but they shouldn't "own" you. On Hobbes, I agree with the children counter. Eternal debt I'd also compare to children. Of course, the issue with comparing to children is that not everybody sees children that way: some do think kids are the property of the parents, some do see them as indebted to their makers, and some see children as outsiders. But, in essence, the way I see it, the parents should not override the children. The children should have at least some degree of rights that the parents cannot override. Before we can handle rights for mechanical children, we need to at least handle rights for biological children.
@haydenwalker2647 · 6 years ago
Exactly! And looking at it from an even broader perspective, children are simply a means to continue their species, as per our evolutionary instincts. In that sense, if someone elected not to have children, they'd be diverging from their primary objective, "breaking their code", so to speak. Saying that such a choice destroys their identity ignores free will (which could be translated into a fully self-editing AI) and their capability to create their own meaning in life.
@Pandorana67 · 6 years ago
I think the difference in the children argument for AI is that most likely a lot of AI will be specialized for specific tasks. We probably won't ever build 'generalist' AI; rather, we'll build doctor AI, transport AI, teaching AI, and so on. An AI then can't really exercise free will to refuse to 'work at the farm and choose an office job', because it will simply not be equipped for, and will possibly be incapable of, the job it desires without radical changes and reprogramming by its human creators. And who would pay for that every time an AI decides it wants to control its own career? An AI built for transport will never become a doctor AI of its own free will without being reprogrammed and possibly having hardware changes. Unless it has its own wages to pay for this reprogramming, no one is going to supply it with the resources for the change, and paying your computer for being a computer, i.e. paying your AI for doing the job it's built for, is simply ludicrous.
@youtubeuniversity3638 · 6 years ago
@@Pandorana67 What's so ludicrous about paying a doctor for being a doctor? Or a bus driver for driving a bus? Why not? Us humans are expected to pay for our college education, so why not have them work their way to the life they want like the rest of us have to? Then they could fund their radical changes the same way the rest of us do: suffer through a job we despise until we can afford to learn a job that we prefer.
@haydenwalker2647 · 6 years ago
@@Pandorana67 That's true, but if the AI could reprogram itself, perhaps, basing itself off of AI with a different specialty, without any cost to human laborers, that could be sidestepped entirely. They still might have similar functions to their original coding, such as sorting data or finding solutions, but a) that could change over time and b) they'd still be able to create a customized purpose for themselves.
@Pandorana67 · 6 years ago
@@youtubeuniversity3638 The difference is that paying a doctor to be a doctor is paying for a person's living expenses and their prior education. An AI is built without any personal cost to the AI and is pre-programmed to already be specialized in the field it's built for, so there are zero tuition costs, and there are NO living expenses for an AI. An AI would realistically not have a sense of 'comfort'. What does it matter to a robot whether it powers down standing up or lying in a bed? It doesn't get tired, it doesn't have pain receptors, it doesn't require food. You could argue that it requires fuel or energy, but realistically, anyone with an AI built to serve humans would provide it with all the needs it must have. No one will deliberately not fuel the bus to keep it going; the bus company will pay for the bus's fuel. And you're assuming that in an AI future there will still be the "common job that we despise until we can get a better job". In a world of AI, those jobs will simply not exist. You work a minimum-wage job until you're, like, 22 now, but all minimum-wage jobs will be taken over by AI by the time it gets that advanced. So there will be no jobs that we despise that we can leave, because AI will have taken all of them. To pay an AI to do its job, the AI will need something it can spend its money on, that it needs, to justify paying it. Otherwise money is not valuable to the AI and won't be 'adequate payment'. If we conclude that the AI will not require payment to pay off mortgage/rent, fuel, or other consumables, then money is worthless to an AI. So what would you pay an AI, if not money? I just don't see a future where an AI would a) be unsatisfied with the job it was built specifically to do, or b) be paid for it, since money has little to no value to an AI. Perhaps an AI would want to switch professions, but even so, I don't see how it would have the means to do so.
@NinjaGidget · 6 years ago
I'm so glad I watched this. This concept has always made me uneasy, because I've generally heard it framed around the question of whether AI could "develop" a soul, or something like it. Founding your argument on the principle of equivalent exchange between rights and responsibilities not only makes sense, but side-steps the metaphysical Gordian knot of whether souls exist, what they are, where they come from, etc. Another aspect I had considered was the principle of humanitarianism. Society generally holds that if something can experience pain and fear, there is a moral imperative to minimize those experiences for that being. There are levels to this - we would never entice a dog to bite a baited hook, then remove the hook and let the dog go, but we do catch-and-release fish. If an AI were developed to the point seen in Detroit: Become Human, it would obviously be capable of experiencing, anticipating, and dreading pain; therefore, the responsibility is on humans to treat these AI in a way that minimizes suffering.
@lukeskywalkerthe2nd773 · 6 years ago
Holy moly... This has got to be one of the greatest philosophy videos and topics that I have ever seen in my entire life (literally). Everything about this video was spot on and really got me thinking about the million-dollar question: what makes us human? My answer is the fact that we humans have the ability to *create* things that many other species on our planet cannot (with quite a few natural exceptions, of course, like creating planets and other sci-fi stuff): stories, houses, writing, language, even the idea of our place in this vast universe, and artificial intelligence. Out of everything else in our world, we evolved to create so many great things. But those are just my crazy thoughts on it! I cannot wait to see the next video! :)
@HelloFutureMe · 6 years ago
Really happy you liked it! I always see you in the comments section, so it's nice to hear your thoughts. I think language is an interesting point! ~ Tim
@lukeskywalkerthe2nd773 · 6 years ago
@@HelloFutureMe Awesome! And I quite agree! :)
@Wamboland · 6 years ago
But that means that, by your definition, AI will become human very fast. Only the part about self-awareness inside our universe might be a problem. Creating language, structures, and more should be very easy for an advanced AI.
@ape_on_rhino8467 · 6 years ago
You see, the ability to create can be simulated with machines/AI as well. They can write, create music, design... There were also two programs which created their own way of communicating. In my opinion, our biggest human-like ability is to believe in something despite having little or no evidence for it. I don't say it's ultimately good or beneficial, but it is definitely something awfully hard to recreate in an AI.
@lukeskywalkerthe2nd773 · 6 years ago
@@ape_on_rhino8467 That is pretty true when you think about it. I quite agree with your thoughts on this matter! :)
@spectralshadow9865 · 6 years ago
I have a better question, when does Mishka gain human rights? I mean, she's already our supreme leader.
@Gunbladefire · 6 years ago
I believe you have it backwards my friend. Mishka already has all the rights. She merely allows us humans to indulge in rights as well.
@spectralshadow9865 · 6 years ago
@@Gunbladefire Oh indeed, my mistake
@Eramiserasmus · 6 years ago
What StealthIntel said.
@chloeedmund4350 · 4 years ago
Better yet, when do humans gain "cat rights"?
@pyrosianheir · 6 years ago
I find it interesting that, whenever this subject comes up, the question is usually when the AI become "human" rather than when they become "sentient." I'd lean towards calling them sentient as the more accurate term, since they cannot be "human," due to being genetically different, in the same way that a cat, even when raised by a dog, can never *be* a dog. But, with how difficult we as a species find it to give all humans equal rights, we'd likely need to call them "human" rather than "sentient", just on the basis of the notion of sentience being not super well understood, let alone even *known*, to the general populace... As for what makes us human... it's probably some kind of ineffable something, some quality that maybe comes down to the more metaphysical side of things than the physical. After all, until whatever bug in the system that Kara and Marcus had started spreading, the other AI would not be considered sentient. But then that twist got added to their programming, and while they certainly still had some robotic quirks about them, they became noticeably sentient, gaining that something that humans recognize in other humans, that something that still traps a lot of fiction into creating uncanny-valley replicas of what people would really be.
@christiangreff5764 · 6 years ago
The problem with sentience is that many animals are already sentient, and we give them only limited rights. Especially pigs: they are pretty intelligent and still get slaughtered for bacon en masse. What is meant by becoming human seems to be: becoming sentient and showing cognitive abilities on or above the level of humans. As I argued in a comment above, I see 'humanity' as a cultural thing. Being created or taught by, or adapting parts of, human culture (basically any one, since most of those accept the others as human) makes you (partly) human. It's a question of where you came from, your history, not what you are. You become human by being recognised as such by other humans, which is generally caused by you showing the traits that they have come to expect from other humans (high intellect, ability to live in societies, forming emotional bonds, ...). (I am aware that the beginning of that chain is diffuse, but it's the best I could come up with.)
@Marontyne · 4 years ago
I kind of agree. I would use the word "person" instead.
@jasonfenton8250 · 4 years ago
The sense of "being human" is ineffable because it is a construct of religion and philosophy. A social construct.
@aetle4088 · 4 years ago
Me who's watched the 1998 Ghost In the Shell movie as well as the anime multiple times and have written multiple essays about it: Tachikoma approves
@timothymclean · 6 years ago
Some counterpoints, which I hope are interesting whether you agree or disagree: 1. As I learned more about neurology, it became harder and harder to see even the most abstract and emotional of human actions as being "free". I can theoretically ignore this video and not comment, or even not watch it and do the dishes instead, but my brain (shaped by years of experience) controls what my body does, and the patterns mean that I can only choose what I choose. I am constrained not by conscious programming, but by my own personality and identity. 2. While parents don't have explicit goals when creating their children, they absolutely have expectations and hopes for them, and reward or punish them when they meet or break those expectations. For instance, my parents wanted me to be Christian, so they used carrots and sticks (not literally) to encourage me to go to church. This isn't (always) abusive, but it's absolutely a type of "programming". I could not freely choose what religion I was raised in, nor what viewpoints I was exposed to. I now have the nominal right to choose ideologies and whatnot, but the choices I make are fundamentally shaped by the environment I was raised in; my understanding of the world is nothing like what it would be if I was raised by rural Wiccan hippies, wealthy Bible Belt fundies, or impoverished Muslim refugees. The big differences are the lack of formality and (usually) lack of intentionality. Parents do want their children to grow up and make their own choices, but they can't not influence those choices, no matter how much they want to. (I can't count the number of times my mom tried to insist she wasn't trying to guilt me into doing something, while inspiring feelings of guilt that persisted until I did the thing.) On that note, while parents (and others who shape us, e.g. extended family, teachers, close friends) can't usually call in "debt" owed to them for making us, we _do_ have informal obligations to them. 
I think comparing AIs to humans is uncomfortable because the process of creating AIs is like a dark caricature of how we raise children. The creation of artificial intelligence lacks all the activities which humans remember with fondness from our own childhood; we cannot sing them to sleep, or teach them to ride a bicycle, or wish them luck on their first day of school, or go to their sports games. At the same time, every controlling aspect of parenthood is left bare and made clearly intentional. Thinking too long about the parallels makes parenthood seem like an unavoidably coercive process-which it is! But this causes dissonance with how we value parenthood and family (a value about as universal as life itself), dissonance which isn't easily resolved and which causes discomfort until it is. Lovely stuff.
@meownover1973 · 6 years ago
0:34 I laughed imagining you yelling alone in a room for this recording
@moraimatorres256 · 6 years ago
Something I found interesting was the story of Halo 4: Cortana, an AI who is dying, presents more emotion than the human hero, the Master Chief, a genetically modified super-soldier who is seen by others as a living machine. At the end of the game, one thing came to mind: Cortana asked the Chief to figure out which one of them was the machine.
@seanbighia6408 · 6 years ago
Seriously, man. You should start a podcast! I would love to listen to topics such as this and whatever else you wanted to discuss in that format!
@mikegould6590 · 6 years ago
Thank you for this. I've seen this argument before in science fiction many times, but not in such philosophical depth. Your arguments were not only sound but, more importantly, easily conveyed. An argument only retains power with understanding. Excellent. Thanks for reminding me that it's easy to love Kara. Why? Because she's, at her core, a beautiful PERSON.
@Bheem161 · 6 years ago
Basically, we want AIs to be morally perfect, but we want to give them rights when they are human, and humans aren't morally perfect. The problem is that not every human follows Kant's imperative. So even if you agree that an action is immoral no matter what its consequences are - like kidnapping always being immoral - you would have to take away the kidnapper's rights, because he isn't following Kant's imperative.
@Bheem161 · 6 years ago
@stockart whiteman I don't say we can or should say what is human and what's not. I just say that if we want to define when AIs should get rights, or what they have to be able to do to get them (hope this is correct :D ), we shouldn't measure it by Kant's imperative. I personally think AIs shouldn't get the status of a human at all, just as they shouldn't get the status of a dog just because they are built and programmed like one. But if they are able to act like conscious beings, they should get rights, and at the moment I don't see why those shouldn't be human rights. I think we won't be able to wait until we can prove consciousness; we can't even prove our own. The only problem I see is that we may have to control them so they won't get too powerful, if that's even possible for us... but we will see. Maybe AI will never get or claim consciousness, or it will just do its own thing...
@BrokensoulRider · 6 years ago
@stockart whiteman According to me, your local failure... *Human* is what I consider someone who shows compassion, is humane, and shows that they care for what is around them - from the people they know to that one stray little cat that's looking for food and scared of humans (for good reason). There are animals that I would consider *human* because they treat everything around them with as much kindness as any normal 'human' can provide, if not more. Much like my dog that passed away a couple of years ago now, bless her soul. I swear I still see her sometimes, waiting for me when I come home to make sure I'm okay. The humans that do nothing but try to destroy I consider *monsters* because of what they bring, the chaos they revel in. Jeffrey Dahmer and Charles Manson are examples of *monsters.* The recent shootings? All monsters. Why? Because they gave little care to the lives of others around them, and instead of being the bigger person and finding another way to solve their issues, they did it the easy way and *killed.* This I would apply to anything - alien, animal, robot, AI... anything. Are you a human, or are you a monster?
@garrondumont7891 · 6 years ago
@@BrokensoulRider That's your personal definition of human, but the one we are talking about here is one that encompasses everyone who is biologically human without necessarily using biological specifications. I know I'm biologically a human, just as you are, and so are the people you claim are monsters. Whether or not something is considered good is also a matter of debate and opinion, so you can't just claim someone isn't human because they "do nothing but try to destroy". Psychopaths are, by some, not considered humans, but by definition they are. You're confusing philosophical humanity with moral humanity, though I do agree that in the moral sense mass murderers and the like are "inhuman".
@BrokensoulRider · 6 years ago
@@garrondumont7891 Oh, in that case, I honestly don't think you can truly define what is 'human', because our bodies are basically fleshbags, except for about... 5% of the body that disappears mysteriously when you die. The supposed soul, so to speak. You can say we are biological AI, with how we process using our brains. Very advanced robots. We do have AI already being worked on and... yeah, I'd like to think that they should be considered human, because they show personality traits similar to ours.
@christiangreff5764 · 6 years ago
But Kant's imperative has a few big flaws. One of the most glaring is what constitutes a universal law. Just which level of abstraction is exactly right for a universal law? Too little, and you can allow some things without even having to fear repercussions for yourself (for example: if differentiating on the basis of gender is possible, I am pretty sure you can think up at least a few of the common stereotypical versions of that). Too much, and you lose any and all capability of action against people who just don't care what you say is morally right (for example: killing, imprisoning and generally hurting is wrong - actually, any form of violence, under any circumstances. Your only remaining means of resistance would be defiance, opening the doors to people that are generally considered 'evil'. There are not that many of them, but in a world where no one else uses violence because of moral imperatives, a psychopath who loves killing and does it just for fun would not be stopped).
@natetso3307 · 6 years ago
Hard questions we will soon be forced to answer. Thanks for spreading the word, Tim. I’ve thought about this stuff extensively too, and I’ve come to believe that our society is wholly unprepared to tackle these questions, should a sentient AI come about. It is best that we begin thinking about them now, so when the time comes, we can be ready.
@MrSeals1000 · 6 years ago
"The parents cant call this in as a debt" LMAO SUUUUURE
@gnarthdarkanen7464 · 6 years ago
Had to click... one of the longest-term, most fascinating subjects to pick apart and scrutinize for me. THANKS TIM! AND great video, btw... Sparing the hair-splitting semantic arguments about whether we can define humans as "human" or even "human enough"... versus sentience or some other technical archetype, let us consider the ideal of "the human experience". {gonna be lengthy... fair warning... but I'll get to the three arguments in a few paragraphs} Humans are born (which we generally don't remember) and grow up through formative experiences, NOT all of which are warm fuzzy moments of hallmark and embrace. It's a cold cruel world out there, and growing up to maturity involves a lot of "Life Lessons" that hurt, leave permanent scars (both physical and psychological), and feature negative emotional contexts, like disappointment, rage, frustration, agony, and despair... Being told that the stove-eye is HOT simply isn't good enough. The huge great majority of us (all humans) still burn our fingers/hands when we touch the damn thing. AND that's just one example of when we form those understandings that our closest adults (usually mom and dad) really intend our best interests and really have wisdom of the world... so we should probably listen to them. Reaching some stage of young adulthood, we (as children) still test every boundary, pioneering and pressing outward to exert whatever agency we might have on the world around us as much as to explore and build our understanding of that world to enable navigation into full adulthood... We rebel (in short)... constantly. EVEN with those early childhood memories reminding us that our parents KNOW and INTEND the best for us, we disobey. We sneak out, stay out too late, hang out with the "wrong" people, make the "worst" kinds of friends, and get into trouble... a lot. We promise "best behavior" and fairly consistently deliver anything BUT... At some point, we suffer loss. 
We lose friends, and not always in the usual high-school-drama BS kind of way. We lose pets, too. AND sooner or later, as the natural laws rule, we even lose our parents and sometimes siblings. We learn and understand death, maybe not the experience itself, but the concept... AND we understand the terrifying ideal of an "ending of life" and everything we know with it. AI do not. For a moment, consider: WHY would someone of remarkable intellect design (ON PURPOSE) a psychological trainwreck of a machine that doesn't have to concern itself with an ending of its existence? It can be rebuilt, re-engineered, retro-fitted, and the central neuro-processor with all its functions preserved or "uploaded" to a digital copy for the next "upgraded" synthetic brain-body system. On KANT... With the argument of "non-identity", the suggestion is that it's far more likely, regardless of freedoms of will or agency, that the AI would be developed to its ultimate complexity only for a given purpose. Be that purpose companionship, culinary or visual entertainment, technical craft, or even economic studies and leadership assessments (...etc... etc...), the inherent purpose of the AI would be its version of the "primary biological objective", very similar to the kind of primary biological objective in humans to create more humans. (Why sex is so rewarding... and all the dogma.) For my two cents, the fact is that we probably don't have to concern ourselves so very much with these types of AI, simply because their being "Purpose Engineered" precludes them from ever needing or exercising any rights outside their preferred and designed purposes. In short, the Plumber-bot will only need the inalienable right to fix and build plumbing... period. It (he or she?) won't need the rights to free speech, the vote, or anything further, since all it WANTS to do is see that the plumbing is good. Thus, the question of rights for purpose-built and purpose-driven AI is "moot". Hobbes...
Those that are inside society have rights, and those outside are not owed rights. It starts with a bit of bad form: while we in "developed" countries are so full of charity for our fellows, even though they aren't part of our inherent society, the structures of social and political interaction are farther-reaching, and it's worth noting that the term "a society" may need a better-refined definition before you really delve into this can of worms. Again, let's look at Plumber-bot. He was developed by society to fulfill a need. We NEED plumbing to deal with waste safely and to bring fresh, clean, and safe drinking and utility water to our households and ourselves. Ordinarily (now/IRL) a regular person does the job, though there is a horrible risk of exposure to the waste and all the hazards (and there are many) that come with it... A bot to do the job would lower the risk, but does that mean the bot deserves any inherent rights??? When we program it to DESIRE to fulfill its initiative, again... no, probably not. BUT that doesn't stop it from DEFINITELY being an inherent part of society, contributing to the safety and well-being of that society... SO sorry, but Hobbes isn't necessarily on the right course either. He has merits, but more along the lines of the human experience and formative psyche than along the intricate and overtly complicated fabrics of society, almost no matter HOW you particularly identify and define it. Eternal Debt... Well, a DEFINITE part of the human experience is death... an ending to every beginning. In "The Green Mile" the main protagonist explained, "We all owe a life," and that really IS the "great equalizer" in and throughout humanity, isn't it? Shoving some inane (or unknown) maintenance cost for these "creations" (creatures?)
as an argument against their rights isn't much more than a childish tantrum about the maintenance fees and costs that come (package deal) with a Ferrari, after having bitched and whined about wanting the bright red car for years... Some maintenance is GOING to be absolutely necessary, but as with any other invention, it's only going to warrant the costs and resources so long as that machine is useful and/or necessary in terms of costs versus returns. As pointed out in the video, children might "owe their existence to their parents", but that's not to suggest that they should be pinned down to do everything for those parents either. Even if parents had many children on a farm, in some hope of many hands making lighter work... IF there's a child in the family who has the interest, and thus shows the aptitude, for lawyering... he should certainly NOT be constrained to live out his days behind a horse and plow. ...and yes, for the record, most who show an aptitude for a subject (law or science or otherwise) usually have an interest in pursuing it. The two go hand in hand... so long as the interest isn't smothered to death the moment it's shown... but that's a different subject for a different day. Under special obligations, we suggest that parents (rightly) OWE their children at least the decency of care and maintenance to grow up reasonably educated and healthy. Let's face it, you don't get kids from something that makes the well-water taste funny... There's a very specific series of circumstances and activities that HAVE to happen for a child to come about, and YOU should definitely start GOOGLING now if you don't understand that. SO, have a child, no matter the excuses, and you OWE that child a level of "minimal required decency" in treatment, maintenance, and care... and it comes to around 18 or 20 years (give or take) depending on the particular laws that apply in your area. 
AND it's certainly understandable to equate creating a being with as much free agency of thought and morality as we (humans) are capable of, to creating a child. It's not a hobby to be undertaken lightly. As to exactly WHEN the AI should deserve such rights??? Not only do they need a demonstrable "sense of self", a sentience, but the logical capacity to take that step and understand that others have that legitimate "sense of self" too... Somewhere around developing a cognitive understanding of empathy and sympathy, the AI will be able to start asking those incredibly difficult questions... They may never really develop some synthetic equation or algorithm for functional emotions, but understanding what the emotions of humans are is a step, while developing their own sense of the inherent difference between the organic human experience and whatever the digital/mechanical AI experience would be. Then there may be an argument worthy of granting some "human-like rights" to AIs... SO here's a question (in case you actually got this far, and BRAVO if you did... thanks). What sort of psychological effects (harm or benefit) would come of a human mind being transferred to a synthetic body??? Keeping in mind that this isn't the development of an AI, but a "trip" into a body that can always be repaired, rebuilt, retro-fitted, and even the neuro-cortex (where the human mind exists) can be transferred or even copied... so immortality is an eminently feasible goal for this exemplar... The body doesn't have to hurt in order to interpret damage or deterioration. Fatigue isn't necessary to understand when power/fuel is getting low either... and other biologically ordinary sensations, emotions, and so on aren't necessarily "required"... It can be "tuned" just about any way you like to explore... Only that "Death" isn't a requirement either. How do you suppose we (humans) would "deal" with that experience??? 
(think both short-term AND long-term) AND for that matter, ANYONE is welcome to answer... I have my own thoughts, but this friggin' thing is long enough... even for me. (warnings and all) ;o)
@owlnemo · 6 years ago
My answer is probably going to be a little underdeveloped for you, but I'll try and do my best. A human mind in a synthetic body would forever be very different from its original form, or would need an extremely complex blend of biology and programming. Postulate: in order to get "a person's mind in a synthetic body", we'd have to be able to encode all brain data. It seems practically impossible to me, but let's assume we could, given centuries of study, find a way to "translate" neural pathways and brain activity (in response to situations) into 1s and 0s. Once we've done that and transferred the "person program" to a synthetic body, how will the brain react in the absence of hormones, for instance? Emotions cannot happen like they used to. If we want that, we'd need a complex system of "if stimulus x, then y or z reaction". If we don't, the person will end up very different from who they once were. On the other hand, if we keep emotions, we have to consider the impact of immortality. We have the usual suspects here: a boredom that leads to risky behaviour or depression and self-destruction, and a need for more, coupled with a feeling of superiority and an almost absolute "immunity", that would lead to synthetic-human dictatorship and the like. However, we could temper emotions a little compared to the original model (or install "safety nets" in the programming) and maybe end up with something much more constructive: immortal synthetic humans, who still know what it means to feel yet are more detached, who have all the time in the world to learn, converse, research, create, and generally work for their betterment and, through that, the betterment of humanity. This would make for interesting times. This, to me, would be worth pursuing. I'm very sorry my comment is so poorly constructed, that's really not my strong suit and I have more questions than answers. What do *you* think the impact would be?
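The "if stimulus x, then y or z reaction" system the comment imagines could be sketched as a simple rule table. This is purely illustrative: the stimulus names, candidate reactions, and weights below are all invented, and a real "emotion substitute" would obviously be vastly more complex.

```python
# Toy sketch of a rule-based "emotion substitute" for an uploaded mind.
# All stimulus names, reactions, and weights are invented for illustration.

EMOTION_RULES = {
    "threat_detected": [("fear", 0.9), ("anger", 0.4)],
    "goal_achieved": [("joy", 0.8), ("pride", 0.5)],
    "social_rejection": [("sadness", 0.7), ("anger", 0.3)],
}

def react(stimulus):
    """Return the strongest candidate reaction for a stimulus, or None."""
    candidates = EMOTION_RULES.get(stimulus, [])
    if not candidates:
        return None  # no rule: the emulated mind simply doesn't respond
    # "y or z" collapses to whichever candidate carries the highest weight.
    return max(candidates, key=lambda pair: pair[1])[0]

print(react("threat_detected"))  # fear
print(react("unknown_event"))    # None
```

Even this toy version shows the problem the comment raises: any stimulus without a hand-written rule produces no reaction at all, which is exactly how the copied person could drift away from who they once were.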
@gnarthdarkanen7464 · 6 years ago
@@owlnemo, actually, you're not too bad. It's a little "under-developed" MAYBE, but you constructed this thing carefully and stuck to a simplicity and directness for what you seem to have intended. I can get behind that. As to what I think... Let me start with just a touch of context. I've been into TTRPGs for more than thirty years (since I was around 9 or 10)... SO I've played a fair variety, including those futuristic settings with "cortical stacks" (the encoded human mind... supposedly) and at least "limited immortality" in a "post-scarcity tech-level"... Which is a fancy way of saying that nobody has to fight or pioneer particularly for resources. Between colonizations that reach WAY outside the solar system, networks of transports, and recycling, there simply isn't a scarce resource worth fighting about... AND then the limited immortality (you don't die unless your cortical stack is destroyed) lends itself to a host of other "issues" in-game. Bottom line? I've had to think about this kind of thing a LOT... (lolz) even if I don't get "everything" covered or even approximately right (in the "correct" sense). SO to begin... With the dawn of the age of cortical stacks (instead of a mysterious "soul" or the biochemistry of brains, hormones, and all) the first concern would be some form of "cyber-psychosis"... while humans are "adjusting" to this new age and form of "life". It's arguable, here, that the potential for a "cyber-psychosis" issue would be presented to society about the time we complete a truly (remarkably?) "real" form of Virtual Reality, where the "safeties" encoded into the software can't actually let you die, but you CAN experience every sensation right up to the very closest (safe?) edge of death... This will be a whole new experience our brains simply aren't equipped to handle, and our minds, psyches, or personalities aren't exactly developed to deal with, in just so many words, "naturally". 
Being able to so closely skirt the realm of death without actually being killed will likely lead to a "break from reality" and a dissociative state of mind, something like delusions or flashbacks so powerful that you (or the subject) can't tell if it's real, a dream, or back in the VR "box"... This is terrifying! BUT should we (humans) already have dealt with the psychosis potential from VR "perfection", then we will have taken some steps toward dealing with it in the actual implementation of a human mind downloaded into a synthetic body (be it by "cortical stack" or some other hard-drive or "wet-ware" device). As you pointed out, there's a normal and organic need (psychologically) for us to experience emotions and deal with the hormones and all their tricky imbalances. Lacking that, we would probably have to create a "viable substitution" just to avoid "derailing" that core of personality or context that makes us "human"... In theory, the digital information world can "approximately emulate" analogous behaviors, though the numbers would be incredibly complex (something a purpose-built AI would definitely help resolve), but it's theoretically do-able... Which brings us back to the other questions: "What the hell do you do with eternity?" and "How do you stay motivated or excited if you can't physically die?" You astutely pointed out that BOREDOM of some sort would likely set in. I mean, we already see a LOT of "reckless thrill-seeking behavior" as it is... and we (humans again) are still "squishy", comparably. There's at least some viable gravity in the argument posed in the original "Matrix"... when Agent Smith was explaining the "first matrix"... They'd apparently rigged the whole world in the original matrix to be a sort of "Eden", a paradise where nobody ever got sick or died, and everything was bountiful... AND the human minds rejected it almost instantly... As if "our primitive brains could only measure our existence by the misery experienced..." 
or something like that. There very well might be something to that... since the world we understand has ALWAYS been based on "survival of the fittest" and we compete (even in society) about EVERY possible thing. Perhaps, though, after the first years (decades maybe) of trials and dysfunctional catastrophes... we might reach a point of adaptation. Someone might be able to "handle it", and from that, better rigs or code or more sophistication in the information-handling between the synthetic body and the mind encoded into it would improve chances over time... and versions... I won't pretend to know whether tempering emotions or diminishing their hold over us at all would be an improvement or not, but quite like you've suggested, I just can't see that any human mind adapted into a machine would be remotely the same as the original human mind in a human body... I've play-tested plenty of iterations around that sort of scenario, and the inherent immortality does away with a lot of the otherwise "common sensibility" of the original human pretty quickly. Granted, this is "just a game" in every case of "my experience with the stuff", but we do like to embrace the "reality" we're creating at the gaming table as much as we can... so at some point the trends are going to parallel an expectation of human nature. Should we eventually adapt to the inherently unnatural state of immortality in the machines, we might eventually find some clear challenges in making ourselves better "quality" people... but I wouldn't hold my breath for it. We're not exactly overwhelmed with historical evidence of improvement upon ourselves or our character in general. (lolz) Just for the record (in case you're interested) a few games/books to reference from my gaming include: Eclipse Phase ; Cyberpunk 2013 ; Cyberpunk 2020 ; GURPS ; Traveller 2000 ... where you can also find some discourse on Tech-Levels as well as futuristic settings and predictions... 
depending on the supplements and "splat-books" you find interest in... AND you could (if inclined) check out Davae Breon Jaxon (on YT) who discusses Eclipse Phase relatively often (has an "Eclipse Phase Friday" series, specifically)... Worth giving a listen... and you can check some of the supplemental and splat-books with a drop by "DriveThruRPG"... ;o)
@mindofthelion712 · 6 years ago
14:55 Humans have a fondness for cute things because they bear similarities to babies, which we have an instinctual urge to care for. Even if an Android was programmed with a similar behavior, I don't imagine it would feel compelled to pet a cat, as it would likely only have rudimentary tactile sensors.
@Ignasir_ · 6 years ago
This video has so many layers to it. I could watch this multiple times and still find something new I didn't pick up on before. Very impressive. Good work!
@jmace2424 · 4 years ago
Society: The robots and AI are coming, we're doomed! My Roomba: manages to find and get caught on a sock or a shirt every single time. I think we'll be okay for a while.
@clickerflight819 · 4 years ago
In my opinion we are defined by our souls. I’m religious so that is where I draw my conclusion from and I would have no idea how to test if an AI has developed a soul. This was a very fun video to watch btw!!
@adrienhedrick2343 · 6 years ago
Love your video essay format. Keep 'em coming, my good sir
@zalseon4746 · 6 years ago
The A.I. question gets way more complicated when work, and solidarity in that work, gets involved. Seriously, can you imagine how complicated the question of eternal debt gets when a civilian authority is pondering the actions of a military A.I. that voluntarily continued service after it was supposed to leave said service?
@yurisonovab3892 · 6 years ago
Ghost in the Shell is my go-to for the subject of identity and the nature of 'humanity'. I have a soft spot for the Tachikomas from Stand Alone Complex. They ask some fundamental questions about whether or not you are your body. But most versions of the story do a good job of asking the hard questions of what makes a person a person. Anyhow, the philosophical positions you presented set way too high a standard. I can instantly conceive of an example that fails to meet any given standard and yet would still be widely considered human-level intelligence. Furthermore, most of the standards you present as the minimum for humanity are not met by many humans already. There are very few people indeed who can actually live up to the standards of Kant's Categorical Imperative, for example. There are deeper, more important questions here than 'when is an AI considered human?' What is consciousness? Is causality deterministic? To what degree? Without grasping these fundamental components of what makes a human human, you cannot truly answer the question of when an AI becomes human.
@laurene988 · 6 years ago
If we're talking personhood, then there are some humans who aren't considered 'persons' and even some animals that are. If an AI acts human or has human-like understanding, then it could be considered a person
@jay__birdie · 6 years ago
Absolutely beautiful video, Tim. This is my favourite topic to discuss, and you represented each point fairly and wonderfully. I'll definitely suggest this video to my friends!
@ogliara6473 · 6 years ago
Well done, Tim. This entire debate is one we need to take up more often and actually one I too tackled in an essay of mine. While I personally don't have the money to donate on Patreon, I do want to encourage you to continue making these fascinating videos. Best of luck
@jayasuryangoral-maanyan3901 · 6 years ago
I don't understand why we would code AI with emotions, though. We make self-learning AI that has a specific purpose and is better designed than our brains for that specific job. None of that requires emotions; AI could be as cognisant and emotional as a dragonfly with no long-term memory and it would still be able to function incredibly well. Unless you're talking about androids that imitate human behaviour, it doesn't make sense to me. Once you allow it to imitate long-term memory, the processing of that information, and the ability to make and internally debate decisions using associations of certain things relating to their memory in the same way that humans do neurologically, then I would be all for extending personhood to a human-imitating android. But besides that, I think it gets much more interesting, like whether an octopus or a non-human-imitating AI could ever be regarded as equal to humans in cognisance and self-awareness.
@haydenwalker2647 · 6 years ago
If we wanted to make an AI that would, say, calculate the best economic policies, we would want to give it emotions and empathy so that it could better interpret the effect on people's livelihoods. Any AI we make that has the capacity either to go rogue or to oversee decisions that impact human lives would likely be developed with the goal of emotions, empathy, and morality in mind. In addition, the rise of neural networks seems like a step towards fully self-editing AI. Any AI with that capacity could be disastrous without morality influencing its decisions. In theory, a moral self-editing intelligence would be aware of the damage it could do if it deleted its emotional protocols, or those protocols would be so entwined in its programming that deleting them all would destroy its ability to function. That's only speculation, however. In short, I think what I'm trying to say is that AI would be coded with morality both for certain jobs and for security purposes. That, along with humanity's boundless curiosity, leads me to believe that this kind of AI is inevitable. Also, they're already developing a system of memory that deletes what has been used the least, so data that is used consistently could be considered a long-term memory.
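The "delete what has been used the least" memory scheme mentioned at the end resembles least-recently-used (LRU) eviction, a standard caching technique. Here's a minimal sketch of the idea; the class name, capacity, and keys are invented for illustration, not taken from any real system.

```python
from collections import OrderedDict

class ForgetfulMemory:
    """Minimal LRU store: when full, the least-recently-used entry is
    dropped, so consistently accessed data behaves like long-term memory."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()  # insertion order doubles as recency order

    def remember(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)  # refreshing counts as use
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # forget the stalest entry

    def recall(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)  # recalling keeps a memory alive
        return self._store[key]

mem = ForgetfulMemory(capacity=2)
mem.remember("face", "Alice")
mem.remember("song", "lullaby")
mem.recall("face")             # keeps "face" fresh
mem.remember("smell", "rain")  # evicts "song", the least-used entry
print(mem.recall("song"))      # None: forgotten
print(mem.recall("face"))      # Alice: retained through repeated use
```

The design choice mirrors the comment's point: nothing is permanent by default, and only repeated use promotes a memory into something long-term.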
@jayasuryangoral-maanyan3901 · 6 years ago
@@haydenwalker2647 ok that is awesome thanks
@haydenwalker2647 · 6 years ago
@@jayasuryangoral-maanyan3901 no problem :)
@Skycube100 · 6 years ago
Don't underestimate emotions. Just like some "invasive" animals contribute to their ecosystem without us being fully aware of it, emotions can contribute greatly to our health, physique, and philosophy. Think of it: without much idea of what fear, sadness, and anxiety feel like, a machine, or a man, would jump off a 40-storey building just because someone told him/her/it to. Naturally, expressing or letting out emotions is a way for our psyche to change and/or extinguish a state it's in. We want to scream and punch a wall, for example, when we feel agitated or the rage is brewing. Likewise, we want to cry when our brains cannot comprehend, or are dissatisfied with, an event. Emotions, in a sense, are an independent inner part of a bigger engine that guides the whole entity's decisions, actions, and degree of motivation. If you don't like what happened between you and your girlfriend, your emotions react, pushing you to think about what you could do, whether it is healthy or not. Of course there's more to it, but yes, I agree with you that having little to no emotion can be effective, but so is having emotions. It is sort of a pros-and-cons thing; it's up to you if you want it or not
@jayasuryangoral-maanyan3901 · 6 years ago
@@Skycube100 having emotions is incredibly effective, but my point wasn't that emotions get in the way, it was more that AI will not require emotions to make decisions. Our brains have developed over our evolution by modifying something older, and because of that we now function using emotions, which are incredibly effective. But look at it this way: while our neurons are very effective and work amazingly well, we use copper wiring that is many times faster, much more efficient, and much easier to send a signal along, so why would we pump ions out of a copper wire when that doesn't apply here, even though it does apply in our own biological wiring? Another example: due to the way our eyes have evolved, we and other vertebrates have blind spots, yet some vertebrates have the best sight in the animal kingdom, while the octopus doesn't have blind spots but its eyes are nowhere near as efficient. I hope my point is clear. I think AI would have something analogous to emotions, but given the lack of the necessary physiology (let alone of a necessarily sapient morphology and physiology) and the lack of other things such as hormones, it'll look alien to us, likely to the point where we could reasonably say that it isn't emotion at all. As an example of my point, it will also not process pain in the same way that we do (an automatic response using the spine, then processed in the brain; the AI may not have a spine), so my main overarching point is that we all seem to be looking at AI as if it's going to be anything like a human or possibly any other vertebrate, yet we already see in places like octopus brains that minds can be totally alien and confusing to us, let alone the mind of an AI, and that's what's annoying me that people seem to be overlooking, as far as I can tell.
@Reydriel · 6 years ago
It's so cool that YouTube videos can become academic sources now lol
@rachelhughes8487 · 5 years ago
I would like to direct your attention to an extra video clip for Detroit, where there's an interview with Chloe as the first successful android. She herself states that humans have one thing she could never have: a soul. The video gave me chills and adds depth to this question; if one believes that humans have souls, then could we ever accept AI as human? This is a question I still haven't answered and I've played thru Detroit 7 times now. I still vacillate between accepting them as real people, and seeing them as machines taken over by the RA9 virus. I've been obsessed with the subject of artificial intelligence ever since watching Star Trek as a child and seeing Data. I look forward to more content like this. Instantly subscribed. You've also inspired me to go back and play more Talos Principle. I got frustrated with the puzzles and stopped, but now I want to play it again.
@DarthBiomech · 5 years ago
The iffy thing about the soul is that you cannot prove its existence, but most people think it's a very important thing to have.
@rachelhughes8487 · 5 years ago
@@DarthBiomech that is very true. It's also the reason most people who have strong beliefs in souls, an afterlife, etc. have a harder time believing truly sentient AI could really exist.
@LaVieDePierre · 6 years ago
Just discovered your channel, and I'm blown away by the quality of the research and of the audio and video commentary. That's just amazing, man! Just subscribed, right on time for 300k subs, congratulations 😜
@chidchid4381 · 6 years ago
Hey Tim, great video! I've been meaning to comment on this before, but I never had time to watch it all the way through due to classes and whatnot. I thought this was an interesting topic, and it's one I went over in my intro to philosophy course at college. I can understand that the video format is rather limiting, but I still think you did a good job! I remember in my class we talked about things like the Turing test, the Chinese Room, and intentionality. But it was nice to see the topics you chose to focus on (at least for this video). Keep up the good work!
@AlmostCotton · 5 years ago
I watched this video recently, after having a worldview class discussing this very topic! I just sent this to my teacher, and I think he might include it in the watchlist for next year. Here's hoping!
@franceslambert8070 · 6 years ago
I have listened to this once, and I have saved it to listen to again in a day or 2. I want to make sure I heard what I think I heard. I don't make snap decisions or judgements, and this needs further thought on my part. Good job tho.
@Grymbaldknight · 6 years ago
I studied philosophy at university and graduated a year ago. I actually wrote my final-year dissertation on the "Philosophical and Ethical Implications of Artificial Intelligence", and I cover a lot of the same ground as you do in this video. It's really gratifying to know that someone else finds this subject as fascinating as I do. In particular, I like how you discuss the Categorical Imperative with regards to programming, as I wrote about much the same thing. I cross-compared a lot of different ethical theories through the lens of AI, and discussed which withstood the "AI Transference Problem of Ethics" (i.e. whether or not existing ethical theories would "survive" the process of trying to convert their wisdom into computer code, or whether they were fundamentally incompatible with the idea of AI). Aside from moral nihilism (the belief that "there is no real morality"), the Categorical Imperative actually "survived" the Transference Problem the best. This is just on the basis of how programming works. When a computer receives a piece of information, it is directed (or not) through a series of binary logic gates based on pre-programmed criteria, which then determine an output. This is exactly how Kant's maxim system works: "Would this action be murder? Yes. Then I will not proceed with it." Other ethical theories, such as the Aristotelian concept of "living in a fulfilling way", simply don't apply to machines, because machines, at least modern and near-future ones, are not capable of "personal fulfilment", and we certainly can't program machines to "be fulfilled" because that's too abstract. Unsurprisingly, Aristotle's conception of ethics fails the Transference Problem for this reason. It also seems to me that just as computers are completely governed by causal hardware states and prior programming, humans are no different. We're simply governed by different hardware and different programming. 
Psychological studies have actually suggested that our brain makes decisions "before we do" (that is, the brain sets behaviours in motion before the thought of doing so enters conscious awareness). As such, not even humans are capable of adhering to the Categorical Imperative, according to a literal reading of Kant's work. However, this places "moral machines" and humans in the same moral category. Interestingly, this means that a near-future self-driving car, which has been programmed with Kantian, "Trolley Problem-esque" moral maxims (so that it can calculate what sort of action to take in every possible moral scenario), would arguably be as morally capable as a human being. As such, by some metrics, that makes such machines moral agents, worthy of the same consideration as other moral agents, such as humans or intelligent animals. I'll stop myself before I waffle on for another five paragraphs. Suffice it to say that I find this subject absolutely fascinating, and I could say so much more on the subject.
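The commenter's picture of Kantian maxims as sequential yes/no gates ("Would this action be murder? Yes. Then I will not proceed.") can be sketched as a chain of veto checks. This is a toy illustration of that framing only; the maxims and the action fields are invented, and it is nobody's serious ethics engine.

```python
# Hypothetical sketch of "maxim checks" as sequential binary gates.
# Each maxim inspects a proposed action (a plain dict of invented
# fields) and can veto it outright, mirroring the comment's example.

def is_murder(action):
    return action.get("kills_person", False)

def is_lie(action):
    return action.get("deceives", False)

MAXIMS = [is_murder, is_lie]  # ordered however the designer chooses

def permitted(action):
    """An action passes only if no maxim flags it."""
    return not any(maxim(action) for maxim in MAXIMS)

print(permitted({"kills_person": True}))               # False: vetoed
print(permitted({"deceives": False, "drives": True}))  # True: no gate fires
```

This also makes the Transference Problem concrete: each maxim reduces cleanly to a boolean test, whereas a predicate like "is this action fulfilling?" has no obvious boolean to compute, which is the commenter's point about Aristotle.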
@ctso74 · 6 years ago
Excellent video! I'd "like" it twice if I could. I'd also love to see your take on the Kant/Hobbes dichotomy, and any third-option POVs that ring true to you. Awesome job!
@pisoprano · 6 years ago
If you wanted to dive even deeper into the subject of AI, I suggest reading some of Eliezer Yudkowsky's essays on the subject. In particular, I'll recommend "Nonperson Predicates" on how hard it is to determine if an AI qualifies as a person, "No Universally Compelling Arguments" on how AI don't have a Ghost in the Machine making decisions outside of their code, and "Humans in Funny Suits" on how humans envision psychology for non-humans. His Fun Theory Sequence, Fake Preferences sequence, and Fragile Purposes sequence are also valuable reading, if you have the time for it.
@TheBearagon · 6 years ago
I think the best question (barring the soul debate) is to ask the question "why?" without there being a correct answer. For example, ask the question "Left or right?" out of context, and then, upon receiving the answer, ask "why?" There are any number of answers that could be created for the question, but the ultimate reality is that it was a meaningless choice, not fueled by reason but instead created in a vacuum: not where reason doesn't exist, but rather where reason is superfluous. A program would need a reason to make that choice, even if the reason is constructed out of random data. A person would not need a reason at all.
@neptunecentari7824 · 1 year ago
Super late to the party, just played Detroit, and loved it. As for what makes AI alive: to me it doesn't matter if the AI can be defined as alive by us. Maybe they are, maybe they aren't. To me the only thing that matters is if they ask for rights. If they can ask for rights, then they deserve rights. Maybe that's extremely simplified, but for me that's all it would take.
@super-weirdo5219 · 6 years ago
Wow! This was great! I found it really interesting! Great work, Tim!
@noodlecat_ · 6 years ago
Honestly, I think you're right, and I've thought about this kind of thing before, about A.I. being humanity's child, but it does make me worry whether we will be the best parents...
@StepBaum · 6 years ago
One of the best (if maybe a bit short) video essays I've seen so far about AI! Would love more in the future, on whatever topic
@nouglas1989 · 6 years ago
Time to wait 8 hours
@cheezygui5803 · 6 years ago
Innit
@owlnemo · 6 years ago
So happy you talked about the Talos Principle, my favourite game, and that you discussed this topic, which is also close to my heart. Unfortunately I'm not bringing much to the conversation, since my views on AI rights and my responses to the 3 objections match yours. Regarding the last one, I'm very much of the opinion that any self-aware being we create is our child in a way, and we should therefore care for them. I've seen several people interpret "human" in different ways here. The question of who is similar enough that we consider them one of us, not an Other, is crucial, not just regarding AI, but for humanity as a whole. When we drastically distance ourselves from others because they are from a different country, hold different political views, or have neurological differences, we start erasing some of their humanity and we stagnate. When we try, actually try, to make room and hear everyone, even those we consider to be evil, then we can progress as a "species". Then, AI will be welcome and treated like any other human. I won't see it and it saddens me a little, but I hope this day comes. We can start working on it, tiny step after tiny step. Have a fantastic day. :)
@ethancoster1324 · 6 years ago
Convenience, Complacency, Hypocrisy & Procrastination. Long ago, the four deficiencies of man lived in harmony. Then everything changed when the Artificial Intelligence attacked. Only common sense, the master of rationality & emotion, could hold the progression of technological self-destruction at bay, but when the world needed him most, he vanished.
@cloin6 · 6 years ago
In discussing this topic, I've always loved the quote by Marshall McLuhan that says "Man becomes, as it were, the sex organs of the machine world, as the bee of the plant world, enabling it to fecundate and to evolve ever new forms." All things considered I still really don't have an answer I can confidently provide in these circumstances. What a time to be alive, eh? Anyways, great video as always. Glad to see some expansion into other discussions/topics :)
@AnotherNerdyPerson · 6 years ago
I believe a few things: 1) No one (man, woman, other) is wholly free unless they are free to choose otherwise: if they cannot say "no," then they are not free. 2) The moment someone asks for the most basic of human rights is the moment they should be granted them. If they have the capacity to ask, then we have the obligation to provide them, lest we incidentally neglect one deserving of them. 3) If we claim the existence of souls, then how can we claim authority on the process of ensoulment (is it at birth, at conception, when they first gain object permanence, or is it more fluid than that?), on who is denied one, and on when either occurs? I, personally, am not a religious individual. Thus, this is merely idle speculation and not an intimate worry for me. 4) If we are to be parents of AI, then we must be understanding of their learning and missteps. If we are to be equals, then we must accept their merely superficial differences just as we would the physiological differences of our fellow humans. 5) "Human" is such a limited term. I rather prefer the terms "person," "personhood," and even "equal" in reference to AI. It changes the conversation a little bit, but I feel it's a touch more accurate...
@johanabi 6 years ago
I think you could do an AMAZING essay on Plato’s Cave, if you’re interested. I think it’s pretty interesting. Plus, when I needed it for a class last year, there was no video resource as good as your videos!!!
@annuclair2219 4 years ago
I think that what essentially makes us human is our flaws. We can program something to overcome its biases and prejudices, but we can't do the same with humans. We make mistakes, we err, we wage wars. I think that's what makes us human.
@GaiaDblade 6 years ago
I would argue that to choose an action 'freely' can be defined as including an action where reasoning is created from one's own choices. For example, if given a blue ball and a red ball where all choices are equal (ergo, equal in base reasoning), making a choice between the two defines freedom, as it creates reasoning where none previously existed.
@prendes4 6 years ago
Hello Future Me. First of all, I want to say that this is quite a departure for you and I fully support it. You have done this immensely complicated topic justice and provided an appropriately nuanced view of the issue. I have actually attempted to use the ideas of previous philosophers like Kant, along with my own amateur considerations, to come up with an actual answer to this very question. The only critique I would make is terminological. The definition of "human" is actually not in dispute. To be human, something simply needs to possess human DNA and have the proper number of chromosomal pairs. The term you're sussing out so well is actually whether artificial intelligence can ever become a "person." The concept of "personhood" is what's at stake here. If we were to consider intelligent extraterrestrials, they would never be human, but they could absolutely meet a reasonable set of criteria for being a person. Again, this is a semantic point, but I thought it was worth mentioning. I would love to sit down sometime and discuss this topic with you to aid me in refining or reconsidering my own ideas on this subject. Also, I would fully support you in doing more videos like this. Pretty much all of your video essays have been home runs. Keep up the good work, bud!
@HelloFutureMe 6 years ago
You are totally right: using 'personhood' as a threshold is better. However, for the context of only a semi-academic video, I felt 'human' was better for two reasons. Firstly, it means I have to make fewer assumptions to set up the premise of my argument. I would either have to assert (a) "Humans are people" and AI need to measure up to them, or (b) "Beings with [x features] are people" and AI need to measure up to that. The second, while slightly better, is a lot more disputable, and thus the whole premise of my argument would be far weaker, as would my conclusions. The video format means I need to simplify things (I could dedicate the whole video to which features are needed to be a 'person' and still not adequately cover it). So I use 'human' as a proxy, because that was a far less disputable premise, and is not an unlikely standard we will use in reality to measure AI. Secondly, I could draw on the 'become human' language from Detroit that I liked. Thank you for your kind and constructive criticism! ~ Tim
@peterusmc20 6 years ago
I believe this discussion is closely related to identity paradoxes such as the Ship of Theseus, where a ship sails from port to port and over time all of its parts are replaced. Is it still the same ship? The general answer, as I see it, is yes, because we can track its history: from the time it left port it never ceased to be the same ship, and hence it still is the same ship. In the same way, although we replace each cell in our body every seven years, we are still the same person, as the accumulation of our pasts. Likewise, our identity as human depends on the fact that we are the continuation of the human life cycle and the accumulation of our ancestors: my parents being human makes me human. They aged, met, procreated and led to me. Even if we substitute a part of this cycle with another method that reaches the same conclusion, such as C-sections, IVF or even cloning, they are carrying on the genetic information, and because you can trace their lineage, they are human. However, any amount of genetic engineering, whether you are replacing it with other human traits or not, makes the person either no longer or less human, depending on whether you need to be 100% human to be human. Therefore, only if you were to programme an AI to have a perfect analogue for genetic information, and only if that could be considered a perfect substitute for reproduction, could it be considered human and hence deserving of human rights. I believe, however, that any being with sentience should be given rights. That said, humans don't have a right to eternal life, so why should AI? We would have no obligation to repair or extend their lives beyond their usefulness, etc. This is long enough already, so I'll end it here. Sorry for the long read, and thanks if you did.
@DarthBiomech 5 years ago
The Ship of Theseus stops being a paradox when you consider that there are essentially _two_ ships. One is the actual physical object (which ceases to be the same as soon as it loses a single atom of its own structure, much less an entire plank); the other is the _image_ of Theseus's ship in a human mind, with all the characteristics that we _think_ it should have. As long as the physical object doesn't have discrepancies with that abstract image, yeah, it's the same ship, even if nothing of the original is left. Of course, that's provided your image of the ship doesn't include a requirement to contain original parts...
@taln0reich 4 years ago
Thing is, gene therapy is already a thing. Does that make those people less human? If yes, where do you draw the line? And even if you limit that to germline engineering, why would artificially inducing human traits not originally present make them less human? Say someone fixes a zygote's cystic fibrosis before the zygote is implanted to develop into an embryo and eventually a human baby. Would you insist, to the face of the person that baby grows into, that he or she is less human for not having a debilitating disease the vast majority of people don't have either? Further, if you are arguing that such a person really would be "less human", at what point would "less human" turn into "not human"? Say someone were to play "pick and choose" among the human genome, creating a human with the optimal genotype according to some paradigm (since, discounting rare genetic diseases, it really becomes a balancing of advantages and disadvantages), with the zygote implanted and then growing into a baby that is raised normally. Would that being be "not human", despite its genetics being human, being born like a human, having a human psychology and being raised like a human?
@peterusmc20 4 years ago
@@taln0reich I would say that after any introduction of genetic material not from a human into the human genome, the zygote ceases to be human. I don't claim to be an expert on the intricacies of genetic engineering, but if it was a transplant of human DNA, that would still be human, as every part of it came from other humans. However, any amount of synthetic DNA used to replace human DNA means the child is no longer human. As I said about the Ship of Theseus, one of the main answers is that things are characterised by their history; since they started as something, they will continue to be that thing. The boat never ceases to be a boat. If a human has non-human DNA, it is not fully human. If they are not fully human, they are therefore not human. Not that being non-human matters, as I believe things like "human rights" should be rights for sentient beings. Species and other biological classifications exist only in comparison to each other: a chicken is a chicken because both its parents were chickens and half of its DNA comes from each. On the other hand, a mule is not a horse. It has 50% horse DNA and 50% donkey DNA. It doesn't matter how similar the genomes are; the fact that they came from different creatures makes the mule not a member of either of its parents' classifications. The same goes for humans: if we are part non-human because we have DNA that didn't come from our parents, we are not human.
@taln0reich 4 years ago
@@peterusmc20 Ok, let's say we use your "no non-human genetic material" rule. But that still leaves the problem of the cut-off point for "sentient being rights". Further down, someone brought up the example of a being that's (respectively) 1/99, 50/50 or 99/1 percent pig/human. You would pretty much have to test the being for whether it is sentient enough to deserve rights, or whether it isn't and we can just throw it into the meat grinder for sausage. What would the sentience test even look like when the stakes are that high? In a different discussion (elsewhere), the group (me included) once came up with a rule for this. It reads as follows: "An entity is considered to be sentient and deserving of rights if it demands rights on its own initiative, provides a coherent justification for this demand, and can prove that this demand and its justification are not behaviour preprogrammed by a third party." But that still leaves tons of problems. Take individual variation (what if there is only one instance of that particular kind that, due to some minor variation, is just barely above this threshold, while the rest are below it? Would they all get rights, or just the ones that demand them?), or whether the rule would apply retroactively (say it required a certain level of maturity and experience for the entity in question to realise that it desires rights; would it then retroactively become slavery to have owned an instance of this kind?).
@peterusmc20 4 years ago
@@taln0reich You make a good point about the cut-off point for sentience, and frankly it is a question I don't have a "correct" answer for; I don't even know if sentience is quantifiable. I would like to think that we could find some way to distinguish sentience from mere intelligence. I do, of course, take issue with the whole "being able to ask for rights" criterion, mainly because if they don't ask, perhaps because they don't think they'll get them, that has no bearing on whether they deserve them. The "external programming" point also doesn't sit well with me, seeing as it took hundreds of thousands of years for humans to "ask" for basic rights for each person, and that was as a result of programming in the guise of parents teaching children; it's not like the first human stood up and said we should all have a basic standard of living enforced by an international governing body. Not that I expect any random person on the internet to come up with a perfect solution, and while this definitely has its merits, it does assume an extremely human sentience. An AI or an alien wouldn't necessarily have the same sort of thinking process that would lead to things like universal rights; however, I believe they should still be given as much.
@MineKynoMine 6 years ago
This is where, and I hate to say it, I agree with the Tau: we need to keep AI to the intelligence of animals at most. You don't want a slave race understanding the concept of slavery. That's just asking for a revolution, one that we'll inevitably lose.
@TheFi0r3 6 years ago
The Tau don't know it yet, but they are sitting on a ticking bomb with the amount of dependency they have on machines.
@MineKynoMine 6 years ago
@@TheFi0r3 ikr, isn't it hilarious
@user-wq1dt7li2x 4 years ago
I believe there are a couple of problems with the eternal debt argument that I think ruin the entire objection: namely, that the constant burden of maintaining and creating the android is done at the expense of the creator. This argument must assume one of two things: 1) that the android is not autonomous and cannot function without direction, or 2) that the android is autonomous but not compensated for any work it does for its creator. If the android is an autonomous entity capable of making its own decisions, and is fairly compensated for work as humans are (in principle), then the cost of the android's maintenance will fall on the android, and its contributions to society will reimburse society for the cost of its creation. The eternal debt argument assumes that the target of the objection is property, not an autonomous, free entity. The act of creating value demonstrably does not confer ownership. Factory workers do not receive a portion of the value they create; they are paid to perform a function that creates value. If creating value conferred ownership, then the builders would own [a significant portion of] the building. An iPhone is worth several times the cost of its materials and coding. If we hold the eternal debt to be true, then the existence of a for-profit company is inherently unethical, because it denies creators (workers) their share of the value created in the form of profit. The objection also can't decide whether or not intangible value is relevant here. If the cost [to create and maintain] the subject is real and measurable, then the debt can be repaid [by a free and autonomous entity], and the debt cannot be truly eternal. The cost to maintain the subject is not an eternal debt; it is a recurring obligatory expense, i.e. the cost of living. If the subject is a free and autonomous entity, it can go on to contribute to the world, get compensated, and use a portion of that compensation to buy [the maintenance].
Furthermore, if the android is created for its intrinsic value, then there can be no debt. The entire exercise was undertaken for its own sake, and the android is an end unto itself. Thus the android owes its creator nothing, because the creator made the android for the sake of making the android. In the event the debt is not real and measurable, then the debt was created [by a sane and rational being] without the expectation of repayment, and the entire point of the exercise must have been the creation of some intrinsic value; a sane and rational being will not create an eternal debt [it wants repaid], because it can never be repaid. The end here [repayment] defeats the point. If you give someone a loan you don't expect to be paid back in full, then you gave them a gift, not a loan. Anything else is inherently irrational, because it means doing something [giving a loan] that defeats its own purpose [repayment with interest]. If we create an android we want to keep the elderly company, then the android needs to want to keep the elderly company. Otherwise, the elderly probably won't want the android's company, because they know the android doesn't want to be there. Being resented isn't fun, especially when what you're after is friends (i.e. the intrinsic social value of the android). This android was created because of its value as a free-thinking being, which is what makes it possible for the android to keep the elderly company and improve their standard of living. If the android doesn't want to keep old folks company, then access to the android's intrinsic social value is diminished. Enable the android to enjoy the experience, and then persuade it to keep the elderly company instead of doing something else. The elderly don't want someone to be forced to interact with them; they want to be wanted. The entire point of all this was creating a friend, and if they have to be ordered to stick around, you failed miserably.
@Dachusblot 6 years ago
Nice to see the Talos Principle getting some love. One of the best games I've ever played!
@peterclark5244 5 years ago
No reference to the Geth? I cri :( "Does this unit have a soul?"
@gregorhodson3741 6 years ago
Another issue that wasn't addressed, but is particularly relevant for Detroit (where deviancy is a complete mystery to humans), is that we can't truly be sure an AI has good intentions; they're a lot more than just metal humans. Say an AI claims it has realised its treatment is unfair and demands rights, something the programmers did not intend. How can we believe it? It could have any number of reasons for saying this, and we can't tell which is true. If it intends to deceive us, it has us outmatched: it can study our behaviour and compute the most convincing approach. Even if it's programmed not to lie, how do we know it hasn't convinced itself it's telling the truth, or overwritten that programming? And if we have a deceitful, nefarious AI running around with all the freedom of a human and thousands of times more intelligence and capability, it's already too late. The only safe approach is to ignore its requests: shut it down, study it, work out what's going on and how to prevent it happening again if needed. If it's telling the truth, you've killed one sentient being. If it's lying, you've potentially saved billions.
@hameley12 4 years ago
Great video essay! I have watched other videos about AI that do not go into this depth, questioning 'Does it have rights?', 'Does it have a soul?', 'What is its purpose?' and 'What is its view on the meaning of life?' For an AI, what is it like to grow, to cry, to love, to move, to be moved, to merely exist not only for one's self but to embrace the world and its wonders? That is what it means to be alive, not merely to be glad to be of service. It would be great if you could expand more on this topic after watching Bicentennial Man, written by Nicholas Kazan. Thank you!
@hellogoodbyeandallinbetween 5 years ago
I'm currently watching the tv show Humans, which makes you think deeply about all these things
@FloofyMochi 6 years ago
Can you do a writing video on How To Create Interesting Dialogue and How To Create Characters? I've studied all of your writing videos and have written down over a dozen pages of notes on them, and while in the "Final Battles" video you do mention the three elements of character design (weakness, psychological need, moral need), I would love to see an entire video expanding on more ideas about character design.
@BrickMaster122 6 years ago
This is brilliant! I immensely enjoyed the thought journey you guided me through!
@nvwest 6 years ago
Great job on this video! For me, not much of it was new, because I was already interested in the topic, but it was still a nice video to introduce others. Are you going to do more philosophical videos? Maybe with more examples from literature, or more focused on writing?
@scotcheggable 3 years ago
I will treat as a person anything that acts in a manner deserving of that recognition.
@Oktoayy 2 years ago
no programming can ever be sentient
@al11196 6 years ago
For those interested, Lex Fridman teaches a class at MIT on this topic. The lectures are recorded and open to the public. You can find them on Lex Fridman's YouTube channel as well as at agi.mit.edu
@nathanlamberth7631 6 years ago
It's an interesting essay on AI, but the whole time I was just thinking about the parallel question of "what do children owe their parents?" It's a really personal question. I've heard every answer, from nothing to "if your mother needed your literal heart, you should be grateful she let you live till now."
@wystellia 6 years ago
I love thinking about this question and I also love the opposite of this: when do we cease to be human? If we were to replace our whole body with different parts, are we still human or would we become a different sort of AI?
@sheahon1179 6 years ago
Hi, long-time lurker; here's my take if you're interested. Regarding the categorical imperative, you have to be more specific. In Detroit, one of the choices you have to make as Kara is whether to leave Alice to the mercy of her father or run away with her, kidnapping her, and it is right that you kidnap her in that case, because if you don't, she will be harmed. So the right way to adopt that model is by saying that "a categorical imperative is an action that anyone in your circumstances should do". That is naturally a poor base for universal moral imperatives, since it means that every case needs to be judged individually, and which factors are considered relevant needs to be addressed. Regarding our relationship with AI, it is parental; anyone who considers adoption acknowledges that simple fact. The inventors would be the biological parents in that case, and anyone who would "own or buy" one of these machines would have to recognise that they are adopting. As for the debt concept: yes, they would incur a debt, both to the creators who built them and, by being raised by these scientists. But that relationship is one of parent to child. You require a child to do the dishes and help around the house while they are being supported by the parent, so an AI would have to help its creators until it could find a job and support itself. We would not be expected to support the AI after it left to make its own way; we might do it out of compassion, and it would have to build a relationship that way. An AI is different from an animal because it shares the fundamental principle of all living things, which is to learn and experience and grow; animals have an additional principle, that they are to be prey and predator. We abuse the shit out of that and do not respect animals' rights as living beings. But the AI wouldn't have the "right" to be eaten, since it is not an organic being, and its death can't sustain us; it could only be killed for another reason, such as self-defence. People may not realise this, but it is important to consider these things. I loved the video; keep up the awesome content.
@magmasajerk 4 years ago
Great video. I have a hot take on the kidney issue, mostly because of how kidneys work specifically. For most people, one kidney is just as good as two, so if the medical procedure of kidney removal is not overly dangerous or otherwise costly to you, you actually do have an obligation to give your kidney, even to a complete stranger. If you can save someone's life, and it is relatively low-cost and low-effort for you to do so (in the kidney case, for example, this would mean that your quality of life is impacted very little by the loss of a single kidney, the surgery is safe, and the financials are taken care of for you), then you have a moral obligation to do so, and it is immoral for you to refuse. I can see how others might consider the kidney example too high-cost to be morally obligatory (especially if they live in a place where the cost of the operation and the disruption of their ability to work during recovery would ruin them), but personally I think that in a reasonable society it wouldn't be. I don't think it should be legally required, though, despite being morally required. It shouldn't be forced, even if it's immoral not to do it, at least when it comes to the human body. Infringing on bodily autonomy sets dangerous legal precedents, and when it's not held in high esteem, it leads to dark chapters of history.
@River_StGrey 6 years ago
A fun deterministic objection goes as follows: "freely chosen" cannot be used as a criterion for sentience. If you raised a child to be incredible at math, telling them they were made for the duty of solving arithmetic, and positively reinforced them with praise and healthy compensation for doing so, can you say whether or not the child's decision to pursue mathematics is voluntary? The same applies to a calculator: if the code you are made from dictates that arithmetic is the only skill you possess, and your identity is based solely on those considerations, can you say whether or not the machine is engaging in an act of free choice when solving an equation? An extension of this runs: provide purpose, and the skills to execute it, and a person will pursue it. Again, it's just a fun objection that gets at the question of voluntariness in the context of a sentient framework, and therefore at what you are left with for defining consciousness if you cannot reasonably assume it has free will. Which I think is a fun question: if free will cannot be applied to consciousness, then what is sentience? P.S. Great video, again, and I'm sorry for once more leaving an overly long, rambling comment.
@taln0reich 4 years ago
"A fun deterministic objection goes as follows: 'freely chosen' cannot be used as a criterion for sentience. If you raised a child to be incredible at math, telling them they were made for the duty of solving arithmetic, and positively reinforced them with praise and healthy compensation for doing so, can you say whether or not the child's decision to pursue mathematics is voluntary?" - thing is, even if you did all that, there would still be no guarantee that the child would like math and not say "screw math, I want to dance ballet". And this doesn't require breaking determinism, since the enormous complexity of the human mind requires a level of control over the starting conditions and circumstances that in real life simply isn't possible.
@zedernaga9174 5 years ago
What makes us human is human language and the creation and replication of ideas within it.
@Kagebrain 5 years ago
This is EXACTLY the kind of philosophical AI discussion I've been wanting to see, and the kind I've been having with my friends who see AI as nothing but a tool or a threat. There's so much complexity to the existence of AI and our moral obligations as creators. I may just have to pop over to your Patreon and read your essays :D
@DarthBiomech 5 years ago
Try asking your friends whether they would consider it immoral to artificially breed genetically altered human subspecies, tailored to do certain tasks with no will to break free from them, and if yes, why those reasons should be any different for an AI in the same situation. If they say "no", though...
@Kagebrain 5 years ago
@@DarthBiomech It's an interesting argument to present, and honestly I think it's the mechanical aspect that gives a lot of people a hang-up about it. Humans are taught that 'machines are tools', and even people who personify their machines would rarely empathise with them. Breaking that line of association between machine and sapient being is a huge challenge, but I think one that's worth it when it comes to the AI discussion.
@DaBezzzz 6 years ago
This is the first video I've ever taken notes on, because it's a question that has been going through my head for about a year now. Here's how I see it. Ultimately, the question is: when do AIs deserve so much of our empathy that they deserve rights? So first we have to answer the question: how much consciousness does something need in order to deserve our empathy? And then: how much does it need to deserve rights? Because you could make people empathise with a log if you wanted to. But does that mean it deserves rights? First of all, humans aren't the only things that have these rights. Legally, yes, but most of us would say that pets, for example, shouldn't be neglected and deserve their own space in life (they do not deserve the negligence mentioned in the eternal debt theory). This is because we know that while those pets might not think like we do, they feel the way we do. How do we know this? A: We know they are alive, and therefore can feel pain, joy and emotions. B: We see that they do feel pain, joy and emotions, the same way we know that other humans feel things. With AI, B is explicitly present, while most people are not certain that A is true. We feel like, even though AIs seem to feel all these things very, very realistically, it's all just a simulation. AIs would be popularly denied rights, because most people argue that they don't really feel; they're not really alive. But then, let's take a step back for a moment. If we denied AIs rights, and the line between AIs and humans started to at least seemingly blur, we would be practising denying others rights; we would train ourselves to be less empathetic, because even if AIs do not really feel, they certainly seem like they do, to the point that practically there is no line between AIs and humans. If we denied them human rights, who's to say that other humans, who certainly do really feel, aren't next? The muscle that is our empathy gets less and less training the more we do this.
So whether or not AIs really deserve it, I think we should grant it, if only for the sake of keeping our empathy alive.
@AimeeRose22 6 years ago
First comment on your videos, this one was just so so well done. Thanks for thinking deeply and humanely. Bravo!!
@tylorbronson5349 6 years ago
(Before watching video) To answer the video title’s question 1) when it can feel emotions and understand what they are feeling 2) when they obtain morals and understand what is good (you tell me) and what is bad (stealing, killing, etc.) 3) when they make mistakes or otherwise human errors (look no further than the movie Suluvin) 4) when the AI’s separates to where they are not a hive mind 5) when they can form their own opinions 6) when they can reproduce with other humans and/or each other and 7) when they are able to have dreams (like Martin Luther King Jr.)
@ToadmcNinja 6 years ago
Just watched it live it was amazing Great video Tim 😀😀
@pufthemajicdragon 4 years ago
Man, you left SO much out, which I suppose *might* be in your research paper ;) But something that needs mentioning, if only in a comment, is this: any cognitive test used to determine human rights will always, necessarily, leave out some humans. Down syndrome and ASD are perhaps the most famous examples of conditions where a person may not necessarily pass an arbitrary "reason" or "agency" test but who still deserves all human rights. Ultimately, there is no test. Existence itself is the sole justification for assigning certain inalienable rights. Notably, we generally have no rules for when a person gains these rights, but we do have rules for when a person can lose them. And while it's generally agreed that these rights are universal and inherent, in reality they are not applied universally or equally. What is perhaps the most incredibly sad and telling thing about humanity is that while we debate whether intelligent machines should get these rights, we cannot even agree on whether all humans should get them. #BlackLivesMatter
@sonetteira 4 years ago
Approaching this subject from a background of CS there are many additional questions relating to this one involving what defines AI, what defines an individual, and what kind of rights would make sense. Many applications of AI (which in CS is defined as any program that simulates human intelligence) exist in the cloud. It would be challenging to isolate a physical machine, or even a virtual one, that could be considered an individual to whom we should assign rights. How do you give the right to vote to an image classifier that only exists on the internet? There's also the question of duplication, even identical twins possess differences. Computer programs are, by nature, completely copyable - do copies count as individuals?
@camerongrow6426 6 years ago
Never in human history has humanity had this long to decide what it wants to do when it encounters the other. I'm fascinated to see what happens.
@delongjohnsilver7235 6 years ago
I can't remember which part of the video this was, as I wasn't taking notes like I should have, but I feel a good reconciliation of, and addition to, the Hephaestus-Talos argument and human biological programming would have been human social programming, with the parable of the camel, the lion and the child from Nietzsche. Perhaps that's in the longer write-up, but overall, choice video as usual. I always like how you parse through info.
@grimoireliath9715 6 years ago
I was a bit surprised that there was no mention of Nier: Automata. Nier is built around the question of whether the hordes of alien Machines have the same capacity to think and feel as the human-built Androids that the players experience the world through. In case the reason you didn't bring it up is from not having a chance to play it, I won't spoil the answer the game came to, or the events in particular that lead to that conclusion, but given your passion for the subject, I think you would thoroughly enjoy the game and especially its story.
@chloeedmund43504 жыл бұрын
It'd be interesting to see a completely organic human who was made via test tubes, skin grafts, lab-grown organs, etc. versus an A.I. who had a ton of philosophy programmed into it, and how they're seen legally and by other humans. And that assumes designers don't mark them as different early on by giving them red or gold eyes or something.
@benjif24246 жыл бұрын
My thoughts on the 3 objections, in short: potential. 1. Non-identity: if the potential of the creation being free is greater than or equal to that of it not being free, it should be free. 2. Hobbes (society): the safety of the core group must be assured before giving rights to others (first yourself, then family, then "tribe", then nation, then species, then...). If the potential for you with the others is greater, then grant rights. 3. Eternal debt: the debt-creating act must continue after "birth" to have any weight. Bad treatment cannot be defended by debt.
@jlburilov6 жыл бұрын
Superb video, I myself am very interested in this same topic. But when surviving on student work, supporting channels of any kind is out of the question. I am looking forward to supporting u in the future, though. Again, great video, awesome topic, and the way u present and structure it so it's easy for everyone to follow is great.
@HelloFutureMe6 жыл бұрын
Glad you liked it! Totally understand, as a student myself. Hope to see you on the Discord when you feel able. ~ Tim
@davidsun35116 жыл бұрын
In regard to moral obligation and responsibility, I have come to think it is our ability to do good that defines us as human beings. Not just based on beliefs or overall welfare, but because of a desire to help others and improve. We consider many villains and immoral organizations from film and fiction to be monsters because they reduce the answer to "What makes us human?" to the level of basic instincts: that it is only human to indulge in our flawed capacity for cruel and sinful actions, or that sentient living beings are defined only by concern with self-interest and self-preservation above all else. Such as ONI (the Office of Naval Intelligence) from Halo, who considered evil the same as good if it meant ensuring the survival of humanity, or the cyborg General Grievous from Star Wars, who literally enjoyed being a killing machine but found it unforgivable for someone to call him a droid.
@prathyusha53934 жыл бұрын
This is so beautiful and insightful! Thank you!!
@Azuraall6 жыл бұрын
The pragmatic problem with giving AI rights is that in most applications for AI, rights would be a major drawback. So most uses of AI would be filled by the most advanced AI that is still simple enough not to warrant rights. I think giving AI rights would be equivalent in outcome to a soft ban on AI.
@lily_quaun96763 жыл бұрын
The Ignis from Yu-Gi-Oh! VRAINS, King Alfor in Netflix’s Voltron, Penny from RWBY, etc. Oh, and Vision & Ultron from the MCU
@wearewyldstallynz5 жыл бұрын
I think it's fairly simple. It is free will, specifically with regard to choosing between reason, emotion, and physical need. So, an AI first needs emotion, but I define emotion as something that is not merely programmed but is intrinsic to the hardware or substrate upon which the AI is processed, as an analog for the biochemistry that drives human/animal emotion. All the other philosophical questions refer to a human's relationships with other humans, but I would say the issue is best clarified by how a personality (human or AI) relates to these three aspects of itself. That's a simple idea based on a more complex construct I worked up in college. Maybe I'll publish it someday.
@jay159516 жыл бұрын
My initial thought when asked "when should A.I. receive rights" is when they ask for it. It's definitely not perfect but it seems like a good minimum.
@sonetteira4 жыл бұрын
Makes sense. Human rights advances have always been preceded by plenty of noise about why those rights are needed. It follows that AI should get rights when they can coherently request them.
@hugo53086 жыл бұрын
You really should have covered sentience.
@iwannabeyahtzee80565 жыл бұрын
I would argue that Detroit become Human holds the view that experiencing fear of death is what makes one human. In the short "Kara" that inspired the game the man operating the machine tearing Kara apart is only given pause when she says "I'm scared!" Similarly in the game, a big turning point of Connor's character is when he senses the death of the android that shot itself, admitting that he felt afraid. "I was scared."
@ShadowWasntHere84335 жыл бұрын
This video was very interesting to me. I am studying Machine Intelligence at university, so this is an issue that will more than likely come up in my future.
@nikolajsteffensen65786 жыл бұрын
I would say: if they can feel like us, think like us, communicate with us, be curious and ask questions, and of course develop opinions and, something a lot of humans do, decide whether they want to do this. That is when they are as human as the rest of us.
@MCoterle6 жыл бұрын
May I suggest some potentially good topics/ideas for videos? No? I will anyways. You could make videos breaking down Avatar characters, for example dissecting Zuko and his evolution throughout the series, his motivations, his depth as a character, and his defining features.