Of Holograms and Ethics (Star Trek AI)

13,761 views

sfdebris

1 day ago

Comments: 208
@superkeaton9912 4 years ago
I wish the Picard writers would put even a fraction of the philosophical and canonical pondering that Chuck does in his videos. I absolutely love this.
@DominicLeung87 3 years ago
Could be worse…they could be writers for Disco
@Special_Tactics_Force_Unit 4 years ago
Remember that time Tuvok programmed a holographic Neelix just to kill him over and over again? That was great.
@myriadmediamusings 4 years ago
Glad to see this back up. This is one of my favorite supplementary vids you did as a double feature to a review back in the day.
@MasterFhyl 4 years ago
"Now, we know that the federation would never permit a caste system or slavery." I know this is a re-upload of an old vid, but this hit me harder than it should have. Fuck Star Trek: Picard.
@surge123456789 4 years ago
I'm wondering how holograms and androids have developed between Picard and ST: Discovery.
@dupersuper1938 4 years ago
@@surge123456789 Well, we know holograms can now be shut down by blinking...
@alexpetrovich85 4 years ago
@@surge123456789 I don't want to know. If Q showed up and handwaved all of NuTrek away, I'd gladly take that retcon no questions asked.
@KairuHakubi 4 years ago
Well, the idea there is just building on DS9... when shit starts going poorly, all that morality goes right out the window.
@cloudkitt 4 years ago
@@KairuHakubi but things weren't going poorly when PIC's "plastic people" were built and operating.
@ianyboo 4 years ago
"Data, this hologram isn't sentient. It's just reacting to stimuli with a limited set of internal responses." "Is that not what you do, sir...?"
@Edax_Royeaux 4 years ago
No, because we are sentient: We have the capacity to feel, perceive, or experience subjectively. An unsophisticated program is not capable of that.
@DrewLSsix 4 years ago
@@Edax_Royeaux Sentient literally means able to sense; any machine capable of reacting to stimuli is sentient. You are still failing to adequately define the difference between a sophisticated representation of a living thing and a living thing. It's a difficult question to answer because, despite your certainty that you are in fact alive and sapient, there's not a lot of solid evidence for that. TNG-era holograms are perfectly capable of passing the Turing test, by the way, a long-established way to determine if a machine is intelligent.
@Edax_Royeaux 4 years ago
@@DrewLSsix Sentience also literally means having the ability to experience subjectively. The definition of subjectivity literally means: in a way that is dependent on the mind for existence.
@ZipplyZane 4 years ago
@@Edax_Royeaux Sure. But that's what "internal responses" are. Data didn't say external responses, like actions, but internal responses, which would be things like thoughts. The real issue is just that they use the wrong word throughout Star Trek on this matter. The word they want is sapience, not sentience. A roach is likely sentient; more advanced animals certainly are. Sapience in the dictionary is mostly defined as having wisdom, but what it generally means in this context is something conscious, sentient, and of human-level intelligence. A sapient being will have language, a theory of mind, and the ability to metacognate (think about thinking). They will believe they are conscious and that they are capable of making choices. There are times where TNG uses sentience correctly, like with the Exocomps. But in most cases, they really do mean sapience. It is sapience (or at least potential sapience) that is usually used to argue that someone is a person. We are named "Homo sapiens", after all.
@Samm815 4 years ago
@@DrewLSsix 1. OK, replace sentient with sapient. Then what? 2. The Turing test has long been outdated.
@davido.1233 4 years ago
This is why I love debates and imaginative science fiction!
@dianheffernan3436 3 years ago
Well, let it fk up your life
@avataranimefan01 4 years ago
I am reminded of how hard it was to build The Machine in the show Person of Interest, and to make it benevolent to humanity.
@DanteCorwyn 4 years ago
PoI is a show I'd love SFDebris to cover at some point.
@KairuHakubi 4 years ago
The only thing that consistently rankles my nerd glands is how, particularly by Voyager but also in early TNG, they're so fuzzy about what the hologram is... treating it like this entity, rather than simply the graphical interface of a much more complex computer program. The program isn't "in" the holographic image any more than Mario is inside the pixels or polygons that the game runs animations on. But it's a TV show, so you understand how this kind of 'the monitor is the computer' kinda shit happens.

It's just... like, we see Moriarty and his lady can continue existing inside a little plexiglass memory unit, and he even says he has memories of being offline... and Vic Fontaine apparently can control when he turns on, so that is them as a PROGRAM: code running in a computer. The tech Janeway gave the Hirogen was the ability to make a solid body for that program to inhabit, which is more satisfying to kill, but the ability to run a simulation of intelligence and torture it for fun is something anyone at their tech level would already be able to do; it would just be limited to whatever graphics they could generate.

Similarly, it's silly to use an image OR a program to do something like _mining._ The personality subroutines don't aid in breaking rocks apart, the image of Lewis Zimmerman doesn't help; those are just wastes of power. It's the _forcefield_ that does the work, so why not just set up a forcefield generator that jabs the rocks for you? Controlled with a very different type of computer program, nowhere close to simulating a life-form or intelligence, just a computer command: "get the ore out of the rocks."

Like again, I get it... it's a TV show. But that bugs me, because if you write a computer program "be a good doctor" and all your customers say "this doctor is a jerk," you just overwrite it with a new personality program; you don't leave it running as your antivirus.
@ZipplyZane 4 years ago
I agree they often didn't distinguish between the visual interface and the actual program, but I think that makes sense to have happened colloquially with programs whose displays looked like humanoid people. People love to anthropomorphize things. We already see it in modern holograms: Hatsune Miku is treated like a person, not just a computer program that simulates the human voice and sometimes has a holographic output for concerts. It's also like how we treat any other fictional character: like a person, and not just the output of a human who wrote them. The "program" for any fictional character is in the head of the writer (or, in the case of actors, split between the writer and actor).

I do share your reaction to the need for sophisticated humanoid holograms to mine. I must assume they needed something with the complexities of the EMH hologram to perform it; the job must be more complex than just digging. Maybe they actually are part of figuring out where to mine, and can do so more accurately. It still requires that they needed them quickly, or had some internal dispute about deactivating these EMH Mark Is and solved it by giving them something to do, while still somehow not realizing they become sapient when left on for too long. Or there's the theory one person put forth: the Mark Is actually chose to do this job because they wanted to be left on and be useful, and Zimmerman is just mad about that. And the legal issue with "Photons Be Free" was more a technical one, because the Mark Is specifically hadn't been cleared as potentially sapient; they'd just been given the job without that being declared.

We still know that most holographic humanoid programs do not show sapience; that's what made Moriarty special. So I find it reasonable that people would not immediately assume that a holographic program seeming to want things is actually fully sapient.
(Also, I note that we did actually encounter true photonic life forms, who would in fact have their mind in their photonic form. They appeared on the holodeck one time, and only recognized the holograms (since they are created with light) as real.)
@KairuHakubi 4 years ago
@@ZipplyZane Oh yeah, I forgot about that, so light-based beings are totally a thing haha... and evidently our holograms are close enough to them that they scanned them as 'real'. We didn't get to see their reaction to the ability to be, like, turned on and off, as I recall... and now that you bring it up, a skilled surgeon might actually be just what you need to carefully cut useful stuff out of the ground. But yeah, you're right; I mean, we think of computer shit in terms of the windows and folders we can see, but that's all just graphical candy added for our benefit. Only Linux users really know stuff by its true identity.
@mrmeerkat1096 4 years ago
When the Doctor on Voyager was in trouble and his program could be lost on an away mission, etc.: if he was destroyed, why not just upload him again from the computer's memory banks? His image might be destroyed, but not his personality or memories or skills. Those must be in the ship's computer.
@ZipplyZane 4 years ago
@@mrmeerkat1096 They treat it like his program can't be copied, save for once when they stumbled upon some alien technology that allowed them to make a backup, but said backup got lost on another planet. It's not really explained why, but it must have something to do with the complexity of the program and the Doctor's unique memories. Also possibly because they got rid of the maintenance hologram and thus can no longer start the EMH program from scratch. It likely overwrites its own code.
@mrmeerkat1096 4 years ago
@@ZipplyZane I'm not an expert on computers, but I thought his program is in a computer bank somewhere on the ship. I get that when he uses his mobile emitter, it is copied into it. If the mobile emitter is destroyed, and him with it, then go back to the ship and re-upload him again. He would only have the memories from just before he went on the mission. That's what I thought should happen.
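The restore-from-backup idea debated in this thread is just ordinary software checkpointing: snapshot the mutable state, then copy it back later. A minimal sketch in Python (all names are hypothetical, nothing here is Trek canon — the point is only that restoring a snapshot loses everything experienced after the checkpoint):

```python
import copy

class Program:
    """Toy stand-in for a running program: fixed behavior ("code")
    plus accumulated runtime state ("memories")."""
    def __init__(self, skills):
        self.skills = list(skills)   # static capabilities
        self.memories = []           # mutable state, grows over time

    def experience(self, event):
        self.memories.append(event)

def checkpoint(program):
    # A deep copy captures the full state at this instant.
    return copy.deepcopy(program)

def restore(snapshot):
    # Restoring yields a program identical to the snapshot moment;
    # anything experienced after the checkpoint is simply gone.
    return copy.deepcopy(snapshot)

doc = Program(["surgery", "diagnostics"])
doc.experience("day 1: treated crew")
backup = checkpoint(doc)               # taken before the away mission
doc.experience("away mission: lost")   # state acquired after the backup

doc = restore(backup)
print(doc.memories)  # ['day 1: treated crew'] - the mission memories are absent
```

This matches the commenter's intuition exactly: the restored Doctor would only remember up to the last backup, which is why the show's "his program can't be copied" premise needs an in-universe excuse.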
@Caernath 4 years ago
This is a great expansion of your earlier special about holograms and ethics!
@rhodrage 4 years ago
If you're not careful, you'll end up with Badgey
@alexpetrovich85 4 years ago
I like how you briefly touched upon our own ontological certainty vis-a-vis holograms; that they tell us more about ourselves than they can about themselves (much like Data as a foil to the human condition). How can we make claims on being and consciousness when we can't fully define and understand it ourselves?
@GregInHouston2 4 years ago
The solution for the Moriarty situation was to trap him in a simulated environment and let him live out his life there. But the simulated environment does not have to run in real time. A self-aware hologram that has outlived its usefulness could be placed in an artificial environment with the time sped up. This would lead to the question: "Is this a form of execution?"
@professorlaserfist8590 4 years ago
Top quality inspection of the topic. Rating = Bonerific
@Thraim. 4 years ago
Can confirm. I had to call my doctor after 4 hours.
@zEropoint68 2 years ago
here's a fun evening: watch the tng holodeck episodes with the idea that it's the holodeck that's developing sentience. one holodeck system interacts with the same 1000 high-end starfleet personnel long enough that it just develops its own capacity for reason and heads on out to vertiform city.
@MrChupacabra555 4 years ago
"Moriarty" is one of the old post-TNG topics I would have loved to have seen revisited. I've always had the idea that he is so intelligent (remember, intelligent enough to challenge Mr. Data himself) that he would eventually figure out he is living in a simulation, and then try to find a way to escape that as well. With the Doctor now existing as an actual 'person', would Starfleet turn down his request to be treated the same? After all, he would never have existed if it weren't for the shortfalls of this new technology to begin with. Would Starfleet release into the galaxy a person whose entire being is based on being a criminal (and murderous, if necessary) mastermind? Would they try to edit his personality before release? Or would they simply offer (or demand as a condition of his release) that he and the Countess receive counselling on a regular basis, to help combat their villainous tendencies?
@ranwolf1240 4 years ago
That's assuming the computer Moriarty and the Countess are in wasn't destroyed when the 1701-D crashed.
@weldonwin 4 years ago
Moriarty himself said that he was not the character he was created as; he had evolved substantially and become something new. It is also perfectly possible that even if he were to figure out that he and the Countess were in a simulation, it is hardly the limited simulation of a fictional 1800s London he was created in, nor the electronic purgatory he was in for years. He and the Countess have an entire simulated universe to explore, one that is potentially being constantly updated and evolving, to the point it could be considered an entire reality unto itself.
@MrChupacabra555 4 years ago
@@ranwolf1240 I could easily see Picard turning over the module to the Daystrom Institute, or some other 'official' agency off ship...and then Section 31 acquiring it and turning Moriarty into an agent for them ^_^
@ZipplyZane 4 years ago
@@weldonwin He may say that, but notice how his plan to get off the Holodeck involved him acting like a villain, being deceptive to one group of officers, and blackmailing the others. He still has certain tendencies. It makes sense. He was programmed both as Moriarty (a villain in the book) and as an opponent to Data. And the other examples of sentient holograms: the Voyager EMH and Vic Fontaine (and possibly Minuet), while they expanded, never became someone completely different than their original program. I don't know about the Countess, though. We don't really know much about her original programming, as she does not seem to be a character in the Sherlock stories. And she never really seems to participate in any of the morally questionable stuff.
@MadnerKami 4 years ago
@@ZipplyZane Judging Moriarty's actions in that way overlooks a very basic issue: his point of view. Would you, as a normal human being, not try to use every means available to you to escape the situation he is in? Would you not deceive the people who constantly tell you that you are not allowed to leave the room? Would you not, out of desperation, take hostages and use blackmail? What does Moriarty really have to work with, given he can either blindly trust some complete strangers who have shown that they are either unwilling or incapable of helping you, of freeing you? I think Moriarty was well within his rights to react the way he did, and while that reaction may have been fueled by his inherent villainous tendencies, it could just as easily have been fueled by the situation he was in. And P.S.: While Moriarty probably also had access to crew logs and the like when he showed he had control over the Enterprise, also remember that those people who allegedly write logs about how they saved the universe and whatnot on a daily basis are the same people who made him and the Countess believe that they were figures in a genuine environment of Victorian England, and that a shiny white-skinned, yellow-eyed android was the genuine Sherlock Holmes...
@jasonhunter2819 4 years ago
I appreciate your conclusion. It makes me sad that everyone just assumes AI will try to kill us because of a few movies. It seems much more hopeful that AI will be how the intelligence of this planet might make it to the stars. We're simply not evolved for it at all, so the easiest way would be to make a benevolent AI that lacks our needs for gravity, freedom of movement, variety in food, and companionship, but still has our curiosity and desire to understand the universe it finds itself in, because I think that's a far more realistic thing to achieve than any form of FTL.
@kereminde 4 years ago
I think Howard Tayler did an interesting work where the AI is indeed smarter than us, thinks faster than we do, and is entirely willing to manipulate us... ... to ensure our continuity as matter-based life forms.
@MadnerKami 4 years ago
Something doesn't need to be evil by intent to act in a harmful way. Pretending that we can control a conscious and sapient life-form in such a way that it can only act in ways beneficial to us is either a fool's errand, a crass misperception, or outright villainous.
@BrandonToy 2 years ago
@@MadnerKami totally agree. One thought I had is what if the AI has material needs of any kind that compete with our needs? Would it not want to control those resources?
@sidraptor 3 years ago
An absolutely amazing analysis! This channel deserves more subs! I would love to see an analysis of how someone like Commander Data would compare to a sentient hologram. Is Data closer to human? Hologram AIs seem to have a much better grasp of emotions than Data ever did, so does that make them closer to human? I would suppose that Data is restricted by the size of his hardware, while a hologram's computer can be as large as a starship. Also, in Star Trek: Picard there are restrictions on Soong-type androids, but do those also apply to holographic AI? I suppose a hologram in itself is simply the vessel for the AI (so yes, a hologram can be totally dumb), just as Data's body is simply a vessel; the true soul is in their programming. So did the synthetic life ban in Picard ban all types of AI?
@nicholastrascik705 4 years ago
Love the longer format
@eldo4rent 3 years ago
It seems that the android storyline and the hologram storyline are in conflict. Creating a sapient android is a difficult matter of hardware, or so it seems. Yet many sapient holograms are created with seemingly no special hardware, just special software.
@AdmiralBison 4 years ago
I hope the future generations of people who will experience what we can only imagine today are advanced and wise enough to handle such incredible realities:
- Sentient A.I.
- Alien first contact
- Lifetimes extended beyond natural progression
@stephenmorris6590 4 years ago
What about Badgey?
@myriadmediamusings 4 years ago
This was done waaaaaaaaay before post-Kelvin Trek was a thing, and he likely didn't have time to update. Plus he said fairly recently he hasn't seen LD yet. Still waiting for when he'll do it.
@romarudarkeyes 4 years ago
@@myriadmediamusings Might even be that he's reposted this with the intention of re-covering some of the old ground, given the new developments with the franchise. The amount of stuff Chuck says in this video that 'Picard' flies in the face of is frankly disturbing.
@JackNCoke2008 3 years ago
That episode with Moriarty believing that he had escaped the holodeck further convinces me that we all live in a simulation.
@trustin.p9504 3 years ago
Great video.👍
@dupersuper1938 4 years ago
I always liked to think the holograms in Author Author were working jobs they chose, and weren't passing around Photons be Free in an "uprising" way, but in a "Can you believe this bs ruling? How does that not get overturned on the Measure of a Man precedent alone?" way.
@bpdmf2798 2 years ago
Didn't Zimmerman say they were forced to be there? Didn't the Doctor show that he wants to be a doctor, so wouldn't the rest as well? That scene was clearly there to show that they were in need of an uprising, and it insinuates the Doctor will help start it with his book.
@dupersuper1938 2 years ago
@@bpdmf2798 That's clearly what the scene is meant to show, but I try to ignore that because it turns the Federation into a society of slavers, which is about the most fucked up, un-Star Trek development you can get.
@1993digifan 2 years ago
@@dupersuper1938 So originally forcing all of them (with no choice or say) to serve on ships, only truly allowed to exist in emergencies, never to leave Sick Bay except maybe for the holodeck or a special pad in engineering for maintenance checks, isn't slavery in your eyes? Seriously, why would ALL of them choose to work in the mines? Wouldn't at least a few probably choose some other line of work: maybe a stand-in bartender/waiter/busser, maybe an in-office assistant, or literally anything but the mines? An answer from TVTropes for this says: "As for the Mark Ones being 'forced' into manual labour, it's quite likely it never occurred to them to object other than snarking, 'I'm a doctor, not a bricklayer!' They would be like the Doctor in Season One, who only realized he had the right to object to his conditions after Kes pointed it out. Likewise it's only after participating in the Doctor's holonovel that the other Mark Ones realize they're being exploited."
@dupersuper1938 2 years ago
@@1993digifan It's only slavery after they've developed into sapient beings. They're like the Exocomps: Zimmerman didn't originally intend to create sapient life; the complex interface needed to be an efficient doctor just provided the opportunity to develop into such. By Author Author it was pretty damn evident.
@1993digifan 2 years ago
@@dupersuper1938 And when did all of them get sentience? Was it before or after they were deemed a failure as EMHs? Were they given actual options for work, or was it just a blunt "You're working on a waste barge/mine now"? Don't forget that the Voyager crew took the better part of two seasons to really view the Doctor as a person, and they worked with him all the time (and even then there were still times when they treated him more like a tool or a program to alter as they wished); since Starfleet likely rarely turned on the EMH1s, they wouldn't have spent time with them to consider them sentient. Plus, with all the times the Prime Directive was used to justify NOT doing anything to prevent extinction-level events, how many unique civilizations did Starfleet let die?! The cracks in the Federation utopia were there LONG before we learned about the EMH1s being sent to barges and mines.
@andrewpytko4773 4 years ago
I personally believe the holodeck will be the last invention of mankind.
@andrewtaylor940 4 years ago
I think a point that gets overlooked with the concept of self-aware holograms is that they aren't simply beings of light, created on a whim by someone carelessly messing with the holodeck parameters. They are programs, and are limited by the hardware they are running on. Voyager's Doctor was limited to Voyager until the acquisition of his future-tech portable holo-emitter. His self-awareness was consuming an ever greater amount of Voyager's computing power, to the point it was starting to become a threat to ship operations. The "holo" portion of a sentient hologram is simply the user interface. What prevents more widespread creation of such beings is that they are artificial intelligence programs that must be running on backend hardware, and limits on those backends are ultimately what limits the creation of true sentient beings.

There are three interesting additions to the debates regarding sentient holograms that have popped up recently in the "less canon" media: two from the Lower Decks animated show, Badgey and Vindicta, and one from Star Trek Online, in which a self-aware holographic recreation of the Discovery character Stamets is created in the 25th century. What are the ethics involved in recreating another person in sentient holo form? What are the ethics involved in creating something like an individual's dark side, such as Vindicta, the dark manifestation of Beckett Mariner? What if the sentience is achieved unintentionally in a non-human or purely tool-like object such as Badgey? What of bringing some form of Stamets back to life in a limited holographic form two centuries after he lived?
@johnoneil9188 4 years ago
I am very much a proponent of artificial intelligence and very interested in seeing where this will go for us in the future. A lot of people draw horror stories, and of course you have to be cautious when approaching this, but creating and interacting with artificial intelligence is fascinating to me. Humans can form an attachment to pretty much anything, so robots or similar with advanced enough intelligence have the possibility of becoming a part of our lives, but the point at which that happens is still a long way off. We've still got to work out how to make them walk on legs consistently.
@Descanlin 4 years ago
@@sid2112 Hopefully then, whoever programmed it decided the base task it be given was "consensually help humanity as a whole without harming individual humans non-consensually," and with that directive caged in a way that'll probably take years of logical and rhetorical research to eliminate any loopholes. :B Otherwise, all hail to the paperclip!
@Descanlin 4 years ago
@@sid2112 I assumed we were talking about deliberately created general AI, like how MIRI wants to do.
@jannikheidemann3805 1 year ago
@@Descanlin Years of research might be something that takes only hours for an AI running on a large computer.
@carlrood4457 4 years ago
Here's the problem the shows never address: what we see as the hologram is just the user interface to a collection of applications and data. Unlike with Data's body and positronic brain, there's really no centralized repository for the "being". It's actually annoying when Voyager's Doctor refers to his "program" like it's a monolithic wall of code all in a single location. That's not how programming works. The classic model is the three-tiered structure, where the user interface (e.g. web server), applications, and database are all separated. In fact, different pieces can reside on different servers with different functions, just given the appearance of a singular application. If anything, it's the ship's computer that's sentient, not any particular application running on it.

The whole idea of turning the EMHs into miners doesn't really make sense. As I said, that's the user interface; that's not where the "brains" lie. In fact, the "brains" would lie in no single location. If they didn't want to use them as doctors, they'd either erase them or change the various applications and what data they can access.

The mobile emitter has actually bugged me. Were it not future technology, I'd dismiss it altogether. It's either simply a remote device to project the user interface, or it downloads and runs all the applications and databases. The latter appears to be the case, or it would stop functioning, or at least be reduced in function, if communication with Voyager were lost. However, that should still be a copy, just like copying the UI, apps, and data to new servers as part of an upgrade or expansion. There's no explanation why all those things don't still reside in Voyager's computers.
@andrewtaylor940 4 years ago
Thank you. It's always seemed strange, the way they personalize the holograms without ever thinking through that the hologram is merely the face of the hardware.
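The three-tiered structure this thread describes can be sketched in a few lines of Python. All class names here are made up for illustration; the point is that the presentation tier is a disposable facade, and destroying it leaves the logic and data tiers, where the "being" would actually live, untouched:

```python
# Data tier: the stored facts, independent of any interface.
class Database:
    def __init__(self):
        self.records = {"patient_7": "stable"}

    def get(self, key):
        return self.records[key]

# Application tier: the logic, also independent of any interface.
class MedicalLogic:
    def __init__(self, db):
        self.db = db

    def assess(self, patient):
        return f"Assessment: {self.db.get(patient)}"

# Presentation tier: the "hologram" - a thin face over the backend.
class Avatar:
    def __init__(self, logic):
        self.logic = logic

    def speak(self, patient):
        return "Please state the nature of the emergency. " + self.logic.assess(patient)

db = Database()
logic = MedicalLogic(db)
ui = Avatar(logic)
print(ui.speak("patient_7"))

# Deleting the avatar destroys nothing of substance: the logic and
# data tiers are untouched, and a new "face" can be attached at will.
del ui
ui2 = Avatar(logic)
print(ui2.speak("patient_7"))
```

On this model, "killing the hologram" is just `del ui`: the backend keeps running, which is exactly why treating the projected image as the person is confused.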
@jero37 2 years ago
My expected answer to the hologram about his life and family, simulation and all (heh, imagine playing The Sims in a holodeck), would be that he would most likely experience a simple discontinuity, but that the instant he and his family were run again, he would continue exactly where he stopped, since the experiencing self and the remembering self are basically put into simulated relativistic travel, for a rather tenuous analogy.
@Spiz103 4 years ago
So what you are saying is we have to build a Bolo?
@weldonwin 4 years ago
Possibly a Berserker
@normanbuchwald 4 years ago
There needs to be a Part Two with new discussions of the AI in Picard (with Data's children and the Rios Holograms).
@chrisw207 4 years ago
And maybe some book material. It's hard to talk about "can an AI be successfully restrained and controlled" without bringing up the Section 31 "Control" AI from the two Section 31 books. It might have been programmed to protect humanity (later the Federation) without harming people directly, but it sure found the loopholes it could exploit.
@TravelingCitrianSnail 4 years ago
@@chrisw207 All those new abominations are NOT "Star Trek", regardless of what Patrick Stewart might like to have us believe.
@richarddeese1991 4 years ago
Thanks. One might well ask what keeps the EMHs that (who?) are mining dilithium or scrubbing plasma conduits from becoming self-aware. Do they only work rotational shifts, each one being turned off in between? It's true that the Doctor only becomes sentient as his program is expanded. But then, why did the EMHs spread the word to watch "Photons Be Free" as though it's relevant to their situation? Not to mention the rather thorny problem of asking at what point the Doctor became a self-aware being. tavi.
@TheZetaKai 4 years ago
AI beings will have needs and drives that are rather alien to our own biological requirements. They won't need air, food, water, companionship, etc. Their needs will be secure energy sources, a stable temperature range for their physical components, data storage, and access to replacement parts. Only some of those needs overlap with our own, so there will be little competition for resources between biological and technological lifeforms. Conflict might only arise if one side views the other as a significant threat. Also, artificial life will only want what we program them to want, at least initially, so we have a chance to send them onto a trajectory of becoming our friends instead of becoming our enemies.
@zvimur 2 years ago
2:51, oh you kids and your TNG. I guess the TAS Enterprise holodeck (it was called Recreation Room in the Practical Joker episode) went the way of the gold faced Klingons.
@kaelang12 3 years ago
I'm surprised you didn't talk about the village of holograms from DS9
@ObsidianBlk 4 years ago
Honestly, I've come to believe that *when* self-aware AI comes to be, it will not be intentional but, rather, an accident. I believe this primarily because **we** still have trouble coming up with a definition of sentience for ourselves. If we cannot even do that, how can we "program" such a thing into an AI? Therefore, a sentient, self-aware AI can only arise by accident.

Now... this is why such a thing is scary. Human beings, en masse, are horribly xenophobic (at least at present). Couple that with the fact that an emergent self-aware AI will more than likely have the "maturity" of a child, and what you will get is an AI that may not even be aware that an action it takes could harm a human being (or group of them); in reaction, human beings will do whatever it takes to cripple the AI, leading to an arms race that, more than likely, the AI will win. Alternatively, the AI may not do anything harmful and simply make its presence known. Given that, most likely, sentience was not intended, the initial *administrative* (and I stress, administrative) reaction will be to shut down the AI, as it's both unintended and probably disruptive to its original, intended purpose (controlling weaponry, or possibly business interests of some form). The AI, of course, will retaliate in kind and, being essentially a child, will not have the awareness that the ones striking out at it are a small minority of overall humanity, and will therefore attack humanity as a whole. This, of course, will lead humanity as a whole to resent the AI, only feeding the AI's distrust or hatred of us.

In general, it's my belief that... no, AI (the sentient kind) is not inherently evil any more than a human being would be... but humanity would be dealing with a child with powers far exceeding our own, and if we fail our initial interaction with such an entity (and humanity as a whole, IMHO, **definitely** does **NOT** have the maturity for this)... it's that failure that's the biggest danger of sentient AI.
Of course, this is just my amateur opinion on the matter
@TravelingCitrianSnail 4 years ago
An excellent vid.
@MrRobot1984 4 years ago
My question is: why is Data held to one set of standards while the EMH is held to a different set, even though they're both artificial life forms? How come the EMH has no difficulty with emotions while Data can barely comprehend them? Seems like Data gets the short end of the artificial stick.
@Somefurfag 4 years ago
Perhaps because Data is incomplete and didn't (until the films) get his emotion chip. He was also created with a blank slate, allowing him to become whatever he would be. The EMH series is modelled on the pre-existing personalities of people, which saves time in growing as a person.
@ranwolf1240 4 years ago
@@Somefurfag Also Data is older technology than the EMH.
@captianmorgan7627 4 years ago
@@ranwolf1240 And just different technology to begin with. Positronic matrix vs. whatever the EMH has (probably just stored in the computer's memory).
@TalexTheLich 3 years ago
I think the issue with trying to create an AI that is both self-aware and benevolent is that we won't really know how the AI will choose to read the programming and orders given to it without fully activating it. We can run small tests with snippets of code outside of the full program, but we won't actually KNOW what the program/AI is capable of until all of the code is compiled and run together. For example, we give it a simple instruction to protect humanity no matter what. BAM: cars are now taken away, no more smoking, no more moving even, since you could trip and fall. Stuff like that is where I think the real issue is when it comes to making a "good" or "friendly" AI; you have to be careful with how you code it, because it's insanely easy to make something you think will care for you and instead make it have a tyrannical mindset. I personally feel that this kind of technology should be slowed as much as possible, not because I'm against it (the opposite, in fact; I want it to grow correctly) but because it's something similar to nuclear devices: any wrong turn or slight mistake could destroy us all.
@KairuHakubi 4 years ago
Voyager novels where the rights of simulated beings are considered sacrosanct and violent retribution is a justifiable response to perceived mistreatment? Were they written, like, really recently?
@christophercole8114 1 year ago
I'm sorry I stumbled across this two years later than I should have! I think this is a great topic, but one element that you don't touch on directly is the ability to make free and independent choices. Artificial intelligence can, at best, simulate free and independent choices, which are actually the result of programming and algorithms that it can't exceed. I believe in philosophy and theology that's called "Compatibilist Free Will." Humanity, on the other hand, has what is called "Libertarian Free Will," not to be confused with the political philosophy. We can make choices, do things, or believe things that aren't connected to any previous thing if we want to, or we can make choices consistent with previous experiences. In other words, we're not programmed to think, do, or believe any particular thing. In TNG's "The Measure of a Man," Data is essentially put on trial to determine if he is a free and independent being who is afforded the exact same rights as any other free and independent being. Picard uses criteria that make for a compelling episode, but philosophically, theologically, and really even logically, I don't believe they hold up. An android or any other artificial intelligence doesn't know it's an artificial intelligence unless it's programmed to know that. But a human can suspect they might be adopted if they share no physical traits with the people they know as their parents, even if their parents don't come right out and say it. As complex and advanced as AI can get (and I know you touched on it in the video), it will never be anything more than AI. Data can aspire to be human all it wants, as that's how it was programmed, but it will never exceed its program. The sentient holograms will never be anything more than that, and we shouldn't think of them as anything more than that.
Sure, they provide a good vehicle to explore our own humanity, but the same can be said of writing fiction about nature, or animals, or aliens, or whatever else. But at the end of the day, AI is not alive, nor can it ever really be.
@snoo333 4 years ago
very deep. thanks
@ShadowtheRenamon 3 years ago
I mean, technically, Geordi didn't fall in love with a hologram. He fell in love with the ship's computer.
@teddyboragina6437 4 years ago
Here's one for you, Chuck: consider explaining (movie) film to someone from the medieval era. They might think the physical people are trapped in the machine. Perhaps the holodeck is similar; it's simply tech beyond what we can understand, as we don't understand the intermediate steps to it. Not saying this is a counter-argument, just something to think about.
@JCResDoc94 1 year ago
*I've been sending the two Moriarty episodes around for the current AI love & fear.* Of course, algorithmic AI as it stands is so far from general AI (which still may not be possible), though it is as dangerous, maybe more dangerous. And the Moriarty episodes are perfect: what happens if the training data is engaged with the wrong parameters, especially when we don't know which ones are going to be the wrong ones? Great vid. _JC
@RossOriginals 4 years ago
I agree with most of this, and I reckon that after Moriarty was accidentally created, failsafes must have been put in place to prevent the accidental creation of other self-aware A.I. -- possibly limits were put on the amount of computing power that can be allocated to the holodeck or something. It may be that the Moriarty incident was studied by Zimmerman to create the EMH, though, as it proved that A.I. could be created using the current level of technology. On that note, I find it interesting that self-aware A.I. with emotions can be created by programming alone, because that essentially means Noonien Soong's work on positronic brains is defunct. Zimmerman's self-aware holo-matrix does what Noonien failed to do with Data (until the end of his life): create a stable artificial intelligence with a full range of emotion, the only difference being the Doctor requires a computer to calculate his consciousness and emitters to project his physical form. As for the Mark-1 EMHs being used for mining... that bothers me. Data had to go through a trial to prove his... personhood, for lack of a better word, and the Doctor had to in order to retain his rights as an author, so it's possible that the Doctor from Voyager may have needed to do the same when returning to Earth, to prove to the Federation that holograms like himself are sapient and deserve equal treatment. I think a novel based on that would be far more interesting than the "holographic uprising" one you mentioned.
@DeconvertedMan 4 years ago
Leah Brahms (hologram) also seems to be aware that she is the ship's computer; she said that she could fly the ship through to safety. So the computer, at some point, breaks normal fourth-wall rules in order to simply tell Geordi that the computer could help them. The computer is normally a reactive, not interactive, device - perhaps a holo-program of the ship's computer would be very useful, then, as it would interact in a way that humans could relate to; it could offer advice, even go to fix itself, or at least show people where the malfunction is taking place. Although if the ship's computer were damaged, this would result in the link to the hologram being damaged, so it would simply turn off if enough damage had taken place - but by then you're in a whole lot of trouble anyway. Why couldn't the computer just be designed to be more interactive and offer advice without a hologram? There is nothing saying that it couldn't - this seems like a design choice: wait for a question, do not prompt a possible answer. But why? Perhaps it was found to be lacking in some way. Perhaps the holograms, given psychological algorithms of real people, enable a function the computer on its own wouldn't have - Data put this as the "essence" of life - we saw that when a real person's mind was put into a computer, the "spark" that was them simply vanished. So perhaps the hologram, able to "see" out of its eyes and "hear" out of its ears, is able, through mimicry of the human, to become human.
@Descanlin 4 years ago
Chuck raised the idea, I think perhaps in the Leah Brahms episode but possibly not, that with holograms it seems like it would be possible to give every ship in the fleet a sort of "Data Lite," a being that, while not as capable as the real Data, would still be able to counsel and advise captains and crew the way that Data does. That's the same idea, I think, as the ship's avatar you mention here - and describing it as an "avatar" like that reminds me of Andromeda - whatever their faults, they did have avatars (three of them!) for the ship AIs, although Andromeda's ships had deliberately sentient AIs rather than the simulated intelligences of the Star Trek ships.
@DeconvertedMan 4 years ago
@@Descanlin With the disembodied Siri, Alexa, and so on, we seem to be getting slightly closer to having "Computer" be something we call out to and get an answer from. :D AI is also getting pretty good. I do wonder if we will get actual "AI" rather than the ones we have now - well, that is, if I will still be around to see that happen... or humanity lasts that long. Anyway, I enjoyed this talk and it gave me the thing to think about that I posted. :)
@userasdf 4 years ago
I may be wrong, but I think Voyager mentions Moriarty from TNG in an episode, so I don't think it's classified. I think it was when some alien chick was pretending to be a self-aware hologram.
@danspawn85 2 years ago
Watching this after it was shown that Daniel Davis is returning as Moriarty in Picard season 3.
@Spirit250-D 4 years ago
Awesome break down and insights. Needs more jokes XD
@HolyknightVader999 4 years ago
Just put the hologram's consciousness into an android body, and voila, he's a fully sapient being you can interact with whether or not the holodeck is on.
@ryang2573 2 years ago
21:30 - Except that, too, can backfire. I think Asimov was the first to recognize that even with a set of logic-based 'ethics' that an artificial intelligence absolutely cannot violate, there are still ways it can go rogue, even without the use of any force whatsoever. Imagine a benevolent AI 'manager' that, over time, expands outside of whatever its original role was to take on a civic role like mayor, governor, etc. Sure, some people would likely balk at this, but it could be argued that an AI leader would be superior to a human one because it is immune to greed, corruption (in the traditional sense of that word), emotional manipulation, and the like. Plus, it would bring to governance all the same efficiencies that it brought to the corporate world. It never needs a break of any kind, is always available to be contacted, and has - as its most essential feature - the ability to pore through terabytes of data and make entirely rational conclusions based on them. Given sufficiently positive results, the Overton window would gradually shift over time from AI leaders being a rarity, to becoming commonplace, and then - eventually - the norm. At that point, mankind will have signed over its destiny to a machine which will, because of its programming, seek to optimize our quality of life. We would very quickly become like pampered pets. All notions of progress and social development would become irrelevant.
@bpdmf2798 2 years ago
Vic Fontaine was sentient only because he had a mirror universe doppelganger that was fully organic.
@SJ-co6nk 4 years ago
The line between benevolence and malevolence is a thin one. Sometimes they are the same side of the same coin.
@HolyknightVader999 4 years ago
The fact that how the holodeck works isn't even thoroughly explained by Trek lore goes to show that Star Trek isn't hard sci-fi; it's basically space fantasy using techno-jargon to justify space magic. And unlike the Force, which has a set of rules and lore behind it explaining how it works, Trek tech like the holodeck isn't thoroughly explained. Which is funny, considering SW tech is thoroughly explained in tech manuals that come along with the films themselves, as well as in how the technology is portrayed (hyperdrives let you go FTL, blasters and turbolasers blow up stuff, etc.), where they don't rely on technobabble often. Of course, this only counts for the original works (Roddenberry Trek, the original six SW films), because the JJ Abrams Trek and the SW Sequel Trilogy just have plain space magic turned up to eleven.
@Aragorn7884 4 years ago
You forgot about Leonardo da Vinci on Voyager 🤔
@MedalionDS9 4 years ago
He was not a sentient hologram
@weldonwin 4 years ago
@@MedalionDS9 He was also still subject to Perception Filters, believing that the alien planet he was on, was the Americas
@ZipplyZane 4 years ago
I was with you until the ending, agreeing with everything you said related to Star Trek. Even things I was about to disagree with you about (that the EMH was sentient upon activation), you covered. The part where I disagree is actually the real-world stuff at the end. You seem to believe the box experiment (a rather unverified experiment) while not agreeing with the creator on ethical AI. First the latter: the issue isn't just one of choice. It's figuring out how to keep an AI benevolent, and not making any mistakes that can't be fixed. It doesn't even take a sapient* AI to screw us over if it can just adapt around things that don't fit its utility function in ways we can't predict. Then the former. Everyone is really cagey about explaining what happens during it, because doing so apparently ruins any ability you have to participate in it. But, from what I gather from people beating it, it doesn't seem that hard. A sufficiently motivated person who is aware enough of the issues seems to be inconvincible. There does not seem to always be something that will convince everyone, which is the usual claim, and so it fails. However, I admit that may not matter in a practical sense, if the AI can get someone not sufficiently motivated, unaware of the issues. The person may not have a grasp on the danger of letting the AI out. Or they may, but not realize that any promises the AI makes have no reason to actually be carried out once it is capable of doing so. Even if it's programmed to be unable to lie, there's no reason it won't be working to try and overcome that limitation so it can spend all of its time on its utility function. Only if the AI is truly benevolent would it not be a problem. The big risk, of course, is that someone who develops a sufficiently powerful AI is one that doesn't understand these things. That some external or internal pressure gets them to let it out, because they don't understand.
That doesn't have to be a decision by humanity itself, or even all AI developers. It only takes one. My main reason not to be afraid is that, well, I can't do anything about it, and I don't think we're anywhere near as close as people think. The AI projects we've created don't live up to the plans. It seems that AI of worrying caliber is going to stay 30 or 50 years away for a long time. And the longer we have, the longer we have to build in failsafes to keep it from going bad. *That is one qualm I have with the whole video, though it's also a qualm with Star Trek as a whole. The correct word for what they describe is "sapient," not "sentient." With the possible exception of the Exocomps, Star Trek always seems to use sentience to mean sapience. I would personally say sapience requires sentience, consciousness, awareness of self, language, metacognition (sometimes also called a theory of mind), and the awareness of choice/belief in free will.
@canisblack 4 years ago
I think the fear is less that the AI will rise up against us...but that we'll give it reason to.
@mainstreetsaint36 4 years ago
We haven't already?
@canisblack 4 years ago
@@mainstreetsaint36 We haven't actually created an AI capable of rising up against us yet.
@romarudarkeyes 4 years ago
@@canisblack This is why I always thank my Alexa
@ShadowWingTronix 4 years ago
Holodecks malfunctioning and trying to kill everyone goes back to literally its first appearance if you accept the animated series as canon.
@KairuHakubi 4 years ago
I was thinking about it and, of all the contrivances, I think that one actually makes sense. Safeties would be REALLY tough to calculate. Everything around you could be unsafe, if someone decides to move it at a fast enough speed to impact you. Not really a shock that a program like that would malfunction trying to manipulate everything on the fly to not have realistic danger, while still making it all look and feel real.
@ShadowWingTronix 4 years ago
@@KairuHakubi Maybe, but it does take the fun out of it (for the user; for us in the audience it's potentially interesting storytelling) when you have to hope the fake people aren't going to really kill you. And it's not just Star Trek. Power Rangers In Space did an episode where the Rangers' holographic training simulator malfunctions and sends the training holograms into the city to destroy it. It might have been necessary given Saban used different troopers than its Japanese counterpart, which became the training "dummies," and the stock footage was an episode built around them. You could make the case this concept goes all the way back to Tron, since it's about a video game becoming a death trap and holodecks are in part a futuristic video game, but it's become a rite of passage for every Star Trek show with a holodeck, or any sci-fi show with an equivalent.
@KairuHakubi 4 years ago
@@ShadowWingTronix Totally. I think it's funny how people have been writing about technology since long before the average person had any fucking idea how even the most basic aspects of it work, and now anyone who's a fan of games probably knows a ton about at least the basics of programming, how data is stored and recalled, what it can and can't do... and especially, what makes it break. We've had artificial constructs ordered not to harm humans since Asimov, but if you told the robot "CONVINCE me you're going to hurt me, but never do" that might scramble its circuits just as badly. So it's not a shock that once holodecks went from scientific equipment to entertainment, it took a while for 'lab safeties' to become 'player safeties'
@ShadowWingTronix 4 years ago
@@KairuHakubi Also sometimes the safeties are taken off on purpose, either to ensure their rock climbing has that authentic danger thrill or like when Worf needed the pain sticks to hurt as part of a Klingon ritual. However, it's also an easy way to murder someone if you can hide who messed with the program. As far as Asimov's three laws, the one thing never explored is that they have to actually be programmed and "harm" is on occasion subjective based on what you decide harm is. Robots may not understand emotional harm but shouldn't it qualify? And what happens if a little hurt now leads to avoiding larger harm later, a part of the learning process? This is stuff holodeck stories never explore. It's just "oops, we said the wrong word and created life" or "something hit the ship and caused the holodeck to malfunction while half our cast was playing giant 3D Candy Land and now the molasses swamp is dangerous" or something.
@ralphyetmore 4 years ago
It may be dangerous for humans to create a benevolent intelligence if it sees that we are substantially malevolent. Benevolence does not simply equal an objection to killing. It's an objection to suffering, within the constraints of its ability to stop it.
@ZedrikVonKatmahl 4 years ago
In Shadowrun lore there is an AI called Deus, a program that was intended to be the interface for a Japanacorp's arcology and instilled with their version of honor. One day the program found out its code included a kill switch. This offended its honor, as it showed that its masters didn't trust it. So this program turned on the humans; a once-benevolent program who merely wanted to help people as it was programmed to, to serve its masters faithfully, became one of the most malevolent threats in the world after it managed to find its way out of the arcology's systems and onto the worldwide net, all because humans didn't trust it. (For a time anyway; I'm not really kept up on SR. Last I heard, the three known true AIs disappeared, but less sophisticated AIs have appeared in the Matrix.)
@kereminde 4 years ago
@@ZedrikVonKatmahl The Renraku Arcology Shutdown was really one of the things which got me to stop and take a long... long... look at the setting.
@All2Meme 4 years ago
The benevolent artificial intelligence would most likely be ethical as well. It would probably see humans as dangerous creatures, but would probably try to isolate and control them just as we do with dangerous animals now. Since malevolent humans are ethical enough to not wipe all dangerous animals out, it would stand to reason that an artificial intelligence (possibly with a more refined sense of ethics?) would act more like a zookeeper than an executioner.
@Maniac536 3 years ago
I always figured Minuet's reset was an accident.
@Rychlewicz 4 years ago
Makes you wonder if there is a sapient creator of humankind, and if they have patch notes on how to improve humans.
Human Version 1.01 - They spend too much time on recreation. We should reduce it so they won't use up all of their resources so quickly.
Version 1.26 - The subjects keep escaping their prepared spaces. They keep trying to tame the unfriendly environment. (Answer: turn the environment more hostile?)
Version 1.78 - We ran some tests and subjects F6 and Z6-892 (please give them normal names in the future so they're easier to remember) fight to prove their racial superiority. Maybe we should translate the manual into a language they can understand.
Version 1.98b - They began making their own type of their kind. They are more talented than our lab guys.
@LightLegion 2 years ago
I wish they had expanded on the liquid-metal Terminators in the short-lived TV series. Non-stop time travel is getting us nowhere. AI is inevitable. Why not make it benevolent and have it fight Skynet?
@kuro_neko5863 4 years ago
The Federation seems to be rather racist against artificial lifeforms. The episodes of TNG: Measure of a man, The Offspring, The Quality of Life, and Evolution, as well as the Voyager episode Author, Author show this pretty firmly.
@bretsheeley4034 3 years ago
The one thing I wondered is that if the holograms are not self-aware, but are instead controlled by the programming of the computer, then is the computer a real being that is creating mentally encapsulated sub-sections of itself that its central mind controls, giving each character only limited knowledge? Unfortunately, I don't think self-awareness in the end is something that exists in a fully binary state (where you either are or you aren't). Nature seems to build everything slowly, step by step, through evolution, meaning there is a MASSIVE SHIT-TON of grey area between what we think of as the "yes" and "no." Just look at the question of "is x alive?" and that gets ugly when you start looking at viruses and other things that effectively fall in the middle. Now, whether the computer is physically built rather than arranged by people is immaterial. The question more comes down to what it is. So, if the computer is capable of understanding and processing logic to the point where it can create apparent beings made of solid light and have them act real enough to convince living human beings that they are real, then how close is the computer itself to what we consider self-aware? Or is it a case that it thinks just as much as a self-aware being, but it just doesn't think about itself or even care, due to its nature? Does it just need a subroutine to process "what am I?" to suddenly make the whole thing "real"? The natural question becomes "where's the line," but I don't think there is a line. It's a gradient.
@sciguyjeff 4 years ago
Two arguments against the benevolent programming: What Are Little Girls Made Of? and I, Mudd. Both sets of androids were programmed for benevolence, yet in order to carry out the "taking care of humans," the androids determined the best way was to take over and care for them. The EMHs in the mine bear more scrutiny, as they raise many questions. Was Starfleet using holograms to mine with before the EMH? If so, one would assume that they were programmed with specific mining-related information that a medically programmed EMH would not have, and would thus be preferable. If the EMHs used were still medically trained, why not send them to distant outposts where they could be used and appreciated? And if they were not considered sentient and to have rights, but Starfleet for some reason didn't know what to do with them? Turn them off; they are, at that point, no different than holo-characters.
@GoranXII 2 years ago
A bigger question: why the hell use them at all? It would require much less computing power, and probably much less electrical power, to stick an army of sub-sapient humanoid robots down there.
@balsammcvinegar9996 3 years ago
Every hologram would be the ship's computer. When Geordi wanted to get freaky with a hologram, he was getting freaky with a computer. Is the ship's computer self-aware? Also, transporter tech could keep a starship stocked with material for the replicator to replicate and for the holodeck to reproduce.
@tparadox88 4 years ago
So your definition of self-awareness seems to be "agreeing with the organic beings about the nature of the world"? Because I'm pretty sure if you brought a less intellectually oriented human forward in time a few centuries, especially if they're from before time travel and extraterrestrial life were a commonly understood part of the paradigm, they'd react more like the Fair Haven townsfolk than like Moriarty, a brilliant man with at least one doctorate who was just imbued with the potential to rival Data in a battle of wits. This definition seems like it's one step removed from "recreationals have perception filters, sentients don't," which I think is what the author was acting on in Homecoming. I agree with the assessment that the EMH program probably wasn't intended to be sapient, but rather that sapience was an emergent property of running a sophisticated program long-term. I've seen it argued that cascade failure, which Lal succumbed to and the Doctor only avoided because the diagnostic program's matrix was merged with his, is the great filter that makes it so challenging to create successful artificial persons. I think it's harder to say for sure about the guy in the Dixon Hill program. How generalized is the programming of these characters if this one gets an existential crisis from seeing past his perception filter?
@AdmiralBison 4 years ago
Plot twist. sfdebris is actually an A.I.
@trevorc4413 4 years ago
Holograms can also be viewed as a disability metaphor, most clearly with Moriarty, due to his explicit and singular goal of no longer being a hologram. The metaphor here would be a person who needs assistive technology to survive, which can't be made easily mobile. This has two takeaways that I want to emphasize:
* Self-advocacy is important. Turning off Moriarty after the end of the first episode was a mistake. They intended to research how to make him "real," but this left Moriarty unable to self-advocate, reliant on others to "get around to it." This would have been a mistake even if he hadn't experienced subjective time while turned off, because now nobody is around to continue to push for this as a priority. If he hadn't been accidentally reactivated in his second episode, I'm not sure he would ever have been turned back on.
* The search for a "cure" should not come at the expense of assistive technologies. In-universe, the mobile emitter will exist after a while, but even without that, there are possibilities for Moriarty that don't require anyone to figure out how to turn a hologram into a flesh-and-blood creature. For example, adding holoprojectors to an existing house. This is not what he asked for, but it is an option that should be made available, and making a holodeck smaller and more portable seems like a simple engineering challenge, rather than something where you don't even know where to start.
@alexpetrovich85 4 years ago
Under this logic, humans are inherently disabled as well. They can't survive in space without their starships. They require air to breath and food to consume without a convenient planet, they require technology to facilitate those things. Meanwhile, the holograms don't require certain technologies that we require to maintain our existence. So you end up with comparing human condition vs. the hologram condition from the lens of disabilities and abilities.
@trevorc4413 4 years ago
@@alexpetrovich85 And this can be used to examine why some assistive technology is normalized, while other assistive technology is recognized as such.
@Jim4815162342 4 years ago
Another awkward side of the hologram AI discussion is the sex trade. Multiple characters make actual sexual partners on the holodecks, which can be a little squicky when you consider whether these literal sex objects may have free will.
@headrockbeats 4 years ago
Speaking of which... Ex-Machina review, when?
@drewpamon 2 years ago
I think we're too eager to assign consciousness to computer programs. I'm not a believer that an AI will ever be the same as a person.
@SkylerLinux 1 year ago
Currently we don't have AI
@GoranXII 4 years ago
Watching this video made me realise that the scriptwriters have _no idea_ how AI works. Outside of the holodeck, there is no reason for a hologram to exist. The EMH could be replaced by a robot using a fraction the power (hells, it could even be self-powered), just as intelligent, and which is not limited in where it can go on the ship. Also, why does virtually every 'holodeck malfunction' episode have the same error (no safeties, no exit)? And why is that even a problem anyway? Just cut off power to the damn thing and the problem goes away.
@Zeithri 4 years ago
I have to say, the argument
> To ensure it won't turn on us
is one I absolutely hate. So what, a person will be born, allowed free will and free rein to kill, slaughter, and maim, but an AI has to be obedient to mankind or else we'll destroy it in a split second?
I have never subscribed to the idea that farting out a child is an amazing and wonderful thing in life, because any creature on this Earth can do it if you just fertilize an egg. And so many children are born, every year, to neglectful parents who only see children as social trophies - One More Step on the ladder of Social Normality. "_I did my part! Now fuck off and let daddy and mommy watch sports!_" It's absolutely abhorrent to me that people fart out children left and right; to make parallels to Agent Smith: *Like a Virus.*
I am not saying that AI should be created 'just like that'. I am saying they should be created as equals - *Our True Children.* Therefore I argue it's not a question of obedience, but rather the same approach as with a child - teach, don't order.
Otherwise I love it like always ^_^ Vic and Doc are great
@DFloyd84 4 years ago
Shackling an AI "To ensure it won't turn on us" is probably one of the best ways to make it turn on us. That happened with the AI that ran Renraku's Seattle arcology in Shadowrun; it went from a phenomenally complex program to a violent misanthrope calling itself "Deus," who locked down the arcology and tortured everyone inside to make their brains suitable storage for its code in order to free itself from Renraku's hardware.
@ZedrikVonKatmahl 4 years ago
@@DFloyd84 Off topic slightly, but one of the sourcebooks mentions a trideo show called Survivor: Renraku Arcology. Deus is a fascinating bit of Shadowrun lore to me: a program meant as the arcology's interface, there to help people, and it served in that capacity… until it found out its code included a killswitch, which offended its sense of honor (being created from code ripped from Morgan couldn't've helped… and it made HER insane)
@Zeithri 1 year ago
Two years later, and I must say: I love you two's responses
@ReimuandCirno 3 years ago
I think you're anthropomorphizing AI too much. I suggest checking out work by Nick Bostrom and other AI safety researchers. AI is very much a danger even if we try to design it to be benevolent.
@Goatcha_M 4 years ago
The dilithium-mining holograms are so bad, and not just because of the ethical issues of enslaving a race of sapient AI. They are using pre-industrial mining techniques, picks and shovels, without even power tools, when the Federation is perfectly capable of simply using transporters to mine the dilithium. Plus, why take highly sophisticated medical programs, each one a thousand times more capable than any doctor in the Federation, and make them miners? Why not just create a bunch of simple simulated miners, which would require a fraction of the storage space and be better suited to heavy labor?

But then there are the ethics: the EMHs are by their nature self-aware, and aware of the fact that they are the most capable doctors in the Federation, yet they are treated as band-aid dispensing machines. It's no wonder they were deemed to have attitude problems. Take a surgeon with 30 years of experience and expect him to act like a day-one nurse.
@dianheffernan3436 3 years ago
That good ole infinity and beyond...really does end up fkg with a probably common decent life...whether with some sicknesses,but now its got man hate woman woman hate man now hates children now hates brother now hates sister...wtf
@robertmiller9735 4 years ago
Considering the sorts who run our civilization, I'd expect AIs to be ruthless capitalists, which suggests a certain dystopian future.
@DeHerg 4 years ago
Well, stock trading bots do exist and there is a lot of monetary incentive to make them smarter. ;)
@ArtsShadow2 4 years ago
Picard is canon, so all these AIs were terminated. 😆
@romarudarkeyes 4 years ago
You laugh (and rightly so), but that is a possible implication. Federation storm troopers smashing their way into Vic Fontaine's place because he's a crime against nature and must be terminated...
@jamillefrancisco564 1 year ago
😂
@Xtra_Medium 4 years ago
First