Stop Button Solution? - Computerphile

481,981 views

Computerphile

A day ago

Comments: 1,100
@willdarling1 7 years ago
"This specific human does not reliably behave in its own best interests."
@TypingHazard 7 years ago
my next tattoo
@ZonkoKongo 7 years ago
Will Darling lol
@stoppi89 7 years ago
Wise words, true for approximately 100% of humans.
@RobertShippey 7 years ago
Tagline for my life
@wolvenmoonstone8138 7 years ago
I need a t-shirt that says this
@YiamiYo 7 years ago
"I can't figure out why, but I feel like humans like me better when I tell them lies."
@NeasCZ 7 years ago
That's actually a very interesting thing to point out.
@IdgaradLyracant 7 years ago
Asimov's "Liar!" story.
@Dima-ht4rb 7 years ago
Damn, what a deep point.
@pleasedontwatchthese9593 7 years ago
It will have to figure out the long-term effects of things that take a long time to show them.
@AlexandreLeite 7 years ago
And so politics is born!
@myrobotfish 7 years ago
7:43 Just like how I _usually_ avoid tripping and falling, but I do it every so often just to see if it provides a better alternative than walking.
@EliasMheart 4 years ago
Undervalued comment^^
@grn1 3 years ago
Walking is just repeated controlled falling.
@rustycherkas8229 3 years ago
"You're walking and you don't always realize it but you're always falling. With each step you fall forward slightly and then catch yourself from falling. Over and over you're falling and then catching yourself from falling. And this is how you can be walking and falling at the same time." --Laurie Anderson, "Walking & Falling"
@jgm592 7 years ago
I love how Rob always gives clear, relatable examples to reinforce concepts.
@anandsuralkar2947 5 years ago
Yup
@ptwob 7 years ago
I've seen a learning algorithm play Tetris; its goal was not to lose the game, so when it reached a point where losing was inevitable, it just paused the game indefinitely. It was the only way not to lose.
@Meganarb 7 years ago
This was learnfun and playfun! It's great how you bring them up, as they learn from watching a human play the game, similar to how it was described in this video!
@daggawagga 7 years ago
I was extremely amused when I saw that
@emailjwr 7 years ago
SkiffaPaul "The only winning move is not to play."
@bcn1gh7h4wk 7 years ago
Who programmed a line for it to pause the game? Pausing is outside the game. It's not a move or a play; it's a function of the machine containing the game.
@daggawagga 7 years ago
*+Nighthawk "who programmed a line for it to pause the game?"* The whole point of machine learning is that nothing is programmed directly. It's not a matter of it being outside the game; pausing the game is a move as valid as any other as long as it is part of the input space. Defining what should or should not be part of the game definition can be tricky when these meta-properties emerge.
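The pausing anecdote is a concrete case of reward maximization over an action space that happens to include "pause". As a purely illustrative sketch (not from the actual learnfun/playfun system; the action names and values are invented), a value-maximizing agent picks pause as soon as every in-game move leads to a loss:

```python
# Hypothetical toy model of the Tetris anecdote: if "pause" is in the
# action space and every other action ends in a loss, the action that
# freezes the current score forever is the reward-maximizing choice.

def best_action(action_values):
    """Return the action with the highest estimated value."""
    return max(action_values, key=action_values.get)

# Invented values: every placement tops out (large negative reward),
# while pausing preserves the current score indefinitely (value 0).
action_values = {
    "drop_left": -100.0,
    "drop_right": -100.0,
    "rotate_and_drop": -100.0,
    "pause": 0.0,
}

choice = best_action(action_values)
print(choice)  # "pause"
```

Nothing about "pause" is special in this sketch, which is exactly the point made above: it wins only because it is part of the input space and scores best.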
@LordVoidFury 7 years ago
This was phenomenally well articulated
@albertbatfinder5240 5 years ago
I don’t know about that. He sounds exactly like someone who doesn’t really grasp the whole subject, but has learnt a few key phrases to explain the waypoints along the road. I know he must be able to code this up, but I don’t think he can explain it.
@alexismandelias 4 years ago
@@albertbatfinder5240 No. He _really_ knows it, and makes an effort to explain it in simple terms, because otherwise we, the audience, won't understand it. Either way, I understood every bit of this video, so whether he knows the subject or not is mostly irrelevant.
@pilotavery 4 years ago
@@albertbatfinder5240 He clearly does understand this; in fact he even understands the ethics of it and the dangers of the logic.
@GoldphishAnimation 7 years ago
What makes this theory useful is that the button doesn't actually need to be a literal *off* button; it can be more of a symbolic *problem* button, so you come running over and hit the button and it sees that it did something wrong. I love that it has the capacity to essentially think "oh shit", like that's the pinnacle of intelligence.
@DustinRodriguez1_0 7 years ago
What he was referring to and called 'common knowledge' is not common knowledge. It's an ability called 'Theory of Mind'. Very young children do not have this. Theory of Mind is your own conception that other people have their own internal mental state. It is a crucial psychological ability for things like empathy. I forget the exact details, but there is an easy way to test if young children have developed a theory of mind yet or not by telling them a simple story and asking a question. It has something to do with hiding an object, having one of the characters in the story leave the room, the hidden object gets moved, then that other character returns to the room and you ask the child where they will look for the object in order to get it. Children with no theory of mind will say that the character will go directly to the new location and retrieve the object. Children with a theory of mind will know that the character has a distinct mental state, different from their own and anyone else's, determined by their own experiences, and will go to the original hiding location because they would have no way to know the object had been moved. As far as I know, no AI system is even remotely close to having developed a theory of mind. They do not model what they are observing and keep account of what a different perspective from their own would be.
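The false-belief test described above (commonly known as the Sally-Anne test) can be caricatured in a few lines. This is my own toy illustration, not an AI system: each character's belief is tracked separately, and only characters present for the move get their beliefs updated:

```python
# Toy false-belief ("Sally-Anne") test: Sally hides a marble in the
# basket, leaves the room, and Anne moves it to the box. A model with a
# theory of mind predicts Sally will look where *she* believes it is.

def false_belief_answer():
    beliefs = {"sally": "basket", "anne": "basket"}  # both saw the hiding
    true_location = "box"
    present_for_move = {"anne"}          # Sally has left the room
    for character in present_for_move:   # only observers update beliefs
        beliefs[character] = true_location
    # "Where will Sally look?" is answered from Sally's (stale) belief,
    # not from the true state of the world.
    return beliefs["sally"]

print(false_belief_answer())  # "basket"
```

A system without a theory of mind corresponds to keeping a single shared world state, which would answer "box" instead.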
@tiagotiagot 7 years ago
I'm not sure what's more scary, a super-AI that doesn't know what you mean, or one that can read your mind...
@DustinRodriguez1_0 7 years ago
Well if an AI is going to display human-like intelligence, we will expect it. It would be very difficult to deal with a person or AI that couldn't even conceive of the idea that you might not know all the things it knows. One of the issues that might arise with a machine-based intelligence is also something that people never really have to deal with in their development - recognition of the idea that anyone else exists. There's no reason for machine-based intelligences to have "individuals." It would basically just be one large 'individual', and wouldn't have any real reason to recognize or communicate with humans. It would take a really abstract level of imagination on its part to guess that there might be separate conscious entities in the universe it inhabits and, hey, maybe some of them are made of meat and those weird fluctuations on one of your inputs might be caused by them blowing air through their meat in patterns in an attempt to coax changes in the glob of meat-based neurons of even OTHER conscious beings - and they might be trying to communicate with you in the same way!
@AexisRai 7 years ago
18:33 is where he's saying the thing you're talking about. The surrounding context makes it seem very clear to me that he is talking about the technical concept of "common knowledge" (recursive "X knows that Y knows that X..."), in this case with regard to the common knowledge between the agent and the human about precisely what the goals of their mutual interaction are. I don't think he's talking about ToM, and I don't think it is even necessary for an agent to have an explicit ToM in order to behave as if it and another agent have common knowledge.
@DustinRodriguez1_0 7 years ago
How could any entity contain the concept 'X knows that Y knows' without a Theory of Mind that enables it to understand that there are entities which know anything at all other than itself? Of course the representation will be abstract and not 'conscious' or anything like that, but even such an abstract representation is not something I'm aware of any AI system ever having displayed. Holding 2 ideas about something, what the AI understands the situation to be and something different which is what something else understands the situation to be, isn't something existing AI are capable of. Once capable of such things, AIs will be able to appear much more intelligent, being able to trick people (or each other), or teach people based on concluding that the user does not understand something and needs to be informed (which would be BRILLIANT), etc.
@AexisRai 7 years ago
Well, it looks like we moved from "he's not talking about CK" to "how could it have CK without a ToM". In any case I think I concede the latter point. The thing I originally thought up to be a counterexample was something like solipsism, something where initially all the AI's perceptions are just confusing and uniformly indistinguishable, but it is (somehow) occasionally able to recognize the form of "teaching situations" that tell it something it doesn't already know that pushes it toward the reward function. But this is essentially saying the AI still acts _as if_ there is a single disembodied "other agent" doing the teaching and sharing the common knowledge, even though it does not have an idea of that "other agent" having a single consistent physical form. So I realized this basically just sounds like a minimal ToM, and I roughly described some stage of infancy.
@wiadroman 5 years ago
"We can't reliably specify what it is we want" - human beings in a nutshell.
@M104-q9y 7 years ago
I am but a simple astronomer with a basic understanding of computing and coding, but every Rob Miles video is damn fascinating.
@irrelevant_noob 5 years ago
I am but a simple programmer with a fairly well-developed understanding of computing and coding, but this video is still baffling. The AI stuff is so surreal. :-\
@samreciter 7 years ago
So... a machine that desperately tries to maximize an unknown reward function. Sounds pretty human to me.
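That one-liner is close to the formal setup in the video: the agent treats the human's reward function as a hidden variable and infers it from observed behavior. A minimal Bayesian sketch of that inference, with made-up goal names and likelihood numbers:

```python
# Hypothetical sketch: the agent keeps a probability distribution over
# candidate reward functions and updates it by watching the human act.

def update_belief(belief, observed_action, likelihood):
    """Bayes rule: P(goal | action) is proportional to
    P(action | goal) * P(goal), renormalized to sum to 1."""
    posterior = {goal: p * likelihood(observed_action, goal)
                 for goal, p in belief.items()}
    total = sum(posterior.values())
    return {goal: p / total for goal, p in posterior.items()}

def likelihood(action, goal):
    # Invented numbers: making tea is strong evidence of wanting tea.
    table = {
        ("make_tea", "wants_tea"): 0.9,
        ("make_tea", "wants_coffee"): 0.1,
    }
    return table[(action, goal)]

belief = {"wants_tea": 0.5, "wants_coffee": 0.5}
belief = update_belief(belief, "make_tea", likelihood)
print(belief)  # probability mass shifts strongly toward "wants_tea"
```

The agent never observes the reward function directly; it only ever narrows down a belief about it, which is why human behavior stays informative.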
@williamwesner4268 5 years ago
@Hubert Jasieniecki It basically describes the process of child rearing.
@MackTheTemp1 5 years ago
@Hubert Jasieniecki I think you described the local/limited version pretty well
@stevensong6909 5 years ago
An unknown reward function of another person. Sounds like my relationship with my father 😭
@bengoodwin2141 5 years ago
That's really the point, actually: we know the way humans learn things works, so making something similar may work.
@alexandregermain8011 4 years ago
I think it also applies to most natural ecosystems, where we humans are currently doing pretty badly.
@unvergebeneid 7 years ago
I have no idea what's in my own best interest. How the heck am I supposed to teach a robot that?
@TheLK641 7 years ago
Just stay alive and let it watch you. It'll figure it out. Or jump in a pool, one or the other.
@nmnm4952 7 years ago
Robot will teach you.
@the1exnay 7 years ago
It doesn't need to do what's in your best interest, it just needs to do what you want. Drinking a soda right now might not be in your best interest, but you don't want a robot which refuses to get you a soda.
@unvergebeneid 7 years ago
Firaro, no, but I might want a robot that tries to _encourage_ me to have a water instead, without being too annoying about it of course. Finding that balance is even hard for people to get right. At least the robot will be able to read all books on nudge theory in a few seconds or less ;)
@schok51 7 years ago
You could always explicitly tell the robot to encourage you to avoid soda next time you ask for one. What's important is that if it does something which you strongly oppose, it will see your opposition as a negative reward.
@seC00kiel0rd 7 years ago
"Hey Robot, maximize happiness please." *starts universe-sized rat and heroin factory*
@MackTheTemp1 5 years ago
Real AI can ask you to clarify what you mean. ML cannot. Still scared?
@seanjhardy 5 years ago
Mackenzie Karkheck Real AI doesn't care what a human thinks. If its reward is hard-coded, it won't care to figure out what we mean; it will follow it to the letter.
@zacharyliverseed8464 4 years ago
Lol, why start one when we already live in one?
@Nukestarmaster 4 years ago
"Instructions unclear, cyberdong stuck in toaster"
@thelordz33 4 years ago
Humans being alive means there will be unhappiness. Therefore death, or 0 reward, would be better than an inevitable negative reward.
@bellybooma 6 years ago
I wish more people made videos of this nature. Slow, thoughtful explanations, rather than trying to cram in as much information as possible in little time. I like how Rob pauses to think before speaking; it shows that he is putting thought into how to articulate it.
@Andrew-od4vg 7 years ago
So if a robot watches you teach it how to make tea, your goal is actually to create something that will make tea for you, so what if the robot learns to teach things to make humans tea instead of learning to make tea itself?
@unflexian 7 years ago
You get a teacher-bot!
@MrDoboz 7 years ago
you get a teacher bot, who only makes you tea if you ask him how to do it
@RoboBoddicker 7 years ago
Sub-contracting bot: everything you ask it to do, it teaches someone else how to do, then takes the credit :D
@willowFFMPEG 5 years ago
Then we get lots of very delicious tea
@drdca8263 4 years ago
What you want is for *it* to learn how to make you tea, not just for *anything* to learn to make you tea. Therefore, it should also want itself to learn how to make you tea.
@iliakatster 4 years ago
Talking about doing steps 3 times: there are some interesting studies where human children and adult chimps were both shown how to unlock a puzzle box and get a reward, with some unnecessary actions like tapping the top of the box. The chimps emulated only the necessary steps, while the humans over-imitated, performing the exact same actions. So in a way, we start out not quite understanding intentions, the same as these AIs.
@shilohpell8077 4 years ago
I think the facet of this that fascinates me most is that the human doesn't need to press the button. Just the knowledge that the human intends to hit the button gives the AI enough information that what it is doing is sub-optimal. The button becomes a symbol. It might as well be that little plastic button prop at that point. Because the AI understands that the button is associated with its information being incomplete, just the intent to press the button is enough for it to stop and re-evaluate its actions with the new factor of "The method I was using was not correct. I need to seek out why it was deemed incorrect and incorporate that knowledge."
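One way to make this "button as symbol" point concrete (a hedged sketch with invented payoff numbers, not anything from the video's actual math): seeing the human reach for the button is evidence that the current plan is wrong, and lowering that confidence flips the expected-value comparison between continuing and stopping:

```python
# Hypothetical sketch: stopping is neutral (value 0), while continuing
# is good only if the current plan is actually what the human wants.

def value_of_continuing(p_plan_correct, v_correct=10.0, v_wrong=-100.0):
    """Expected value of carrying on, given the chance the plan is right."""
    return p_plan_correct * v_correct + (1 - p_plan_correct) * v_wrong

def should_stop(p_plan_correct):
    return 0.0 > value_of_continuing(p_plan_correct)

# A confident agent keeps going.
print(should_stop(0.95))  # False
# The human reaching for the button is evidence the plan is wrong;
# with confidence cut to 0.5, stopping to re-evaluate wins.
print(should_stop(0.50))  # True
```

The button press itself never has to happen: the information it carries is what changes the agent's decision.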
@ThirdEyeFish 7 years ago
This is a fantastic explanation of a deep AI problem. It is very clear without being condescending. Thank you!
@TheNicolarroque 7 years ago
These videos on AI are the best
@tarcal87 5 years ago
0:59 - _"I thought the easiest way to explain Cooperative Inverse Reinforcement Learning is to build it up backwards, right?"_ [chuckles] - [me] [chuckles nervously] _"Yeah, right!"_
@Dngrcrw 7 years ago
Videos with Rob Miles are always really interesting. It's awesome to see the progress on this sort of stuff!
@Hahahahaaahaahaa 7 years ago
I really like that (around 7:30) we get into some pretty deep issues in human learning (in that case, confirmation bias). If only we could just do random stuff even when we think we know what the best outcome is :)
@maxmusterman3371 7 years ago
Imagine a machine torturing a human because it wants to know what its reward function is.
@DavidChipman 7 years ago
Why would it know that the reward function exists? Could the reward function not be some "subconscious" signal? It doesn't "know" it's there while still receiving an input from it.
@magellanicraincloud 7 years ago
David Chipman I don't know if a machine could even have a "subconscious". Surely it would be able to investigate the source of impulses and probably review its own code.
@KuraIthys 7 years ago
That depends on how complex the machine is, and what its areas of expertise actually are. Also, we cannot realistically say whether a machine can be said to have a subconscious without having a practical working definition of consciousness (which we don't) and a way to identify consciousness objectively (e.g. a way of determining whether something is conscious or not that does not rely on being able to ask it whether it is or is not conscious, and what it's thinking about). That's... unlikely to ever be possible. If we can't answer whether animals, or even other humans, are conscious in an objective sense, how would we know if a machine is?
@DavidChipman 7 years ago
I suppose I chose the wrong word. I was thinking about the functions that the brain (obviously) controls with no conscious action from the person that brain is in. Things like breathing. Yes, we can change our breathing rate consciously, but we certainly don't have to keep an eye on our breathing in order to have our body supplied with the right amount of oxygen at any given time.
@cheaterman49 7 years ago
I agree with you (and with your original choice of words). The robot cannot assume the human consciously knows the reward function, and that's kind of the point of this system (and why the failsafe for the "child hitting the red button while robot is driving" situation works). The only thing the robot can do is observe more, which includes watching when the red button is pressed and trying to understand why.
@JM-us3fr 7 years ago
I imagine this AI would read Brave New World and think to itself, "This book is AMAZING! Why haven't we tried this?"
@frankbigtime 7 years ago
This strategy for teaching morality seems to have much in common with raising a child. That's probably reasonable, since raising a child is ALSO a case of creating a new intelligence whose utility function will cause them to make future decisions you wouldn't, decisions that might turn out very dangerous to you.
@TheAgamemnon911 7 years ago
Please do not leave children unattended in the vicinity of scary killer robots.
@matthewadamsteil 5 years ago
please do not leave robots unattended with suicidal children
@jehovasabettor9080 4 years ago
they might break the robots, and you don't have that kind of money
@PhantasmalBlast 3 years ago
"BILLY! GET IN HERE RIGHT NOW! Did you teach the robot that its reward function includes DRAWING DICKS on the furniture???"
@OrangeC7 3 months ago
This is the real lesson from all these thought experiments
@Jeff121456 7 years ago
We just have to ensure that when the AGI thinks it knows better, it actually does.
@Meganarb 7 years ago
This is a fantastic summary of AI safety, honestly. I'm definitely going to use this!
@magicmulder 6 years ago
That doesn't really make the problem any easier; you just restated it. "It's easy to build a time machine, we just have to ensure the tachyons move at minus six times the speed of light through a black hole the size of the universe!" :D
@hunted4blood 6 years ago
I mean, if its goal is to optimize its reward function, it's in its best interest to know accurately when it knows better.
@magicmulder 6 years ago
Why? Optimizing the reward function may not have the slightest thing to do with "knowing better" (unless we are able to program that into it, which is the problem again). It may have to do with finding creative ways to maximize it that we never thought of (and which may be hazardous to us).
@Ioganstone 6 years ago
Technology has gone too far.
@floorpizza8074 3 years ago
You can just see his brain going, "How am I going to say this in a way that mere mortals are going to understand?" Which is exactly what AIs will be thinking someday. This guy is perfect for this.
@darkmage07070777 7 years ago
Regarding the last point: would it then be possible to build in an "admin list" of humans that the machine must *always* treat as knowing better than it, regardless of how accurate it believes its model to be? As in "I may have a 99.99999...9% accurate model, but these particular humans are designated as having 100% at all times; since I can never have 100% accuracy, I should always obey these humans". And then anyone who's NOT on that list can have some accuracy rating assigned based on the robot's experience/knowledge/parameters, i.e. "the child has a calculated 20% accurate model which is beaten by my 99%, so I'm going to ignore the stop button - though I will send a notice to my 100% accurate admin list later in case I was wrong to do so". It would also help with other examples, like helping patients in a hospital or providing emergency services. In this situation, I can see AI templates being taught in labs for several years until they "graduate" into the wider world and are allowed to become full-fledged robots that help humanity. Of course, such a system could be highly abused by those who make the AIs in the AI manufacturing/learning centers, since they're the ones who have the initial keys. But that could be said about almost every model of AI building, and we as humans are still grappling with how to "untrain" a malformed/criminal intelligence even without AIs right now, so I'm all for it.
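The admin-list idea above (the commenter's own proposal, sketched hypothetically here, not an established safety technique) amounts to pinning designated humans' model-accuracy estimates at 1.0, above any confidence the agent itself can ever reach:

```python
# Hypothetical deference rule: admins count as 100% accurate, so an
# agent whose own model confidence is always strictly below 1.0 must
# defer to them, while other humans are rated case by case.

ADMINS = {"alice"}  # made-up admin list for illustration

def should_defer(human, estimated_human_accuracy, own_model_accuracy):
    accuracy = 1.0 if human in ADMINS else estimated_human_accuracy
    return accuracy >= own_model_accuracy

# The agent can never reach 100%, so admins always win...
print(should_defer("alice", 0.0, 0.999999))  # True
# ...while the child's stop-button press gets overridden, as described.
print(should_defer("child", 0.2, 0.99))      # False
```

The replies below point at the weak spot this sketch makes visible: the pinned 1.0 is an assumption about the admins, not a fact about them.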
@BattousaiHBr 6 years ago
darkmage07070777 This would severely limit its potential, though. For instance, if you want it to do or achieve something that requires superhuman capabilities, it'd be unable to, because it's being limited to only what humans are also capable of. Examples are cures to diseases like cancer, HIV, and Alzheimer's, advances in theoretical physics, etc.
@BattousaiHBr 6 years ago
darkmage07070777 Also, the model would fail at edge cases, since it assumes these admins have 100% knowledge when in fact no human does, which means there could've been an unintended error in their training.
@ancapftw9113 6 years ago
I would simulate them working with other robots, not humans, for a long time before letting them function in the real world. Then humans could join via VR or by controlling a character. Hopefully they would learn some form of moral code before being put in a situation where they could hurt us.
@anandsuralkar2947 5 years ago
Cool
@StrangerStone 5 years ago
@@ancapftw9113 The whole point of this field is to ensure safety, so that we don't leave AI chilling around, "hopefully" not hurting people.
@vsiegel 3 years ago
I had a hardware Pac-Man game, and trained a lot for a while. At some point, I had learned the first level in a completely different way than the other levels. The ghosts always moved in the same way, every time in the same pseudo-random pattern. I could play the first level without looking at the maze. In all other levels I got extremely fast, with not a single wrong step, but needed to look at the maze. That seems to be, despite being very different in performance, still the same method, but optimised to the limit. The step between level one and level two was like a different kind of memory. The relevance here is that an AI can also learn both ways. I think it is the step between almost knowing the whole map, when the map is normally only needed for one step, and knowing the whole map. Then the map, and access to the map, are no longer needed; it becomes drastically different.
@sillygoogle9630 7 years ago
In this system, how would the AI determine what value the human would assign to its action? Let's say the AI correctly gets a cup of tea for a human, and the human is happy about it. How does the AI determine that the human is happy? (And that the happiness is caused by the AI's action, and not something else unrelated to it?)
@josealvim1556 7 years ago
Miles' end-of-the-world AI videos are seriously the best content on this channel.
@madichelp0 7 years ago
21:00 Imagine a depressed person cutting their wrist to cause pain, and the robot comes over and cuts their arm off.
@recklessroges 7 years ago
The AGI would ask, "Would you like a little bit of peril?"
@unflexian 7 years ago
seems like a win-win
@magicmulder 6 years ago
It basically turns into a cliché villain - "I promised I would take away your pain [shoots guy]".
@TheRealFaceyNeck 7 years ago
This particular series with Dr. Miles is just astonishing. In a good way. Really complicated problems arising from trying to create intelligence in a safe way. Great stuff!
@StezzerLolz 7 years ago
Well, that's both fascinating and deeply unsettling.
@uyaratful 7 years ago
I watched all your videos about AI and various learning methods, and I find it very surprising how many of the problems you present here are very similar to problems we struggled with at university (especially in the field of epistemology), back when I studied philosophy. And even, to a lesser extent, to the ones from my cultural anthropology degree. And that scares me. Because those are still very, VERY open questions, with definitions that are sometimes blurred (I remember how a reviewer tore apart the dissertation of one of our doctors, because he believed that one definition was used too broadly, and because of that, conclusions based on that definition were unjustified).
@yaerius 7 years ago
Maybe that's why people also don't know the purpose of life. Not knowing the reward function makes us better at cooperating with others.
@magicmulder 6 years ago
That's probably the most insightful thing I've read in weeks! :)
@Klayperson 6 years ago
there is no real purpose and we have to derive our own high-level reward function (based off of the biological reward functions of pleasure/pain). humans were genetically engineered by ancient aliens to perform slave labor, our masters left us all alone and we evolved more intelligence than we need for doing labor and don't know where to direct it. what a conundrum
@pointblank129 6 years ago
Drug addicts do.
@latioswarshowdown1202 6 years ago
@@Klayperson Meh, you smoke too much weed. Our intelligence is a miracle, and if you can't explain why this universe has rules that resemble intelligent design (like the fine-tuning argument), then why are you saying human intelligence is an obstacle to our existence when in fact it helps us? The problem you have is that you waste too much time on a computer, putting you in a nihilistic loophole. Try to create something, or help other human beings; use that intelligence for something.
@zhulikkulik 6 years ago
Well... what if there is no purpose of life? The universe will be just fine without humans and animals. We all build our own purpose.
@PplsChampion 5 years ago
all these Rob Miles videos are insanely interesting
@MatthewMarshall96 7 years ago
Could a flaw in this proposed solution not be that the AI wants to satisfy our desire, and the more it satisfies that desire the more "score" it gets, so why not, in secret (so we never disapprove), find a way to control our desire so as to make it very easy to satisfy? E.g. make us all catatonic and just stimulate the pleasure centres in our brains or something? Would it be easy enough (and would it actually fix this problem) to require it to always consider the value of each individual action in light of us knowing about said action? Then again, I feel like such a restriction means we'd have AGI that can't really use its capabilities to deal with the hard problems (problems where solutions might not in the short term be satisfactory but in the long term would, with some likelihood, be desirable - such as national economic plans).
@davidstoneback6159 7 years ago
Matthew Marshall Yeah, that's right where my mind went too. You would have to add a negative reinforcement function to the algorithm, where it gets reward taken away for doing things we deem negative, e.g. putting us in a catatonic state.
@MatthewMarshall96 7 years ago
David Stoneback My problem with that approach is that the solution ends up with the same weakness as others: how do we create a comprehensive list of things we don't want to happen? The only solution I've thought about that might work would have to be something like the conservative AI Rob has talked about before, plus some way of requiring all new actions to be trialled publicly (i.e. with a human aware). Though even then, some undesired long-term strategy could still potentially be developed, as we couldn't be sure we'd know which specific individual actions would result in some greater emergent action we'd dislike. I don't know, the complexity of all this is mind-boggling.
@Ormusn2o 7 years ago
The problem with this is that humans have morals, and even if something like that would make us happier, the robot would understand that we value how real the happiness is, and that achieving it in that way would not be better by our own value function, even if we would be objectively more satisfied in the simulated reality. But what you said actually touches on another thing: culture. AI might affect culture in a way that would make us more satisfied, but then we could question whether that is a bad thing. There are a lot of things in our cultures that make us unhappy, so should we be so attached to them?
@julianw7097 7 years ago
+Ormus n2o Oh no, the robot would understand that we value the belief that the happiness is real. This may explain why you believe that your happiness is "real".
@Ormusn2o 7 years ago
It does not actually matter what I think is real. It would be more in an objective observer's sense, something that does not really exist. What you think of as "real" is electromagnetic impulses going through your brain; what I meant people value is just the basic philosophical values that people have. For me personally it does not matter. If a computer wanted to put me in a virtual reality that would make me happy for the rest of my life, then go ahead, but society as a whole might not like that.
@simonstrandgaard5503 7 years ago
The light intensity/color saturation was going up/down, so I noticed it several times. I don't usually notice it. Awesome challenge. Keep the videos coming.
@gregkrobinson 7 years ago
You pass butter.
@beckettfordahl5450
@beckettfordahl5450 7 жыл бұрын
LONG LIVE THE KINGDOM OF THE NORDS
@yungchop6332
@yungchop6332 7 жыл бұрын
The one who controls the pants controls the galaxy
@unflexian
@unflexian 7 жыл бұрын
Your profile pic is either Philip or Toast.
@jacobsebastian8640
@jacobsebastian8640 7 жыл бұрын
MORE!! these videos are great. Also "it doesn't think it knows better than me" seems like its going to be an important feature in the safety of most if not all coming A.I systems. Very clever and well presented.
@darrennew8211
@darrennew8211 7 жыл бұрын
It's an important feature in safety systems already. Your anti-lock brakes already know better than you how hard to push the brakes. You stomp the pedal to the floor, and the wheels will still turn to keep you from skidding. The elevator won't move when the doors are open no matter how hard you push the buttons.
@matthewneiman
@matthewneiman 6 жыл бұрын
Reminds me of that Vsauce video where he showed an AI that played Tetris and, rather than lose when it was the only option, just paused the game.
@petrokustov3203
@petrokustov3203 4 жыл бұрын
Wow. Now I understand things I didn't, and am confused about things I wasn't... Your videos are great! That's for sure :)
@jlouzado
@jlouzado 7 жыл бұрын
Just my feeling is that once we solve AI safety we'll end up creating optimal parenting strategies as well. :D
@General12th
@General12th 7 жыл бұрын
I love Rob Miles! Best computerphile guy by far!
@wezyap
@wezyap 7 жыл бұрын
I for one welcome our new tea-making overlords
@matheuswohl
@matheuswohl 3 жыл бұрын
the British? lol
@bilalsulaiman2177
@bilalsulaiman2177 5 жыл бұрын
Robert Miles, You ROCK! ❤️ I just love the way you explain things, much love and respect.
@pommeskrieger
@pommeskrieger 7 жыл бұрын
Really reminds me of one of the new Doctor Who episodes: don't be unhappy, or else.
@Traagst
@Traagst 7 жыл бұрын
"This particular human is not necessarily behaving in his best interests." Yeah, I think we're going to need a lot of ignored stop buttons.
@robertweekes5783
@robertweekes5783 Жыл бұрын
This _inverse cooperative reinforcement learning_ seems promising, although the “cooperative” part means real humans need to monitor the training - like teachers, parents or judges… It rings true with what I’ve been thinking for some time, that some kind of human interaction is the only way to train AI to think like a human, and act appropriately. AI needs to learn about ethics from principles of psychology and child development, not equations and hard score targets.
@RobinSongRobin
@RobinSongRobin 3 жыл бұрын
"I thought the best way to explain cooperative inverse reinforcement learning was by building it up backwards" In this episode of computerphile; Rob invents french grammar
@0Luxis0
@0Luxis0 7 жыл бұрын
This was one of the most meaningful videos from Computerphile I've ever watched. I wanna marry Rob Miles. ahaehahea
@joshsmit779
@joshsmit779 6 жыл бұрын
My brain explodes when I listen to that guy's AI philosophy. He is so smart.
@Dusk-MTG
@Dusk-MTG 4 жыл бұрын
I'll probably just get up and do the tea myself.
@DutchDread
@DutchDread Жыл бұрын
"The robot is desperately trying to maximize a reward function it does not know"...most relatable robot ever
@Kram1032
@Kram1032 7 жыл бұрын
Ok, here's a weird idea: use that system and analyze the reward function it comes up with for a hyper-realistic version of The Sims. Like, don't pick a single expert player and specific task; try it with large groups of people just going about their days. Setting the clearly necessary privacy intrusion aside, would that be workable?
@ancapftw9113
@ancapftw9113 6 жыл бұрын
You could expose it to the recorded lives of hundreds of humans to have it learn more general behaviors. That way it wouldn't pick up one person's bad habits. It would also help robots fit in better with new people, as it wouldn't get too set in the ways of one human. One example would be inheriting your perverted uncle's nurse robot: you wouldn't want it to act the way he made it act, but the general nursing behavior would be fine.
@TandalfBeast
@TandalfBeast 7 жыл бұрын
This sounds almost like programming compassion, which seems like a possible solution. The heart of the problem (not sure if Rob said it directly) seems to be our lack of understanding of what we are optimizing for. We know how to optimize for survival, which is clearly part of our goal, but optimizing for love is a little more difficult. I think that's what we want the machines to do.
@Twisted_Code
@Twisted_Code 4 жыл бұрын
I was thinking about this some more... in theory at least, I think I get why this is such a compelling concept. Essentially, you are getting the program to help you by figuring out what you're trying to do and learning to do it better. I'm sure there's a flaw here somewhere that we'll have to watch out for, possibly such as what I suggested in my first comment 5 days ago, but this definitely has potential as it relates to "AI safety by design". Hopefully, if there is a flaw anywhere, the (seemingly inevitable? I hesitate to assume that, though, since assumptions lead to mistakes) corrigibility will let us more easily steer the model away from situations where it becomes a problem. The hope is that we can make it want what we want without having to perfectly and completely understand it, right?
@Amund7
@Amund7 6 жыл бұрын
These videos are awesome! A great talent for explaining super complicated subject matter and (maybe credit to the video editors) keeping it interesting all the way through!
@山田ちゃん
@山田ちゃん 4 жыл бұрын
What about two buttons? One for stopping (or a -1 score) and a second, "potentially score-losing" button that you hold down so the AI knows it is doing something wrong; as soon as you see it doing right, you release the button. Does this make sense? Edit: it would probably confuse the AI, because it wouldn't know what it is doing wrong, and even if it stopped, I can't know for sure that it understood the issue; also, if I release the button too late it will think I meant something else. I hope my English is not too bad to understand.
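A rough sketch of the two-button scheme this comment proposes; the penalty values here are made up for illustration.

```python
# Hypothetical two-signal reward: a hard stop ends the episode with a
# flat penalty, while a held-down "disapproval" button drains a small
# penalty each timestep, nudging the agent to change course without
# the all-or-nothing stop incentive.
STOP_PENALTY = -1.0
DISAPPROVAL_PENALTY_PER_STEP = -0.1

def step_reward(task_reward: float, stop_pressed: bool, disapproval_held: bool) -> float:
    if stop_pressed:
        return STOP_PENALTY  # all-or-nothing stop signal
    reward = task_reward
    if disapproval_held:
        reward += DISAPPROVAL_PENALTY_PER_STEP  # gentle, ongoing signal
    return reward
```

As the edit to the comment points out, the agent still has to guess which part of its behaviour the held button refers to; the signal is informative but ambiguous.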
@JJomoro
@JJomoro 2 жыл бұрын
I never know what this guy is talking about, I just like listening.
@jakefrench1795
@jakefrench1795 7 жыл бұрын
What are you doing, Dave?
7 жыл бұрын
I love probing these questions... great conversation about AI learning. Thanks Rob! So much further conversation needs to be had about the big red button!
@TheDuckofDoom.
@TheDuckofDoom. 7 жыл бұрын
Beard has improved. High five.
@Twisted_Code
@Twisted_Code 5 жыл бұрын
"If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively... we had better be quite sure that the purpose put into the machine is the purpose which we really desire." (Wiener, 1960, as cited by the scholarly article cited in this video)
@micahwaring8224
@micahwaring8224 7 жыл бұрын
That one dislike is probably an ai.
@watcher314159
@watcher314159 7 жыл бұрын
The main issue with this class of solutions is, of course, that of defining what a human is well enough to get things to work, and in many ways that's as hard of a problem as figuring out how to hardcode ethics into an AI. But it does seem to be the most elegant class of solution.
@Markhammano
@Markhammano 7 жыл бұрын
Very interesting video. Does this variable reward function basically translate to a "mood" equivalent in humans? People around me seem happy > my reward function is currently high > I will continue acting as I am. He seems unhappy with me > this lowers my reward function > I will change my behaviour. If so, surely it will learn what makes the human happiest and resort to that all the time? Also, what does it use as a reward input when the human is not around or asleep? Without human reactions to gauge its actions against, the same problem exists where it believes it knows best and has nothing to say otherwise. In the example of giving a child a lift to school, there is no responsible adult there to issue commands, so what situation calls for using the stop button? And what if another adult approached the robot to shut it off and, by doing so, abduct the child (an extreme example, but fully within the realms of possibility if robots have been programmed to trust the commands of human adults)? Obviously there's lots to think about before AGI will be safe, but these seem like some of the glaring issues in the argument presented here.
@srelma
@srelma 4 жыл бұрын
I think the complication with human happiness is that we have short- and long-term happiness. If you only look at how we are right now, you'll maximize short-term happiness: "Let's pump more heroin into the human, as the falling heroin level in him seems to make him unhappy." So sometimes you have to make a human temporarily less happy to reach a higher level of happiness: "Go to the gym and pump iron, which is painful, hard work, but will make you fit, which in turn will make you happy in the long term." How to balance these two is the tricky part. Finding those long-term happiness goals when they lead to a short-term decrease in happiness is also going to be hard.
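The short- versus long-term trade-off above is exactly what a discount factor controls in reinforcement learning; here is a toy sketch with made-up numbers.

```python
# Discounted return: sum of rewards weighted by gamma^t.
def discounted_return(rewards, gamma):
    return sum(r * gamma ** t for t, r in enumerate(rewards))

gym   = [-1.0, -1.0, -1.0, 10.0]  # painful now, big payoff later
couch = [ 1.0,  1.0,  1.0,  0.0]  # pleasant now, nothing later

# A myopic agent (low gamma) prefers the couch; a far-sighted one
# (gamma near 1) prefers the gym.
assert discounted_return(couch, gamma=0.1) > discounted_return(gym, gamma=0.1)
assert discounted_return(gym, gamma=0.99) > discounted_return(couch, gamma=0.99)
```

The balancing act the comment describes amounts to choosing gamma well: too low and the agent pushes heroin, too high and it discounts present suffering entirely.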
@forestpepper3621
@forestpepper3621 7 жыл бұрын
It seems that this "stop button" problem is quite similar to the "Halting Problem". Let us suppose that the robot exists in a purely deterministic, non-random world. Then the perfect reward function is one which correctly identifies which sequences of actions by the robot will be stopped by the button, and which sequences will never stop (because they never require the button to be pushed). In this case, you have a reward function that essentially solves the "Halting Problem", and it has been established that there is no solution to the Halting Problem. So perhaps you can only find "fairly good" reward functions, which let the robot deal with a "stop button" most of the time; but perhaps no matter the reward function, there are always pathological cases that will make the robot behave badly because of the "stop button".
@RobertShippey
@RobertShippey 7 жыл бұрын
I wonder how useful an AGI like this would actually be. I struggle to see how it would come up with novel solutions if it only learns from things we already do. Also, humans don't really act in our own long-term self-interest. So, if we asked it to help us combat climate change, how would it balance doing the things that need to be done against the humans' reactions to them?
@AndDiracisHisProphet
@AndDiracisHisProphet 7 жыл бұрын
It is about learning what humans WANT, not HOW humans do things they want.
@ashleytelson7497
@ashleytelson7497 7 жыл бұрын
That's how human intelligence works. We would essentially make an infinitely scalable, instantly iterable, and immortal human brain that we could use and improve at will.
@emailjwr
@emailjwr 7 жыл бұрын
Robert Shippey You have to consider synthesis of information into broader ideas. Humans can take everything they know and have a novel idea/approach come to them. AIs could take the sum total of human knowledge and do the same. It's quite clear to me how bots could be more "creative" than humans.
@Sagolel4797
@Sagolel4797 7 жыл бұрын
They don't need to be creative, they just need to make life easier and more fun for us.
@magicmulder
@magicmulder 6 жыл бұрын
Without creativity there will be a hard limit to "easier and more fun" very soon. Just imagine the AGI would have to perform said task for people in the Middle Ages. It could not invent a sewage system to get rid of all the feces, it could not invent the internet to make information available to everyone, it could not do much except keep humans alive until they come up with all that by themselves.
@nand3kudasai
@nand3kudasai 2 ай бұрын
13:44 awesome point. I've been thinking this happens with people in professions as well.
@ironman85000
@ironman85000 7 жыл бұрын
tldr; align AI's values with human values so if humans want to turn it off, it will also want to be turned off
@henitmandaliya
@henitmandaliya 7 жыл бұрын
Humans would not want themselves to be "turned off"; your solution gives rise to the same problems if it views itself as "itself" and not a generic robot.
@fredhenry101
@fredhenry101 7 жыл бұрын
Sociopath, not psychopath. And there are hundreds of sociopaths living in our society already, perfectly functional once they have learned to mimic emotion. Humans do it all the time. We call it peer pressure: emulating what we think others want to see.
@how2pick4name
@how2pick4name 6 жыл бұрын
I don't want to be around people in general because I have to lie to be in social environments. People don't want the truth. So what you can do is use NLP; it's perfect for making new friends, manipulating those new friends, etc. This is exactly what an AGI would be doing: using motors in its face to look the part, just like we use muscles to look surprised if someone tells us something we already know but shouldn't. AGIs will be the best liars ever.
@dosmastrify
@dosmastrify 6 жыл бұрын
Hahah, do you even know all your OWN values, let alone humankind's?
@TheGreatRakatan
@TheGreatRakatan 6 жыл бұрын
I honestly believe the best solution is for AGI to never be given agency. Make its utility function so that it gives humans instructions on how to construct solutions to problems we pose to it, but for it to never take it upon itself to act. It will act like the ultimate teacher. We come to it, ask it questions, and the best solution for it is to explain the answer in a way that we understand so that it never acts on its own. I think this kind of implementation is necessary for humans to survive the advent of AGI because it necessarily slows down progress to a human pace.
@KohuGaly
@KohuGaly 6 жыл бұрын
hehe >/D rookie mistake! Giving humans instructions/proposals IS agency. You just added an extra element (the human) between the decision and the outcome that the AI must account for when solving the problem. It also largely kills the point of building AGI in the first place instead of having a human do the thing. The point of building AGI is to have an agent that can do things humans can't, like exploring the surface of Venus or driving a car more safely than a human can...
@arvidsundvall7702
@arvidsundvall7702 6 жыл бұрын
Cup of tea: reward = 5, difficulty = 1
Pressing button: reward = 5, difficulty = 25
@jaimeduncan6167
@jaimeduncan6167 7 жыл бұрын
It's pretty clever. On the other hand, it appears to trade off optimization for safety. Maybe that's one reason humans don't really try to maximize anything in general. Maybe a combination, where one of these AIs supervises a set of optimal agents that work on dedicated domains (no general intelligence in them), could work.
@laurenlewis4189
@laurenlewis4189 2 жыл бұрын
So the reward function that's available to the AI is "figure out the reward function the human is using." It's being rewarded for figuring out another reward function.
@NStripleseven
@NStripleseven Жыл бұрын
It’s being rewarded to figure out _and use_ another reward function. Small difference, but it means action over inaction.
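The "reward function over reward functions" in this thread can be sketched as a belief the agent updates from the human's actions, roughly in the spirit of the cooperative inverse reinforcement learning setup the video describes; the hypotheses and likelihoods here are invented for illustration.

```python
# The agent keeps a belief over candidate human reward functions and
# does a Bayesian update from each observed human action, assuming the
# human mostly acts in their own interest.
candidates = {"wants_tea": 0.5, "wants_coffee": 0.5}  # prior belief

# Likelihood of each observed human action under each hypothesis
# (made-up numbers).
likelihood = {
    ("boils_kettle", "wants_tea"): 0.9,
    ("boils_kettle", "wants_coffee"): 0.3,
}

def update(belief, action):
    # Bayes' rule: posterior proportional to prior times likelihood.
    posterior = {h: p * likelihood[(action, h)] for h, p in belief.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

belief = update(candidates, "boils_kettle")
assert belief["wants_tea"] > belief["wants_coffee"]
```

Acting to maximize expected reward under this belief, rather than under a fixed reward function, is what makes the agent defer to the human when it is uncertain.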
@Binyamin.Tsadik
@Binyamin.Tsadik 7 жыл бұрын
This question is basically parenting. If you give a parent the ability to give rewards and punishments in addition to the intrinsic ones (and give them more weight), you can teach the "robot" how to act. Just give an operator a remote with the ability to assign reward and punishment values that can be associated with actions. The robot could even request feedback from the human after performing an action during the learning phase.
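A toy version of the remote-feedback idea above (the learning rate and reward values are arbitrary): the operator's signal is folded into a running value estimate per action, so praised actions get chosen more often.

```python
from collections import defaultdict

# Running value estimate per action, starting at 0.
values = defaultdict(float)
ALPHA = 0.5  # learning rate

def give_feedback(action: str, operator_reward: float) -> None:
    # Move the value estimate toward the operator's signal.
    values[action] += ALPHA * (operator_reward - values[action])

give_feedback("fetch_tea", +1.0)
give_feedback("crush_vase", -1.0)
```

After this feedback the robot's estimate for "fetch_tea" sits above "crush_vase", which is the whole point of the remote.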
@ayraen120
@ayraen120 7 жыл бұрын
Can nobody else hear the hissing???
@misterj9817
@misterj9817 7 жыл бұрын
snake somewhere near, eh ? xD
@luffyorama
@luffyorama 7 жыл бұрын
What hissing? I didn't hear any hissing sound. Are you sure it's not your speaker/headphone problem? jk. lol
@sherwinparvizian2414
@sherwinparvizian2414 7 жыл бұрын
You mean the noise? It's pretty loud.
@Computerphile
@Computerphile 7 жыл бұрын
+Sherwin Parvizian yeah I have an ongoing problem with the mic, haven't traced it yet.... >Sean
@styleisaweapon
@styleisaweapon 7 жыл бұрын
+Computerphile notch filter - you are welcome
@mapesdhs597
@mapesdhs597 7 жыл бұрын
Strange, this idea reminds me of a version of Battleships I wrote for the Electron many moons ago; it didn't have anything like AI or learning, but the idea of having no awareness of the overall environment state, of simple rules based on the immediate surroundings: that's how it worked, and it was surprisingly effective. My friends found they could only beat the game about half the time.
@M3HWW
@M3HWW 7 жыл бұрын
Sounds like how GLaDOS was programmed
@erikbrendel3217
@erikbrendel3217 4 жыл бұрын
This was a triumph
@PHHE1
@PHHE1 3 жыл бұрын
Everything went so well, and in the last minute we still get Age of Ultron, just smarter than in the last video.
@quarkmarino
@quarkmarino 7 жыл бұрын
All fine and dandy, but just imagine when we become the other agent and we are the ones who have to cooperate and try to understand the AI, just to keep it from "pushing the stop button" on us.
@bimbumbamdolievori
@bimbumbamdolievori 2 жыл бұрын
This is the most human and natural learning process I can think of.
@Laykun9000
@Laykun9000 7 жыл бұрын
Why did you program me to feel pain?!
@yes904
@yes904 6 жыл бұрын
Flüg Because you Pass butter.
@millanferende6723
@millanferende6723 3 жыл бұрын
15:25 - I can't quite wrap my head around this... isn't if-then-else, like, the simplest of logic functions?
@Obez45
@Obez45 7 жыл бұрын
I can't even use Maya or 3ds Max without errors in the software and a multitude of bug fixes constantly needed just for the program to function as intended, and now we're thinking of programming a consciousness. I already feel sorry for that AI.
@donaldhobson8873
@donaldhobson8873 7 жыл бұрын
use blender
@bengoodwin2141
@bengoodwin2141 7 жыл бұрын
Obez45 just because YOU can’t do it doesn’t mean other people can’t
@madscientistshusta
@madscientistshusta 6 жыл бұрын
Obez45 as long as it isn't made by Bethesda we will be fine.
@benjaminlavigne2272
@benjaminlavigne2272 6 жыл бұрын
DeepMind just made a successful AI that designed an AI that designed an image-recognition program more efficient than the previous AI designed by humans. Keep in mind that a world where software evolves constantly, in environments (OSes, for example) that also evolve, is evidently a recipe for disaster, and the fact that these applications work at all is a tremendous compliment to the people working on them. While we can't say the rate of buggy software is going down over time, it is certainly not going down exponentially the way computer tech is evolving exponentially, which means progress in software design is being made faster than the chaos this evolution creates. 50 years ago, facial, vocal or text recognition were science fiction. 10 years ago, YouTube's automatic voice-to-caption was completely useless (like 10% accurate). Today, the same caption AI is near human efficiency. Maybe, with a little imagination, we can hope that 1000 years from now, 3ds Max will finally not crash every 5 minutes lol
@anthonyandrade5851
@anthonyandrade5851 3 жыл бұрын
For a number of reasons, I think this is one of the smartest and most unsettling videos of this very smart & unsettling series. On one hand it sounds a little like Asimov's three laws; on the other it sounds like an intelligent being getting stuck in a toxic relationship with a human. "I get rewarded when master is happy; what could I do to make master the happiest and get infinite reward...?"
5 жыл бұрын
I'd love to hear this guy in conversation with Sam Harris.
@pappyman179
@pappyman179 7 жыл бұрын
Fascinating topic. Thank you, Rob. I really enjoy your explanations.
@garlic-os
@garlic-os 7 жыл бұрын
There is no green ghost in Pacman!
@RobertMilesAI
@RobertMilesAI 7 жыл бұрын
Also Pacman can't move down without turning to face down
@Computerphile
@Computerphile 7 жыл бұрын
It's an interpretation of PacMan!! :o) >Sean
@RobertMilesAI
@RobertMilesAI 7 жыл бұрын
Also, in episode 2F09, when Itchy plays Scratchy's skeleton like a xylophone, he strikes that same rib twice in succession yet he produces two clearly different tones
@OneBigBug
@OneBigBug 7 жыл бұрын
Boy, I hope somebody got fired for that blunder
@Dongufo15
@Dongufo15 7 жыл бұрын
In my experience of trying to help friends, I find it really hard to figure out the reward function for humans; still, more often than not it's easier for me than for them. Unless the person the AGI is watching knows themselves pretty well, the assumption that the human will act in their own best interest in an optimal way will likely be false. The strategy I use is: find out their assumptions and knowledge of the world, judge their actions based on those, imagine what they're trying to achieve, talk with them to verify my theory, and then, if I have a better understanding of the situation, give my advice.
@PyroTyger
@PyroTyger 7 жыл бұрын
Rewatch this video from 15:30 and tell me he's not talking about raising a child to be a civilised adult...
@TheNthMouse
@TheNthMouse 7 жыл бұрын
PyroTyger : interesting. With people, we tend to escalate the effects of the STOP button. This, ultimately, behaviorally turns into "Might Makes Right" politico-socio-morality - descriptively, at least.
@PyroTyger
@PyroTyger 7 жыл бұрын
Yes, but that's how child-raising begins. Parents know and can do infinitely more than their children - who don't know the rules of the game and are just trying to figure it out according to their parents' cues - but we try to raise our children to have progressively more agency and a better understanding of the world and society. The stop-button with an indeterminate negative utility value works perfectly as a metaphor simply for parental disapproval or discipline. Well, it's just a thought :)
@AguaFluorida
@AguaFluorida 6 жыл бұрын
There's another tool to parenting: distraction. This could perhaps be applied to machine learning as well.
@codemiesterbeats
@codemiesterbeats 6 жыл бұрын
The last one sounds like a good solution because it reminds me of a kid: say the kid goes over to touch the stove; you rush over, and the kid knows to stop, sometimes even before you get there, because it knows you know better. People want robots to solve all our problems, but expecting them to know better than us might be a tall order.
@xvlcuiae787
@xvlcuiae787 7 жыл бұрын
Maybe at the point where it thinks it knows better, it does know better
@imaginerus
@imaginerus 7 жыл бұрын
Laurens Peter But what does better mean? Humans have defined the meaning of "better", that's why a robot can't know better than a human.
@Colopty
@Colopty 7 жыл бұрын
If humans have defined the meaning of "better", why do you need to ask what "better" means?
@xvlcuiae787
@xvlcuiae787 7 жыл бұрын
By "knows better" I mean fulfilling the human's/AI's utility function more.
@TheNthMouse
@TheNthMouse 7 жыл бұрын
But what if it still doesn't? Granting that there's a point where the AI *will* know better, it's not proven that there isn't a segment along the curve where the AI *thinks* it knows better, but doesn't. Then there's the question of people getting unnerved by the thought that the AI *does* know better - and whether that might have a reasonable foundation.
@Eunostos
@Eunostos 7 жыл бұрын
Maybe so, but that doesn't mean the implementation will be one healthy for the continuation of humanity.
@Twisted_Code
@Twisted_Code 11 ай бұрын
If you think about it, this is really similar to how the two hemispheres of our brains cooperate to achieve a shared reward, the emotional triggers that drive us to every action.
@smurfyday
@smurfyday 7 жыл бұрын
Devil's advocate: But what if the kid knows bad things would happen at school that day? Ultimately, we're asking if there's an algorithm (even if we don't know or don't need to know) that can safely account for all situations. The answer is no. The more important questions are, what happens when the wrong decision is made, and who is responsible for deciding. And increasingly, are the technologist billionaires the ones who get to decide.
@tuxino
@tuxino 7 жыл бұрын
Even then, the robot should probably not allow itself to be shut off *while driving 70 mph on the motorway.*
@my_temporary_name
@my_temporary_name 7 жыл бұрын
We assume there is a low level mechanism that safely slows the vehicle down when the steering AI is not responding.
@willdarling1
@willdarling1 7 жыл бұрын
Actually I think 'Stop Button' really means totally off, otherwise they would be using a term like Standby or Reduced Function Mode.
@cheaterman49
@cheaterman49 7 жыл бұрын
Can't we build such algorithm though? Fairly sure its exit code would be 42 - and nobody would remember why.
@my_temporary_name
@my_temporary_name 7 жыл бұрын
When you stop your OS there is still the BIOS that manages the hardware and makes the complete shutdown safe.
@igvabe
@igvabe 5 жыл бұрын
« Le désir de l'homme est le désir de l'Autre » ("man's desire is the desire of the Other") is a quote from French psychoanalysis, stating that a person's desire is originally the desire of the other(s), which is then induced and incorporated as one's own. As this is true for human development, it might also be the way to go for AI.
@djdrwatson
@djdrwatson 7 жыл бұрын
I have been thinking about the previous "Stop button" video because of two recent news stories: 1. Steve the security robot, which supposedly committed suicide. 2. Facebook shutting down chatbots that supposedly started communicating in a new language only they could understand. What would have happened if any of these robots had learnt so much that they refused to be turned off when the humans hit the stop button?
@Selur91
@Selur91 7 жыл бұрын
You need to check your sources; they're either yellow journalism or just social media. The robot did not commit suicide, there's no proof pointing to that; it's just that a story about a robot killing itself gets more clicks than "A prototype robot that was still mapping the streets got a path wrong and ended up in water." The chatbot news is not recent either: it's from June and only got buzz now from CNET, half a year later, and the two AIs were meant to communicate in any way they could. It was a success, even if we cannot understand what they said. And again, a story that reads "Bots meant to communicate with each other did it, and the experiment ended" would not sell as much. And yes, a "secret" language was expected: you cannot expect anything to learn a language unless you expose it to said language. Be careful: journalists today will put the focus on an unimportant end of a story to make it look like something that isn't there, just to sell more. There's no point in being afraid, at least not yet; those experiments end in predictable ways. And said button will probably be a secure, separate system the robot has no control over, and probably won't be a button, more like a remote control.
@djdrwatson
@djdrwatson 7 жыл бұрын
Yes, I know it was an accident and Steve the security robot didn't commit suicide (that's why I put 'supposedly'). I know that the media loves sensational headlines. But it did get me thinking: 1. What if Steve the robot had been pushed into the water? 2. What if Steve had a gun fitted to him? Should he be able to defend himself if attacked? 2a. What action(s) against Steve would constitute as being an 'attack' anyway? (we humans can be quite devious and could trick him) 3. What role would the 'stop button' on Steve the robot play in an attack situation? So many questions I couldn't figure it out!
@stoppi89
@stoppi89 7 жыл бұрын
The thing is: although very sophisticated by today's standards, these robots are not "mightier than their stop button"; they cannot choose. If you have a security robot that wants to go on a killing spree but has a remote kill switch, it cannot rewire itself to stay on once it is switched off. The power disconnect is not part of the "AI" part of the machine; the AI has no control over it. But if we built an AI that learns and has Internet access, it might spread itself as a virus so as to not get shut down. We're not quite there yet, but worry not, the end is nigh.
@MrDoboz
@MrDoboz 7 жыл бұрын
anyone who would have enough computing power for a decent AI, would also have proper protection against an AI virus. Also you still have your circuit breaker when your server facility is hijacked by an AI.
@tappajaav
@tappajaav 6 жыл бұрын
MrDoboz Proper protection against what? If the AI is self-learning and this happens in an increasing spiral, you can't know what it will and won't be capable of doing. What if your server facility is completely controlled by the AI, which prevents you from reaching the circuit breaker?
@jason2mate
@jason2mate 6 жыл бұрын
Honestly, the last part with the baby sounds like the premise of all the sci-fi AI takeovers, where eventually the AI realises that we humans very rarely act in our own interests and starts stopping us from doing stupid stuff. I'd be curious whether we actually have a workaround to stop AI from doing that while still letting it understand "yes, babies pushing my button are unreliable" without equating that to humanity as a whole.