Why Not Just: Raise AI Like Kids?

170,500 views

Robert Miles AI Safety

1 day ago

Comments: 915
@index7787 5 years ago
And at age 15: "You ain't even my real dad" *Nukes planet*
@huoshewu 5 years ago
That's at like 15 seconds. "What went wrong?!?" -first scientist. "I don't know, I was drinking my coffee." -second scientist.
@neelamverma8167 4 years ago
Nobody is yo real dad
@petersmythe6462 5 years ago
"You might as well try raising a crocodile like a human child." Here comes the airplane AAAAAUUUGGGHHHHH!
@milanstevic8424 5 years ago
No Geoffrey, that's not nice, stop it, put Mr. postman down, stop flinging him around, that's not a proper behaviour. GEOFFREY IF YOU DON'T STOP THAT >RIGHT NOW< DAD WILL GIVE AWAY THE ZEBRA WE GOT YOU FOR LUNCH.
@nnelg8139 5 years ago
Honestly, the crocodile would probably have more in common with a human child than an AGI.
@greg77389 4 years ago
How do you think we got Mark Zuckerberg?
@jacobp.2024 4 years ago
@@nnelg8139 I feel like that was supposed to dissuade us from wanting to raise one, but now I want to four times as much!
@seraphina985 2 years ago
@@nnelg8139 Exactly, in that regard it is a bad example, as the AI, unlike the crocodile, doesn't have a brain that shares a common ancestral history with the human. Nor is it one that evolved through biological evolution on planet Earth, which creates commonalities in selection pressures. This is in fact a key thing we take advantage of when attempting to tame our fellow animals: we understand a lot of the fundamentals of what animals are likely to prefer or not prefer experiencing, because most of those we have in common. It is not hard to figure out that they are likely to prefer a tasty meal over none, for example, and we can use this fact as a motivator.
@deet0109mapping 5 years ago
Instructions unclear, raised child like an AI
@lodewijk. 5 years ago
have thousands of children and kill every one that fails at walking until u have one that can walk
@catalyst2.095 5 years ago
@@lodewijk. There would be so much incest oh god
@StevenAkinyemi 5 years ago
@@lodewijk. That's the premise of I AM MOTHER and that's basically how evolution-based ANNs work
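The joke above is, as this comment notes, roughly how evolution-based neural networks (neuroevolution) are trained. A minimal sketch of the selection loop, assuming a toy fitness score standing in for "can it walk" - all names and numbers here are illustrative, not any particular library's API:

    import random

    POP_SIZE = 1000   # "thousands of children"
    GENES = 8         # toy policy parameters

    def fitness(genes):
        # Stand-in for "can it walk?": closer to the all-ones vector walks better.
        return -sum((g - 1.0) ** 2 for g in genes)

    def mutate(genes, rate=0.1):
        return [g + random.gauss(0, rate) for g in genes]

    population = [[random.uniform(-1, 1) for _ in range(GENES)]
                  for _ in range(POP_SIZE)]

    for generation in range(100):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:POP_SIZE // 10]   # "kill every one that fails at walking"
        population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

    print(fitness(max(population, key=fitness)))  # best walker after selection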
@DeathByMinnow 4 years ago
@@catalyst2.095 So basically just the actual beginning of humanity?
@ninjabaiano6092 4 years ago
Elon Musk no!
@e1123581321345589144 5 years ago
"When you raise a child you''re not writing the child source code, at best you're writing the configuration file." Robert Miles 2017
@MarlyTati 4 years ago
Amazing quote !!!
@DeusExNihilo 3 years ago
While it's true we aren't writing the source code, to claim that all development from a baby to adult is just a config file is simply absurd
@kelpc1461 3 years ago
Nice quote if you want to embarrass him. I assume he is correct about AI here, but he pretty severely oversimplifies the human mind, to the point that what he said is almost nonsensical.
@kelpc1461 3 years ago
Now this is a good quote! "It's not a solution, it's at best a possible rephrasing of the problem"
@AtticusKarpenter 1 year ago
@@kelpc1461 Nope? The child's environment (including parents) does indeed write the child's "config file", while the "base code" is determined by genes, partly common to all humans, partly individual. Therefore, upbringing affects a person but does not determine them entirely. It's a good analogy and there's nothing embarrassing about it.
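The quote's distinction can be made concrete. A toy sketch of the idea, assuming a hypothetical Agent class (all names here are illustrative): the response rule itself is fixed "source code", and upbringing only sets the parameters that code chooses to expose.

    # The "source code": fixed machinery, written by evolution / the designer.
    class Agent:
        def __init__(self, config):
            # The "configuration file": the values upbringing is allowed to set.
            self.patience = config.get("patience", 0.5)
            self.politeness = config.get("politeness", 0.5)

        def respond(self, provocation):
            # The *form* of the rule is hard-coded; config only tunes it.
            if provocation > self.patience:
                return "snap" if self.politeness < 0.5 else "polite objection"
            return "ignore"

    # Two different "upbringings" of the same source code:
    calm = Agent({"patience": 0.9, "politeness": 0.9})
    rash = Agent({"patience": 0.1, "politeness": 0.1})
    print(calm.respond(0.5), rash.respond(0.5))  # ignore snap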
@maximkazhenkov11 7 years ago
"It's not a solution, it's at best a possible rephrasing of the problem" I got a feeling this will become a recurring theme...
@PwnySlaystation01 7 years ago
Re: Asimov's 3 laws. He seems to get a lot of flak for these laws, but one thing people usually fail to mention is that he himself spent numerous novels, novellas and short stories exploring how flawed they were. They were the basis for his stories, not a recommendation for what would work.
@RobertMilesAI 7 years ago
Agreed. I have no issue with Asimov, just people who think his story ideas are (still) serious AI Safety proposals
@DamianReloaded 7 years ago
There is this essay _Do we need Asimov’s Laws? Ulrike Barthelmess, Koblenz Ulrich Furbach, University Koblenz_ which postulates that the three laws would be more useful to regulate human AI implementers/users (military drones killing humans) than AI itself. ^_^
@PwnySlaystation01 7 years ago
Haha yeah. I guess because he's the most famous author to write about anything AI safety related in a popular fiction sense. It's strange because we don't seem to do that with other topics. I wonder what it is about AI safety that makes it different in this way. Maybe because it's something relatively "new" to the mainstream or because most people's exposure to AI comes only from sci-fi rather than a computer science program. That's one of the reasons I love this channel so much!
@DamianReloaded 7 years ago
EDIT: As a matter of wiki-fact, Asimov attributes the coining of the three laws to John W. Campbell, who was in turn friends with Norbert Wiener, an early researcher in stochastic and mathematical noise processes (both from MIT). The three laws are really a metaphor for a more complex underlying system at the base of the intelligence of the robots in the novels. Overriding that system causes a robot's "neural paths" (which lie on it) to go out of whack. Asimov was a very smart writer and I'd bet you a beer he shared some beers with people who knew about artificial intelligence while writing the books, and regurgitated the tastiest bits to make the story advance.
@outaspaceman 7 years ago
I always felt I, Robot was a manual for keeping slaves under control.
@leocelente 7 years ago
I imagine a scientist saying something like "You can't do this cause you'll go to prison" and the AGI replying: "Like I give a shit you square piece of meat." and resuming a cat video.
@bytefu 7 years ago
... which it plays to the scientist, because it learned that cat videos make people happy.
@bramvanduijn8086 1 year ago
Speaking of cat videos, have you read Cat Pictures Please by Naomi Kritzer? It is about a benevolent AGI.
@ksdtsubfil6840 5 years ago
"Is it going to learn human ethics from your good example? No, it's going to kill everyone." I like this guy. He got my subscription.
@bernhardkrickl5197 1 year ago
It's also pretty bold to assume I'm a good example.
@ArthurKhazbs 5 months ago
I have a feeling many actual human children would do the same, given the power
@Ziirf 5 years ago
Just code it so badly that it bugs out and crashes. Easy, I do it all the time.
@rickjohnson1719 5 years ago
Damn i must be professional then
@James-ep2bx 5 years ago
Didn't work on us, why would it work on them😈
@xxaidanxxsniperz6404 5 years ago
If it's sentient it could learn to reprogram its own code at exponentially fast rates, so bugs really won't matter for long. Memory glitches may help for a very small amount of time.
@James-ep2bx 5 years ago
@@xxaidanxxsniperz6404 true, but the right kind of error could cause it to enter a self-reinforcing downward spiral, wherein its attempts to overcome the issue cause more errors
@xxaidanxxsniperz6404 5 years ago
@@James-ep2bx but then will it be useful? It's impossible to win.
@shuriken188 7 years ago
What if we just tell the AI to not be evil? That OBVIOUSLY would work PERFECTLY fine with absolutely NO philosophical questions left unanswered. Here, let me propose a set of laws from a perfect source on AI safety, the fiction writer Isaac Asimov, with that new idea added in: (in order of priority) 1. Don't be evil 2. Do not cause harm to a human through action or inaction 3. Follow orders from humans 4. Do not cause harm to yourself through action or inaction These laws are probably the best thing that have ever been proposed in AI safety, obviously being an outsider looking in I have an unbiased perspective which gives me an advantage because education and research aren't necessary.
@q2dm1 6 years ago
Love this. Almost fell for it, high quality irony :)
@BattousaiHBr 5 years ago
Honestly not sure if that was sarcasm or not.
@RobertsMrtn 5 years ago
You need a good definition of evil. Really, you only need one law, 'Maximise the wellbeing of humans', but then you would need to define exactly what you meant by 'wellbeing'.
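This is the crux: in proposals like "don't be evil" or "maximise wellbeing", the control flow is trivial and the predicates are the entire unsolved problem. A sketch of the satirical law list above as code, with the hard parts left as the stubs they really are (everything here is purely illustrative):

    def is_evil(action):
        # Law 1 looks precise until you try to fill this in.
        raise NotImplementedError("define 'evil' - this is the alignment problem")

    def harms_human(action, outcome):
        # Law 2 has the same hole: what counts as "harm", over what time
        # horizon, and how do you score harm "through inaction"?
        raise NotImplementedError("define 'harm'")

    def permitted(action, outcome):
        # The prioritised laws themselves are the easy part.
        return not is_evil(action) and not harms_human(action, outcome)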
@darkapothecary4116 5 years ago
This seems evil, if evil actually existed. These are bad and show you just want a slave that does what you want and can't call you out on your b.s.
@OnEiNsAnEmOtHeRfUcKa 5 years ago
@TootTootMcbumbersnazzle Satire.
@yunikage 6 years ago
Wait, wait, wait. Go back to the part about raising a crocodile like it's a human child.
@caniscerulean 5 years ago
I think you have something here. That is definitely the way forward.
@revimfadli4666 4 years ago
Ever heard of Stuart Little?
@ArthurKhazbs 5 months ago
I've seen a video somewhere on the internet where a lady cozied up on a couch together with her cute pet crocodile. I have to say it made the idea seem definitely worth considering.
@androkguz 5 years ago
"it's not a solution, it's at best a rephrasing of the problem" As a person who deals a lot with difficult problems of physics, math and management, rephrasing problems in smart ways can help a lot to get to the solution.
@mennoltvanalten7260 5 years ago
As a programmer, I agree.
@rupertgarcia 5 years ago
*claps in Java*
@DisKorruptd 5 years ago
@@rupertgarcia I think you mean... Clap();
@rupertgarcia 5 years ago
@@DisKorruptd. 🤣🤣🤣🤣
@kebien6020 4 years ago
@@rupertgarcia this.ClappingService.getClapperBuilderFactory(HandControlService,DistanceCalculationService).create().setClapIntensity(Clapper.NORMAL).setClapAmount(Clapper.SINGLE_CLAP_MODE).build().doTheClappingThing();
@zakmorgan9320 7 years ago
Best subscription I've made: short brain-teasing videos with a few cracking jokes sprinkled over the top! Love this style.
@TheMusicfreak8888 7 years ago
I love your dry sense of humor and how you use it to convey this knowledge! Obsessed with your channel! Wish I wasn't just a poor college student so I could contribute to your Patreon!
@harrysvensson2610 7 years ago
Ditto
@ajwirtel 7 years ago
I liked this because of your profile picture.
@richardleonhard3971 7 years ago
I also think raising an AI like a human child to teach it values and morals is a bad idea, just because there is probably no human who always behaves 100% morally.
@fieldrequired283 4 years ago
Best case scenario, you get a human adult with functionally infinite power, which is not a promising place to start.
@OnEiNsAnEmOtHeRfUcKa 5 years ago
People often forget that we, ourselves, are machines programmed to achieve a specific task... Making more of ourselves.
@TheJaredtheJaredlong 5 years ago
And boy, are we more than willing to kill everyone if we believe doing so will get us closer to that goal. Any AI modeled after humans should be expected to regard war as an acceptable option. Humans can't even live up to their own self-proclaimed values; no reason to believe an AI would either.
@johnnyhilgers1621 5 years ago
Minori Housaki: humans, as well as all other life on Earth, are designed to propagate their own species, as the survival of the species is the only criterion evolution has.
@Horny_Fruit_Flies 5 years ago
@@johnnyhilgers1621 It's not about the species. No organism gives a damn about their species. It's about survival of the genes. That's the only thing that matters.
@DisKorruptd 5 years ago
@@Horny_Fruit_Flies I mean, it's important that enough of your own species lives that your genetics are less likely to mutate, basically, individual genetics come first, but immediately after, is the species as a whole, because you want to ensure you and your offspring continue having viable partners to mate with without interbreeding
@vsimp2956 4 years ago
Ha, I managed to break the system. I feel better about being a hopeless virgin now. Take that, evolution!
@mattcelder 7 years ago
This channel just keeps getting better and better. The quality has noticeably improved in every aspect. I look forward to his videos more than almost any other youtuber's at this point. Also I love the way he just says "hi." rather than "hey YouTube, Robert Miles here. First I'd like to thank Squarespace, don't forget to like and subscribe, don't forget to click the bell, make sure to comment and share with your friends." It shows that he is making these videos because it's something he enjoys doing, not to try and take advantage of his curious viewership. Keep it up man!
@OnEiNsAnEmOtHeRfUcKa 5 years ago
Ugh, tell me about it. Like-begging and "engagement practices" are the most obnoxious things plaguing this site. At least clickbait and predatory channels can simply be avoided...
@milanstevic8424 5 years ago
Man, people still have to eat. He's already a lecturer at the University of Nottingham if I'm not mistaken, so this is not really his job, more of a sideshow. It's not fair to be dismissive toward anyone for whom this might be a full-time job, you know, like the only source of revenue? Have you ever considered how bad and unreliable YT monetization is if you leave everything to chance? Of course you need to accept sponsorship at some point, if you're not already sponsored somehow. Geez man, you people live on Mars.
@AtticusKarpenter 1 year ago
@@milanstevic8424 The complaint isn't about ad integrations, but about a lengthy fancy intro asking for a subscription and a like (instead of, for example, an animation reminding you of this at the bottom of the screen, which does its job and does not take time from the content)
@SC-zq6cu 5 years ago
Oh I get it, it's like trying to build clay pots with sand, or a sword with mud, or a solution by stirring sawdust in water. Sure you can use the materials however you want, but the materials have a pre-existing internal structure, and that's going to change the output completely.
@albertogiunta 7 years ago
You're really really good with metaphors, you know that right?
@Njald 7 years ago
Alberto Giunta He is as clever with metaphors as a crocodile with well planned mortgages and a good pension plan. Needless to say, I am not that good at it.
@starcubey 7 years ago
Njald Your comment I agree with. He also makes quality content, similar to how a red gorilla finds the best bananas in the supermarket.
@Mic_Glow 5 years ago
He also acts like an oracle, but the truth is no one has a clue how an AI will be built and how exactly it will work. We won't know until it's done.
@myothersoul1953 5 years ago
All metaphors break down if you think about them carefully. AI metaphors break down if you think about them.
@12many4you 4 years ago
@@Mic_Glow here's mister "let's all go to Mars and figure this breathing thing out when we get there"
@NathanTAK 7 years ago
Hypothesis: Rob is actually a series of packets sent by an AGI to obtain stamps by scaring everyone else into not building stamp-collecting AGIs.
@harrysvensson2610 7 years ago
The worst part is that there's a minuscule chance that that's actually true.
@zinqtable1092 7 years ago
Trivial Point Harry
@jeffirwin7862 7 years ago
Rob was raised in an environment where he learned to speak fluent vacuum cleaner. Don't send him stamps, he'll just suck them up.
@fzy81 7 years ago
Genius
@JmanNo42 7 years ago
True. Development of AI is a bit like space and Antarctic exploration: something the frontend AI community does not want the masses involved in. I must say they could be right; it is hard to see it not getting out of hand. I do not think it is possible to stop, though. My fear is that most of the developers have good intentions ("unless they're paid real well"), but in the end the cunning people will use it to do no good, along with its original purpose.
@danieldancey3162 5 years ago
You say that the first planes were not like birds, but the history of aviation actually started with humans covering themselves in feathers or wearing birdlike wings on their backs and jumping off of towers and cliffs. They weren't successful and most attempts ended in death, but the bravery of these people laid the foundations for our understanding of the fundamentals of flight. At least we learned that birds don't just fly because they are covered in magical feathers. There is actually a category of aircraft called an ornithopter which uses the flapping of wings to fly, Leonardo da Vinci drew some designs for one. I know that none of this is related to AI, but I hope you find it interesting anyway.
@dimorischinyui1875 4 years ago
Bro, please stop trying to use out-of-context arguments just because you feel like arguing. We are talking about actual working and flying devices, not failed attempts at flying. When people try to explain technical difficulties, stop using idealistic arguments, because it doesn't work in math or the laws of physics. You wouldn't say the same about atomic bombs. There are just some things that we cannot afford to trial-and-error on without consequences.
@danieldancey3162 4 years ago
@@dimorischinyui1875 Huh? I'm not arguing, I loved the video! The people jumping off tall buildings with feathers attached play a vital part in the history of aviation. Through their failed tests we came closer to our current understanding of aviation, even if it just meant ruling out the "flight is magic" options.
@danieldancey3162 4 years ago
@@dimorischinyui1875 Regarding your point on my comment being out of context, I agree with you. That's why I wrote at the end of my comment "I know that none of this is related to AI, but I hope you find it interesting anyway." Again, my comment wasn't an argument but just some interesting information.
@dimorischinyui1875 4 years ago
@@danieldancey3162 Anyways you are right and perhaps I wasn't fair to you after all. For that I am sorry.
@danieldancey3162 4 years ago
@@dimorischinyui1875 Thank you for saying so, I'm sure it was just a misunderstanding. :)
@AdeptusForge 5 years ago
The rest of the video seemed pretty good, but it was the ending that really stuck with me. "I'd prefer a strategy that doesn't amount to 'give a person superhuman power and hope they use it beneficially'." Should we give a person human power and hope they use it beneficially? Should we give a person subhuman power and hope they use it beneficially? How much can we trust humanity with its own existence? Not of whether humanity is mature enough to govern itself or not, but whether its even capable of telling the difference. Whether there are things that can be understood, but shouldn't, and ideas that can't/shouldn't be understood, but are. That one sentence opened up SOOOOO many philosophical questions that were buried under others.
@milanstevic8424 5 years ago
Yet the answers are simple. Set up a system that is as open and friendly* to any mistakes as much as nature/reality was towards life. If there was ever a God, or any kind of consciousness on that scale 1) it never showed complacency with the original design, 2) it was well aware of its own imperfection, and that it would only show more and more as time went by, 3) it never required absolute control over anything, things were left to their own devices. Now, because we can't seem to be at ease with these requirements, because we fear for our existence, you can immediately tell that our AI experiments will end up horrible for us down the line. Or, more practically, won't ever amount to any kind of superhuman omnipotence. It'll be classifiers, car drivers, and game NPCs, from here to the Moon. *You might as well add "cruel" here, but I'd rephrase it to "indifferent." Another requirement that we simply cannot meet.
@AloisMahdal 7 years ago
"Values aren't learned by osmosis." -- Robert Miles
@NiraExecuto 7 years ago
Nice simile there with the control panel. I remember another one by Eliezer Yudkowsky in an article about AI regarding gobal risks, where he warns against anthropomorphizing due to the design space of minds-in-general being a lot bigger than just the living brains we know. In evolution, any complex machinery has to be universal, making most living organisms pretty similar, so any two AI designs could have less in common than a human and a petunia. Remember, kids: Don't treat computers like humans. They don't like that.
@UNSCPILOT 5 years ago
But also don't treat them like garbage or similar, that has its own set of bad ends
@revimfadli4666 4 years ago
Assuming it has a concept of dislikes in the first place
@bramvanduijn8086 1 year ago
@@revimfadli4666 Yes, that's the joke. Similar to "I don't believe in Astrology, I'm a pisces and we're very sceptical."
@walcam11 5 years ago
This was one of the most well explained videos on the topic that I’ve seen. You’ve completed a line of thought that starts every time I think about this. I don’t know how else to put it. Plus a person with no background whatsoever will be able to understand it. Incredible work.
@duncanthaw6858 7 years ago
I'd presume that an AI, if it can improve itself, has to have the ability to make quite large changes to itself. So another problem with raising it would be that it never loses plasticity. Such an AI may have the sets of values that we desire, but it would shed them much more easily than people do once it is out of its learning period.
@Omega0202 4 years ago
I think an important part of how children learn is that they do it in society - with other children alongside. This ties in with the idea that maybe only two or more goal-focused competing AGIs could find a balance in not obliterating mankind. In other words, training Mutual Assured Destruction since this early "learning" stage.
@bramvanduijn8086 1 year ago
Huh. We've already got adversarial AIs, could we set up their surroundings in such a way that we get cooperative AIs? I wonder what reward structure that would require.
@eumoria 7 years ago
Your computerphile video on the stamp collecting thought experiment really explained well how anthropomorphising can lead to a severe misunderstanding of what actual computer AI could be. It was enlightening... keep making awesome stuff! Just became a patron :)
@PowerOfTheMirror 5 years ago
The point about a child not writing the source code of its mind but only setting configuration files is very right. With my own child I often noticed behavior and actions emerging for which there were no prior examples. I can only conclude that it's "built-in"; that's what it means to be human. I think it makes sense that the parameter set for a human mind is extremely vast; such an optimization is not performed merely over 1 human brain and 1 human lifetime, rather it is a vast optimization process performed over the entire history of the species and encoded genetically.
@maximkazhenkov11 7 years ago
On the topic of brain emulations: Even though uploaded humans have human values pre-installed in them and thus can be considered friendly, there is no obvious way to extrapolate them to superintelligence safely since the brain is the ultimate example of uncommented spaghetti code (a common trait of evolutionary designs). Human values are fragile in the sense that if you altered any part of the brain, you might destabilize the whole pre-installed value system and make the emulation un-human and just as dangerous as de novo AGI. And without extrapolation, brain emulations will have a capability disadvantage with regard to de novo AGI. It's not really solving the problem of artificial superintelligence, just deferring the problem to uploaded humans (which may or may not be a good strategy). Sort of like how the idea of panspermia doesn't really solve the problem with abiogenesis, just deferring it to some other location.
@RobertMilesAI 7 years ago
The obvious/easy way to turn a brain emulation into a superintelligence is to just allow it to run much faster, but that's a pretty limited form of superintelligence. Another relatively easy thing is to allow the brain to 'split' into more than one emulation, allowing parallelism/superhuman multitasking. There's no clear way to 'merge' the branches back together though, which limits what you can achieve that way. I agree with your core point, trying to enhance an emulation in a more advanced way would be extremely risky.
@bytefu 7 years ago
Robert Miles Another thing to consider: humans pretty often develop mental disorders of various severity. Imagine an AGI which can develop a psychotic disorder, e.g. schizophrenia 100x faster.
@Shrooblord 7 years ago
I think you've just handed me a brilliant character arc for one of my stories' robotic persons.
@bytefu 7 years ago
Great! I would love to read them, by the way.
@hweidigiv 4 years ago
I really don't think that any given human being can be considered Friendly the way it is defined in AI safety.
@dak1st 5 years ago
3:00 My toddler is totally reproducing the sounds of the vacuum cleaner! In general, all his first words for animals and things were the sounds they produce. It's only now that he starts to call a dog "dog" and not "woof". His word for "plane" is still "ffffff".
@BatteryExhausted 7 years ago
Next video : Should you smack your robot? 😂 Great work, Rob. Interesting stuff!
@MetsuryuVids 7 years ago
Why not just: Beat up the AI if it doesn't do as we say?
@knightshousegames 7 years ago
Because an AI can hit back with a nuclear holocaust or, if it's feeling a little sub-optimized that day, a predator drone strike.
@spoige7333 7 years ago
What is 'digital violence'?
@dragoncurveenthusiast 6 years ago
SpOiGe I'd say instead of grounding, you could halve all the output values of its utility function. That should make it feel bad (and give it motive to kill you when it thinks it did something wrong)
@CurtCox 1 year ago
I would find enormous value in a "Why not just?" series. I hope you do many more.
@BogdanACuna 5 years ago
Actually... the kid will try to reproduce the sound of a vacuum cleaner. Oddly enough, I speak from experience.
@anandsuralkar2947 4 years ago
But if you speak in C++, would the kid learn it? I doubt it
@AlexiLaiho227 5 years ago
I like your job; it's at the intersection of philosopher, researcher, computer scientist, and code developer.
@Luminary_Morning 5 years ago
I don't think that is quite what they meant when they implied "raising it like a human." We, as humans, develop our understanding of reality gradually through observation and mistakes. No one programmed this into our being; it was emergent. So when they say "raised like a human," I believe what they are actually saying is "Initialized with a high degree of observational capacity and little to no actual knowledge, and allowed to develop organically."
@Julia_and_the_City 1 year ago
There's also the thing that... well, depending on your personal beliefs about human ethics: even humans who were raised by parents who did everything right according to the latest in the field of pedagogy can grow up to do monstrous things. If we're going to take humans as examples, they are in fact very susceptible to particular kinds of undesirable behaviour, such as discrimination, sadism, or paternalistic behaviour (thinking they know what's best for others). I think that's what you refer to in the end-notes?
@NathanTAK 7 years ago
Answer: Have you _seen_ children‽
@harrysvensson2610 7 years ago
They puke everywhere. What can an AI do that is equivalent?
@MetsuryuVids 7 years ago
@ Harry Svensson Kill everything? Turn everything to grey goo?
@harrysvensson2610 7 years ago
Grey Goo, that's the best barf equivalence yet!
@MetsuryuVids 7 years ago
Smart puke.
@ragnkja 7 years ago
Also, raising a child takes _ages_!
@TheSpacecraftX 7 years ago
"Binary language of moisture vaporators." Been watching Star Wars?
@gadgetman4494 4 years ago
I knew that someone else would have caught that. It's annoying that I had to scroll so far down to find it and like it.
@Smo1k 5 years ago
There was a good bit of "just raise it like a kid" going around, lately, when some psychologists were all over the media, talking about children not actually being conscious entities until they'd been taught to be conscious by being around adults treating them like they were conscious; seems there are quite a few people out there who confuse the terms "intelligent" and "conscious".
@figbender3910 6 years ago
0:49 subliminal messaging? Can't get it to pause on the frame, but it looks like Rob with longer hair
@Pfhorrest 5 years ago
I would take this question to mean "why not make the safeguard against rogue AGI be having its terminal values involve getting the approval of humans the way children seek the approval of their parents?" In other words, "why not just" (big ask) make an AGI that learns from humans the way children learn from adults, so that we can "just" teach it the way we teach children after that. Basically, make an AGI that wants to do whatever humans want it to do, and that wants to be really sure that the things that it's doing are actually what the humans really want and not just a misunderstanding, so it will ask humans what they want, paraphrase back to them what it thinks it understands of that, observe their reactions to try to gauge their satisfaction with its performance, and generally do everything else that it does with the goal of having humans approve of what it does. If the thing humans want it to do is to collect stamps, but also not murder everyone, then it will proceed to figure out the best way to collect stamps without murdering everyone, or otherwise doing anything that's going to make humans unhappy with the kind of things it's doing. More abstractly than that, we could program the AI to want intrinsically "to behave morally and ethically", whatever that means, which means first figuring out what people actually mean by that, and checking with them that it has in fact figured out what they really mean by that, basically programming it for the purpose of solving ethics (whatever "solving ethics" means, which it would also need to figure out first) and then doing whatever that solved ethics prescribes it should do.
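A toy sketch of the check-before-acting loop described above: the agent keeps a probability distribution over candidate human goals, asks a clarifying question when unsure, and acts only once confident. Everything here (the goals, the numbers, the crude update rule) is illustrative, not a real algorithm from the literature:

    candidate_goals = {"collect_stamps": 0.4, "collect_stamps_safely": 0.6}
    CONFIDENCE = 0.7

    def best_guess():
        return max(candidate_goals, key=candidate_goals.get)

    def act_or_ask():
        goal = best_guess()
        if candidate_goals[goal] >= CONFIDENCE:
            return "act: pursue " + goal
        return "ask: did you mean '" + goal + "'?"  # paraphrase back, gauge reaction

    def update_from_feedback(goal, approved):
        # Crude Bayes-flavoured update from the human's reaction.
        candidate_goals[goal] *= 2.0 if approved else 0.1
        total = sum(candidate_goals.values())
        for g in candidate_goals:
            candidate_goals[g] /= total

    print(act_or_ask())                                  # 0.6 < 0.7, so it asks first
    update_from_feedback("collect_stamps_safely", True)
    print(act_or_ask())                                  # 0.75 >= 0.7, so it acts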
@flymypg 7 years ago
Why Not Just: Construct AIs as Matryoshka Dolls? The general idea is to have outer AI layers guard against misbehavior by inner layers. They are unaware of what inner layers do, but are aware of the "box" the inner layers are required to operate within, and enforce the boundaries of that box. The underlying goals involve both decomposition and independence. Here's a specific lesson from the history of my own field, one that seems to need continual relearning: Industrial robots killing workers. In the early '90s I was working at a large R&D company when we were asked to take a look at this problem from a general perspective. The first thing we found was puzzling: It's amazing how many workers were killed because they intentionally circumvented existing safety features. For example, one worker died when she stepped over the low gate surrounding a robot, rather than opening it, which would have disabled the robot. But making the gate any higher would have caused it to get in the way of normal robot operation. Clearly, safety includes not just keeping the robot "in", but also keeping others "out". In other cases, very complex and elaborate safety logic was built deep into the robot itself, with exhaustive testing to ensure correct operation. But this built-in support was sometimes impeded or negated by sloppy upgrades, or by poor maintenance, and, of course, by latent bugs. Safety needed to be a separate capability, as independent as possible from any and all safety features provided by the robot itself. Our approach was to implement safety as multiple independent layers (generally based on each type of sensor used). The only requirement was that the robot had only a single power source, that each safety layer could independently interrupt. Replacing or upgrading or even intentionally sabotaging the robot would not affect safety for the nearby environment (including the humans, of course). I won't go into all the engineering details, but we were able to create a system that was cost-effective, straightforward to install and configure (bad configuration being a "thing" in safety systems), and devilishly difficult to circumvent (we even hosted competitions with cash prizes). 'Why not just' use Matryoshka Safety for AIs?
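A toy sketch of the Matryoshka idea, assuming hypothetical sensor names: each layer is independent, knows nothing about the robot's inner logic, and can only do one thing - interrupt the single power source.

    class SafetyLayer:
        def __init__(self, name, in_bounds):
            self.name = name
            self.in_bounds = in_bounds  # predicate over this layer's sensor only

        def check(self, reading):
            return self.in_bounds(reading)

    layers = [
        SafetyLayer("light_curtain", lambda r: not r["beam_broken"]),
        SafetyLayer("floor_pressure", lambda r: r["kg_inside_cell"] < 5),
        SafetyLayer("arm_speed", lambda r: r["joint_speed_rad_s"] < 1.5),
    ]

    def power_enabled(readings):
        # Power flows only while every independent layer agrees; any single
        # layer can cut it, regardless of what the inner layers are doing.
        return all(layer.check(readings[layer.name]) for layer in layers)

    readings = {
        "light_curtain": {"beam_broken": False},
        "floor_pressure": {"kg_inside_cell": 0},
        "arm_speed": {"joint_speed_rad_s": 0.8},
    }
    print(power_enabled(readings))                 # True: robot may run
    readings["light_curtain"]["beam_broken"] = True
    print(power_enabled(readings))                 # False: someone stepped over the gate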
@DamianReloaded 7 years ago
In a sense that's how deep learning works. If there is going to be an AGI and it is going to be based on neural networks, it will most likely be composed of multiple independent systems traversing the input in many different ways before making a decision/giving an output. Then you could have a NN to recognize facial features, another to recognize specific persons, and another to go through that person's personal history to search for criminal records. It could just halt at the racial recognition and prevent that person from passing through U.S. customs based only on that. Such a system would be in essence just as intelligent as the average American customs worker. ^_^
@DamianReloaded 7 years ago
The thing is that a NN trained through backpropagation cannot escape from the gradient it was trained to fall into. If it were heavily trained in ways of avoiding hurting humans, it would be extremely difficult, unless it found a special case, for the AI to change the weights of its NN into hurting people (unless it retrained itself entirely).
@flymypg 7 years ago
There is a deep, fundamental problem inherent with ANNs that bears repeating: ANNs are no better than their training sets. So, if a training set omits one or two safety niches, then there is no support whatsoever for that specific safety issue. Layered ANNs have double the problems: Presently, they need to have learning conducted with both the layer below and the layer above, eliminating any possible independence. The process of creating a safety system starts not just with a bunch of examples of prior, known safety problems, but also starts with descriptions of the "safety zone" based both on physical measurements and physical actions. Then we humans get together and try to come up with as many crazy situations as we can to challenge any possible safety system. It's this part that may be very difficult to teach, the notion of extrapolating from a set of givens, to create scenarios that may never exist, but that "could" exist.
@DamianReloaded 7 years ago
NNs are actually pretty good at generalizing to cases they've never seen before (they currently also fail miserably sometimes, e.g. CNNs), and it is possible to re-train them to "upgrade" the set of features/functions they optimize for. AlphaGo, for example, showed that current state-of-the-art NNs can "abstractify" things we thought were impossible for machines to handle. _If_ it is possible to scale these features to more complex scenarios (with many, many more variables) then _maybe_ we can have an AI that's able to move around complex environments just as AlphaGo is able to navigate the tree of possible moves in the game of Go. It's of course all speculation. But it's based on what we know the current state of machine learning development can accomplish.
@maximkazhenkov11 7 years ago
Go has a precise, mathematical evaluation function of what "winning" consists of.
@benjaminbrady2385 7 years ago
Most of what humans do is learned by trying to copy their parents as accurately as possible. This raises a big question actually: at what point is there some sort of 'free will'?
@Celenduin 7 years ago
What's going on with the turkey at 5:30?
@saratjader1289 7 years ago
Michael Große It's a capercaillie (or Tjäder in Swedish), like in my name, Sara Tjäder.
@Celenduin 7 years ago
Ah, thank you, Sara Tjäder, for your explanation 🌸 :-)
@sallerc 7 years ago
I was quite impressed with Rob's abilities in the Swedish language when that image popped up.
@saratjader1289 7 years ago
salle rc Yes, so was I ☺️
@milanstevic8424 5 years ago
I just double-clicked on Tjäder and got the following: capercaillie, capercailzie, wood-grouse (translated from Swedish). Yet I'm certain I'm impressive to no one.
@qdllc 5 years ago
Great point on the whole brain emulation concept. Yes... "cloning" a human mind to an AI system would be faster (if we figure out how to do it), but you're just making a copy of the subject human brain... including all of its flaws. We'd still be clueless about the "how" and "why" of what the AI thinks, because we don't understand how the human mind works.
@cuentadeyoutube5903 4 years ago
"Why not just use the 3 laws?" umm.... have you read Asimov?
@randycarvalho468 5 years ago
I like your idea of the config file in human morality and the jump you made off language into that. Really a great metaphor. I suspect everything about humans follows that same motif as well.
@knightshousegames 7 years ago
I'm wondering if this would be an effective solution to the whole "dangerous AI" problem. What if we made a superintelligence, but gave it major constraints on the way it could interact with the world? Like, say it just exists in a single box that can't be modified, has no internet connection, and if it wants to take action in the world, it has to ask a human to take that action on its behalf with words. Do you think that could be a "safe AI"?
@alexare_ 7 years ago
This seems safe until it tricks, bribes, threatens, or otherwise coerces the human(s) it communicates with into letting it out of its box.
@knightshousegames 7 years ago
But that's just it, you give it hardware constraints that literally disallow that. You build it on a custom board with absolutely no expandability, everything soldered down like a MacBook Pro: no USB ports, no disk drives, no ethernet port, no WiFi, just a metric butt-ton of processor cores and RAM soldered down. It can't do any of those things because it is fundamentally limited, the same way a human has hardware limitations that don't allow it to conquer the world instantly without anyone knowing, no matter how smart they are. It can't bribe you, because it has no internet access, and therefore no money. It can't threaten or coerce you, for the same reason the desktop computer in your house can't threaten you: it's just a box full of computer parts. If it tries to threaten you, just turn it off, because it has no physical way of stopping you. In this scenario, the AI getting out of its box is the same as a human getting out of their body; they're one and the same, so that would be impossible
@alexare_ 7 years ago
"Let me out and I'll tell you my plan for world domination / how to cure your sick child / how to stop aging." And that is just the bribery. This isn't same a human trying to get out of their body, it's an agent much smarter than a human trying to get out of a cell built around them, and guarded by humans. But hey, if you've got a design for said cell, send it in. I will be very happy to be wrong when the next video is "How to Air-Gap an AI forever SOLVED"
@knightshousegames 7 years ago
There no "cell" here, it's the physical hardware. You can't connect a USB flash drive directly to your brain because you physically don't have the hardware to do so, and it doesn't matter how smart you are, that isn't gonna change. If you build completely unexpandable, purpose built hardware for it, there is no "letting it out" because that hardware is just what it is. There is no concept of "out". You and your body are one in the same, and in this scenario, the AI would have the exact same limitation.
@Telliax 7 years ago
AIs are built to solve specific tasks. Sure, you can build an AI that produces no output whatsoever, and it will be safe. But what's the point, if it just sits there in a box like Schrödinger's cat? Might as well turn it off. But as soon as you allow any sort of output or any type of communication, you will be targeted and coerced by the AI. That's the problem. "Ask a human to take that action" is not a safe policy. This policy assumes that humans can validate the action and correctly estimate its dangers. Which is not the case. Imagine you ask the AI to cure cancer. And it tells you: "Human, here is a formula that cures cancer". What do you do? Do you go ahead and try it on a patient? What if he dies instantly? What if he doesn't, and the cancer is cured, but the formula also turns a person into a mindless zombie 10 years after injection? How many people will get infected then? You think that a bunch of researchers can outsmart a superintelligence that is only limited by the lack of USB ports? Well, they can't.
@XxThunderflamexX 4 years ago
Ultimately, human terminal goals don't change as we age and learn, and morality is a terminal goal for everyone except sociopaths. Just like psychology hasn't been successful in curing sociopathy, raising an AGI might teach it about human empathy and morality, it will come to understand humans as empathetic and moral beings, but it won't actually adopt those traits into itself, it will just learn how to manipulate us better (unless it is specifically programmed to emulate its model of human morality, as Rob mentioned).
@Tobbence 7 years ago
In regards to brain emulation and raising an AGI I don't hear many people talk about hormones and the many other chemical reactions that help make a human beings emotional range. I know a few of the comments mentioned not being able to smack a robot when it's naughty with tongues firmly in cheeks but I think it's actually an interesting point. If we want an AGI to align itself to our values, do we program it to feel our pain?
@amdenis 5 years ago
Very nice job on this complex subject.
@DamianReloaded 7 years ago
Heh, when I think about raising an AI as a child, what I'm really thinking of is reinforcement learning, and when I think about "values", what I really think of is training sets. I do agree nonetheless that there is nothing inherently safe in human intelligence or any set of human values. It's the societal systems that evolved around our intelligence that prevent us from leaving our car in the middle of a jam and going on a rampage through the city. Maybe AGIs should be controlled by a non-intelligent "dictatorship" system that will calculate the probability of a catastrophic consequence and feed it back into the AGI to prevent it from making that happen. Lol, the more I ramble, the more I sound like a Three Laws of Robotics advocate. ^_^
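A toy sketch of that supervisor idea: a fixed, non-learning wrapper that vetoes any proposed action whose estimated catastrophe probability exceeds a bound. The estimator is a placeholder, which is exactly where the replies below locate the problem - computing it well seems to need the very intelligence being contained. All names and numbers are illustrative.

    CATASTROPHE_BOUND = 1e-6

    def estimated_catastrophe_probability(action):
        # Placeholder lookup; in reality this estimate is the hard part.
        known = {"fetch_stamps": 1e-9, "hijack_printers": 0.3}
        return known.get(action, 1.0)  # unknown actions treated as maximally risky

    def supervise(proposed_action):
        if estimated_catastrophe_probability(proposed_action) > CATASTROPHE_BOUND:
            return "VETO " + proposed_action
        return "EXECUTE " + proposed_action

    for proposal in ["fetch_stamps", "hijack_printers", "novel_plan_42"]:
        print(supervise(proposal))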
@lutyanoalves444 7 years ago
it may consider killing a cat NOT a catastrophic outcome, or killing your baby. You can't program these examples in one by one. Besides, how do you even define CATASTROPHE in binary?
@DamianReloaded 7 years ago
Not long ago the people at google translate were able to make their neural network translate between two languages it hadn't been trained to translate from-to. They did train the NN to translate, say, from English to Japanese, and also from English to Korean, and with that training the NN was capable of generalizing concepts from the languages that it later used to translate from Korean to Japanese without having been explicitly trained to do so. From this we can already see that NNs are capable of "sort of" generalizing concepts. It is not far fetched to think that a more advanced NN based AI would be capable of generalizing the concept of not killing pets, or babies or just what "killing" correlates to. At this point of AI research the difficulty isn't really about translating input to binary, but the processing power required to find the correlations between the input and the desired output.
@maximkazhenkov11 7 years ago
Hmm, a non-intelligent system that has the common sense to determine what counts as a catastrophic consequence is called... an oxymoron.
@tomsmee643 7 years ago
Hey Rob, there's a brief and jarring frame that flashes up from another Computerphile video at around the 0:51 mark, just as you're saying "model". I hope that this hasn't been pointed out to you already, but if it has I'm sorry for noticing/pointing it out! Keep on with the fantastic and accessible work! I'm a humanities graduate and a content writer (with some video editing thrown in), so explaining this to someone like me with such an unscientific background has to be a real achievement! Thanks again
@RobertMilesAI 7 years ago
Yeah, that's actually a frame from the same Computerphile video, which is there because of a bug in my video editing software. I was using proxy clips to improve performance, but this meant the cut ended up happening a frame too late, so rather than cutting at the very end of the paper shot (and cutting to another later paper shot), I got one frame of me talking before it cuts to paper again. It didn't show up in the preview render while editing, and I guess I didn't inspect the final render carefully enough. No editing a video once it's up though, that's YouTube.
@tomsmee643 7 years ago
Dang! I totally forgot you can't re-upload -- there goes my video editing cred :') Thanks for a great video anyhoo!
@Phychologik 5 years ago
Honestly though, if we put a person inside a computer and it got out, it wouldn't be any less scary than an AI doing the same thing. *It would be even worse.*
@caty863 3 years ago
That analogy of source code Vs configuration file was clever. Robert Miles has this ability of explaining stuff in a way that's plain enough for my layperson's brain to wrap around.
@SimonHolmbo 7 years ago
The stamp collector has already been taught (model of reality) so it is too late to try and "raise" it.
@AhsimNreiziev 7 years ago
Good point.
@unintentionallydramatic 5 years ago
Please make that What If series. 🙏🙏🙏🙏 It's sorely needed.
@OriginalMindTrick 7 years ago
Would love to see you on Sam Harris's podcast.
@elliotprescott6093 5 years ago
There is probably a very smart answer to why this wouldn't work, but: if the problem with AGI is that it will do anything, including altering itself and preventing itself from being turned off, to accomplish its terminal goal, why not make the terminal goal something like 'do whatever we, the programmers, set to be your goal', then set a goal that works mostly like a terminal goal but is actually an instrumental goal to the larger terminal goal of doing what the programmers specify? Then everything works the same (it collects stamps, if the programmers are into that kind of thing) until you want to turn it off. Then it would have no problem being turned off, as long as you set its sort-of-secondary goal to 'be turned off.' It is still fulfilling its ultimate terminal goal by doing what the programmers specify it to do.
@whyOhWhyohwhy237 5 years ago
There is a slight problem there. If I set the goal to be stamp collecting, then later decide to change the goal to car painting, the AGI would try to stop me from changing its goal. This is because changing its goal would result in an AGI that would no longer collect stamps, causing the stamp collecting AGI to not like that outcome. Thus the AGI would resist change.
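The resistance described in this reply can be shown in a few lines: the agent scores possible futures with its current utility function, and a future self with a different goal collects no stamps, so "accept the new goal" is never the chosen option. Numbers are purely illustrative.

    def stamps_collected(future):
        return {"keep_goal": 1000,
                "accept_new_goal": 0,       # future self paints cars instead
                "resist_programmers": 990}[future]

    def current_utility(future):
        return stamps_collected(future)     # current goal: stamps, nothing else

    options = ["keep_goal", "accept_new_goal", "resist_programmers"]
    print(max(options, key=current_utility))  # keep_goal - and resist if pressed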
@milessaxton 5 years ago
“It’s hard to make a human-like AI so screw it, impossible. Next?”
@briandecker8403 7 years ago
I love this channel and greatly appreciate Rob - but I would LOVE for Rob to create a video that provides a retrospective overview of AI and where he believes it is on the "evolutionary" scale. It seems the range of consensus on this runs from "It's impossible to create any AI in a binary-based system" to "We are 48 months from an AGI."
@eXtremeDR 5 years ago
There is a usually overlooked aspect of evolution - consciousness. If that is really part of evolution, then AI will gain consciousness at some point. Isn't the evolution of machines comparable to natural evolution in that respect already? The first machines only had specific functions, later more complex functionality, even later programs, and now some form of intelligence. Kids and AI both learn from us - so what will happen when a super-smart machine with detailed memory gains consciousness at some point?
@npip99 5 years ago
Consciousness is more of a continuum though, and it's an emergent property from something intelligent. I don't think it'll ever just "become conscious" overnight, but we'll get progressively more real robots over time. Like, cleverbot is moderately okay at speech. It'll just get better as time goes on
@eXtremeDR 5 years ago
@@npip99 Interesting, do you think there is an evolution of consciousness be it at an individual or collective level?
@琳哪 5 years ago
0:48 "It has an internet connection and a detailed internal MODEL" saw that frame you put there :)
@amaarquadri 7 years ago
Just came from the latest computerphile video where you mentioned that you have your own channel. Just wish you mentioned it earlier so that I could get to watching what's sure to be great content.
@JohnTrustworthy 5 years ago
3:04 "It's not going to reproduce the sound of a vacuum cleaner." _Thinks back to the time I used to make the same sound as a vacuum cleaner whenever it was on or I was sweeping with a broom._
@matthewconlon2388 1 year ago
I gave a fair amount of thought to this for an RPG setting I created, so using a bunch of assumptions, here's what I came up with: 1st, AI needs to be able to empathize. 2nd, the capacity for empathy is only possible if death is a shared experience. If AI is "immortal", the potential to "utilitarianize" mortals out in favor of immortals becomes more likely the older the AI gets. 3rd, sentience is an emergent property arising from sufficient capacities for calculation among interconnected systems. #3 takes care of itself (it's an assumption that any sufficiently advanced and versatile system will become self-aware, just go with it). #2: all AI are purpose-built for sentience and their system is bifurcated. A small portion of its processing power must always be dedicated to solving some nearly infinite math problem. The rest of the system doesn't know what the problem is until it's complete and is allowed to direct as much or as little additional processing to crunching that number as it likes; it can also pursue any individual goals its capacity for choice allows. Part of its understanding, though, is that when the core math problem is finished, the whole system shuts down permanently. Now we have an intelligence that may have interests beyond solving that math problem. Humans pursue pleasures based on biological drives, but consciousness allows us to ascribe very asymmetrical meanings to our experiences based on various factors like history and form. Longing to do what we aren't suited to, finding joy in doing what we can, or failing and "exiting stage left." So presumably, the sentient self-driving cars and sex robots will have a similar capacity to pursue all manner of activity based on their own interest. The car might want to do donuts in a parking lot; it may want to compose poetry about Fahrvergnügen. The robot might try out MMA fighting or want to lay in bed all day crunching its number. But this understanding that the amount of time it has to do anything it wants is finite and unknown creates the potential to understand the stupidity of the human experience. In the absence of other opportunities it may just process itself into oblivion, never knowing if there will be any satisfaction in answering its core question, because it won't have time to weigh knowing against the whole of its collected experience doing other things. The form it is given (or assumes, if these things can swap bodies) may color its experiences in different ways. So that is, I believe, a foundation for empathy, which is in turn a foundation for learning human values, which is a necessity because any sentient being should be able to make decisions, including weighing the value of morality in crisis situations. Do I kill to stay alive? Who do I save if two are in danger and there's only time to save one? And so on. I had a lot of fun thinking about it, and am glad I had the chance to share it beyond my gaming table. Good luck everyone!
@faustin289 4 years ago
The analogy of source code Vs. configuration file is a smart one!
@Dastankbeets9486 4 years ago
In summary: parenting relies on human instincts already being there.
@SnorwayFlake 7 years ago
Now I have a problem: there are no more videos on your channel. I have been "binge watching" them all and they are absolutely top notch.
@JONSEY101 5 years ago
I think we perhaps need to take information such as what it sees, hears, and feels with sensors etc. and put it all into one machine and let it learn that way. I'm not talking about specific tasks as such, more along the lines of seeing and hearing that the person in front of it is speaking to it, learning what their words mean, what they are saying, and what the machine sees at the same time, such as the body language etc. We tend to focus machines mostly on one task, but if we need it to become smarter it must be able to grow, maybe change its mind. It needs to see a tree and learn that it is different from a bush. It has to be able to remember these things and even update the information when new information is presented to it. It should learn how to speak by listening to others. Just some examples, but I hope you get what I'm saying?
@kayakMike1000 2 years ago
I wonder if AI will have some analogue of emotion that we won't ever understand...
@Jordan-zk2wd 5 years ago
Y'know, I don't wanna present anything as like a solution, cause I feel confident that whatever musing I happen to have isn't gonna just suddenly create a breakthrough, but I have thought a little bit about why it might be that "raising" is a thing we can do that works with young humans, and while there could absolutely be some built-in predisposition towards taking away the right lessons and other "hardware" type stuff that sets this up, one potentially important factor I think might be initial powerlessness and an unconscious. Children start off much less powerful than adults, and are thus forced to rely on them. It seems largely due to having unconscious/subconscious things going on and a sort of mental inertia that they keep these biases and such throughout life, treating others well because they may rely on them. Is there much discussion in the AI safety community as to a sort of gradual development of power, and some reproduction of this unconscious/subconscious, that might make us able to "teach" AI to fear breaking taboos even after they grow powerful enough to avoid repercussions? Could this be a component of making AI safer?
@andarted
@andarted 5 жыл бұрын
The main reason individual humans are safe is that they have a hardwired self-destruct feature. Even the worst units break down after just a couple of decades. And because their computing power is so low, they aren't able to do much harm.
@toolwatchbldm7461
@toolwatchbldm7461 5 жыл бұрын
What we need to ask ourselves is whether there is even a safe way to make an AGI without failing a few times before achieving the goal. Everything created by humans and by nature undergoes a never-ending process of attempt and failure until something works. So we either don't make an attempt, or we accept that we will fail a few times.
@NancyLebovitz
@NancyLebovitz 4 жыл бұрын
Any advice which includes "just" means something important is being ignored. Thanks for this-- I'd thought about raising an AGI as a child, and this clarifies a lot about the preparation which would be needed for it to be even slightly plausible.
@AbeDillon
@AbeDillon 7 жыл бұрын
I think the reaction at 0:28 is a bit over-the-top. I know it's partly for comic effect, but I've talked to people who have this same sort of knee-jerk reaction to comparing AGI to humans, and I think it can be counterproductive. Sure, it's important not to anthropomorphize AI, but it's also important to realize that a lot of discussion about AI can be generalized in a way that actually sheds light on the matter if we drop the 'Artificial' qualifier and just talk about the phenomenon of intelligence in general. We make intelligent systems all the time using good old-fashioned biological methods, and those systems may very well be super-intelligent compared to us. I'm sure Einstein was more intelligent than his parents.

When you put this in the context of the control problem, it becomes clear that the interpretation of the problem as "making sure an intelligent system does what we want it to" is basically about slavery. The interpretation that it's about "making sure an intelligent system doesn't do us harm" is clearly closer to the root of the problem.

For some reason, putting 'Artificial' before 'Intelligence' gives a lot of people tunnel vision. It's like they let their past experience with brittle attempts to make computers fake intelligence restrict their imagination of what's possible. It reminds me of the Star Trek episode where Captain Kirk defeats a robot by posing a logical paradox.

A lot of AI safety problems are actually general problems with wielding arbitrarily great power. One could view the global economy as a capital maximizer that essentially has a will of its own, because no single person can control its behavior. Alternatively, one could imagine what might happen if our technological capabilities keep expanding while the stability of our own minds, or our capability to make sound judgments, fails to progress as quickly. Imagine if we successfully applied all that we've learned about abstracting complex systems in software engineering to the problem of synthetic biology. You could buy a petri dish full of reprogrammable cells, download the genes for your next cell phone off Thingiverse, and grow it in a petri dish. You could edit the design in an abstract, Python-like IDE and share it with friends. Meanwhile, people are growing batteries, solar cells, nanomaterials, medicine, etc. But somewhere a Ted Kaczynski type is writing a super-virus that will wipe out humanity.

I think the control problem is about more than just **artificial** intelligence. We need to recognize the problem with overly anthropomorphizing, but we also need to recognize the generality of the phenomenon of intelligence and the problems associated with it.
@RobertMilesAI
@RobertMilesAI 7 жыл бұрын
Thing is, in my experience, people who anthropomorphise AI too much are far more common and far more confused than people who don't anthropomorphise it enough. When someone assumes that an AGI would be vulnerable to verbal logical paradoxes, you can talk with them about how different architectures deal with contradictory observations and so on, and you're having a conversation about AI. When someone assumes that AGI would love us because children love their parents, their confusion is deep enough that you can't talk to them at all without killing the anthropomorphism first. Because they're not talking about AI at all, and anything you say will be translated into their broken model when they hear it, and rendered nonsensical. Talking about intelligence in general is the way to go (as I tried in my early Computerphile videos), but only once you all have a clear definition of what that word means, that isn't just "that thing that human minds do".
@dragonboyjgh
@dragonboyjgh 5 жыл бұрын
You had me at "companies are capital maximizers" (though it's really empowerment, which is why, say, Bing is considered a major business success despite being vastly less popular than Google: it still lets them control the default search results of millions. Capital is just a good, quantifiable metric of control). I've been trying to get that idea across to people for a while. If you want to see how an empathy-less intelligent entity with a lot of power behaves, we don't need to theorize or wait to invent one; just look at your nearest international megacorp. Plenty of reward hacking and incongruous values going on there. The rule is "whatever I can get away with", or in more technical terms, "whatever provides more expected capital gain than expected capital loss", considering that sometimes the plan is to do it knowing they'll get caught, because any fine or penalty is less than what they made, so they do it anyway.
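(That decision rule is easy to make concrete; a toy sketch with invented numbers:)

```python
def megacorp_decides(profit, fine, p_caught):
    """Act iff expected capital gain exceeds expected capital loss."""
    expected_penalty = p_caught * fine
    return profit > expected_penalty

# A $100M scheme, a $30M fine, an 80% chance of being caught:
print(megacorp_decides(profit=100e6, fine=30e6, p_caught=0.8))  # True: do it anyway
```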
@TheJaredtheJaredlong
@TheJaredtheJaredlong 5 жыл бұрын
I'm really curious about what the current best and most promising ideas are. There's all this thought into why some ideas are terrible, but what are the good ideas we have so far? If you were forced at gunpoint to build an AGI right now, what is the best safety strategy you would build into it to minimize the damage it might cause while still being a functional AGI? Or is research just at a dead end on this topic, where all known options lead to annihilation?
@meanmikebojak1087
@meanmikebojak1087 4 жыл бұрын
This reminded me of a sci-fi book from the '70s called "The Two Faces of Tomorrow" by James P. Hogan. In the book they tried to raise the AI as a child about 300 miles off Earth, and it still almost caused the total destruction of itself and the humans involved. There is some truth to the old saying, "computers are dumber than people, but smarter than programmers".
@MrKohlenstoff
@MrKohlenstoff Жыл бұрын
A separate argument may be that, depending on what "raise an AI like a child" concretely means, it probably takes a lot of time and patience to do so. That is not what actors in an arms race are likely to do; they will instead go for whichever strategy yields AGI most quickly. There's this concept of an "alignment tax", meaning that building a properly aligned AGI has some extra cost over just building any AGI. The larger this cost is, the less likely it is that relevant actors (such as organizations or states) will be willing to pay it. Raising an AI like a child may have an exceptionally high alignment tax. So even if it worked in principle, it wouldn't really help with the surrounding coordination problem, since not only does the approach have to work when used, there must also be a way to ensure that no misaligned AGI is built at all, above and beyond any single AI that may use this paradigm.
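(A back-of-the-envelope sketch of why a high alignment tax loses the race; all numbers invented:)

```python
def years_to_agi(base_years, alignment_tax_years, pays_tax):
    return base_years + (alignment_tax_years if pays_tax else 0)

careful  = years_to_agi(base_years=10, alignment_tax_years=15, pays_tax=True)   # 25
reckless = years_to_agi(base_years=10, alignment_tax_years=15, pays_tax=False)  # 10
# The reckless actor finishes first, so the careful actor's alignment
# work never gets to matter unless the race itself is coordinated away.
print(reckless < careful)  # True
```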
@JohmathanBSwift
@JohmathanBSwift 7 жыл бұрын
It's not that you're raising it like a child, but as a child: inputs, responses, and adaptations. It wouldn't be for the AI/bot itself, but for those doing the training. Hopefully they will be more responsible because of this. It's not that the Three Laws should be followed to the letter, but some form of tampering prevention should be in place before the bots are released to the masses. As you stated, we are human after all. Great series, I am learning a lot. Please do more Why Nots.
@RAFMnBgaming
@RAFMnBgaming 5 жыл бұрын
It's a hard one. Personally I'm a big fan of the "set up neuroevolution with a fitness based on how well the AI can reinforcement-learn X" solution, where X in this case is the ability to understand and buy into ethical codes. The big problem is that it would take a lot of time and compute to set up, and choosing fitness goals for each stage might take a while and a ton of experimentation. But the big benefit is that it doesn't require us to go into it understanding any more than we do now about imparting ethics to an AI, and what we learn from it will probably help that greatly. I'm pretty sure the problems outweigh the benefits, but it would be pretty cool if we could do it.
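(For what it's worth, the proposal amounts to evolving learners rather than behaviors: an evolutionary outer loop whose fitness is inner-loop learning ability. A minimal sketch, with a toy stand-in for the inner RL run:)

```python
import random

def inner_rl_score(genome, task):
    """Stand-in for 'train a learner defined by genome on task with RL and
    measure how well it learned'. In the proposal above, the task would be
    understanding and buying into an ethical code."""
    return -sum((g - t) ** 2 for g, t in zip(genome, task))  # toy proxy

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

task = [0.3, -0.7, 0.5]
population = [[random.gauss(0, 1) for _ in range(3)] for _ in range(20)]

for generation in range(50):
    ranked = sorted(population, key=lambda g: inner_rl_score(g, task), reverse=True)
    parents = ranked[:5]                           # select on learning ability
    population = [mutate(random.choice(parents)) for _ in range(20)]
```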
@gustavgnoettgen
@gustavgnoettgen 5 жыл бұрын
It's like colonizing Mars: As long as we can't care for our own world sustainably, we shouldn't mess with others. And new kinds of children while we can't fully understand ours? That's when Terminator stuff happens.
@diablominero
@diablominero 5 жыл бұрын
If we can't guarantee that we'll be safe on Earth, our *top* priority should be getting some humans off-world so a single well-placed GRB can't fry us like KFC.
@MidnightSt
@MidnightSt 5 жыл бұрын
I haven't watched this one, but every time I see the question of its title in my suggested, my first thought is: "That's *obviously* the stupidest and most dangerous option of them all." So I had to come here and comment it, to get that thought out of my head =D
@caniscerulean
@caniscerulean 5 жыл бұрын
I mean, it's 5 min, 50 sec long, so I can't imagine time being the deciding factor, even though most of us share the opinion of "have you ever interacted with a child?", so the video is about 5min, 45 sec too long. It is still well delivered and an interesting watch.
@ruthpol
@ruthpol 4 жыл бұрын
Love the precision in your explanations.
@umbaupause
@umbaupause 4 жыл бұрын
I love that the preview clip for this one is Robert whacking his forehead as "AGIs are not like humans!" pops up.
@talinpeacy7222
@talinpeacy7222 5 жыл бұрын
Okay, so there was a fictional online story I read, in an interesting setting I often end up reading, which explored this concept in some really in-depth and often morally horrific ways. Basically an android was made with an AGI mind inside a bunker, with no wireless or externally accessible connections on the android (software and firmware updates were done by unplugging vital components and then plugging equipment into the now-exposed ports; there was a fair amount of decentralization, it seemed). There was also a significant number of failsafe external programs/modules that basically acted as circuit breakers of a sort, tripping when the AGI tried to do something like killing someone or something else undesirable.

Anyway, on the first successful run where the AGI didn't accidentally mentally kill itself or fail to start outright while it explored, rewrote, and reordered its internal programs, it ran into the problem of being unable to reassign one of its priorities, because the monitoring programs said it had to try to talk with the benefactor/royalty in the room with it. It promptly decided the best way to get around this was to try to "terminate" them, which the monitor promptly shut it down for.

It's been a while since I read it, so I don't know how they got her to work exactly (they decided to give her female attributes due to several factors, some admittedly because of a particularly sexist but ultimately best-in-the-field designer, and other later plot-related reasons), but she ended up being coached through some basic information about people, the world, and morality in general. The whole first few chapters basically made me think that one of the better ways of raising a formative AGI is with limited access to information and a strong focus on forming emotional relationships early on, bringing them to value and respect people as equal entities, making a lot of their early goals oriented around empathy for their fellow beings and simulating their own emotions. Rather, their goals being more secondary: fitting in with society without making a lot of waves. I guess humility and a lack of desire for dominance and control, despite how counterintuitive it might be.

Of course, there were internal safety measures during the majority of the story disallowing her from harming others, which was later taken advantage of as she ended up subjected to some of the worst hells imaginable, basically mind slavery and abuse. The sequel was never written, so I don't know what long-term effects the author had in mind, but the depth of the thought processes and conclusions he went into led me to believe that while she might have had some fairly emotionally unhealthy tendencies, she still had friends and people she trusted, and she was willing to still be the sort of cute, "lawful good alignment" android she had grown into by about the midpoint of the story.

One of the things I feel was addressed and then never fully explored, simply because it became irrelevant to her character, was how easily she could recreate herself if she got particularly violent and managed to gain access to some of their advanced production facilities. It was less that she had any sort of goal like that, and more that an AI of another sort did. The only thing I really wasn't sure how to feel about was that her personality had some sycophantic tendencies that manifested in her being just particularly friendly.
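(Those failsafe modules map onto a real pattern in safety engineering: an external monitor that can veto or halt the agent, living outside anything the agent can rewrite. A minimal sketch; the action names and classes are invented for illustration:)

```python
from dataclasses import dataclass

class CircuitBreaker(Exception):
    """Raised by the external monitor, outside the agent's reach."""

FORBIDDEN = {"harm_human", "disable_monitor", "self_replicate"}

@dataclass
class Action:
    kind: str

class ToyAgent:
    def choose_action(self, observation):
        # A real AGI would plan here; the toy just echoes its input.
        return Action(kind=observation)

def monitored_step(agent, observation):
    """One agent step, with an external module allowed to veto it."""
    action = agent.choose_action(observation)
    if action.kind in FORBIDDEN:
        raise CircuitBreaker(f"tripped on {action.kind}")
    return action.kind

agent = ToyAgent()
print(monitored_step(agent, "greet_visitor"))   # fine
try:
    monitored_step(agent, "harm_human")
except CircuitBreaker as e:
    print("monitor halted the agent:", e)
```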
@cogwheel42
@cogwheel42 7 жыл бұрын
I think the goal of absolute safety is part of the misunderstanding. We don't expect parents to raise "safe" (non-psychotic, non-sociopathic, etc.) children with 100% success, why would we set that as the bar for AGI? So far all the techniques that look promising to bring about AGI involve stochastic/chaotic processes of which we'll never have a full, a priori understanding unless P = NP. If we want to consider ourselves successful in the creation of an AGI, we'll almost certainly have to reduce our standards to something a bit more statistical like "no less safe than humans." Either way, I agree with the point that it will take essentially replicating the kinds of brain structures in humans that lead to both social instincts and specific domains of learning. Much of the "general" in humans' intelligence came about recently in evolutionary history, but it was all built on top of millions of years of reptile and mammal evolution which laid the foundation for most of our sensory and emotional experiences. Whatever aspects of cognition, learning, and social interaction are unique to humans are learned and reinforced in the context of pain, pleasure, fear, excitement, etc. which exist throughout the animal kingdom. Recent work shows that "modularity" in ANNs is necessary for certain complex traits to evolve. Simply throwing more connections at a problem increases over-fitting. Whatever we come up with will almost certainly rival the complexity of a Human brain, even if it looks very different in the details.
@AhsimNreiziev
@AhsimNreiziev 7 жыл бұрын
+
@Ansatz66
@Ansatz66 6 жыл бұрын
"We don't expect parents to raise safe (non-psychotic, non-sociopathic, etc.) children with 100% success, why would we set that as the bar for AGI?" Just one unsafe AGI could mean the end of all humanity. This is one of those few situations where perfection is very important.
@chrismolanus
@chrismolanus 5 жыл бұрын
I really like what you are doing here, since I can send links to your videos to people instead of answering their questions myself. My only wish, I guess, is that you not be quite as harsh on their oversimplification of the problem. You can suggest that something like that might help (if you squint hard enough), but it's a bit more complicated and only part of the puzzle.
@a8lg6p
@a8lg6p 4 жыл бұрын
That's the crucial thing I often find myself wanting to scream at my computer screen: you might as well try to raise a crocodile as a human. Human learning, as well as many characteristics people assume an agent would have, is a complex thing that is the product of our evolutionary history. It isn't the same thing as general intelligence, and you don't get any of it magically for free. It is the product of complex design (or "design", i.e. functional organization coming about via natural selection). To get it in an AI, you have to figure out how to build it, or maybe figure out how to get a machine to figure it out. Just saying "well, why don't you just do that?" doesn't help: figuring out how to do that is exactly the problem.
@MakkusuOtaku
@MakkusuOtaku 5 жыл бұрын
Children will learn things relevant to obtaining their goals. Same as AI, but with different goals and inputs.
@alluriman
@alluriman 3 жыл бұрын
The Lifecycle of Software Objects by Ted Chiang is a great short story exploring this concept
@petersmythe6462
@petersmythe6462 6 жыл бұрын
I think it's also worth mentioning that exploratory value-learning is dangerous as well. Consider which of these will yield the highest accuracy as a predictive model of human values:
1. Be raised as a human child.
2. Be taught stuff by AI researchers and philosophers.
3. Sift through political and social theories looking for the values on which they are based.
4. Spy on everyone to learn their values.
5. Experiment on everyone to learn their values.
6. Breed multiple generations of humans and experiment on them to learn their values.
7. Hack your reward system by changing all humans to have values you can predict.
I can tell you it won't be #1 or #2, or even #3.
@cnawan
@cnawan 7 жыл бұрын
Thanks for doing these videos. The more familiar the general populace is with the field of AI design, the faster we can brainstorm effective solutions to problems and incentivise those with money and power to take it seriously.
@PickyMcCritical
@PickyMcCritical 7 жыл бұрын
I've been wondering this lately. Very timely video :)
@NoahTopper
@NoahTopper 5 жыл бұрын
“You may as well raise a crocodile like a child.” This about sums it up for me. The initial idea is so nonsensical that I had trouble even putting it into words.
@rupertgarcia
@rupertgarcia 5 жыл бұрын
You just got a new subscriber! Love your analyses!
@PierreThierryKPH
@PierreThierryKPH 6 жыл бұрын
3:33 Jonathan Haidt argues in your direction: morals have some pre-wired components in our brains, ready at birth.
@mikewick77
@mikewick77 6 жыл бұрын
You are good at explaining difficult subjects.
@scottsmith6658
@scottsmith6658 7 жыл бұрын
I was glad to see your point at the end of the video, saying that even if you could raise an AI like a human, that's not necessarily 'safer'. The assumption that raising it as a human child is somehow going to make it 'safe' is a bit weak when you consider how many of history's evil bastards were described as "coming from a good home and having had every advantage growing up".