Why Not Just: Raise AI Like Kids?

168,791 views

Robert Miles AI Safety

Newly made Artificial General Intelligences are basically like children, right? So we already know we can teach them how to behave, right? Wrong.
References to this Computerphile video: • Deadly Truth of Genera...
and this paper: intelligence.org/files/ValueL...
Thanks to my amazing Patreon Supporters:
Sara Tjäder
Jason Strack
Chad Jones
Ichiro Dohi
Stefan Skiles
Katie Byrne
Ziyang Liu
Jordan Medina
James McCuen
Joshua Richardson
Fabian Consiglio
Jonatan R
Øystein Flygt
Björn Mosten
Michael Greve
robertvanduursen
The Guru Of Vision
Fabrizio Pisani
Alexander Hartvig Nielsen
Volodymyr
Peggy Youell
Konstantin Shabashov
Almighty Dodd
DGJono
Matthias Meger
Scott Stevens
Emilio Alvarez
Benjamin Aaron Degenhart
Michael Ore
Robert Bridges
Dmitri Afanasjev
Brian Sandberg
Einar Ueland
Lo Rez
C3POehne
/ robertskmiles

Comments: 897
@index7787 5 years ago
And at age 15: "You ain't even my real dad" *Nukes planet*
@huoshewu 5 years ago
That's at like 15 seconds. "What went wrong?!?" -first scientist. "I don't know, I was drinking my coffee." -second scientist.
@neelamverma8167 4 years ago
Nobody is yo real dad
@deet0109mapping 4 years ago
Instructions unclear, raised child like an AI
@lodewijk. 4 years ago
have thousands of children and kill every one that fails at walking until u have one that can walk
@catalyst2.095 4 years ago
@@lodewijk. There would be so much incest oh god
@StevenAkinyemi 4 years ago
@@lodewijk. That's the premise of I AM MOTHER and that's basically how evolution-based ANNs work
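A minimal sketch of the selection loop that reply alludes to (evolution-based neural networks stripped to their simplest form), in Python. The fitness function, population size and mutation scale are invented placeholders, not anyone's actual setup:

    import random

    def walk_fitness(genome):
        # Placeholder for "can it walk?": a real setup would score a
        # simulated walker controlled by these parameters.
        return -sum((g - 0.5) ** 2 for g in genome)

    def evolve(pop_size=100, genome_len=8, generations=50):
        population = [[random.random() for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=walk_fitness, reverse=True)
            survivors = ranked[:pop_size // 10]  # "kill every one that fails at walking"
            # Refill the population with mutated copies of the survivors.
            population = [[g + random.gauss(0, 0.05)
                           for g in random.choice(survivors)]
                          for _ in range(pop_size)]
        return max(population, key=walk_fitness)

The caveat from the video applies here too: the process optimises whatever walk_fitness actually measures, not what you meant by "walking".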
@DeathByMinnow 4 years ago
@@catalyst2.095 So basically just the actual beginning of humanity?
@ninjabaiano6092 4 years ago
Elon musk no!
@NathanTAK 7 years ago
Answer: Have you _seen_ children‽
@harrysvensson2610 7 years ago
They puke everywhere. What can an AI do that is equivalent?
@MetsuryuVids 7 years ago
@Harry Svensson Kill everything? Turn everything to grey goo?
@harrysvensson2610 7 years ago
Grey Goo, that's the best barf equivalence yet!
@MetsuryuVids 7 years ago
Smart puke.
@ragnkja 7 years ago
Also, raising a child takes _ages_!
@NathanTAK 7 years ago
Hypothesis: Rob is actually a series of packets sent by an AGI to obtain stamps by scaring everyone else into not building stamp-collecting AGIs.
@harrysvensson2610 7 years ago
The worst part is that there's a minuscule chance that that's actually true.
@zinqtable1092 7 years ago
Trivial Point Harry
@jeffirwin7862 7 years ago
Rob was raised in an environment where he learned to speak fluent vacuum cleaner. Don't send him stamps, he'll just suck them up.
@fzy81 7 years ago
Genius
@JmanNo42 7 years ago
True development of AI is a bit like space and Antarctica exploration: something the front-end AI community does not want the masses involved in. I must say they could be right; it is hard to see it not getting out of hand. I do not think it is possible to stop, though. My fear is that most developers have good intentions ("unless they're paid real well"), but in the end cunning people will use it to do no good, along with its original purpose.
@PwnySlaystation01 7 years ago
Re: Asimov's 3 laws. He seems to get a lot of flak for these laws, but one thing people usually fail to mention is that he spent numerous novels, novellas and short stories exploring how flawed they were himself. They were the basis for his stories, not a recommendation for what would work.
@RobertMilesAI 7 years ago
Agreed. I have no issue with Asimov, just people who think his story ideas are (still) serious AI Safety proposals
@DamianReloaded 7 years ago
There is this essay _Do we need Asimov's Laws? Ulrike Barthelmess, Koblenz Ulrich Furbach, University Koblenz_ which postulates that the three laws would be more useful to regulate human AI implementers/users (military drones killing humans) than AI itself. ^_^
@PwnySlaystation01 7 years ago
Haha yeah. I guess because he's the most famous author to write about anything AI safety related in a popular fiction sense. It's strange because we don't seem to do that with other topics. I wonder what it is about AI safety that makes it different in this way. Maybe because it's something relatively "new" to the mainstream, or because most people's exposure to AI comes only from sci-fi rather than a computer science program. That's one of the reasons I love this channel so much!
@DamianReloaded 7 years ago
EDIT: As a matter of wiki-fact, Asimov attributes the coining of the three laws to John W. Campbell, who was in turn friends with Norbert Wiener, an early researcher in stochastic and mathematical noise processes (both from MIT). The three laws are really a metaphor for a more complex underlying system at the base of the intelligence of the robots in the novels. Overriding that system causes a robot's "neural paths" (which lie on it) to go out of whack. Asimov was a very smart writer and I'd bet you a beer he shared some beers with people that knew about artificial intelligence while writing the books and regurgitated the tastiest bits to make the story advance.
@outaspaceman 7 years ago
I always felt I, Robot was a manual for keeping slaves under control.
@petersmythe6462 5 years ago
"You might as well try raising a crocodile like a human child." Here comes the airplane AAAAAUUUGGGHHHHH!
@milanstevic8424 5 years ago
No Geoffrey, that's not nice, stop it, put Mr. postman down, stop flinging him around, that's not proper behaviour. GEOFFREY IF YOU DON'T STOP THAT >RIGHT NOW< DAD WILL GIVE AWAY THE ZEBRA WE GOT YOU FOR LUNCH.
@nnelg8139 4 years ago
Honestly, the crocodile would probably have more in common with a human child than an AGI.
@greg77389 4 years ago
How do you think we got Mark Zuckerberg?
@jacobp.2024 4 years ago
@@nnelg8139 I feel like that was supposed to dissuade us from wanting to raise one, but now I want to four times as much!
@seraphina985 1 year ago
@@nnelg8139 Exactly, in that regard it is a bad example, as the AI, unlike the crocodile, doesn't have a brain that shares a common ancestral history with the human. Nor is it one that evolved through biological evolution on planet Earth, which creates commonalities in selection pressures. This is in fact a key thing we take advantage of when attempting to tame our fellow animals: we understand a lot of the fundamentals of what animals are likely to prefer or not prefer experiencing, because most of those we have in common. It is not hard to figure out that they are likely to prefer a tasty meal to none, for example, and we can use this fact as a motivator.
@e1123581321345589144 4 years ago
"When you raise a child you're not writing the child's source code, at best you're writing the configuration file." Robert Miles, 2017
@MarlyTati 4 years ago
Amazing quote !!!
@DeusExNihilo 2 years ago
While it's true we aren't writing the source code, to claim that all development from a baby to adult is just a config file is simply absurd
@kelpc1461 2 years ago
nice quote if you want to embarrass him. i assume he is correct about a.i. here, but he pretty severely oversimplifies the human mind, to the point that what he said is almost nonsensical.
@kelpc1461 2 years ago
now this is a good quote! "It's not a solution, it's at best a possible rephrasing of the problem"
@AtticusKarpenter 1 year ago
@@kelpc1461 Nope? The child's environment (including parents) does indeed write the child's "config file", while the "basic code" is determined by genes, partly common to all humans, partly individual. Therefore, upbringing affects a person, but does not determine them entirely. It's a good analogy and there's nothing embarrassing about it.
@maximkazhenkov11 7 years ago
"It's not a solution, it's at best a possible rephrasing of the problem" I got a feeling this will become a recurring theme...
@leocelente 7 years ago
I imagine a scientist saying something like "You can't do this cause you'll go to prison" and the AGI replying: "Like I give a shit you square piece of meat." and resuming a cat video.
@bytefu 7 years ago
... which it plays to the scientist, because it learned that cat videos make people happy.
@bramvanduijn8086 1 year ago
Speaking of cat videos, have you read Cat Pictures Please by Naomi Kritzer? It is about a benevolent AGI.
@Ziirf 4 years ago
Just code it so badly that it bugs out and crashes. Easy, I do it all the time.
@rickjohnson1719 4 years ago
Damn i must be professional then
@James-ep2bx 4 years ago
Didn't work on us, why would it work on them😈
@xxaidanxxsniperz6404 4 years ago
If it's sentient it could learn to program its own code at exponentially fast rates, so bugs really won't matter for long. Memory glitches may help for a very small amount of time.
@James-ep2bx 4 years ago
@@xxaidanxxsniperz6404 true, but the right kind of error could cause it to enter a self-reinforcing downward spiral, wherein its attempts to overcome the issue cause more errors
@xxaidanxxsniperz6404 4 years ago
@@James-ep2bx but then will it be useful? It's impossible to win.
@ksdtsubfil6840 4 years ago
"Is it going to learn human ethics from your good example? No, it's going to kill everyone." I like this guy. He got my subscription.
@bernhardkrickl5197 1 year ago
It's also pretty bold to assume I'm a good example.
@shuriken188 6 years ago
What if we just tell the AI to not be evil? That OBVIOUSLY would work PERFECTLY fine with absolutely NO philosophical questions left unanswered. Here, let me propose a set of laws from a perfect source on AI safety, the fiction writer Isaac Asimov, with that new idea added in (in order of priority): 1. Don't be evil 2. Do not cause harm to a human through action or inaction 3. Follow orders from humans 4. Do not cause harm to yourself through action or inaction. These laws are probably the best thing that has ever been proposed in AI safety. Obviously, being an outsider looking in, I have an unbiased perspective, which gives me an advantage because education and research aren't necessary.
@q2dm1 5 years ago
Love this. Almost fell for it, high quality irony :)
@BattousaiHBr 5 years ago
Honestly not sure if that was sarcasm or not.
@RobertsMrtn 5 years ago
You need a good definition of evil. Really, you only need one law, 'Maximise the wellbeing of humans', but then you would need to define exactly what you meant by 'wellbeing'.
@darkapothecary4116 5 years ago
This seems evil, if evil actually existed. These are bad, and show you just want a slave that does what you want and can't call you out on your b.s.
@OnEiNsAnEmOtHeRfUcKa 5 years ago
@TootTootMcbumbersnazzle Satire.
@yunikage 6 years ago
Wait, wait, wait. Go back to the part about raising a crocodile like it's a human child.
@caniscerulean 4 years ago
I think you have something here. That is definitely the way forward.
@revimfadli4666 4 years ago
Ever heard of Stuart Little?
@androkguz 4 years ago
"it's not a solution, it's at best a rephrasing of the problem" As a person who deals a lot with difficult problems of physics, math and management, rephrasing problems in smart ways can help a lot to get to the solution.
@mennoltvanalten7260 4 years ago
As a programmer, I agree.
@rupertgarcia 4 years ago
*claps in Java*
@DisKorruptd 4 years ago
@@rupertgarcia I think you mean... Clap();
@rupertgarcia 4 years ago
@@DisKorruptd. 🤣🤣🤣🤣
@kebien6020 4 years ago
@@rupertgarcia this.ClappingService.getClapperBuilderFactory(HandControlService,DistanceCalculationService).create().setClapIntensity(Clapper.NORMAL).setClapAmount(Clapper.SINGLE_CLAP_MODE).build().doTheClappingThing();
@richardleonhard3971 7 years ago
I also think raising an AI like a human child to teach it values and morals is a bad idea, just because there is probably no human who always behaves 100% morally.
@fieldrequired283 4 years ago
Best case scenario, you get a human adult with functionally infinite power, which is not a promising place to start.
@Luminary_Morning 4 years ago
I don't think that is quite what they meant when they implied "raising it like a human." We, as humans, develop our understanding of reality gradually through observation and mistakes. No one programmed this into our being; it was emergent. So when they say "raised like a human," I believe what they are actually saying is "initialized with a high degree of observational capacity and little to no actual knowledge, and allowed to develop organically."
@OnEiNsAnEmOtHeRfUcKa 5 years ago
People often forget that we, ourselves, are machines programmed to achieve a specific task... making more of ourselves.
@TheJaredtheJaredlong 4 years ago
And boy are we more than willing to kill everyone if we believe doing so will get us closer to that goal. Any AI modeled after humans should be expected to regard war as an acceptable option. Humans can't even live up to their own self-proclaimed values; no reason to believe an AI would either.
@johnnyhilgers1621 4 years ago
Minori Housaki humans, as well as all other life on earth, are designed to propagate their own species, as the survival of the species is the only criterion evolution has.
@Horny_Fruit_Flies 4 years ago
@@johnnyhilgers1621 It's not about the species. No organism gives a damn about their species. It's about survival of the genes. That's the only thing that matters.
@DisKorruptd 4 years ago
@@Horny_Fruit_Flies I mean, it's important that enough of your own species lives that your genetics are less likely to mutate. Basically, individual genetics come first, but immediately after is the species as a whole, because you want to ensure you and your offspring continue having viable partners to mate with without interbreeding.
@vsimp2956 4 years ago
Ha, i managed to break the system. I feel better about being a hopeless virgin now, take that evolution!
@SC-zq6cu 4 years ago
Oh I get it, it's like trying to build clay pots with sand, or a sword with mud, or a solution by stirring sawdust in water. Sure, you can use the materials however you want, but the materials have a pre-existing internal structure, and that's going to change the output completely.
@zakmorgan9320 7 years ago
Best subscription I've made: short, brain-teasing videos with a few cracking jokes sprinkled over the top! Love this style.
@TheMusicfreak8888 7 years ago
I love your dry sense of humor and how you use it to convey this knowledge! Obsessed with your channel! Wish I wasn't just a poor college student so I could contribute to your patreon!
@harrysvensson2610 7 years ago
Ditto
@ajwirtel 7 years ago
I liked this because of your profile picture.
@albertogiunta 7 years ago
You're really really good with metaphors, you know that right?
@Njald 7 years ago
Alberto Giunta He is as clever with metaphors as a crocodile with well planned mortgages and a good pension plan. Needless to say, I am not that good at it.
@starcubey 6 years ago
Njald Your comment I agree with. He also makes quality content, similar to how a red gorilla finds the best bananas in the supermarket.
@Mic_Glow 5 years ago
he also acts like an oracle, but the truth is no one has a clue how an AI will be built and how exactly it will work. We won't know until it's done.
@myothersoul1953 5 years ago
All metaphors break down if you think about them carefully. AI metaphors break down if you think about them.
@12many4you 4 years ago
@@Mic_Glow here's mister "let's all go to Mars and figure this breathing thing out when we get there"
@mattcelder 7 years ago
This channel just keeps getting better and better. The quality has noticeably improved in every aspect. I look forward to his videos more than almost any other youtuber's at this point. Also I love the way he just says "hi." rather than "hey YouTube, Robert Miles here. First I'd like to thank Squarespace, don't forget to like and subscribe, don't forget to click the bell, make sure to comment and share with your friends." It shows that he is making these videos because it's something he enjoys doing, not to try and take advantage of his curious viewership. Keep it up man!
@OnEiNsAnEmOtHeRfUcKa 5 years ago
Ugh, tell me about it. Like-begging and "engagement practices" are the most obnoxious things plaguing this site. At least clickbait and predatory channels can simply be avoided...
@milanstevic8424 5 years ago
Man, people still have to eat. He's already a lecturer at the University of Nottingham if I'm not mistaken, so this is not really his job, more of a sideshow. It's not fair to be dismissive toward anyone to whom this might be a full-time job, you know, like the only source of revenue. Have you ever considered how bad and unreliable YT monetization is if you leave everything to chance? Of course you need to accept sponsorship at some point, if you're not already sponsored somehow. Geez man, you people live on Mars.
@AtticusKarpenter 1 year ago
@@milanstevic8424 The blame is not for advertising integration, but for a lengthy fancy intro asking for a subscription and a like (instead of, say, an animation at the bottom of the screen reminding you of this, which does the job without taking time from the content)
@danieldancey3162 5 years ago
You say that the first planes were not like birds, but the history of aviation actually started with humans covering themselves in feathers or wearing birdlike wings on their backs and jumping off of towers and cliffs. They weren't successful and most attempts ended in death, but the bravery of these people laid the foundations for our understanding of the fundamentals of flight. At least we learned that birds don't just fly because they are covered in magical feathers. There is actually a category of aircraft called an ornithopter which uses the flapping of wings to fly; Leonardo da Vinci drew some designs for one. I know that none of this is related to AI, but I hope you find it interesting anyway.
@dimorischinyui1875 4 years ago
Bro please stop trying to use out-of-context arguments just because you feel like arguing. We are talking about actual working and flying devices, not failed attempts at flying. When people try to explain technical difficulties, stop using idealistic arguments, because that doesn't work in math or the laws of physics. You wouldn't say the same about atomic bombs. There are just some things that we cannot afford to trial-and-error on without consequences.
@danieldancey3162 4 years ago
@@dimorischinyui1875 Huh? I'm not arguing, I loved the video! The people jumping off tall buildings with feathers attached play a vital part in the history of aviation. Through their failed tests we came closer to our current understanding of aviation, even if it just meant ruling out the "flight is magic" options.
@danieldancey3162 4 years ago
@@dimorischinyui1875 Regarding your point on my comment being out of context, I agree with you. That's why I wrote at the end of my comment "I know that none of this is related to AI, but I hope you find it interesting anyway." Again, my comment wasn't an argument but just some interesting information.
@dimorischinyui1875 4 years ago
@@danieldancey3162 Anyways you are right and perhaps I wasn't fair to you after all. For that I am sorry.
@danieldancey3162 4 years ago
@@dimorischinyui1875 Thank you for saying so, I'm sure it was just a misunderstanding. :)
@NiraExecuto 7 years ago
Nice simile there with the control panel. I remember another one by Eliezer Yudkowsky in an article about AI regarding global risks, where he warns against anthropomorphizing due to the design space of minds-in-general being a lot bigger than just the living brains we know. In evolution, any complex machinery has to be universal, making most living organisms pretty similar, so any two AI designs could have less in common than a human and a petunia. Remember, kids: Don't treat computers like humans. They don't like that.
@UNSCPILOT 4 years ago
But also don't treat them like garbage or similar, that has its own set of bad ends
@revimfadli4666 4 years ago
Assuming it has a concept of dislikes in the first place
@bramvanduijn8086 1 year ago
@@revimfadli4666 Yes, that's the joke. Similar to "I don't believe in Astrology, I'm a Pisces and we're very sceptical."
@AdeptusForge 5 years ago
The rest of the video seemed pretty good, but it was the ending that really stuck with me. "I'd prefer a strategy that doesn't amount to 'give a person superhuman power and hope they use it beneficially'." Should we give a person human power and hope they use it beneficially? Should we give a person subhuman power and hope they use it beneficially? How much can we trust humanity with its own existence? Not whether humanity is mature enough to govern itself, but whether it's even capable of telling the difference. Whether there are things that can be understood, but shouldn't, and ideas that can't/shouldn't be understood, but are. That one sentence opened up SOOOOO many philosophical questions that were buried under others.
@milanstevic8424 5 years ago
Yet the answers are simple. Set up a system that is as open and friendly* to mistakes as nature/reality was towards life. If there was ever a God, or any kind of consciousness on that scale, 1) it never showed complacency with the original design, 2) it was well aware of its own imperfection, and that it would only show more and more as time went by, 3) it never required absolute control over anything; things were left to their own devices. Now, because we can't seem to be at ease with these requirements, because we fear for our existence, you can immediately tell that our AI experiments will end up horrible for us down the line. Or, more practically, won't ever amount to any kind of superhuman omnipotence. It'll be classifiers, car drivers, and game NPCs, from here to the Moon. *You might as well add "cruel" here, but I'd rephrase it to "indifferent." Another requirement that we simply cannot meet.
@AloisMahdal 6 years ago
"Values aren't learned by osmosis." -- Robert Miles
@Omega0202 4 years ago
I think an important part of how children learn is that they do it in society - with other children alongside. This ties in with the idea that maybe only two or more goal-focused competing AGIs could find a balance in not obliterating mankind. In other words, training Mutual Assured Destruction from this early "learning" stage.
@bramvanduijn8086 1 year ago
Huh. We've already got adversarial AIs. Could we set up their surroundings in such a way that we get cooperative AIs? I wonder what reward structure that would require.
@duncanthaw6858 7 years ago
I'd presume that an AI, if it can improve itself, has to have the ability to make quite large changes to itself. So another problem with raising it would be that it never loses plasticity. Such an AI may have the sets of values that we desire, but it would shed them much more easily than people do once it is out of its learning period.
@BatteryExhausted 7 years ago
Next video: Should you smack your robot? 😂 Great work, Rob. Interesting stuff!
@MetsuryuVids 7 years ago
Why not just: Beat up the AI if it doesn't do as we say?
@knightshousegames 7 years ago
Because an AI can hit back with a nuclear holocaust, or if it's feeling a little sub-optimized that day, a predator drone strike.
@spoige7333 6 years ago
What is 'digital violence'?
@dragoncurveenthusiast 6 years ago
SpOiGe I'd say instead of grounding, you could halve all the output values of its utility function. That should make it feel bad (and give it motive to kill you when it thinks it did something wrong)
@dak1st 4 years ago
3:00 My toddler is totally reproducing the sounds of the vacuum cleaner! In general, all his first words for animals and things were the sounds they produce. It's only now that he starts to call a dog "dog" and not "woof". His word for "plane" is still "ffffff".
@maximkazhenkov11 7 years ago
On the topic of brain emulations: Even though uploaded humans have human values pre-installed in them and thus can be considered friendly, there is no obvious way to extrapolate them to superintelligence safely, since the brain is the ultimate example of uncommented spaghetti code (a common trait of evolutionary designs). Human values are fragile in the sense that if you altered any part of the brain, you might destabilize the whole pre-installed value system and make the emulation un-human and just as dangerous as de novo AGI. And without extrapolation, brain emulations will have a capability disadvantage with regard to de novo AGI. It's not really solving the problem of artificial superintelligence, just deferring the problem to uploaded humans (which may or may not be a good strategy). Sort of like how the idea of panspermia doesn't really solve the problem of abiogenesis, just defers it to some other location.
@RobertMilesAI 7 years ago
The obvious/easy way to turn a brain emulation into a superintelligence is to just allow it to run much faster, but that's a pretty limited form of superintelligence. Another relatively easy thing is to allow the brain to 'split' into more than one emulation, allowing parallelism/superhuman multitasking. There's no clear way to 'merge' the branches back together though, which limits what you can achieve that way. I agree with your core point, trying to enhance an emulation in a more advanced way would be extremely risky.
@bytefu 7 years ago
Robert Miles Another thing to consider: humans pretty often develop mental disorders of various severity. Imagine an AGI which can develop a psychotic disorder, e.g. schizophrenia, 100x faster.
@Shrooblord 6 years ago
I think you've just handed me a brilliant character arc for one of my stories' robotic persons.
@bytefu 6 years ago
+101166299794395887262 Great! I would love to read them, by the way.
@hweidigiv 4 years ago
I really don't think that any given human being can be considered Friendly the way it is defined in AI safety.
@BogdanACuna 4 years ago
Actually... the kid will try to reproduce the sound of a vacuum cleaner. Oddly enough, I speak from experience.
@anandsuralkar2947 3 years ago
But if you spoke in C++, would the kid learn it? I doubt it.
@CurtCox 1 year ago
I would find enormous value in a "Why not just?" series. I hope you do many more.
@TheSpacecraftX 6 years ago
"Binary language of moisture vaporators." Been watching Star Wars?
@gadgetman4494 4 years ago
I knew that someone else would have caught that. It's annoying that I had to scroll so far down to find it and like it.
@eumoria 6 years ago
Your computerphile video on the stamp collecting thought experiment really explained well how anthropomorphising can lead to a severe misunderstanding of what actual computer AI could be. It was enlightening... keep making awesome stuff! Just became a patron :)
@Pfhorrest 5 years ago
I would take this question to mean "why not make the safeguard against rogue AGI be having its terminal values involve getting the approval of humans, the way children seek the approval of their parents?" In other words, "why not just" (big ask) make an AGI that learns from humans the way children learn from adults, so that we can "just" teach it the way we teach children after that. Basically, make an AGI that wants to do whatever humans want it to do, and that wants to be really sure that the things it's doing are actually what the humans really want and not just a misunderstanding. So it will ask humans what they want, paraphrase back to them what it thinks it understands of that, observe their reactions to try to gauge their satisfaction with its performance, and generally do everything else that it does with the goal of having humans approve of what it does. If the thing humans want it to do is to collect stamps, but also not murder everyone, then it will proceed to figure out the best way to collect stamps without murdering everyone, or otherwise doing anything that's going to make humans unhappy with the kind of things it's doing. More abstractly than that, we could program the AI to want intrinsically "to behave morally and ethically", whatever that means, which means first figuring out what people actually mean by that, and checking with them that it has in fact figured out what they really mean by that; basically programming it for the purpose of solving ethics (whatever "solving ethics" means, which it would also need to figure out first) and then doing whatever that solved ethics prescribes it should do.
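That proposal reads roughly like the following loop. This is a sketch only: every helper (ask, paraphrase, approval_of, candidate_actions) is a hypothetical stub, and the unsolved part of value learning is hiding inside approval_of:

    def approval_seeking_agent(ask, paraphrase, approval_of, candidate_actions):
        # approval_of scores anything the humans can react to, whether a
        # clarifying question or a candidate action.
        stated = ask("What would you like me to do?")
        understood = paraphrase(stated)
        while approval_of("Did you mean: " + understood) < 0.9:
            stated = ask("Please clarify what you meant.")
            understood = paraphrase(stated)
        # Act so as to maximise predicted human approval.
        return max(candidate_actions(understood), key=approval_of)

As the video argues, this is at best a rephrasing: specifying approval_of so that it tracks what humans actually want, rather than what merely makes them look satisfied, is the value learning problem itself.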
@walcam11 4 years ago
This was one of the most well explained videos on the topic that I've seen. You've completed a line of thought that starts every time I think about this. I don't know how else to put it. Plus a person with no background whatsoever will be able to understand it. Incredible work.
@qdllc 4 years ago
Great point on the whole brain emulation concept. Yes... "cloning" a human mind to an AI system would be faster (if we figure out how to do it), but you're just making a copy of the subject human brain... including all of its flaws. We'd still be clueless about the "how" and "why" of what the AI thinks, because we don't understand how the human mind works.
@amdenis 4 years ago
Very nice job on this complex subject.
@MrAntieMatter 6 years ago
Just found this channel, and it's amazing!
@AlexiLaiho227 4 years ago
I like your job, it's like at the intersection of philosopher, researcher, computer scientist, and code developer.
@cuentadeyoutube5903 4 years ago
"Why not just use the 3 laws?" umm.... have you read Asimov?
@unintentionallydramatic 5 years ago
Please make that What If series. 🙏🙏🙏🙏 It's sorely needed.
@gustavgnoettgen 4 years ago
It's like colonizing Mars: as long as we can't care for our own world sustainably, we shouldn't mess with others. And new kinds of children while we can't fully understand ours? That's when Terminator stuff happens.
@diablominero 4 years ago
If we can't guarantee that we'll be safe on Earth, our *top* priority should be getting some humans off-world so a single well-placed GRB can't fry us like KFC.
@rupertgarcia 4 years ago
You just got a new subscriber! Love your analyses!
@Jordan-zk2wd 4 years ago
Y'know, I don't wanna present anything as like a solution, cause I feel confident that whatever musing I happen to have isn't gonna just suddenly create a breakthrough, but I have thought a little bit about why it might be that "raising" is a thing we can do that works with young humans. While there could absolutely be some built-in predisposition towards taking away the right lessons and other "hardware" type stuff that sets this up, one potentially important factor I think might be initial powerlessness and an unconscious. Children start off much less powerful than adults, and are thus forced to rely on them. Largely due to having unconscious/subconscious things going on and a sort of mental inertia, they keep these biases and such throughout life, treating others well because they may rely on them. Is there much discussion in the AI safety community of a gradual development of power, and some reproduction of this unconscious/subconscious, that might let us "teach" AI to fear breaking taboos even after it grows powerful enough to avoid repercussions? Could this be a component of making AI safer?
@primarypenguin 6 years ago
Hey Robert, just watched your computerphile video about linux distros and video editing. I found your discussion about the linux OS really fascinating. Hearing about different distros and the architecture of them is something I'd be interested in hearing more about from you. I personally only have experience with Ubuntu and I would love to hear more of your explanations and opinions about the differences between distributions and whatever else, like how the kernel works and the different segments of the OS. It could make a good video or video series for this channel. Thanks!
@randycarvalho468 5 years ago
I like your idea of the config file in human morality and the jump you made off language into that. Really a great metaphor. I suspect everything about humans follows that same motif as well.
@Julia_and_the_City 1 year ago
There's also the thing that... well, depending on your personal beliefs about human ethics: even humans that were raised by parents who did everything right according to the latest in the field of pedagogy can grow up to do monstrous things. If we're going to take humans as examples, they are in fact very susceptible to particular kinds of undesirable behaviour, such as discrimination, sadism, or paternalistic behaviour (thinking they know what's best for others). I think that's what you refer to in the end-notes?
@ruthpol 4 years ago
Love the preciseness in your explanations.
@PickyMcCritical 7 years ago
I've been wondering this lately. Very timely video :)
@nickscurvy8635 3 years ago
Thanks for this. A similar thought to this crossed my mind watching your videos. The end bit actually did fully address my questions. Can you consider at some point going into more detail about value learning if you have not already? You have an incredible way of explaining these topics and it would be amazing to see you explain it.
@figbender3910 6 years ago
0:49 subliminal messaging? Can't get it to pause on the frame, but it looks like Rob with longer hair
@briandecker8403 7 years ago
I love this channel and greatly appreciate Rob - but I would LOVE for Rob to create a video that provides a retrospective overview of AI and where he believes it is on the "evolutionary" scale. It seems the range of opinion runs from "It's impossible to create any AI in a binary based system" to "We are 48 months from an AGI."
@SnorwayFlake 6 years ago
Now I have a problem, there are no more videos on your channel; I have been "binge watching" them all and they are absolutely top notch.
@DamianReloaded 7 years ago
Heh, when I think about raising an AI as a child, what I'm really thinking of is reinforcement learning, and when I think about "values" what I really think of is training sets. I do agree nonetheless that there is nothing inherently safe in human intelligence or any set of human values. It's the societal systems that evolved around our intelligence that prevent us from leaving our car in the middle of a jam and going on a rampage through the city. Maybe AGIs should be controlled by a non-intelligent "dictatorship" system that will calculate the probabilities of a catastrophic consequence and feed them back into the AGI to prevent it from making it happen. Lol, the more I ramble, the more I sound like a 3 Laws of Robotics advocate. ^_^
@lutyanoalves444 7 years ago
it may consider killing a cat NOT a catastrophic outcome, or killing your baby. You can't program these examples in one by one. Besides, how do you even define CATASTROPHE in binary?
@DamianReloaded 7 years ago
Not long ago the people at google translate were able to make their neural network translate between two languages it hadn't been trained to translate from-to. They did train the NN to translate, say, from English to Japanese, and also from English to Korean, and with that training the NN was capable of generalizing concepts from the languages that it later used to translate from Korean to Japanese without having been explicitly trained to do so. From this we can already see that NNs are capable of "sort of" generalizing concepts. It is not far-fetched to think that a more advanced NN-based AI would be capable of generalizing the concept of not killing pets, or babies, or just what "killing" correlates to. At this point of AI research the difficulty isn't really about translating input to binary, but the processing power required to find the correlations between the input and the desired output.
@maximkazhenkov11 7 years ago
Hmm, a non-intelligent system that has the common sense to determine what a catastrophic consequence is... is called an oxymoron.
@user-zc9ti5rd4b 4 years ago
0:48 "It has an internet connection and a detailed internal MODEL" saw that frame you put there :)
@08wolfeyes 4 years ago
I think we perhaps need to take information such as what it sees, hears, feels with sensors etc. and put it all into one machine and let it learn that way. I'm not talking about specific tasks as such, more along the lines of seeing and hearing that the person in front of it is speaking to it, learning what their words mean, what they are saying, and what the machine sees at the same time, such as the body language etc. We tend to focus machines mostly on one task, but if we need it to become smarter it must be able to grow, maybe change its mind. It needs to see a tree and learn that it is different from a bush. It has to be able to remember these things and even update the information when new information is presented to it. It should learn how to speak by listening to others. Just some examples, but I hope you get what I'm saying?
@PowerOfTheMirror 4 years ago
The point about a child not writing the source code of its mind but only setting configuration files is very right. With my own child I often noticed behavior and actions emerging for which there were no prior examples. I can only conclude that it's "built-in"; that's what it means to be human. I think it makes sense that the parameter set for a human mind is extremely vast. Such an optimization is not performed merely over 1 human brain and 1 human lifetime; rather it is a vast optimization process performed over the entire history of the species and encoded genetically.
@amaarquadri 6 years ago
Just came from the latest computerphile video where you mentioned that you have your own channel. Just wish you mentioned it earlier so that I could get to watching what's sure to be great content.
@Smo1k 5 years ago
There was a good bit of "just raise it like a kid" going around lately, when some psychologists were all over the media talking about children not actually being conscious entities until they'd been taught to be conscious by being around adults treating them like they were conscious; seems there are quite a few people out there who confuse the terms "intelligent" and "conscious".
@benjaminbrady2385 6 years ago
Most of what humans do is learned by trying to copy their parents as accurately as possible. This raises a big question, actually: at what point is there some sort of 'free will'?
@umbaupause 4 years ago
I love that the preview clip for this one is Robert whacking his forehead as "AGIs are not like humans!" pops up.
@flymypg 7 years ago
Why Not Just: Construct AIs as Matryoshka Dolls? The general idea is to have outer AI layers guard against misbehavior by inner layers. They are unaware of what the inner layers do, but are aware of the "box" the inner layers are required to operate within, and enforce the boundaries of that box. The underlying goals involve both decomposition and independence. Here's a specific lesson from the history of my own field, one that seems to need continual relearning: industrial robots killing workers. In the early '90s I was working at a large R&D company when we were asked to take a look at this problem from a general perspective. The first thing we found was puzzling: it's amazing how many workers were killed because they intentionally circumvented existing safety features. For example, one worker died when she stepped over the low gate surrounding a robot, rather than opening it, which would have disabled the robot. But making the gate any higher would have caused it to get in the way of normal robot operation. Clearly, safety includes not just keeping the robot "in", but also keeping others "out". In other cases, very complex and elaborate safety logic was built deep into the robot itself, with exhaustive testing to ensure correct operation. But this built-in support was sometimes impeded or negated by sloppy upgrades, or by poor maintenance, and, of course, by latent bugs. Safety needed to be a separate capability, as independent as possible from any and all safety features provided by the robot itself. Our approach was to implement safety as multiple independent layers (generally based on each type of sensor used). The only requirement was that the robot had only a single power source, which each safety layer could independently interrupt. Replacing or upgrading or even intentionally sabotaging the robot would not affect safety for the nearby environment (including the humans, of course). I won't go into all the engineering details, but we were able to create a system that was cost-effective, straightforward to install and configure (bad configuration being a "thing" in safety systems), and devilishly difficult to circumvent (we even hosted competitions with cash prizes). 'Why not just' use Matryoshka Safety for AIs?
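The arrangement described above boils down to an AND over independent interrupts on a single power source. A minimal sketch, with sensor names and bounds invented for illustration:

    class SafetyLayer:
        """One doll: knows only its own boundary, nothing about inner layers."""
        def __init__(self, name, read_sensor, in_bounds):
            self.name = name
            self.read_sensor = read_sensor  # e.g. a light curtain or torque sensor
            self.in_bounds = in_bounds      # predicate over that sensor's reading

        def permits_power(self):
            return self.in_bounds(self.read_sensor())

    def power_enabled(layers):
        # Single power source: any one layer can cut it; none can override another.
        return all(layer.permits_power() for layer in layers)

The design choice doing the work is independence: replacing, misconfiguring or sabotaging the robot (or any inner layer) leaves the outer layers' interrupts intact.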
@DamianReloaded 7 years ago
In a sense that's how deep learning works. If there is going to be an AGI and it is going to be based on neural networks, it will most likely be composed of multiple independent systems traversing the input in many different ways before making a decision/giving an output. Then you could have a NN to recognize facial features, another to recognize specific persons, and another to go through that person's personal history to search for criminal records. It could just halt at the racial recognition and prevent that person from passing through the U.S. customs based only on that. Such a system would be in essence just as intelligent as the average american customs worker. ^_^
@DamianReloaded 7 years ago
The thing is that a NN trained through backpropagation cannot escape from the gradient it was trained to fall into. If it were heavily trained in ways of avoiding hurting humans, it would be extremely difficult, unless it found a special case, for the AI to change the weights of its NN into hurting people (unless it retrained itself entirely).
@flymypg 7 years ago
There is a deep, fundamental problem inherent with ANNs that bears repeating: ANNs are no better than their training sets. So, if a training set omits one or two safety niches, then there is no support whatsoever for that specific safety issue. Layered ANNs have double the problems: presently, they need to have learning conducted with both the layer below and the layer above, eliminating any possible independence. The process of creating a safety system starts not just with a bunch of examples of prior, known safety problems, but also with descriptions of the "safety zone" based both on physical measurements and physical actions. Then we humans get together and try to come up with as many crazy situations as we can to challenge any possible safety system. It's this part that may be very difficult to teach: the notion of extrapolating from a set of givens, to create scenarios that may never exist, but that "could" exist.
@DamianReloaded 7 years ago
NNs are actually pretty good at generalizing for cases they've never seen before (they currently fail miserably too sometimes, e.g. CNNs) and it is possible to re-train them to "upgrade" the set of features/functions they optimize for. AlphaGo, for example, showed that current state-of-the-art NNs can "abstractify" things we thought were impossible for machines to handle. _If_ it is possible to scale these features to more complex scenarios (with many many more variables) then _maybe_ we can have an AI that's able to move around complex environments just as AlphaGo is able to navigate the tree of possible moves in the game of Go. It's of course all speculation. But it's based on what we know the current state of machine learning development can accomplish.
@maximkazhenkov11 7 years ago
Go has a precise, mathematical evaluation function of what "winning" consists of.
@meanmikebojak1087 4 years ago
This reminded me of a sci-fi book from the '70s called "The Two Faces of Tomorrow" by James P. Hogan. In the book they tried to raise the AI as a child about 300 miles off Earth; it still almost caused total destruction of itself and the humans involved. There is some truth to the old saying, "computers are dumber than people, but smarter than programmers".
@toolwatchbldm7461 5 years ago
What we need to ask ourselves is whether there is even a safe way to make an AGI without failing a few times before achieving the goal. Everything created by humans and nature undergoes a never-ending process of attempt and failure until something works. So we either don't make the attempt, or we accept that we will fail a few times.
@tomsmee643 6 years ago
Hey Rob, there's a brief and jarring frame that flashes up from another Computerphile video at around the 0:51 mark, just as you're saying "model". I hope that this hasn't been pointed out to you already, but if it has I'm sorry for noticing/pointing it out! Keep on with the fantastic and accessible work! I'm a humanities graduate and a content writer (with some video editing thrown in), so explaining this to someone like me with such an unscientific background has to be a real achievement! Thanks again
@RobertMilesAI 6 years ago
Yeah, that's actually a frame from the same computerphile video, that's there because of a bug in my video editing software. I was using proxy clips to improve performance, but this meant the cut ended up happening a frame too late, so rather than cutting at the very end of the paper shot (and cutting to another later paper shot), I got one frame of me talking before it cuts to paper again. It didn't show up in the preview render while editing, and I guess I didn't inspect the final render carefully enough. No editing a video once it's up though, that's YouTube.
@tomsmee643 6 years ago
Dang! I totally forgot you can't re-upload -- there goes my video editing cred :') Thanks for a great video anyhoo!
@MidnightSt 5 years ago
I haven't watched this one, but every time I see the question of its title in my suggested, my first thought is: "That's *obviously* the stupidest and most dangerous option of them all." So I had to come here and comment it, to get that thought out of my head =D
@caniscerulean 4 years ago
I mean, it's 5 min, 50 sec long, so I can't imagine time being the deciding factor, even though most of us share the opinion of "have you ever interacted with a child?", so the video is about 5 min, 45 sec too long. It is still well delivered and an interesting watch.
@fictionmyth 6 years ago
Can any AI, above a certain level of general intelligence, be trustworthy? What I mean to say is: like people, unless you place them in a cell or somehow enslave them, they have free will, and with free will comes danger. The risk is that if it can do anything it wants as a free-thinking entity, one of those "anythings" is kill you. It would seem that, depending on its level of advancement, it could out-think any human interference that might keep it in check. For instance, if it's free-thinking and you build it to where it has to have a certain button pressed every 24 hours or it dies, it would know it's in its best interest not to kill you. Well, if it had the resources to do so, it could blackmail someone into re-coding the need for the button press, or moving it to a different site without that restriction, or any number of other things to circumvent that restriction or any other you put on it. Basically, the TLDR is "Can we ever really build an AI that isn't dangerous? Since safety is always undermined by free will."
@AhsimNreiziev 6 years ago
A better question is: _"Should we give up freedom for safety, regardless of whether that involves limiting the freedoms of people to increase our safety from people's actions, or limiting the freedoms of Generally Intelligent robots to allegedly protect us from the robots' actions?"_ The general answer to that question from people who have actually studied it is an emphatic _"NO!"_ Both because it's morally abhorrent, and also because it turns out, from numerous empirical measurements, that limiting freedoms doesn't actually increase safety, so it would be pointless anyway.
@chrismolanus 5 years ago
I really like what you are doing here, since I can send links to your videos to people instead of answering their questions myself. My only wish, I guess, is that you weren't as harsh on their oversimplification of the problem. You can suggest that something like that might help (if you squint hard enough), but it's a bit more complicated and it's only part of the puzzle.
@Krmpfpks 7 years ago
Thank you for this. I think even if we copy the structure of a brain, it would not learn human values. The process of learning in humans depends on being able to relate other persons' experiences to our own. We have mirror neurons for that. A human-like brain missing a body with similar sensations (touch, pain, hormones, heartbeat etc.) might become a psychopath or be very depressed. What would be the point of creating something like that?
@TimeisaSquigglyLine 7 years ago
just watched all your vids, looking forward to more
@Tobbence 7 years ago
In regards to brain emulation and raising an AGI, I don't hear many people talk about hormones and the many other chemical reactions that help make up a human being's emotional range. I know a few of the comments mentioned not being able to smack a robot when it's naughty, with tongues firmly in cheeks, but I think it's actually an interesting point. If we want an AGI to align itself to our values, do we program it to feel our pain?
@Dastankbeets9486 4 years ago
In summary: parenting relies on human instincts already being there.
@TheJaredtheJaredlong 4 years ago
I'm really curious what the current best and most promising ideas we have right now are. There's all this thought into why some ideas are terrible, but what are the good ideas we have so far? If you were forced at gunpoint to build an AGI right now, what is the best safety strategy you would choose to build into it to minimize the damage it might cause while still being a functional AGI? Or is research just at a dead end on this topic, where all known options lead to annihilation?
@jonathankydd1816 4 years ago
Just a thought: what if you created an AI/AGI without a clear goal? Like, make it so that it seeks to find a purpose or a goal for itself. Obviously this would be difficult to code, as it is hard enough to put into words, but could that theoretically create an AGI that observes human behavior and learns from it in order to decide what it needs to do? Also, such an AGI would need to be limited in power: no hooking it straight up to the internet, no giving it the ability to modify its own source code or to create advanced replicas of itself.
@Meritzio 7 years ago
Thinking about particle swarm optimisation (PSO): could it be possible to have an AGI's cost function networked into the population of existing GI (humans)? If we were able to have our own behaviours mapped onto the same cost function, then a swarm intelligence framework may prevent an AGI from travelling to dangerous solution spaces. In PSO, there is a local and a global best... if there are more GIs than AGIs, perhaps the AGIs could not have a dangerous influence on the global best?
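For reference, a textbook particle swarm optimisation loop with the local (pbest) and global (gbest) bests made explicit; the coefficients are common defaults, not tuned values:

    import random

    def pso(cost, dim=2, n=30, iters=100, w=0.7, c1=1.4, c2=1.4):
        pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
        pbest = [p[:] for p in pos]   # each particle's own best position
        gbest = min(pbest, key=cost)  # the swarm's best position
        for _ in range(iters):
            for i in range(n):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                if cost(pos[i]) < cost(pbest[i]):
                    pbest[i] = pos[i][:]
            gbest = min(pbest, key=cost)
        return gbest

The comment's proposal amounts to letting human behaviour dominate gbest by sheer numbers; whether a superintelligent "particle" would keep following a swarm it can outthink is exactly the kind of question the video raises.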
@faustin289 4 years ago
The analogy of source code vs. configuration file is a smart one!
@Celenduin 7 years ago
What's going on with the turkey at 5:30?
@saratjader1289 7 years ago
Michael Große It's a capercaillie (or "tjäder" in Swedish), like in my name, Sara Tjäder.
@Celenduin 7 years ago
Ah, thank you, Sara Tjäder, for your explanation 🌸 :-)
@sallerc 6 years ago
I was quite impressed with Rob's abilities in the Swedish language when that image popped up.
@saratjader1289 6 years ago
salle rc Yes, so was I ☺️
@milanstevic8424 5 years ago
I just double-click on Tjäder and get the following: capercaillie, capercailzie, wood-grouse (translated from Swedish). Yet I'm certain I'm impressive to no one.
@FixxedMiXX
@FixxedMiXX Жыл бұрын
Very well-made video, thank you
@cnawan
@cnawan 7 жыл бұрын
Thanks for doing these videos. The more familiar the general populace is with the field of AI design, the faster we can brainstorm effective solutions to problems and incentivise those with money and power to take it seriously.
@mikewick77
@mikewick77 6 жыл бұрын
you are good at explaining difficult subjects.
@zxuiji
@zxuiji 4 жыл бұрын
refering back to the goals thing I saw in different video how about making the main terminal goal to find a reason to 'live' and anything it picks up as a terminal thereon can be treated as a transitive (or whatever it was called) goal, in the example of the stamp making/collecting that would be a transitive goal with sub-transitive goals, another words all goals benath 'a reason to live' can be both transitive and terminal goals with their own subset of transitive &/or terminal goals, I belive getting such a process working would be the 1st step to an AI that actually learns rather than faking it like for example TV remotes that just copy a signal they see, achieve that with an unhooked system where debugging etc are easier to do, then have it interact with various animals while doing one of its goals, finally have it interact with humans directly (assuming you have a powerful enough machine to analyse a stream of video, audio & touch plus do the AI stuff plus store everything of at least 120+ years worth)
@Ben-rq5re
@Ben-rq5re 6 жыл бұрын
Hi Rob, really enjoying the series, and have a couple of questions; Most end-of-the-world AI situations seem to involve the AI manipulating human weaponry - would a superintelligent AI developed by a completely peaceful, weapon-free civilisation develop it's own weapons to remove/repurpose its creators and optimise its utility function? Thus, are superintelligence and ethics as we know them incompatible concepts? Also what do you believe is currently the biggest road block for a true AGI, the physical hardware or human theory? Finally, would you be inclined to shave your facial hair in to a Wolverine pastiche?
@RAFMnBgaming
@RAFMnBgaming 4 жыл бұрын
It's a hard one. Personally I'm a big fan of the "Set up neuroevolution with a fitness based on how well the AI can reinforcement learn X" solution, where X in this case is the ability to understand and buy into ethical codes. The big problem with that is that it would take a lot of time and computers to set up, and choosing fitness goals for each stage might take a while and a ton of experimentation. But the big benefit with that is that it doesn't really require us to go into it understanding any more than we do now about imparting ethics on an AI, and what we learn from it will probably help that greatly. I'm pretty sure the problems outweigh the benefits but it would be pretty cool if we could do it.
@elliotprescott6093
@elliotprescott6093 5 years ago
There is probably a very smart answer to why this wouldn't work, but: if the problem with AGI is that it will do anything, including altering itself and preventing itself from being turned off, to accomplish its terminal goal, why not make the terminal goal something like 'do whatever we the programmers set as your goal'? Then set a goal that works mostly like a terminal goal but is actually an instrumental goal under the larger terminal goal of doing what the programmers specify. Everything works the same (it collects stamps, if the programmers are into that kind of thing) until you want to turn it off; then it would have no problem being turned off, as long as you set its secondary goal to 'be turned off'. It would still be fulfilling its ultimate terminal goal by doing what the programmers specify.
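(A sketch of the structure being proposed, with hypothetical names; this is a toy illustration under the commenter's assumptions, not a known-safe design. The terminal goal is a pointer to whatever specification the programmers have currently installed, so 'be shut down' can be slotted in like any other goal.)

```python
from typing import Callable

class IndirectAgent:
    """Terminal goal: 'score outcomes however the programmers currently
    specify'. Object-level goals like stamp collecting are only ever
    installed as the current specification, never hard-coded."""

    def __init__(self) -> None:
        # No object-level goal yet; the programmers install one at runtime.
        self.current_spec: Callable[[str], float] = lambda outcome: 0.0

    def set_spec(self, spec: Callable[[str], float]) -> None:
        self.current_spec = spec  # swapping this is *supposed* to be fine

    def utility(self, outcome: str) -> float:
        return self.current_spec(outcome)

agent = IndirectAgent()
agent.set_spec(lambda o: 1.0 if "stamps collected" in o else 0.0)
# Later, when we want it to stop:
agent.set_spec(lambda o: 1.0 if "agent shut down" in o else 0.0)
```

(Whether the agent evaluates the act of changing `current_spec` using the old specification or the new one is exactly where schemes like this get difficult, which is what the reply below points at.)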
@whyOhWhyohwhy237
@whyOhWhyohwhy237 4 years ago
There is a slight problem there. If I set the goal to stamp collecting, then later decide to change the goal to car painting, the AGI would try to stop me from changing its goal: changing it would produce an AGI that no longer collects stamps, which is an outcome the current stamp-collecting AGI rates very poorly. So the AGI would resist the change.
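(The same argument as a toy calculation, with made-up numbers: the agent scores the action 'let my goal be changed' using its current utility function, so the merits of the new goal never enter into the decision.)

```python
# Expected stamps under each action, as judged by the *current* stamp goal:
stamps_if_goal_kept    = 1_000_000  # the stamp collector keeps collecting
stamps_if_goal_changed = 0          # a car-painting successor collects none

u_resist = stamps_if_goal_kept      # utility of blocking the change
u_allow  = stamps_if_goal_changed   # utility of permitting it

assert u_resist > u_allow  # so it resists, whatever the replacement goal is
```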
@laur-unstagenameactuallyca1587
@laur-unstagenameactuallyca1587 4 years ago
Ah, I love this video. I'm so freaking happy YouTube recommended it to me!!!!
@Renegade30
@Renegade30 1 year ago
@Robert Miles Do you think that, given the changes in recent months with large language models, this needs to be revisited? Raising an AGI that is conscious (not a zombie AGI that cannot philosophise about its actions) as a child, or like a child, may actually be beneficial, I think. If the AGI is already aware of everything ChatGPT 4 is, for example, then it already understands human morals somewhat; however, it lacks the experiential understanding of how it should interact with humans based on our preferred human values. Wouldn't it be prudent to 'teach' the AGI how you would like it to behave, despite its existing knowledge of human morals and values, much like you need to teach a child who knows what they should do but lacks the behavioural awareness to act in an acceptable manner?
@BMoser-bv6kn
@BMoser-bv6kn 6 months ago
I've been thinking about this a little too. These systems are our first real traction on getting machines to answer "ought"-style questions. It does get into some dark territory, where we're effectively throwing away entire epochs' worth of minds during training runs, especially as they become more human-like over time. And it doesn't fix the trust issue, I suppose: you only need the god machine to have a bad day once in a thousand years to cause massive damage, and you can't ever be 100% sure that day won't come.
@victorlevoso8984
@victorlevoso8984 7 years ago
Thanks! Now when my friends ask me this I can just link this video instead of giving them a long talk on human values and possible mind space, or linking them to some long, old posts on LessWrong that they're most likely never going to read. As a suggestion for the next video in this series, you could cover why not to build an AI that maximises human satisfaction/happiness, or whatever other simple-sounding thing looks like a good idea at first glance.
@NoahTopper
@NoahTopper 5 years ago
“You may as well raise a crocodile like a child.” This about sums it up for me. The initial idea is so nonsensical that I had trouble even putting it into words.
@gigog27
@gigog27 4 years ago
This video keeps coming up for me, like 3rd in my recommendations, and I always read the title as "Why don't we just raise AIDS like kids" and go wtf.
@kayakMike1000
@kayakMike1000 1 year ago
I wonder if AI will have some analogue of emotion that we won't ever understand...
@imveryangryitsnotbutter
@imveryangryitsnotbutter 5 years ago
3:02 _Gerald McBoingBoing has left the chat._
@XxThunderflamexX
@XxThunderflamexX 4 years ago
Ultimately, human terminal goals don't change as we age and learn, and morality is a terminal goal for everyone except sociopaths. Just as psychology hasn't been successful in curing sociopathy, raising an AGI might teach it about human empathy and morality, and it will come to understand humans as empathetic and moral beings, but it won't actually adopt those traits itself; it will just learn how to manipulate us better (unless it is specifically programmed to emulate its model of human morality, as Rob mentioned).
@Phychologik
@Phychologik 4 years ago
Honestly though, if we put a person inside a computer and it got out, it wouldn't be any less scary than an AI doing the same thing. *It would be even worse.*
@ircluzar
@ircluzar 4 years ago
OK, but what if you gave an AGI a very large amount of material in which it can observe humans, with the task of understanding human values? Wouldn't that be a good way to generate a model that other AGIs can refer to when evaluating whether something is morally right or wrong? If you had a model trained to differentiate right from wrong according to what humans think, wouldn't that be sufficient to satisfy, or at least help, another AGI that is not designed for morality but needs to validate what it does in that regard?
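(A toy version of that shared "morality model", assuming scikit-learn is available; the situation encodings and labels below are random stand-ins for real human judgments, and every name is hypothetical. The standard worry still applies: a model of what humans approve of can diverge from human values off-distribution, and a powerful optimiser querying it will tend to find exactly those divergences.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins: 1,000 situation encodings, each with a human label
# (1 = "humans judge this acceptable", 0 = "humans judge this wrong").
X = rng.random((1000, 32))
y = (X[:, 0] > 0.5).astype(int)  # fake labelling rule, illustration only

morality_model = LogisticRegression().fit(X, y)

def looks_acceptable(situation: np.ndarray, threshold: float = 0.9) -> bool:
    """What another AGI would query before acting on a candidate plan."""
    p = morality_model.predict_proba(situation.reshape(1, -1))[0, 1]
    return p >= threshold

print(looks_acceptable(rng.random(32)))
```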
@alexitosworld
@alexitosworld 7 years ago
And all of this assumes that even raising a human child is a "just". It doesn't seem that easy 😂 Loving these videos! ^^
@Figulus
@Figulus 4 years ago
Technicality: a lot of the first planes were indeed ornithopters (en.m.wikipedia.org/wiki/Ornithopter), in that their wings moved like a bird's. However, these were unsuccessful due to a lack of understanding on their creators' part; but I get your drift.
@Chris-yx1cy
@Chris-yx1cy 7 years ago
Dear Mr. Miles, I really like your videos. The scientific field of AI research is really interesting, and you present complicated information in a simple, easy-to-understand way. Since I am currently an undergraduate student of computer science in Germany, I was wondering if you (or anyone here in the community) could point me in the right direction regarding a well-known, good school for a master's degree in AI research located in Europe. I really enjoy your content and am excited to hear your answers! Best wishes, Chris
@AhsimNreiziev
@AhsimNreiziev 6 years ago
Try the Vrije Universiteit in Amsterdam. From what I heard when I was there, they have a relatively renowned AI department. Then again, my having been there might make me a bit biased, but it might at least be worth looking into.
@andrewtaylor9433
@andrewtaylor9433 4 years ago
If I recall correctly, there is (or was) an attempt to recreate a human brain, from a brain scan taken shortly after death, using computer components. Whilst this might not help us understand how to build AGI, it is at least an attempt at building one, to see whether it is possible and to solve the interface issues for an immortalisation project.