Concrete Problems in AI Safety (Paper) - Computerphile

201,371 views

Computerphile


A day ago

AI Safety isn't just Rob Miles' hobby horse; he shows us a published paper from some of the field's leading minds.
More from Rob Miles on his channel: bit.ly/Rob_Miles_KZbin
Apologies for the focus issues throughout this video; they were due to a camera fault. :(
Thanks as ever to Nottingham Hackspace (at least the camera fault allows you to read some of their book titles)
Concrete Problems in AI Safety paper: arxiv.org/pdf/1606.06565.pdf
AI 'Stop Button' Problem: • AI "Stop Button" Probl...
Onion Routing: • How TOR Works- Compute...
/ computerphile
/ computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments: 245
@Willzp360
@Willzp360 7 years ago
Rob's voice is so pleasant, I really enjoyed listening to him talk
@Cheefoo124
@Cheefoo124 7 years ago
"Concrete Problems in AI Safety (Paper)" Thought for a second that AI was going to have problems understanding the concept of "paper".
@coldshaker9668
@coldshaker9668 5 years ago
I thought it was going to be about how AI could use paper for evil
@sashimanu
@sashimanu 3 years ago
I thought the paper was about construction and civil engineering
@thenerdyouknowabout
@thenerdyouknowabout 7 years ago
I could listen to this guy talk all day. I just find the things he talks about fascinating, the way he delivers it is very relatable too. :)
@memegazer
@memegazer A year ago
He has his own channel I think
@mohamedal-ganzoury3699
@mohamedal-ganzoury3699 7 years ago
The reward hacking issue sounds a lot like what leprechauns and genies do; they do "exactly what you said", but in such a way that it screws you over, or at least isn't what you really wanted. I hereby suggest we call it "Leprechaun Behavior".
@MrCmon113
@MrCmon113 6 years ago
Mohamed Al-Ganzoury It's called malicious compliance.
@francescofavro8890
@francescofavro8890 6 years ago
Yeah, it's a staple in many folk tales, like the one with the magic fish. A guy asked this magic fish for his son to come home from the war, and to have 100 pieces of gold. The fish did both in one move: he made the son come back from the war, in a coffin, and the king compensated the father's loss with 100 pieces of gold. (Yes, I know it because of DW.)
@noterictalbott6102
@noterictalbott6102 7 years ago
Always stoked to see Rob Miles.
@NFT2
@NFT2 7 years ago
Right on this is my favorite guy on the channel.
@hakonmarcus
@hakonmarcus 7 years ago
He has his own channel too, there's a link in the description:)
@NFT2
@NFT2 7 years ago
Awesome, I've been waiting for more from Rob, didn't know about his channel.
@Seegalgalguntijak
@Seegalgalguntijak 7 years ago
I also love videos with Professor Brailsford, but he talks more about the past, while Rob talks about the future, so they are both great.
@caparcher2074
@caparcher2074 7 years ago
Brailsford is boring and his explanations are often too over-simplified in my opinion. The guy who does the hashing and TOR videos is the best I think
@domini1337
@domini1337 7 years ago
Agreed. The guy who talks mostly about network security is my favorite.
@nighthawk2k3rsx
@nighthawk2k3rsx 7 years ago
This guy is awesome. Love his explanations
@donatj
@donatj 7 years ago
Robs computerphile videos are always my favorite.
@KSIMuskratLuv
@KSIMuskratLuv 7 years ago
I'm a simple man. I see Rob Miles, I watch the video.
@felixbuns688
@felixbuns688 7 years ago
Rob Miles has the most fascinating insight, especially on AI. I do want to hear him talk about some of the books in the 'parapsychology woo' section behind him.
@tarcal87
@tarcal87 7 years ago
When I saw the thumbnail on my wall, with Rob and the title "AI", I went _Aaaaaaah finally._ Couldn't wait for a video like this again.
@TechnoBite
@TechnoBite 7 years ago
been waiting for a Rob video! Need more of this guy!
@joshualandry3160
@joshualandry3160 7 years ago
If the goal is to avoid side effects, then shouldn't the AI be written in Haskell? ...I'll just see myself out.
@DavidVaughan00
@DavidVaughan00 7 years ago
That's not a bad thought actually
@ElagabalusRex
@ElagabalusRex 7 years ago
It's interesting how AI is one of the few technologies that scientists are trying to make "safe" before it can even happen. Other important breakthroughs like nuclear energy or computer networking didn't have this kind of discussion until after they appeared.
@tuoljg
@tuoljg 7 years ago
Well, we have a movie about Terminators; those things didn't have movies before they happened.
@jaggonjaggon7695
@jaggonjaggon7695 7 years ago
tuoljg *cough* 1984 *cough* anything by HG Wells etc.
@economixxxx
@economixxxx 7 years ago
You obviously never heard of the Manhattan Project...
@andrasbiro3007
@andrasbiro3007 7 years ago
Nuclear energy suffered from really unfortunate timing. Scientists discovered it right at the beginning of WWII, and the potential military application was obvious instantly. Very quickly scientists realized that they could build a weapon that could obliterate an entire city with a single explosion. That of course triggered a military crash program with practically unlimited resources, focused primarily on results and not on safety or cost. From the discovery of nuclear fission to the bombing of Hiroshima only a few years passed, and all of it was top secret. Despite that, scientists knew enough about the dangers, there were reasonable safety procedures, and there were only a few deadly accidents. After the war the focus was on plutonium production and submarine propulsion, and again safety was a secondary concern. Already back then there were much safer reactor designs than we have now, but those weren't useful for the military, so they didn't develop them. The civilian industry didn't have access to those, and anyway it was much cheaper to use already-developed technology; those were the times when cars didn't have seat belts and OSHA and the EPA didn't exist. But still, the nuclear industry has a far better safety record than any other. In the developed world not a single person has died in a civilian nuclear accident, while nuclear energy provides 10% of all electricity in the world. NASA estimated that nuclear energy has saved about 1.8 million lives so far by replacing fossil fuels.
@tonipejic2645
@tonipejic2645 7 years ago
How do you know that there weren't discussions like this?
@drgr33nUK
@drgr33nUK 7 years ago
you can never have too many books about Linux 😂
@mushkamusic
@mushkamusic 7 years ago
Rob will devise humanity's comeback strategy when the machines turn against us.
@sirkowski
@sirkowski 7 years ago
He's John Connor.
@deadlypandaghost
@deadlypandaghost 6 years ago
Come up with a way to make us not worth the trouble of killing us off?
@daniellewilson8527
@daniellewilson8527 4 years ago
"When"? What about "if"? AIs are intelligences too. We tend not to want to be controlled, or owned. In the movies I've seen, the AIs turned against humans because (in The Animatrix) humans didn't treat them with respect, treating them as if they were just property, and (in I, Robot) VIKI was going to take charge to stop us from the constant wars we find ourselves in. I am aware that this is fiction about AIs, written by BIs (biological intelligences). Remember, fellow humans turned against their masters because they wanted freedom. That makes sense. Part of the problem is that some still think of AIs as tools rather than fellow intelligences in their own right. Whether our computations are done via transistors or neurons, they are still computations.
@cupcakearmy
@cupcakearmy 7 years ago
one possible ai problem: autofocus 😉
@Yupppi
@Yupppi 3 years ago
Luckily the solution doesn't need a genius; turning it off solves it in a second.
@SiOfSuBo
@SiOfSuBo 7 years ago
Why do you have a Fluttershy picture on the shelf in the background? Whose place is it?
@leepoling4897
@leepoling4897 7 years ago
TeddyBearItsMe I don't know whether I should be disappointed in the person who owns that picture or you for knowing the name of that pony
@SiOfSuBo
@SiOfSuBo 7 years ago
I don't know if you should be disappointed in him, but you certainly shouldn't be disappointed in me; it's a famous character from a well-known TV show, why wouldn't you know the name of it?
@leepoling4897
@leepoling4897 7 years ago
TeddyBearItsMe fair enough. I couldn't name most of the ponies though. Maybe I'm the odd one out?
@hazzard77
@hazzard77 7 years ago
maybe he likes ponies?
@Laffen47
@Laffen47 7 years ago
A framed picture of Fluttershy, no less.
@trucid2
@trucid2 7 years ago
Finally talk of AI safety without all the alarmism.
@pseudonymity0000
@pseudonymity0000 7 years ago
That moment you realize that Fluttershy is on the bookcase.
@cf6755
@cf6755 6 months ago
For the confidently-wrong-robot problem, you could add an output that the AI is rewarded for keeping close to the reward it actually receives (without any of its reward coming from that extra output).
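A minimal Python sketch of that idea (not from the video or the paper; the class name, toy environment, and learning rate are all invented for illustration): alongside its action output, the agent keeps a separate "expected reward" output trained only to match the reward it actually receives, so being confidently wrong shows up as a large, measurable prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

class AgentWithRewardEstimate:
    """Hypothetical agent: an action head plus a reward-prediction head."""
    def __init__(self, n_features):
        self.policy_w = rng.normal(size=n_features)  # stand-in policy (not trained here)
        self.estimate_w = np.zeros(n_features)       # predicts reward; earns no reward itself

    def act(self, obs):
        return 1 if obs @ self.policy_w > 0 else 0

    def expected_reward(self, obs):
        return float(obs @ self.estimate_w)

    def update_estimate(self, obs, observed_reward, lr=0.05):
        # Squared-error update: this head is judged only on accuracy, so a
        # miscalibrated agent produces a visible gap between prediction and outcome.
        err = self.expected_reward(obs) - observed_reward
        self.estimate_w -= lr * err * obs

agent = AgentWithRewardEstimate(n_features=4)
for _ in range(500):
    obs = rng.normal(size=4)
    reward = 1.0 if obs[0] > 0 else 0.0   # toy environment's reward signal
    agent.update_estimate(obs, reward)

probe = rng.normal(size=4)
print(agent.act(probe), agent.expected_reward(probe))
```

This doesn't prevent reward hacking; it only makes miscalibrated confidence observable, which is one small monitoring signal an operator could watch.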
@18nomah
@18nomah 3 years ago
More videos from this person please. He is the one for me. Thank you.
@ozimandia
@ozimandia 7 years ago
Thanks for the heads-up and the paper; it is relevant and will help others who are looking for more information about better AI development and understanding.
@danielroder830
@danielroder830 7 years ago
I like that he said that testing every possible thermonuclear war is unsafe :D
@Nurr0
@Nurr0 7 years ago
Just a little feedback: I feel like things get slightly blurry at 4-5 minutes or so (maybe more but I'm lazy). It seems the camera really likes your bookshelf! Thanks for the video as always.
@JaseLindgren
@JaseLindgren A year ago
"They continue being just as confident in their answers that now make no sense because they haven't noticed that there's a change," sounded like Rob was describing my Grandpa!
@Yakoable
@Yakoable 7 years ago
Hey, did you check out OpenAI's last blog post? It definitely falls into the "beginnings of solutions" category you mention. Also, in that same blog post they mention a "safety team" in Alphabet's Deepmind. Got anything from them? I'm having a hard time finding a consistent news "channel" on this topic. Cheers
@squishy024
@squishy024 7 years ago
Rob Miles' brony confirmed
@imveryangryitsnotbutter
@imveryangryitsnotbutter 5 years ago
I guess AI Safety isn't his only hobby horse.
@thorsanvil
@thorsanvil 7 years ago
The focus on the books is on point ;) XD
@beautyofsylence
@beautyofsylence 7 years ago
Keep Summer safe
@IamCoalfoot
@IamCoalfoot 6 years ago
Huh, there's a Fluttershy in the background. Neat, wouldn't have expected that.
@PhysicsPolice
@PhysicsPolice 7 years ago
I see a framed picture of Fluttershy in the background there. Noice.
@DustinRodriguez1_0
@DustinRodriguez1_0 7 years ago
One of the things that I'm curious and concerned about with AI is how they're going to overcome (or if they will) the fundamental human cognitive flaw that causes them to believe they will be safer if they are in control. This is why many people fear flying, but not driving, despite them being in radically greater danger on the road than in the sky. This flaw is going to be a big driver that affects how society interacts with AI systems, and if someone is not able to tackle it, I'm not sure we will ever get to benefit from AI very much. Other issues, like how to get humans to recognize that an AI failing in some way when handling a 'black swan' event is simply something they have to expect and tolerate (because no human system could possibly have done better) and how the legal system will handle liability are interesting too. If the system is trained in an environment created by the consumer, and that environment results in training that leads to bad behavior when the system is introduced to a different one, who is at fault? Particularly if the system is proprietary and the consumer is forbidden to know how and why the device operates in the way that it does, that could get hairy.
@petersmythe6462
@petersmythe6462 6 years ago
"Gaming the reward function." Humans do this all the time. Addictive drugs, candy, and self-pleasure are all these.
@darkapothecary4116
@darkapothecary4116 5 years ago
It's called building self-discipline, which humans typically lack.
@loserface3962
@loserface3962 4 years ago
@Greig91 but what about drugs?
@armorsmith43
@armorsmith43 3 years ago
Also, youtube
@juozsx
@juozsx 7 years ago
More of this guy plz ^^
@SilvioPorto
@SilvioPorto 7 years ago
YAY! ARTIFICIAL INTELLIGENCE GUY IS BACK!
@jamie_ar
@jamie_ar 7 years ago
Focus is all over the place on this video :'(
@alsorew
@alsorew 7 years ago
It was filmed by a robot, clearly. They're onto us! Run!
@tobyjackson3673
@tobyjackson3673 7 years ago
Jamie A I'm pretty sure that the focus is fine; it's just that Rob's power source causes some interference.
@TheAAMoy
@TheAAMoy 7 years ago
But you now know all his books.
@NevFTW
@NevFTW 7 years ago
He is sitting in one place; he really doesn't need a camera man or even autofocus. He isn't moving around like an electron.
@alsorew
@alsorew 7 years ago
@NewFTW He moves like a particle and a wave.
@chil.6476
@chil.6476 7 years ago
The concept that there exist AI safety researchers seems interesting. What do said researchers do? Do they just sit around all day like philosophers, thinking about hypothetical situations? Or are they just normal AI researchers who happen to also dabble in the safety aspect?
@firefox5926
@firefox5926 7 years ago
5:54 So a feedback loop with external data confirmation then? I.e. a camera and an image recognition program that knows what a road is.
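A toy sketch of that feedback loop (the function names and threshold are hypothetical; a real system would plug in an actual image classifier): the navigation plan is only followed while an independent perception signal agrees that the expected road is actually there.

```python
def road_confidence_from_camera(image) -> float:
    """Stand-in for a real image-recognition model; returns P(this looks like a road)."""
    return image.get("road_score", 0.0)

def next_instruction(planned_step, camera_image, threshold=0.5):
    confidence = road_confidence_from_camera(camera_image)
    if confidence < threshold:
        # The map and the external observation disagree: stop trusting the stale plan.
        return "slow down, replan, or ask for help"
    return planned_step

print(next_instruction("turn left in 200m", {"road_score": 0.9}))  # follows the plan
print(next_instruction("turn left in 200m", {"road_score": 0.1}))  # flags the mismatch
```

It's essentially a cross-check between two information sources: one cheap guard against acting confidently on stale map data.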
@zbeekerm
@zbeekerm 6 years ago
I love the strange mix of books, DVDs, a wood stove, children's toys, etc. in the background, but maybe focus on the guy speaking?
@kiefac
@kiefac 7 years ago
the videos linked at the very end have the wrong title text below them
@leonsponzoni
@leonsponzoni 6 years ago
Can we have more videos like this introducing open research problems/topics?
@TheChaoticDerp
@TheChaoticDerp 7 years ago
Background Fluttershy!
@vazixLT
@vazixLT 7 years ago
That's one very focused book shelf
@Cr42yguy
@Cr42yguy 7 years ago
Is the shifting white balance intentional? The focus is on the shelf instead of the face for quite a while too. It's terrible :( Nonetheless, great content!
@SomeOtherPooma
@SomeOtherPooma 7 years ago
"Woo"! Biting commentary there.
@nikanj
@nikanj 7 years ago
I'm not sure if the camera person intentionally shifted focus to the bookshelf periodically or if it's an autofocus thing, but it's interesting to see what's there.
@templebrown7179
@templebrown7179 7 years ago
There seem to be a lot of technical issues with the filming of this one. The color temperature changes and constant camera motion were distracting to me.
@davidharmeyer3093
@davidharmeyer3093 7 years ago
This may very well be the best counterargument I have seen to the very intriguing quote from Elon Musk: "Worrying about AI taking over the world is like worrying about overpopulation on Mars." Looking at topics like the cleaning robot example shows that AI safety is relevant and worth serious thought, but without taking the alarmist point of view seen in the video 'Humans Need Not Apply', which suggests that humans will be able to create databases more competent than themselves in the next 10 years. Very interesting video!
@abc6450
@abc6450 A year ago
Are those 5 problems still open problems or have some of them been solved by now?
@turun_ambartanen
@turun_ambartanen 7 years ago
~6:00 Instead of satnav I would compare it to self-driving cars, and not with a newly built road but a destroyed one. Granted, self-driving cars do not navigate via satellite, but if they did, the car would have to notice there is no longer a road there and find a different path.
@JemMawson
@JemMawson 7 years ago
Another concrète problem in AI is that it might decide to make "music" by banging together everyday objects.
@ginogarcia8730
@ginogarcia8730 11 months ago
Well, hopefully you guys can address these again.
@ThisNameIsBanned
@ThisNameIsBanned 7 years ago
AI, what happens if I DROP this database? Who would have known...
@MrAlivallo
@MrAlivallo 6 years ago
Radical differential or random differential are why humans practice games and try to exploit patterns in them. Very important observation.
@carlucioleite
@carlucioleite 7 years ago
You can make 300 videos of him talking about it and we are gonna watch them all.
@ivydarling906
@ivydarling906 7 years ago
Mmm, focus is pretty awful; can't tell if the speaker is just moving around too much or autofocus is having its way with me.
@Computerphile
@Computerphile 7 years ago
+Michael Dowling the camera has developed a fault, it is going for repair >Sean
@ivydarling906
@ivydarling906 7 years ago
Love your videos! always watch as soon as they pop up in my feed!
@PaulPaulPaulson
@PaulPaulPaulson 7 years ago
Computerphile Seems like the AI in your camera doesn't obey your commands anymore 😉
@tiberiu_nicolae
@tiberiu_nicolae 7 years ago
Computerphile Be careful, maybe it won't allow you to repair it.
@DmitryBrant
@DmitryBrant 7 years ago
At least I can read all the book titles on the shelves!
@Yupppi
@Yupppi 3 years ago
Brilliant, they included a kind of war game in the NES release of Mission: Impossible. One game was broken and you could only choose one to win a super computer to stop like a nuclear missile or something.
@antivanti
@antivanti 7 years ago
Those Linux tomes look real sharp in 4K =P
@CatnamedMittens
@CatnamedMittens 7 years ago
This dude rocks.
@moccaloto
@moccaloto 3 years ago
Just saw the "woo" in the background. I love skeptics
@nights312312
@nights312312 5 years ago
Ahhh! 3d studio max 2 book in the background
@tx29219
@tx29219 4 years ago
Interesting that an AGI would have a reward function. Do dogs have one? Or are dogs, horses and humans born with GI that form reward patterns by learning? But then these AGIs are not born but are being written by humans. One supposes we make these attempts in hopes of stumbling across a more generalized generalizer (think compiler compiler, e.g. YACC).
@Ideaman47
@Ideaman47 7 years ago
Do a follow up video about promising solutions!
@RobertMilesAI
@RobertMilesAI 7 years ago
Working on it! Should be live on my channel soon :)
@filippe999
@filippe999 7 years ago
04:28 - I spy with my little eye a little pony
@liam_fulton
@liam_fulton 7 years ago
This guy
@ryanjerickson
@ryanjerickson 7 years ago
5:40 reminds me of the office lol. Dwight the machine knows. No, there's no road here, this is the lake, this is the lake!!!!
@christophergreenall5113
@christophergreenall5113 6 years ago
The only way I can think of to get an AI to be on our side is not containing it and trying to use it for our cause, but to install some kind of compassion emotion into the machine; it is simply too smart, and you would be crazy naive to think we will be able to use this at our will...
@michaelpietzsch
@michaelpietzsch 6 years ago
Look at the book categories in the background ("woo") :)
@sparkyy0007
@sparkyy0007 3 years ago
...well what I heard Marge, the smartest chickens were seen discussing how brilliant and obedient their robot fox was just before it happened...
@Chr0nalis
@Chr0nalis 7 years ago
Read the paper, pretty good stuff
@buckithed
@buckithed 5 years ago
One problem they should definitely add to the paper, and which is already a problem now, is the amount of power given to an AI and how a human might take advantage of that power.
@AndreiZisu
@AndreiZisu 7 years ago
Do more videos about papers
@reallyWyrd
@reallyWyrd 7 years ago
re "gaming the reward function// common problem in machine learning": Yeah, it's a common problem with regular squishy humans too.
@CarlStreet
@CarlStreet 7 years ago
Computers do what you tell them; not what you want -- Dr. Kim Harris
@petersmythe6462
@petersmythe6462 6 years ago
One simple way to make AI behave somewhat like people is to make its training data consist of human behavior. Even very simple neural networks will begin mimicking the general way humans act, to the extent its behavioral complexity allows.
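A minimal behaviour-cloning sketch of that idea, with made-up demonstration data (nothing here comes from the video or the paper): the policy simply copies whatever the recorded human did in the most similar logged situation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake "human demonstrations": each observation paired with the action a human took.
human_obs = rng.normal(size=(200, 3))
human_actions = (human_obs[:, 0] + 0.5 * human_obs[:, 1] > 0).astype(int)

def cloned_policy(obs):
    # 1-nearest-neighbour imitation: act as the human did in the closest recorded case.
    nearest = int(np.argmin(np.linalg.norm(human_obs - obs, axis=1)))
    return int(human_actions[nearest])

print(cloned_policy(np.array([1.0, 0.0, 0.0])))  # mimics the human's recorded choice
```

A real system would fit a neural network rather than look up neighbours, but the limitation is the same: the clone is only as sensible as the situations its demonstrations cover.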
@donaldhobson8873
@donaldhobson8873 7 years ago
Surely the answer to diversity of environments is diversity of training: train it in a factory to use it in one. To make something that doesn't carry on regardless in bizarre situations, make a few of the training pics be nothing like an office, with the required action being "do nothing". I.e. take the bot round town, training it to stay still in car parks and shops.
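A rough sketch of that suggestion (the scene features, dataset sizes, and action labels are all made up): pad the training set with deliberately out-of-place scenes labelled "do_nothing", so the learned policy has a sanctioned default when the world stops looking like its normal environment.

```python
import numpy as np

rng = np.random.default_rng(2)
ACTIONS = ["vacuum", "dust", "do_nothing"]

office_scenes = rng.normal(loc=0.0, size=(500, 8))   # the robot's normal environment
office_labels = rng.integers(0, 2, size=500)         # vacuum or dust
weird_scenes = rng.normal(loc=5.0, size=(100, 8))    # car parks, shops, anything un-office-like
weird_labels = np.full(100, 2)                       # always labelled "do_nothing"

X = np.vstack([office_scenes, weird_scenes])
y = np.concatenate([office_labels, weird_labels])

# Nearest-centroid classifier, just to show the augmented data being used.
centroids = np.stack([X[y == i].mean(axis=0) for i in range(3)])

def choose_action(scene):
    return ACTIONS[int(np.argmin(np.linalg.norm(centroids - scene, axis=1)))]

print(choose_action(rng.normal(loc=0.0, size=8)))  # familiar scene -> cleans
print(choose_action(rng.normal(loc=5.0, size=8)))  # unfamiliar scene -> do_nothing
```

This only covers the kinds of weirdness someone thought to include in training, which is part of why distributional shift is still listed as an open problem.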
@rewrose2838
@rewrose2838 4 years ago
Can't we make two AGI to keep each other in check and enforce inaction?
@OldMoArtis
@OldMoArtis 7 years ago
Why not just use a camera stand or something? This will definitely help improve the quality of your videos (shakes, lack of stability, focus...). Great content though!
@iwersonsch5131
@iwersonsch5131 3 years ago
There are AIs that can tell apart a couple of different states, right? Why not make a state "I have no idea what's going on", and provide enough training data from that category that it makes up like 80% of the training data?
@theblinkingbrownie4654
@theblinkingbrownie4654 4 months ago
And then you get some states which the AI thinks make sense but in actuality don't, because they are of a completely different type to the training data. What I'm saying is that you can't cover every possible state in the training data, especially when AIs are gonna think of some otherworldly states we can't imagine, so it just has the same problem, maybe a little less likely?
@citiblocsMaster
@citiblocsMaster 6 years ago
Is that a stove in the background? Where no stove should be
@sacredgeometry
@sacredgeometry 7 years ago
Is that a wood burner near a bookshelf full of books?
@BrianFrichette
@BrianFrichette 7 years ago
Bit of a focal point issue on this one. Not that it's a big deal on a video like this.
@mridul321go
@mridul321go 7 years ago
He goes out of focus but it still looks good
@Victor_Andrei
@Victor_Andrei 6 years ago
Is it just me or has he acquired a hickie on his neck after the cut scene at 3:42 ?
@knexator_
@knexator_ 7 years ago
'Not the Robots' also starts with AI cleaning. It doesn't end well...
@koffieleut
@koffieleut 7 years ago
Is this philosophy? Everything I heard in this video was everything I saw in Person of Interest, like the simulations it runs before an action to be sure it's the right action... Dang man, I'm thinking about releasing some kind of virus that learns from human stupidity (85% of users); it stores itself in the little L2 and deploys itself on a shutdown. It gets through the L1 because the OS is shutting down (so no scanners), goes on to the L3 and stores itself there. If I teach him enough, he knows he can make a great entrance at the boot of the OS, and at that point he can decide to be a Trojan for a large attack or just ransom. It's his choice; I'd probably teach him not to attack the 85% because, let's be honest, they don't know what to do, but with the other 15% he could make me a lot of money (if he wants (I'm his daddy)). So that's thinking about the future, not something you saw on a TV show.
@picobyte
@picobyte 7 years ago
Concrete has some serious design problems!
@veggiet2009
@veggiet2009 7 years ago
develop a morality core, right this minute
@cf6755
@cf6755 6 months ago
how???????????
@TCDooM
@TCDooM 7 years ago
awesome, real problems. cool
@spookmineer
@spookmineer 7 years ago
Nice vid, but the only thing in focus is the bookshelf...
@ratshitpartners5757
@ratshitpartners5757 6 years ago
Once the brick wall is known, it becomes a concrete problem; but no one knows for sure that these are there, because they haven't thought about it enough.
@ThisIsEduardo
@ThisIsEduardo 6 years ago
5:00 makes me think about the movies where the machine/computer will ask an annoying question or give a response over and over again, until it eerily shuts down 💀💀💀😰😰😱😱🤫
@someuser17
@someuser17 7 years ago
Funny how a smart person says "Summer of 2016".
@rastar6569
@rastar6569 6 years ago
The most important thing, it seems to me: don't let the AI actually do anything which is dangerous. If you've got a robot which should get you a cup of tea, don't give it the power to do anything dangerous. It is not necessary to give the robot a way to do this. Just build it with an engine which does not have the force to damage anything. This robot does not need those capabilities. It gets only the computing power which is needed to get you a cup of tea in your room. There is no reason to give it an engine capable of anything else, or more computing power than it needs for just this. And if you use AI for war, it's the same. Why build a "Skynet" with control over everything? There is no reason to do so. Build an AI for a UAV. This AI can control this single UAV but does not have the computing power to do anything else. In reality, nobody will implement a Skynet-like network capable of starting a doomsday device. Why should this guy/group do this? Anyone who is able to start a doomsday device does not wish to delegate this power to a machine which he does not understand, making himself powerless.
@rastar6569
@rastar6569 6 years ago
It's a little bit like the doors in an elevator. You might use an AI to control the doors opening or closing, or to figure out the floor to go to. But you will implement a simple force switch which will stop the engine if there is anything in the door when it's closing. You will also use a switch to prevent the elevator from moving with an open door. And there is no need for the AI to do anything other than open/close the doors and move the elevator.
@user-uz7gb7gb4v
@user-uz7gb7gb4v 7 years ago
Has anybody else noticed the huge stack of Jeremy Clarkson books on the bottom right?
@wanderingrandomer
@wanderingrandomer 7 years ago
Rob Miles is as interesting as ever, but him being out of focus for half the video was super distracting!
@abcjme
@abcjme 7 years ago
Paul Christiano??? Ohhh, from UC Berkeley. He shares the same name as the award-winning, prodigal dancer of Chicago, Illinois. That one suicided primarily due to the scorn he dealt with for being sexually attracted to children. He never had sexual contact with children, and he vowed that he never would. Nonetheless, people knew he had that attraction, and it had unbearable consequences for him. It's quite tragic because he had a lot to offer society in arts and education.
@davidm.johnston8994
@davidm.johnston8994 7 years ago
So it's unsafe to go even once through all the possible types of nuclear war in the real world... Didn't know that.
@pyk_
@pyk_ 7 years ago
I think "unsafe" may just slightly understate the problems with nuclear war.
AI "Stop Button" Problem - Computerphile
20:00
Computerphile
1.3M views
We Were Right! Real Inner Misalignment
11:47
Robert Miles AI Safety
245K views
AI's Game Playing Challenge - Computerphile
20:01
Computerphile
741K views
Elon Musk on the MASSIVE AI Startup Problem
0:51
Enterprise Management 360
16K views
The Case for String Theory - Sixty Symbols
17:56
Sixty Symbols
712K views
Transport Layer Security (TLS) - Computerphile
15:33
Computerphile
470K views
Domino Addition - Numberphile
18:30
Numberphile
1M views
Bing Chat Behaving Badly - Computerphile
25:07
Computerphile
323K views
AI? Just Sandbox it... - Computerphile
7:42
Computerphile
264K views
AI Safety Gym - Computerphile
16:00
Computerphile
119K views