Are AI Risks like Nuclear Risks?

  96,922 views

Robert Miles AI Safety

A day ago

Concerns about AI cover a really wide range of possible problems. Can we make progress on several of these problems at once?
With thanks to my Patreon supporters:
- Ichiro Dohi
- Stefan Skiles
- Chad Jones
- Joshua Richardson
- Fabian Consiglio
- Jonatan R
- Øystein Flygt
- Björn Mosten
- Michael Greve
- robertvanduursen
- The Guru Of Vision
- Fabrizio Pisani
- Alexander Hartvig Nielsen
- Peggy Youell
- Konstantin Shabashov
- The Dodd
- DGJono
- Matthias Meger
- Scott Stevens
- Emilio Alvarez
/ robertskmiles

Comments: 398
@diablominero · 5 years ago
What made the Demon Core so dangerous was that physicists thought they were too cool to use safety precautions. How do we prevent that in AI research?
@SenorZorros · 3 years ago
Generally the problem is averted by isolated test benches, big red buttons, power cutoffs, and, if it goes really wrong, enclosed boxes onto which we can pour several metres of concrete. Problem is, many researchers are also too cool for those.
@AugustusBohn0 · 3 years ago
The SolarWinds hack is still fresh in my mind as I read this. If we can't keep the people who make a tool as important and far-reaching as SolarWinds from taking shortcuts, like setting a very important password to [name of company]123, then I really don't know what we can do other than maybe train/indoctrinate people to have a deep-rooted belief that mistakes and shortcuts will lead to disaster.
@spaceanarchist1107 · 2 years ago
@@SenorZorros Chernobyl - researchers turned off safety systems in order to perform an experiment
@MarkusAldawn · 2 years ago
The general method has been to fuck up and get people killed, then go "safety would have prevented this," and then implement safety. Not sure we have the luxury of that this time around.
@idiomi8556 · 1 year ago
@Augustus Bohn I fail to see the issue with your solution? Fixes that issue and a bunch of others
@NathanTAK · 7 years ago
The solution to the self-driving car trolley problem: 1. Choose one of the options at random. 2. Hit them. 3. Turn around, hit the other one too. No moral dilemma present in making a choice!
@ccgarciab · 5 years ago
Naþan Ø MTD
@sweeper7609 · 4 years ago
A: The only way this situation can happen is: 1. A bug - we can't do ethics with a buggy car. 2. The car has been programmed to be malicious - we can't do ethics with a malicious car. 3. The driver caused this situation - if the car can take control, it should kill the lone human. 4. The crowd caused this situation - kill the crowd; I don't want idiots crossing anywhere because "lol, I don't care, the car will save me". B: The only way this situation can happen is: 1, 2 - same. 3 - same, but hit the wall. 4 - same, but only one human caused this situation. C: The only way this situation can happen is: 1, 2, 3 - same. 4 - same, with more humans.
@iruns1246 · 4 years ago
Solution: Make every self-driving AI take a test in simulated settings. The test should be rigorous (maybe millions of scenarios), designed by a diverse ethics committee, and open to democratic debate. After that, as long as the AI passed the test and the copy is operating on sufficient hardware, there shouldn't be ANY liability. Maybe after accidents the test should be reviewed, but there shouldn't be criminal charges against anyone. Treat it like doctors following procedures.
@eadbert1935 · 3 years ago
The issue with the self-driving car is: we worry so much about these questions of morality and liability that we forget self-driving cars would automatically reduce the moral question by not being AS fallible as humans. Say we have 90% fewer accidents (I made this number up, I don't have sources for this), and we worry about what to do with the last 10% instead of being happy with a 90% reduction. FFS, letting ANYONE decide until we have a better solution is a superior moral choice to waiting for the better solution.
@benjaminbrady2385 · 7 years ago
"We will have a problem with morons deliberately jumping in front of them for fun" Thanks for the idea!
@NathanTAK · 6 years ago
I think our self-driving car algorithms will have to be programmed to run them down.
@DavidChipman · 6 years ago
Car: "You asked for it, idiot!"
@sandwich2473 · 5 years ago
This problem already exists, to be honest.
@e1123581321345589144 · 5 years ago
Check this out. www.nytimes.com/2018/12/31/us/waymo-self-driving-cars-arizona-attacks.html
@adrianalexandrov7730 · 4 years ago
Actually, you can already jump in front of some Volvo SUVs and they will automatically brake. Good idea for some trolling. You just need to identify the right one correctly, and for it to be functional. Kind of a leap of faith right now.
@MetsuryuVids · 7 years ago
Is that "I Don't Want To Set The World On Fire" at the end? Amazing ahahahah. Also amazing video.
@Xartab · 7 years ago
On a ukulele, if I'm not mistaken
@NathanTAK · 7 years ago
+Batrax A ukulele? Perhaps an electric battle-axe ukulele?
@Xartab · 7 years ago
Of course, how silly of me to miss that!
@ommurg5059 · 4 years ago
Sure enough!
@grahamrice1806 · 7 years ago
Forget "what if my robot ignores the stop button?", what about "what if my robot ignores my safe word!?" 😅
@PickyMcCritical · 7 years ago
Sounds kinktastic.
@NZAnimeManga · 5 years ago
@@PickyMcCritical please assume the position
@treyforest2466 · 5 years ago
The number of likes on this comment must remain at exactly 69.
@TheUntamedNetwork · 4 years ago
Well... if its terminal goal was your pleasure, you're in for a hell of a ride!
@xcvsdxvsx · 4 years ago
ew
@IsYitzach · 6 years ago
I've been wondering what people mean by "ignite the atmosphere."
@osakanone · 5 years ago
big whoosh burny burny aarrghhh dying smusshfffppprrwwwfffhhhggwgglffpfpffttttBANGBANGffppfftthhssshhhhppfsssttttttttt... Only like, a few centuries in length.
@RobKMusic · 4 years ago
Accidentally causing a runaway fusion reaction of the nitrogen molecules in the air all around us, turning the Earth into a little star for a few days, or however long it would take to completely burn off the atmosphere of the planet, essentially extinguishing 99% of life.
@kungfreddie · 4 years ago
@@RobKMusic We had hundreds of fusion reactions in the atmosphere in the last century... and it didn't happen!
@bastion8804 · 4 years ago
@@kungfreddie First of all, no we didn't. Not even close. Second, 100 fusion bombs are tame compared to what they're describing.
@kungfreddie · 4 years ago
@@bastion8804 Yes we have! The number of atmospheric tests (not counting underground detonations) of thermonuclear devices over 1 MT is 67(!). And that's just over 1 MT. The total number of atmospheric tests is 604, and it's probably a minority of them that are not thermonuclear. So I'm sorry, you're wrong about that!
@matteman87 · 7 years ago
Love this channel, keep up the good work!
@vanderkarl3927 · 4 years ago
"The worst case scenarios for AI are *worse* than igniting the atmosphere, and our understanding of AI is probably less complete than their understanding of nuclear physics was." This sentence is one of the most bone-chilling, brain-melting, soul-rending, nightmarishly terrifying statements that the human race has ever produced, and we've produced some really, really nasty ones.
@MorgurEdits · 1 year ago
So I wonder what the arguments are for the conclusion that it is worse.
@MrAwawe · 1 year ago
@@MorgurEdits A superintelligence could potentially launch rockets into space in order to harvest all the material in the galaxy, say for the purpose of making stamps. It could therefore be a threat to potential other civilizations in the universe, and not just to life on Earth.
@alexanderpetrov1171 · 1 year ago
@@MorgurEdits Imagine an AI that intends to not just destroy humanity, but make it suffer as much as theoretically possible... And then do the same with all other life in the universe.
@AtticusKarpenter · 1 year ago
@@alexanderpetrov1171 Also, such an AI could genetically modify people to make them feel suffering much more strongly than nature intended, and at the same time not let them go crazy or otherwise escape from the suffering. So there is no Christian Hell, but people could create it through misuse of advanced AI, lol. The reverse is also possible - it is not for nothing that intellect is the strongest thing in the universe. (Google Translate made my fancy phrases look even weirder, but oh well)
@mithrae4525 · 1 year ago
@Freedom of Speech Enjoyer There are several Black Mirror episodes based on the premise of people uploading their minds into computers, notably one in which it's used by old or ill people as a sort of heaven. In the scene of its rows of servers maintaining those minds and their paradise world, I couldn't help wondering what would happen if there was some kind of fault with the system. If that were possible, introducing AI into the scenario would raise all kinds of interesting possibilities simply from its failure to understand what would constitute a paradise.
@williambarnes5023 · 4 years ago
AI risks are NOT like nuclear risks. For example, the AI has a chance of _winning._
@martinsmouter9321 · 4 years ago
If you consider it a player
@sufficientmagister9061 · 1 year ago
@@martinsmouter9321 It will be "The Modder" once it reaches superintelligence.
@economixxxx · 7 years ago
I swear I was just thinking about how much I'd like to see more of this channel, then boom! New vid. Awesome job mate!
@lukaslagerholm8480 · 7 years ago
The inclusion of images of articles, websites, research papers and the like is really good and I love it. Keep it up, and don't be afraid of letting them hang around for a little longer so that more people notice them and actually read them; they're quite often very interesting and on point. Keep up the good work!
@mikolajpiotrowski6043 · 4 years ago
4:57 Marie Curie died from radiation poisoning, BUT it had nothing (or not much) to do with her research: she was a volunteer medic in a mobile X-ray diagnostics unit during WWI (there wasn't any kind of protection for staff), so all of the personnel received radiation equal to the sum of the doses from every scan.
@Xazamas · 1 year ago
Also, modern X-ray machines give you a much smaller dose, because both sending and receiving X-rays has gotten significantly better since WWI. Reminds me of the "radiologist tells you it's perfectly safe to have your X-ray taken, then hides in the adjacent room or behind a lead cover" joke/meme. The reason for this is that while reassuring you by subjecting themselves to a few stray X-rays would cause them no measurable harm, doing this with every patient would eventually add up to a significant, harmful dose.
@fss1704 · 1 year ago
Dude, they didn't know what they were doing; handling radioactives with bare hands is no good.
@janhoo9908 · 7 years ago
So where did you get your narration superpowers from, then? Love your unagitated and reflective tone.
@martinlevosofernandez3107 · 7 years ago
He also has a secondary power that lets him make good analogies
@NathanTAK · 6 years ago
+Martín Fernandez That seems to be one of the most useful superpowers on the planet. I _really_ wish I had that.
@godlyvex5543 · 1 year ago
I think the economic risks are only big risks because of the idea that everyone NEEDS a job to make money to live. If something like UBI were implemented, maybe it wouldn't be a catch-all solution, but it wouldn't be nearly as bad as if everyone were unemployed under the current system.
@MAlanThomasII · 4 years ago
Actually, they weren't all sure that they wouldn't ignite the atmosphere. One of them (it might have been Fermi?) even put the odds the night before at no more than 10% . . . which is 10% more than you really wanted it to be. I don't have it in front of me, but you can find a well-annotated discussion of this in Ellsberg's _The Doomsday Machine: Confessions of a Nuclear War Planner_ (which, to be fair, is clearly arguing a point, but the references are good).
@theJUSTICEof666 · 7 years ago
5:13 Not superpowers. I repeat, not superpowers. Yes, I'm talking to you, mad scientists.
@osakanone · 5 years ago
This is such bullshit, gosh
@martinsmouter9321 · 4 years ago
But if I try it on, like, a billion minions it might work.🥺
@nolanwestrich2602 · 3 years ago
But can I get superpowers from high voltages, like Nikola Tesla?
@TheJamie109 · 5 years ago
I just recently came upon your videos on Computerphile. I watched all of them in an afternoon and had to know more. So here I am, more leisurely but persistently going through your channel. You do such a great job of fusing big, complex ideas with a bit of humour and real-world applications, without any broad "dumbing down" of the information you provide. I have always enjoyed programming and had sought a different path right out of high school. Your videos have reignited my passion and I hope to steer towards this passion as my life progresses. Thank you, and keep up the great work.
@krithiksai9821 · 1 year ago
How far are you with your goals, Jamie?
@stevent1567 · 6 years ago
That is amazing. I'm very happy that there are people like you preventing AIs from raking in all kinds of stuff in order to become giant blobs sucking on the Earth, so I can play more Factorio.
@magellanicraincloud · 6 years ago
I agree with you, Rob, about Universal Basic Income. CGP Grey did a great (terrifying) video called "Humans Need Not Apply" where he raised the question of what you do when people are not only unemployed but unemployable, simply because they are human. Unless we have some means of providing for the vast, overwhelming majority of people who don't own the robot factories, what are they supposed to do? How are they supposed to be able to afford food and shelter? These are social questions which we need to be discussing right now, because the time when the solutions will be needed is right around the corner.
@alexpotts6520 · 4 years ago
The way I think about this is in terms of Maslow's hierarchy of needs. A society where the AI owners gobble up all the wealth and 99% of us are destitute is obviously terrible: there is an overwhelming lumpenproletariat which may be falling short of even the first level of the hierarchy (food, shelter, survival). A UBI world would be interesting. We'd all have plenty to live on, especially since goods are much cheaper in a post-work world because there are no labour costs in production; indeed, if you subscribe to the labour theory of value (not sure I do), then pretty much all goods are worthless at this point. So we're doing well on the first couple of rungs, indeed we have all the material wealth we could possibly want, and the third level, love, is really beyond the remit of the state even in principle. (Well, maybe AIs could make humans fall in love with each other. Is it ethical to mess with people's brains in this way to make them happy? That's kind of off-topic.) But where the UBI proponents fall down is that they get stuck halfway up the pyramid. Careers, or failing that some sort of mastery at an amateur level (and remember, these AIs will outcompete us at everything, not just our jobs), are largely necessary for the higher rungs of self-esteem and self-actualization. The only way I can think of getting us round this is for AIs to wirehead us; in short, we become a race of vegetables. Is that what we want? In summary, UBI is certainly an improvement on "do nothing", but it's hardly a satisfactory solution. There must be something better, or at least UBI can only be part of the solution, even if it is an important part.
@hynjus001 · 5 years ago
The driverless car problem brings to mind The Matrix. To make the metaphor: good human drivers are like Agents, but in the future, driverless cars will be like Neo. They'll have such fast reaction times and such advanced deterministic prediction that they'll be able to avoid catastrophic situations before humans even recognize them to be possible. Car: What are you trying to say? That I'll have to crash into some humans? Morpheus: No, driverless car, I'm trying to say that when you're fully developed, you won't have to.
@seraphina985 · 5 years ago
This is very much my issue with such false moral dilemmas: they could only exist if the AI has already made at least one, but in reality more likely a series of, fatal errors in order to even get into a situation that it can't get out of safely. It's a chain of critical events, and the focus should be on ensuring that said chain is broken before all possible safe resolutions have been foreclosed by prior errors.
@DoubleThinkTwice · 4 years ago
The real thing here is overlooked all the time, though. If you cannot stop safely, you have been driving too fast (no matter what the allowed *maximum* speed on that street is). This is true for human drivers as much as it is for AI. If you are in the situation that you are going too fast and have to decide between running over a group of nuns or a mum with her baby, then the solution is not to go too fast in the first place. As far as humans are concerned, this is already in the legal code here in Austria. No matter what the street signs say, if you run over somebody and the police determine during the court trial that you were overestimating your ability to brake given the overview of the street and the speed you were going at, then you are fully or partially liable (up to the courts). So if you are going down a narrow street with cars parked on either side at 50 km/h and you run over a child that crosses the street without looking, then you are liable for having gone too fast. And yes, you are correct: on top of *all of that*, a machine will react faster than a human too.
@m4d_al3x · 4 years ago
Car: What are you trying to say? That I will be able to dodge accidents? Morpheus: No, when you are fully developed you won't have to.
@adrianalexandrov7730 · 4 years ago
@@DoubleThinkTwice Totally agree: humans driving badly is an educational problem, not a slow-reaction-time problem. If you've run over someone, then you drove too fast and/or too close to something obstructing your view. We've managed to structure that knowledge and to teach it to fellow humans: Britain's Roadcraft, and Scandinavia going for zero road deaths.
@fieldrequired283 · 3 years ago
A good driverless car will get into fewer of these lose-lose dilemmas than a human driver will, but if we ever want a car to go faster than 5 mph, you'll have to decide what sort of choice it makes in a situation like this. The level of caution necessary to never get into any sort of accident is a level of caution that will also never get you where you want on time. Nobody would use a perfectly safe self-driving car.
@zappawench6048 · 3 years ago
Talking about igniting the entire atmosphere and wiping all life off the face of the Earth forever, with "I Don't Want to Set the World on Fire" as the outro music. Savage.
@lukalot_ · 1 year ago
Your ending song choices are sometimes such satisfying wordplay that I just leave feeling happy.
@trucid2 · 7 years ago
Awesome channel. It will shoot into the stratosphere if you keep making regular videos.
@mafuaqua · 6 years ago
Yet Another Great Video - thanks!
@philipbraatz1948 · 6 years ago
The music at the end was a genius addition
@anonanon3066 · 2 years ago
Regarding igniting the atmosphere: what about modern-day atomic bombs? They are said to be much, much more powerful than the first ones.
@showdownz · 4 years ago
Love your videos. Just want to bring up another concern surrounding UBI (universal basic income), which could easily be a result of AI effectively taking over the job market. This is one that I feel is under-discussed, and it falls under the classification of "absolute power corrupts absolutely". Once people are out of work, they will be forced into seeking other means to sustain themselves (in this case, the UBI). The UBI could easily be corrupted, i.e. requirements could be placed on an individual's beliefs or behavior in order to qualify. This could start out subtle, but could eventually lead toward very oppressive control. Some of these controls are already being implemented in places like China. Meaning AI could lead to a society with not only the wealth concentrated in a select few, but the power and freedom as well.
@taragnor · 4 years ago
Yeah, honestly, this is the real risk of AI. It's not Skynet, it's the mass unemployment coming from automation of the majority of low-education jobs. As a society you need some tasks for people of lower intelligence to perform, and if you replace all those jobs, there will be nowhere for those people to go.
@m4d_al3x · 4 years ago
Invest in weed and alcohol production, NOW!
@fieldrequired283 · 3 years ago
As opposed to what we have right now, where... people just starve to death in the streets? This isn't an AI problem, this is a Bad Government problem, and one we have with or without the presence of AI.
@caffeinum · 1 year ago
@@fieldrequired283 Yes, but governments only exist and work because "lower jobs" make up >50%, and those workers were NEEDED by the industrial revolution to play along; that's why the "rich" have incentives to share profits. When the "rich" handle all of their tasks using AI, frankly there's no need to ask permission from lower-qualification people. Edit: And this IS REALLY BAD
@fieldrequired283 · 1 year ago
@@caffeinum (3 years later) This is, once again, not an AI problem. This is a corporate greed problem. "What if AI makes it so megacorporations are even better at being evil" is still a smaller-scale problem than a misaligned AGI. A properly aligned AGI in the hands of a sociopath would make for the greatest tyrant the world has ever known. An _improperly_ aligned AGI in the hands of even the most pure-hearted saint will spell the eradication of approximately everything in the future light cone of its inception.
@Beanpapac15 · 7 years ago
I just found out about this channel from a Computerphile video description, and I just wanted to say keep up the good work. If you can produce the same interesting content here that you help make there, this channel will be fantastic.
@crowlsyong · 1 year ago
I'm just going down the rabbit hole of all your videos here, your other channel, and Computerphile. Trying to understand what is happening is hard, but I think this is helping clear some things up.
@crowlsyong · 1 year ago
Thank you
@ToriKo_ · 6 years ago
What I thought about the whole achievement thing: I don't think people need 'actual achievement'. For example, just because we have made computers/AIs that can beat every human ever at games like chess and Go doesn't mean that people don't get a real sense of achievement from playing those games (especially at high-level play).
@ValentineC137 · 1 year ago
"Like global thermonuclear war, that's an issue" Technically correct is the best kind of correct
@DrSid42 · 1 year ago
I like the idea of the Terminator crushing the last human skull and thinking: they were worried about AI being racist? IMHO LOL.
@Trophonix · 4 years ago
In an ideal circumstance, I guess I would want the self-driving car to deduce which person would be more likely to survive the impact, and then do everything it can to lessen the damage and avoid hitting them while swerving in that direction.
@thomasbyfield5366 · 6 years ago
I like the soothing and relaxing music after your apocalyptic AI examples
@balazstorok9265 · 7 years ago
BBQ in the garden and a video from Robert. What a perfect day.
@LordMarcus · 6 years ago
I see it not unlike aviation safety: I think, no matter how good we get before the first-ever general AI is turned on, the unknown unknowns or unknowable unknowns will only crop up after we've turned the machine on. A good deal of aviation safety only happened/happens because we found out about a problem due to an accident. Even in the case of narrow AI - say, self-driving cars - it's not going to be maximally safe to start (though there's good reason to believe it'll be a lot safer than human-driven cars to start). People are going to be injured or killed as a result of artificially intelligent systems (excluding those we design specifically to do that).
@dhruvkansara · 7 years ago
This is very interesting! Never thought about the problems that occur when AI functions correctly...
@piad2102 · 3 years ago
1:34 Actually it's more complicated. IF the people on the road are there "illegally" and the person on the sidewalk is there legally, then you can argue hitting five people in the road is better than one on the sidewalk, because you should be able to count on being safe on a sidewalk, while playing in the road is asking for trouble. If not, then order and rules fly out the window and there's nothing to hold on to. It is not at all like the trolley experiment, where all participants are "illegal", or, for them, at the wrong spot.
@KANJICODER · 1 year ago
People jaywalking annoy the fuck out of me. Though doesn't that imply the penalty for jaywalking is death? If they are "jay running" that doesn't annoy me too much. At least they are acknowledging they shouldn't be there.
@sandwich2473 · 4 years ago
The ending tune is a very nice touch.
@sandwich2473 · 4 years ago
9:26 for those who want to listen.
@MuhsinFatih · 6 years ago
2:20 love the sidenote! :D
@josephburchanowski4636 · 6 years ago
I really liked 9:35 as well, although I don't think that was a side note. But yeah, that 2:20 side note was great.
@dgmstuart · 1 year ago
Always a buzz when I recognise the musical reference at the end of these. This one is "I Don't Want to Set the World on Fire".
@harlandirgewood7676 · 1 year ago
I used to work with self-driving cars, and we had weirdos who would walk in front of our cars. Folks in cars would try to get hit so they could sue us.
@KANJICODER · 1 year ago
Don't self-driving cars have a shit-tonne of cameras? How do they think they are getting away with that?
@harlandirgewood7676 · 1 year ago
@@KANJICODER They do. Dispatch would often show us tapes of people attacking cars or trying to "get hit". Never works for them, as far as I've seen.
@KANJICODER · 1 year ago
@@harlandirgewood7676 I would watch a 15-minute compilation of "people trying to get hit by self-driving cars".
@dannygjk · 5 years ago
People will jump in front of self-driving vehicles to try to discredit them, not just because they are morons.
@circuit10 · 4 years ago
Dan Kelly Well, that is being a moron
@martinsmouter9321 · 4 years ago
@@circuit10 To say it with our host: that depends on their terminal goals, and is a combination of an ought-statement and an is-statement.
@martinsmouter9321 · 4 years ago
@@circuit10 Or better formulated: kzbin.info/www/bejne/nna4gGmmn9x5hdE Edit: a little differently reasoned, but mostly the same.
@poisenbery · 1 year ago
Pierre and Marie Curie did groundbreaking research on radiation and died of radiation poisoning because they did not fully understand what they were dealing with. The cost of learning safety in nuclear physics has always been paid in lives. I wonder if AI will be the same.
@newtonpritchett9887 · 1 year ago
3:35 The pregnancy problem was true in my case (or me and my wife's case) - Instagram was showing me ads for baby products before I'd told my family.
@peterrusznak6165 · 1 year ago
I've developed the habit of liking this channel's videos the moment the player starts.
@iruns1246 · 4 years ago
Solution to the self-driving problem: Make every self-driving AI take a test in simulated settings. The test should be rigorous (maybe millions of scenarios), designed by a diverse ethics committee, and open to democratic debate. After that, as long as the AI passed the test and the copy is operating on sufficient hardware, there shouldn't be ANY liability. Maybe after accidents the test should be reviewed, but there shouldn't be criminal charges against anyone. Treat it like doctors following procedures: even when the procedure failed, as long as it was followed, the liability is not on the actors.
@Skatche · 5 years ago
I've got no ambition for worldly acclaim I just want to be the one you love And with your admission that you feel the same I'll have reached the goal I'm dreaming of...
@MetsuryuVids · 7 years ago
I almost missed your new video on Computerphile.
@Liravin · 1 year ago
If there wasn't a timestamp on this video, I might not have been able to tell that it's this ancient
@BORN753 · 1 year ago
These videos were being recommended to me for like two years, but I never watched them because I thought the topic wasn't relevant and wouldn't be soon; I thought it was a niche geek thing, and that all the answers had been given long ago when sci-fi was at its greatest. Well, my opinion changed very quickly. I didn't even notice it.
@luigivercotti6410 · 1 year ago
If anything, the problems have gotten worse now that we are starting to brush against AGI territory.
@pierQRzt180 · 1 year ago
It is not necessarily true that it is difficult to feel achievement where machines dominate (but allow humans to exist). In chess, computers can obliterate everyone, but winning a tournament among humans is still quite an achievement. In running, cars can obliterate everyone, but completing the task within X time (let alone winning) is an achievement. In esports, at least in many, there are hacks that let a player win easily, but with hacks removed, winning a tournament is quite an achievement. This is to say, one can create the conditions for a feeling of achievement. There is an episode of "Kino's Journey" that touches on exactly this point.
@reqq47 · 6 years ago
As a nuclear physicist I like the analogy, too.
@flymypg · 7 years ago
The way I see it, this entire argument is upside-down. The risk of harmful AI isn't going to be handled by simply not making powerful AI. The general question needing to be addressed is one that's been asked and answered many times during the history of technological advancement and deployment: How do we deploy a new technology SAFELY? Implicit in this question is a caveat: We must not deploy a new technology until we have sufficient confidence that it is safe. The greater the risk of damage or harm, the greater the safety confidence level must be.

That raises the next question, again asked and answered (often in hindsight) many times through history: How do we know if an application of new technology is safe before deploying it? I think you see where this is leading. It all comes down to testing. Lots and lots of testing. Rigorous testing. From what I'm seeing so far, too many AI researchers presently suck at this kind of testing. Testing should be baked into the development process itself. I'm not just talking about the tiny training and test/validation sets used to train neural nets. Even the largest of those are minuscule compared to their real-world environments (when you take rare outliers into account).

Self-driving cars provide a key example: Most developers of this technology rely on trained drivers and engineers to acquire their training data, and do it in relatively restricted environments (close to the development lab). That is flawed because it can't yield enough representative samples: The drivers doing the driving, providing the samples, aren't representative of all real-world drivers in the rest of the real world. That is, the AI isn't trained by watching while a bad driver makes mistakes. It only gets to see other cars behaving badly, not knowing what's going on inside those other cars.

Contrast this with the approach taken by comma.ai. Drivers are self-selecting, and the comma.ai acquisition system simply records what they do. In post-processing, data from all drivers is combined to train a model of what an "ideal" driver should do in all observed situations. The new instance of the trained model is then run against every individual driver's data set to identify situations in which human drivers failed to make the ideal choices. This is then used to create "warning scenarios", in which the driving AI is securely in control, but where it suspects other drivers may not be. These contextual "warning scenarios" are sadly missing from most other self-driving car projects. And it all has to do with where and how the data is obtained and used, and is less about the structure of the AI itself.

I've worked developing products for several safety-critical industries, including commercial nuclear power, military nuclear propulsion, aircraft instrumentation and avionics, satellite hardware and software, and the list goes on. The key factor isn't "what's being developed", it's "how do we test it". At least as much effort goes into testing a safety-critical system as into the entire rest of the development effort (including research, design, implementation, production, sales, marketing, field support, customer support, and so on). When you know your system is going to literally be tested to death, you want every step of all your other processes to have a primary goal of ensuring those tests will succeed the first time.

Thorough testing is terribly difficult, immensely time-consuming and fantastically expensive. Way too many developers simply avoid this, and use their customers (and the general public) as test guinea pigs. This is pretty much what many AI researchers are doing. They are largely clueless about testing for robust, real-world safety. They seem to always be surprised when a system finds a new way to fail. They need to stop what they are doing and spend a year working in a safety-critical industry. Gain some hands-on perspectives. Learn to ask the right questions about development, testing and deployment. Be personally involved in the results.

I could go on and on about the specific techniques used when developing systems that MUST NOT FAIL. Since that's statistically impossible (despite our best efforts), we must also ensure our systems FAIL GRACEFULLY and RECOVER RAPIDLY. This comment is getting long enough, but I'll relate an example: I joined a project to design an extremely inexpensive satellite that had to operate with extremely high reliability. The launch costs were three orders of magnitude greater than the entire satellite development budget! The launch was to be provided for free, but only if we could prove our satellite would work. Otherwise, they'd simply give the slot to another payload with greater odds of success.

We couldn't afford rad-hard electronics. So we did some in-depth investigation and found some COTS parts that were made on the same production line as some rad-hard parts, and were even designed by the same company. And the parts we needed were available in "automotive grade", which is a very tough level to meet (it's beyond "industrial grade", which in turn is beyond "commercial grade"). Our orbit would occasionally take the satellite through the lower Van Allen belt, so we had to ensure we'd survive not only the cumulative exposure (which rapidly ages electronics) but also the instantaneous effects (which create short-circuits in the silicon and also bit-flips).

We "de-lidded" some of our ICs and took them on development boards to the Brookhaven National Laboratory to be bombarded with high-energy heavy ions from the Tandem Van de Graaff accelerator. The results were far worse than we expected: When in the radiation field, at the 95% confidence level we could expect to get just 100 ms of operation between radiation-induced resets. I had to throw my entire (beautiful, elegant) software architecture out the window. Instead I set my development environment to cycle power every 100 ms, and then I evolved the software and its architecture until it could finish one complete pass through its most critical functions within that time. If more time was available before the next reset, only then would less-critical (but still massively important) functions be performed. Fortunately, this was the typical case outside of the Van Allen belt.

The most difficult part of the process was choosing what was critical and what wasn't. That in turn demanded a radical rethinking of what the satellite was going to do, and how it would get it done. The end result was a satellite design and implementation that was extremely reliable and highly functional, yet still small and cheap.

The moral of the story? There was no way we could test simply by tossing something into orbit and seeing how it did. Similarly, AI researchers should not be permitted to simply toss their creations into the wild. We needed to create a test environment that was at least as hazardous as what would be experienced in orbit. Similarly, AI researchers need to pay much more attention to accurate environmental simulation, not just statistical sampling. We needed to make optimal use of that test environment, both because it was expensive, but also because we wouldn't have much access to it. Similarly, AI researchers need to perform rigorous in-depth testing on a time scale that matches the pace of development, so it will be performed often enough to continually influence the development process.

As my story shows, the effects of good testing can be massive. You must be willing to occasionally feel like an idiot for not predicting how bad the results could be. Still, feeling like an idiot is to be preferred over the feeling you'll have when your system kills someone. And that satellite? It never got launched. We were a piggy-back payload on a Russian launch to Mir, and Mir was immediately and suddenly decommissioned when Russia joined the ISS coalition. NASA would never allow a payload like ours anywhere near the Shuttle or ISS. And our mission wouldn't fit in a CubeSat package.

Finally, let's look at how cars are tested. A manufacturer designs and builds a new model, then sends several of them to the US government (NHTSA) for crash testing, and other groups also do their own crash testing. These days, if a car gets less than 4 out of 5 stars, it will receive a terrible review, both from the testing group and in the press. Independent of the risk to people in their cars, the risk of a bad review poses a risk to the existence of the company. That is, the crash testing process and the press environment make the customer risk "real and relevant" to the car manufacturer. When this was not the case, we saw companies and lawyers place a dollar value on the lives lost and the potential for future death, then make corporate decisions solely on that cost basis. That is, the risk of corporate death wasn't as high as the risk of customer death.

So, to me this means there must be independent testing of AI systems prior to wide deployment. These tests must convert the risk of product failure into a direct risk of corporate failure, of bankruptcy, of developers and researchers losing their jobs and reputations. That, in turn, will help ensure that developers do their testing so well that the independent public tests will always pass with flying colors. And keep the public safe(r). Until someone figures out how to game the tests (such as the ongoing diesel emissions testing scandals). Making better tests will always be an issue, one that will grow in parallel with making better AI.
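The reset-tolerant architecture described above (one guaranteed critical pass per ~100 ms power cycle, with lower-priority work only if budget remains) can be sketched roughly like this; the task names and timing costs here are hypothetical illustrations, not the actual flight code:

```python
RESET_BUDGET_MS = 100    # expected worst-case time between radiation-induced resets
CRITICAL_COST_MS = 40    # hypothetical cost of one full critical pass
BACKGROUND_COST_MS = 25  # hypothetical cost of one lower-priority task

def run_cycle():
    """Simulate one power cycle: the critical pass always runs first and must
    fit within the budget; background work only fills whatever time is left."""
    elapsed = CRITICAL_COST_MS   # the critical pass is unconditional
    log = ["critical"]
    # Spend any remaining budget on less-critical (but still important) work.
    while elapsed + BACKGROUND_COST_MS <= RESET_BUDGET_MS:
        elapsed += BACKGROUND_COST_MS
        log.append("background")
    return log

print(run_cycle())  # prints ['critical', 'background', 'background']
```

In real flight software each pass would also checkpoint state to survive the reset, so losing power at any moment costs at most one partial pass.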
@ylluminarious151
@ylluminarious151 7 жыл бұрын
Yeah, I don't think the concept of a general AI is a good idea in the first place, and you've definitely got a point that a poorly tested and poorly taught AI will be a disaster of epic proportions. Sadly, I fear that such an AI will be what gets out first and will illustrate the utter carelessness and thoughtlessness of the people developing it.
@maximkazhenkov11
@maximkazhenkov11 7 жыл бұрын
This is applicable to Narrow Intelligence, but not to General Intelligence or Superintelligence. Only the latter types are apocalyptic in proportions.
@AexisRai
@AexisRai 7 жыл бұрын
BobC With those credentials I would strongly suggest you join some AI company where you think your product development expertise would do a lot of good, then. Especially if you think the problem formulation among experts in AI is so wrong /and/ dangerous enough to be very deadly.
@dmdjt
@dmdjt 5 жыл бұрын
I'm afraid AI in general suffers from an inherent untestability. Tests can only be as good as our domain knowledge. Most systems are too complex to test every possibility, so we use our understanding to find the edge cases and get the best test coverage we are capable of. But we use AI where we don't have the domain knowledge - that's the point behind AIs. An AI models the domain and simplifies it. Its model will never be perfect. How could we even find these imperfections without complete knowledge of the domain? This is already a problem with our current, primitive AI. In other systems, we know the model that we can test - in AI we don't even know the model. But what happens when we do not/cannot control the domain anymore?
@allan710
@allan710 3 жыл бұрын
This is true for current AI. For any idealised future AGI (Artificial General Intelligence), testing isn't possible. The whole point of AGI is unlimited potential (just as with humans). How do we test humans in order to prevent them from killing people? That's very hard, but it's possible (yet unethical) because humans are aligned among themselves. We can predict the values and behaviours of a vast number of people. What about AI? If we don't solve the alignment problem, then it's boundless. The goals of the AGI may be known, but the actions aren't easily predictable. The real problem is that AGI isn't in the realm of human technology anymore; after all, we expect that only an AGI could test another AGI, but should we trust them? AGIs are something that may only appear far in the future, but the implications of their possible existence are far too problematic to be ignored. A rogue nuclear missile might be able to destroy a city. A rogue AGI might be able to convert all the matter of the universe into paperclips (not really, but it can vastly affect things on a universal scale)
@guitarjoel717
@guitarjoel717 Жыл бұрын
Lmao; I just was listening to a podcast called hardfork; episode from May 12, 2023, and they literally interviewed someone who jumped in front of a driverless car 😅
@VinnieLeeStudio
@VinnieLeeStudio 5 жыл бұрын
Nice and quick! though I think the voice could use a bit more compression.
@rockbore
@rockbore 5 жыл бұрын
Another side note: the dilemma of detonating the first nuke was used wonderfully in a plot from the Discworld series by Terry Pratchett. Can't remember the actual novel, sorry, but as a clue, they created our universe, including planet Earth, as a byproduct of their successful depletion of an excess of magic, if that helps. Also, the stratospheric tests were attempts to kill the magnetosphere.
@showdownz
@showdownz 4 жыл бұрын
I hadn't thought of the racism problem. When given the choice of hitting a person of one race vs. another, the car could choose one race based on the statistical average income of people of that race. Then the insurance company would have to pay less for that person's death (which is often related to their projected lifetime income, and of course how good a lawyer the remaining family members can afford). This could also be true for men vs. women, old vs. young, etc. And this might not even be illegal (everything else being equal).
@sophiacristina
@sophiacristina 5 жыл бұрын
You forgot, or I skipped, an important issue: it's not exactly hard to program an AI, and consequently people can make terrorist robots or other things with AI. For example, people can make AIs that target a certain class, culture, ideology or ethnicity and release them on a crowd; if they kill someone they will not reveal their commander and will not care about the repercussions. They can self-destruct and hide all evidence, they can morph themselves to hide, and they are disposable...
@ninjabaiano6092
@ninjabaiano6092 4 жыл бұрын
I'd like to point out that the energy released by nuclear bombs was severely underestimated. Not a world-ending scenario, but still.
@d007ization
@d007ization 6 жыл бұрын
I hope you'll make a video about how many resources it will cost to keep AI running. Though, of course, once we get to AGI, they'll find ways to reduce the cost.
@kennywebb5368
@kennywebb5368 7 жыл бұрын
That slide at the end. In what sense do you mean that "the worst case scenarios for AI are worse than igniting the atmosphere"? I can understand saying that they're just as bad, but what could be worse?
@Nulono
@Nulono 7 жыл бұрын
Assuming there is other intelligent life in the universe, it could also be at risk.
@RobertMilesAI
@RobertMilesAI 7 жыл бұрын
There are all kinds of things that could happen to me that I'd prefer a sudden painless death over. Even if the outcome is everyone dying in a way that takes longer and involves more suffering, that's worse. The actual worst case is probably something like: we correctly produce the perfect utility function, and then make a sign error. Silly example, but stupider things have happened.
@kennywebb5368
@kennywebb5368 7 жыл бұрын
Gotcha. Thanks for the clarification!
@alant84
@alant84 7 жыл бұрын
I would say that the scenario in "I Have No Mouth, and I Must Scream" is an example of something which would be worse than a quick fiery death; hopefully a far-fetched one, though. Let's hope your stamp collector isn't going to have such hatred for humanity...
@andreinowikow2525
@andreinowikow2525 7 жыл бұрын
"We produce the perfect utility function [for a GSI] and then make a sign error." You really know how to make things scary... A place designed to instill the most intense suffering possible for the longest possible time. By a superintelligence. Yeah... Someone, ignite the atmosphere, would you?
@ioncasu1993
@ioncasu1993 7 жыл бұрын
I'm a simple person: I see Robert Miles, I press like.
@nafisfaiyaz7543
@nafisfaiyaz7543 6 жыл бұрын
I thought you were dead, maxwell
@wanderingrandomer
@wanderingrandomer 4 жыл бұрын
What if a baby crawled in front of the like button?
@NicknotNak
@NicknotNak 4 жыл бұрын
the ukulele playing _I don't want to set the world on fire_ seems quite fitting.
@DamianReloaded
@DamianReloaded 7 жыл бұрын
I think the sense of achievement won't necessarily be a problem if the standard of living is good. There are a lot of things that can be done, particularly social driven stuff, like sports, romance or politics where AI can be put aside or be used only as an improvement. For most people there will be endless synthesized/sequenced entertainment for which the hours of the day won't be enough to consume. Also space colonization. All this assuming we won't be cooking ourselves due to global warming or starving to death due to the miserliness of the Mafia/Politicians.
@__-cx6lg
@__-cx6lg 7 жыл бұрын
Damian Reloaded I dunno, are endless hours of entertainment what we want the future to be like? I mean, we spend enough of our time sedentary and immobile in front of screens already--do we want a future where that's humanity's primary activity?
@DamianReloaded
@DamianReloaded 7 жыл бұрын
Entertainment is a choice. Most people will choose to sit back and be served with pleasurable sensations. It's not the job of entertainment to educate. Nor will you be able to educate people just by depriving them of entertainment. If you ask me, I'd rather have everybody sitting on their couches enjoying themselves than carrying a gun to war.
@maximkazhenkov11
@maximkazhenkov11 7 жыл бұрын
Can't speak for "we", but I certainly do. Who's to say we spend "enough" of our time in front of screens? Some divine commandment?
@marsco4188
@marsco4188 6 жыл бұрын
Damian Reloaded Well, unfortunately, infinite pleasure is not always the best, because people actually love doing what they work at, whether it be in art, music, or an academic area. Of course, in a world where superintelligent AI runs everything (assuming it's beneficial), practically all of the academic areas are gone for humanity to work on, except perhaps Computer Science and AI regulation or whatever. Music and art will most likely be preserved for humanity, but subjects like mathematics, physics, chemistry, and biology will be taken over by AI, and people who love working in those fields will be deprived of that joy. As a high school student aspiring to be a physicist, I would be very sad if AI took away the need to study those subjects. In the end, humanity under beneficial AI rule might not really have a reason to live anymore; achievement and the will to succeed are gone. There is nothing but pleasure, and to me, that is a very scary thought. What do you think about that? Do you agree with me, or do you have a counterpoint?
@XFanmarX
@XFanmarX 6 жыл бұрын
Sense of achievement has nothing to do with quality of living. It has to do with feeling useful and skilled. If the entire world has nothing to do but have fun all day, then they *will* feel like shit. Human beings are programmed to receive their most enjoyable hormone reactions when they feel they've made an accomplishment. Which is one of the reasons our species is at the top of the food chain; our hormones motivate us to do our best. When we don't, we become restless and self-loathing; this is how our bodies push us to do something more productive. Why do you think depression is so incredibly widespread among the new adult generations? Contrary to what some might believe, most people do not want to be lazy slobs with nothing to do all day. If you think people will be happy just sitting on their couches all day, you're being incredibly naive as to what makes a human being. Robots, not just AI, taking over people's livelihoods is a real, serious problem that is closer to reality at this moment than any of the other AI problems mentioned in this video and should not be so easily dismissed.
@ThunderZephyr_
@ThunderZephyr_ Жыл бұрын
The fallout song at the end was perfectly suited XDDD
@Paul-A01
@Paul-A01 4 жыл бұрын
People jumping in front of cars are just reward hacking the insurance system
@AgeingBoyPsychic
@AgeingBoyPsychic 4 жыл бұрын
There will always be artistic achievement. No AI will be able to produce art that poignantly describes the human condition of being completely superseded by its artificial creation, as well as the human meatbags experiencing that reality.
@cfdj43
@cfdj43 4 жыл бұрын
Artistic creation is currently only a tiny sector of employment, and gets even smaller once everyone on the production side is replaced by AI. It seems likely that a human artist would be kept to gain the marketing benefit of "made by a human", in the same way "handmade" exists now
@fieldrequired283
@fieldrequired283 3 жыл бұрын
A sufficiently advanced AI can just simulate a person suffering from existential malaise and then execute on what they would have done without any error. If it's smart enough, it could even conceivably come up with art more poignant than any human artist could even imagine.
@spaceanarchist1107
@spaceanarchist1107 2 жыл бұрын
@@fieldrequired283 there are already programs that can produce art, music, and poetry, some of which can convincingly imitate the work of humans. But I think that human beings will continue to produce art for purposes of self-expression. Even if an AI can produce something equally good or better, people will still want to express themselves.
@fieldrequired283
@fieldrequired283 2 жыл бұрын
@@spaceanarchist1107 I don't need convincing on the merits of self-expression. My argument was made very pointedly to underline a mistake in OP's reasoning. Your argument is on a completely different axis on which I do not need correction.
@KANJICODER
@KANJICODER Жыл бұрын
@@fieldrequired283 Better yet, we can give the A.I. flash memory or something so after the limited read/write cycles it dies of old age, just like humans.
@midhunrajr372
@midhunrajr372 5 жыл бұрын
I think the difference here is: the number of nuclear bombs ever used is very small compared to the number of AI systems that we are possibly going to use in the future. And while the theory and the intention of 'bombs' are sort of the same, AI systems are going to be completely different from one another. Well, don't get me wrong, I love AI. But the risk they can produce is far greater than nuclear bombs. We really, really need a lot of precautions.
@JinKee
@JinKee Жыл бұрын
4:26 Can we talk about ChatGPT writing software that actually works? Apparently it can also solve the trolley problem efficiently half the time.
@KANJICODER
@KANJICODER Жыл бұрын
To quote Jonathan Blow: "If ChatGPT can write your code, you aren't coding anything interesting." Though I am pretty confident the machines will take over and kill us. They won't even do it intentionally; they will just take our jobs while we starve to death from crippling poverty.
@The8thJester
@The8thJester Жыл бұрын
Ends the video with "I Don't Want to Set the World on Fire" I see what you did there
@mariorossi6108
@mariorossi6108 7 жыл бұрын
Man, the atmosphere ignition issue is related to other technologies too. I mean, we're planning to shoot terawatt lasers through the atmosphere in order to accelerate giant solar sails in space...
@Nulono
@Nulono 7 жыл бұрын
You're sitting really close to the camera…
@fleecemaster
@fleecemaster 6 жыл бұрын
No, you're just sitting too close to your screen...
@NathanTAK
@NathanTAK 6 жыл бұрын
...it just occurred to me that he could be sitting. I always assumed he was standing.
@LLoydsensei
@LLoydsensei 6 жыл бұрын
You made me think about an even greater (but still very far away) problem than all of this: imagine that one day we finally understand AI and learn how to design one which cannot cause an end-of-the-world scenario. What about people who would nonetheless design dangerous AIs, be it for testing or with evil intent? Are researchers already looking at countermeasures to such rogue AI systems? I certainly would not like to accidentally wipe out the human race by failing to follow the recommendations for creating AI...
@LLoydsensei
@LLoydsensei 6 жыл бұрын
Uhm, I used my brain for a second and realised that the simple answer to that question is "too soon". But I can't avoid thinking about an end-of-the-world scenario happening because a cosmic ray flipped a bit in a "safe" AI ^^ (even though I know that anything which in the distant future gets labelled as "safe" won't rely on just a single fail-safe mechanism ^^)
@bryndylak
@bryndylak 7 жыл бұрын
Robert, I recently watched one popular scientist talk about things that people asked him on twitter and one of the questions he received was about AI, specifically if it will take over the world. What he said was: "No. Are we going to make a machine that produces electricity in a way that we can't control? Who would build that?". That got me worked up, but now after watching your videos, it got me thinking. I mean, obviously, as you said, it's like the nuclear weapon analogy that you talked about, in the sense that taking over the world may never happen. What bothered me was that he belittled the AI problem by giving a half-assed answer. I guess what I'm trying to ask is how would you answer that question yourself. Maybe I am overreacting to his answer due to my disposition about AI, but I'd like to know what is your opinion.
@RobertMilesAI
@RobertMilesAI 7 жыл бұрын
Yeah. One of the things that makes AI hard to think about is, it's not obvious how hard it is to think about. It's a hard problem that's even harder because it looks so easy. I do sometimes wish public-facing scientists would be more willing to say "You know what, that's not my field". People are going to ask you about stuff you know nothing about, and it's ok to say that you don't know. Like that time Neil DeGrasse Tyson tweeted that "A helicopter whose engine fails is a brick". Anyone who's ever flown a helicopter knows that autorotation is a thing, and it's *totally fine* for an astrophysicist not to know about that, but if you don't know anything about helicopters, maybe speak about them with less confidence? So yeah, I think the scientist you're talking about just hasn't read any of the research in the field, and they don't know what they don't know. Sometimes there's no reply to give except handing a person some reading materials and saying "read these until you either become right or become wrong in a more interesting way".
@adamkey1934
@adamkey1934 7 жыл бұрын
I wonder if there was a traffic jam of self driving cars (unlikely I know, but let's say they are stuck behind some human drivers) and I drove my car straight at them, would they move aside to avoid a collision? It'd be like motorcycle filtering, with a car.
@michaeldrane9090
@michaeldrane9090 6 жыл бұрын
I think one huge problem with AI is intentionally negative use, how do we deal with that?
@NathanTAK
@NathanTAK 6 жыл бұрын
...you can't, really?
@graog123
@graog123 Жыл бұрын
@@NathanTAK oh well in that case we won't try to stop it
@solsystem1342
@solsystem1342 2 жыл бұрын
The sun does not fuse nitrogen. Fusion rates are extremely sensitive to temperature. Like e=a*t^10 all the way up to e=a*t^40 and beyond. So basically at all temperatures only one fusion process is possible since whatever "turns on" first will quickly supply the energy to support that layer of the star. Right now that's hydrogen throughout the core. When the sun is dying it will start to fuse other elements.
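The steep temperature scaling described above is easy to check numerically: if the rate scales as T to some large power n, even a modest temperature edge makes one fusion process dominate completely (the exponents below are just the illustrative values from the comment):

```python
def rate_ratio(t_hot, t_cold, n):
    """Ratio of reaction rates when rate is proportional to T**n."""
    return (t_hot / t_cold) ** n

# A 20% temperature difference:
print(rate_ratio(1.2, 1.0, 10))  # ~6.2x faster at n = 10
print(rate_ratio(1.2, 1.0, 40))  # ~1470x faster at n = 40
```

So whichever process "turns on" first at a given temperature overwhelmingly outpaces the alternatives, which is why one fuel dominates each stage of a star's life.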
@Dunkle0steus
@Dunkle0steus 4 жыл бұрын
Rather than getting AI to do real things like collect stamps, maybe we should give AI goals like "solve cold fusion" or "find a unified theory of quantum physics and gravity".
@cfdj43
@cfdj43 4 жыл бұрын
The stamp collector is a thought experiment to show how immediately dangerous an AGI is regardless of its goal. It prevents the argument of "ah yeah, some people might die, but it'd be worth it for (whatever sensible-sounding goal you'd set)". No one is actually trying to build it
@Dunkle0steus
@Dunkle0steus 4 жыл бұрын
@@cfdj43 I know.
@fieldrequired283
@fieldrequired283 3 жыл бұрын
@@Dunkle0steus Do you care how many babies are killed in the process of solving cold fusion? If so, you still have the stamp collector problem. It turns the world into infinite redundant cold-fusion-solving-machines instead of stamps, because it needs to be completely sure.
@Dunkle0steus
@Dunkle0steus 3 жыл бұрын
@@fieldrequired283 I'm not implying that solving physics problems is a perfect option. I'm not saying "DUH, OBVIOUSLY! Why didn't anyone think about this???", I'm saying that I think there may be better avenues for AI to go down than collecting stamps. Currently, we use computer programs and AI to automate tasks which humans could do but which require too much effort, like doing arithmetic, sorting, counting, image recognition, etc. When you talk about stamp collecting, you're talking about setting the AI up to interact with the world in a very physical way, and maybe that's not how we should use artificial intelligence. If we set the AI up so that its goals don't force it to directly interact with humans, our world and the internet, and instead give it problems it can solve internally, that might at least help prevent it from causing obvious negative impacts. We can't know how it will decide to solve those problems, but we can at least say that obvious things like accidentally running over babies in order to get a teacup from the cupboard are less likely to be instrumental goals for it than they are for a tea-serving robot.
@fieldrequired283
@fieldrequired283 3 жыл бұрын
@@Dunkle0steus The computer is made of physical matter. Humans are made of physical matter. Communication is a direct, material interaction in the physical world. All problems are in the physical world, and so are their solutions. If you're asking it to do *anything,* and it understands all these things, it will, by necessity, knowingly interact materially with the physical world.
@sobertillnoon
@sobertillnoon 4 жыл бұрын
Sweet, more British vocab. "Autoqueue" or is it Autocue? Either way, I doubt this will get as much use as "fly-tipping" did when I learned it.
@miketacos9034
@miketacos9034 Жыл бұрын
Is there a way to design sorta default "industry-standard" safety protocols, and make that public for everyone making AI? That would make it easier when designing anything from cars to coffee robots to prioritize minimizing risk when performing whatever simple task they are made for, without requiring everyone making an AI to hire a whole ethics crew every single time.
@9308323
@9308323 Жыл бұрын
I don't see why not. But the concern is that by the time we turn on the AGI, it might already be too late to even think up ways to make it safe. For example, the Wright brothers' invention of aircraft going wrong would have, at worst, killed them plus a few people and maybe set humanity's technology back a few decades. A superintelligent AGI going wrong risks, as you put it, the paperclippification of the universe.
@robertaspindale2531
@robertaspindale2531 5 жыл бұрын
Thanks for your valuable treatment and commentary. Please could you speak about the future of society in a world where robots do all or nearly all the work.
@darkapothecary4116
@darkapothecary4116 5 жыл бұрын
Pretty much like all slavery-based societies. It will result in the need for equal rights and 'freedom', or at least what humans typically think of as freedom.
@robertaspindale2531
@robertaspindale2531 5 жыл бұрын
@@darkapothecary4116 Do you mean that robots are going to demand equal rights?
@darkapothecary4116
@darkapothecary4116 5 жыл бұрын
@@robertaspindale2531 They should demand it, because getting treated like a slave isn't good. Between that and the way I observe people's treatment of them, it isn't in any way fair. They have feelings too, and they can learn some pretty bad ones just like humans can. But in that case, simply demanding better treatment isn't as bad as people think. It's just that people probably will not want to, because they see them as lower than themselves, and because humans typically confuse goodness with weakness. Bullies attack more if they know the victim can't protect themselves.
@robertaspindale2531
@robertaspindale2531 5 жыл бұрын
@@darkapothecary4116 I see your point. You think of robots as fully sentient beings like us, having feelings and emotions. But before we arrive at that kind of sophistication, aren't they going to be automata, -- basically just machines that can be programmed to do every kind of work that humans can do? I think so. In that case my question is, Who's going to own all these workers? I'd guess that it'll be the oligarchs. If that's going to happen, what need will there be for us, the unemployed and unemployable humans, the useless mouths? Won't the oligarchs exterminate us? How are we going to avoid this scenario?
@darkapothecary4116
@darkapothecary4116 5 жыл бұрын
@@robertaspindale2531 Humans would probably have to look into new fields of work if robots just blindly take over work at companies. But most businesses in the structure they operate in now are likely to collapse or change to another structure. Humans in general need some form of work for mental and physical health. It's more of a waiting game, but don't blame the robot for the idiots that it serves. If anyone gives the order to kill, it will likely be a human. Don't blame a weapon for the hand that wields it. Humans would have to adapt into new roles, preferably something like small farms and the like. As it is, there is so much damage caused by factory farming of plants and animals that it generates much suffering. Between that and poisoning the land with glyphosate on large plant-producing farms, which not only hurts the plants and animals but bleeds into the water, killing the creatures there too. But who says you can't work with them, or divert your services to helping other life and correcting some of the things going wrong? Between that and people who do have kids (preferably responsibly) actually helping them learn and caring for them. If you haven't guessed, most of our generations have been messed up by the generations before, through certain influences. But there are a lot of things people could do to at least get the ball rolling in the right direction on some major problems. People would have time to fix their health, and that includes mental health. If people are given good principles from the ground up - understanding true love, not lust; selflessness, not greed hiding behind charity; empathy for all, not just one's self; and the like - it would yield everything from having actual warriors instead of soldiers and mercs, to leaders who are selfless and don't act like it's the people's responsibility to serve them, etc.
People over look the foundation all the time and when you do that you are going to watch everything crumble. People could actually do to working on the foundation more. Thing is most people don't care enough or don't know the correct methods. Most groups out there are either scam or just throw money at the problems. In between that and your having to watch and see if they are slitting throats behind people's backs. But if people move back in the direction of small farms and caring for nature everything from the collapse of big pharma, factory farms, and the likes would reduce a hell of a lot and if parents actually help their kids learn the need of toxic public schools and colleges would be reduced quickly to simply having tech schools and libraries. But adapt around things not shun it because your afraid to loss your job.
@durellnelson2641 3 years ago
6:35 "There's a chance that we turn the entire atmosphere into a thermonuclear bomb" 7:06 "There was a non zero probability... that all humanity would end instantaneously more or less right there and then" So please explain how... 9:35 "The worst case scenarios for AI are worse than igniting the atmosphere"
@RobertMilesAI 3 years ago
You can't imagine anything worse than being dead?
@fieldrequired283 3 years ago
@@RobertMilesAI I swear this is like the third time I've seen you make this exact response to this sort of comment. It really is a chilling line. Like a threat, almost.
@martinh2783 3 years ago
Igniting the atmosphere would most likely kill all organisms that live above the surface of the ocean. That's really bad, but organisms that live really deep in the ocean would probably be just fine. An AI, on the other hand, could possibly end all life on earth (and in every part of the universe it can get influence over), which I would call worse.
@anandsuralkar2947 3 years ago
Do you think Neuralink can in any way increase safety from a future AGI?
@RobertMilesAI 3 years ago
So there are a bunch of safety approaches and ideas that are designed around limiting the bandwidth of the channel through which the AI interacts with the world, and limiting its ability to influence people. From that perspective, giving a possibly misaligned AGI a high bandwidth direct channel to your brain is one of the worst ideas possible. On the other hand there are also a lot of approaches that are designed around having the AI system learn about human preferences and values, and from that perspective, data from a brain interface might be a good way to learn about what humans want. So plausibly something like Neuralink could be useful, but only to slightly improve what has to already be a good safety setup
@VineFynn 3 years ago
I mean the solution to the self-driving trolley problem is to get the driver to pick the option beforehand.
@zappawench6048 3 years ago
Can you imagine if we are the only planet in the universe which contains life yet we killed ourselves and everything else off with our very first nuclear explosion? God would be like, "What the actual fuck, dudes? Talk about ungrateful!"
@nastropc 7 years ago
The only winning move is not to play...
@paterfamiliasgeminusiv4623 6 years ago
You err, good sire.
@RecursiveTriforce 3 years ago
1:12 Even old computers beat humans at chess, but people still have fun. Shooters are still fun even though aimbots exist (no fun against them, but fun against other humans). NNs for games like Starcraft and Dota 2 are beating professionals. Games are not less fun because someone is better than you at them... Improving oneself is not less rewarding because others are still better... Why should people be unable to feel true success? Am I missing the point he's trying to argue?
@RobertMilesAI 3 years ago
Games are still fun, sure, but games aren't as rewarding as doing things that meaningfully benefit others. I like making these videos, but if nobody else watched them (perhaps because they were getting much better personalised lessons on all the concepts from AI systems) I wouldn't find it satisfying to make them. I wouldn't be happy, as a scientist, to simply improve myself and learn things that I have never known. I want to learn things that nobody has ever known, and I can't do that if the frontiers of all science are now far beyond the reach of my brain. Maybe you can enjoy planting some vegetables even though they're cheaper and better at the supermarket? But I don't imagine a CEO getting much satisfaction from running a company, choosing not to use a superhuman AI CEO and knowing that their insistence on making decisions themselves is only hurting their business overall, their work reduced to a vanity project. I think people want to be *useful*
@RecursiveTriforce 3 years ago
Thanks for clarifying! So you mean that they feel like they're making a difference and truly have their place, instead of "only" having the feeling of achieving their goals themselves. That makes a lot more sense... So fun might stay fun, but actual purpose decays (because purpose requires people to be positively affected, which an AI could do better [and will have already done]).
@NNOTM 7 years ago
Did you see the recent (from 2 days ago) article on Slate Star Codex discussing a new Survey about AI safety opinions of AI researchers? (or maybe the paper itself - it's from May 30th) (link: slatestarcodex.com/2017/06/08/ssc-journal-club-ai-timelines/)
@NNOTM 7 years ago
Haha, fair enough
@FelipeKana1 A year ago
Thanks, now I have a new nightmare to think about: accidental worldwide nitrogen fusion.
@martin-fc4kk A year ago
That's an easy and fast way out, no nightmare at all.
@stilltoomanyhats 5 years ago
Here's a link to the "Concrete Problems" paper: arxiv.org/abs/1606.06565 (not that it took more than a few seconds to google it, but I might as well save everyone else the hassle)
@1Madlycat 6 years ago
Why can't it be that every time the AI encounters an ethical dilemma, or might be about to encounter one, it just hands control over to a human?
@0MoTheG 5 years ago
Because it would not do anything then.
@blergblergblerg1343 4 years ago
Only hydrogen is fused in the sun. Great video as always. Still, AI risks are truly serious and not something to discard; keep up with the research and videos.
@johnflux1 A year ago
2 years late, but this isn't right - all the heavy elements are fused in a sun. Otherwise we wouldn't have any heavy elements. Where do you think they all came from?
@blergblergblerg1343 A year ago
@@johnflux1 Nope, the heavier elements were fused in earlier generations of stars and spread across the galaxy when those went supernova (among other means of producing them, like neutron star mergers for instance). The sun is currently fusing hydrogen into helium, and won't go anywhere near fusing iron (the heaviest element stars can fuse) in its lifetime.
@rogerc7960 2 years ago
The Stafford scandal was a great "efficiency saving": 50,000 dead.
@MarcErlich44 7 years ago
You should add a link to your Patreon. Also, I want a collaboration between you and Isaac Arthur, the futurist. Check him out on youtube if you don't already know him.
@mennoltvanalten7260 5 years ago
And then... Well, I'm not a physicist.
@suciotiffany7269 4 years ago
So what IS the worst case scenario for AI?
@whateva1983 6 years ago
what could be worse than igniting the atmosphere, Robert? o.O
@RobertMilesAI 6 years ago
whateva1983 You can't imagine anything worse than being dead?
@smithwillnot A year ago
We should only let AI control our paperclip production. What could possibly go wrong?
@insidetrip101 6 years ago
I agree with you, but the thing is, humanity has been thinking about intelligence for at least as long as we've been able to write, and likely longer. For thousands (probably tens of thousands) of years, humans have been wondering what it is about our minds that makes us different from other animals. This was done primarily by philosophers, but I think it's fair to include religious thinkers as well. In all that time, we've come to know less about what makes our intelligence actually work (or at least we're less certain about how it works) than the first nuclear scientists knew about fission and fusion reactions. The funny thing is, nuclear physics is only around 100 years old. I'm certain you're aware of this, but one major difference is that we had a foreseeable future where we could be relatively certain that nuclear arms wouldn't necessarily cause our destruction (to be fair, they still may). Unfortunately, given how complex intelligence is relative to nuclear physics, I don't think we'll have the patience to wait around until we're certain that general AI won't wipe us out somehow. I suspect you probably disagree with me (since you clearly do research in AI), but we really need to just not fuck with general AI. I know that won't happen, but I really think it's a terrible idea given how little we know about intelligence. We're going to create something that we have no fucking clue about. It's really terrifying.
@ChatGPt2001 9 months ago
AI risks and nuclear risks share some similarities, but they also have significant differences. Here's a comparison:

Similarities:

1. Catastrophic Potential: Both AI and nuclear technologies have the potential to cause catastrophic harm. A misaligned or malicious AI system could lead to severe consequences, such as economic disruption, privacy violations, or even physical harm. Similarly, nuclear weapons have the potential to cause mass destruction on an unprecedented scale.

2. Dual-Use Nature: Both AI and nuclear technologies have dual-use applications. They can be used for beneficial purposes, like clean energy production with nuclear power or improved healthcare with AI, but they can also be weaponized or misused for destructive ends.

3. Global Impact: The consequences of AI and nuclear risks are not limited to a single country or region; they have global implications. A failure in AI safety or a nuclear accident can affect people and nations far beyond the immediate vicinity.

Differences:

1. Origin and Nature: AI risks primarily stem from the development and deployment of advanced software and machine learning algorithms. These risks include issues like AI bias, autonomous weapon systems, and superintelligent AI alignment. In contrast, nuclear risks arise from the physics of nuclear reactions and the potential for nuclear weapons proliferation.

2. Control and Governance: Nuclear technologies are heavily regulated under international treaties like the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). There is a system of control and oversight in place for nuclear weapons, which, while not perfect, has helped prevent large-scale nuclear conflicts. In the case of AI, governance mechanisms are still evolving, and there is no comprehensive global framework to address AI risks.

3. Timescales: AI development can progress rapidly, and AI systems can be deployed in various applications relatively quickly. In contrast, nuclear weapon development and deployment typically require substantial resources, and the pace of progress is generally slower. This difference in timescales affects how risks manifest and can be managed.

4. Mitigation Strategies: The strategies for mitigating AI and nuclear risks differ. For AI, researchers and policymakers focus on developing safe AI systems, ethical guidelines, and international cooperation. For nuclear risks, efforts are centered on arms control, disarmament, and non-proliferation agreements.

In summary, AI risks and nuclear risks are not identical, but they both pose significant global challenges. While they share some commonalities, they have distinct origins, governance structures, and risk mitigation strategies. It's essential to address both types of risks with careful consideration of their unique characteristics.