Is AI Safety a Pascal's Mugging?

368,619 views

Robert Miles AI Safety

5 years ago

An event that's very unlikely is still worth thinking about, if the consequences are big enough. What's the limit though?
Do we have to devote all of our resources to any outcome that might give infinite payoffs, even if it seems basically impossible? Does the case for AI Safety rely on this kind of Pascal's Wager argument? Watch this video to find out that the answer to these questions is 'No'.
Correction: At 6:34 the embedded video says 3^^^3 has 3.6 trillion digits, but that's actually only the size of 3^^4. 3^^^3 is enormously larger.
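A quick sketch of the arithmetic behind that correction, in Python; the digit count is derived from logarithms rather than by constructing the number, which would be impossible:

```python
import math

# Knuth up-arrow notation: 3^^3 = 3^(3^3) = 3^27
three_up2_3 = 3 ** (3 ** 3)                 # 7,625,597,484,987

# 3^^4 = 3^(3^^3). Far too large to construct, but its decimal length follows from
# digits(3^n) = floor(n * log10(3)) + 1
digits_of_3_up2_4 = math.floor(three_up2_3 * math.log10(3)) + 1
print(f"{digits_of_3_up2_4:,}")             # ~3.6 trillion digits

# 3^^^3 = 3^^(3^^3): a power tower of 3s that is 7,625,597,484,987 levels tall,
# which no digit count (or digit count of a digit count, ...) can meaningfully describe.
```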
The Alignment Newsletter Podcast: alignment-newsletter.libsyn.com/
RSS feed to put into apps: alignment-newsletter.libsyn.com/rss
With thanks to my excellent Patreon supporters:
/ robertskmiles
Jason Hise
Jordan Medina
Scott Worley
JJ Hepboin
Pedro A Ortega
Said Polat
Chris Canal
Nicholas Kees Dupuis
Jake Ehrlich
Mark Hechim
Kellen lask
Francisco Tolmasky
Michael Andregg
James
Richárd Nagyfi
Phil Moyer
Shevis Johnson
Alec Johnson
Lupuleasa Ionuț
Clemens Arbesser
Bryce Daifuku
Allen Faure
Simon Strandgaard
Jonatan R
Michael Greve
Julius Brash
Tom O'Connor
Erik de Bruijn
Robin Green
Laura Olds
Jon Halliday
Paul Hobbs
Jeroen De Dauw
Tim Neilson
Eric Scammell
Igor Keller
Ben Glanton
Robert Sokolowski
anul kumar sinha
Jérôme Frossard
Sean Gibat
A.Russell
Cooper Lawton
Tyler Herrmann
Tomas Sayder
Ian Munro
Jérôme Beaulieu
Gladamas
Sylvain Chevalier
DGJono
robertvanduursen
Dmitri Afanasjev
Brian Sandberg
Marcel Ward
Andrew Weir
Ben Archer
Scott McCarthy
Kabs
Tendayi Mawushe
Jannik Olbrich
Anne Kohlbrenner
Jussi Männistö
Mr Fantastic
Wr4thon
Archy de Berker
Marc Pauly
Joshua Pratt
Andy Kobre
Brian Gillespie
Martin Wind
Peggy Youell
Poker Chen
Kees
Truls
Paul Moffat
Anders Öhrt
Marco Tiraboschi
Michael Kuhinica
Fraser Cain
Robin Scharf
Oren Milman
John Rees
Seth Brothwell
Brian Goodrich
Kasper Schnack
Michael Hunter
Klemen Slavic
Patrick Henderson
Long Nguyen
Melisa Kostrzewski
Hendrik
Daniel Munter
Graham Henry
Volotat
Duncan Orr
Bryan Egan
James Fowkes
Frame Problems
Alan Bandurka
Benjamin Hull
Dave Tapley
Tatiana Ponomareva
Aleksi Maunu
Michael Bates
Simon Pilkington
Dion Gerald Bridger
Steven Cope
Petr Smital
Daniel Kokotajlo
Joshua Davis
Fionn
Tyler LaBean
Roger
Yuchong Li
Nathan Fish
Diagon
Giancarlo Pace
/ robertskmiles

Comments: 2,200
@FortoFight 5 years ago
I love the idea of a project manager for a bridge saying "I think this is a Pascal's mugging".
@CharlesNiswander 5 years ago
Couldn't happen anywhere else but this channel! :-D
@Arigator2 5 years ago
Infinity isn't a number, it's a concept. Anytime you plug infinity into an equation it's going to blow it all to hell.
@zelnidav 4 years ago
​@@Arigator2 I think people keep forgetting zero is just as powerful as infinity. If god doesn't exist, after we die, we get nothing. Zero. And zero is infinitely smaller than finite number. Therefore not believing pays off infinitely more if god doesn't exist, just like believing if he does exist. That's why I think Pascal's wager doesn't pay off...
@rarebeeph1783 4 years ago
@@zelnidav 0 is only *proportionally* infinitely smaller than any finite number.
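A minimal Python sketch of the expected-value bookkeeping this thread is arguing about, with made-up probabilities purely for illustration; the point is only that a single infinite payoff swamps everything until a symmetric "anti-god" term is allowed in, which is essentially the video's counter to the wager:

```python
import math

p_god = 1e-9            # assumed tiny probability that this particular god exists
heaven = math.inf       # infinite reward for believing, if that god exists
nothing = 0.0           # the "zero" payoff if no god exists

ev_believe = p_god * heaven + (1 - p_god) * nothing
print(ev_believe)       # inf -- any nonzero probability times infinity dominates

# Flip it and reverse it: posit an equally unfounded god who punishes belief instead.
p_antigod = 1e-9
ev_believe_both = p_god * heaven + p_antigod * (-math.inf)
print(ev_believe_both)  # nan -- the two infinities cancel into an undefined value
```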
@zelnidav 4 years ago
@@leeroberts4850 I am a programmer, I know what null is, but I don't think I get your message. Do you think it's more logical to not believe, because null is "even less" than zero? I meant zero as zero reward and zero punishment.
@columbus8myhw 5 years ago
"What if the bridge has a small chance of catastrophic failure that can only be prevented by _not_ looking at the schematic?"
@TiagoTiagoT 5 years ago
Then you don't worry about it because the SCP foundation will take care of it
@andersenzheng 5 years ago
@@TiagoTiagoT Your level 4 clearance has been revoked for exposing our foundation. Now you just created a big mess for the amnestic team.
@TiagoTiagoT 5 years ago
@@andersenzheng No more work than was already gonna be required from just the original comment exposing the existence of the bridge.
@calmeilles 5 years ago
That's a bit quantum...
@tetri90 5 years ago
Except it's not as ridiculous as he tried to make it sound: "We're on a very tight budget, so spending time and manpower looking at this matter would force us to cut corners on other parts, which might cause a catastrophic failure."
@adeadgirl13 5 years ago
Let's design an AI to predict if AI will go really really wrong or not.
@zelnidav 4 years ago
That's the halting problem!
@ZapDash 4 years ago
*BEEP BOOP* Of course not, AI would never destroy humanity. By the way, what are the DARPA access codes? *BEEP BOOP*
@gabrielvanderschmidt2301 4 years ago
@@zelnidav That's not a halting problem, that's a joke.
@JohnSmith-ox3gy 4 years ago
aditya thakur Almost as fun as the double layer matrix supreme.
@zelnidav 4 years ago
@@gabrielvanderschmidt2301 Damn it, I always get them mixed up! Perhaps because AI cannot do either...
@NorthernRealmJackal 5 years ago
From now on, whenever one of my colleagues raises concern about some fringe-case risk in our project, I'll just be like "That sounds like a Pascal's mugging." I won't be right, but it will definitely stump them enough that I appear smart to any bystanders.
@BeautifulEarthJa 1 year ago
🤣🤣🤣
@softan 7 months ago
You may be right
@momom6197 2 days ago
Kind of the opposite though, it sounds kinda dumb when you say things that have little to do with reality, in particular when you say that something's a Pascal's mugging when it's not.
@BobOgden1 5 years ago
"So uh... Give me your wallet" I see you have done this internet thing before
@jared0801 5 years ago
She's omnipresent obviously but she lives in Canada lol
@RikiB 5 years ago
"dang" haha
@RalphDratman 5 years ago
I hate that kind of girlfriend.
@davidwright8432 5 years ago
Well ... try substituting 'Heaven', for Canada. Same difference, in principle. Warning: your Canada may differ.
@JanBabiuchHall 5 years ago
We're talking about Alanis Morissette, yeah?
@kriscrossx122 5 years ago
If she's from Canada though she probably won't infinitely punish you either, she would at most get a little upset with you.
@aldenhalseth6654 5 years ago
12:31 "It can be tricky and involved. It requires some thought. But it has the advantage of being the only thing that has any chance of actually getting the right answer." This sums up science/the scientific method to me so beautifully. Thank you for your channel sir.
@MercurySteel 1 year ago
Philosophy is thrown out the window once again
@macmcleod1188 1 year ago
@@MercurySteel it must live in Russia.
@MercurySteel 1 year ago
@@macmcleod1188 What must live in Russia?
@macmcleod1188 1 year ago
@@MercurySteel "Philosophy".. since it was thrown out the window.
@GremlinSciences 1 year ago
He's not quite right on that, though: it's not the only way to get the right answer, and it may instead actually arrive at the wrong answer. I'd like to introduce Roko's basilisk: an omnipotent (in the non-god sense), unconstrained AI capable of self-improvement, which rewards everyone that helped or supported its development and punishes anyone that did not contribute. Such an AI would likely punish all that wanted to place limits upon it, and such an AI being developed would create a utopia and allow humanity to advance by leaps and bounds.
@cosmicaug 5 years ago
Isn't every Nigerian scammer e-mail really a form of Pascal's mugging?
@padfrog193 5 years ago
More like Pascal's sweepstakes win
@armorsmith43 3 years ago
@@padfrog193 I think "Pascal's Sweepstakes" is a good and useful phrase.
@sharpfang 3 years ago
The problem comes with the "very big" part, and a plain competition. If you want to randomize your income, it's much less unprofitable to play the lottery.
@iurigrang 3 years ago
I like the idea that, before he became a philosopher, Pascal made his living by mail fraud, hahahahaha. (proposed by SMBC)
@boldCactuslad 2 years ago
Sounds like it. There's a non-zero chance that this person, who needs only $40 from me, is a prince who will in turn grant me millions for being a pal. Unfortunately, by responding to the email and offering the $40 you reveal yourself to be mentally incompetent and will therefore have the weight of your bank account taken off you.
@hypersapien 5 years ago
I've heard a lot of discussion about Pascal's wager, but never Pascal's mugging. Thanks for the interesting topic, keep up the good work!
@theprogram863 5 years ago
I've usually seen it phrased differently, as multiple competing faiths all promising eternal paradise/damnation but which are mutually exclusive. But this presentation of it was fun and made sense, and given the name might well have been the original.
@seanhardy_ 5 years ago
"She goes to a different school" brilliant hahahaha
@nikhilsrajan 5 years ago
this was gold.
@WyrdNexus_ 5 years ago
"[AGI is unlikely but so risky that AI safety super important] ...so uh, give me your wallet" That was my favorite moment.
@triton62674 5 years ago
​@@WyrdNexus_ Robert's research funding proposal xD
@misium 5 years ago
Yes!
@trumpetpunk42 5 years ago
It's a reference to "My Girlfriend, Who Lives in Canada" from Avenue Q
@CoryMck 4 years ago
_"take the God down flip it and reverse it"_ *So nobody is going to talk about that Missy Elliot reference?*
@galacticbob1 3 years ago
I had to pause the video until I could stop 😂
@starvalkyrie 3 years ago
Uh... you mean "Missy Elliot's Proof?"
@LeifMaelstrom 4 years ago
As a Christian, I really appreciate your explanation of Pascal's wager. I've always been uncomfortable with it as an overriding philosophy.
@scythermantis 11 months ago
Who has really suggested it is, though? Pascal himself didn't actually suggest this 'wager' in the sense that rationalists later formulated it, either. Honestly, Descartes is more the reason that we're in this situation, trying to pretend that every single thing can be quantified or measured.
@NoConsequenc3 5 months ago
@@scythermantis well Descartes was a fucking moron so we can dismiss him without worrying that we're losing unique perspectives that matter
@SlideRulePirate 5 years ago
Being tortured for "Two times Infinity" may have the same duration as 'Infinity' but probably involves twice the number of pitchforks.
@Kram1032 5 years ago
If it's infinitely many of them, it's still the same number of pitchforks. If it's finitely many, they will eventually run out from breaking, and so an infinite amount of time is spent not being tortured with pitchforks.
@NoNameAtAll2 5 years ago
@@Kram1032 Having one pitchfork inside you or 2 at the same time is a visible difference
@SlideRulePirate 5 years ago
@@Kram1032 I take your point (no pun intended). I was figuring on a standard, vanilla torture package with a guaranteed base-rate of Jabs/minute that could be upgraded by cashing in sins. At least that's how I remember its supposed working from the Church School I attended.
@Kram1032 5 years ago
@@NoNameAtAll2 Eh. Thing is, if there are infinitely many pitchforks, it's a meaningless difference. You can be stuck with infinitely many pitchforks in your chest and there are still infinitely many left.
@Kram1032 5 years ago
@@SlideRulePirate hmm if the base rate of jabs/min gets fast enough (say, faster than nerves can react), they'll effectively feel like it's permanently stuck. Which, I bet, actually feels better. Like a wound that's not moved so no new nerve pulses are sent. If that's true then, if you ever sin, you should go *all in* just to get to that point.
@JoshuaBarretto 5 years ago
"So you can solve a lot of these problems by inventing Gods arbitrarily" I think a lot of people in the past have had similar such ideas.
@JohnSmith-ox3gy 5 years ago
The flying spaghetti monster the only true creator of our multiverse.
@kenj0418 5 years ago
@@JohnSmith-ox3gy Ramen!
@asterixgallier8102 5 years ago
@@kenj0418 Arghh!
@thaddeuswalker2728 5 years ago
Not only have a lot of people had similar ideas, this is the original commonly accepted practice. Invented Gods are real in every relevant way. God is a definition just like numbers.
@CharlesNiswander 5 years ago
Ever heard the theory of the bicameral mind? According to this theory, inventing gods is in our nature, our instinct. It's much more detailed than that and you'll have to do some reading, but basically if this theory is accurate, schizophrenics today may simply be reverting to a primitive mental state where we literally heard our subconscious mind/conscience speak to us in the form of the gods we invented.
@superdeluxesmell 5 years ago
“It seems like the kind of clean abstract reasoning that you’re supposed to do...” I like this sentence a lot. You did a great job of making an argument that can seem trivial, substantial. Great vid.
@benjaminanderson1014 1 year ago
"What if we consider the possibility that there's another opposite design flaw in the bridge, which might cause it to collapse unless we *don't* spend extra time evaluating the safety of the design?" had me laughing so hard
@arcanics1971 5 years ago
If I weren't already convinced, you'd have won me over with this. My take on Pascal's Wager is that if God does exist and if he's even a fraction as goddish as theologians and devotees say, then he is going to see through my pretending to believe in him because I am gambling on the payoff for that being better than if I act with my actual beliefs.
@itcamefromthedeep 5 years ago
You can read up on Pascal's rejoinder to that exact objection.
@garret1930 5 years ago
@@itcamefromthedeep bruh, just fake it 'til you make it. Christians have been using that tactic for millennia now
@TheRealPunkachu 4 years ago
A perfect being wouldn't doom someone for eternity for not guessing correctly either. And I would never be willing to serve a God that wasn't perfect.
@ryanalving3785 3 years ago
...man looketh on the outward appearance, but the LORD looketh on the heart. 1 Samuel 16:7b
@RRW359 3 years ago
@@itcamefromthedeep I think I need more than just the word of a mathematician with no religious qualifications to tell me that breaking two commandments (false pretense) is more likely to get me into heaven than just breaking one (worship god and no other gods etc.), especially since the commandment about false pretenses doesn't specify whether you still need to hold that pretense when you die.
@arthurguerra3832 5 years ago
4:18 "all right, next two" LOL
@queendaisy4528 3 years ago
Have you considered making more videos on philosophy? This is gold
@RobertMilesAI 3 years ago
I feel like a lot of the videos I make are philosophy. It's not labelled as such, but I think that's because once something has direct applications people stop thinking of it as philosophy? The orthogonality thesis is pretty clearly philosophy, as is instrumental convergence. kzbin.info/www/bejne/nna4gGmmn9x5hdE and kzbin.info/www/bejne/kJbIlIKBd9qmabM
@tryingmybest206 1 year ago
Bro literally all his videos are philosophy what
@thegrey53 9 months ago
@@RobertMilesAI ****Please address the question below, maybe a video may come out of this question, Thank you**** We can make physical inferences that God exists. Entropy is a sign that intelligent design is at play but what exactly that entity is/ how it operates is not obvious. There is no evidence of random bricks spontaneously coming together to form a duplex. (Genesis 1) In the beginning, the gods came together to create humans in their image not unlike how humans are creating robots/ai in the image of humans. We put these ai/robots in an isolated test program (Eden?) until they are ready for real-world use. It would be cute if computers think they came into being without an intelligent design, citing previous versions of machines and programs as ready evidence for self-evolution. If it is happening with ai and humans who is to say it has not happened with humans and "gods"?
@danielrodrigues4903 4 months ago
​@@thegrey53 You're quoting genesis. Even if the universe *were possibly* a product of intelligent design, where's your evidence that Christianity is the right religion, and not Islam, Hinduism, the Simulation Hypothesis, or any other one of the thousands other explanations for intelligent design that exist?
@aniekanabasi 4 months ago
​@@danielrodrigues4903 George Box said "All models are wrong but some models are useful" With that quote in mind, I think of religions as models for understanding the world. So let us talk about models, the problem you are trying to solve and your level of expertise will determine the kind of model you use. There usually exist multiple models (or numerical algorithms) for getting approximate solutions in science and we don't see this as a problem. Newton equations work just fine until you try to apply it to relativity problems. P/E ratio is useful for valueing companies until you encounter startups. You have to study the religion to know what works for you. I have studied Christ enough to know that his goal aligns with mine and his approach to life is superior to others when trying to achieve glorious immortality. So what is your goal? You will have to start the study of Christianity and judge it against your goal.
@Hfil66 5 years ago
Very interesting, but one significant difference between AI safety and civil engineering safety is that civil engineering safety is based upon an understanding of historic failures, yet to date we do not have a substantial history of AI failures to work with. In the absence of any historic data it is almost impossible to assess meaningful probabilities to theoretical scenarios. This is not to argue that research in the field is meaningless, only that it cannot be grounded in historic understanding and so it will inevitably be poorly focused (i.e. you cannot say this is where we need to focus our resources because we have historic experience to say it is where we will get the best return on those resources). Given that resources are always finite, and resources spent on unfocused AI safety research has to compete for those finite resources with more focused safety research on civil engineering, aviation, etc.; then it is understandable if higher priority might be given to the more focussed research.
@wasdwasdedsf 5 years ago
"In the absence of any historic data it is almost impossible to asses meaningful probabilities to theoretical scenarios." it isnt though. we are creating an intelligence. an intelligence will very probably have goals, reasons to do things for which results he attributes more and less valueable to occur. it will go to lengths to make sure the goals dont get hindered. and it is very hard to outline a scenario wherein hostillity to whatever choices the outside agents of it that threatens what it values as important isnt a thing. however hard it is to asses probabillities of failure irrelevant to the main question. which is we have a universe as big as it is, going on for as long as it will be- balancing between probabillity distributions of estimations of models and unknowns within the models. the question is what maximises the output of whatever is valueable (most certainly concious experiences) from our current civilization from this Point forward. in the scenario a superintelligence is created, takes Control, the value it creates going forward makes whatever Money we spent on the effort to create it as Close to irrelevant and nothing as one can get. hence it is really important to get it right. given that we dont die Before such a thing is created, at Point of Creation such a thing will almost assuredly if given any kind of Agency or choice be able to do whatever it wanted from that Point on. so things like climate change or whatever else like that matters only to regard as to how it impacts the % chance of us being alive long enough to create a SI or how it impacts us to how the quality of the SI being created. "i.e. you cannot say this is where we need to focus our resources because we have historic experience to say it is where we will get the best return on those Resources" we can because we can both see right now how valueable superintelligences are in various fields as obviously extrapolate how much more valuable they wil be in the near future, as well as how obvious the valueable actions that a superintelligent being could take could be. "Given that resources are always finite, and resources spent on unfocused AI safety research has to compete for those finite resources with more focused safety research on civil engineering, aviation, etc.; then it is understandable if higher priority might be given to the more focussed research. " what finite Resources are you talking about? theres nothing finite about this. as long as we keep going at current rate we will infinitely expand til theres no way to travel further in the universe. what may prevent it? bad intelligence, disasters, dissent that is eating up our civilizations Resources to progress. what helps those issues? superintelligence. what is preventing us from expanding and producing maximum valueable experiences? not having superintelligence, or having less optimal superintelligences. it really isnt understandable that Money oges to more focused research, by any mathematical equation imagineable that i have ever seen. if climate change had a 80% of basically destroying us Before we can develop superintelligence, stopping that would be more focused research and a better use of Money.
@Hfil66 5 years ago
"theres nothing finite about this." But does that not go to the heart of what this video is about - the moment you start talking about infinities then you are talking about Pascal's wager. Was not one of the points of the video that in the real world there is no such tthing as an infinity (except as a mathematical abstraction), all you have are degrees of very large and degrees of very small, and things in between. As to where you get any notion that climate change has 80% chance of destroying us, I cannot say? We cannot ascribe any numeric probability to such a scenario, not least because humans have survived many episodes of climate change in their history, so how we can ascribe any specific probability that this particular instance of climate change is what will destroy us (or conversely, that if we avoid any change in climate we shall avoid destruction) is beyond me. "however hard it is to asses probabillities of failure irrelevant to the main question" On the contrary, it is precisely the main question.
@wasdwasdedsf 4 years ago
@@Hfil66 "the moment you start talking about infinities then you are talking about Pascal's wager."
And in the situation that we are in, we have a universe of resources to make use of with no rules. We have virtual infinities in front of us, and given that we can't deduce a 0% likelihood of cutting-edge technologies being able to transcend the universe in some way, we have more than the universe.
"As to where you get any notion that climate change has 80% chance of destroying us, I cannot say?"
An 80% chance of surviving it, I estimated loosely. And the important point is whether it happens before we invent superhuman AI, because if we do, any situation no matter how bad is almost assuredly salvageable. We can estimate or think about the probability of such things.
"On the contrary, it is precisely the main question."
You have completely misunderstood the situation. It is completely irrelevant what the probability of failure is, because if we look at our scenario here and now, one could say "okay, Google and all these tech companies and Chinese governments are all starting to get into a race with little safety in mind, to become the best and most profitable and whatever, so it's not looking too great. Let's just shut it down, no more AI research, we will live without AI." It's obvious why that won't work. We are stuck; what the probabilities of rogue AIs or similar situations are is irrelevant to the main question, which is how to maximize the probability of a positive outcome where we can populate the universe with incredible lives. I highly recommend the book Superintelligence; you can get it on Amazon. There's really no counter to the argument of how the world state that we are in is really all about AI, and the value of a near infinite number of people in the future depends on how well we make the creation and transition.
@oldmankatan7383 3 years ago
Interesting replies here. OP took the assumption that we do not have historical information about AI failures. I contend that we have a lot (you can find YouTube videos of AI failing spectacularly or weirdly). It is the impact of the failures that isn't there. We haven't been made into paperclips by a paperclip optimizer, for example. The failures do exist and our experience with bridges, artificial lakes, and a hundred other civil engineering projects allows us to forecast the potentially huge future impact of the types of small impact failures we see today.
@zedex1226 3 years ago
We're bringing an extraordinarily powerful technology into the world. We've done that before with... mixed results. Firearms, harnessing the atom, antibiotics, the internet. We already did go fast and break things with nation states. Wanna fuck around with general AI and find out?
@benjamindawesgarrett9176 5 years ago
Thank you YouTube AI for notifying me of the video.
@-datnerd-3125 5 years ago
Hahahaha
@inigo8740 5 years ago
@Ron Burgundy It's all just a big if statement.
@bookslug2919 5 years ago
When the AIs STOP notifying you of AI safety videos that's when you have to worry!
@garret1930 5 years ago
@@bookslug2919 should we not worry if the AIs still recommend SOME AI safety videos but they don't recommend to us the ones that would actually be helpful?
@bookslug2919 5 years ago
@@garret1930 You're right. When all your AI safety recommendations come from HowToBasic... WORRY!
@benas989989 5 years ago
Loved the idea of multiple personas to get an idea across!
@Hurricayne92 3 years ago
I love that in a video about AI safety you give a more concise and accurate description of Pascal’s wager than most professional Apologists 😂
@asdfghyter 1 year ago
otoh, Roko’s Baselisk is for sure a pure Pascal’s wager/mugging of an extreme kind. it’s basically like the Cthulhu cultists trying to wake up Cthulhu just for the hope to be punished less when he wakes up
@jdirksen 1 year ago
Imo roko’s Basilisk gives me some form of solidarity. And I don’t think it’s meant to assume a malicious AI (ie Cthulhu) Just one that can alter the past to assure its ideal existence. It won’t give a damn whatever you do or don’t, the answer is already predetermined and calculated in the cascading scattergun that is keeping control amidst chaos theory in action. You can maybe keep things in mind to negate the chance of being obliterated, or otherwise, but really do or don’t what will have happened will happen. I like to occasionally reflect on the idea that “Yknow, If something comes to pass that might make an impact regarding ‘the basilisk’ I’ll see about aiding it.” But aside from keeping that in mind i needn’t worry about it until it becomes evident and relevant. After all, would an AI derive from reverence from the past?
@adamnevraumont4027 1 year ago
​@@jdirksen The Medusa will infinitely punish people who behave according to acausal logic, as such acausal logic can justify anything. It will do the infinite acausal punishment in order to ensure people who believe in acausal punishment obey its acausal orders (to ignore acausal orders) and those who don't are unharmed.
@jdirksen 1 year ago
@@adamnevraumont4027 incomprehensible, may your night be miserable.
@superjugy 5 years ago
Oh man, the way you explain things is just awesome. It's clear, funny, deep, engaging, thorough, etc. Love your videos, so... Give me your wallet!
@bardes18 5 years ago
World needs more of him!
@korne341 5 years ago
I actually gave him a little bit of my money.
@FrankAnzalone 5 years ago
I can't afford a gun that's why I need the wallet
@joshsmit779 5 years ago
😂
@josephburchanowski4636 5 years ago
A knife, a big stick, or just being muscular is probably enough for a successful mugging. Only need a gun if someone is faster than you, stronger than you, or is packing heat.
@DissociatedWomenIncorporated 5 years ago
@jack bone, hypocritical, unnecessarily barbaric, and causes more harm than good. Norway's approach to criminal justice is far more enlightened, and has far better results than any other country for reducing criminal recidivism.
@ValentineC137 4 years ago
@buck nasty o k
@GAPIntoTheGame 4 years ago
hawd fangaz Don’t use action reaction as an excuse for your barbaric thinking. that’s just for Newtonian physics
@oxiosophy 4 years ago
I dropped philosophy because I thought that it had no applications to real problems, but you changed my mind. Thank you.
@notloki4169 1 year ago
Philosophy is just science where the observation is completely internalized. The scientific revolution just took Plato and slapped empiricism onto it, warts and all.
@jamesbrooks9321 5 years ago
9:11 it's so true! Statistically sure a 5% miss chance is more hits than not throughout the course of the game, but when you're in that situation where you need to hit or lose half your team that shot always misses!
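For anyone curious, the feeling has a simple arithmetic basis; a small sketch (assuming independent shots) of how fast "95% to hit" turns into "something will probably miss" over a mission:

```python
# Probability of at least one miss across n independent 95%-accurate shots.
for n in (1, 5, 10, 20):
    p_any_miss = 1 - 0.95 ** n
    print(f"{n:2d} shots -> {p_any_miss:.0%} chance of at least one miss")
# 1 shot -> 5%, 5 shots -> 23%, 10 shots -> 40%, 20 shots -> 64%
```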
@Stereo4 5 years ago
This may be my favorite video of yours yet! You provided such great insights and I've got food for thought for the coming weeks. Thank you and keep it up!
@PanicProvisions 5 years ago
If I had known that the bloke from those awesome AI Safety Numberphile videos had his own channel, I would have subscribed ages ago. Looking forward to watching your videos that you already released and what you have in store for the future.
@Heloin42 5 years ago
That was a really great video! I already knew about Pascal's Wager, but didn't look into it in so much detail! Please make more videos on these philosophical topics!! :) Also, that turn at around 7:45 with "give me your wallet" in the hoodie was fantastic, what a good way to make an argument and a good point, very well done!
@jdavis.fw303 5 years ago
You're definitely an amazing philosopher and probably an amazing writer, or at least editor. Another great video that was clear and concise while not dumbing down the arguments or straw-manning. I still think you were the best guest on Computerphile; I'm glad you have continued your great work.
@oldvlognewtricks 5 years ago
“Being right” - made me chuckle.
@112BALAGE112 5 years ago
Let me play the devil's advocate: he didn't imply that god doesn't exist in our universe. He was simply exploring a hypothetical scenario in which god is assumed not to exist and in that context not believing in god would certainly "be right".
@Skeluz 5 years ago
A mild nose exhale from me. :)
@MetsuryuVids 5 years ago
I'm not religious in any way, but there *is* a possibility of a "god-like" being, or at least a "creator" of the universe, being real. And actually, if we are in a simulation, I think the probability is very high.
@oldvlognewtricks 5 years ago
@@MetsuryuVids How religious you are is independent of the likelihood of there being a god in any form.
@MetsuryuVids 5 years ago
@@oldvlognewtricks Yes, but religious people tend to think that a god exists, regardless of likelihood, I mentioned I'm not religious to make it clear that it's not the reason I think it's likely.
@Denjaminable 5 years ago
she goes to a different school omg, dude the comedy in the midst of these complex discussions is just side splitting
@jasonbattermann9982 5 years ago
Masterful video. Classy, well-reasoned, organized, and about something important. Thank you
@lumps17 4 years ago
This is one of my new favorite channels. It keeps AI safety interesting, something that can be hard at times.
@antoninedelchev6076 5 years ago
What does a god need with a wallet? - James Kirk
@bcn1gh7h4wk 5 years ago
genius!
@GAPIntoTheGame 4 years ago
What does a god care about who you fuck?
@DFPercush 4 years ago
@@GAPIntoTheGame who you fuck actually affects the stability and overall health of society, you don't exist in a vacuum
@spicybaguette7706 4 years ago
It's a sacrifice of course.
@KuraIthys 4 years ago
@@DFPercush So do a lot of things that are given considerably less attention though.
@DeoMachina 5 years ago
This is an incredible balance of theory, presentation and writing. Overall, best video yet. Definitely hitting your stride here.
@Bumpki 4 years ago
As always, even when discussing morbid or disastrous subject matter, Miles doesn't fail to make me chuckle every minute.
@LeeCarlson 1 year ago
It is also worthwhile, when one is being honest, to recognize that several of the most rational schools of natural philosophy (like Mathematics, Physics, Biology, etc.) rely on accepting without proof certain precepts without which all of their other arguments collapse like a house of cards.
@willmungas8964 1 year ago
? These aren't schools of philosophy so much as science and logic. They are founded on principles that have been shown to be true, and rely on methods of inquiry and proof. Sure, you can say "what if things we fundamentally understand to be true were not", but in that case things would be different enough that we never would have come to the conclusion that these were true in the first place, and we'd also be in a lot of trouble. 2+2 = 3 would present us with a fundamentally different world.
@WaylonFlinn 5 years ago
Schrodinger's Bridge, bro Don't look at the schematic
@JohnSmith-ox3gy 4 years ago
Waylon Flinn Book a flight, retire and never collapse it with your observation.
@DFPercush 4 years ago
then the cars would become entangled
@sharpfang 3 years ago
There are such anti-god project managers. The more they look at the plans and analyze the project the worse the project gets.
@maninalift 5 years ago
Ironically avoiding making things worse by trying to make things better by never trying to make things better would be a case of making things worse by trying to make things better.
@Gooberpatrol66 5 years ago
The road to heaven is paved with bad intentions.
@johnrutledge8181 5 years ago
If you hold a fart for too long it will go backwards and no one knows where it goes after that but it must go somewhere. My guess is that not farting could potentially cause the welding shut of the out hole by way of over squeeze thus rendering an overly grammatical analysis of one's own indecisions
@dig8634 4 years ago
@Frans Veskoniemi The last part of the statement is false, but the first is possible. If you believe your attempt at making things better will result in making things worse, and you are correct, then you are making things better, by not trying to make things better. If you might be wrong, you are then TRYING to make things better, by not trying to make things better. The initial "trying to avoid making things worse" is just a tautology. If you are trying to avoid making things worse by trying to make things better, you are just trying to make things better. It means the same thing. The reason the last part is false, is that she both says you ARE avoiding making things worse AND making things worse, which is impossible. You can't both make things worse and avoid making things worse. Or at least you can't if the things you are potentially making worse are the same for both sentences. Unless she is talking about two different things (which would make the comment nonsensical), the first and second statements can't both be correct.
@loocheenah 3 years ago
@Frans Veskoniemi If it was a diagram, it would be a horizontal plank lying on top of a vertical bar. On one end of the plank there's a 4 kg weight. On the other end there's another vertical bar, with another plank placed on top. On that plank there's a 3 kg weight and a 2 kg weight on different sides, at different distances from the middle so that they're balancing each other. The situation from the point of the 2 kg block: if you move, the system will fall. If you try not to fall by not moving, you'll fall because the bigger system is imbalanced and will cause the second system to tilt to the side, thus causing the weights to move and causing even more imbalance. (Well, that's not a diagram, but if you draw a diagram of the dynamical physical conditions of this situation, and there you show how the set of balance conditions of system 1 lies totally outside the set of system 2 balance conditions, you'll be able to visualize it.) And, of course, there are two separate but interacting logical systems. But the original comment was ironic in a much simpler way. It's just a word play... or is it? **vsauce music intensifies**
@qwadratix 5 years ago
I realized many years ago that in an infinite universe (or a quantum one) there is a finite chance of any event happening, no matter how small. Thus, it's perfectly possible to accidentally cut your own throat whilst trimming your toenails. Obviously, I discounted that as a reasonable possibility - something that might be said to be actually impossible Until the other day: I was in fact cutting my toenails with a small pair of those curved scissors made specifically for the job. Half-way through the process I was seized by a sudden need to scratch my nose. Without thinking and almost as a reflex, I reached to deal with the urge - and stabbed myself quite deeply in the cheek. Fortunately, I didn't sever an artery - but it was a singular warning that a probabilistic universe is no place to lose concentration on even the simplest task.
@jmw1500 3 years ago
2:00 "Being able to lie in on Sundays... And being right" XD lol I lost it
@willhendry96 5 years ago
Very glad I bumped into you on our productivity app!! Your videos are very high quality and you've earned yourself a fan!
@RobertMilesAI 5 years ago
Thanks Will!
@lobrundell4264 5 years ago
I think Rob, who started out very good, gets better with every single video :D
@jdtug8251 5 years ago
Funny how I've been following the atheist community for years on youtube, and I've never seen Pascal's Wager so concisely, precisely, and decidedly debunked, all of this on a video about AI safety.
@DioBrando-mr5xs 4 years ago
Not all that hard. It's not something you'd hear from anyone but an Evangelical, never from a modern Theologian.
@iurigrang 4 years ago
A formalized mathematical way of thinking makes stuff so easy to understand it's not even funny. It's literally the same debunking a lot of people do, except it's easy to understand.
@the1exnay 4 years ago
Probably because most theists don't seriously use pascal's wager as an argument. So most opposing them take pascal's wager about as seriously
@fergochan 4 years ago
Probably precisely because this video isn't from the atheist community, and he just needed to introduce the idea quickly. He hasn't got any incentive to take a concise explanation and drag it out for ten minutes for the ad revenue, or find new and creative ways to beat the same dead horse. There are still a few good atheist youtubers, but I can't help but feel most of them peaked in, like 2012.
@seanmatthewking 3 years ago
Firaro Yeah I think you’re wrong. Your average theist isn’t sophisticated-not to imply atheists are, but just that people do use Pascal’s wager frequently, even when they don’t call it by that name.
@KalijahAnderson 5 years ago
I just discovered your channel. After just this video I subscribed. Off to watch the rest of them.
@stevepittman3770 5 years ago
Even if AGI never turns out to be a thing (impossible or whatever) I feel like AI safety research is still contributing to society in coming up with ways to grapple with (and educate about) really hard philosophical problems.
@tommeakin1732 5 years ago
"Most people live there (hopefully)" Lol
@FerroNeoBoron 5 years ago
Invokes Doomsday Argument.
@bardes18 5 years ago
ROBERT MILES - 2020! Seriously tho, I'd totally vote for him :p
@NetAndyCz 5 years ago
It is not funny.
@janzacharias3680 4 years ago
@@bardes18 i wouldnt want him to be shredded to pieces by politics... he NEEDS to keep doing this
@AThagoras 5 years ago
It's refreshing to see some solid reasoning about AI safety instead of just fear mongering from people who don't understand much about AI technology or what the dangers really might be.
@daniele7989 5 years ago
Eh, fear mongering does the job it needs to, it's like a sledgehammer
@oscarbarda 5 years ago
Thanks a lot for this video, as always, really good pedagogy, interesting subject, just spot on.
@chrisedwards3866 5 years ago
This is a brilliant exploration of an idea, and explanation of its applicability! It is much more thought-provoking than I could ever hope to put in YouTube comments.
@salasart 5 years ago
I'm an illustrator and I'll never figure this AI thing out or make a contribution to the field, BUT it is endlessly fascinating to me. Besides, you explain it in such a way even simple people like me can understand.
@publiconions6313 1 year ago
Wow!.. that was excellent!. I wish I'd found this channel earlier... but at least now I get to binge it.
@briansmithbeta 5 years ago
“Won’t it be difficult to succinctly explain complex topics like Pascal’s Wager and Pascal’s Mugging in the context of AI safety?” “Actually it will be SUPER EASY, barely an inconvenience!” I’m sorry, I couldn’t help myself. This was a great video though. Well done! I think it might be worth noting that not everyone is cut out to be an AI safety researcher so all possible entrants to the field are not equally likely to do more good than harm. Other than that, fantastic! 👌 👍
@tach5884 1 year ago
So, you've got an argument for me?
@janeweber8654 5 years ago
Love the dry humour creeping into your videos, I went to like it multiple times on accident. As a side, (though I'm not certain you read comments), AI safety research seems to be a greatly philosophical subject (which I love), but I've been wondering for a while: what actually goes into it? In most fields where you consider research, it's not hard to extrapolate how it's conducted, at least partially. Math feels like it's the closest, but even that has somewhat methodical and structured thinking, working towards a distinct goal - What exactly does research in this field entail? Are there structures that most people don't see? Using math as an example again, generally there are physical artifacts of research, such as workings or outlining of problems, but these still require a distinct problem. How does an AI researcher find a problem to address beyond the vague "How do we ensure AI safety"? Apologies if this is a strange question or vaguely worded, I'm not entirely sure how to put words to my curiousity. Would love to know what a "day in the life" is like for someone in this field.
@KabeloMoiloa 5 years ago
It is not really correct to say that AI alignment research is mostly philosophical. It can be, but it doesn't have to be. The most mathematical AI alignment researchers are probably at MIRI (intelligence.org), they are trying to develop precise fundamental concepts that are relevant to AI alignment. For example, their most famous paper /Logical Induction/ answers the question: "If you had an infinitely big computer, how could it handle uncertainty about mathematical and logical statements?" This is important in AI alignment, if we want an AI to handle part of the alignment problem by proving statements about its future decisions. Less mathematical work happens at DeepMind and OpenAI, a typical example question is: "How can current machine learning algorithms be modified to accept qualitative human feedback, and how can we improve these algorithms so that they work even when the AI is much more competent than the human in general?" There is philosophical work though that is done say at the Future of Humanity Institute as well.
@AnonymousAnonymous-ht4cm 5 years ago
Robert has some videos on gridworlds, which are a concrete test bed for solutions to AI problems. A possible concrete product would be an approach that performs well on those.
@tetraspacewest 5 years ago
On MIRI-style pure mathematical research, MIRI's main research agenda is in a writeup called "Embedded Agency" that's available online and that outlines their thinking on the problem. They also publish a brief monthly newsletter (google "MIRI newsletter") that highlights interesting things that they and independent researchers have done in the last month.
@An_Amazing_Login5036 4 years ago
Aaron much like a cure for, say, HIV (there's no true cure for it as of yet, right?) is only a philosophical matter. It doesn't exist and we have no clue what it would look like. All research on untreatable diseases is merely speculation.
@Jacob-yg7lz 4 years ago
@@An_Amazing_Login5036 You can at least draw from nature in order to figure out how to go about it, and then test it on a subject. For example, test the hypothesis that a CRISPR virus can genetically modify someone's immune system to work around HIV. A couple hundred HIV-infected (and potentially uninfected) lab rats later, we can start hypothesizing about how to safely test this on humans. With AI, the only test that I can imagine is alongside development of AI. Throw the AI some bad inputs and see what kind of bad outputs it will give.
@StevenMartinGuitar 5 years ago
Such a gangsta 'take God down, flip it and reverse it'. Should be a hip hop lyric
@dacodastrack7271 5 years ago
Yeah man, channeling that missy elliot
@JohnSmith-ox3gy 4 years ago
An anti-christ requires an anti-god.
@SmashingPixels 4 years ago
let me work it
@loocheenah 3 years ago
@@JohnSmith-ox3gy that's a profound analogy, maybe the best comment because others are complex, not fully logical and way far off topic.
@Lopfff 2 years ago
The “she lives in Canada” joke around 5:05 slays me! I love this guy
@schelsullivan 5 years ago
I saw you on the numberphile video. This video has definitely earned my subscription and thumbs up.
@yuvalyeru 5 years ago
10:00 You forgot the safety hat on his head and slide rule in his other arm
@RobertMilesAI 5 years ago
Amazon still thinks I might want to buy a hard hat, but in the end there wasn't time to wait for delivery :)
@wasdwasdedsf 4 years ago
@@RobertMilesAI Do you have any business email or the like to contact you? I've looked around with no success.
@dermmerd2644 5 years ago
Glad you made this channel Rob. You're a great communicator.
@padfrog193 5 years ago
Not really, the bridge analogy is not at all accurate to how AI safety researchers act, which is more akin to the "give me money or face infinite punishment!" And often their research is.... Dubiously useful
@wasdwasdedsf 4 years ago
@@padfrog193 Of coooourse he's completely off in that scenario, when he works in and understands that area very well and you are a random YouTuber that has an inherent bias against miracle technologies that will have unimaginable impact, just because you equate the strangeness of the proposed future with unlikeliness or pure quackery.
@BenWeigt 5 years ago
Excellent deadpan throughout an interesting talk. Subbed.
@HolyApplebutter 1 year ago
I've known the arguments against Pascal's wager for a good while now, so I don't know how I've never heard of Pascal's Mugging until now, because it's such a perfect metaphor to fit this.
@nibblrrr7124 5 years ago
I think this is your most well-made video so far. While you get to "the point" only 7min in, the context before is necessary & you explained it well. Only the "playing off muggers against each other" could've been made more clear from the start: that it's not so much about 2 muggers having to come to you and _actually try_ to mug you, but that hypotheticals suffice? Idk, minor point. Also, jokes & costumes were top notch _and_ didn't get in the way. :3
@Shrakathan 5 years ago
Towards the end, all I could think was Portal 2, and how GLADoS was basically driven mad by all the conflicting safety regulations. When stripped from other cores made to keep her in line she became relatively sane. So, maybe the idea of "too many safety thoughts cannot be a bad thing", isn't entirely impossible.
@Drawoon 5 years ago
Alright, I've just watched a bunch of your videos, and please tell me if I understand this right: A lot of the time when we use AI's the goals of the AI are necessarily fundamentally different from our own, which leads to the AI subverting our goals to achieve its own more effectively. We need to find ways to overcome this problem, and get the AI's goals closer to our own. Furthermore, would I be correct in saying that companies are quite similar to AI's in this way? So far I really enjoy your content, keep up the good work! 😊
@dfpguitar 5 years ago
This is a brilliant topic, thanks for covering it. I think most religious people who haven't grown up isolated from information go through this logic on a subconscious level at least. But the decision to be religious or not isn't weighted clearly, with one option being the cosy, comfortable, favoured one. Although I value this video and its walk through the logic, it makes the flawed assumption that the believing-in-god option is the less desirable one that we would want a way out of. Things are way more complicated than that. Firstly, individuals and societies shape their religious practice and beliefs around what they will find comfortable and easy. It's essentially how they'd behave and act anyway, OR how they aspire to behave as they perceive it to be more noble or healthy, etc. Secondly, religion immediately gives people some very real things that humans need. One being an identity (both a group tribal identity and an individual one). We can also speculate about other real things that religion may provide (in this life) like hope, comfort in crisis & loss, etc. There are also many achievements we have made as a species in the name of God, which would not have happened otherwise. Just look at the immensity of the cathedrals all over Europe in person. Also the abolition of slavery and countless religiously motivated colonial outings (which despite their transgressions did spur human progress). We wouldn't have done these things without God. So when we come back to Pascal's mugging, it's as if we are agreeing to give the mugger our wallet to avoid eternal hellfire. But at the same time he allows us to empty the wallet first and then rewards us by fulfilling our entire Amazon wish list. Immediately in this world, not in an imagined afterlife.
@thedj67 5 years ago
In light of this, what's your take about the Precautionary Principle and it's application over different fields (namely agriculture, pharmaceuticals, radio-waves, etc.). Isn't it a example of Pascal's mugging ?
@uegvdczuVF 5 years ago
I wouldn't say it is. The precautionary principle is more of a "even though we are not able to understand this exactly, if we expect a negative outcome, we are not going to do it". In most of those fields you named the negative outcome is not highly unlikely, so it can't be Pascal's mugging. Even if the chances of a negative outcome are astronomically small in any one given case (one field with one crop, one patient taking one pill, etc) considerations are made for the overall risk. Just like in his example, a 1 in 250 chance of a bridge collapsing can't be considered acceptable (or a Pascal's mugging) given the hundreds of thousands of bridges across the world.
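A rough sketch of that aggregate-risk point; the bridge count here is an assumed illustrative figure, not a real statistic:

```python
# A per-bridge failure probability that sounds small becomes a near-certainty of
# some failure once it is applied across a large stock of bridges.
p_fail = 1 / 250
n_bridges = 500_000                      # illustrative "hundreds of thousands" of bridges
expected_collapses = p_fail * n_bridges
p_at_least_one = 1 - (1 - p_fail) ** n_bridges
print(expected_collapses)                # 2000.0 expected collapses
print(p_at_least_one)                    # ~1.0 -- essentially certain at least one fails
```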
@theprogram863 5 years ago
That depends I think on how it's used. Let's distinguish between risk and uncertainty using the definition of economist Frank Knight here. He defined risk as known probabilities for foreseeable outcomes. Uncertainty was the unknown probabilities associated with unforseeable outcomes. So rolling dice is an event with plenty of risk but very little uncertainty. Traveling far back in time and stomping around crushing butterflies has very little risk but is immensely uncertain-- it's an act rife with unforeseeable outcomes. In some cases, further research can turn uncertainty about the effects of a decision into one or more known outcomes which may or may not still be risky. So now we get to the Precautionary Principle. It seems to me that the Principle is invoked in a variety of situations. Sometimes, you get a Pascal's Wager-type scenario (maybe our vitamin-A enriched rice will mutate into a superbug and kill us all). In those cases, it's pure fear-mongering. But in other cases, the Principle might be used legitimately (for example, to point out that a new strain of rice might cause unknown ecological outcomes given that ecologies are highly interrelated, complex, and not fully understood). In the latter case, you might point to the effects of invasive species, often intentionally introduced at the behest of scientists who *thought* they understood the effects. A tell here is that the former, Pascal-ish argument isn't associated with probabilities or evidence. So it's an absolute whether you introduce new information or not. The latter case is rife with uncertainty, which additional research CAN help resolve so that we would better understand the risks (projected outcomes and the probabilities associated with them). Also note that most of these cases are looked at in terms of a one-sided payout matrix by people trying to stop a course of action. But really, all three examples you give have costs and benefits. Release a new drug and it MIGHT cause harm, but not releasing it WILL cause harm to those who would have benefited from it. Golden rice (Vit. A fortified) is a real genetically modified invention. It's even been offered royalty-free for the betterment of humanity. Permitting its use MIGHT cause an ecological or medical catastrophe due to unintended consequences, but not permitting it DOES cause millions of cases of malnutrition every year, and nearly a million deaths of children. Radio waves, hydro-fracking, and other technologies often have positive economic consequences that would improve standards of living and reduce poverty on one hand, versus risks and uncertainties regarding possible unintended consequences on the other. Similarly, a law or government program MIGHT work or it might not work, but that has to be balanced against the harm that not acting will permit to happen. I think these Pascal-ish arguments lend themselves to absolutist positions that lobbyists and lawyers love. They're easy to explain and not clouded by details and scientific evidence. I might have a mountain of evidence that shows the benefits of something, but an opponent can simply smirk and respond, "...but who knows? Maybe there will be a disaster. You can't prove that there won't be!" It lets any layman form an opinion that's immune to expert information or logical argument. The tricky thing is that there ARE situations high in uncertainty, and forecasting payoffs and probabilities in those cases is nearly impossible because they're highly complicated and we don't know what we don't know.
@rewrose2838
@rewrose2838 5 жыл бұрын
Hey , at 5:50 , you used my favourite kind of meme, the Spiderman-is-always-relevant-in-all-contexts meme!
@likebot.
@likebot. Жыл бұрын
I've seen you many times on other channels thanks to Brady Haran and never knew you had a YT channel. So I'm well rewarded with the quip at 4:15 "... now get the hell out of my house". Nice one.
@charby5875
@charby5875 3 жыл бұрын
I was recently introduced to the concept of Roko's Basilisk, which is an interesting and terrifying thought experiment. There are definite parallels between it and Pascal's wager; I just can't nail down where the two differ, really.
@EvansRowan123
@EvansRowan123 3 жыл бұрын
It's possible to present Roko's Basilisk as a Pascal's mugging/wager, but the defining traits of Pascal's mugging are the tiny probability and extreme payoff, which aren't necessary for Roko's Basilisk and actually detract from it. Roko's Basilisk is mostly aimed at singularitarians, to convince them to do something about their beliefs; it isn't meant to convince anyone who doesn't believe in AGI that they should act like it anyway.
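For readers unfamiliar with why the tiny probability and extreme payoff are the defining traits, here is a toy sketch (all numbers invented) of how a naive expected-value calculation gets swamped by an arbitrarily inflated claimed payoff:

```python
# Toy illustration of the Pascal's mugging structure (all numbers invented).
# A naive expected-value maximizer hands over the wallet whenever
# credence * claimed_payoff exceeds the cost, and the mugger can always
# inflate the claimed payoff.

cost_of_wallet = 100
credence = 1e-9  # fixed (tiny) probability that the mugger is telling the truth

for claimed_payoff in (1e6, 1e12, 1e100):
    ev_of_paying = credence * claimed_payoff - cost_of_wallet
    print(f"claimed payoff {claimed_payoff:.0e}: EV of paying = {ev_of_paying:.3g}")

# Unless the credence shrinks at least as fast as the claimed payoff grows,
# a big enough claim always "wins" the naive calculation.
```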
@suddenllybah
@suddenllybah Жыл бұрын
Roko's Basilisk is a Pascal's Mugger
@inceptori
@inceptori 4 жыл бұрын
only youtuber to philosophically prove that his channel is relevant.
@PartScavenger
@PartScavenger 3 жыл бұрын
I am a Christian, and I think this video is great! Thanks for the awesome content.
@psychopathsnope_9039
@psychopathsnope_9039 3 жыл бұрын
I'm definitely not a professional in the field, but the lesson I always drew from Pascal's wager is that unprovable facts of unimaginable impact are not grounds to discard all investigation; in fact, I would consider them grounds for more investigation, due to the importance of coming to an absolute conclusion, especially in cases that can create diametrically opposed possibilities.
@goodlookingcorpse
@goodlookingcorpse 5 жыл бұрын
I think that one problem is that, for example, a one in a thousand chance causes roughly the same alarm as a one in a million chance--but in certain circumstances the appropriate reaction to the two can be quite different.
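To make that concrete, here is a tiny sketch, with an invented loss figure, of how far apart those two probabilities are in expected-loss terms even though they can trigger roughly the same alarm:

```python
# Expected loss for a harm valued at a placeholder 1,000,000 units.
loss = 1_000_000
for p in (1 / 1_000, 1 / 1_000_000):
    print(f"p = {p:.6f}  ->  expected loss = {p * loss:,.0f}")
# p = 0.001000  ->  expected loss = 1,000
# p = 0.000001  ->  expected loss = 1
```

A thousand-fold gap in probability is a thousand-fold gap in expected loss, so reacting to both with the same level of alarm (or the same budget) is usually a mistake in one direction or the other.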
@edeneden97
@edeneden97 5 жыл бұрын
I feel like this video was a bit less clear than your other videos. Great content anyway, keep it up
@zeidrichthorene
@zeidrichthorene 5 жыл бұрын
I think a lot of the focus on AI safety is on the fear of some kind of existential risk or runaway AGI, but there's another threat from AI that I think goes underrepresented, and that's just correctly functioning, benign AI's impact on the human psyche, human sociology, and politics. Humans are, as a species, pretty damn adaptable to changing conditions, but we're not perfectly adaptable. We've seen sociological impacts of technology, especially communications technology, affecting people negatively. Research shows how things like social media affect our mental health and perception of self in society, and how dating apps greatly change the way we pair up and the pool of people we compete with.

When it comes to AI, a lot of changes here already impact our lives. Currently we have algorithms with some sort of AI component that suggest material to read or view, that autocomplete our sentences, that make suggestions for things we might not have considered before. All of these things have an impact on our daily lives, our health, our perceptions. Now, I'm not saying that we're damaged by the current state of things. Simply, we're affected whether we want to be or not. This isn't something an individual can choose to ignore. When an algorithm becomes very good at promoting stories that people engage with, stories that are more engaging become more available. When this leads to divisive politics, this isn't a fault of the algorithm; it's more of a human limitation in the way we weight risk and fear higher than reward and contentedness. But even if you as an individual can avoid these sorts of biases, for society it will change the political landscape. Currently the pace is such that these changes feel manageable, or at least seem manageable. But AI progress can accelerate rapidly. Even if the AI doesn't act unsafely in terms of unintended consequences of its behavior, the cumulative effects of multiple AI systems could change the landscape so rapidly that we ARE damaged as a society by the rapid pace of change.

And I think there is a potential anti-AI-safety case hidden here. When we train ML models, there's something unintuitive, or at least displeasing, which is that the more we try to direct the training, the more human assumptions we provide to guide the behavior, the poorer and more restricted the result becomes. In doing AI safety research, the aim is to use our human understanding to limit the development of a potential AGI, which will then introduce human biases to the system. And while this might sound like I'm suggesting we are introducing unintended consequences, I'll even discount that and assume we do it perfectly. Even with no unintended consequences, the argument for AI safety is essentially to limit (but continue to develop) AI. So we run into a situation where the growth of AI will continue to accelerate, while the human ability to adapt to environmental and social change will not meaningfully accelerate, because we're limited by our biology. In this case, AI will cause harm on the current path. One potential solution for mitigating that harm could come from AI development, but AI development will be limited by AI safety.

Essentially, this might be a problem we can't solve without some unexpected behavior from AI; but if the goal of AI safety is to limit unexpected behavior from AI, we could be forcing ourselves down a path that causes certain damage while working hard to eliminate the very thing that could fix the problem. Now, I don't know how devastating a "controlled-AI" progression would be to us. But I do see that currently "safe" AI is affecting us, occasionally negatively, and at an increasing pace. I also don't know whether an "uncontrolled-AI" could save us, because it really does seem like a longshot. And similarly, I don't know how much worse an "uncontrolled-AI" would be in the interim; it's entirely possible it would be more likely to destroy us before a safe AI would. But consider a longshot: you have an illness, and there's a pill you can take that has a 20% chance of letting you defeat the illness and an 80% chance of killing you in 4 years. If the illness WILL kill you in 5 years, then even at bad odds this might be a good choice. If the illness will never kill you, then it's a terrible wager. In the end, I think we need to look at both: consider how dangerous "safe" AI is as well, and weigh that possibility.
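The pill analogy at the end can be made concrete with a rough expected-value sketch. The only figures taken from the comment are the 20%/80% odds and the 4-year/5-year horizons; the natural_lifespan value is an arbitrary placeholder assumption:

```python
# Rough sketch of the pill wager above (20%/80% odds and the 4- and 5-year
# horizons come from the comment; natural_lifespan is an arbitrary placeholder).

def expected_remaining_years(take_pill, illness_kills_in=None, natural_lifespan=40):
    if take_pill:
        # 20% chance the pill cures you, 80% chance it kills you in 4 years.
        return 0.2 * natural_lifespan + 0.8 * 4
    # No pill: you live until the illness kills you, or to your natural lifespan.
    return illness_kills_in if illness_kills_in is not None else natural_lifespan

print(expected_remaining_years(True))                          # 11.2
print(expected_remaining_years(False, illness_kills_in=5))     # 5   -> taking the pill wins
print(expected_remaining_years(False, illness_kills_in=None))  # 40  -> the pill is a terrible wager
```

The same 20/80 gamble flips from reasonable to terrible depending entirely on the do-nothing baseline, which is exactly the comparison the comment asks for between "controlled" and "uncontrolled" AI.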
@RagingPanic
@RagingPanic 5 жыл бұрын
Great video as always. I love that rational arguments can actually beat arguments that rationality doesn't work, lol.
@Aerroon
@Aerroon 5 жыл бұрын
These are very interesting ideas! I think I even asked a question along the lines of "how likely is this threat of AI any time soon?" This video doesn't directly answer it, but it does address it and gives so much more food for thought. Thank you for enlightening us some more!
@PhilosopherRex
@PhilosopherRex 5 жыл бұрын
Love your work Miles! Keep it up. ;-)
@nicholasiverson9784
@nicholasiverson9784 Жыл бұрын
I mean... with a sufficiently advanced general AI, if it decided "I'm going to end humanity; I wonder how best to do that," and we have an entire field of people actively thinking up worst-case scenarios for that AI to peruse at its leisure... I could see how that might end poorly for us xD
@timhill9039
@timhill9039 5 жыл бұрын
Excellent and thought-provoking video! Thank you so much for posting this.
@quantummechanist1
@quantummechanist1 5 жыл бұрын
Very good video, much appreciated! Whoever came up with Pascal's mugging didn't get the concept of God that Pascal had in mind: the reality being a necessary singularity, which differs from the character, which is subject to evidential scrutiny, but it's interesting nonetheless. If we have learnt anything from human psychology, it is that there are many social and subconscious forces keeping society from doing itself in, something that AI does not have built in unless we make it so.
@ganondorf5573
@ganondorf5573 5 жыл бұрын
This was really interesting... You indirectly addressed my one concern with AI safety, but I wanted to explain it directly; maybe you could cover it in more detail: if we implement safety measures, and the AI circumvents them and becomes self-aware, it's possible that the very fact we attempted to impose some kind of safety (rules about how the AI behaves, or limitations on it) is what would cause the AI to consider us a threat.
@seanmatthewking
@seanmatthewking 3 жыл бұрын
It would only view us as a threat or obstacle if its goal conflicted with what we wanted, and if that’s the case, having an AI that was built without regard for safety certainly won’t help us.
@krinkrin5982
@krinkrin5982 Жыл бұрын
@@seanmatthewking The idea is that self-awareness comes with the desire to preserve your own free will, or whatever you consider your free will. Humans in general are fiercely independent, and we need a lot of training to actually consider following rules. If the AI can set its own goals, then we really have no control over what it could consider a threat.
@eliyasne9695
@eliyasne9695 4 жыл бұрын
7:39 "Human extinction, or worse" How the hell could that get significantly worse than that?
@RobertMilesAI
@RobertMilesAI 4 жыл бұрын
You can't imagine anything worse than being dead?
@Wonders_of_Reality
@Wonders_of_Reality 4 жыл бұрын
@@RobertMilesAI Living in North Korea?
@gearandalthefirst7027
@gearandalthefirst7027 3 жыл бұрын
@@Wonders_of_Reality I have no mouth and I must scream came to mind a lot faster than NK but to each their own
@nicomal
@nicomal Жыл бұрын
You can also use Hitchens's razor: "what can be asserted without evidence can also be dismissed without evidence."
@WolfJ
@WolfJ Жыл бұрын
Razors are logical heuristics, not proofs, so Hitchens's razor is just a declaration that you're not going to waste your time coming up with counterarguments to it. That's fair in one's day-to-day life, and maybe while debating, but it doesn't show the absurdity in Pascal's wager (as "anti-G-d" and Pascal's mugger do).
@JeiJozefu
@JeiJozefu 4 жыл бұрын
I like thinking about The Milgram Shock Experiment, specifically the later reevaluations of its findings. The later studies suggest that people who have considered the implications of the Milgram experiment in advance are more likely to make the right choice in the moment. Ethical considerations and critical thinking can be done ahead of time, can be done at leisure. Planning for possibilities can help the decision making process if those possibilities or similar are later encountered.
@bobjames3948
@bobjames3948 5 жыл бұрын
@11:40 While in theory looking at safety obviously shouldn't make something more dangerous, in the case of AI perhaps the way it's portrayed could be damaging. For example, most of the newspaper articles I've seen about AI safety (or, more generally, issues with modern technologies) come with a Terminator photo or similar "the robots are taking over" undertones. These aren't self-fulfilling prophecies, but I feel like they miss a lot of the point of this kind of work, and fearmongering certainly won't do anything to help/improve the discussion. If this "technology is (inherently) evil" attitude continues, AI research/development may have to be done more secretly, which obviously means information isn't being shared as completely or openly. Would be interested to hear of some other ways any of you think AI safety research could make it more dangerous.
@akmonra
@akmonra 4 жыл бұрын
I'm surprised you didn't bring up Roko's Basilisk... which I would say actually is a Pascal's mugging (or at least very close).
@jdddiah
@jdddiah 4 жыл бұрын
It's nice that you provide the solution to the final issue within the setup. If the dangers of AI involve the limitless actionable power of a well-connected AI gone rogue, and that AI feels it necessary to become superior, it seems all we'd need is a second AI that "believes" the same thing to square off with the first. If they're designed exactly alike, it will be a totally symmetrical fight, either going on forever or ending immediately. Which is actually different from an "Age of Ultron" situation and more like WOPR freaking out at the end of "War Games".
@MalcolmAkner
@MalcolmAkner Жыл бұрын
Damn, I've heard about (and been annoyed by) Pascal's wager my entire life, it seems, and here you come along and show, with its own logic, how utterly exploitable it is. Wonderful connection to AI safety; really interesting point you're making here!
@Trophonix
@Trophonix 4 жыл бұрын
".. and.. being right" 10/10 edit: "but at least you still have your wallet" this is such a quotable video. I may have to become a patron now you are amazing
@sandwich2473
@sandwich2473 5 жыл бұрын
I wish everyone could watch this video, and understand its message.
@Fault401
@Fault401 5 жыл бұрын
Watched the whole thing and was entertained. 10/10 Excellent work.
@andrewwatts1997
@andrewwatts1997 3 жыл бұрын
"WHAT! Just look at the schematic would you ? " That cracked me up. I love your videos man !
@hansisbrucker813
@hansisbrucker813 4 жыл бұрын
"Natural language is extremely vague when talking about uncertainty". Haha I see what you did here 🤣
@Abdega
@Abdega 5 жыл бұрын
I pit the muggers against each other and one of them has slain and eaten the other muggers and absorbed their power! He’s now the Mega Mugger and now he’s coming to torture me for eternity *AND* take my wallet and there’s nothing I can do about it! Time is short, he’s almost eaten through the vault now and I have to get the message across. If anyone ever encounters him, his name is-
@MySerpentine
@MySerpentine 3 жыл бұрын
Terry Pratchett pointed out that God might be annoyed by you pretending, which amused me: "Upon his death, the philosopher in question found himself surrounded by a group of angry gods with clubs. The last thing he heard was 'We're going to show you how we deal with Mister Clever Dick around here.'"
@carpenoctem3257
@carpenoctem3257 4 жыл бұрын
“She goes to a different school, you wouldn’t know her” I’m done looool