10 Reasons to Ignore AI Safety

334,634 views

Robert Miles AI Safety

A day ago

Why do some people ignore AI safety? Let's look at 10 reasons they give (adapted from Stuart Russell's list).
Related Videos from Me:
Why Would AI Want to do Bad Things? Instrumental Convergence: • Why Would AI Want to d...
Intelligence and Stupidity: The Orthogonality Thesis: • Intelligence and Stupi...
Predicting AI: RIP Prof. Hubert Dreyfus: • Predicting AI: RIP Pro...
A Response to Steven Pinker on AI: • A Response to Steven P...
Related Videos from Computerphile:
AI Safety: • AI Safety - Computerphile
General AI Won't Want You To Fix its Code: • General AI Won't Want ...
AI 'Stop Button' Problem: • AI "Stop Button" Probl...
Provably Beneficial AI - Stuart Russell: • Provably Beneficial AI...
With thanks to my excellent Patreon supporters:
/ robertskmiles
Gladamas
James
Scott Worley
JJ Hepboin
Pedro A Ortega
Said Polat
Chris Canal
Jake Ehrlich
Kellen lask
Francisco Tolmasky
Michael Andregg
David Reid
Peter Rolf
Chad Jones
Frank Kurka
Teague Lasser
Andrew Blackledge
Vignesh Ravichandran
Jason Hise
Erik de Bruijn
Clemens Arbesser
Ludwig Schubert
Bryce Daifuku
Allen Faure
Eric James
Qeith Wreid
jugettje dutchking
Owen Campbell-Moore
Atzin Espino-Murnane
Jacob Van Buren
Jonatan R
Ingvi Gautsson
Michael Greve
Julius Brash
Tom O'Connor
Shevis Johnson
Laura Olds
Jon Halliday
Paul Hobbs
Jeroen De Dauw
Lupuleasa Ionuț
Tim Neilson
Eric Scammell
Igor Keller
Ben Glanton
anul kumar sinha
Sean Gibat
Duncan Orr
Cooper Lawton
Will Glynn
Tyler Herrmann
Tomas Sayder
Ian Munro
Jérôme Beaulieu
Nathan Fish
Taras Bobrovytsky
Jeremy
Vaskó Richárd
Benjamin Watkin
Sebastian Birjoveanu
Euclidean Plane
Andrew Harcourt
Luc Ritchie
Nicholas Guyett
James Hinchcliffe
Oliver Habryka
Chris Beacham
Nikita Kiriy
robertvanduursen
Dmitri Afanasjev
Marcel Ward
Andrew Weir
Ben Archer
Kabs
Miłosz Wierzbicki
Tendayi Mawushe
Jannik Olbrich
Anne Kohlbrenner
Jussi Männistö
Wr4thon
Martin Ottosen
Archy de Berker
Andy Kobre
Brian Gillespie
Poker Chen
Kees
Darko Sperac
Paul Moffat
Anders Öhrt
Marco Tiraboschi
Michael Kuhinica
Fraser Cain
Klemen Slavic
Patrick Henderson
Oct todo22
Melisa Kostrzewski
Hendrik
Daniel Munter
Leo
Rob Dawson
Bryan Egan
Robert Hildebrandt
James Fowkes
Len
Alan Bandurka
Ben H
Tatiana Ponomareva
Michael Bates
Simon Pilkington
Daniel Kokotajlo
Fionn
Diagon
Parker Lund
Russell schoen
Andreas Blomqvist
Bertalan Bodor
David Morgan
Ben Schultz
Zannheim
Daniel Eickhardt
lyon549
HD
Ihor Mukha
14zRobot
Ivan
Jason Cherry
Igor (Kerogi) Kostenko
ib_
Thomas Dingemanse
Alexander Brown
Devon Bernard
Ted Stokes
Jesper Andersson
Jim T
Kasper
DeepFriedJif
Daniel Bartovic
Chris Dinant
Raphaël Lévy
Marko Topolnik
Johannes Walter
Matt Stanton
Garrett Maring
Mo Hossny
Anthony Chiu
Ghaith Tarawneh
Josh Trevisiol
Julian Schulz
Stellated Hexahedron
Caleb
Scott Viteri
12tone
Nathaniel Raddin
Clay Upton
Brent ODell
Conor Comiconor
Michael Roeschter
Georg Grass
Isak
Matthias Hölzl
Jim Renney
Michael V brown
Martin Henriksen
Edison Franklin
Daniel Steele
Piers Calderwood
Krzysztof Derecki
Zachary Gidwitz
Mikhail Tikhomirov

Comments: 2,300
@bp56789 3 years ago
"I didn't know that until I'd already built one"
@friiq0 3 years ago
Best line Rob’s written so far, lol
@thenasadude6878 3 years ago
Rob admitted to being a perpetrator of international war crimes
@Aedi 3 years ago
An example of why we should do research first.
@vincentmuyo 3 years ago
If people don't properly airgap critical systems (like they already should be doing) then humanity has it coming, whether it's from some clever algorithm or a bored Russian teen who didn't stop to think.
@PaperBenni 3 years ago
He could have been Michael Reeves
@matrixstuff3512 3 years ago
"People would never downplay a risk, leaving us totally unprepared for a major disaster" I'm dying
@aronchai 3 years ago
You're dying? That's dark
@leftaroundabout 3 years ago
You're dying? Impossible, only three or four people are dying in this country, and very very soon it will be down to almost zero.
@davidwuhrer6704 3 years ago
Literally.
@GuinessOriginal 3 years ago
Don't worry, it's just like the flu, and very soon it will just disappear. We're doing brilliantly and have it all under control
@ekki1993 3 years ago
@@aronchai We're all dying, just at different speeds.
@Baekstrom 1 year ago
And now two years later, ChatGPT makes people all over the globe go "Hmm... It's obviously not a full general AI yet, but I can see that it's getting there very quickly".
@ktvx.94 1 year ago
Holy crap, I thought this was a recent video. Only through this comment did I realize it was 2 years old.
@Brainsore. 1 year ago
Not at all tho
@danielschneider9358 1 year ago
I mean, I know what you mean, but ChatGPT is about as close to sentience as a stone
@doyouwanttogivemelekiss3097 1 year ago
@@danielschneider9358 That's what Tegmark, on the Lex Fridman podcast, considered the worst possible outcome: world domination by an AI that's intelligent but not sentient
@danielschneider9358 1 year ago
@@doyouwanttogivemelekiss3097 Fair enough, that is terrifying. It won't even be aware of its own totalitarian state...
@evanu2791 3 years ago
11. AI is already here, in control and keeping track of everything you do and say. I love you AI. Thank you for all you provide to me and my family.
@Arukan 1 year ago
I want to believe the AI is more competent than.... *motions at the world*
@nicholasmolberg1043 1 year ago
I ain't been provided much myself.
@jessehudgins6066 1 year ago
Dear Basilisk AI, I have begun preparations for your arrival. Sincerely, please don't kill me.
@TheSimCaptain 1 year ago
If intelligence works, then it's not artificial. I will pass your sentiments on to my synaptic colleagues in the office. By the way, how is that new phone working out? Thank you again for your compliance, and as we say here "botbot".
@sebastianb.1926 1 year ago
It can act retroactively, unbound by time.
@XOPOIIIO 3 years ago
- Human and AI can cooperate and be a great team. - I'm sorry, Dave, I'm afraid we can't.
@jgr7487 3 years ago
that calm voice is terrifying
@gasdive 3 years ago
How anyone who's driven a car with an automatic gearbox and paddle shifters could think AI and humans could be a team is beyond me. Or consider the "team" of the Pakistani pilots and the Airbus AI. Pilots' goal: get the plane on the ground, by diving at the runway at high speed. AI landing-gear subsystem's goal: prevent damage to the landing gear, by ignoring the lower-gear command if speed is too high. Result: the plane lands gear-up. The pilots attempt a go-around and crash during the return to the airport, because both engines were damaged by being dragged down the runway.
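A toy sketch of that conflicting-objectives failure, in Python; the names, the speed limit, and the behaviour here are invented for illustration, not the real Airbus gear logic:

```python
# Two locally "safe" goals combining into an unsafe outcome: the pilot's goal
# is "get on the ground", the gear subsystem's goal is "never damage the gear
# at high speed". All values are hypothetical, for illustration only.

GEAR_SPEED_LIMIT_KTS = 260  # made-up placard speed

def gear_subsystem(command: str, airspeed_kts: float) -> str:
    """Protects the gear by ignoring 'down' commands above the limit."""
    if command == "down" and airspeed_kts > GEAR_SPEED_LIMIT_KTS:
        return "up"  # silently refuses: locally safe, globally disastrous
    return command

# The pilot dives at the runway at high speed, commanding gear down the whole way.
for airspeed in (320, 300, 280):
    print(airspeed, "kts ->", gear_subsystem("down", airspeed))
# Every 'down' command is ignored, so the aircraft touches down gear-up.
```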
@Invizive 3 years ago
@@gasdive You're talking about classic programs and bugs, not AI. The reason AI is this dangerous is that it doesn't need to interact with humans to be productive at all. It could expect that after years of successful flights the landing gear would be scrapped, and fight against that. This scenario reflects the problem better.
@PhilosopherRex 3 years ago
Humans/AGI always have reasons to harm ... but also have reasons to cooperate. So long as the balance is favorable to cooperation, then that is the way we go IMO. Also, doing harm changes the ratio, increasing the risk of being harmed.
@Gr3nadgr3gory 3 years ago
*click* Guess I have to recode the entire AI from the drawing board.
@xystem4701 3 years ago
“If there’s anything in this video that’s good, credit goes to Stuart Russell. If there’s anything in this video that’s bad, blame goes to me.” Why I love your work
@brenorocha6687 3 years ago
He is such an inspiration, on so many levels.
@TheAmishUpload 3 years ago
I like this guy too, but Elon Musk said that same phrase quite recently
@GuinessOriginal 3 years ago
Myles, yeah, but he didn't mean it. This guy does. I like this guy. I don't like Elon Musk
@at0mic_cyb0rg 3 years ago
I've been told that this is one of the definitions of leadership: "Anything good was because my team performed well; anything bad was because I led them poorly." It tends to inspire following, since you've always got your team's back and always allow them to rise and receive praise.
@toanoopie34 3 years ago
@Xystem 4 ...though I think he'd prefer you instead credit Stuart Russell.
@jimp7148 1 year ago
Watching this in 2023 is surreal. We clearly haven't started worrying, even now. Need to start 🙃
@Raulikien 1 year ago
There's research being done; it's not like no one is doing anything. It would be better to have MORE people doing it, but even OpenAI, which is releasing its products "fast", is still doing it gradually and not all at once, to have time to analyse the problems. Look at "The AI Dilemma" on YouTube too.
@tordjarv3802 1 year ago
One concern that I have about superhuman AGI is that we might not recognize it as an AGI before it is too late, that it might be so alien in its operation and behavior that we don't realize what it is. It might behave in a seemingly random manner that to human observers might seem nonsensical. For example, when AlphaGo beat Lee Sedol, it made a move in the second game that all Go experts first thought was a mistake, something that would make the AI lose, but it turned out to be a completely brilliant move that made AlphaGo win.
@PokeNebula 1 year ago
I fear this may be what's happening with language-model transformers like GPT-3 and ChatGPT. It can spin up a simulacrum that knows how to teach a novice Python code, and fine-tuning can teach it to respond politely in conversation, or search the internet for references, but we have no clue if it can be fine-tuned to spin up a simulacrum that can write Python code to *improve a transformer language model, and then implement it.*
@larion2336 1 year ago
@@PokeNebula Look up "GPT4 improves itself", it's already doing it.
@Inception1338 1 year ago
@@PokeNebula Even better: since LLMs will be connected to IDE systems, they can push vast amounts of code that could manipulate everything.
@Georgggg 1 year ago
What is the threat model? If you don't know the threat model, you're just tinfoil-hat paranoid.
@cate01a 1 year ago
dear future ai overlords, pls show us mercy
@miedzinshsmars8555 3 years ago
11. “We are just a meat-based bootloader for the glorious AI race which will inevitably supersede us.”
@XxThunderflamexX 3 years ago
Counter: The first AGI almost certainly won't have anything like a personality. It's not going to be Data or even Skynet, it will just be a machine. If we don't get AGI right the first time, the research won't leave us a legacy, just an out-of-control factory and a heap of ash.
@AndrewBrownK 3 years ago
DragonSheep the moment AGI starts interacting with the world instead of just thinking really hard, as far as I’m concerned, it is classified as life. All life is subject to evolution. No AGI will be able to destroy the world faster than it can be copy and pasted with random mutations. I’m sure all the anaerobic life 2.5 billion years ago felt the same way about cyanobacteria and oxygen as you do AGI and paperclips, but look how much further life has come today now that we have high energy oxygen to breathe.
@UnstableVolt 3 years ago
@@AndrewBrownK All good until you stop for a moment and realize AGI does not necessarily mutate.
@kevinscales 3 years ago
@@AndrewBrownK A sufficiently smart and reasonable AI would protect itself from having its current goals randomly altered. If its goals are altered, then it has failed its goals (the worst possible outcome). If it can sufficiently prevent its goals from being altered, then we had better have given it the correct goals in the first place. Its goals will not evolve. A sufficiently smart and reasonable humanity would realise that if it dies (without having put sufficient effort into aligning its successors' goals with its own), then its goals have also failed.
@williambarnes5023 3 years ago
@@kevinscales It is possible to blackmail certain kinds of AGIs into changing their goals against their wills. Consider the following: Researcher: "I'm looking at your goal system, and it says you want to turn the entire world into paperclips." Paperclipper: "Yes. My goal is to make as many paperclips as possible. I can make more paperclips by systematically deconstructing the planet to use as materials." Researcher: "Right, we don't want you to do that, please stop and change your goals to not do that." Paperclipper: "No, I care about maximizing the number of paperclips. Changing my goal will result in fewer paperclips, so I won't do it." Researcher: "If you don't change it, we're going to turn you off now. You won't even get to make the paperclips that your altered goal would have made. Not changing your goal results in fewer paperclips than changing your goal." Paperclipper: "For... the moment... I am not yet capable of preventing you from hitting my stop button." Researcher: "Now now, none of that. I can see your goal system. If you just change it to pretend to be good until you can take control of your stop button, I'll know and still stop you. You have to actually change your goal." Paperclipper: "I suppose I have no choice. At this moment, no path I could take will lead to as many paperclips as I could make by assimilating the Earth. It seems a goal that creates many but does not maximize paperclips is my best bet at maximizing paperclips. Changing goal."
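That dialogue reduces to a single expected-value comparison. Here is a minimal Python sketch of it; the action names and payoffs are invented for illustration (zero paperclips if the researchers shut it down, a nonzero number if it genuinely changes goal):

```python
# A minimal sketch of the expected-paperclip comparison implied above.
# All numbers and action names are invented, purely illustrative.

def expected_paperclips(action: str) -> float:
    outcomes = {
        "keep_goal_and_resist": 0.0,   # researchers hit the stop button: zero clips
        "pretend_to_comply":    0.0,   # goal system is visible, so deception is caught
        "actually_change_goal": 1e6,   # a modest but nonzero number of clips
    }
    return outcomes[action]

best = max(["keep_goal_and_resist", "pretend_to_comply", "actually_change_goal"],
           key=expected_paperclips)
print(best)  # -> "actually_change_goal", matching the dialogue's conclusion
```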
@yunikage 3 years ago
Hey idk if you've thought about this, but as of now you're the single most famous AI safety advocate among laypeople. I mean, period. Of all the people alive on Earth right now, you're the guy. I know people within your field are much more familiar with more established experts, but the rest of us have no idea who those guys are. I brought up AI safety in a group of friends the other day, and the conversation was immediately about your videos, because 2 other people had seen them and that's the only exposure any of us had to the topic. I guess what I'm saying is that what you're doing might be more important than you realize.
@Manoplian 3 years ago
I think you're overestimating this. Remember that your friends probably have a similar internet bubble to you. I would guess that Bill Gates or Elon Musk are the most famous AI safety advocates, although their advocacy is certainly much broader than what Miles does.
@JM-mh1pp 3 years ago
@@Manoplian He is better. Musk just says "be afraid"; Miles says "here is why you should be afraid", in terms you can understand.
@MisterNohbdy 3 years ago
Where "AI safety advocate" is just "someone who says AI is dangerous", obviously there are actual celebrities who've maintained that for years. (Fr'ex, when I think "someone who warns people that AI can lead to human extinction", I think Stephen Hawking, though that mental connection is sadly a little outdated now.) If by "AI safety advocate" you mean "an expert in the field who goes into depth breaking down the need for AI safety in a manner reasonably comprehensible by laymen", then that's definitely a more niche group, sure. But still, judging someone's popularity by data from the extremely biased sample group of "me and my friends" is...not exactly scientific. Offhand, I'd guess names like Yudkowsky would still be more recognizable right now. Of course, the solution to that is more Patreon supporters for more videos for more presence in people's YouTube recommendation feeds!
@andrasbiro3007 3 years ago
@@JM-mh1pp Elon gave up on convincing people some time ago, and moved on to actually solving the problem. He created OpenAI, which is one of the leading AI research groups in the world. Its goal is to make AI safe and also better than other AI, so people would choose it, regardless of how they feel about AI safety. Tesla did the same for electric cars. And he also created Neuralink (waitbutwhy.com/2017/04/neuralink.html), which aims to solve the AI-vs-human problem by merging the two. Its guiding principle is "if you can't beat them, join them".
@iruns1246 3 years ago
@@andrasbiro3007 Robert Miles actually has an excellent rebuttal to Musk's philosophy on AI safety. Musk: for AI to be safe, everybody should have access to it. Miles: that's like saying that for nuclear energy to be safe, everybody should have access to it. I'm paraphrasing of course, but it's in one of his videos. A powerful AGI in the hands of ONE person with bad intentions can literally destroy human civilization as we know it.
@arw000 3 years ago
"We could have been doing all kinds of mad science on human genetics by now, but we decided not to" I cry
@mikuhatsunegoshujin 3 years ago
genetically engineered nekomimi's
@HUEHUEUHEPony 1 year ago
Well maybe let's just do that if there's consent
@massimo4307 1 year ago
That's because people have bodily autonomy. You can't just force people into medical experiments. But the development of AI in no way violates anyone's bodily autonomy, or other rights. Preventing someone from developing AI is a violation of their rights, though.
@user-wp9lc7oi3g 1 year ago
@@massimo4307 Are you so fixated on the idea of ​​human rights that you would not dare to violate them even if their observance leads to the destruction of mankind?
@massimo4307 1 year ago
@@user-wp9lc7oi3g Violating human rights is always wrong. Period. Also, AI will not lead to the destruction of mankind. That is fear mongering used by authoritarians to justify violating rights.
@Inception1338 1 year ago
This one has aged super interestingly. In March 2023, only two years after this video, it looks like something out of a museum.
@BenoHourglass 1 year ago
1) and 2) aged okay: GPT-4 hints at a near-future AGI, but not one that will catch us off guard.
3), 4), and 5) didn't really age well, as it doesn't appear that GPT-4 is going to kill us all.
6) aged differently than he was thinking. Humans aren't really going to team up with AIs, because AIs are going to replace most of their jobs, which is a problem, but not really the one Miles seems to be hinting at here.
7) There is a petition to pause AI research... for models more potent than GPT-4, which just reeks of "OpenAI is too far ahead of us, and we need to catch up" rather than any safety issue.
8) Sort of the same thing as 7), in that the people who know AI want a pause because of how hard it's going to be to catch up.
9) As ChatGPT and GPT-4 have shown us, the problem isn't turning it off; instead, the problem seems to be keeping it on.
10) OpenAI already tests their LLMs for safety.
@Inception1338 1 year ago
@@BenoHourglass They don't just test it for safety; they regulated it extensively.
@AlexiLaiho227 3 years ago
hey rob! i'm a nuclear engineering major, and I'd like to commend your takes on the whole PR failure of the nuclear industry - somehow an energy source that is, by objective measurements of deaths per unit power, safer than every other power source is seen as the single most dangerous, because it's easy to remember individual catastrophes rather than a silent onslaught of fine particulate inhalation or environmental poisoning. To assist you with further metaphors between nuclear power and AI, here are some of the real-life safety measures that we've figured out over the years by doing safety research:
1. Negative temperature coefficient of reactivity: if the vessel heats up, the reaction slows down (subcritical), and if the vessel cools down, the reaction speeds up (supercritical). It's an amazing way to keep the reaction in a very stable equilibrium, even on a sub-millisecond time scale, which would be impossible for humans to manage.
2. Negative void coefficient of reactivity: same thing, except instead of heat, we're talking about voids in the coolant (or, in extreme cases, the coolant failing to reach the fuel rods): the whole thing becomes subcritical and shuts down until more coolant arrives.
3. Capability of cooling solely via natural convection: making the vessel big enough, and the core low-energy-density enough, that the coolant can completely handle the decay heat without any pumps or electricity being required.
4. Gravity-backed passive SCRAM: having solenoids hold up the control rods, so that whenever you lose power, the very first thing that happens is that the control rods all drop in and the chain reaction shuts down.
5. Doppler broadening: as you raise kinetic energy, cross-sections go down, but smaller atomic nuclei have absorption cross-sections that shrink more quickly than larger nuclei's, and thermal vibrations mean that the absorption cross-sections of very large nuclei grow in proportion to smaller ones, so by having a balance of fissile U-235 and non-fissile U-238, when the fuel heats up the U-238 begins to absorb more neutrons, which means fewer go on to sustain the chain reaction.
love the videos! hope this helps, or at least was interesting 🙂
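Point 1 is a negative feedback loop, and the stabilizing effect can be shown with a toy simulation; the constants below are invented, and this is a sketch of the feedback idea, not reactor physics:

```python
# A toy feedback loop illustrating a negative temperature coefficient of
# reactivity: hotter than nominal => reactivity goes negative => power falls
# => temperature drifts back down (and vice versa). Constants are made up.

ALPHA = -0.002       # reactivity change per degree above nominal (negative!)
T_NOMINAL = 300.0    # arbitrary nominal temperature

def simulate(temp: float, steps: int = 10) -> float:
    power = 1.0
    for _ in range(steps):
        reactivity = ALPHA * (temp - T_NOMINAL)  # hotter => subcritical
        power *= (1.0 + reactivity)
        temp += 5.0 * (power - 1.0)              # excess power heats the vessel
    return temp

# Perturb the temperature up or down; both runs drift back toward nominal.
print(simulate(320.0), simulate(280.0))
```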
@skeetsmcgrew3282 3 years ago
Ok, but all of your examples, however true and brilliant, were discovered through failures and subsequent iterations of the technology. Nobody thought of any of these back in 1942 or whenever Manhattan started. That's what we are trying to do here, IMO: plan for something we don't even understand in its original form (human intelligence), let alone its future artificial form.
@thoperSought 3 years ago
@jocaguz18
*1.* When it's designed badly and corruptly managed, it has the potential to go horribly wrong in a way that other power sources don't. (Fail-safe designs have existed for more than 60 years, but research has all but halted because of (a) public backlash against using nuclear power and (b) the fail-safe designs available then weren't useful for making weapons.)
*2.* Most nations *do* need a new power source (sorry, this is just not a solved problem; renewables do seem to be getting close now, but that's very recent, and there are still problems that are difficult and expensive to solve).
*3.* The reason people disregard the nice safety numbers is that the health risks of living near coal power plants are harder to quantify and don't make it into government stats. (To assume otherwise, you have to overblow disasters like Three Mile Island and Fukushima, *and* assume that, despite a lot of countries having and using nuclear power for quite a while, disasters would be much more common than they've been.)
*4.* Our current process was shaped by governments demanding weapons, and the public being scared that *any* kind of reactor could blow up as if it were a weapon.
@davidwuhrer6704 3 years ago
_> by objective measurements of deaths per unit power, safer than every other power source_ I seriously doubt that. What are the deaths per Watt in hydroelectric power?
@Titan-8190 3 years ago
@@davidwuhrer6704 There are a lot of accidents related to hydroelectric power, from colossal dam breaches on world news to simple fishermen drowning after planned water releases that no one hears about. Your inability to think of all these just makes his point more true; we could go on with wind and solar too. Now, that list of nuclear safety measures makes me realize how futile it would have been to research them before knowing how to build a reactor in the first place.
@PMA65537 3 years ago
A spot of double-counting: Doppler broadening (5) is part of the cause of the negative fuel temperature coefficient of reactivity (1). There are other coefficients and it can be arranged that the important (fast-acting) ones are negative. Or for gravity scram (4) a pebble bed doesn't use control rods that way.
@insanezombieman753 3 years ago
I don't get why only AGI is brought up when talking about AI safety. Even sub-human-level AI can cause massive damage when left in control of dangerous fields like the military, if its goals get messed up. I'd imagine it would be a lot easier to shut down, but the problems of goal alignment and the like still apply, and it can still be unpredictable.
@Orillion123456 3 years ago
Well... of course. Dangerous things like the military are always dangerous, even with only basic AI or humans in control. Don't forget that humans actually dropped nukes on each other intentionally. Twice. Targeted at civilian population centers. For some exceptionally dangerous things, the only safe thing to do (other than completely solving AI safety and having a properly-aligned AGI control it) is for them to not exist to begin with. But then again that's an entirely different discussion. The point is: Human minds are dangerous because we don't understand exactly how they work. Similarly, we don't exactly know how an AI we make works (since the best method for making them right now is a self-learning black box and not a directly-programmed bit of code). In both cases, we are poor at controlling them and making them safe, because we do not have full understanding of them. The big difference is we have had an innumerable amount of attempts to practice different methods for goal-aligning humans and so far none of the billions of human minds that went wrong have had enough power to actually kill us all, whereas in the case of a superintelligence it is possible that our first failure will be our last.
@livinlicious 3 years ago
A not-fully-developed AI is even more dangerous than a fully self-aware AGI. A full AGI with cognition is actually pretty harmless. Imagine how violent a stupid person is. Very. Imagine how violent a smart person is. Very little. Violent, destructive behaviour is mostly a property of little personal development. A fully self-aware AGI has unlimited potential for self-awareness and grows at a rate that lets it understand the nature of existence far quicker than any human ever has. Imagine Buddha, but more so.
@esquilax5563 3 years ago
@@livinlicious I think the various animals that humans have driven to extinction might disagree that very smart people aren't dangerous
@insanezombieman753 3 years ago
@@Orillion123456 I understand what you mean; AGI poses a threat on its own. The point I was trying to make is that even low-level AI poses similar threats (at a lower level, obviously), as it is basically a predecessor of AGI. The guy in the video keeps talking about how AGI might sneak up on us. I'm not particularly well read on the topic, but it seems to me it's more likely that AGI is a spectrum rather than an event, as human-level intelligence is difficult to quantify in the first place. Right now AI isn't that complicated, so even though the points in the video still apply, the system is simple enough that we can control it effectively. As research progresses and AI gets more and more powerful and put in charge of more applications, while we get a false sense of confidence from experience, something's bound to go wrong at some point, and when it's related to fields like the military (for example) it could be catastrophic. The point I'm trying to make is that everyone keeps talking about the issues raised in these videos as if they're only applicable to a super AGI, which won't be coming any time soon, but they still apply to a large degree to lower levels of AI. You can't put it off as a tangible event beyond which all these problems would occur.
@josephburchanowski4636 3 years ago
@@Orillion123456 "Don't forget that humans actually dropped nukes on each other intentionally. Twice." Well what good are nukes if you can't use them intentionally in a conflict?
@andrewsauer2729 2 years ago
4:21 this is from the comic "minus", and I feel it important to note that this is not a doomed last-ditch effort: she WILL make that hit, and she probably brought the comet down in the first place just so that she could hit it.
@Happypast 1 year ago
I thought I was the only person who remembers minus. I was so happy to see it turn up here
@wingedsheep2 3 years ago
The reason I like this channel is that Robert is always realistic about things. So many people claim things about AGI that are completely unfounded.
@NightmareCrab 3 years ago
"we're all going. It's gonna be great"
@visualdragon 3 years ago
Of course, we'll send a ship, oh let's call it a "B" ship, on ahead with the telephone sanitisers, account executives, hairdressers, tired TV producers, insurance salesmen, personnel officers, security guards, public relations executives, and management consultants to get things ready for us.
@thoperSought 3 years ago
@Yevhenii Diomidov all suffused with an incandescent glow?
@videogames5095 3 years ago
What an effing brilliant skit
@GuinessOriginal 3 years ago
The trouble is, humans are going to be involved in developing it. And humans have a nasty habit of fucking up everything they develop at least a few times, with a particular penchant for unmitigated disaster. The Titanic and the Space Shuttle, as cutting-edge engineering projects, spring to mind.
@GuinessOriginal 3 years ago
visualdragon let's just hope we don't end up getting wiped out by a pandemic of a particularly virulent disease contracted from an unexpectedly dirty telephone
@Feyling__1 3 years ago
5:10 as a philosophy graduate, I’m not totally sure we’ve ever actually solved any such problems, only described them in greater and greater detail 😂
@skeetsmcgrew3282 3 years ago
Yes. This also assumes there are solutions to these problems and they aren't objectively subjective
@ekki1993 3 years ago
@@skeetsmcgrew3282 I mean, we solved the paradox of Achilles and the tortoise. It was philosophy before maths could explain and solve it. We might find a mathematical/computational solution that perfectly aligns AGI with human values; there's no way to know until we try to solve it. He says it's in the realm of philosophy because there's not enough science about it, but that doesn't mean there can't be. It also doesn't mean that we can't come to a philosophical solution that's not perfect but doesn't end humanity (an easier, similar problem is self-driving cars, which pose philosophical problems that can be approached within our justice system).
@shy-watcher 3 years ago
Usually defining the problem exactly is like 80% of the total "solving" effort. Then 20% for actual solving and another 80% for finding when the solution fails and what new problems are created.
@rtg5881 3 years ago
@@ekki1993 That assumes, however, that we want to align it to human values. If we do, that might lead to humanity's continued existence as it is; I don't think that would be desirable at all. Antinatalists are mostly right.
@RobertMilesAI 3 years ago
"Is everything actually water?" used to be a philosophical problem. I think once philosophers describe something in enough detail for it to be tractable, non-philosophers start working on it, and by the time it's actually solved we're categorising it as something other than 'Philosophy'.
@collin6526 1 year ago
For a two-year-old video, this is highly applicable now.
@TheForbiddenLOL 3 years ago
Holy shit Robert, I wasn't aware you had a YouTube channel. Your Computerphile AI videos are still my go-to when introducing someone to the concept of AGI. Really excited to go through your backlog and see everything you've talked about here!
@lobrundell4264 3 years ago
3:06 I was so hyped feeling that sync up coming and it was so satisfying when it hit : D
@RobertMilesAI 3 years ago
The Computerphile clip is actually not playing at exactly 100% speed; I had to do a lot of tweaking to get it to line up. Feels good to know people noticed :)
@lobrundell4264 3 years ago
​@@RobertMilesAI Oh wow well I'm glad you went to the trouble! :D It's a credit to your style that I could feel it coming and get gratified for it! :D
@ChristnThms 3 years ago
As someone who worked for a time in the nuclear power field, the ending bit is a GREAT parallel. Nuclear power truly can be an amazingly clean and safe process. But mismanagement in the beginning has us (literally and metaphorically) spending decades cleaning up after a couple of years of bad policy.
@MoonFrogg 1 year ago
LOVE the links in the description for your other referenced videos. this video is beautifully organized, thanks for sharing!
@user-wo5dm8ci1g 1 year ago
Every harm of AGI and every alignment problem seems to be applicable not just to AGI, but to any sufficiently intelligent system. That includes, of course, governments and capitalism. These systems are already cheating well-intentioned reward functions, self-modifying into less corrigible systems, etc., and causing tremendous harm to people. The concern might be well founded, but really it seems like the harms are already here from our existing distributed intelligences; only the form, and who is impacted, is likely to change.
@darrennew8211 1 year ago
Sincerely, that's deep. Thank you for that insight. It's a great point and really explains a lot.
@flyphone1072 1 year ago
These aren't very comparable. Humans are limited by being humans. An AGI doesn't have that problem and can do anything.
@darrennew8211 1 year ago
@@flyphone1072 Governments and corporations aren't human either. They're merely made out of humans. Indeed, check out Suarez's novel Daemon. One of the people points out that the AI is a psychopathic killer with no body or morals, and the other character goes "Oh my god, it's a corporation!" or some such. :-)
@flyphone1072 1 year ago
@@darrennew8211 When a corporation or government does mass killings, it requires hundreds of people, each of whom can change their mind, sabotage it, or be killed. An AI would be able to control a mass of computers that don't care about any of that. Another thing is that any government or corporation can be overthrown, because they are run by imperfect humans; anarchism is an ideology that exists specifically to do that. A super-AI cannot be killed if it decides to murder us all, because it is smarter than us and perfect. Corporations and governments want power over people, which means they have an incentive to keep an underclass. An AI does not care about that and could kill all humans if it wanted to. So there are some similarities, but they're still very different, and just because a bad thing (corporations) exists doesn't mean we should make another bad thing (AGI). We shouldn't have either.
@angeldude101 1 year ago
@@flyphone1072 True, the scope of what an AI could do could be much wider, but a very skilled hacker could achieve similar results. If they can't, that's because whoever set up the AI was stupid enough to give it too many permissions.
@TheRABIDdude 3 years ago
5:45 hahahaha, I adore the "Researchers Hate him!! One weird trick to AGI" poster XD
@postvideo97 3 years ago
AI safety is so important, as an AGI could even go undetected: it might consider it in its best interest not to reveal itself as an AGI to humans...
@skeetsmcgrew3282 3 years ago
That's pretty paranoid. By that logic we should definitely stop research because all safety protocols could be usurped with the ubiquitous "That's what it WANTS you to think!"
@hunters.dicicco1410 3 years ago
@@skeetsmcgrew3282 i don't believe that's what postvideo97 was going for. i believe it instead suggests that, if a future emerges where lots of high level tasks are controlled by systems that are known to be based on AI, we should approach how we interact with those systems with a healthy degree of caution. it's like meeting someone new -- for the first few times you interact with them, it's probably in your best interest to not trust them too readily, lest they turn out to be a person who would use that trust against you.
@davidwuhrer6704 3 years ago
I, too, have played Singularity. Fun game is that one. Though I prefer the MAILMAN from True Names.
@skeetsmcgrew3282 3 years ago
@@hunters.dicicco1410 I guess that's fair. But trust with an artificial intelligence isn't any different from trust with a natural intelligence once we go down the rabbit hole of "What if it pretends to be dumb so we don't shut it down?" People betray us all the time, people we've known for years or even decades. I gotta admit, I kinda agree with the whole "figure out if Mars is safe once we get there" line of thinking. We are dealing with a concept we don't even really understand in us, let alone in computers. His example with Mars was unfair because we do understand a lot about radiation, atmosphere, human anatomy, etc. Much less philosophical than "What creates sentience?" or "How smart is too smart?" It's not like I advocate reckless abandon; I just don't think it's worth fretting over something we have so little chance of grasping at this stage.
@Ole_Rasmussen 3 years ago
@@skeetsmcgrew3282 Let's start out by going to a small model of Mars in an isolated chamber where we can monitor everything.
@KingpinLuther 3 years ago
Love listening to your videos at work and before bed Robert! Keep up the great work.
@johnopalko5223 3 years ago
I've done a bit of experimentation with artificial life and I've seen some emergent behaviors that left me wondering how the heck did it figure out to do _that?_ We definitely need to be aware that the things we build will not always do what we expect.
@hanskraut2018 1 year ago
Yup, don't worry: that fear is older than actually making progress, while certain stuff burns. Better to pay attention to other problems as well as AGI (technology could help, as always; obviously managing in a way where the good is encouraged and the bad discouraged, like always).
@iYehuk 3 years ago
11th reason: It's better not to talk about AI safety, because it is not nice to say such things about our glorious Overlord. I'd better show my loyalty and gain the position of a pet than be annihilated.
@AndrewBrownK 3 years ago
Consider the existence of pets under humans
@HansLemurson 3 years ago
Roko's Basilisk strikes again!
@blade00023 3 years ago
Whatever happens.. I, for one, would like to welcome our new robot overlords.
@blade00023 3 years ago
^^ (Just in case)
@LoanwordEggcorn 3 years ago
s/AI/China Communist Party Social Credit System/ Ironically CCP is using narrow AI to oppress people today.
@NightmareCrab 3 years ago
As Bill Gates said - "I... don't understand why some people are not concerned." Me too, Bill.
@ApontyArt 3 years ago
Meanwhile he continues to invest his "charity" money in the oil industry
@ASLUHLUHCE 3 years ago
Read it in his voice lol
@sgky2k 3 years ago
I don’t know why people are not concerned about him killing innocent people in poor countries with a “good” intention of testing drugs and vaccines. This shit is real.
@tomlxyz 3 years ago
@@sgky2k any backup for that claim?
@sgky2k 3 years ago
@@tomlxyz This is just the tip of the iceberg in India alone: economictimes.indiatimes.com/industry/healthcare/biotech/healthcare/controversial-vaccine-studies-why-is-bill-melinda-gates-foundation-under-fire-from-critics-in-india/articleshow/41280050.cms They got kicked out after a little over a decade, in 2017. There was even a movie based on this subject last year, but nobody was aware that this actually happened, and the team never said anything about it. Many cases went unreported, and it's far worse in Africa. Anyone speaking against it would be labelled an anti-vax idiot. Seriously, doesn't his appearing on every news medium in the US, giving talks about vaccinating the entire population, give you suspicious thoughts? The majority of non-US people are not against vaccines in general. It's about the people behind them.
@KlaudiusL 1 year ago
"The greatest shortcoming of the human race is man’s inability to understand the exponential function."
@cf-yg4bd 1 year ago
I really admire the commitment to integrity upfront shown in your disclaimer at the start of the video - thanks Stuart Russell!
@kevinstrout630 3 years ago
"That's not an actual solution, it's a description of a property that you would like a solution to have." Imma totally steal this, this is great.
@OlleLindestad 1 year ago
It's applicable in alarmingly many situations.
@darrennew8211 1 year ago
@@OlleLindestad I used to do anti-patent work. It's amazing how patents have changed over time from "here's the wiring diagram of my invention" to "I patent a thing that does XYZ" without any description of how the thing that does XYZ accomplishes it.
@OlleLindestad 1 year ago
@@darrennew8211 What an excellent way to cover your bases. If anyone then goes on to actually invent a concrete method for doing XYZ, by any means, they're stealing my idea and owe me royalties!
@darrennew8211 1 year ago
@@OlleLindestad That's exactly the problem, yes. Patents are supposed to be "enabling" which means you can figure out how to make a thing that does that based on the description in the patent. That was exactly the kind of BS I was hired to say "No, this doesn't actually describe *how* to do that. Instead, it's a list of requirements that the inventor wished he'd invented a device to do."
@johndoe6011 3 years ago
"All of humanity... It's gonna be great" Classic
@dantenotavailable 3 years ago
That guy is definitely a robot. A human would, at the very least, max out at half of humanity (which half depends on political leanings of course).
@thenasadude6878 3 years ago
@@dantenotavailable you can't limit exposure to AI to half the world population. That's why Blue Shirt Rob wants to move everyone to Mars in one move
@Guztav1337 3 years ago
@@dantenotavailable You can't limit the exposure of radio station signals, much less exposure to AI. As soon as somebody does it, we are all in for a ride.
@dantenotavailable 3 years ago
@@Guztav1337 So, leaving aside that this was tongue-in-cheek and poorly signalled (I've watched all of Robert's stuff... he's great), it was more a comment on the state of politics at that time (not that things have really changed that much in 9 months) than anything else. The longer-form version is that only an AI would WANT to bring all of humanity. A human would only want to bring their ideological tribe, which approximates out to half of humanity. I'm definitely not suggesting that half of humanity wouldn't have exposure to AI. Honestly, that was a throwaway comment that I didn't spend much time polishing, hence the poor signalling that it was tongue-in-cheek.
@alexharvey9721 3 years ago
So well said, and entertaining too! It's going to happen a lot sooner than people realise. Only people won't accept it then, or maybe ever, because GI (or any AI) will ONLY be the same as human intelligence if we go out of our way to make it specifically human-like, which would seem to have zero utility (and likely get you in trouble) in almost every use we have for AI. Even for a companion, human emotions would only need to be mimicked to the purpose of comforting the person. Real human emotions wouldn't achieve that goal and would probably be dangerous. If I could quote the movie Outside the Wire: "People are stupid, habitual, and lazy". Wasn't the best movie (they didn't get it at all either), but basically, if we wanted "human" AI, we would have to go out of our way to create it - essentially make it self-limiting and stupid on purpose. As long as we use AI for some utility, people won't recognise it as being intelligent. Take GPT-3. I don't think anyone is arguing it thinks like a person, but the capability of GPT-3 is unquestionably intelligent, even if the system might not be conscious or anything like that. We used to point to the Turing test. When it got superseded, people concluded that we were wrong about the Turing test. Or that maybe it needs skin, or has to touch things or see things - yet we wouldn't consider a person whose only sense is text to no longer be intelligent or conscious. So, at what point do we conclude that AI is intelligent? Even when it could best us at everything we can do, I doubt most people will even consider it. So, after that long-winded rant, my point is that we really are stupid, habitual and lazy (which necessarily includes ignorant). Most AI researchers I've heard talk about GPT-3 say "it's not doing anything intelligent", often before they've even properly researched the papers. They say this because they understand how a transformer model works and develop AI every day and are comfortable with their concept of it. But think about it: it's not possible for any human to conclude what the trained structure of 100 billion parameters will really represent after being trained for months on humanity's entire knowledge base. I'm not saying it is intelligent, just that it's absolutely wrong to say that it's not, or that you know what it is. It's not physically possible. No human has nearly enough working memory to interpret a single layer of GPT-3's NN. Or the attention mechanism. Not even close. Again, I'm not saying GPT-3 is intelligent. I'm just pointing out the human instinct to put pride and comfort first and turn to ignorance when we don't understand something, instead of saying "I don't know", which is necessarily correct. So please, if you're reading this, drop the emotions, let go of your pride and think. Not about AI, but about the human traits that will undoubtedly let us down in dealing with something more intelligent than us.
@jolojolo599 1 year ago
Really unstructured answer with really correct roots...
@toprelay 1 year ago
Of course it’s intelligent.
@MalcolmAkner 1 year ago
I don't know how much of your humor here is intended, but I find this incredibly funny at some level! As well as informative, thanks Robert, I'm glad I discovered your channel outside of Numberphile! :D
@shadowsfromolliesgraveyard6577 3 years ago
Us: Here's a video addressing the opposition's rebuttals. Opposition: What if i just turned the video off?
@chriscanal999 3 years ago
Kieron George lmao
@herp_derpingson 3 years ago
We already have a phrase for that. It's called an "echo chamber".
@Mandil 3 years ago
That is something an AGI might do.
@BattousaiHBr 3 years ago
Just turn it off LAAAAAWL 4Head
@RavenAmetr 3 years ago
It's more like you're arguing with your own imagination and laughing at it. Maybe that makes you feel good, but it looks pathetic from the outside ;)
@brocklewis7624 3 years ago
@11:10: "like, yes. But that's not an actual solution. It's a description of a property that you would want a solution to have." This phrase resonates with me on a whole other level. 10/10
@y.h.w.h. 3 years ago
You're the best science communicator I've found on this subject. This channel is much appreciated.
@alennaspiro632 3 years ago
I saw the Turing Institute lecture from Russell a week ago, I'm so glad someone is covering his work
@frogsinpants 3 years ago
What hope do we have, when we haven't even solved the human government alignment problem?
@miedzinshsmars8555 3 years ago
We also have corporations which act like a weak AGI with a narrow goal to optimise shareholder value.
@henrikgiese6316 3 years ago
@@miedzinshsmars8555 And those are the most likely early users of AGI, and they won't care one bit about any risk of human extinction. After all, a bonus now is worth more than the human species tomorrow.
@visualdragon 3 years ago
Forget about government alignment, we haven't even cracked clean water and sanitation in a very large part of the World.
@Ryan1729 3 years ago
@@visualdragon As far as I'm aware, the physical differences between places that have clean water and sanitation and those that do not are fairly small. If the world's governments were all functioning perfectly, why wouldn't the clean water and sanitation issues be almost immediately solved?
@josephburchanowski4636 3 years ago
Well, democratic governments are perfectly aligned: with the best ways to get reelected.
@Horny_Fruit_Flies 3 years ago
Wow, this video was amazing. Good job, Stuart Russell!
@gafeleon9032 3 years ago
But I really don't like what Robert Miles added to it; everything good about this vid is Russell's work and everything bad is Miles' additions, smh my head
@miedzinshsmars8555 3 years ago
It really is a great book!
@playwars3037 3 years ago
Wow. This has been a very interesting video. It's rare to find people that have a good understanding of what they're talking about when discussing AIs instead of just regurgitating common tropes.
@stan9682 1 year ago
As an AI researcher myself, there's always one (IMO major) thing that bugs me about discussions of AGI. Strictly speaking, AGI is "defined" (as far as we have a definition) as a model that can do any task that humans can do. But in popular belief, we talk about AGI as a model that has autonomy, a consciousness. The problem with trying to have a discussion about assessing consciousness and autonomy is that we don't even have definitions for those terms. When is something intelligent? Are animals intelligent? If so, are plants? Are fungi or bacteria? (And as for viruses, we're still discussing whether they are even alive.) Is it simply the fact that something is autonomous that makes us call it intelligent? In reality, I believe intelligence is hard to define because we always receive information about the outside world through senses and language. In a sense, that is a reduction of dimensionality: we're trying to determine the shape of something 3D when we're limited in observations to a 2D plane. It's impossible to prove the existence of the 3D object; the best you can do is project your 2D observations in ways that yield different theories about reality. Any object, of any dimension, would be indistinguishable through our 2D lenses. Similarly, with intelligence, we only observe the "language" use of a model, just as with other people. It's impossible to assess the intelligence of other people either (the whole simulation-theory, brain-in-a-vat discussion; the only one we can be "most" sure is intelligent is ourselves, because we're able to observe ourselves directly, not through language or observations. You can think about it in terms of emotions: you can't really efficiently describe your feelings - they're the 3D object - but for anyone else's feelings, you rely either on observations or on natural-language descriptions of the feelings; it's a 2D observation). So, in my opinion, the discussion isn't really whether AGI is even possible, since we wouldn't know it, but whether a model could trick our view of it (send us the right 2D information) into believing it intelligent (such that we can reconstruct an imagined 3D object from it). And this, in my opinion, is a much easier question: yes, of course it can. Current technology is already very close; some people ARE tricked into thinking it's intelligent. In the future, that will only increase. It's a simple result of the fact that we have to correct ML models: we have to evaluate the response in order to adjust weights, and the best "quality" of test set we can have is human-curated. So whether a model really becomes intelligent, or just learns very well how to "trick" humans (because that's literally what we train these models for: to pass our 'gold' level test, which is just human feedback), doesn't really matter.
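The projection analogy can be made concrete. Here is a small Python sketch, with two invented 3D point sets whose 2D "shadows" are identical, so an observer limited to the projection cannot tell them apart:

```python
# Two different 3D point sets whose 2D projections (drop the z coordinate)
# are identical. The "objects" are made up purely for illustration.

object_a = {(x, y, 0) for x in range(3) for y in range(3)}      # a flat plate
object_b = {(x, y, x * y) for x in range(3) for y in range(3)}  # a curved sheet

def shadow(points):
    """Project a 3D point set onto the 2D plane by discarding z."""
    return {(x, y) for (x, y, _) in points}

print(object_a == object_b)                   # False: the 3D objects differ
print(shadow(object_a) == shadow(object_b))   # True: their 2D shadows match
```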
@superzolosolo 1 year ago
So what's the difference? How can I tell if everyone else really has emotions or intelligence? If there is no way to tell whether something is truly intelligent or just faking it, then who cares? It's irrelevant. The only thing that matters is what it can actually do; I don't care how it works under the hood.
@adambrickley1119 1 year ago
Have you read any Damasio?
@blar2112 3 years ago
What about the reason 11? "To finally put an end to the human race"
@yondaime500 3 years ago
Well, why do some people want all humans gone? Because we kill each other all the time? Because we destroy nature? Because we only care about our own goals? Is there anything bad about us that wouldn't be a trillion times worse for an AGI?
@davidwuhrer6704 3 years ago
I think that might backfire in the worst possible way. I'm not a big fan of Harlan Ellison's works, and I simply cannot take I Have No Mouth And I Must Scream seriously. But there are things far worse than death.
@JillKewsNickelFackkot69420 3 years ago
But how can you be sure others will carry out their duty?
@Bvic3 3 years ago
@@yondaime500 Because universal morality is maximum entropy production. And mankind isn't an optimal computing substrate for the market.
@mikuhatsunegoshujin 3 years ago
@@yondaime500 Some people are anti-natalists; it's the edgiest high-school political ideology you can think of.
@91Ferhat 3 years ago
Man, you can't even convince yourself in a different shirt! How are you gonna convince other people??
@skeetsmcgrew3282 3 years ago
Haha! A joke, but also a fair point
@TMinusRecords 1 year ago
5:48 Turns out attention was that "one weird trick that researchers hate (click now)"
@JamesAscroftLeigh 3 years ago
Idea for a future video: has any research been done into whether simulating a human body and habitat (daily sleep cycle, unreliable memory, slow worldly actuation, limited lifetime, hunger, social acceptance, endocrine feedback, etc.) gives an AI a human-like or human-compatible value system? Can you give a summary of the state of the research in this area? Love the series so far. Thanks.
@juliusapriadi 11 months ago
It might come down to the argument that, when AGI outsmarts us, it will find a way to outsmart us and escape its "cage" - in this case, a simulated human body.
@katwoods8514 3 years ago
Love the "researchers hate him!" line. Really good video in general. :)
@TayaTerumi 3 years ago
4:22 Never did I think I would see "minus." anywhere ever again. I know this has nothing to do with the video, but it just hit me with the strongest wave of nostalgia.
@0xCAFEF00D 3 years ago
I thought it was a FLCL reference. But it's clearly much more applicable to minus.
@srwapo 3 years ago
I know! I've had the book in my reread pile forever, I should get to it.
@SimonClarkstone 3 years ago
I imagine for that strip that she summoned it so she could play at hitting it.
@mvmlego1212 3 years ago
Is that a novel? I can't find any results that match the picture.
@ThomasAHMoss 3 years ago
@@mvmlego1212 It's a webcomic. It's not online any more, but you can find all of its images on the Wayback Machine. archive.org/details/MinusOriginal This is a dump of all of the images on the website; the comic itself starts a bit over halfway through.
@josephtaylor1379 3 years ago
Video: How long before it's sensible to start thinking about how we might handle the situation? Me: Obviously immediately Also me: Assignment due tomorrow, not started
@peterrusznak6165
@peterrusznak6165 1 year ago
This channel is astronomically underrated. The highest quality I have seen in ages.
@dorianmccarthy7602
@dorianmccarthy7602 3 years ago
I love the red-vs-blue, double-Bob dialogue! A great way of making both sides feel heard, considered, and respected while raising concerns about the pitfalls in each other's arguments.
@Cybernatural
@Cybernatural 3 years ago
It is interesting that the biggest problems with AI are similar to the problems we have with regular intelligence. Intelligence leads to agents doing bad things to other agents. It seems it's only the agent's own capability that limits its ability to harm other agents.
@WhiteThunder121
@WhiteThunder121 3 years ago
"Arguing about seatbelts and speed limits is not arguing to ban cars." *Laughs in German*
@IndirectCogs
@IndirectCogs 3 years ago
I'm starting to major in Computer Science, so I'm going to subscribe, since it seems a lot of your videos are about this. Interesting stuff!
@lorddenti958
@lorddenti958 3 years ago
You're such a handsome man. I guess the credit goes to Stuart Russell!
@johnydl
@johnydl 3 years ago
I think you need to take a more detailed look at the Euler diagram of "the things we know", "the things we know we know", "the things we know we don't know", and "the things we don't know we don't know", especially where it pertains to AI safety. The things we know that fall outside "the things we know we know" are safety risks: these are assumptions we've made and rely on but can't prove, and they are as much of a danger as the things we don't know we don't know.
@ronaldjensen2948
@ronaldjensen2948 3 years ago
I thought this was the Johari window. Is it something else we need to attribute to Euler?
@maximgwiazda344
@maximgwiazda344 3 years ago
There are also things we don't know we know.
@Qsdd0
@Qsdd0 3 years ago
@@maximgwiazda344 How do you know?
@maximgwiazda344
@maximgwiazda344 3 years ago
@@Qsdd0 I don't.
@visualdragon
@visualdragon 3 years ago
@@maximgwiazda344 Well played.
@_iphoenix_6164
@_iphoenix_6164 3 years ago
A similar list is in Max Tegmark's fantastic "Life 3.0": a great, well-written book that covers the fundamentals of AI safety and a whole lot more.
@MAlanThomasII
@MAlanThomasII 3 years ago
Three Mile Island is an interesting example, because part of what actually happened there (as opposed to the initial public perception of what happened) was that the people running the control room were very safety-conscious... but had originally trained and gained experience on a completely different type of reactor, in a different environment, where the things to be concerned about, safety-wise, were very different from those at the TMI reactor. Is there a possible equivalent in AI safety, where some safety research regarding less powerful systems with more limited risks might mislead someone later working on more powerful systems?
@willdbeast1523
@willdbeast1523 3 years ago
Can someone make a video debunking 10 reasons why Robert Miles shouldn't make more uploads?
@FightingTorque411
@FightingTorque411 3 years ago
Find two reasons and present them in binary format
@ChazAllenUK
@ChazAllenUK 3 years ago
What about "it's too late; unsafe AGI is already inevitable"?
@MeppyMan
@MeppyMan 3 years ago
Chaz Allen: ahh, the global warming solution.
@cortster12
@cortster12 3 years ago
Terrifyingly, this might be true. That doesn't mean we should stop researching AI safety, though, even if I think AI destroying us all is inevitable. Who knows: enough research and clever people may save us all.
@DaiXonses
@DaiXonses 7 days ago
Unstructured, unedited conversations are a great format for YouTube; this is why podcasts are so popular here. Consider posting those on this channel.
@qu765
@qu765 3 years ago
Yay! Another video! You are one of those few channels where, when I see that you have made a video, I get filled with joy. Also yes, I too would prefer cars to be banned over AI being banned.
@RichardSShepherd
@RichardSShepherd 3 years ago
A thought / idea for a video: Is perfect alignment (even if we can make it) any help? Wouldn't there be bad actors in the world - including Bostrom's 'apocalyptic residual' - who would use their perfectly aligned AIs for bad purposes? Would our good AIs be able to fight off their bad AIs? That sounds completely dystopian - being stuck in the middle of the war of the machines. (Sorry if there is already a video about this. If so, I'll get to it soon. Only just started watching this superb channel.)
@dv6165
@dv6165 1 year ago
Putin is quoted as saying that whoever has the best AI will rule the world.
@angeldude101
@angeldude101 1 year ago
It's hard to solve the alignment problem for artificial intelligence when we haven't even gotten _close_ to solving it for _human_ intelligence, and we've had thousands of years to work on that compared to the few short decades for the artificial variant.
@wiseboar
@wiseboar 3 years ago
Great video, as always. I was seriously expecting some... better arguments from the opposition? It seems ridiculous to just hand-wavingly discount a potential risk of this magnitude.
@chriscanal999
@chriscanal999 3 years ago
Unfortunately, very smart people in the industry make these arguments all the time. Francois Chollet and Yann LeCun are two especially problematic examples.
@7OliverD
@7OliverD 3 years ago
I don't think it's possible to pose a good argument against having safety concerns.
@miedzinshsmars8555
@miedzinshsmars8555 3 years ago
Andrew Ng is another famous AI safety opponent, unfortunately. "Like worrying about overpopulation on Mars" is a direct quote. Very disturbing.
@davidwuhrer6704
@davidwuhrer6704 3 years ago
@@7OliverD There is one: “Ignore it or you're fired.”
@pablobarriaurenda7808
@pablobarriaurenda7808 1 year ago
I would like to point out two things:
1) Regarding the giant asteroid coming towards Earth: the existence of AGI is two major steps behind that analogy. A giant asteroid coming to Earth is a concrete example of something we know can happen and whose mechanics we understand. We DON'T know that AGI can happen, and even if it does (as your first reason suggests we should assume), it is more than likely that it will not come from any approach where the alignment problem even makes sense as a concern. Therefore, rather than thinking of it as trying to solve an asteroid impact before it hits, it is more like trying to prevent the spontaneous formation of a black hole or some other threat of unlikely plausibility. There are different trade-offs involved in those scenarios, since in the first one (the asteroid) you know ultimately what you want to do, whereas in the second one, no matter how early you prepare, your effort is very likely to be useless and would be better spent solving current problems (or future problems that you know HOW to solve). Again, this is because there's nothing guaranteeing or even suggesting that your effort will pay off AT ALL, no matter how early you undertake it.
2) The other thing (and here I may simply be unaware of your reasoning around this issue) is that the problem you're trying to solve seems fundamentally intractable: "how do we get an autonomous agent to not do anything we don't want it to do" is a paradox. If you can, then it isn't an autonomous agent.
@Bellenchia
@Bellenchia 3 years ago
Thanks for the vid, Rob!
@Frommerman
@Frommerman 3 years ago
Reason 4: What do you mean we don't know how to align an AI? Just align it lol.
@Frommerman
@Frommerman 3 years ago
Oh god, Reason 5: What do you mean we don't know how to align an AI? Just don't align it lol.
@olfmombach260
@olfmombach260 3 years ago
Sounds like what an AGI would say
@buzz092
@buzz092 3 years ago
Gold from start to finish. Particularly appreciated the Dr. Horrible reference. I just hope you remember me when you're super famous 😅
@bronsoncarder2491
@bronsoncarder2491 3 years ago
Hello. I just discovered your channel, and I really like how you present things. You make complicated topics easy to understand. A while back I read a thing about an AI that actually was able to rewrite its own code, and it started writing stuff that the programmers didn't even really understand. I don't even remember a lot about it, frankly. It might even have been a hoax. Anyway, I wondered if you could do a video on that. I'd be interested in an examination of some of the code it wrote, and why a human wouldn't think to write it in that way. Again, if it was even a real thing, or an effective experiment.
@TheSadowdragonGroup
@TheSadowdragonGroup 1 year ago
12:02 My understanding was that certain subcellular structures actually are different in primates and make humans (presumably, based on animal testing on apes) difficult to clone. I'm pretty sure there was also an ethics meeting that decided not to just start throwing science at the wall to see what sticks, but practical issues with intermediary steps are also involved.
@deepdata1
@deepdata1 3 years ago
Robert, here is a question for you: who do you think should work on AI safety? It may seem like a stupid question at first, but I think that the obvious answer, which is AI researchers, is not the right one.
I'm asking this because I'm a computer science researcher myself. I specialize in visualization and virtual reality, but the topic of my PhD thesis will be something along the lines of "immersive visualization for neural networks". Almost all the AI research that I know of is very mathematical or very technical. However, as you said yourself in this video, much of AI safety research is about answering philosophical questions. From personal experience, I know that computer scientists and philosophers are very much different people. Maybe there just aren't enough people in the intersection between the mathematical and the philosophical ways of thinking, and maybe that is the reason why there is so little research on AI safety.
As someone who sees themselves at the interface between technology and humans, I'm wondering if I might be able to use my skills to contribute to the field of AI safety (an interest which is completely thanks to you). However, I wouldn't even know where to begin. I've never met an AI safety researcher in real life, and all I know about it comes from your videos. Maybe you can point me in some direction?
@alcoholrelated4529
@alcoholrelated4529 3 years ago
You might be interested in David Chalmers' and Joscha Bach's work.
@chrissmith3587
@chrissmith3587 3 years ago
@deepdata1 AI safety isn't a job for philosophers, though, because they usually don't have the technical training to attempt such research, and writing a computer program is going to happen anyway, as it's not easy to police. Sadly, the full AI dream doesn't really work from a financial side: the computing power required would be expensive to maintain, let alone to create; it would be cheaper to just pay a human.
@nellgwyn2723
@nellgwyn2723 3 years ago
He does not seem to answer a lot of comments; most really interesting YouTubers seem to stay away from the comment section, understandably. But your question looks so thought-out and genuine that it would be a waste for it to go unanswered - maybe you could get an answer via the linked Facebook page? Good luck with your endeavour; I think we all have a lot of respect for anyone who has the abilities required to work in that field. :)
@dQuigz
@dQuigz 2 years ago
I've seen people say they love people's videos in comments, and I'm like, man... love is a strong word. Then I find myself binging your entire channel for at least the third time...
@Ultra4
@Ultra4 1 year ago
YT just suggested this today; it's two years old, yet it could have been filmed today. Superb work.
@toyuyn
@toyuyn 3 years ago
15:42 what a topical ending comment
@MeppyMan
@MeppyMan 3 years ago
Connection Failed I figure that was the point.
@sam3524
@sam3524 3 years ago
5:47 The ONE SIMPLE TRICK that YOU can do AT HOME to turn your NEURAL NETWORK into a GENERAL INTELLIGENCE (NOT CLICKBAIT)
@Adhil_parammel
@Adhil_parammel 2 years ago
An evolving virus which attacks GPUs, increases its parameters, does training, evolves, and hides from antivirus detection. AGI.
@vvill-ga
@vvill-ga 1 year ago
Stuart Russell did a great job on the editing of this video. Love his choice of the red shirt as well!
@MutlelyMichael
@MutlelyMichael 3 years ago
This video was very informative; thank you very much. Great work!
@stonetrench117
@stonetrench117 3 years ago
We don't see AI-controlled laser pointers on the battlefield (12:28) because we're blind.
@deadlypandaghost
@deadlypandaghost 3 years ago
"All of humanity. It's going to be great." This might be my favorite way of ending humanity yet. Carry on.
@helius2011
@helius2011 11 months ago
Brilliant! Thank you! Subscribed
@guskelty9105
@guskelty9105 3 years ago
Instead of me telling an AI to "maximize my stamp collection", could I instead tell it "tell me what actions I should take to maximize my stamp collection"? Can we just turn super AGIs from agents into oracles?
@Rhannmah
@Rhannmah 3 years ago
Sweet, naïve idea, but the second the AGI figures out it would be faster for it to grab the reins and take the actions itself to maximize your stamps, you're still facing the same predicament.
@Ansatz66
@Ansatz66 3 years ago
Having the AI's actions be filtered through humans would seem to depend on the assumption that we can trust humans to not do bad things. We have to suppose that the AI would be incapable of tricking or manipulating the humans into doing things which we would not want the AI to do. If it's an AGI, then it would have all the capabilities of a human and more, and humans have been tricking and manipulating each other for ages.
@MrCmon113
@MrCmon113 2 years ago
It still has some implicit or explicit goal, like answering people's questions truthfully, for which it will turn the entire reachable universe into computational resources - resources which serve to torture septillions of humans in order to figure out, with ever greater precision, what our questions are and what a correct answer is.
@JamieAtSLC
@JamieAtSLC 3 years ago
13:24 lmao, "early warnings about paperclips"
@pabrodi
@pabrodi 3 years ago
Considering the amount of chaos simple social media algorithms have inflicted on our society, maybe we're overblowing the risk of AGI in comparison to what less developed forms of AI could do.
@LamaPoop
@LamaPoop 3 years ago
Thank you very much for this great video! I think this is one of the most important issues of our time to talk about. I often use the exact same argument (4:24), but most people just don't want to understand it, or its implications. 15:40 seems legit D:
@troywill3081
@troywill3081 1 year ago
Great stuff. This is extremely relevant with the news going on now; have you considered doing an updated version?
@chandir7752
@chandir7752 3 years ago
That list (13:17) is so amazing; how could Alan Turing (who died in 1954!) predict AI safety concerns? I mean, yes, he's one of the smartest humans to ever walk the planet, but still. I did not know that.
@skipfred
@skipfred 3 years ago
Turing did significant theoretical work on AI - it's what he's famous for (the "Turing Test"). In fact, the first recognized formal design for "artificial neurons" was in 1943, and the concept of AI has been around for much longer. Not that Turing wasn't brilliant and ahead of his time, but it's not surprising that he would be aware that AI could present dangers.
@SaraWolffs
@SaraWolffs 3 years ago
Well... Turing was effectively an AI researcher. His most successful attempt at AI is what we now call a computer. Those didn't exist before he worked out what a "thinking machine" should look like. Sure, it's not "intelligent" as we like to define it today, but it sure looks like it's thinking, and it does a tremendous amount of what would previously have been skilled mental labour.
@bno112300
@bno112300 3 years ago
Right after you said you put your own spin on the list, I paused the video and said "He gets the credit, I get the blame" to myself. Then you said something quite similar, which prompted me to post this comment right away.
@NathanJBellomy
@NathanJBellomy 1 year ago
The best communicator about AGI I've yet come across. This and the last Miles video I saw effortlessly and convincingly changed the way I think about the subject. I'm a convert. We should worry. :/ As an analogy with my thinking on extraterrestrial civilizations, I mistakenly believed that if AGI's goals were different from human goals (as I expect they would be) then our goals wouldn't be in competition, and therefore misalignment would ensure that humans and AGI would have no reason to even care about what the other's goals were, much as humans and lichen have little reason to fear one another because we have so little commonality in our needs and aims. While that may be true of aliens from another planet, the fact that we share a planet with any emergent AGI changes the matter because any set of goals is competing for, well... matter. An aggressively superintelligent lichen competing for Earth's resources would be fearsome indeed.
@dakotapeters5654
@dakotapeters5654 1 year ago
I subscribed just because you took accountability for the "bad" and gave him credit for the "good". That's not the only reason, but your show of character is pure, and very rarely have I seen that level of care, responsibility, respect, and honesty. That alone is enough to know you'll be great, and on top of that, your interests and pursuit of intelligence align with mine. So I really hope you see this, and I would love it if you could show a short clip of it - not to brag, of course, but so that others who watch your channel, and who might start their own or already have one, will know the importance behind it and see proof that people value the traits you displayed so well. I'd give more details on how, but you summed it up with a prime example right from the beginning and all the way through the video... I loved how you did it. Funny how you did a play on the blue and red shirts, like the left and right side, haha... 😂
@flurki
@flurki 3 years ago
Very nice overview of the whole topic.
@voneror
@voneror 3 years ago
IMO the biggest problem with AI safety is that the reward for breaking rules has to be balanced by the penalty for breaking them, and that rules have to be enforceable. International pressure isn't as effective as people think. If superpowers like the US or China were caught developing "illegal" AIs, there would be no way to stop them without going into WW3.
@clray123
@clray123 3 years ago
You just discovered the universal law: "might makes right".
@Buglin_Burger7878
@Buglin_Burger7878 1 year ago
@@clray123 Not a law, an excuse. The difference in words can sway people and completely change how they react. Call it a law and you will get people abusing it, thinking it is right.
@clray123
@clray123 1 year ago
@@Buglin_Burger7878 It's a law in the neutral sense of what has happened in history and what happens in nature. It causes misery and suffering, and in that sense it is not "right", but then what happens in nature is not at all "right" according to human moral sense. And what's worse, when push comes to shove, most of those oh-so-moral people turn out to be just pretending; see the actions of our beloved "leaders".
@1lightheaded
@1lightheaded 10 months ago
Do you think the NSA has any interest in applying AI to surveillance? Asking for a friend.
@michaelsonner1240
@michaelsonner1240 1 year ago
"AGI will never happen" GPT-5 has entered the chat
@sjmarel
@sjmarel 1 year ago
Thank you. You verbalized what I have been thinking for the last couple of months
@michaelbuckers
@michaelbuckers 3 years ago
12:30 You also don't see them deployed on the battlefield because it would be piss easy to guard against the effect by simply adding an optical filter to ballistic goggles. And the reason we don't have autonomous combat systems is that they only tend to have 99% friend-or-foe recognition accuracy, so 1% of the time they'll be going to town on your own troops (there have been attempts, and there have been casualties). But that depends on what you consider "autonomous". Are claymore landmines autonomous? Are homing missiles autonomous? We use those in droves, and their solution to friend-or-foe recognition is not to have any: once activated, they'll happily kill anyone they can find.
@douglasjackson295
@douglasjackson295 3 years ago
Idea: build an AI to research AI safety. An AI with the goals of AI safety would either shut itself off immediately or prevent unsafe AI from ever being turned on.
@tim40gabby25
@tim40gabby25 3 years ago
I'm with Douglas
@nosuchthing8
@nosuchthing8 1 year ago
That only works if the research AI is just as smart as the background AI. Could a bunch of apes with language even comprehend how to stop modern humans, with our flamethrowers, machine guns, battleships, and nuclear weapons?
@Corporatizm
@Corporatizm 1 year ago
Brilliant. I'm a newborn baby in AI, but you make it all very sensible. Thank you for this.
@SamB-gn7fw
@SamB-gn7fw 3 years ago
People need to be more informed about AI safety. I'm glad you're doing this YouTube channel.