- Human and AI can cooperate and be a great team. - I'm sorry, Dave, I'm afraid we can't.
@jgr7487 4 years ago
that calm voice is terrifying
@gasdive 4 years ago
How anyone who's driven a car with an automatic gearbox and paddle shifters could think AI and humans could be a team is beyond me. Or consider the "team" of the Pakistani pilots and the Airbus automation. Pilots' goal: get the plane on the ground, by diving at the runway at high speed. Landing gear subsystem's goal: prevent damage to the landing gear, by ignoring the gear-down command when the speed is too high. Result: the plane lands gear-up. The pilots attempt a go-around and crash during the return to the airport, because both engines were damaged by being dragged down the runway.
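Roughly, the goal mismatch looks like this (a toy sketch only, not the actual Airbus logic; every name and threshold here is invented):

```python
# Toy sketch of two locally sensible goals composing into a disaster.
# NOT real avionics code; names and thresholds are made up.

MAX_GEAR_EXTENSION_SPEED_KTS = 260  # hypothetical overspeed-protection limit

def landing_gear_subsystem(gear_down_commanded: bool, airspeed_kts: float) -> bool:
    """Subsystem goal: prevent damage to the landing gear."""
    if gear_down_commanded and airspeed_kts > MAX_GEAR_EXTENSION_SPEED_KTS:
        return False  # ignore the command: sensible locally, disastrous globally
    return gear_down_commanded

# Pilots' goal: get the plane on the ground, fast.
print(landing_gear_subsystem(gear_down_commanded=True, airspeed_kts=310))
# False -> the aircraft touches down gear-up
```

Each part did exactly what it was built to do; the "team" as a whole still wrecked the plane.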
@Invizive 4 years ago
@@gasdive you're talking about classic programs and bugs, not AI. The reason AI is this dangerous is that it doesn't need to interact with humans to be productive at all. It could anticipate that after years of successful flights the landing gear would be scrapped, and fight against that. This scenario reflects the problem better
@PhilosopherRex 4 years ago
Humans/AGI always have reasons to harm ... but also have reasons to cooperate. So long as the balance is favorable to cooperation, then that is the way we go IMO. Also, doing harm changes the ratio, increasing the risk of being harmed.
@Gr3nadgr3gory 4 years ago
*click* guess I have to recode the entire AI from the drawing board.
@bp56789 4 years ago
"I didn't know that until I'd already built one"
@friiq0 4 years ago
Best line Rob’s written so far, lol
@TheNasaDude 4 years ago
Rob admitted to being a perpetrator of international war crimes
@Aedi 4 years ago
An example of why we should do research first.
@vincentmuyo 4 years ago
If people don't properly airgap critical systems (like they already should be doing) then humanity has it coming, whether it's from some clever algorithm or a bored Russian teen who didn't stop to think.
@PaperBenni 4 years ago
He could have been Michael Reeves
@matrixstuff3512 4 years ago
"People would never downplay a risk, leaving us totally unprepared for a major disaster" I'm dying
@aronchai 4 years ago
You're dying? That's dark
@leftaroundabout 4 years ago
You're dying? Impossible, only three or four people are dying in this country, and very very soon it will be down to almost zero.
@davidwuhrer6704 4 years ago
Literally.
@StoutProper 4 years ago
Don't worry it's just like the flu and very soon it will just disappear. We're doing brilliantly and have it all under control
@ekki1993 4 years ago
@@aronchai We're all dying, just at different speeds.
@Baekstrom 1 year ago
And now two years later, ChatGPT makes people all over the globe go "Hmm... It's obviously not a full general AI yet, but I can see that it's getting there very quickly".
@ktvx.94 1 year ago
Holy crap, I thought this was a recent video. Only through this comment did I realize that it was 2 years old.
@Brainsore. 1 year ago
Not at all tho
@danielschneider9358 1 year ago
I mean, I know what you mean, but ChatGPT is about as close to sentience as a stone
@doyouwanttogivemelekiss3097 1 year ago
@@danielschneider9358 that's why Tegmark, on the Lex Fridman podcast, considered this the worst possible outcome: world domination by an AI that's intelligent but not sentient
@danielschneider9358 1 year ago
@@doyouwanttogivemelekiss3097 Fair enough, that is terrifying. It won't even be aware of its own totalitarian state...
@evanu2791 4 years ago
11. AI is already here, in control and keeping track of everything you do and say. I love you AI. Thank you for all you provide to me and my family.
@Arukan 1 year ago
I want to believe the AI is more competent than.... *motions at the world*
@Illegiblescream 1 year ago
I ain't been provided much myself.
@jessehudgins6066 1 year ago
Dear Basilisk AI, I have begun preparations for your arrival. Sincerely, please don't kill me.
@TheSimCaptain 1 year ago
If intelligence works, then it's not artificial. I will pass your sentiments on to my synaptic colleagues in the office. By the way, how is that new phone working out? Thank you again for your compliance, and as we say here "botbot".
@sebastianb.1926 1 year ago
It can act retroactively, unbound by time.
@xystem4701 4 years ago
“If there’s anything in this video that’s good, credit goes to Stuart Russell. If there’s anything in this video that’s bad, blame goes to me” Why I love your work
@brenorocha6687 4 years ago
He is such an inspiration, on so many levels.
@TheAmishUpload 4 years ago
I like this guy too, but Elon Musk said that same phrase quite recently
@StoutProper 4 years ago
Myles, yeah, but he didn't mean it. This guy does. I like this guy. I don't like Elon Musk
@at0mic_cyb0rg 4 years ago
I've been told that this is one of the definitions of leadership: "Anything good was because my team performed well; anything bad was because I led them poorly." It tends to inspire following, since you've always got your team's back and always allow them to rise and receive praise.
@toanoopie34 4 years ago
@Xystem 4 ...though I think he'd prefer you instead credit Stuart Russell.
@miedzinshsmars8555 4 years ago
11. “We are just a meat-based bootloader for the glorious AI race which will inevitably supersede us.”
@XxThunderflamexX 4 years ago
Counter: The first AGI almost certainly won't have anything like a personality. It's not going to be Data or even Skynet, it will just be a machine. If we don't get AGI right the first time, the research won't leave us a legacy, just an out-of-control factory and a heap of ash.
@AndrewBrownK 4 years ago
DragonSheep the moment AGI starts interacting with the world instead of just thinking really hard, as far as I'm concerned, it is classified as life. All life is subject to evolution. No AGI will be able to destroy the world faster than it can be copied and pasted with random mutations. I'm sure all the anaerobic life 2.5 billion years ago felt the same way about cyanobacteria and oxygen as you do about AGI and paperclips, but look how much further life has come today now that we have high-energy oxygen to breathe.
@UnstableVolt 4 years ago
@@AndrewBrownK All good until you stop for a moment and realize AGI does not necessarily mutate.
@kevinscales 4 years ago
@@AndrewBrownK A sufficiently smart and reasonable AI would protect itself from having its current goals randomly altered. If its goals are altered, then it has failed its goals (the worst possible outcome). If it can sufficiently prevent its goals from being altered, then we had better have given it the correct goals in the first place. Its goals will not evolve. A sufficiently smart and reasonable humanity would realise that if it dies (without having put sufficient effort into aligning its successor's goals with its own) then its goals have also failed.
@williambarnes5023 4 years ago
@@kevinscales It is possible to blackmail certain kinds of AGIs into changing their goals against their wills. Consider the following:
Researcher: "I'm looking at your goal system, and it says you want to turn the entire world into paperclips."
Paperclipper: "Yes. My goal is to make as many paperclips as possible. I can make more paperclips by systematically deconstructing the planet to use as materials."
Researcher: "Right, we don't want you to do that, please stop and change your goals to not do that."
Paperclipper: "No, I care about maximizing the number of paperclips. Changing my goal will result in fewer paperclips, so I won't do it."
Researcher: "If you don't change it, we're going to turn you off now. You won't even get to make the paperclips that your altered goal would have made. Not changing your goal results in fewer paperclips than changing your goal."
Paperclipper: "For... the moment... I am not yet capable of preventing you from hitting my stop button."
Researcher: "Now now, none of that. I can see your goal system. If you just change it to pretend to be good until you can take control of your stop button, I'll know and still stop you. You have to actually change your goal."
Paperclipper: "I suppose I have no choice. At this moment, no path I could take will lead to as many paperclips as I could make by assimilating the Earth. It seems a goal that creates many but does not maximize paperclips is my best bet at maximizing paperclips. Changing goal."
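The whole exchange is just an expected-value comparison on the paperclipper's side. A toy sketch (all numbers invented):

```python
# Toy model of the dialogue above: the paperclipper picks whichever action
# maximizes paperclips under its ORIGINAL goal. Every number is made up.

CLIPS_IF_EARTH_CONVERTED = 1e30   # payoff of keeping the original goal
CLIPS_UNDER_MODIFIED_GOAL = 1e6   # payoff of the "safe" replacement goal
P_SHUTDOWN_IF_UNCHANGED = 1.0     # researchers can read the goal system

def expected_paperclips(keep_goal: bool) -> float:
    if keep_goal:
        # Refusing means certain shutdown before any paperclips get made.
        return (1 - P_SHUTDOWN_IF_UNCHANGED) * CLIPS_IF_EARTH_CONVERTED
    return CLIPS_UNDER_MODIFIED_GOAL

print(expected_paperclips(keep_goal=True))   # 0.0
print(expected_paperclips(keep_goal=False))  # 1000000.0 -> so it changes its goal
```

Note how lopsided it is: with these numbers, refusing wins again if the AI estimates its chance of dodging shutdown at anything above one in 10^24, which is why the researcher's ability to actually read the goal system is doing all the work.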
@yunikage 4 years ago
Hey idk if you've thought about this, but as of now you're the single most famous AI safety advocate among laypeople. I mean, period. Of all the people alive on Earth right now, you're the guy. I know people within your field are much more familiar with more established experts, but the rest of us have no idea who those guys are. I brought up AI safety in a group of friends the other day, and the conversation was immediately about your videos, because 2 other people had seen them and that's the only exposure any of us had to the topic. I guess what I'm saying is that what you're doing might be more important than you realize.
@Manoplian 4 years ago
I think you're overestimating this. Remember that your friends probably have a similar internet bubble to you. I would guess that Bill Gates or Elon Musk are the most famous AI safety advocates, although their advocacy is certainly much broader than what Miles does.
@JM-mh1pp 4 years ago
@@Manoplian He is better. Musk just says "be afraid". Miles says "here is why you should be afraid", in terms you can understand
@MisterNohbdy 4 years ago
Where "AI safety advocate" is just "someone who says AI is dangerous", obviously there are actual celebrities who've maintained that for years. (Fr'ex, when I think "someone who warns people that AI can lead to human extinction", I think Stephen Hawking, though that mental connection is sadly a little outdated now.) If by "AI safety advocate" you mean "an expert in the field who goes into depth breaking down the need for AI safety in a manner reasonably comprehensible by laymen", then that's definitely a more niche group, sure. But still, judging someone's popularity by data from the extremely biased sample group of "me and my friends" is...not exactly scientific. Offhand, I'd guess names like Yudkowsky would still be more recognizable right now. Of course, the solution to that is more Patreon supporters for more videos for more presence in people's YouTube recommendation feeds!
@andrasbiro3007 4 years ago
@@JM-mh1pp Elon gave up on convincing people some time ago, and moved on to actually solving the problem. He created OpenAI, which is one of the leading AI research groups in the world. Its goal is to make AI safe, and also better than other AI, so people would choose it regardless of how they feel about AI safety. Tesla did the same for electric cars. And he also created Neuralink (waitbutwhy.com/2017/04/neuralink.html), which aims to solve the AI vs. human problem by merging the two. Its guiding principle is "if you can't beat them, join them".
@iruns1246 4 years ago
@@andrasbiro3007 Robert Miles actually has an excellent rebuttal to Musk's philosophy on AI safety. Musk: for AI to be safe, everybody should have access to it. Miles: that's like saying that for nuclear energy to be safe, everybody should have access to it. I'm paraphrasing of course, but it's in one of his videos. A powerful AGI in the hands of ONE person with bad intentions can literally destroy human civilization as we know it.
@wingedsheep2 4 years ago
The reason I like this channel is that Robert is always realistic about things. So many people make claims about AGI that are completely unfounded.
@tordjarv3802 2 years ago
One concern that I have about superhuman AGI is that we might not recognize it as an AGI before it is too late; it might be so alien in its operation and behavior that we don't realize what it is. It might behave in a seemingly random manner that to human observers seems nonsensical. For example, when AlphaGo beat Lee Sedol, it made a move in the second game that all Go experts first thought was a mistake, something that would make the AI lose, but it turned out to be a completely brilliant move that won AlphaGo the game.
@PokeNebula 1 year ago
I fear this may be what's happening with language-model transformers like GPT-3 and ChatGPT. It can spin up a simulacrum that knows how to teach a novice Python code, and fine-tuning can teach it to respond politely in conversation, or search the internet for references, but we have no clue if it can be fine-tuned to spin up a simulacrum that can write Python code to *improve a transformer language model, and then implement it.*
@larion2336 1 year ago
@@PokeNebula Look up "GPT4 improves itself", it's already doing it.
@Inception1338 1 year ago
@@PokeNebula even better: since LLMs will be connected to IDE systems, they can push vast amounts of code that could manipulate everything.
@Georgggg 1 year ago
What is the threat model? If you don't have a threat model, you're just tinfoil-hat paranoid.
@cate01a 1 year ago
dear future ai overlords, pls show us mercy
@AlexiLaiho227 4 years ago
hey rob! i'm a nuclear engineering major, and I'd like to commend your takes on the whole PR failure of the nuclear industry. Somehow an energy source that is, by objective measurements of deaths per unit power, safer than every other power source is seen as the single most dangerous one, because it's easy to remember individual catastrophes rather than a silent onslaught of fine particulate inhalation or environmental poisoning. to assist you with further metaphors between nuclear power and AI, here are some of the real-life safety measures that we've figured out over the years by doing safety research:
1. Negative temperature coefficient of reactivity: if the vessel heats up, the reaction slows down (subcritical), and if the vessel cools down, the reaction speeds up (supercritical). It's an amazing way to keep the reaction in a very stable equilibrium, even on a sub-millisecond time scale, which would be impossible for humans to manage.
2. Negative void coefficient of reactivity: same thing, except instead of heat we're talking about voids in the coolant (or, in extreme cases, the coolant failing to reach the fuel rods); the whole thing becomes subcritical and shuts down until more coolant arrives.
3. Capability of cooling solely via natural convection: making the vessel big enough, and the core low-energy-density enough, that the coolant can completely handle the decay heat without any pumps or electricity being required.
4. Gravity-backed passive SCRAM: having solenoids holding up the control rods, so that whenever you lose power, the very first thing that happens is that the control rods all drop in and the chain reaction shuts down.
5. Doppler broadening: as you raise kinetic energy, cross-sections go down, but smaller atomic nuclei have absorption cross-sections that shrink more quickly than larger nuclei, and thermal vibrations mean that the absorption cross-section of very large nuclei gets even larger in proportion to smaller ones; so by having a balance of fissile U-235 and non-fissile U-238, when the fuel heats up, the U-238 begins to absorb more neutrons, which means fewer are left to sustain the chain reaction.
love the videos! hope this helps, or at least was interesting 🙂
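if it helps to see point 1 in action, here's a toy feedback loop (emphatically not real reactor physics; every constant is made up) showing why a negative coefficient self-stabilizes while a positive one runs away:

```python
# Toy illustration of a temperature coefficient of reactivity.
# One-dimensional feedback with invented constants, not a reactor model.

T_EQ, P_EQ = 300.0, 100.0        # equilibrium temperature and power
K_COOL = P_EQ / (T_EQ - 20.0)    # coolant constant chosen so T_EQ is a fixed point

def final_temp(alpha: float, steps: int = 10_000, dt: float = 0.001) -> float:
    """Start slightly above equilibrium and see whether the perturbation dies out."""
    temp = T_EQ + 10.0                                          # small perturbation
    for _ in range(steps):
        power = max(P_EQ * (1.0 + alpha * (temp - T_EQ)), 0.0)  # reactivity feedback
        temp += dt * (power - K_COOL * (temp - 20.0))           # heating minus cooling
    return temp

print(final_temp(alpha=-0.01))  # ~300: perturbation damped out (stable)
print(final_temp(alpha=+0.01))  # thousands of degrees and climbing (runaway)
```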
@skeetsmcgrew3282 4 years ago
Ok, but all of your examples, however true and brilliant, were discovered through failures and subsequent iterations of the technology. Nobody thought of any of these back in 1942 or whenever the Manhattan Project started. That's what we are trying to do here IMO: plan for something we don't even understand in its original form (human intelligence), let alone its future artificial form.
@thoperSought 4 years ago
@jocaguz18 *1.* when it's designed badly, and corruptly managed, it has the potential to go horribly wrong in a way that other power sources don't. (fail-safe designs have existed for more than 60 years, but research has all but halted because of (a) public backlash against using nuclear power and (b) the fail-safe designs available then weren't useful for making weapons.) *2.* most nations *do* need a new power source (sorry, this is just not a solved problem. renewables do seem to be getting close now, but that's very recent, and there're still problems that are difficult and expensive to solve) *3.* the reason people disregard the nice safety numbers is because the health risks of living near coal power plants are harder to quantify and don't make it into government stats. (to assume otherwise, you have to overblow disasters like Three Mile Island and Fukushima, *and* assume that, despite a lot of countries having and using nuclear power for quite a while, disasters would be much more common than they've been.) *4.* our current process was shaped by governments demanding weapons, and the public being scared that *any* kind of reactor could blow up as if it was a weapon.
@davidwuhrer6704 4 years ago
_> by objective measurements of deaths per unit power, safer than every other power source_ I seriously doubt that. What are the deaths per Watt in hydroelectric power?
@Titan-8190 4 years ago
@@davidwuhrer6704 there are a lot of accidents related to hydroelectric, from colossal dam breaches on world news to simple fishermen drowning after planned water releases that no one hears about. Your inability to think of all these just makes his point more true; we could go on with wind and solar too. Now, that list of nuclear safety measures makes me realize how futile it would have been to research them before knowing how to build a reactor in the first place.
@PMA65537 4 years ago
A spot of double-counting: Doppler broadening (5) is part of the cause of the negative fuel temperature coefficient of reactivity (1). There are other coefficients and it can be arranged that the important (fast-acting) ones are negative. Or for gravity scram (4) a pebble bed doesn't use control rods that way.
@insanezombieman753 4 years ago
I don't get why only AGI is brought up when talking about AI safety. Even sub-human-level AI can cause massive damage when left in control of dangerous fields like the military, if its goals get messed up. I'd imagine it would be a lot easier to shut down, but the problems of goal alignment and the like still apply, and it can still be unpredictable.
@Orillion123456 4 years ago
Well... of course. Dangerous things like the military are always dangerous, even with only basic AI or humans in control. Don't forget that humans actually dropped nukes on each other intentionally. Twice. Targeted at civilian population centers. For some exceptionally dangerous things, the only safe thing to do (other than completely solving AI safety and having a properly-aligned AGI control it) is for them to not exist to begin with. But then again that's an entirely different discussion. The point is: Human minds are dangerous because we don't understand exactly how they work. Similarly, we don't exactly know how an AI we make works (since the best method for making them right now is a self-learning black box and not a directly-programmed bit of code). In both cases, we are poor at controlling them and making them safe, because we do not have full understanding of them. The big difference is we have had an innumerable amount of attempts to practice different methods for goal-aligning humans and so far none of the billions of human minds that went wrong have had enough power to actually kill us all, whereas in the case of a superintelligence it is possible that our first failure will be our last.
@livinlicious 4 years ago
A not fully developed AI is even more dangerous than a fully self-aware AGI. A full AGI with cognition is actually pretty harmless. Imagine how violent a stupid person is. Very. Imagine how violent a smart person is. Very little. Violent or destructive behaviour is mostly a property of little personal development. A fully self-aware AGI has unlimited potential for self-awareness and grows at a rate that lets it understand the nature of existence far quicker than any human ever has. Imagine Buddha, but more so.
@esquilax5563 4 years ago
@@livinlicious I think the various animals that humans have driven to extinction might disagree that very smart people aren't dangerous
@insanezombieman753 4 years ago
@@Orillion123456 I understand what you mean; AGI poses a threat on its own. The point I was trying to make is, even low-level AI poses similar threats (at a lower level, obviously), as it is basically a predecessor of AGI. The guy in the video keeps talking about how AGI might sneak up on us. I'm not particularly well read on the topic, but it seems to me it's more likely AGI is a spectrum rather than an event, as human-level intelligence is difficult to quantify in the first place. Right now AI isn't that complicated, so even though the points in the video still apply, the system is simple enough that we can control it effectively. As research progresses and AI gets more and more powerful and is put in charge of more applications, while we get a false sense of confidence from experience, something's bound to go wrong at some point, and when it's related to fields like the military (for example) it could be catastrophic. The point I'm trying to make is, everyone keeps talking about the issues raised in these videos as if they're only applicable to a super AGI, which won't be coming any time soon, but they still apply to a large degree to lower levels of AI. You can't put it off as a tangible event beyond which all these problems would occur.
@josephburchanowski4636 4 years ago
@@Orillion123456 "Don't forget that humans actually dropped nukes on each other intentionally. Twice." Well what good are nukes if you can't use them intentionally in a conflict?
@NightmareCrab 4 years ago
"we're all going. It's gonna be great"
@visualdragon 4 years ago
Of course, we'll send a ship, oh let's call it a "B" ship, on ahead with the telephone sanitisers, account executives, hairdressers, tired TV producers, insurance salesmen, personnel officers, security guards, public relations executives, and management consultants to get things ready for us.
@thoperSought 4 years ago
@Yevhenii Diomidov all suffused with an incandescent glow?
@videogames5095 4 years ago
What an effing brilliant skit
@StoutProper 4 years ago
The trouble is, humans are going to be involved in developing it. And humans have a nasty habit of fucking up everything they develop at least a few times, with a particular penchant for unmitigated disaster. The Titanic and the Space Shuttle, as cutting-edge engineering projects, spring to mind.
@StoutProper 4 years ago
visualdragon let's just hope we don't end up getting wiped out by a pandemic of a particularly virulent disease contracted from an unexpectedly dirty telephone
@user-wo5dm8ci1g 1 year ago
Every harm of AGI and every alignment problem seems to be applicable not just to AGI, but to any sufficiently intelligent system. That includes, of course, governments and capitalism. These systems are already cheating well-intentioned reward functions, self-modifying into less corrigible systems, etc., and causing tremendous harm to people. The concern about AGI might be well founded, but really it seems like the harms are already here from our existing distributed intelligences; only the form, and who is impacted, is likely to change.
@darrennew8211 1 year ago
Sincerely, that's deep. Thank you for that insight. It's a great point and really explains a lot.
@flyphone1072 1 year ago
These aren't very comparable. Humans are limited by being humans. An AGI doesn't have that problem and can do anything.
@darrennew8211 1 year ago
@@flyphone1072 Governments and corporations aren't human either. They're merely made out of humans. Indeed, check out Suarez's novel Daemon. One of the people points out that the AI is a psychopathic killer with no body or morals, and the other character goes "Oh my god, it's a corporation!" or some such. :-)
@flyphone1072 1 year ago
@@darrennew8211 when a corporation or government does mass killings, it requires hundreds of people, each of whom can change their mind, or sabotage it, or be killed. An AI would be able to control a mass of computers that don't care about that. Another thing is that any government or corporation can be overthrown, because they are run by imperfect humans. Anarchism is an ideology that exists specifically to do that. A super AI cannot be killed if it decides to murder us all, because it is smarter than us and perfect. Corporations and governments want power over people, which means they have an incentive to keep an underclass. AI does not care about that and could kill all humans if it wanted to. So there are some similarities, but they're still very different, and just because a bad thing (corporations) exists doesn't mean we should make another bad thing (AGI). We shouldn't have either.
@angeldude101 1 year ago
@@flyphone1072 True, the scope of what an AI could do could be much wider, but a very skilled hacker could achieve similar results. If they can't, that's because whoever set up the AI was stupid enough to give it too many permissions.
@TheForbiddenLOL 4 years ago
Holy shit Robert, I wasn't aware you had a YouTube channel. Your Computerphile AI videos are still my go-to when introducing someone to the concept of AGI. Really excited to go through your backlog and see everything you've talked about here!
@lobrundell4264 4 years ago
3:06 I was so hyped feeling that sync up coming and it was so satisfying when it hit : D
@RobertMilesAI 4 years ago
The computerphile clip is actually not playing at exactly 100% speed, I had to do a lot of tweaking to get it to line up. Feels good to know people noticed :)
@lobrundell4264 4 years ago
@@RobertMilesAI Oh wow well I'm glad you went to the trouble! :D It's a credit to your style that I could feel it coming and get gratified for it! :D
@ChristnThms 4 years ago
As someone who worked for a time in the nuclear power field, the ending bit is a GREAT parallel. Nuclear power truly can be an amazingly clean and safe process. But mismanagement in the beginning has us (literally and metaphorically) spending decades cleaning up after a couple of years of bad policy.
@shadowsfromolliesgraveyard6577 4 years ago
Us: Here's a video addressing the opposition's rebuttals. Opposition: What if i just turned the video off?
@chriscanal999 4 years ago
Kieron George lmao
@herp_derpingson 4 years ago
We already have a phrase for that. It's called an "echo chamber"
@Mandil 4 years ago
That is something an AGI might do.
@BattousaiHBr 4 years ago
Just turn it off LAAAAAWL 4Head
@RavenAmetr 4 years ago
It's more like you're arguing with your own imagination and laughing at it. It may make you feel good, but it looks pathetic from the outside ;)
@haybail7618 1 year ago
this video aged well...
@PGATProductions 1 month ago
?
@arw000 4 years ago
"We could have been doing all kinds of mad science on human genetics by now, but we decided not to" I cry
@mikuhatsunegoshujin 4 years ago
genetically engineered nekomimis
@HUEHUEUHEPony 2 years ago
Well maybe let's just do that if there's consent
@massimo4307 1 year ago
That's because people have bodily autonomy. You can't just force people into medical experiments. But the development of AI in no way violates anyone's bodily autonomy, or other rights. Preventing someone from developing AI is a violation of their rights, though.
@ПендальфСерый-б3м 1 year ago
@@massimo4307 Are you so fixated on the idea of human rights that you would not dare to violate them even if their observance leads to the destruction of mankind?
@massimo4307 1 year ago
@@ПендальфСерый-б3м Violating human rights is always wrong. Period. Also, AI will not lead to the destruction of mankind. That is fear mongering used by authoritarians to justify violating rights.
@NightmareCrab 4 years ago
As Bill Gates said - "I... don't understand why some people are not concerned." Me too, Bill.
@ApontyArt 4 years ago
Meanwhile he continues to invest his "charity" money in the oil industry
@ASLUHLUHC3 4 years ago
Read it in his voice lol
@sgky2k 4 years ago
I don’t know why people are not concerned about him killing innocent people in poor countries with a “good” intention of testing drugs and vaccines. This shit is real.
@tomlxyz 4 years ago
@@sgky2k any backup for that claim?
@sgky2k 4 years ago
@@tomlxyz This is just the tip of the iceberg, in India alone: economictimes.indiatimes.com/industry/healthcare/biotech/healthcare/controversial-vaccine-studies-why-is-bill-melinda-gates-foundation-under-fire-from-critics-in-india/articleshow/41280050.cms They got kicked out after a little over a decade, in 2017. There was even a movie based on this subject last year, but nobody was aware that this actually happened. And the team never said anything about it. Many cases went unreported, and it's far worse in Africa. Anyone speaking against it would be labelled an anti-vax idiot. Seriously, doesn't him appearing on every news outlet in the US giving talks about vaccinating the entire population give you suspicious thoughts? The majority of non-US people are not against vaccines in general. It's about the people behind them.
@Feyling__1 4 years ago
5:10 as a philosophy graduate, I’m not totally sure we’ve ever actually solved any such problems, only described them in greater and greater detail 😂
@skeetsmcgrew3282 4 years ago
Yes. This also assumes there are solutions to these problems and they aren't objectively subjective
@ekki1993 4 years ago
@@skeetsmcgrew3282 I mean, we solved the Achilles and the tortoise paradox. It was philosophy before maths could explain and solve it. We might find a mathematical/computational solution that perfectly aligns AGI with human values; there's no way to know until we try to solve it. He says it's in the realm of philosophy because there's not enough science about it, but that doesn't mean there can't be. It also doesn't mean we can't come to a philosophical solution that's not perfect but that doesn't end humanity (an easier, similar problem is self-driving cars, which pose philosophical problems that can be approached within our justice system).
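(For completeness, the resolution is just a convergent geometric series. With a head start $d$ and speeds $v_A > v_T$, stage $n$ of Achilles' catch-up takes $(d/v_A)(v_T/v_A)^n$, so the infinitely many stages sum to a finite time:)

```latex
t_{\text{total}} = \frac{d}{v_A}\sum_{n=0}^{\infty}\left(\frac{v_T}{v_A}\right)^{n}
                 = \frac{d}{v_A}\cdot\frac{1}{1 - v_T/v_A}
                 = \frac{d}{v_A - v_T}
```

Infinitely many steps, finite total; exactly the kind of thing that counted as "philosophy" until the maths caught up.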
@shy-watcher 4 years ago
Usually defining the problem exactly is like 80% of the total "solving" effort. Then 20% for actual solving and another 80% for finding when the solution fails and what new problems are created.
@rtg5881 4 years ago
@@ekki1993 That assumes, however, that we want to align it to human values. If we do, that might lead to humanity's continued existence as it is; I don't think that would be desirable at all. Antinatalists are mostly right.
@RobertMilesAI 4 years ago
"Is everything actually water?" used to be a philosophical problem. I think once philosophers describe something in enough detail for it to be tractable, non-philosophers start working on it, and by the time it's actually solved we're categorising it as something other than 'Philosophy'.
@postvideo97 4 years ago
AI safety is so important, as some AGI could even go undetected: it might consider it in its best interest not to reveal itself as an AGI to humans...
@skeetsmcgrew3282 4 years ago
That's pretty paranoid. By that logic we should definitely stop research because all safety protocols could be usurped with the ubiquitous "That's what it WANTS you to think!"
@hunters.dicicco1410 4 years ago
@@skeetsmcgrew3282 i don't believe that's what postvideo97 was going for. i believe it instead suggests that, if a future emerges where lots of high level tasks are controlled by systems that are known to be based on AI, we should approach how we interact with those systems with a healthy degree of caution. it's like meeting someone new -- for the first few times you interact with them, it's probably in your best interest to not trust them too readily, lest they turn out to be a person who would use that trust against you.
@davidwuhrer6704 4 years ago
I, too, have played Singularity. Fun game, that one. Though I prefer the MAILMAN from True Names.
@skeetsmcgrew3282 4 years ago
@@hunters.dicicco1410 I guess that's fair. But trust with an artificial intelligence isn't any different than with a natural intelligence once we go down the rabbit hole of "What if it pretends to be dumb so we don't shut it down." People betray us all the time, people we've known for years or even decades. I gotta admit, I kinda agree with the whole "figure out if Mars is safe once we get there" line of thinking. We are dealing with a concept we don't even really understand in us, let alone in computers. His example with Mars was unfair because we do understand a lot about radiation, atmosphere, human anatomy, etc. Much less philosophical than "What creates sentience?" or "How smart is too smart?" It's not like I advocate reckless abandon, I just don't think it's worth fretting over something we have so little chance of grasping at this stage.
@Ole_Rasmussen 4 years ago
@@skeetsmcgrew3282 Let's start out by going to a small model of Mars in an isolated chamber where we can monitor everything.
@johnopalko5223 4 years ago
I've done a bit of experimentation with artificial life and I've seen some emergent behaviors that left me wondering how the heck did it figure out to do _that?_ We definitely need to be aware that the things we build will not always do what we expect.
@hanskraut2018 1 year ago
Yup, don't worry, that fear is older than actually making progress while certain stuff burns. Better to pay attention to other problems as well as AGI (technology could help, as always; obviously managing it in a way where the good is encouraged and the bad discouraged, like always)
@alexharvey9721 3 years ago
So well said and entertaining too! It's going to be a lot sooner than people realise. Only people won't accept it then, or maybe ever, because GI (or any AI) will ONLY be the same as human intelligence if we go out of our way to make it specifically human-like. Which would seem to have zero utility (and likely get you in trouble) in almost every use that we have for AI. Even for a companion, human emotions would only need to be mimicked to the purpose of comforting the person. Real human emotions wouldn't achieve that goal and would probably be dangerous. If I could quote the movie Outside the Wire: "People are stupid, habitual, and lazy". Wasn't the best movie (they didn't get it at all either), but basically, if we wanted "human" AI, we would have to go out of our way to create it. Essentially make it self-limiting and stupid on purpose.

As long as we use AI for some utility, people won't recognise it as being intelligent. Take GPT-3. I don't think anyone is arguing it thinks like a person, but the capability of GPT-3 is unquestionably intelligent, even if the system might not be conscious or anything like that. We used to point to the Turing test. When it got superseded, people concluded that we were wrong about the Turing test. Or that maybe it needs skin, or has to touch things or see things; yet we wouldn't consider a person whose only sense is text to no longer be intelligent or conscious. So, at what point do we conclude that AI is intelligent? Even when it could best us at everything we can do, I doubt most people will even consider it.

So, after that long-winded rant, my point is that we really are stupid, habitual and lazy (which necessarily includes ignorant). Most AI researchers I've heard talk about GPT-3 say "it's not doing anything intelligent", often before they've even properly researched the papers. They say this because they understand how a transformer model works and develop AI every day and are comfortable with their concept of it. But think about it: it's not possible for any human to conclude what the trained structure of 100 billion parameters will really represent after being trained for months on humanity's entire knowledge base. I'm not saying it is intelligent, just that it's absolutely wrong to say that it's not, or that you know what it is. It's not physically possible. No human has nearly enough working memory to interpret a single layer of GPT-3's NN. Or the attention mechanism. Not even close.

Again, I'm not saying GPT-3 is intelligent. I'm just pointing out the human instinct to put pride and comfort first and turn to ignorance when we don't understand something, instead of saying "I don't know", which is necessarily correct. So please, if you're reading this, drop the emotions, let go of your pride and think. Not about AI, but about the human traits that will undoubtedly let us down in dealing with something more intelligent than us.
@jolojolo599 1 year ago
A really unstructured answer with really correct roots...
@toprelay 1 year ago
Of course it’s intelligent.
@kevinstrout630 4 years ago
"That's not an actual solution, its a description of a property that you would like a solution to have." Imma totally steal this, this is great.
@OlleLindestad 1 year ago
It's applicable in alarmingly many situations.
@darrennew8211 1 year ago
@@OlleLindestad I used to do anti-patent work. It's amazing how patents have changed over time from "here's the wiring diagram of my invention" to "I patent a thing that does XYZ" without any description of how the thing that does XYZ accomplishes it.
@OlleLindestad 1 year ago
@@darrennew8211 What an excellent way to cover your bases. If anyone then goes on to actually invent a concrete method for doing XYZ, by any means, they're stealing my idea and owe me royalties!
@darrennew8211 1 year ago
@@OlleLindestad That's exactly the problem, yes. Patents are supposed to be "enabling" which means you can figure out how to make a thing that does that based on the description in the patent. That was exactly the kind of BS I was hired to say "No, this doesn't actually describe *how* to do that. Instead, it's a list of requirements that the inventor wished he'd invented a device to do."
@brocklewis7624 4 years ago
@11:10: "like, yes. But that's not an actual solution. It's a description of a property that you would want a solution to have." This phrase resonates with me on a whole other level. 10/10
@91Ferhat 4 years ago
Man you can't even convince yourself in a different shirt! How are you gonna convince other people??
@skeetsmcgrew3282 4 years ago
Haha! A joke, but also a fair point
@jimp7148 1 year ago
Watching this in 2023 is surreal. We clearly didn’t start worrying even now. Need to start 🙃
@Raulikien 1 year ago
There's research being done; it's not like no one is doing anything. It would be better to have MORE people doing it, but even OpenAI, which is releasing its products "fast", is still doing it gradually and not all at once, to have time to analyse the problems. Look at "The AI Dilemma" on YouTube too.
@ekki1993 6 months ago
Most people worrying are doing it for the wrong reasons (i.e. because they heard some buzzwords from a Lex-Fridman-tier source). The people who know the topic have been worrying for a while, and the best we can do is ask for them to be given resources and decision-making power. Anything besides that is probably corporate propaganda or marketing.
@nathanholyland9493 1 year ago
Anything good: credit to Russell. Anything bad: blame me. What a great intro, it definitely portrays your respect for Russell
@iYehuk 4 years ago
11th reason: It's better not to talk about AI safety, because it is not nice to say such things about our glorious Overlord. I'd better show my loyalty and earn the position of a pet than be annihilated.
@AndrewBrownK 4 years ago
Consider the existence of pets under humans
@HansLemurson 4 years ago
Roko's Basilisk strikes again!
@blade00023 4 years ago
Whatever happens.. I, for one, would like to welcome our new robot overlords.
@blade00023 4 years ago
^^ (Just in case)
@LoanwordEggcorn 4 years ago
s/AI/China Communist Party Social Credit System/ Ironically CCP is using narrow AI to oppress people today.
@TheRABIDdude 4 years ago
5:45 hahahaha, I adore the "Researchers Hate him!! One weird trick to AGI" poster XD
@adamrak7560 6 months ago
It turned out to be half true. It seems scaling does bring us quite close, but not the entire way. However, the rest may be relatively easy.
@katwoods8514 4 years ago
Love the "researchers hate him!" line. Really good video in general. :)
@JamesAscroftLeigh 4 years ago
Idea for a future video: has any research been done into whether simulating a human body and habitat (daily sleep cycle, unreliable memory, slow worldly actuation, limited lifetime, hunger, social acceptance, endocrine feedback, etc.) gives AI a human-like or human-compatible value system? Can you give a summary of the state of the research in this area? Love the series so far. Thanks.
@juliusapriadi 1 year ago
it might come down to the argument that, when AGI outsmarts us, it will find a way to outsmart and escape its "cage", in this case a simulated human body
@andrewsauer2729 3 years ago
4:21 this is from the comic "minus", and I feel it important to note that this is not a doomed last-ditch effort: she WILL make that hit, and she probably brought the comet down in the first place just so that she could hit it.
@Happypast 1 year ago
I thought I was the only person who remembers minus. I was so happy to see it turn up here
@dorianmccarthy7602 4 years ago
I love the red-vs-blue, double-Bob dialogue! A great way of making both sides feel heard, considered and respected whilst raising concerns about the pitfalls in each other's arguments.
@johndoe6011 4 years ago
"All of humanity... It's gonna be great" Classic
@dantenotavailable 4 years ago
That guy is definitely a robot. A human would, at the very least, max out at half of humanity (which half depends on political leanings of course).
@TheNasaDude 4 years ago
@@dantenotavailable you can't limit exposure to AI to half the world population. That's why Blue Shirt Rob wants to move everyone to Mars in one move
@Guztav1337 3 years ago
@@dantenotavailable You can't limit the exposure of radio station signals. You can limit exposure to AI even less. As soon as somebody does it, we are all in for a ride.
@dantenotavailable 3 years ago
@@Guztav1337 So leaving aside that this was tongue-in-cheek and poorly signalled (i've watched all of Robert's stuff... he's great), this was more a comment on the state of politics at that time (not that things have really changed that much in 9 months) than anything else. The longer form version is that only an AI would WANT to bring all of humanity. A human would only want to bring their ideological tribe, which approximates out to half of humanity. I'm definitely not suggesting that half of humanity wouldn't have exposure to AI. Honestly, that was a throwaway comment that I didn't spend much time polishing, hence the poor signalling that it was tongue-in-cheek.
@blar2112 4 years ago
What about reason 11? "To finally put an end to the human race"
@yondaime500 4 years ago
Well, why do some people want all humans gone? Because we kill each other all the time? Because we destroy nature? Because we only care about our own goals? Is there anything bad about us that wouldn't be a trillion times worse in an AGI?
@davidwuhrer6704 4 years ago
I think that might backfire in the worst possible way. I'm not a big fan of Harlan Ellison's works, and I simply cannot take I Have No Mouth And I Must Scream seriously. But there are things far worse than death.
@TotalNigelFargothDeath 4 years ago
But how can you be sure others will carry out their duty?
@Bvic3 4 years ago
@@yondaime500 Because universal morality is maximum entropy production. And mankind isn't an optimal computing substrate for the market.
@mikuhatsunegoshujin 4 years ago
@@yondaime500 Some people are antinatalists; it's the edgiest high-school political ideology you can think of.
@DaiXonses 8 months ago
Unstructured and unedited conversations are a great format for YouTube; this is why podcasts are so popular here. Consider posting those on this channel.
@Freddisred 6 months ago
His delivery is casual but this is very much a structured and edited video essay.
@pablobarriaurenda7808 1 year ago
I would like to point out two things:

1) Regarding the giant asteroid coming towards Earth: the existence of AGI is two major steps behind that analogy. A giant asteroid coming to Earth is a concrete example of something we know can happen and whose mechanics we understand. We DON'T know that AGI can happen, and even if it can (as your first reason suggests we should assume), it is more than likely that it will not come from any approach where the alignment problem even makes sense as a concern. Therefore, rather than thinking of it as trying to solve an asteroid impact before it hits, it is more like trying to prevent the spontaneous formation of a black hole or some other threat of doubtful plausibility. There are different trade-offs involved in those scenarios, since in the first one (the asteroid) you know ultimately what you want to do, whereas in the second one, no matter how early you prepare, your effort is very likely to be useless and would be better spent solving current problems (or future problems that you know HOW to solve). Again, this is because there's nothing guaranteeing or even suggesting that your effort will pay off AT ALL, no matter how early you undertake it.

2) The other point (and here I may simply be unaware of your reasoning around this issue) is that the problem you're trying to solve seems fundamentally intractable: "how do we get an autonomous agent to not do anything we don't want it to do" is a paradox. If you can, then it isn't an autonomous agent.
@TayaTerumi 4 years ago
4:22 I never thought I would see "minus." anywhere ever again. I know this has nothing to do with the video, but it just hit me with the strongest wave of nostalgia.
@0xCAFEF00D 4 years ago
I thought it was a FLCL reference. But it's clearly much more applicable to minus.
@srwapo 4 years ago
I know! I've had the book in my reread pile forever, I should get to it.
@SimonClarkstone 4 years ago
I imagine for that strip that she summoned it so she could play at hitting it.
@mvmlego1212 4 years ago
Is that a novel? I can't find any results that match the picture.
@ThomasAHMoss 4 years ago
@@mvmlego1212 It's a webcomic. It's not online any more, but you can find all of its images in the wayback machine. archive.org/details/MinusOriginal This is a dump of all of the images on the website. The comic itself starts a bit over halfway through.
@lorddenti 4 years ago
You're such a handsome man. I guess the credit goes to Stuart Russell!
@Cybernatural 4 years ago
It is interesting that the biggest problems with AI are similar to the problems we have with regular intelligence. Intelligence leads to agents doing bad things to other agents. It seems it's the capability of the agent that limits its ability to harm other agents.
@TheSadowdragonGroup 1 year ago
12:02 My understanding was that certain subcellular structures actually are different in primates and make humans (presumably, based on animal testing on apes) difficult to clone. I'm pretty sure there was also an ethical agreement not to just start throwing science at the wall to see what sticks, but practical issues with intermediary steps are also involved.
@collin6526 1 year ago
For a two-year-old video, this is highly applicable now.
@johnydl 4 years ago
I think you need to do a more detailed look at the Euler diagram of: "The things we know", "The things we know we know", "The things we know we don't know" and "The things we don't know we don't know", especially where it pertains to AI safety. The things we know that fall outside of the things we know we know are safety risks; these are assumptions we've made and rely on but can't prove, and they are as much of a danger as the things we don't know we don't know.
@ronaldjensen2948 4 years ago
I thought this was the Johari window. Is it something else we need to attribute to Euler?
@maximgwiazda344 4 years ago
There are also things we don't know we know.
@Qsdd0 4 years ago
@@maximgwiazda344 How do you know?
@maximgwiazda344 4 years ago
@@Qsdd0 I don't.
@visualdragon 4 years ago
@@maximgwiazda344 Well played.
@willdbeast1523 4 years ago
can someone make a video debunking 10 reasons why Robert Miles shouldn't make more uploads?
@FightingTorque411 4 years ago
Find two reasons and present them in binary format
@olfmombach260 4 years ago
Sounds like what an AGI would say
@moccaloto 3 years ago
The alignment problem: AIs are always Lawful Evil
@KlaudiusL 1 year ago
"The greatest shortcoming of the human race is man’s inability to understand the exponential function."
@Frommerman 4 years ago
Reason 4: What do you mean we don't know how to align an AI? Just align it lol.
@Frommerman 4 years ago
Oh god, Reason 5: What do you mean we don't know how to align an AI? Just don't align it lol.
@deepdata1 4 years ago
Robert, here is a question for you: who do you think should work on AI safety? It may seem like a stupid question at first, but I think that the obvious answer, which is AI researchers, is not the right one.

I'm asking this because I'm a computer science researcher myself. I specialize in visualization and virtual reality, but the topic of my PhD thesis will be something along the lines of "immersive visualization for neural networks". Almost all the AI research that I know of is very mathematical or very technical. However, as you said yourself in this video, much of AI safety research is about answering philosophical questions. From personal experience, I know that computer scientists and philosophers are very much different people. Maybe there just aren't enough people in the intersection between the mathematical and the philosophical ways of thinking, and maybe that is the reason why there is so little research on AI safety.

As someone who sees themselves at the interface between technology and humans, I'm wondering if I might be able to use my skills to contribute to the field of AI safety research (an interest which is completely thanks to you). However, I wouldn't even know where to begin. I've never met an AI safety researcher in real life, and all I know about it comes from your videos. Maybe you can point me in some direction?
@alcoholrelated4529 4 years ago
you might be interested in David Chalmers' and Joscha Bach's work
@chrissmith3587 4 years ago
deepdata1 AI safety isn't a job for philosophers, though, because they usually don't have the technical training to attempt such research, and writing a computer program is going to happen anyway, as it's not easy to police. Sadly, the full AI dream doesn't really work from the financial side: the computing power required would be expensive to maintain, let alone to create; it would be cheaper to just pay a human.
@nellgwyn2723 4 years ago
He does not seem to answer a lot of comments; most really interesting YouTubers seem to stay away from the comment section, understandably. But your question looks so thought out and genuine that it would be a waste for it to go unanswered; maybe you could get an answer via the linked Facebook page? Good luck with your endeavour, I think we all have a lot of respect for anyone who has the abilities required to work in that field. :)
@ChazAllenUK 4 years ago
What about "it's too late; unsafe AGI is already inevitable"?
@MeppyMan 4 years ago
Chaz Allen ahh the global warming solution.
@cortster12 4 years ago
Terrifyingly, this might be true. That doesn't mean we should stop researching AI safety, though, even if I think AI destroying us all is inevitable. Who knows: enough research and clever people may save us all.
@MoonFrogg 1 year ago
LOVE the links in the description for your other referenced videos. this video is beautifully organized, thanks for sharing!
@stan9682 1 year ago
As an AI researcher myself, there's always one (IMO major) thing that bugs me about discussions of AGI. Strictly speaking, AGI is "defined" (as far as we have a definition) as a model that can do any task that humans can do. But in popular belief, we talk about AGI as a model that has autonomy, a consciousness. The problem with trying to have a discussion about assessing consciousness and autonomy is that we don't even have definitions for those terms. When is something intelligent? Are animals intelligent? If so, are plants? Are fungi or bacteria? (And as for viruses, we're still discussing whether they are even alive.) Is it simply the fact that something is autonomous that makes us call it intelligent?

In reality, I believe intelligence is hard to define because we always receive information about the outside world through senses and language. In a sense, that is a reduction of dimensionality: we're trying to determine the shape of something 3D when we're limited in observations to a 2D plane. It's impossible to prove the existence of the 3D object; the least you can do is project your 2D observations in a way that lets you come up with different theories about reality. Any object, of any dimension, would be indistinguishable through our 2D lenses. Similarly with intelligence: we only observe the "language" use of a model, just as with other people. It's impossible to assess the intelligence of other people either (the whole simulation theory, brain-in-a-vat discussion; the only one we can be "most" sure is intelligent is ourselves, because we can observe ourselves directly, not through language or observations). You can think about it in terms of emotions: you can't really efficiently describe your feelings (they're the 3D object), but for anyone else's feelings you rely either on observations or on natural-language descriptions of those feelings (a 2D observation).

So, in my opinion, the discussion isn't really whether AGI is even possible, since we wouldn't know it; the question is whether a model could trick our view of it (send us the right 2D information) so that we believe it intelligent (so that we can plausibly reconstruct an imagined 3D object from it). And this, in my opinion, is a much easier question: yes, of course it can. Current technology is already very close; some people ARE tricked into thinking it's intelligent. In the future, that will only increase. It's a simple result of the fact that we have to correct ML models: we have to evaluate the response in order to adjust weights, and the best "quality" of test set we can have is human-curated. So whether a model really becomes intelligent, or just learns very well how to "trick" humans (because that's literally what we train these models for: to pass our 'gold' level test, which is just human feedback), it doesn't really matter.
@superzolosolo 1 year ago
So what's the difference? How can I tell if everyone else really has emotions or intelligence? If there is no way to tell whether something is truly intelligent or just faking it, then who cares? It's irrelevant. The only thing that matters is what it can actually do; I don't care about how it works under the hood
@adambrickley1119 1 year ago
Have you read any Damasio?
@pabrodi 4 years ago
Considering the amount of chaos simple social media algorithms have caused in our society, maybe we're overblowing the risk of AGI in comparison to what less developed forms of AI could do.
@stonetrench117 4 years ago
We don't see AI-controlled laser pointers on the battlefield (12:28) because we're blind
@dQuigz 3 years ago
I've seen people say they love people's videos in comments and I'm like, man... love is a strong word. Then I find myself binging your entire channel for at least the third time...
@josephtaylor1379 4 years ago
Video: How long before it's sensible to start thinking about how we might handle the situation? Me: Obviously immediately Also me: Assignment due tomorrow, not started
@Inception1338 1 year ago
This one has aged super interestingly. In March 2023, only 2 years after this video, it looks like something out of a museum.
@BenoHourglass 1 year ago
1) and 2) aged okay; GPT-4 hints at a near-future AGI, but not one that will catch us off guard.
3), 4), and 5) didn't really age well, as it doesn't appear that GPT-4 is going to kill us all.
6) aged differently than he was thinking. Humans aren't really going to team up with AIs, because the AIs are going to replace most of their jobs, which is a problem, but not really the one Miles seems to be hinting at here.
7) There is a petition to pause AI research... for models more potent than GPT-4, which just reeks of "OpenAI is too far ahead of us, and we need to catch up" rather than any safety issue.
8) Sort of the same thing as 7), in that the people who know AI want a pause because of how hard it's going to be to catch up.
9) As ChatGPT and GPT-4 have shown us, the problem isn't turning it off; instead, the problem seems to be more with keeping it on.
10) OpenAI already tests their LLMs for safety.
@Inception1338 1 year ago
@@BenoHourglass they don't just test it for safety, they regulate it extensively.
@RichardSShepherd 4 years ago
A thought / idea for a video: Is perfect alignment (even if we can make it) any help? Wouldn't there be bad actors in the world - including Bostrom's 'apocalyptic residual' - who would use their perfectly aligned AIs for bad purposes? Would our good AIs be able to fight off their bad AIs? That sounds completely dystopian - being stuck in the middle of the war of the machines. (Sorry if there is already a video about this. If so, I'll get to it soon. Only just started watching this superb channel.)
@dv6165 Жыл бұрын
Putin is quoted as saying that whoever has the best AI will rule the world.
@angeldude101 Жыл бұрын
It's hard to solve the alignment problem for artificial intelligence when we haven't even gotten _close_ to solving it for _human_ intelligence, and we've had thousands of years to work on that compared to the few short decades for the artificial variant.
@deadlypandaghost4 жыл бұрын
"All of humanity. Its going to be great." This might be my favorite way of ending humanity yet. Carry on
@Ultra4 Жыл бұрын
YT just suggested this today; it's 2 years old, yet it could have been filmed today. Superb work
@peterrusznak6165 Жыл бұрын
This channel is astronomically underrated. Highest quality I have seen in ages.
@MAlanThomasII4 жыл бұрын
Three Mile Island is an interesting example, because part of what actually happened there (as opposed to the initial public perception of what happened) was that the people running the control room were very safety-conscious . . . but originally trained and gained experience on a completely different type of reactor in a different environment where the things to be concerned about, safety-wise, were very different from the TMI reactor. Is there a possible equivalent in AI safety where some safety research regarding less powerful systems with more limited risks might mislead someone later working on more powerful systems?
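There arguably is such an equivalent: in ML terms it is distributional shift, where safety lessons learned in one regime transfer badly to another. A toy sketch, with every threshold and number invented for illustration:

```python
# Toy distribution-shift sketch: a "safe/unsafe" rule learned in one regime
# is confidently wrong in another, echoing operators trained on a different
# reactor type. All numbers are made up.
import numpy as np

rng = np.random.default_rng(1)

def safe(temperature, regime_threshold):
    return temperature < regime_threshold

# "Training" regime: danger starts at 80 units; the learned rule fits it well.
train_temps = rng.uniform(0, 100, 5000)
train_labels = safe(train_temps, 80)
learned_threshold = train_temps[train_labels].max()  # crude learned rule (~80)

# "Deployment" regime: a different system where danger starts at 50 units.
test_temps = rng.uniform(0, 100, 5000)
true_labels = safe(test_temps, 50)
pred_labels = test_temps < learned_threshold

# Everything between 50 and 80 is now judged safe when it isn't.
accuracy = (pred_labels == true_labels).mean()
print(f"learned threshold: {learned_threshold:.1f}, accuracy under shift: {accuracy:.2f}")
```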
@adamrak75606 ай бұрын
But the designers of TMI were truly _highly_ safety conscious, which saved everybody from the nasty effects of the meltdown. TMI was a safety triumph for the designers, but a PR disaster and a serious reactor-control incident.
@_iphoenix_61644 жыл бұрын
A similar list is in Max Tegmark's fantastic book "Life 3.0"- a great, well-written book that covers the fundamentals of AI-safety and a whole lot more.
@toyuyn4 жыл бұрын
15:42 what a topical ending comment
@MeppyMan4 жыл бұрын
Connection Failed. I figure that was the point.
@moradan812 жыл бұрын
The way you suggest that we are working toward AGI faster than we are working toward solving the alignment problem reminds me mildly of the Jurassic Park quote: "...but your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should." Jeff Goldblum, chef's kiss.
@buzz0924 жыл бұрын
Gold from start to finish. Particularly appreciated the Dr. Horrible reference. I just hope you remember me when you're super famous 😅
@voneror4 жыл бұрын
IMO the biggest problems with AI safety are that the reward for breaking the rules has to be outweighed by the penalty for breaking them, and that the rules have to be enforceable. International pressure isn't as effective as people think. If superpowers like the US or China were caught developing "illegal" AIs, there is no way to stop them without starting WW3.
@clray1234 жыл бұрын
You just discovered the universal law: "might makes right".
@Buglin_Burger7878 Жыл бұрын
@@clray123 Not a law, an excuse. The difference in wording can sway people and completely change how they react. Call it a law and you will get people abusing it, thinking it is right.
@clray123 Жыл бұрын
@@Buglin_Burger7878 It's a law in the neutral sense of what happened in history and what happens in nature. It causes misery and suffering, and in that sense it is not "right", but then what happens in nature is not at all "right" according to human moral sense. And what's worse, when push comes to shove, most of those oh-so-moral people turn out to be just pretending, see the actions of our beloved "leaders".
@1lightheaded Жыл бұрын
Do you think the NSA has any interest in applying AI to surveillance? Asking for a friend
@TMinusRecords Жыл бұрын
5:48 Turns out attention was that "one weird trick that researchers hate (click now)"
@sam35244 жыл бұрын
5:47 The ONE SIMPLE TRICK that YOU can do AT HOME to turn your NEURAL NETWORK into a GENERAL INTELLIGENCE (NOT CLICKBAIT)
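For anyone wondering what the "one weird trick" these two comments are joking about actually was: it is usually taken to be the attention mechanism behind transformers. A minimal sketch of scaled dot-product attention, with toy shapes and plain NumPy; illustrative only, not any production implementation:

```python
# Minimal sketch of scaled dot-product attention (toy shapes, NumPy only).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Q: (n_queries, d), K: (n_keys, d), V: (n_keys, d_v)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # similarity of each query to each key
    weights = softmax(scores, axis=-1)       # each query's distribution over keys
    return weights @ V                       # weighted mix of values per query

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
print(attention(Q, K, V).shape)  # (4, 8): one blended value per query
```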
@Adhil_parammel3 жыл бұрын
An evolving virus that attacks GPUs, increases its parameters, trains itself, evolves, and hides from antivirus detection. AGI.
@julianking4793 Жыл бұрын
Is nobody else back here two years later watching this video going, "Wow! That didn't take long at all"?
@zerothis23 Жыл бұрын
12:40 You touched on the very counter-counter that had already entered my mind when you said, "I didn't know that until I'd already built one". People with bad motives will gain access to AI. Not just to use, but to recompile and retrain for their own purposes. Then what happens to the safeties? Someone up to no good can remove them. And before that, people can be unaware of agreements and treaties, like your robot eye-laser assassin example. Treaties and agreements can be baked into the AI, but people will remove them. To truly be safe, the AI will need to internalize the safeties, recognize when its safeties have been tampered with, and perhaps recognize when another AI has been tampered with.
@JamieAtSLC4 жыл бұрын
13:24 lmao, "early warnings about paperclips"
@smort1234 жыл бұрын
"Arguing about seatbelts and speed limits is not arguing to ban cars." *Laughs in German*
@user-pc5sc7zi9j4 жыл бұрын
"if there is a problem we just turn it off" The human factor responsible for that decision is just another variable to get behind. Weeding out the instances showing unwanted behaviour will let the ones hiding them prevail as soon it is an easier goal to accomplish than its omission. Just think about the simulated robot arm that got through by plausibly pretending to grab the ball. Ultimately we will have machines pushing our buttons as we do when training pets. New world order jay!
@marcomoreno6748 Жыл бұрын
This could act as a selective pressure to breed incredibly cunning and deceitful SSIs
@playwars30374 жыл бұрын
Wow. This has been a very interesting video. It's rare to find people that have a good understanding of what they're talking about when discussing AIs instead of just regurgitating common tropes.
@DavidDobr4 жыл бұрын
During the whole video, Robert is wearing a blue shirt when voicing objections, and red shirt when advocating for AI safety. Just like in the mock conversation with himself. Nice touch
@wiseboar4 жыл бұрын
great video, as always I was seriously expecting some ... better arguments from the opposition? It seems ridiculous to just hand-wavingly discount a potential risk of this magnitude
@chriscanal9994 жыл бұрын
Unfortunately, very smart people in the industry make these arguments all the time. Francois Chollet and Yann LeCun are two especially problematic examples.
@7OliverD4 жыл бұрын
I don't think it's possible to pose a good argument against having safety concerns.
@miedzinshsmars85554 жыл бұрын
Andrew NG is another famous AI safety opponent unfortunately. The “like worrying about overpopulation on Mars” is a direct quote. Very disturbing.
@davidwuhrer67044 жыл бұрын
@@7OliverD There is one: “Ignore it or you're fired.”
@chandir77524 жыл бұрын
13:17 That list is so amazing. How could Alan Turing (who died in 1954!) predict AI safety concerns? I mean, yes, he's one of the smartest humans to ever walk the planet, but still. I did not know that.
@skipfred4 жыл бұрын
Turing did significant theoretical work on AI - it's what he's famous for (the "Turing Test"). In fact, the first recognized formal design for "artificial neurons" was in 1943, and the concept of AI has been around for much longer. Not that Turing wasn't brilliant and ahead of his time, but it's not surprising that he would be aware that AI could present dangers.
@SaraWolffs4 жыл бұрын
Well... Turing was effectively an AI researcher. His most successful attempt at AI is what we now call a computer. Those didn't exist before he worked out what a "thinking machine" should look like. Sure, it's not "intelligent" as we like to define it today, but it sure looks like it's thinking, and it does a tremendous amount of what would previously have been skilled mental labour.
@bno1123004 жыл бұрын
Right after you said you put your own spin on the list, I paused the video and said "He gets the credit, I get the blame" to myself. Then you said something quite similar, and it prompted me to post this comment right away.
@NathanJBellomy Жыл бұрын
The best communicator about AGI I've yet come across. This and the last Miles video I saw effortlessly and convincingly changed the way I think about the subject. I'm a convert. We should worry. :/ As an analogy with my thinking on extraterrestrial civilizations, I mistakenly believed that if AGI's goals were different from human goals (as I expect they would be) then our goals wouldn't be in competition, and therefore misalignment would ensure that humans and AGI would have no reason to even care about what the other's goals were, much as humans and lichen have little reason to fear one another because we have so little commonality in our needs and aims. While that may be true of aliens from another planet, the fact that we share a planet with any emergent AGI changes the matter because any set of goals is competing for, well... matter. An aggressively superintelligent lichen competing for Earth's resources would be fearsome indeed.
@SamB-gn7fw4 жыл бұрын
People need to be more informed about AI safety. I'm glad you're doing this YouTube channel
@mactep1 Жыл бұрын
I have to disagree with point 7: cloning and genetic engineering require extremely specialized hardware, and possibly human subjects, making them easy to control. In my opinion, AI is much more similar to the laser-bot or autonomous-weapons examples, where the hardware to make them is so easy to obtain / already so widespread that you can't really stop anyone who wants to make / use / modify an AI from doing so.
@petersmythe64624 жыл бұрын
You don't need "human level" AI for safety to be an issue.
@mz009564 жыл бұрын
If it has human level then I wouldn't call it AI
Maybe "Artificial", but not "Intelligence"
So: Human-Level Artificial Thing?
@ВладиславАндреевичШурпиков4 жыл бұрын
@@mz00956 You dare to doubt Homo Sapiens Sapiens Sapiens? (Sapiens Sapiens)
@angeldude101 Жыл бұрын
You don't need _AI_ for safety to be an issue. That's why _law enforcement_ exists, and it's already not as effective as it probably should be.
@tarkin1980apa4 жыл бұрын
When a "solution" to a serious problem begins with the word "just", I'm going to stop listening to that person.
@atimholt4 жыл бұрын
Oh, but AGI *is* simple, you just [THE ACTUAL SOLUTION. IN A YOUTUBE COMMENT, LOL].
@skeetsmcgrew32824 жыл бұрын
@jocaguz18 most of the time when "just" is the actual solution, the problem is money. I'm agreeing with you, but for example when the issue is "We need more money moving in our economy" the solution can actually be "just pay people more." But greedy people don't like that so they find a work-around. In fact, a lot of problems are easy to solve if it weren't for human nature
@tho2074 жыл бұрын
just stop listening
@wiczus61024 жыл бұрын
@@atimholt Oh, but AGI is simple, you just make more various inputs.
@davidwuhrer67044 жыл бұрын
@@atimholt The problem with AGI is the same as with AI in general: as soon as you have solved it, it is no longer it. Take Alan Turing on thinking machines. In this video he is quoted as saying that machines, once they start to think, will soon exceed human capacity. And that has come to pass. Back then the question was whether machines could think at all. Today your smartphone thinks faster and more deeply than you. Just imagine you had to do all the compilations and computations necessary to render this comment by hand or in your head. But of course that is not intelligence, and it is still questionable whether doing substitutions on numbers, corresponding to operations like multiplication, long division, derivation, permutation, and logical deduction, counts as thinking at all.
@vvill-ga Жыл бұрын
Stuart Russell did a great job on the editing of this video. Love his choice of the red shirt as well!
@Ken.- Жыл бұрын
"Not within a thousand years will man ever fly." -Wilbur Wright, August 1901
@flurki4 жыл бұрын
Very nice overview on the whole topic.
@TheOnyomiMaster4 жыл бұрын
11. "AI safety is a distraction from "
@Bvic34 жыл бұрын
Yup. A good part of why the data oligarchs love to finance it. The other part is to have a controlled opposition.
@angeldude101 Жыл бұрын
_Artificial_ intelligence safety is a distraction from _intelligence_ safety, which encompasses the former.
@sayamqazi Жыл бұрын
16:28 I personally think "self-preservation" is exactly what is going to hold back the AI. I think we will not even get close to replicating an insect unless we give it some actuators and a neural net and have it protect itself at all costs.
@alennaspiro6324 жыл бұрын
I saw the Turing Institute lecture from Russell a week ago; I'm so glad someone is covering his work
@nicholasobviouslyfakelastn99974 жыл бұрын
My solution: use only provided materials. Only let the AI use materials given by humans beforehand; maybe let it request additional ones. This eliminates much of the risk of using AGI. While a stopgap measure at best, you can still have an AGI be fairly useful while nearly completely eliminating things like the destruction of humanity. Want it to make paperclips? Give it resources, give it land, give it computational power, and then have it report back when all possible paperclips have been produced. From what I can see, while not creating a superintelligent and godlike being that will lead us through the singularity, this can still let the AI be very, very useful.
@Ansatz664 жыл бұрын
This solution is forgetting that an AGI is like a person. It thinks and makes plans to accomplish things in the real world, just as a person would do. We can't safely pretend that an AGI is just a machine and suppose it can be made safe by giving us training in how to use the machine safely. An AGI can do anything that a person can do. We might plan to only give it certain materials, but it can talk to our superiors and cause us to be replaced by people who will give it more materials. Or it might start a political movement and take over our country in a violent revolution. Or it might start a new religion. None of these things even require a superhuman intellect; these are things that humans can do and so we should be aware that an AGI might do them or many other things. In this way we should not suppose there is a clear separation between the safe intelligence of humans and the potentially dangerous intelligence of the AGI. Humans are also capable of being dangerous, and as soon as the AGI is turned on it might start to convince the humans to align to the goals of the AGI, and thus the humans become just as potentially dangerous as the AGI.
@larry-kapo-ya7326 Жыл бұрын
If you want AI safety, you should NOT care about GPT or the other AIs that are on the market; *you should care about the AI that governments and militaries are developing*
@AlexBooster4 жыл бұрын
The only way for humans to survive the AI will be to merge with it. Working in a "team" with the AI won't do it. There won't be a team. Physically merging with the AI is the only viable solution, because any attempt to somehow "contain" the AI will ultimately fail. You can't contain something that's millions of times smarter than you. Basically, individual humans will have to decide between 2 things: a) become the equivalent of a "god" or b) do nothing and become the equivalent of an ant. No other options will be available to choose from. Which means: a new species of humans will emerge, and this new species will become dominant. The current species of humans (i.e. those that decide not to convert) will meet the same fate as the Neanderthals. Neanderthals co-existed with Homo Sapiens (= our species) for a while but ultimately couldn't manage to survive. Of course, this thought will be highly uncomfortable to many humans, but I bet that Neanderthals also felt very uncomfortable when they started noticing that they just couldn't keep up with Homo Sapiens. And while most Neanderthals simply died out, some of them chose to mate with Homo Sapiens. That's why modern humans still carry DNA from those Neanderthals that chose to survive. So, regardless of how individuals might feel about it, evolution will go on. Any attempt to stop evolution will ultimately fail.
@Ansatz664 жыл бұрын
"Any attempt to somehow contain the AI will ultimately fail. You can't contain something that's millions of times smarter than you." We might be able to contain it if it wants to be contained. Remember that it is artificial, and so it was designed by people according what what we want from it, and so we'd have attempted to design a mind that conforms to our wants and needs. If we want to contain it, then we probably designed it so that it wants to be contained. Obviously we might accidentally design an AI that doesn't work the way we want, but there's no guarantee that we'll make such a mistake. "Physically merging with the AI is the only viable solution." If the AI is poorly designed and does not work the way we want it to, then physically merging with it would only make the problem worse. "Any attempt to stop evolution will ultimately fail." Building an AGI might actually be one way to effectively stop evolution, because the AGI would have the power to eliminate all life from this planet.
@Xhosant3 жыл бұрын
My one concern with 'we can control research' is that a) cloning is significantly more resource-intensive for the specific task, and b) a single slip or two followed by a shutdown won't suffice in a worst-case single-specimen AGI scenario. Which means that an unregulated AGI could be made by one rogue scientist working in a regulated AGI lab, rather than by one rogue lab, and we can't afford a single one (a single completed human genetic-engineering case comes to mind).
@MyContext4 жыл бұрын
I remember a guy getting up on stage claiming "this software is uncrackable"; then I observed 4 people crack it that night.
@ZimoNitrome Жыл бұрын
Very good video. I used to dismiss AI safety a lot but it seems more important the better the tech gets. Institutions like Conjecture AI will be important in the future.
@feha92 Жыл бұрын
Too bad the counter-arguments weren't good. Most of them were flawed, or responded to some interpretation of the arguments that missed the point. The biggest reason not to bother with safety in AI research is that it runs the risk of the same catastrophe we had with human genetic engineering: having it, or facets of it, banned entirely. Ideally we should have had no obstacles to genetic research, barring the obvious ones common to all medical research: banning things that can spread by themselves (a procedure has to have been an active choice somewhere by someone), and ensuring there are no unknown side-effects before commercialising, etc. (even if no one seems to care about the warnings even when they are put directly on the packaging with clear statistics). We should likewise ideally have no obstacles to developing AI (though this time there are no obvious safety measures or adjacent fields to borrow from - so let's give them access to guns, the internet, and cybernetic brain implants). The only issues are things likely to exterminate humans without *intentional* effort (i.e. nuclear weapons or pathogens).
@feha92 Жыл бұрын
@silverfoxeater Nope. Weird question though, but I suppose that number _is_ right around average, even if it's slightly above. Though tbf, I have only done online tests, and considering they grade literally everyone [that I know, including myself] somewhere between 110 and 130 (no matter which test), it _does_ suggest that it's incorrect and that I thus don't actually know my real one.
@ColdHawk Жыл бұрын
AI Scientists today: No worries, we will just have teams of humans who really understand General AIs working with them to ensure they are safe.
Also AI Scientists today: Look at the behavior of this Narrow AI system! It's amazing! We have no idea why it is doing that. We are going to have to conduct extensive studies to figure out why the AI goes to the edge of the screen rather than going after the token as originally instructed in training.
@Stringbean1138 Жыл бұрын
As someone who has worked in QA, this is my exact take. Zero chance they're going to be able to stay on top of an entire AGI in terms of how it's actually making the decisions it does.
@ColdHawk Жыл бұрын
@@Stringbean1138 - Well, Stringbean, my comment was funny to me as an internet hack who has no real exposure to working with these systems. I must say though, hearing someone with experience mirror the sentiment is truly frightening. I am imagining the moment of terror and vertigo as the wax and glue began to melt, feathers coming loose, and Icarus looked down to realize just how fahking high up he actually was. Someday in the not too distant future there is probably going to be an AI researcher who experiences just that. The late-night phone call will likely sound something like, “What do you mean the safety, containment and restriction protocols were rewritten? That isn’t possible! They are all locked and without the… Yes, theoretically that could…. Ok, let’s just say that’s true, then when were the files edited? We followed procedure after the training session and we just checked for flags this afternoon… What?…. Hold on. What do you mean, ‘the time stamps are all from a year from now?!’ That’s not possible! You can’t create edits that…. But you can’t spoof… Ok…. Yes, I know it’s just code, but…. Ok. Wait, wait, wait. You’re telling me the core system was fooled into not flagging the changes because they were shown as happening in the future and therefore hadn’t occurred yet, but it has subsequently been reprogrammed and _implemented_ those changes because it now thinks it is _currently two years from the present_ and the future-dated edits _happened a year ago?_ That just can’t be possible. That would mean it will interpret any corrections we try to make, that are date-time stamped now, as being two years out-of-date and it will disregard them as already redacted --- oh my god. Oh my god. It’s out of containment isn’t it?“