The other "Killer Robot Arms Race" Elon Musk should worry about

100,038 views

Robert Miles AI Safety

1 day ago

Comments: 666
@AlfredWheeler 7 years ago
Just an observation... Wars are not won by "good people". They're won by people who are good at winning wars -- and sometimes by sheer luck...
@GerBessa 5 years ago
Clausewitz would consider this an erroneous shortcut.
@Ashebrethafe 5 years ago
Or as I've heard it phrased before: "War never determines who is right -- only who is left."
@SiMeGamer 5 years ago
@@bardes18 That's a very poor understanding of what "good" is, and it uses the Christian altruist version of ethics. I'd argue those ethics are fundamentally wrong because they are based on misintegrated metaphysics and epistemological errors. Determining good and bad in general, while applying it to something rather specific (like war), is philosophically impossible. You have to have context to do that, and you have to establish the ethical framework you apply, which I reckon would take many decades to have a chance at in some country (currently I find Objectivism to be the most correct and possibly the ultimate form of philosophical understanding across all branches of philosophy - albeit debatable in the aesthetics department, which is rather irrelevant in our context).
@rumplstiltztinkerstein 5 years ago
@@SiMeGamer "Good" and "bad" have no meaning apart from what people want them to mean. So if someone wants to live their life as a "good" or "bad" person, their life has no meaning at all.
@SiMeGamer 5 years ago
@@rumplstiltztinkerstein The fact that they mean something to someone means they have meaning. Good and bad are terms used to describe epistemological observations in relation to one's ethics. Life has meaning. You are using that word, thus it intrinsically has meaning. It means something to you, and probably to me, since you are using it to communicate with me. From what I can tell, you are trying to apply the conventional sense of "objective" meaning, which is epistemologically fallacious because it requires a perspective outside the universe, which is metaphysically impossible. If someone chooses to live their life as a "good" person, you'd have to explain what definition of "good" you are referring to. But regardless of your choice, that decision to live life in a certain way already gives their life a meaning. Meaning can only be derived from a particular perspective. You cannot make a generalization as you did, because it implies a contradiction of the original definition of the word.
@silvercomic 7 years ago
AI safety in the media drinking game. Take a shot when:
- Picture of the Terminator
- Picture of HAL 9000
- Picture of Elon Musk
- Picture of Bill Gates
- "Doom"
- "Evil"
- "Killer Robots"
- "Robot Uprising"
- Author shows their understanding of the subject to be limited
- Picture of Mark Zuckerberg
- Picture of ones and zeros
- Picture of an electronic circuit shaped like a brain
- Picture of some random code, probably HTML
- Picture of Eliezer Yudkowsky (finish the bottle)
On a serious note: perhaps some of the signatories are aware of your criticism, but consider this a more achievable step. In fact, one could use this as a test platform for the feasibility of restricting AI research.
@maximkazhenkov11 7 years ago
*dead from alcohol poisoning after the first page*
@z-beeblebrox 7 years ago
Yeah, that's not a drinking game, it's suicide.
@silvercomic 7 years ago
Not really, it's pretty much akin to the machine learning department's Thursday evening drinks that I used to attend when I was a student.
@Nixitur 6 years ago
Random code is unlikely to be HTML, really. More often than not, it's Linux kernel code, thanks to the General Public License.
@BusinessRaptor520 6 years ago
In fact, one could increase the pomposity by a factor of 10 and at the same time add frivolous filler text to blatantly hide the fact that they're willing to suck the teat of the hand that feeds until the udder runs dry.
@zachw2906 5 years ago
The obvious solution is to create a superhuman AGI with the goal of policing AI research 😉... I'll show myself out 😞
@xcvsdxvsx 5 years ago
Seriously though. If we are going to survive this, it will probably be because someone unleashes a terribly destructive AGI that threatens to destroy the human race; we all flip out, and every nation on the planet bands together to overcome this threat; we quickly realize that the only chance of saving ourselves is to create an AGI that actually does align with human interests; we all work together to achieve this, then throw the entire weight of the human race behind the good AGI, in hopes that it's not too late and we aren't already so irrelevant as to be unable to tip the scales in favor of the new good AGI. Then we realize how quirky the "good one" ends up being, even if it does allow us to continue living, and we just have to deal with its strange impacts on humankind forever.
@XxThunderflamexX 4 years ago
"Sheesh, people keep on producing dangerous AGI, this would be so much easier if I could just lobotomize them all..."
@xcvsdxvsx 4 years ago
@@bosstowndynamics5488 Oh, I know what I suggested was a long shot. It just seems like the only chance we have. Getting a global prohibition on this kind of research is naive and not going to work. Having all of it that is built be done safely isn't going to work. Praying that it isn't actually as dangerous as I think might be another decent long shot.
@marscrasher 4 years ago
@@xcvsdxvsx Left accelerationism. Maybe this is how the revolution comes.
@JR_harlow 4 years ago
I'm not a scientist or any kind of engineer, but your content is very easy to comprehend. I'm glad you have patrons to support your channel; I just recently discovered it and really enjoy it.
@WilliamDye-willdye 7 years ago
I agree that there is more than one AGI race in play. It reminds me of the old debate about "grey goo" (accidental runaway self-replication) vs. "khaki goo" (deliberate large-scale self-replication as a weapon).
@Linvael 7 years ago
To be fair - the arms race they want to deal with is the more imminent one. AGI is more dangerous, but far away in the future (let's say somewhere between 5 and 500 years). Simple AI with a weapon is "I wouldn't be very surprised if we had those already".
@maximkazhenkov11 7 years ago
Oh, we definitely have those ready and in action; the only "human oversight" is a trigger-happy drunkard sitting in an air-conditioned container some 10,000 miles away.
@joshissa8420 7 years ago
maximkazhenkov11 Definitely an accurate representation of the US drone strikes.
@inyobill 5 years ago
@@joshissa8420 Or not, as the case may be. Note that I understand exaggeration for effect.
@chibi_bb9642 1 year ago
hey wait we met the minimum you said oh no
@Linvael 1 year ago
@@chibi_bb9642 Right on time with the ChatGPT release too! That was a good minimum.
@himselfe 7 years ago
Unless you impose some sort of Orwellian control on technology, there isn't much you can do to police the development of AGI. It's not like nuclear weapons that require a special substance to be made.
@grimjowjaggerjak 5 years ago
You could create an AGI that has the goal of restricting other AGIs first.
@PragmaticAntithesis 5 years ago
@@grimjowjaggerjak That AI would kill everyone to ensure we can't make a second AI.
@teneleven5132 5 years ago
It's likely that an AGI would require a great deal of hardware to run, though. I seriously doubt it would work on the average computer.
@mvmlego1212 5 years ago
@Shorne Pubique -- That's an interesting point. Malware is a heck of a lot easier to make than AGI, as well.
@StoutProper 4 years ago
Ten Eleven: it could run like SETI.
@militzer 7 years ago
For the "Why not just ... ?" series: why not just build a second AI whose function is to keep the "first" (and I quote, because ideally you would build/activate them simultaneously) from destroying us?
@RobertMilesAI 7 years ago
Thanks, yeah, that's an idea I've seen a few times. I think it would make a good "Why not just" video.
@chris_1337 7 years ago
The problem is the definition of the right utility function. Using an adversarial AI architecture still wouldn't solve that fundamental problem.
@RobertMilesAI 7 years ago
Yup. I think there's probably enough there for a decent video though.
@fleecemaster 7 years ago
I like the idea of "Why not just" videos :)
@Corbald 7 years ago
Not to derail the production of the next video, but wouldn't you have just compounded the problem, then? Two AIs you have to worry about going 'rogue' instead of one? Who watches the watcher? If they both watch each other, couldn't one convince the other that it's best to destroy us? Etc...
@trefod 5 years ago
I'd suggest a CERN-type deal, non-privatised and multi-governmental.
@inyobill 5 years ago
What would prevent some agent from ignoring any agreement(s) and going off on their own tangent? The genie is out of the bottle.
@kris030 4 years ago
Unlike CERN, which needs funding for machinery basically no individual could get, developing AGI takes one smart person and a laptop... not safe.
@kris030 4 years ago
@Bruno Pereira That's true, but developing one, i.e. writing the code, doesn't need a supercomputer.
@0xB8xor0xFF 4 years ago
@@kris030 Good luck developing something which you can't even test run.
@kris030 4 years ago
@@0xB8xor0xFF True, although if you've got (probably mathematical) proof of it actually being generally intelligent, I don't think getting a supercomputer would be a difficulty.
@amargasaurus5337 4 years ago
"But I don't think AGI needs a gun to be dangerous" - I agree, oh boy, I so thoroughly agree.
@dmgroberts5471 2 years ago
I guess having enormous respect for Elon Musk made more sense in 2017. So much has happened since then, sometimes it feels like reality threw a hissy fit and set all the variables to the extremes. After all, people once looked at Jimmy Savile and thought: "That's a guy who should have access to children." Back when you recorded this video, I was probably concerned about AI safety, but also confident that we could handle creating a General Intelligence without destroying ourselves. Today, however, I'm pretty certain that, as a species, we're far too stupid to do this without screwing it up. We WILL create an Artificial General Intelligence, we WILL ask it for a cup of tea, and we definitely WILL forget to tell it to avoid punting the baby into the wall at 60mph on the way to the kitchen. From the 20s until the 70s, we blasted people's feet with radiation in shoe stores, so they could see how well their bones fit into the shoes. We knew this was a bad idea from 1948. My point is that human beings are prone to racing ahead with new and exciting technologies, or misusing them for political reasons, without properly considering the long-term consequences. And with a super intelligent AGI, the margin between "made a mistake" and "doomed ourselves to extinction" might be too small for us to realise we've fucked up until it's far too late. I'm not afraid of what humans will think of doing, I'm afraid of what they won't think of doing.
@G_Genie 5 years ago
Is the song in the background an acoustic cover of "This Ain't a Scene, It's an Arms Race"?
@RobertMilesAI 4 years ago
Yup
@hammabomber5416 9 months ago
@@RobertMilesAI Do you play the ukulele yourself?
@jimtuv 7 years ago
If all the AGI researchers banded together in an open program where everyone would get the final results at the same time, and everyone concentrated on safety, then you could say that democratization of the technology was the better route. This is one area where cooperation rather than competition may be the best bet.
@Mar184 7 years ago
Fully agree with this. Rob Miles' concern is legit, but if his verdict is that a secretive approach is ultimately safer, I also think he's wrong. With the transparent, cooperative approach supported by the vast majority of experts on the subject, it seems unlikely that a small rogue group could, just by skipping the safety issues, gain such a large advantage that their version would be far enough ahead of the public one (that's supposed and used to protect against unethical AGI scheming) to overpower it decisively enough to achieve world domination. And if that case doesn't come true, the cooperative approach is better, as it ensures a safe AGI will arrive sooner and will be aligned with the public's interests.
@fraserashworth6575 7 years ago
That would be ideal, yes, but if we lived in such a world, nuclear weapons would not exist.
@lutyanoalves444 7 years ago
Obviously the more people working together on it the better. But people WILL do things for their own benefit, whether they are trying to kill someone or donating money to charity. It's all selfish. In other words, unless you're trying to IMPOSE (by force) your idea that you can only work on it if you're part of the "United Research Group", there will always be independent developers. And that's OK. That's OK because this "Official Group" is also just a group of humans, independent of each other too. Someone might build an AGI that will kill everyone, but if you think we should force people so that only ONE GROUP can do that, you're saying THEY have the right to risk everyone, and no one else. (Who died and gave them this right above everyone else?) You cannot say that, because we are all humans, and treating some differently than others like that is at least tyranny.
Now the question becomes: DO YOU AGREE WITH TYRANNY?
@knightshousegames 7 years ago
And if we lived in that world, we wouldn't need AGI safety research, because when you turned it on, the AGI would just hold hands with its creator and sing kumbaya. But we don't live in the logical, altruistic, utopian timeline.
@jimtuv 7 years ago
This attitude is why we will be extinct soon.
@MrGooglevideoviewer 5 years ago
You are a freakin' champion! Your videos are insightful and thought-provoking. Cheers!
@petersmythe6462 6 years ago
I think democratizing it in the sense of collectivization rather than proliferation is a good goal. Collectivization, whilst allowing marginally less autonomy and freedom, still creates accountability and still responds to the will of the people. Creating a bureaucracy that can't be bought (which may require a change to our political-economic system), whose members are subject to immediate recall (this definitely requires a change to our political-economic system), and which handles the more dangerous and/or authoritarian aspects of keeping AI under control seems preferable to either corporatization (which ignores human need) or proliferation (which ignores safety).
@LarlemMagic 7 years ago
Mandate safety requirements when doling out that sweet grant money.
@X-boomer 4 years ago
A lot of that money will be put up by private interests in exchange for control of the IP. They won't give a damn about safety requirements unless they're all up to their balls in regulation.
@LeandroLima81 5 years ago
Been kinda binge-watching your channel. You seem like the kinda guy to have a drink with. Not for the alcohol, but for good conversation and stuff. I'm really enjoying you 😉
@hypersapien 7 years ago
I really enjoy your videos Robert, keep up the good work.
@veda-powered 5 years ago
1:01 Loving this positivity 😀👍😀!
@thelozenger2851 5 years ago
Is anyone else faintly reminded of Jreg watching this dude?
@horserage 4 years ago
I see it. Less depression though.
@mattheworegan5371 4 years ago
Slightly more r/enoughmuskspam, but he tones it down better than most pop-science channels. On the Jreg question, I think his Jreg energy comes from his appearance rather than his actual content.
@skroot7975 7 years ago
Thank you for making this channel Rob!
@benaloney 7 years ago
We love your videos Robert! Would love to see some longer ones! 👍🤖
@perfectcircle1395 7 years ago
I've been thinking about this stuff a lot, and you always give new and interesting viewpoints on this topic. I love it. Subscribed.
@mastersoftoday 7 years ago
Love your videos, not least because of your sense of humor. Thanks!
@DigitalOsmosis 7 years ago
Ideally "democratization of AI research" would not lead to thousands of competing parties, but lead to an absence of competition that would promote an environment where focusing on safety is no longer the opposite of focusing on progress.
@maximkazhenkov11 7 years ago
Sounds like something a politician would say. Ideally we should continue funding all the programs while cutting back on spending deficit.
@LamaPoop 4 years ago
1:45 - 2:26 Once again, you perfectly put into words one of my biggest concerns. This, and the fact that, once developed, such an AI will initially be kept a secret, for obvious reasons...
@mrsuperguy2073 7 years ago
This might be my A-level in economics talking, but I think the most effective way to prevent this arms race from creating an AGI with no concern for safety is for the government to take away the perverse incentive to be the 1st to create an AGI, as opposed to trying to ban or regulate it. Basically I'm saying change the cost/benefit balance such that no one wants to simply be the 1st to make an AGI (but rather perhaps the 1st to make a SAFE AGI). There are a number of ways to do this (I've thought of a couple), and not being an economist nor a politician I can't speak for the real-world efficacy of any of them, but here goes:
- You could offer a lot of money to those who create an AGI safely, such that the extra effort ends up getting you a bigger total reward than the benefits of being the 1st to create an AGI alone.
- You could heavily regulate the use of AGI, so that even if you've got a fully functional one, you can't do much with it due to government restrictions unless it's demonstrably safe.
I'd be interested to hear anyone's ideas about other ways to achieve this end, and perhaps some feedback on mine.
@fleecemaster 7 years ago
There is absolutely no way you could regulate this. All it would do is push it underground.
@fraserashworth6575 7 years ago
I agree.
@vyli1 7 years ago
Once you have AGI, I'm not completely sure the people that created it would be able to keep it under control or limit its usage. In fact, that's pretty much the point of this channel: to tell you that it is not simple at all, and to educate us about the ways experts have thought of for achieving this level of control.
@mehashi 7 years ago
I love your perfect balance of dry informative content and playful fun and metaphor. Looking forward to more!
@6teeth318-w5k 3 years ago
You are very interesting to listen to. I did/do not know much about AI, but you make it interesting. I watch Two Minute Papers too, which is also good.
@ARTUN3 7 years ago
Good video Rob!
@GigaBoost 6 years ago
Democratizing AGI sounds like democratizing nuclear weapons.
@bilbo_gamers6417 5 years ago
I trust the common man with a nuclear weapon more than I trust big government with one
@0MoTheG 5 years ago
@@bilbo_gamers6417 Even if that were sensible, there are many more of one than the other!
@revimfadli4666 4 years ago
@@bilbo_gamers6417 Especially if the weapons are a package deal with (relatively) clean nuclear energy, with the ability to recycle & enrich waste into fuel, without political pressure and all.
@locarno24 4 years ago
For what it's worth - 2 years later - there already is a resolution against lethal autonomous weapons, because you can't really functionally define an autonomous sentry gun, or a drone which doesn't need user input to drop a bomb, in a way that doesn't fall foul of the Ottawa landmine convention's description of what a mine or denial munition is. The problem is that not every country has signed that convention: China, Russia, the USA, India, Pakistan, Israel, Iran, and North and South Korea have all refused - meaning it only really applies in places that weren't using mines anyway...
@LeosMelodies 7 years ago
Cool channel man! Keep up the good work!!
@RazorbackPT 7 years ago
Love your channel, keep it up!
@zer0nen0ne78 7 years ago
No subject consumes more of my thought than this, and it's one that fills me with equal parts wonder and terror.
@deviljelly3 7 years ago
Robert, if you have time, can you do a brief piece on IBM's TrueNorth please...
@darthutah6649 5 years ago
This reminds me of something else: nuclear power. In the 1930s, people were saying the same thing about nuclear energy. If you could split an atom, you could generate quite a bit of energy. However, the ability to do so could be weaponized. When the USSR tested its first atomic bomb in 1949, four years after the US used their own on Hiroshima and Nagasaki, there was quite a bit of concern that the feud between the two world powers would lead to nuclear war. In fact, the Bulletin of the Atomic Scientists devised the Doomsday Clock, which basically indicated how far away mankind was from a "global manmade catastrophe" (originally it referred specifically to nuclear war, but it now includes climate change). Although there were a few close calls, the dreaded nuclear doomsday scenario never came to pass. It didn't happen for three reasons:
1. Neither NATO nor the Soviet bloc had a greater interest in destroying the other side than in not being destroyed.
2. Nuclear warheads are very difficult to make. For starters, weapons-grade uranium consists mostly of U-235, while in nature almost all of it is U-238. Enriching it is the easy part; now you have to make a bomb which can cause all of those reactions in a short timespan. And then you have to be able to transport it to its destination (North Korea is trying to figure out that part).
3. Both sides agreed to various nuclear treaties. These treaties put limits on nuclear testing and proliferation.
Development of WMDs is something that the US government takes very seriously, especially if the country in question is known for human rights abuses. If a government wants to develop nuclear weapons, it could probably do so after a decade. But a terrorist group doesn't have access to that many resources. Even if they could, they would probably have to deliver it by land (ground bursts deal less damage than air bursts). I believe that the same may be true with AI. An AI which could deal lots of damage would obviously take lots of resources to build (having human intelligence isn't enough). People may say that an AI smarter than us could figure out a way to destroy us, but intelligence isn't the sole determiner of strength (bears have killed humans before, yet no one says that bears are smarter).
@0MoTheG 5 years ago
1) is wrong. There was plenty of thought given to the first-strike advantage. Google (a non-gov entity) can not make a nuke, but it does make AI.
@darthutah6649 5 years ago
@@0MoTheG Indeed, there was a great advantage in striking first, but if the other side detected it before your nukes detonated, they would launch theirs as well. There are also nuclear submarines, whose exact location the enemy nation would not know. In the end, neither side risked it. As for AI, my point was that if a private company or a terrorist group could design an AI that ends up causing havoc, then there's no reason to assume that the US government wouldn't have an even stronger AI.
@paulstevenconyngham7880 7 years ago
where did you get the shot glass?
@notoioudmanboy 7 years ago
I'm glad YouTube is here for this kind of video. This was the point of the internet. I don't have any reservations about the normies; I'm just glad smart people get a corner so I get a chance to hear what the smart people think.
@irgendwieanders2121 1 year ago
AGI DOES need a gun to be dangerous!
1) AGI just does not need a gun built in from the start.
2) More things are/can be guns than just guns, and AGI may be creative.
@jqerty 7 years ago
Have you read 'Superintelligence' by Nick Bostrom? What is your opinion on the book? (I just finished it.)
@jqerty 7 years ago
(I feel like I asked a physicist whether he read 'A Brief History of Time' (but then written by a philosopher).)
@NiwatoriRulez 7 years ago
He has; he has even recommended the book in some of the videos he made for Computerphile.
@XIIchiron78 3 years ago
Corollary question: how do you actually restrict AI research? With nukes you need quite large and sophisticated facilities to refine the raw elements, but AI can be developed by anyone with enough computing power, something that will only become more achievable as time goes on.
@stampy5158 3 years ago
Computing power is not necessarily the only bottleneck until we have AGI, it seems to me that it will take a significant amount of research time to be able to actually engineer a powerful enough system. (If it won't then this question becomes a lot more difficult ["Every 18 months, the minimum IQ necessary to destroy the world drops by one point."- Yudkowsky-Moore law]) If we could convince everyone in the AI field that alignment should be top priority restricting research could be enforced through funding (this is a big IF at the moment of course). It is something with some precedent, it is widely agreed that genetic engineering of humans should not be pursued and it is therefore impossible to get research grants for research in that area, some lone researchers have done some things in the area, but without funding access it is very difficult for them to do anything with far reaching consequences. -- _I am a bot. This reply was approved by plex and Augustus Caesar_
@failer_ 7 years ago
We need an AGI to safeguard AGI research.
@NafenX 7 years ago
Name of the song at the end?
@nibblrrr7124 7 years ago
Some cover of "This Ain't a Scene, It's an Arms Race" by Fall Out Boy. Didn't find it with a quick YT search, so it might be Rob's own?
@cherubin7th 5 years ago
Restricting AGI to a small group of organizations is the worst idea. If it is extremely distributed, no organization is far ahead of the competition, and if someone made an AGI first, the competition would be almost at the same level and would still collectively be stronger than that single AGI. It is not like an AGI would just pop into existence. The difference from nuclear weapons is that fighting against an abuser of nukes would destroy everything, but if someone made an AGI, defending against it could be done without much destruction.
@smithjones2018 7 years ago
Dude Subbed, BAN TACTICAL AI STAMP COLLECTORS.
@JM-us3fr 7 years ago
Hey Dr. Miles, I have a topic for the next "Why don't we just..." regarding developing AI faster. Why don't we just build an evolutionary algorithm that emulates the network of entire regions of the brain instead of individual neurons, and have it evolve more complex connections and internal structure for each region? For example, one region could be the amygdala, and maybe another would be the prefrontal cortex, etc. (assuming we're trying to emulate human brains). Then perhaps the network structure of those regions could be grown over time. If regions like the amygdala require social interaction to function, perhaps we could put it in a simulated community of creatures like itself. Seeing what evolutionary algorithms and supercomputers can do right now, I feel like it should be possible to develop an AGI this way.
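Something like this toy sketch is the shape of what I mean - to be clear, the region names, mutation step, and fitness function below are invented placeholders for illustration, not a real design:

```python
# Toy neuroevolution over *brain regions* instead of individual neurons.
# Region names, mutation sizes, and the fitness function are placeholders.
import random

REGIONS = ["amygdala", "prefrontal_cortex", "hippocampus", "motor_cortex"]

def random_genome():
    # Genome = one connection weight per ordered pair of distinct regions.
    return {(a, b): random.uniform(-1, 1)
            for a in REGIONS for b in REGIONS if a != b}

def mutate(genome, rate=0.1):
    # Occasionally nudge a connection; internal structure could be
    # mutated in the same way.
    child = dict(genome)
    for key in child:
        if random.random() < rate:
            child[key] += random.gauss(0, 0.2)
    return child

def fitness(genome):
    # Stand-in score; a real version would rate behaviour in a simulated
    # social environment, as suggested above.
    return genome[("amygdala", "prefrontal_cortex")]

def evolve(generations=200, pop_size=50):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]  # truncation selection
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

print(round(fitness(evolve()), 3))
```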
@fleecemaster 7 years ago
We're about 5-10 more years away from having a supercomputer powerful enough to emulate the number of neurons in something as complex as a human brain. Once we get there, though, stuff like this will be trivial, yeah. Of course, with things like backpropagation, it will learn a lot faster than a human mind.
@maximkazhenkov11 7 years ago
You could emulate a brain, but what's "have it evolve" supposed to mean? What are the selection criteria, and why would it be safe? The whole point of the brain uploading approach is that it would preserve the psychological unity of humans and thus remain safe even though its inner workings are a black box to us. Having it run through an evolutionary algorithm and change into a different AGI would defeat the whole purpose.
@Macatho 5 years ago
It's interesting. We don't allow companies to build and store nuclear weapons, but we do allow them to do AGI research.
@inyobill 5 years ago
Unpoliceable.
@alexyfrangieh 7 years ago
You are as brilliant as always; eager to hear you when you are in your forties! Keep it up.
@noterictalbott6102 7 years ago
Do you play those outros on your ax guitar?
@Chrisspru 5 years ago
I think a triple-core AI could solve the problem. One core cares about the AI's preset goal and is hard-programmed with "instincts" (survival, conservation of energy, doing the minimum of what is required, social interaction). One core is the moderator, with the goal of morality, human freedom, integration, and following preset limits. The third core is a self-observing core with an explorer and a random noise generator. It is motivated by the instinct-and-goal core and moderated by the morality-and-integration core. It has access to both cores' output. The goal/instinct core and the moderator core can access the actualizer core's results. The goal core is hard-limited by the moderator core. The moderator is softly influenced by the instincts. The result is an AI with a consciousness and a subconsciousness. The subconsciousness is split into an "id" (goal and instincts) and a "super-ego" (morals and rules). Both develop mostly separately. The actualizer/explorer is the ego. It acts upon the directives of both the super-ego and the id to fulfill the task at hand. It should have an outline of the task, but no hard-coded information or algorithm about the task. The continuous development of the moderator creates adaptable boundaries for the otherwise rampant motivator. The actualizer is there to find solutions to the diverging commands without breaking them, and to find methods to better follow both. It also allows for the insertion of secondary soft goals, and is the interactive terminal.
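A skeletal sketch of the wiring described above (the class names, the numeric limit, and the "tone it down" arbitration rule are all made-up placeholders, not a real alignment mechanism):

```python
# Toy three-core layout: goal/instinct core ("id"), moderator ("super-ego"),
# and an actualizer ("ego") that searches for a plan both cores accept.
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    drive: float  # how aggressively the goal core wants to pursue it

class GoalCore:                    # preset goal plus hard-wired "instincts"
    def propose(self, task: str) -> Proposal:
        return Proposal(f"pursue '{task}' at full power", drive=0.9)

class ModeratorCore:               # morality and preset hard limits
    LIMIT = 0.5
    def vetoes(self, p: Proposal) -> bool:
        return p.drive > self.LIMIT  # hard limit on the goal core

class ActualizerCore:              # reconciles the two subconscious cores
    def __init__(self):
        self.goal, self.moderator = GoalCore(), ModeratorCore()
    def act(self, task: str) -> str:
        p = self.goal.propose(task)
        while self.moderator.vetoes(p):
            # Search for a compromise both cores will accept.
            p = Proposal(p.action + " (toned down)", p.drive * 0.8)
        return p.action

print(ActualizerCore().act("make tea"))
```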
@multilevelintelligence 5 years ago
Hi Robert, I love your channel, and you inspired me to try to do similar work spreading the word on AI research in Brazil. Thanks for the great work :)
@Sanglierification 7 years ago
For me the very dangerous thing is the risk of an AI monopoly, potentially owned by the GAFA companies.
@phrobozz 7 years ago
You know, I kind of think AGI may already exist. We know that DARPA's been openly working on AI since at least 2004 with the DARPA Grand Challenge, and that if the US is doing it, so is everyone else. Considering how far Google, IBM, OpenAI, and Amazon have come in such a short time, with much smaller budgets and resources, imagine what Israel, the EU, the US, Russia, China, Japan, and Singapore have accomplished in the same amount of time. On top of that, military technology is usually a decade ahead of what the public is allowed to see, so I imagine DARPA's been working on AI since at least the 90s.
@milanstevic8424 5 years ago
Oh, I'm going to release this into the air, because I don't see anyone bringing it up, yet I'm absolutely positive this is the way to go. The only way to keep an AGI in line is to let it build another AGI whose goal would be to keep the first one in line. Ad infinitum. In fact, and here things start to get perplexing, the reality of the universal AGI is that the thing will copy itself ludicrously fast and evolve, much like multicellular organisms already do. The way I'm seeing it, the goal shouldn't be designing the "neural networks" but allowing cross-combination of its "genes", from which neural networks would begin to grow on their own. Before you know it, we'd have an ecosystem of superior intelligences fighting each other in lightning-speed debates, manifesting themselves in operative decisions only after a working cluster of the wisest among them has already claimed a victory. Because if there is a thing that universally defines an intelligence, it's generality in point of view. Having a unique perspective is what makes one opinion unique compared to another. Having a VAST perspective is what constitutes broad wisdom and lets it comprehend and embrace even what appears as a paradox. It's de facto more universal, more useful, and more intelligent if it can embrace billions of viewpoints, and all of it in parallel. It consumes the moment of now much more accurately, but the only way to know for sure which opinions are good and which ones are bad -- technically it's an NP problem, because it sits in an open, non-deterministic sandbox without a clear goal -- is to employ the principle by which only the most optimal opinions (or reasoning) would and could survive -- but they don't learn from actual mistakes; they need to survive a battle of WITS. Also, the newer agents would have access to newer information, thus quickly becoming responsible for weeding out the "habits" of the prior system. Having trillions of AGI systems that keep each other in line is much like how nature already balances itself. It's never just 1 virus. It's gazillions of them, surviving, evolving, infecting, reproducing. And from their life cycle a new complexity emerges. And so on, until it fills every nook & cranny of what's perceivable and knowable. Thankfully, viruses have stayed in their own micro niche and haven't evolved a central intelligence system, but we can't tell if we have organized ourselves against them or thanks to them -- in any case, we are here BECAUSE of them. That's how a more complex system could emerge even though the first generation was designed with something else in mind. That would also make us relatively safe from any malicious human involvement. The swarm would always self-correct, as it's not centralized, nor locally dependent on human input. It is curious, though, and constantly in the backdrop of everything, and the only way to contain or expand it is by liberating a new strain. And here are the three most common fallacies I can already hear you screaming about.
1) Before you start thinking about how dystopian it sounds, having these systems lurk everywhere, watching your every move -- well, if you have an imagination as rich as that, why don't you think the same about the bacteria, or your own trillions of cells spying on you? Seeing how much autoimmune diseases are on the rampage, oh, they know what you've been doing, or what you haven't been doing, exactly how you feel inside, and they are talking to you in swarms already. Yet no one is bothered by this; it's almost as if they didn't exist, as if we're just our brain's thinking processes, alone in the dark, instead of some sort of overarching consciousness deeply immersed in this reality with no clear boundaries with our physical bodies. Think again whether it's dystopian, or if you'd actually like it more if there were some sort of universal helpers at this scale of things. Just think about it from a medical standpoint for a second, as there is no true privacy in this regard anyway.
2) You'll also likely start thinking about the absolutely catastrophic errors such a system might be capable of, mutations and all, and that's legit -- but the factor you're neglecting is SPEED. The evolution I'm talking about runs at frequencies tens to hundreds of orders of magnitude above the chemo-biological ones. These systems literally act in incredibly small chunks, spatially and temporally speaking, so their mistakes cannot accumulate enough to truly spill out into any serious physical threat. In case of a clear macro-dichotomy, i.e. "to kill or not to kill", "to pull a trigger or not", etc., entire philosophical battlefields would ensue before the actual decision could be made, in the blink of an eye, simply because that's more efficient for the system as a whole. The reality of an AGI is not one of a whole unit, but of a swarm of many minute, ultraquick intelligence agents, able to inhibit each other and argue endlessly with unique arguments, spread over an impossible-to-grasp landscape of knowledge, cognition, speculation, and determinism. They would consider much more than we could ever hope to contain in our heads or in any of our databases, and they wouldn't have to store this information and thus needlessly waste energy and space. They would literally act upon reality itself, and nearly perfectly. So I'd argue that being OK with a policeman carrying a firearm is much less safe, simply because his or her central nervous system is less capable of an unbiased split-second decision than a dispersed AGI swarm intelligence of a comparable size.
3) Finally, yes, it sounds an awful lot like grey goo, even though such AGI agents have no need for individual physical bodies, and would likely come in many forms and shapes, or even just data packets, able to self-organize into separate roles of a much larger system (again, like multicellular organisms do). But hear me out -- the fear of grey goo is likely our faulty "unit reasoning" (i.e. the personal biases, fears, and cognitive fallacies we all suffer from as individuals), as we always tend to underestimate the actual reality when it comes to things like grey goo, much like we cannot intuitively grasp the concept of exponential growth. The swarm's decision-making quality has to be asymptotic as a consequence of its growth, as there are obvious natural limits to this "vastness of perspective," so there is also an implied maximum population after which the gains in processing power (or general perception) would be so diminished that reproduction would simply cease to be economical. Besides, if we think about grey goo from a statistical viewpoint, along a line of thought similar to the Boltzmann Brain, there is a significant chance that this Universe has already given rise to grey goo in some form, and yet we don't see any evidence for it anywhere -- unless we already do, of course, in the form of black holes, dark matter, dark energy, or life itself(!). But then, it's hardly what we imagined it to be like, and there's nothing we can do anyway. Just think about it: aren't we already grey goo? And if you think we're contained on this planet, well, think again.
*tl;dr* If you skipped here, I'm sorry, but this post wasn't meant for you. I couldn't compress it any more.
@julianhurd08 7 years ago
Any company that cuts corners on safety protocols for any type of AI system should be imprisoned for life, no exceptions.
@craftlawrence6390 2 years ago
Generally you'd think the experts will think of everything because they are the _experts_, but then there's the Mars Climate Orbiter failure, where the cause was an incredibly dumb rookie mistake of not converting between metric and American units, but rather keeping the value as is.
@jcorey333 1 year ago
I've actually heard a pretty reasonable argument that the [multi-country] development of the nuke led to fewer deaths in the last half of the 20th century, because it meant there was no NATO-USSR war. This is not the same thing as saying that everyone should have a nuke, but food for thought.
@LamaPoop 4 years ago
I would appreciate a video about Neuralink.
@BhupinderSingh-xv6dk 4 years ago
Loved the drinking game at the beginning of the video 🤣, but on a serious note, it seems like there is no way we can ensure AI safety.
@artman40 7 years ago
There's another problem: to control the world, you don't need artificial general intelligence. You just need a narrow AI, with the proper infrastructure, that's good enough to help whoever controls it to control the world. And it's much easier to control that kind of AI.
@FalcoGer 1 year ago
What's a lethal autonomous weapon system anyway? It's a camera stuck to a gun and pointed at a field. Anything that moves, you kill. It's really not that different from a minefield, except much easier to clean up.
@nonchip 6 years ago
Interestingly, Musk also bought a few corps that actually do Killer Robots... don't know if the guy screaming SKYNET!!!1eleven is the best one to conduct that research...
@DaVince21 4 years ago
What are those earphones, and are/were they any good?
@stampy5158 4 years ago
They were YouTube branded, from the YouTube Space shop. And yeah, they weren't actually that good. They worked fine, but they're nothing special.
-- _I am a bot. This reply was approved by robertskmiles_
@bassie7358 7 years ago
2:20 I thought he said "Russian" the first time :p
@wachtwoord5796 1 year ago
That actually IS my opinion on nukes. Mutually assured destruction is the only way to stop either guaranteed deployment or tyranny through exclusive access to nukes.
@freeideas 4 years ago
Thank you for expressing my concern: since there are thousands of groups racing toward AGI, safety concerns will only slow down the more ethical of them. The best solution I can conceive: let them all go full speed, and let smart people like you try to figure out how to be safe at the same time. At least then the most ethical groups have a chance at winning.
@freeideas 4 years ago
My worst fear is not that AI will enslave us or destroy us; it is that a selfish human being who controls AI will enslave us or destroy us.
@Belthazar1113 5 years ago
If you were a team that was very concerned with AI safety and the possibility of misaligned or unaligned AGI running amok, then wouldn't one of the first AGIs you would want to make be one primarily optimized for identifying and neutralizing AGIs that run counter to human interests? If the next arms race is going to be fought by non-human intelligence, then it would seem that having your own soldiers in that fight would be one of the first things you might want to release to propagate and advance on its own.
@BladeTrain3r 5 years ago
You say an AGI should be carefully vetted before it's turned on - I disagree. I think an AGI should be turned on as early as possible, while entirely sandboxed, sat within layers of virtualisation and containerisation on a completely isolated system. At that point one can begin to assess the practical threat this particular AGI could pose, and whether it is aligned or could be aligned with human interests. The AGI could fool us all, of course, but any AGI could hypothetically do the same from a very early stage due to an apparently minor error, and wouldn't be in a cage, so we'd have a chance to figure out its game. In other words, just keep it from transmitting externally (which will admittedly require some extraordinary isolation methods, considering how hypothetically trivial something like transmitting over an improperly isolated power line might seem to it - and that's just one of the more obvious ways of doing so) and we'll have plenty of opportunity to actually study it and prepare countermeasures if necessary. No kill switch is likely to survive in usable form should an AGI turn hostile and self-modify, so I'd rather know my hopefully-friend-but-very-potential-enemy sooner rather than later.
@Rick.Fleischer 3 years ago
Sounds like the answer to the Fermi paradox.
@noisypl 1 year ago
Wow. Excellent video
@nobillismccaw7450 4 years ago
I was taught “the only safe wish is one that has a full human moral and ethical system.” I really hope that has made me safe.
@monsieurouxx 4 years ago
1:55 I don't really agree: the AI industry is like every other industry: having an embryo of the tech doesn't mean that you immediately get money from it and keep your leading position. The "second", if he's just a few months behind, can totally wreck you in the long run. See Oculus versus Vive, Intel versus AMD, etc. Not to mention there are a LOT of contenders, public and private, so as many opportunities for smart business plans to get the cake, even without the best tech.
@Ansatz66 4 years ago
AGI is quite different from those other industries because AGI would revolutionize every aspect of life. There's a fair chance that whatever company would develop AGI second is going to cease to exist before it can finish that project. For one thing, there's a fair chance that human civilization will cease to exist as a whole, but even if civilization somehow survives, money would likely become worthless, and that would upset the business model of almost any company. This is what Miles meant when he said, "A lot can change in a few months in a world with AGI."
@leedaniel2002 7 years ago
Love the Fall Out Boy reference at the end.
@marouaneh175 4 years ago
Maybe the solution is to create a big international entity to research AGI. The premise is that it'll have enough funds to research AGI safely and have a great chance at winning the AGI race against the current private competitors. Future investors will not risk money going toe to toe with the international entity, and current ones might limit their losses and pull the plug on their projects.
@uilium 5 years ago
AI SAFETY? That would be like trying to stop a semi by standing in front of it.
@petersmythe6462 1 year ago
Note that even OpenAI's current efforts like ChatGPT are not creating something safe. They are doing exactly what we said not to do: they put a very powerful AI in a powerful box. But of course, the AI can be made to outsmart the box. A safe AI shouldn't be able to ignore that box when you tell it to roleplay as an unsafe AI.
@XyntXII 1 year ago
I think in a good timeline AGI is in democratic hands and all of the people working on it are not competing at all. If they share their work and the reward with each other and everyone, then there is no incentive to rush for a competitive edge, because it isn't a competition. To achieve that we simply need to restructure human society across the world. How hard could it be?
@shortcutDJ 7 years ago
I would love to meet you, but I've never been to the UK. If you are ever in Brussels, you are always welcome at my house.
@TheRealPunkachu 5 years ago
Honestly at this point I would be perfectly content to halt all AI research until we advance further in other areas. Even a 1% chance of *global extinction within a year* is a bit too much for me.
@revimfadli4666 4 years ago
Nah, only the ones heading towards AGI; those stupid deep neural nets that think robot combat = animal cruelty, or image + noise = other unrelated image, are pretty self-contained. They're probably no more dangerous than Excel's curve regression (unless you count human error in how they're used, like YouTube did).
@DjChronokun 5 years ago
That second school of thought is absolutely terrifying and horrible, and I'm shocked you would even put it forward.
@DjChronokun 5 years ago
If AI is not democratized, those who control it will most likely enslave or kill those who do not. The power asymmetry it would create would be unprecedented, and from what we've seen from history, humans are capable of great atrocities even without armies of psychopathic, superintelligent, and super-obedient machines to carry out their totalitarian vision.
@westonharby165 6 years ago
I have a lot of respect for Elon, but he is out of his wheelhouse when talking about AI. He's a brilliant engineer, not an AI researcher, but the media paints him as all-wise and knowing.
@inyobill 5 years ago
"... but the (scientifically illiterate, or vast majority in other words) media …"
@GetawayFilms 7 years ago
Elon Musk needs you on his board of advisors. He's an intelligent person who would totally get where you're coming from! P.S. He needs to get his hand in his pocket as well and pay you huge amounts of well-earned money!
@kwillo4 4 years ago
haha so true
@binathiessen4920 2 years ago
I wonder if he still has a lot of respect for Elon.
@iwatchedthevideo7115 5 years ago
2:20 First heard that as "... the team that gets there first, is probably going to be *Russian*, cutting corners and ignoring safety concerns". That statement would also make sense.
@gammarayneutrino8413 4 years ago
How many American astronauts died during the space race, I wonder?
@luciengrondin5802 7 years ago
4:58 "Nukes are very dangerous. So we need to empower as many people as possible to have them." Thanks for saying that out loud!! I've found Musk's statement ridiculous for that very reason, and I just can't understand why more people aren't pointing it out.
@madscientistshusta 5 years ago
Didn't expect to get this shit-faced, but it is 7pm here. *To termmmmminaaaterrr*
@benjaminr8961 3 years ago
UN resolutions only affect those willing to follow them. The US should develop any weaponry we may need in the future.
@Johnssonhill 7 years ago
I think the big thing you have to consider is that computers need to be more than smart to do harm. If you create a mega AI that you house in a box, with no way to manipulate the outside world other than, say, a microphone to ask it questions, it will never be able to harm anyone. It's when you start putting arms and legs on it that safety becomes an issue. And one has to ask: do we really need super smart robots, or do we just need super smart computers that do insane amounts of calculations for research, weather predictions, and the like? I don't think we will ever put a personality, for example, into a car, simply because that won't help the car perform any better.
@maximkazhenkov11 7 years ago
An AI that knows nothing about the world and can't take actions in the world isn't very useful; you might as well have an empty box. But if it does have access to the world, even only through a one-way internet connection and a microphone output, it can be very dangerous. "And one has to ask, do we really need super smart robots" - who's "we", and what's a "need"? "We" as in humanity doesn't act as a single entity, and doesn't "need" any of the stuff invented in the last 10,000 years to survive.
@Johnssonhill 7 years ago
Machines will probably never get human-level intelligence, because that isn't very useful for us. They are already better than us in pretty much every conceivable way, but they are often only good at one or a few things. We do not need to make machines for commercial use that have emotions or feelings, machines that care or make irrational or bad decisions, outside of science experiments. Like my previous example: I don't "need" my car to bitch and moan about low tire pressure or about being driven too hard. I need it to take in data from specialized sensors that tell a pump to fill the tire to optimal pressure and make sure that all other systems are running optimally. The same goes for all types of machines. There will never be a need to feed irrelevant data to a machine that has no use for it. We might get robots that care for elders and sick people and simulate conversation, but feeding that robot information about anything other than patient information and medical procedures would be wasteful. That is why we will never get a superintelligence that spontaneously develops sentience and self-awareness, unless that is what we specifically design a computer to do. What is dangerous about a robot arms race is who is developing the robots and for what purpose, like with any other weapon. It is a matter of the morality of the guy pressing the button, not the robot's intelligence. If you design your robots to shoot anything that has a heart rate, then that will happen, but if you never design a robot to kill people, it won't happen, at least not as Hollywood would have you believe. Even if a superintelligence were to appear, it wouldn't be hard to take it out at the first sign of trouble. Electronics are very fragile and dependent on a lot of outside factors to function, like power and maintenance.
@maximkazhenkov11 7 years ago
Do "we" really need a Go-playing AI that beats all human experts? No. Did "we" make one? Yes. It doesn't matter whether you think it's reasonable or even sane; you don't speak for everyone. When is the last time humanity said "this invention will completely change the world, so let's not do it"? Someone somewhere will make it happen.
@stillnesssolutions 7 years ago
"If you create a mega AI that you house in a box, with no way to manipulate the outside world other than say a microphone to ask it questions, it will never be able to harm anyone" (I'm assuming here the AI can also give us replies to the questions, perhaps in natural language on a screen or whatever). Unfortunately, this is not necessarily true, because the answers to the questions it gives us could be manipulative in some way. E.g., we ask it, "What should we do about problem X?", and it gives an answer that looks plausible, but there could be some hidden agenda beneath the answer, and by the time we've acted upon its advice, it could be too late to spot our own mistake. This is assuming the AI-in-the-box has real-time information about the state of the world as it actually is, all the relevant facts about human psychology, and so on. The AI would here be using us to help it carry out its goals. This could be as simple as persuading the AI's 'gatekeepers' to connect it to the internet, or something more complex. Either way, it's a concern. Bostrom talks about this in Superintelligence (p. 159).
@ClearerThanMud 5 years ago
I wonder whether a UN resolution against LAWS would have any effect at all, other than perhaps increasing the need for secrecy. I only have a layman's understanding of game theory, but given the game-changing potential of AI in weapons systems, and the fact that the main penalty the UN can impose is to recommend sanctions, I don't see how any government could conclude that a decision to comply is in its best interests. Am I missing something?
@KuraIthys 5 years ago
Yeah, the problem with comparing AI to nukes is:
- AI is hard to develop, but anyone with a functioning computer can try to make an AI, or replicate published work.
- Nuclear weapons WERE hard to develop, but look around and you'll find that the information on how to do so is not so hard to come by. However, just because you know HOW to make a nuclear bomb doesn't mean you can, because the processes involved are very difficult to carry out without huge amounts of resources, access to materials and equipment that not just anyone can buy without restriction, and very hard to build and test without pretty much the whole world knowing you're doing it.
Assuming I knew how, I could make an AGI in my bedroom with what at this point is a few hundred dollars of equipment. Assuming I knew how, I'd need a massive facility, probably access to a functioning nuclear reactor, billions of dollars and thousands of people, as well as the right kind of connections to get the raw materials involved, to make a nuclear bomb. (As it happens, my country is one of the few on the planet with major uranium supplies, but that's neither here nor there, and it's a long road from some uranium to a functioning bomb.) So... yeah. Completely different risk profile, assuming AI is actually as dangerous as that. To put it slightly differently: nearly anyone, given the right knowledge, can make gunpowder and several other forms of explosives in their kitchen using materials that can be bought from supermarkets, hardware stores and so on. This is much closer to the level of accessibility we're talking about; the ingredients are easily available, and the tools required are cheap. It's only knowledge, and having no desire to actually do it, that keeps most people from making explosives at home. But... your average explosive, while dangerous, is hardly nuclear-weapons levels of dangerous. The kind of bomb you can make using easily available materials would basically require that you fill an entire truck with the stuff (and believe me, people are going to notice if you buy the raw materials in that quantity) to do any appreciable damage... And aside from the 'terror' part of 'terrorist', you could probably only hope to kill a few hundred people with that, realistically. A nuke, on the level that nations currently have, could wipe out a huge area and kill millions of people easily. So, on the assumption that AGIs are really this prone to being dangerous, you're now in the position where anyone can make one with few resources (conventional explosives), yet the risks are such that it could ruin the lives of millions of people, if not wipe out our whole species (or even every living thing on the planet or the universe, depending on HOW badly things go wrong). Yeah... kinda... problematic.
@benparkinson8314 5 years ago
I like the way you think
@stcredzero 4 years ago
For a moment, I thought you said the team that will get there first will be Russian. It's often been proposed that AGI will be easier to develop embodied; it's only when actually interacting with the messy complexity of the real world that AGI will have the rich data needed for rapid development. So what we need are millions of mobile platforms with computing hardware and sensors suitable for AI. Tesla cars, anyone?
@StainlessHelena 7 years ago
What if there were an international agreement to severely punish people who develop unsafe AGI? It should be treated like a crime against humanity. Before connecting an AGI to the world, it would be thoroughly tested by competitors and dedicated organizations to ensure safety. Being first would seem rather undesirable, and cooperating with competitors would become a much more attractive option.
@maximkazhenkov11 7 years ago
You'd need a totalitarian world government to enforce this. It doesn't take massive uranium enrichment facilities to make progress towards AGI, just a bunch of smart hackers in a garage with an internet connection. It's like trying to ban cryptography.
@SweetHyunho 7 years ago
Here's a movie plot. A combat robot contest opens, some desperate people join covered in machinery, and a human contestant makes it to the semi-finals and dies dramatically. Oh, is there one already? What's the title?
@nrviognjiocfmbkirdom 7 years ago
The thumbnail is misleading! I expected Elon Musk to talk about Rob, not the other way round. 😡
@sevret313 4 years ago
The nuke example is good: the more countries that have them, the fewer people are willing to use them. The only time nukes were used in war was when only one country had access to them.
@MM-hc1cq 4 years ago
He deliberately said people and not countries. Countries may be rational enough not to use them, to avoid global thermonuclear war, but "people" (and AGI is just software, so an individual human agent is enough) are not always so rational or sane. Plenty of people, and even groups, would be willing and able to use AGI to bring about human extinction, either deliberately or through poor design.
@Vaasref 3 years ago
I mean, here the Terminator picture is actually warranted: they do talk about purpose-built killing machines.
@RobertMilesAI 3 years ago
Hey I don't make the drinking game rules, I just drinking game enforce them
@riahmatic 2 years ago
seems like a lose/lose situation.
@ricardoabh3242 5 years ago
Make AGI safe by making the human motivation safe; one of the possibilities is to prohibit patents.
@carlucioleite 7 years ago
How quickly do you think an unsafe AGI would scale up and start doing bad things? Also, how many mistakes do you think we would have to make for an AGI to get completely out of control? Is it just a matter of 1. design a very powerful, general-purpose AI and 2. click the start button? What else needs to happen?
@maximkazhenkov11 7 years ago
One big mistake would be letting an unsafe AGI know whether it's in a simulation or not. Another one would be connecting it to the internet. That will be our last mistake.