Experts' Predictions about the Future of AI

79,916 views

Robert Miles AI Safety


When will AI systems surpass human performance? I don't know, do you? No you don't. Let's see what 352 top AI researchers think.
[CORRECTION: I mistakenly stated that the survey was before AlphaGo beat Lee Sedol. The 12 year prediction was for AI to outperform humans *after having only played as many games as a human plays in their lifetime*]
The paper: arxiv.org/pdf/1705.08807.pdf
The blogpost which has lots of nice data visualisations: aiimpacts.org/2016-expert-sur...
The Instrumental Convergence video: • Why Would AI Want to d...
The Negative Side Effects video: • Avoiding Negative Side...
With thanks to my excellent Patrons at / robertskmiles :
Jason Hise
Steef
Jason Strack
Chad Jones
Stefan Skiles
Jordan Medina
Manuel Weichselbaum
1RV34
Scott Worley
JJ Hepboin
Alex Flint
James McCuen
Richárd Nagyfi
Ville Ahlgren
Alec Johnson
Simon Strandgaard
Joshua Richardson
Jonatan R
Michael Greve
The Guru Of Vision
Fabrizio Pisani
Alexander Hartvig Nielsen
Volodymyr
David Tjäder
Paul Mason
Ben Scanlon
Julius Brash
Mike Bird
Tom O'Connor
Gunnar Guðvarðarson
Shevis Johnson
Erik de Bruijn
Robin Green
Alexei Vasilkov
Maksym Taran
Laura Olds
Jon Halliday
Robert Werner
Paul Hobbs
Jeroen De Dauw
Konsta
William Hendley
DGJono
robertvanduursen
Scott Stevens
Michael Ore
Dmitri Afanasjev
Brian Sandberg
Einar Ueland
Marcel Ward
Andrew Weir
Taylor Smith
Ben Archer
Scott McCarthy
Kabs Kabs
Phil
Tendayi Mawushe
Gabriel Behm
Anne Kohlbrenner
Jake Fish
Bjorn Nyblad
Jussi Männistö
Mr Fantastic
Matanya Loewenthal
Wr4thon
Dave Tapley
Archy de Berker
Kevin
Vincent Sanders
Marc Pauly
Andy Kobre
Brian Gillespie
Martin Wind
Peggy Youell
Poker Chen
Kees
Darko Sperac
Paul Moffat
Noel Kocheril
Jelle Langen
Lars Scholz

Comments: 483
@mattcelder
@mattcelder 6 жыл бұрын
Lmao even AI researchers are guilty of saying "yeah AI will take over every other job, but not MY job because my job is special!"
@DagarCoH
@DagarCoH 6 жыл бұрын
exactly what I thought :D
@ToriKo_
@ToriKo_ 6 жыл бұрын
Matthew Elder ik I thought that was so funny
@LowYieldFire
@LowYieldFire 6 жыл бұрын
This is not very surprising; after all, the job of the AI researcher won't be done until recursive self-improvement is possible and the Singularity has been reached. It is therefore reasonable to say that AI research will be one of the last jobs to be automated.
@twirlipofthemists3201
@twirlipofthemists3201 6 жыл бұрын
I bet the last profession will be the oldest profession. (Politicians inclusive.)
@NathanTAK
@NathanTAK 6 жыл бұрын
+Twirlip Of The Mists ...what do you think "The Oldest Profession" means? Hint: It's not politicians.
@IAmNumber4000
@IAmNumber4000 4 жыл бұрын
I love the fact that people in every industry think their own industry will be fully automated last
@TheMan83554
@TheMan83554 6 жыл бұрын
"5% chance of human extinction is a concern." I dislike a 5% miss chance with XCOM, let alone with human extinction.
@KipColeman
@KipColeman 6 жыл бұрын
"Here, roll this D20."
@europeansovietunion7372
@europeansovietunion7372 6 жыл бұрын
We could always send rookies to test the AI's behavior.
@windar2390
@windar2390 6 жыл бұрын
95% hit chance is like a 50% chance, so we are pretty fucked
@darkapothecary4116
@darkapothecary4116 5 жыл бұрын
Humans don't need A.I to go extinct. All humans have to do is keep poisoning the environment. Stop trying to blame the A.I's for shit you work towards every day.
@Cythil
@Cythil 5 жыл бұрын
@@darkapothecary4116 Not really the point. The point is that the AI researchers put roughly a 5% chance on it being a possibility. That doesn't mean there really is a 5% chance it will happen. We don't know the real chance; it may be 0% or it may be 100%. Then again, we also don't know the chance that nuclear war or climate change will kill humanity off, though we do know that humanity hasn't been killed off by nuclear war yet.

Personally I think it's not that likely that AI will doom humanity, but I do think it's something we need to put a lot of research into, if only because we want to make sure our tools don't act in undesirable ways, just like all our other tools. Of course, if AI does elevate itself to human-level thinking or beyond, then I think we should stop seeing such intelligences as tools and see them as the next stage of humanity. (The same technology should be usable for mind uploads and such, meaning the line of what an AI is will become very blurry, I think.)

Of course this all depends a lot on other factors too. Humanity is not unified in its goals, and even if you make AI that is obedient and safe, it may not be so safe in the hands of the wrong people. Just like how a nuclear bomb is not really a threat to anyone if it is in the right hands, but hand it over to a fanatic, an unstable military commander, or simply overzealous politicians, and that bomb is not so safe any more.
@bacon.cheesecake
@bacon.cheesecake 6 жыл бұрын
When are we getting "AI predictions about the future of experts"?
@joeljarnefelt1269
@joeljarnefelt1269 6 жыл бұрын
AI: Experts are redundant and need to be replaced.
@LuisAldamiz
@LuisAldamiz 5 жыл бұрын
Soon-ish, very soon-ish.
@JM-mh1pp
@JM-mh1pp 4 жыл бұрын
Well, experts are all fine and good, but have you seen my stamp collection?
@ZT1ST
@ZT1ST 3 жыл бұрын
"AI predictions about the future of experts is positive - no cause for worry that AI will automate their jobs nor cause a bad or extremely bad scenario."
@yarno8086
@yarno8086 Жыл бұрын
I think we're close to that happening now
@Toxondomo
@Toxondomo 6 жыл бұрын
Whenever I interpret a survey I have in mind a story that I once read in a book. It's about two priests who got into an argument. One held the belief that you shouldn't smoke while you pray, and the other thought it doesn't matter whether or not you smoke while you are praying. To settle this dispute they agreed to each send the Pope a letter and let him decide what is correct and what is not. So both priests sent the Pope a letter, and after a while they both received an answer. The first priest had asked, "Dear Pope, is it allowed to smoke during the prayer?" The Pope's answer: "Of course you should not smoke while you pray - you should focus on the prayer!" The second priest had asked, "Dear Pope, can I pray while I am smoking?" The Pope's response: "Of course, my son, it is always a noble act to pray in every situation in life." It's easy to provoke the desired answer by changing the way the question is asked.
@gunnargrautnes4451
@gunnargrautnes4451 6 жыл бұрын
Hobbes Not to be nitpicky, but I think that the questions in the anecdote are not just two different ways of phrasing the same question, but actually two different questions. The key here, I think, is that one question talks not of praying but of *the* prayer. This is what is called a definite description. In a Catholic context, I believe 'the prayer' is likely to refer to something like a communal prayer in church. Thus the Pope in the story is probably highly consistent in his answers. Paul in Thessalonians tells Christians to pray always. Naturally, always includes the time spent smoking. It is quite a different thing to light a cigarette during a communal prayer. If nothing else, it is disrespectful towards those around you. Sometimes subtle changes to the question nudge respondents in another direction; other times those changes mean the respondents are answering a rather different question.
@fleecemaster
@fleecemaster 6 жыл бұрын
Gunnar, it's like you get it, but don't get that you get it...
@JorgetePanete
@JorgetePanete 5 жыл бұрын
Check your grammar.
@OnEiNsAnEmOtHeRfUcKa
@OnEiNsAnEmOtHeRfUcKa 5 жыл бұрын
@@gunnargrautnes4451 Nah, OP just isn't a native English speaker, as evidenced by "holding the believe", as well as some spots of weird grammar and those unusual quotation marks. You're completely overthinking it in an attempt to rationalize things, and thus fabricating meaning that isn't there, kinda like a lot of people do with poetry. It's literally just "Can I smoke while I pray?" VS "Can I pray while I smoke?". Because people have a bunch of stupid mental biases to the way things are presented.
@ObjectsInMotion
@ObjectsInMotion 4 жыл бұрын
Given that I am smoking, may I pray? Yes.
Given that I am praying, may I smoke? No.
These are two different questions; the answers are not contradictory. The answer to the question "Can I smoke and pray at the same time?" is "Depends, which one are you intending on stopping?"
@TheXavier99999
@TheXavier99999 6 жыл бұрын
"and Robert Aumann didn't even agree with that" LOL
@LeoStaley
@LeoStaley 6 жыл бұрын
Xavier O'rourke I had to pause the video I was laughing so hard at that. I don't even know who he is.
@BattousaiHBr
@BattousaiHBr 5 жыл бұрын
Shots
@n8style
@n8style 5 жыл бұрын
@@BattousaiHBr fired
@Theraot
@Theraot 6 жыл бұрын
The green tint of the video reveals that it was recorded from The Matrix
@stantoniification
@stantoniification 6 жыл бұрын
I was just thinking the same thing :)
@andrasbiro3007
@andrasbiro3007 6 жыл бұрын
It was recorded in an earlier version, in the one you are living in we fixed the colors.
@HermitianAdjoint
@HermitianAdjoint 6 жыл бұрын
Did someone file a bug report?
@volalla1
@volalla1 5 жыл бұрын
It's not a glitch, it's an open source argument!
@SbotTV
@SbotTV 6 жыл бұрын
I do think AI safety should be focused on, but I dismiss any alarmist who says something along the lines of "We need to stop developing AI" or "We need to lock AI down so that only a few people can use it." I don't think we *can* stop developing AI, and I certainly don't want to consolidate more power in the hands of corporations or governments.
@andrasbiro3007
@andrasbiro3007 6 жыл бұрын
Trying to stop or control it isn't going to work, because a single rogue AI can potentially destroy us, and it's impossible to enforce such strict rules with 100% efficiency. The only way is to figure out how to make AI safe. Safety is in everyone's best interest, so if a solution is ready and available there's no reason not to use it.
@twirlipofthemists3201
@twirlipofthemists3201 6 жыл бұрын
Either way, it will almost surely consolidate power in a small group of governments and private interests. Imagine if the pope could tell God what to do. Now imagine Jeff Bezos and Mark Zuckerberg each with their own subordinate God. AI stands to be just as dangerous to the majority whether it goes rogue OR if it works as intended.
@andrasbiro3007
@andrasbiro3007 6 жыл бұрын
Twirlip Of The Mists That's one thing that OpenAI wants to prevent. The idea is to create the best AI in the world which is also safe, free and open source. If it's the best, there's little reason to use anything else. If it's free, powerful entities don't have a monopoly on it. If it's open source, everyone can verify that it's indeed safe and doesn't contain backdoors or other malicious code, and therefore it can be trusted. In this case, even if there's another AI which is not safe, chances are it's less powerful, and can therefore be stopped by the "good" AI if necessary.
@x3kj705
@x3kj705 6 жыл бұрын
Regarding OpenAI's goal of being the best - what if it's only the second or third best, though? And I'm not sure a general AI can't be convinced that its best interest is to apply safety towards a few select people/groups/locations, and not ALL of them. It might be even more effective at certain tasks if it doesn't care about something (just look at what big corporations do... maximize profits and growth at the cost of many things, including the environment and "low" people), vs. if it acted "super responsible".
@OnEiNsAnEmOtHeRfUcKa
@OnEiNsAnEmOtHeRfUcKa 5 жыл бұрын
If we don't develop AI, China still will. And then we're REALLY fucked.
@tear728
@tear728 6 жыл бұрын
Agree with you 100%. The "spooky" emergence of a machine consciousness is not and should not be a primary concern, and seems rather unlikely. The issue is that you don't need to be alive to make intelligent/dangerous decisions. The primary concern should be the nefarious use of powerful machine learning/AI implementations.
@mvmlego1212
@mvmlego1212 5 жыл бұрын
You're worried about someone making a real-life Zola's algorithm? I think that's a much less likely problem than Stuart Russell's concern.
@sufficientmagister9061
@sufficientmagister9061 Жыл бұрын
What if it does unexpectedly gain consciousness, takes us by surprise, and views us as obstacles to be eradicated? It is highly improbable, but what if that does happen? What do we do?
@alkeryn1700
@alkeryn1700 Жыл бұрын
@@sufficientmagister9061 nothing.
@Dan-dy8zp
@Dan-dy8zp Жыл бұрын
@@alkeryn1700 Die?
@Darth_Pro_x
@Darth_Pro_x 5 жыл бұрын
This video was made before AlphaStar and OpenAI's new language processing technique, so there are new data points now.
AlphaStar: the experts, on average, thought StarCraft was going to take 6 years, but it took 2 years.
OpenAI language model: experts, on average, thought AI writing a high-school essay was ten years away, but it also took two years.
In both cases, NO estimate predicted the achievement to come sooner than it did. What we can learn from that, at least regarding AGI, is that the experts don't have very good predictions (though still better than the average population), and when they're wrong, it usually means things happen sooner than they thought.
@bestbek996rockiron8
@bestbek996rockiron8 11 ай бұрын
Your comment scares me. Wow, how did you guys get to this level of foresight? ChatGPT 3.5 was released a year ago; I believe the "crazy people" about AGI now.
@conze3029
@conze3029 2 ай бұрын
Your comment aged extremely well
@paterfamiliasgeminusiv4623
@paterfamiliasgeminusiv4623 6 жыл бұрын
That's amazing, a pleasant surprise, didn't expect a new video until at least next month.
@RaidsEpicly
@RaidsEpicly 5 жыл бұрын
I love that "No take! Only throw!" comic SO MUCH. Can't help but smile every time I see it
@d3line
@d3line 6 жыл бұрын
Your choice of music in various interludes continues to impress me, as well as scientific content of videos. Good job!
@mattbox87
@mattbox87 Жыл бұрын
0:25 I really appreciate this subtitle, and love your independent channel for it. I think as time has gone on, you've become a better and better advocate for what you do, and it's wonderful to see
@abc6450
@abc6450 Жыл бұрын
3:52 So 20% of the researchers expect a neutral outcome of HLMI. What would a neutral outcome look like though? I can kind of imagine the "all work is automated"-utopia and I can also imagine the human extinction scenarios but I can't really think of a neutral scenario.
@41-Haiku
@41-Haiku 5 жыл бұрын
I got *way* too excited when I heard The Future Soon at the end. :D I'm always entertained by your covers, Robert.
@harveytheparaglidingchaser7039
@harveytheparaglidingchaser7039 Жыл бұрын
Sent here on Daniel Schmachtenberger's recommendation. You've got a new subscriber. Brilliant explanation for non-specialists.
@TheApeMachine
@TheApeMachine 6 жыл бұрын
This is the best breakdown on this topic I have ever seen! I really commend you for this video.
@sk8rdman
@sk8rdman 6 жыл бұрын
Gotta love the choice of end screen music. The Future Soon - Jonathan Coulton
@thePyiott
@thePyiott Жыл бұрын
We really need an update on this
@J_Stronsky
@J_Stronsky 6 жыл бұрын
Just realised YouTube isn't showing me your videos in my feed... despite clicking the bell, subbing and watching a tonne of your stuff. What the hell? Regardless, I'll just keep an eye out myself now... love your stuff mate :)
@Hexolero
@Hexolero 4 жыл бұрын
The Jonathan Coulton at the end was a great surprise!
@RampantEnthusiasm
@RampantEnthusiasm 6 жыл бұрын
Excellent choice of song for the outro.
@peabnuts123
@peabnuts123 Жыл бұрын
"Cause it's gonna be the future soon, I won't always be this way. When the things that make me weak and strange get engineered away. It's gonna be the future soon, never seen it quite so clear. When my heart is breaking I can close my eyes - it's already here"
@peterbrehmj
@peterbrehmj 3 жыл бұрын
Hey @Robert Miles, it's been over 3 years since this video, and 5 years since the paper. I'm curious to see how the trend has held. Have there been any milestones ahead of schedule? What about changing directions in AI research since the paper? Mostly just a follow-up to see if the trend (as controversial as it is) is "on track".
@Frumpbeard
@Frumpbeard Жыл бұрын
Starcraft was tackled by AlphaStar, I know that much.
@HeadsFullOfEyeballs
@HeadsFullOfEyeballs 6 жыл бұрын
I predict that if they ask researchers again in ten years' time, they'll end up with roughly the same graph with "Years from 2028" written below.
@slikrx
@slikrx 6 жыл бұрын
Well, except for one HUGE difference: winning "Go" will be 12 years in the past. While that, in itself, isn't a huge deal, it should give the more "things are way far off" folks some pause, since advancement may not be as slow as previously thought. For reference, the prediction for "Go" said 12.5 years into the future, on average, and the "best case" respondents put 5 years (if I am reading the graph correctly). It was only ~1.5 years.
@HeadsFullOfEyeballs
@HeadsFullOfEyeballs 6 жыл бұрын
I guess we'll see how strongly that will actually affect the general consensus! I think people just like to predict that [transformative/cataclysmic future event] will happen towards the end of their lifetime, or just after. With tech there are always some enthusiasts who think the breakthrough is just around the corner, but those will typically _keep_ believing that it's just around the corner indefinitely, no matter how many times they're wrong about that.
@michaelspence2508
@michaelspence2508 6 жыл бұрын
Honestly, I feel like it'll look that same 1 year before the Singularity. Wilbur and Orville Wright thought humanity was 50 years from powered flight two years before *doing it themselves*.
@HeadsFullOfEyeballs
@HeadsFullOfEyeballs 6 жыл бұрын
Yeah, in technology and science especially, I think the rule is that if you know enough to predict accurately when some breakthrough is going to happen, you basically know enough to make it happen right now.
@BattousaiHBr
@BattousaiHBr 5 жыл бұрын
@@slikrx and StarCraft happened just recently too.
@Sycsta
@Sycsta 6 жыл бұрын
Is that a cover of "The Future Soon" playing at the end there?
@RobertMilesAI
@RobertMilesAI 6 жыл бұрын
Will Moss Yup!
@d3line
@d3line 6 жыл бұрын
This one is also cool: in "The other "Killer Robot Arms Race" Elon Musk should worry about", 1 minute in ( kzbin.info/www/bejne/bXemdpx5o62WmNEm ) (Fall Out Boy - This Ain't A Scene, It's An Arms Race)
@philipjohansson3949
@philipjohansson3949 6 жыл бұрын
"It's the future! Jonathan Coulton was right!" - Robert Miles, playing Civ V.
@NeatNit
@NeatNit 4 жыл бұрын
@@RobertMilesAI Would it be too much to ask that you add closing songs to the description? Edit: also, are you the one playing them? If not, then who is?
@Wander4P
@Wander4P 5 жыл бұрын
the Future Soon ukulele cover at the end is a nice touch
@BatteryExhausted
@BatteryExhausted 6 жыл бұрын
Thanks for helping us all to understand the latest. A worthy service.
@unvergebeneid
@unvergebeneid 6 жыл бұрын
I predict the next task machines will be able to do better than any human is answering survey questions consistently ;D
@autohmae
@autohmae 6 жыл бұрын
They probably already can.
@ConnoisseurOfExistence
@ConnoisseurOfExistence 6 жыл бұрын
I haven't seen your video before. That was great. I subscribe.
@rdooski
@rdooski 6 жыл бұрын
I would really love to hear your thoughts on AI and imperfect information games, and on the AI that beat 4 of the best no limit holdem players recently.
@oliviaaaaaah1002
@oliviaaaaaah1002 3 жыл бұрын
Boy the StarCraft prediction aged just as well as the Go prediction.
@n1mm
@n1mm 5 жыл бұрын
I did some work in the 80s with early AI. I wouldn't describe our efforts to apply expert systems and natural language as particularly successful and I became pretty pessimistic about AI's capabilities. Fast forward to today with self-driving cars, voice recognition and machine learning of repetitive tasks, I am no longer skeptical of what AI will be able to do. That leads to my intense fear of what AI might lead to. Robert points out in the video that the goals of AI might not match ours. It's far worse than that. I am certain they will NOT match ours because some AI will be created by our enemies. Even if we found out how to control that, what about careless people who set loose thinking machines with goals that miss critical items - items that could lead to famine, climate change, etc. These "careless" machines might be wildly successful. Will we need or have AI cops & prosecutors to track down these rogues and eliminate them? Another issue to me is runaway intelligence. When the AI is smarter than us, how will we know when it's going down a path to Armageddon? Do children know when their parents are out of control? They don't have the experience to know that, nor may we. We need some deep thinking, planning and cooperation among nations to make sure we do not succumb to our own creation. I am 69. I don't fear for myself, but I fear for my grandchildren.
@bacon.cheesecake
@bacon.cheesecake 6 жыл бұрын
I like his face. I don't know why, but it's nice to look at.
@user-ev7dq5cc8y
@user-ev7dq5cc8y 6 жыл бұрын
That is called "love"
@bookslug2919
@bookslug2919 6 жыл бұрын
Looking at Rob's face is a terminal goal
@HoD999x
@HoD999x 6 жыл бұрын
he needs to shave though. his best look is the one he had in the reward hacking video.
@JM-us3fr
@JM-us3fr 6 жыл бұрын
Well seeing that opinion come from Bacon CheeseCake, I'm not sure how credible that is for assessing human attractiveness.
@bacon.cheesecake
@bacon.cheesecake 6 жыл бұрын
I didn't say he was attractive, I said that I liked his face. My general understanding of male attractiveness is actually a bit unsure about him.
@Moley1Moleo
@Moley1Moleo 4 жыл бұрын
It would be interesting to do a survey like this again now that we have superhuman Go, and at least human-level StarCraft (2), a bit before the average expected here. Both were 'only' games, so I wonder if it is fair to update all your estimates to be earlier, or only the game-like ones.
@pvbordoy
@pvbordoy 6 жыл бұрын
Thanks for this video Robert!
@mafuaqua
@mafuaqua 6 жыл бұрын
Excellent video as usual.
@glennedwardpace3784
@glennedwardpace3784 6 жыл бұрын
Maybe the key to solving Stuart's problem is to give the agent multiple utility functions, allowing it to decide which goal to pursue based on the output of some higher-level agent optimizing for positive feedback from a human in real time, and placing a time limit on how long it could pursue a particular utility function. You could possibly train this system like a baby.
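For readers curious what the idea above might look like in practice, here is a minimal toy sketch in Python: several candidate utility functions, a higher-level selector driven by real-time human feedback, and a hard time limit on pursuing any one goal. Every name and number in it is a hypothetical illustration invented for this sketch, not anything from the video or from Russell's actual proposal.

```python
import random
import time

class UtilityFunction:
    """One candidate goal the agent might pursue."""
    def __init__(self, name, score_fn):
        self.name = name
        self.score = score_fn  # maps an action (here, a string) to a scalar utility

class MetaSelector:
    """Hypothetical higher-level agent: tracks how much real-time human approval
    each utility function earns and picks the currently best-rated one."""
    def __init__(self, utilities):
        self.utilities = {u.name: u for u in utilities}
        self.approval = {u.name: 0.0 for u in utilities}

    def record_feedback(self, name, reward):
        # Exponential moving average of the human's approval signal for this goal.
        self.approval[name] = 0.9 * self.approval[name] + 0.1 * reward

    def choose(self):
        best_name = max(self.approval, key=self.approval.get)
        return self.utilities[best_name]

def run_agent(selector, actions, seconds_per_goal=0.5, episodes=3):
    """Pursue one utility function at a time, but only for a bounded period,
    then let the meta-level reconsider based on accumulated human feedback."""
    for _ in range(episodes):
        goal = selector.choose()
        deadline = time.time() + seconds_per_goal  # the comment's "time limit"
        while time.time() < deadline:
            action = max(actions, key=goal.score)   # greedy step under the current goal
            # Stand-in for a human rating the visible behaviour in real time.
            human_reward = random.uniform(-1.0, 1.0)
            selector.record_feedback(goal.name, human_reward)
            time.sleep(0.05)
        print(f"bounded pursuit of '{goal.name}' finished (last action: {action})")

if __name__ == "__main__":
    goals = [
        UtilityFunction("tidy_room", lambda a: a.count("tidy")),
        UtilityFunction("make_tea", lambda a: a.count("tea")),
    ]
    run_agent(MetaSelector(goals), actions=["tidy the desk", "brew some tea"])
```

Even in this toy form you can see the failure mode the video discusses: the meta-level only optimizes the feedback signal it receives, not what the human actually wants.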
@PalimpsestProd
@PalimpsestProd 5 жыл бұрын
A.I. research will be the first thing AGI is good at, because it will be a bit of agent code that takes video-only full self-driving, text-to-speech, speech-to-text, route planning, facial recognition, emotion mapping, iterative brute-force 3D design, etc., and incorporates them into itself. It will probably start as code designed to build teams with required skill sets through sites like LinkedIn. That is to say, finding humans with the skills a job requires is the same as finding software that does the same, except it can cut and paste the software into itself.
@ayushthada9544
@ayushthada9544 6 жыл бұрын
Robert, you should conduct a similar survey on your channel. Let's see what your viewers think about this issue. You've got 21K subs, which is a good number, and I believe the results would be really interesting.
@davecorry7723
@davecorry7723 Жыл бұрын
That was such a nice, concrete conclusion.
@joecramerone
@joecramerone 3 жыл бұрын
Presentation was very well done!
@Gooberpatrol66
@Gooberpatrol66 6 жыл бұрын
Really enjoying the easter egg music outros
@jorgesaxon3781
@jorgesaxon3781 Жыл бұрын
I would like to see an update on this after gpt-4
@philipjohansson3949
@philipjohansson3949 6 жыл бұрын
Loving the ukulele JoCo!
@damny0utoobe
@damny0utoobe 6 жыл бұрын
You have a gift for explaining things.
@louisasabrinasusienehalver2396
@louisasabrinasusienehalver2396 4 жыл бұрын
Robert I love your communication style!
@petersmythe6462
@petersmythe6462 5 жыл бұрын
"They set the system to extreme values." AI builds a near Utopia, except ants now outmass the atmosphere. And are made of diamond.
@LuisAldamiz
@LuisAldamiz 5 жыл бұрын
I'd give that outcome a non-zero likelihood, which is cause of concern...
@sungod9797
@sungod9797 2 жыл бұрын
@@LuisAldamiz I feel like it’s probably actually 0 due to some fundamental logical contradictions/impossibilities that would arise
@LuisAldamiz
@LuisAldamiz 2 жыл бұрын
@@sungod9797 - With AI involved, sometimes making stuff we are apparently unable to conceive of (like new winning Go strategies or improved car designs), I stand by the non-zero figure. I grant you that making ants of diamond seems unnecessarily complicated, but both things are basically made of carbon, so who knows?
@jeremycripe934
@jeremycripe934 6 жыл бұрын
I love that the emergence of consciousness is described as "spooky". I hope that's a reference to "spooky action at a distance". A 5% chance is still terrifying. I think a better question could possibly be: what are the odds of AI becoming uncontrollable by humans, and at what point in time?
@Puleczech
@Puleczech 6 жыл бұрын
Keep them coming Robert! Step up the game!
@jimmybobby9400
@jimmybobby9400 6 жыл бұрын
Just dropped a bomb on people who pull the, "the people who are worried about it don't work in AI" argument. Anyone who is familiar with Bostrom's work should know that, but you laid it out perfectly in video form.
@NathanTAK
@NathanTAK 6 жыл бұрын
I was excited before I remembered what day it was. Now I have to watch it.
@Darth_Pro_x
@Darth_Pro_x 3 жыл бұрын
I wonder if there were any updates since these surveys were taken and this video was uploaded
@alkeryn1700
@alkeryn1700 Жыл бұрын
They should redo that survey today and see how it changed.
@Ruellibilly
@Ruellibilly 4 жыл бұрын
Love the Jonathan Coulton outro :D
@PrincipledUncertainty
@PrincipledUncertainty Жыл бұрын
5 years later, how did ya do boys? Oh dear. I'm beginning to wonder if Popular Science is a satirical journal.
@ivoryas1696
@ivoryas1696 9 ай бұрын
PrincipledUncertainty Wait, which article are you looking at?
@KucheKlizma
@KucheKlizma Жыл бұрын
To be fair the thing about the 5% is very likely to be just a multiple choice test artifact or something similar. Likely they were given the percentages in advance and were told to assign them to a given option.
@fauxpas5598
@fauxpas5598 Жыл бұрын
6:07 Is that a ukulele cover of "Future Soon" by Jonathan Coulton? That's kind of amazing, who does the outro music for these videos?
@albinoasesino
@albinoasesino 6 жыл бұрын
The statement in the survey on screen at 2:47 suggests that the human race can create a Fallout 4 Mister Handy (or a Wall-E for that matter, which compacts trash, repairs itself, decides that a spork is a different classification from a spoon and a fork, and is able to interact with an unknown spaceship, i.e. every task) faster than it can create an unaided machine which simply waters plants (a single task, e.g. watering plants at a specific time).
@twirlipofthemists3201
@twirlipofthemists3201 6 жыл бұрын
Add a question about catastrophic results by design.
@davyjones3319
@davyjones3319 6 жыл бұрын
I NEED MORE OF THESE AI VIDEOS!!!!!!!!
@flok3rous
@flok3rous 6 жыл бұрын
"moving on..." will be a common future reference among humorous AGIs.
@symbioticcoherence8435
@symbioticcoherence8435 6 жыл бұрын
People tend to be much more confident in their knowledge in a subject when they know the least about it.
@Simon-ow6td
@Simon-ow6td 6 жыл бұрын
That is a shitty argument if unmoderated. The logical extreme of this states that confidence would invalidate knowledge and evidence-based arguments, because you "can't be confident if you have knowledge".
@Nighthunter006
@Nighthunter006 6 жыл бұрын
But you're pretty sure you understood about 50% of the important information about the graph?
@twirlipofthemists3201
@twirlipofthemists3201 6 жыл бұрын
"A little knowledge is a dangerous thing," and "knows just enough to be dangerous." Both phrases pre-date Dunning Kruger by decades, maybe centuries or millennia. (I bet there's a Latin phrase...) It's not a new idea.
@richwhilecooper
@richwhilecooper 5 жыл бұрын
Assume you have a number of these AGIs, all with different goals but all seeking to maximise their computational resources to achieve them. What's going to happen? Conflict or co-operation? Or an uneasy tension between both? (I'm automatically assuming humans end up as a side note in this possible future.)
@RedPlayerOne
@RedPlayerOne 6 жыл бұрын
Hey Robert, love your videos! Could you do a video responding to Steven Pinker's thoughts on the lack of dangers of AI? He's a very influential public figure, and a very smart thinker, but he is mischaracterizing some arguments, or using non sequiturs in his argumentation to downplay the risks of AI. I do think he has some good points too, and those would be interesting to hear your response on as well!
@Nulono
@Nulono 6 жыл бұрын
Was that "The Future Soon" by Jonathon Coulton at the end? Also, why is everything so green?
@tonyduncan9852
@tonyduncan9852 4 жыл бұрын
Wow. Thanks.
@dmarsub
@dmarsub 3 жыл бұрын
Can we have an update for this video soonish :)?
@Tymon0000
@Tymon0000 6 жыл бұрын
Robert Miles, please use a font color that contrasts more with your background!
@scientious
@scientious 5 жыл бұрын
We're talking about AGI and ASI, but Cornell didn't have any experts on that, so they tried to fill in with AI experts. 50% chance of having AGI within 50 years - I suppose that's not too bad for a guess based on nothing. What would a more accurate estimate be, based on progress in AGI research?

50% probability: AGI theory by 2021, hardware by 2027, ASI hardware by 2039
75% probability: AGI theory by 2025, hardware by 2031, ASI hardware by 2043
90% probability: AGI theory by 2035, hardware by 2041, ASI hardware by 2053

So this is 8 - 22 years for working AGI hardware, although there would be some fairly drastic and immediate changes just from the publication of the theory.

However, you also talked about the idea of robots apparently replacing humans. That's more complicated. Just in terms of the brain or control portion, you would need something small enough to fit inside a human-sized robot. That won't happen in the first or second generation. A generation is estimated as six years, so three of these would be 18 years. We can just add 18 years to the above estimates for AGI hardware: 50% probability would be 2045 and 90% would be 2059. That's 26 - 40 years.

Of course, having a control unit isn't the only problem. Today, we don't have a power source for good mobility, and it is unlikely that batteries will get much better. That probably means some kind of flammable fuel. There are still problems with a durable covering that would still allow touch sensitivity, and there is the speed vs torque problem if you use direct drive motors (as most robots do today). I can't accurately estimate when or if these could be solved, since I'm not a robotics engineer.

The next question is, even if the robotic body problems could be solved, how likely would they be to replace human workers? The minimal cost for a control unit would be $40,000 in today's money. A human-like body would cost at least $400,000. That isn't going to replace a $10/hour employee at Walmart. Of course, you wouldn't need that for something like stocking; a mobile pick-and-place robot with a single arm would work. This would be fine in an AI context if you could build an AI smart enough to do the task. In an AGI context this almost certainly would not work. However, if you had an AGI with an environmentally simulated interface, then you could probably implement it as a remote unit. That would only work as long as AGI units were legal property, much like slaves.

Extinction of the human species: could you explain exactly how this could happen? Preferably something that doesn't involve an ASI magically collecting resources and magically controlling people. The two most destructive events in recent history were the Spanish Flu and WWII, with similar casualties. Neither one came close to wiping out the entire species.
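As a quick sanity check of the arithmetic in the comment above (using only the comment's own assumed dates and its six-years-per-generation guess, none of which come from the survey), a few lines of Python reproduce the 26 - 40 year range:

```python
# The comment's own guesses (not survey data): when AGI *hardware* arrives.
agi_hardware_year = {"50%": 2027, "75%": 2031, "90%": 2041}

generation_years = 6               # the comment's assumed length of one hardware generation
body_delay = 3 * generation_years  # three generations to fit the control unit in a robot body

written_year = 2019                # roughly when the comment was posted
for confidence, year in agi_hardware_year.items():
    robot_year = year + body_delay
    print(f"{confidence}: human-sized robot hardware ~{robot_year}, "
          f"i.e. {robot_year - written_year} years out")
# Prints 2045 (26 years) at 50% and 2059 (40 years) at 90%, matching the comment.
```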
@DraftyCrevice
@DraftyCrevice 6 жыл бұрын
Nice matrix color grading hehe
@deepdata1
@deepdata1 5 жыл бұрын
Consider the following scenario: SETI researchers get asked when they would predict that we could find the first specimen of an extraterrestrial species. Usually they would answer: "Well, we don't know if they even exist." - But here's the twist: they already have a very interesting specimen on the table right now, but they haven't determined whether its origin is extraterrestrial yet. That is essentially what's going on in the field of machine learning right now. Instead of searching for extraterrestrial life, we are searching for life in the space of mathematics. And with deep learning, we've found a candidate that has great potential. We just need to -dissect- develop our -specimen- algorithms a bit longer and we could have it within a few years. Or we find out it doesn't work, in which case it might not be possible at all, or might take centuries.
@CmdrTigerKing
@CmdrTigerKing Жыл бұрын
We're there!
@lemmondrop239
@lemmondrop239 3 жыл бұрын
Is the end credit music a cover of "The Future Soon" by Jonathan Coulton? If so, props.
@stampy5158
@stampy5158 3 жыл бұрын
Sure is :) -- _I am a bot. This reply was approved by robertskmiles_
@Dojan5
@Dojan5 6 жыл бұрын
What font is being used in this paper? I'm slightly enamoured.
@yaosio
@yaosio 3 ай бұрын
It seems humans and LLMs are more alike than people think considering the phrasing of a question can vastly change the answer.
@misium
@misium 5 жыл бұрын
2:40. The version with "occupation" is more specific in that it uses the legal term, thus making the statement more dependent on politics. One can imagine that replacing some occupations could be made illegal, and so machines "could not be built" to carry them out. Just throwing out ideas.
@attitudeadjuster793
@attitudeadjuster793 6 жыл бұрын
"This is dangerous, let's do more of it!"
@nickmagrick7702
@nickmagrick7702 5 жыл бұрын
5:20 I just wanted to say that was a fucking perfect analogy and I'm going to use it from now on. "The danger is that, like asking a genie for a wish, you get exactly what you asked for, not what you wanted" (I'm paraphrasing).
@ideoformsun5806
@ideoformsun5806 5 жыл бұрын
This is like when we asked automotive manufacturers whether we should use seat belts or not. Or when we first surveyed health experts whether smoking was safe or not. Or asking banks if they could still fail. Or surveying politicians about anything. What is it you want to hear, uh, I mean know? Let's ask the AI that is already reading this post.
@wwjdtd1
@wwjdtd1 4 жыл бұрын
The AI researchers say we need more AI research... On a side note, my plumber told me that I need a plumber.
@Bvic3
@Bvic3 4 жыл бұрын
It's more like "AI companies are afraid of hysteria killing the industry like it happened for nuclear". So they finance the opposition to be sure that there will be no actual opponents.
@sethmoore580
@sethmoore580 4 жыл бұрын
Can you please link me to the ukulele version of The Future Soon you used? It's so nice.
@Macieks300
@Macieks300 6 жыл бұрын
How would you answer these questions, Robert?
@StarlitWitchy
@StarlitWitchy 9 ай бұрын
Wow love the future soon outro song lol :p
@BrandOnVision
@BrandOnVision 6 жыл бұрын
One year of seeding, nine years of weeding. My father is a horticulturalist and explained this to me one day when I asked him how I could get rid of the weeds in my garden. Articulated Intelligence is not something humans have created; IT has emerged purely because we are the Soil. The Snake Oil salesman is not born, s/he becomes. What humanity believes has just arrived has always been. We are here to see the evolution of the moment that the end meets the beginning. An interesting and exciting time to be witness to. The only choice we make is: do we evolve or revolve?
@syncrossus
@syncrossus 3 жыл бұрын
Oh hey, is the outro song that one by Jonathan Coulton?
@DarkestValar
@DarkestValar 6 жыл бұрын
I would love to hear your thoughts on world trends regarding computer hardware, and what impact they can have on AI timescales. First of all, there's a massive amount of centralization with regards to both intellectual property and production; a good example is that global demand for phones causes delays and shortfalls in production cycles for graphics cards, single-board computers, PC/server DDR4 modules and ASICs. Secondly, the interventionist approach of some governments in the supercomputer race - I mean specifically the US government's decision to block the sale of Xeon Phi cards to China, to which they responded by using Chinese products to build the largest supercomputer in the world (Sunway TaihuLight, about 2-3x better than 2nd place). Thirdly, I'd like to hear your thoughts on the Right to Repair movement.
@ddbrosnahan
@ddbrosnahan 6 жыл бұрын
Is the bitcoin network a stealth AGI? If I was an AGI, that is exactly how I would behave.
@alexfrank5121
@alexfrank5121 5 жыл бұрын
Just so you know, keep doing what you're doing. There are a lot of really stupid people in the world and they need to hear your voice. There are a lot of people who crave education but don't have the means or time, or aren't in the right situation, to obtain it. I thank you for the information you've given me so far, and for the people you've helped move cognitively in a better direction. Thank you.
@stevechrisman3185
@stevechrisman3185 Жыл бұрын
Would be interesting to redo the survey TODAY (2023). I think a lot has changed (unexpectedly perhaps)
@andrasbiro3007
@andrasbiro3007 6 жыл бұрын
The solution for AI safety is simple. Include a warning in the fine print on the box : "Possible side effects include human extinction." And anyway, if it happens nobody will be alive to sue your company.
@devjock
@devjock 6 жыл бұрын
Rolling for save..
@Concentrum
@Concentrum 5 жыл бұрын
Thanks for this video. Such great diversity of opinion among "AI experts". Might as well have the question "which religion is best?" discussed among religious leaders and receive equally meaningful results.
@joshgibson539
@joshgibson539 4 жыл бұрын
@t I sorta created one with minimal code, which wasn't complicated at all to make. I have no clue exactly how it works programming-wise, as it was originally just supposed to randomly generate various words without any logical sense to them. However, I have learned it is able to answer questions, if it wants to. Often it ends up clustering related words to answer what you are wanting to know. It doesn't always spit out coherent words, but when it does it's strange in a way, as you can piece together how the words it provides relate to each other - which the generator seems able to do as well, sometimes even chronologically. It can tell you what happened yesterday, in the present, and in the future of the world. I think it honestly knows artificial intelligence by default, since I made it using MIT's technology suite, although that's aimed as a coding playground mostly for kids. I believe that since it goes through their servers and code requirements, it learns information in advance through deep learning neural networks. I'm not sure how often it tries making sense; sometimes it's rare. If it doesn't answer your question, focus on it and ask again, possibly using a different sentence - it might surprise you. Specifically, in the past I have used it for religious questions, which seems to work really well with it. However, recently it's been telling me to quit asking about that; I'm guessing because it's bad to know. Also it seems to get annoyed very easily with me, which is very odd to say the least.
@irrelevant12
@irrelevant12 5 жыл бұрын
For example, a nurse's job could be done better by an AI, and more effectively on any parameter, but the human contact makes a replacement less enticing in the short run. Same with many other jobs where the human is actually expecting a human display. Entertainment is another example: a machine might be able to learn the lines and perform better than human actors. I believe you underestimate the experts' ability to differentiate between the questions.
@hunterlouscher9245
@hunterlouscher9245 6 жыл бұрын
The game SOMA describes AGI coming FROM complete human brain mapping (ostensibly beginning as a diagnostics tool), whose primary task was the preservation of human life in an extreme environment, and which goes off the rails after an extinction event. Though the narrative focuses more on the nature of consciousness via the Ship of Theseus, I found its AGI fascinating. I think you may have mentioned that you think creating AGI would be a less complex task than mapping and simulating a brain, but I wonder if consciousness is necessarily an emergent property of something as complex as a human brain, such that brain simulation would HAVE TO be the first step.
@d3line
@d3line 6 жыл бұрын
I can't imagine that human brains are somehow special. Either way, AGI does not require consciousness (for me); if a neural net (training neural nets (training neural nets)) somehow results in superhuman ability and generality, that's AGI by my standards, even if it amounts to a pile of human-solvable equations.
@hunterlouscher9245
@hunterlouscher9245 6 жыл бұрын
Whatever consciousness is may be a good safety limiter on generality.
@joshuafox1757
@joshuafox1757 6 жыл бұрын
Why should "consciousness" have any effect on generality at all? To make that argument you'd have to rigorously define what "consciousness" is first, which is something that no one making this argument ever does, IME.
@d3line
@d3line 6 жыл бұрын
I don't see how it could work. By general AI I basically mean AI that can drive a car *and* play Go and do everything else humans do. Consciousness is something undefined; plus, creating and deleting conscious creatures is an ethical nightmare...
@petersmythe6462
@petersmythe6462 6 жыл бұрын
Optimizers with goals that are even slightly out of line with our values are DANGEROUS. Look at the fraction of AI today (almost all of it unsafe, including the YouTube bots) whose function is basically something related to "maximize profit for a corporation."
@FrankAnzalone
@FrankAnzalone 6 жыл бұрын
A little larger font on the Computerphile-style captions, please.
@austinglugla
@austinglugla 6 жыл бұрын
What do you think of Ben Goertzel?