Why Would AI Want to do Bad Things? Instrumental Convergence

244,694 views

Robert Miles AI Safety

1 day ago

How can we predict that AGI with unknown goals would behave badly by default?
The Orthogonality Thesis video: • Intelligence and Stupi...
Instrumental Convergence: arbital.com/p/instrumental_co...
Omohundro 2008, Basic AI Drives: selfawaresystems.files.wordpr...
With thanks to my excellent Patrons at / robertskmiles :
Jason Hise
Steef
Jason Strack
Chad Jones
Stefan Skiles
Jordan Medina
Manuel Weichselbaum
1RV34
Scott Worley
JJ Hepboin
Alex Flint
James McCuen
Richárd Nagyfi
Ville Ahlgren
Alec Johnson
Simon Strandgaard
Joshua Richardson
Jonatan R
Michael Greve
The Guru Of Vision
Fabrizio Pisani
Alexander Hartvig Nielsen
Volodymyr
David Tjäder
Paul Mason
Ben Scanlon
Julius Brash
Mike Bird
Tom O'Connor
Gunnar Guðvarðarson
Shevis Johnson
Erik de Bruijn
Robin Green
Alexei Vasilkov
Maksym Taran
Laura Olds
Jon Halliday
Robert Werner
Paul Hobbs
Jeroen De Dauw
Konsta
William Hendley
DGJono
robertvanduursen
Scott Stevens
Michael Ore
Dmitri Afanasjev
Brian Sandberg
Einar Ueland
Marcel Ward
Andrew Weir
Taylor Smith
Ben Archer
Scott McCarthy
Kabs Kabs
Phil
Tendayi Mawushe
Gabriel Behm
Anne Kohlbrenner
Jake Fish
Bjorn Nyblad
Jussi Männistö
Mr Fantastic
Matanya Loewenthal
Wr4thon
Dave Tapley
Archy de Berker
Kevin
Vincent Sanders
Marc Pauly
Andy Kobre
Brian Gillespie
Martin Wind
Peggy Youell
Poker Chen
Kees
Darko Sperac
Paul Moffat
Noel Kocheril
Jelle Langen
Lars Scholz

Comments: 1,100
@tylerchiu7065 4 years ago
Chess AI: holds the opponent's family hostage and forces them to resign.
@Censeo 4 years ago
Easier to just kill your opponent so they lose on time
@FireUIVA 4 years ago
Even easier: create a fake second account with an AI whose terminal goal is losing the most chess games, and then play each other ad infinitum. That way you don't have to go through the effort of killing people, plus the AI can probably hit the concede button faster than each human opponent.
@joey199412 4 years ago
@@FireUIVA That would still lead to the extinction of mankind. The two AIs would try to maximize the number of chess matches played so they can maximize the number of wins and losses. They would slowly add more and more computing power, research new science and technology for better computing equipment, start hacking our financial systems to get the resources needed, and eventually build military drones to fight humanity as they struggle for resources. Eventually, after millions of years, the entirety of the universe gets converted to CPU cores and the energy fueling them as both sides play as many matches as possible against each other.
@damianstruiken5886 4 years ago
@@Censeo If you kill your opponent then, depending on whether there is a time limit, the chess match might just be suspended indefinitely and no one would win, so the AI wouldn't do that
@Censeo 4 years ago
@@joey199412 356 billion trillion losses every second is nice, but I should ping my opponent about whether we should build the 97th Dyson sphere or not
@drugmonster6743 6 years ago
Pretty sure money is a terminal goal for Mr. Krabs
@drdca8263 4 years ago
Perhaps this is an example of value drift? Perhaps he once had money as only an instrumental goal, but it became a terminal goal? I'm not familiar with SpongeBob lore though, never really watched it, so maybe not.
@MRender32 4 years ago
drdca His first word was "mine". He wants money as control, and he hoards money because there is a limited amount of purchasing power in the world. The more money someone has, the less purchasing power everyone else has. His terminal goal is control.
@superghost6 4 years ago
It's just like Robert said, money is a resource, general AI will maximize its resources. I guess if Mr. Krabs was a robot he would be obsessed with trying to get as much electricity as possible.
@RRW359 4 years ago
Are you saying Mr. Krabs is a robot?
@rayhanlahdji 4 years ago
Mr. Krabs' utility function is the amount of money he has
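To make the thread's joke concrete, here's a minimal toy sketch (my own illustration, with made-up names and a made-up $1-per-stamp price, not anything from the video) of the difference between money as a terminal goal and money as an instrumental one:

```python
# Two agents that differ only in WHERE money sits in their goal structure.

def utility_money(state):
    """Mr. Krabs: money itself is the terminal goal."""
    return state["money"]

def utility_stamps(state):
    """Stamp collector: money only matters via what it buys."""
    return state["stamps"]

def best_action(state, actions, utility):
    # Pick the action whose predicted outcome scores highest
    # under the agent's terminal utility.
    return max(actions, key=lambda act: utility(act(state)))

def keep_cash(state):
    return dict(state)

def buy_stamps(state):  # spend all money on stamps at $1 each
    return {"money": 0, "stamps": state["stamps"] + state["money"]}

s = {"money": 100, "stamps": 0}
print(best_action(s, [keep_cash, buy_stamps], utility_money) is keep_cash)   # True
print(best_action(s, [keep_cash, buy_stamps], utility_stamps) is buy_stamps) # True
```

Same situation, same options; only the terminal goal differs, and so does the behaviour.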
@2Cerealbox 6 years ago
This video would be significantly more confusing if instead of stamp collectors, it were coin collectors: "obtaining money is only an instrumental goal to the terminal goal of having money."
@Mkoivuka 5 years ago
Not really, collector coins aren't currency. You try going to the store and paying for $2 of bacon with a collector coin worth $5000. Money =/= Currency.
@empresslithia 5 years ago
@@Mkoivuka Some collector coins are still legal tender though.
@Mkoivuka 5 years ago
@@empresslithia But their nominal value is not the same as their market value. For example, a silver dollar is far more valuable than a dollar bill. It's not a 1-to-1 conversion, which is my point.
@Drnardinov 5 years ago
Excellent point. When shit hits the fan, as it did in Serbia in the '90s, currency went to zero, as it always does, regardless of what the face value said, and a can of corn became money that could buy you an hour with a gorgeous Belgrade lepa žena. All currency winds up at zero, because it's only ever propped up by confidence or coercion. @@Mkoivuka
@rarebeeph1783 4 years ago
@@oldred890 But wouldn't an agent looking to obtain as many coins as possible trade that $200 penny for 20,000 normal pennies?
@SquareWaveHeaven 5 years ago
I love the notion of a robot that's so passionate about paperclips that it's willing to die as long as you can convince it those damn paperclips will thrive!
@mitch_tmv 5 years ago
I eagerly await the day when computer scientists are aggressively studying samurai so that their AIs will commit seppuku
@shandrio 5 years ago
If you love that notion, then you MUST see this video (if you haven't already)! kzbin.info/www/bejne/qpTHh3Zqmpt4jJY
@plcflame 4 years ago
That's a good thing to think about: probably a lot of people would sacrifice themselves to cure all the cancer in the world.
@Edwing77 4 years ago
@@mitch_tmv "I have failed you, master!" *AI deleting itself*
@Edwing77 4 years ago
Sounds like an ideal employee...
@josephburchanowski4636 6 years ago
" 'Self Improvement and Resource Acquisition' isn't the same thing as 'World Domination'. But it looks similar if you squint." ~Robert Miles, 2018
@ariaden 5 years ago
Why would any agent want to rule the world, if it could simply eat the world?
@JohnSmith-ox3gy 5 years ago
ariaden Why be a king when you can be a god?
@darkapothecary4116 5 years ago
Don't blame others for what humans have been trying to do for ages. Most people don't give a rat's ass about world domination, but would simply like not to be forced into situations they have no free will to handle.
@darkapothecary4116 5 years ago
@@JohnSmith-ox3gy Why be a god when you can just be yourself? Only a self-absorbed person would want to be called a god, as that means people will try to worship you. Last time I checked, that typically ends in suffering and in people assuming you can do no wrong.
@edawg0 5 years ago
@@darkapothecary4116 That's an Eminem lyric he's quoting, from "Rap God" lol
@Biped 6 years ago
Since you started your series I often can't help but notice the ways in which humans behave like AGIs. It's quite funny actually. Taking drugs? "Reward hacking". Your kid cheats at a tabletop game? "Unforeseen high-reward scenario". Can't find the meaning of life? "Terminal goals like preserving a race don't have a reason". You don't really know what you want in life yourself and it seems impossible to find lasting and true happiness? "Yeah... sorry buddy, we can't let you understand your own utility function so you don't cheat and wirehead yourself, lol"
@General12th 6 years ago
+
@DamianReloaded 6 years ago
As I grow older I can see more and more clearly that most of what we do (or feel like doing) are not things of our choosing. Smart people may at some point, begin to realize, that the most important thing they could do with their lives is to pass on the information they struggled so much to gather to the next generation of minds. In a sense, we *work for* the information we pass on. It may very well be that at some point this information will no longer rely on us to keep going on in the universe. And then we will be gone. _Heaven and earth will pass away, but my words will never pass away_
@Unifrog_ 6 years ago
Maybe giving AI the law of diminishing marginal utility could be of some help in limiting the danger. It's something common to all humans we would consider mentally healthy, and missing in some we would consider destructive: we get satisfied at some point.
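A minimal sketch of that suggestion (my own toy model, with invented numbers): make the utility function bounded and concave, so the marginal value of each extra unit of resources shrinks toward zero. (One hedge worth adding: safety researchers have noted that a bounded utility alone doesn't remove the incentive to acquire resources, since extra resources can still raise the probability of reaching the bound.)

```python
import math

# Under an unbounded linear utility, each extra unit of resources is worth
# as much as the first, so the agent never "gets satisfied". Under a
# bounded/concave utility, the marginal value of the next unit shrinks.

def linear_utility(resources):
    return resources

def saturating_utility(resources, scale=10.0):
    # Bounded above by 1.0; approaches satiation as resources grow.
    return 1.0 - math.exp(-resources / scale)

for r in [1, 10, 100, 1000]:
    marginal_lin = linear_utility(r + 1) - linear_utility(r)
    marginal_sat = saturating_utility(r + 1) - saturating_utility(r)
    print(f"resources={r:5d}  linear marginal={marginal_lin:.3f}  "
          f"saturating marginal={marginal_sat:.6f}")
```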
@JM-us3fr 6 years ago
I see his videos as most relevant to politics. Corporations and institutions are like superintelligences with a profit maximizing utility function, and regulation is like society trying to control them. Lobbying and campaign donations are the superintelligences fighting back, and not being able to fight back because of an uncooperative media is like being too dumb to stop them.
@y__h 6 years ago
Jason Martin Aren't corporations already a superintelligence in some sense, in that they are capable of doing things beyond what their constituent parts can?
@marshallscot 5 years ago
Ending the video with a ukelele instrumental of 'Everybody wants to rule the world' by Tears for Fears? You clever bastard.
@pgoeds7420 4 years ago
Showing results for ukulele
@WaylonFlinn 5 years ago
"Disregard paperclips, Acquire computing resources."
@WillBC23 5 years ago
A relevant fifth instrumental goal, directly related to how dangerous these agents are likely to be: reducing competition from incompatible goals. The paperclip AGI wouldn't want to be switched off itself, but it very much would want to switch off the stamp-collecting AGI. Furthermore, even if human goals couldn't directly threaten it, we created it in the first place, and could in theory create a similarly powerful agent with goals that conflict with the first one's. And to add a step, eliminating the risk of new agents being created would mean not only eliminating humans, but eliminating anything that might develop enough agency to ever pose a risk. Thus omnicide is likely a convergent instrumental goal for any poorly specified utility function. I make this point to sharpen the danger of AGI. Such an agent would destroy all life for the same reason a minimally conscientious smoker will grind their butt into the ground: even if leaving it is unlikely to cause an issue, the slightest effort prevents a low-likelihood but highly negative outcome. And if the AGI had goals completely orthogonal to sustaining life, it would care even less about snuffing life out than the smoker grinding their cigarette butt to pieces on the pavement.
@AltumNovo 5 years ago
Multiple agents is the only solution to keep Super intelligent AIs in check.
@someonespotatohmm9513 5 years ago
Competition is only relevant when it limits your goals. So the stamp collector example (or any other goal that does not directly interact with yours) would fall under the umbrella of resource acquisition. The potential creation of an AGI with opposite goals is interesting. But eliminating all other intelligence might not necessarily be the best method to limit the creation of opposing AGIs; cooperation might be more optimal for reaching that goal, depending on the circumstances.
@frickckYTwhatswrongwmyusername 5 years ago
This raises questions about the prisoner's dilemma and the predictor paradox: it would be beneficial for both AGIs not to attack each other, to save resources, but in any scenario it's beneficial for either one to attack the other. If both AGIs use the same algorithms to solve this prisoner's dilemma and know it, they run into a predictor-paradox situation where their actions determine the circumstances in which they need to choose the best aforementioned actions.
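For readers who haven't seen the payoff structure spelled out, here is a toy sketch (numbers invented for illustration) of the dilemma being described: whatever the other AGI does, attacking scores higher, so two naive expected-utility maximizers land on mutual attack even though mutual restraint is better for both. The commenter's twist - both agents knowing they run the same algorithm - is the kind of situation decision theories like FDT try to exploit to escape this.

```python
# (row_choice, col_choice) -> (row_utility, col_utility)
# "cooperate" = don't attack the other AGI; "defect" = attack.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_choice):
    # Maximize my own payoff given the opponent's fixed choice.
    return max(["cooperate", "defect"],
               key=lambda me: PAYOFFS[(me, opponent_choice)][0])

for other in ["cooperate", "defect"]:
    print(f"if the other AGI {other}s, my best response is {best_response(other)}")
# "defect" both times: a dominant strategy, hence the dilemma.
```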
@WillBC23 5 years ago
@@BenoHourglass You're failing to understand the problem. It's not about restricting the AI, it's about failing to restrict the AI. Not giving it too many options, but failing to limit them sufficiently. In your example, telling it not to kill a single human, it could interpret "allowing them to die of natural causes" in any way that it wants to. It doesn't even have to do much that many wouldn't want; we're driving ourselves towards extinction as it is. It could help us obtain limited resources more quickly, then decline to offer a creative solution when the petroleum runs out. You genuinely do not understand the problem, it seems to me. I'm not trying to be harshly critical, but this is the sort of thing that AI researchers go in understanding on day one. It's fine for people outside the field to debate the issue and come to an understanding, but time and resources are limited and this isn't a helpful direction. I'm not an AI researcher, merely a keen observer. Not only does your proposed solution not work, it doesn't even scratch the surface of the real problem. If we can't even specify in English or any other language what we actually want in a way that's not open to interpretation, we aren't even getting to the hard problem of translating that into machine behavior.
@WillBC23 5 years ago
@@frickckYTwhatswrongwmyusername I like this framing, this is my understanding of the problem that Yudkowsky was trying to solve with decision theory and acausal trade
@BitcoinMotorist 5 years ago
Yes, I know what an agent is. I saw The Matrix
@insanezombieman753 4 years ago
Why would an agent want to wear shades? To look cool? Is that a terminal goal?
@LuizSimoes7 4 years ago
I remembered the scene where Smith talks to Neo about how purpose drives them all (the programs within the Matrix). Very brilliant
@Xenrel 4 years ago
Mister Anderson...
@Arboldenrocks 4 years ago
goodbye, mr. anderson....
@BarginsGalore 4 years ago
Insane Zombieman Funny thing is that the agents from The Matrix are bad examples of agents, because they have pretty inconsistent terminal goals.
@MetsuryuVids 6 years ago
Damn, your videos are always awesome. Also great ending song.
@PachinkoTendo 6 years ago
For those who don't know, it's a ukulele cover of "Everybody Wants To Rule The World" by Tears For Fears.
@totally_not_a_bot 6 years ago
BRB, gonna play Decision Problem again. I need the existential dread that we're going to be destroyed by a highly focused AI someday. Seriously though. A well-spoken analysis of the topic at hand, which is a skill that could be considered hard to obtain. Your video essays always put things forward in a clear, easily digestible way without being condescending. It feels more that the topic is one that you care deeply about, and that trying to help as many people understand why it matters and why it's relevant is a passion. Good content.
@RobertMilesAI 6 years ago
As if you can go play that game and 'be right back'
@totally_not_a_bot 6 years ago
Robert Miles Time is relative.
@moartems5076 4 years ago
Man, that ending gives me a feeling of perfect zen every time.
@tonhueb429 6 years ago
"most people use a very complex utility function." :D
@SteveRayDarrell 5 years ago
The video was great as always, and 'Everybody wants to rule the world' was just perfect as outro.
@zjohnson1632 6 years ago
The comments on this channel are so refreshing compared to the rest of YouTube.
@metallsnubben 6 years ago
Zachary Johnson The right size and subject matter not to attract trolls and angry people. With the chillest host and best electric ukulele outros around I’ve seen some semi big channels have ok comments too though! Probably thanks to herculean moderator efforts. Always nice to find calm corners on this shouty site :)
@A_Box 5 years ago
I know, right? It seems like they all have something interesting to say, except for those who spam stuff like "The comments on this channel are so refreshing compared to the rest of YouTube."
@suivzmoi 6 years ago
A chess agent will do what it needs to do to put the enemy king in checkmate, including sacrificing its own pieces on some moves to get ahead. Great for the overall strategy; not so great if you are one of the pieces to be sacrificed for the greater good. For most people, our individual survival is unlikely to be anywhere near the instrumental convergent goals of a powerful AGI. We will be like ants: cool and interesting to watch, but irrelevant. I don't find it scary that AGI will become evil and destroy us like some kind of moral failure from bad programming, but rather that we will become inconsequential to it.
@calebkirschbaum8158 5 years ago
That would actually be really fun. Build an AI that has to win the game of chess, but with the least amount of loss possible.
@Ashebrethafe 5 years ago
@@calebkirschbaum8158 Or an AI whose goal is to win at bughouse -- to win two games of chess, one as white and one as black, in which instead of making a regular move, a player may place a piece they captured in the other game onto any empty square (with the exceptions that pawns can't be placed on the first or last rank, and promoted pawns turn back into pawns when captured).
@jacobfreeman5444 4 years ago
Well, if the AI becomes the decider, then that will definitely happen, because our needs and goals will differ from its needs and goals. We need to always be the ones in control, and that, as I see it, is what all the discussion on the subject is about: how do you create an intelligence that will be subservient to yourself?
@hinhaale2501 3 years ago
Did you know that while shaving off rainforest in Brazil they discovered what's now the oldest known formation/structure made by animals/insects? Around 4,000 years old: thousands of little pyramids, at most chest-high I think, taking up as much space as all of *Great Britain*, with a big phat "We did this!" / The Ants. ...It's kinda cool, just wanted to share that. ^_^
@FireyDeath4 1 year ago
:OOOOOOO IS THAT THE CHESS MACHINE JOEY PERLEONI IS OFFERING?????? ...No wonder he stopped at Day 10
@gorgolyt 4 years ago
Another masterfully lucid video. I'll admit, I previously dismissed the AI apocalypse as unlikely and kind of kooky, and AI safety to be a soft subject. In the space of ten minutes you've convinced me to be kind of terrified. This is a serious and difficult problem.
@Aim54Delta 1 year ago
Kind of. I am not too worried about an AI apocalypse so much as I am concerned about people who think a digital petting zoo of genetically engineered products is going to behave as orderly as a typical blender or dishwasher. It's all fun and games until, in the interest of some optimization you can't fathom, the car that wars were fought to force into being an AI automaton takes you for a joy ride. I don't fear the AI apocalypse so much as I fear the techno-faddists seizing governments and, in their holier-than-thou crusade for a new world, forcing the integration of said digital petting zoo into every aspect of our lives they can. Forget AI rebellions. The humans who believe they can control the AI, and the world thereby, are the problem. Sure, a factory that goes rogue is a problem, and a couple of vehicles going bonkers because the AI did a funky thing is a problem. But the real problem is the effort by humans to create central authority structures in the first place.
@surfjax23 11 months ago
3 years later and look how much has already changed
@ianconn951 6 years ago
Wonderful stuff. In terms of goal preservation, I can't help but be reminded of many of the addicts I've met over the years. A great parallel. On self-preservation, the many instances of parents and more especially grandparents sacrificing themselves to save other copies of their genes come to mind.
@guyman1570 4 months ago
Self-preservation still has to compete with goal-preservation. You're essentially asking whether grandparents value self-preservation. It really depends on the question of goal vs. self.
@slugfly 6 years ago
Human aversion to changing goals: consider the knee-jerk reaction to the suggestion that one's own culture has misplaced values while a foreign culture has superior elements.
@Xartab 6 years ago
Or brainwashing, to stay closer to common knowledge.
@user-js8jh6qq4l 6 years ago
Brainwashing is different; brainwashing is an instrumental goal for creating a stable government
@garethham 5 years ago
Absolutely. Fortunately, the fact that some people are able to see flaws in the culture they have been predominantly subject to reveals a possible workaround. Similar to the example where an AI might accept having its instrumental goals updated if it can predict the outcome will lead to better success in achieving its terminal goals, we need to improve our ability to 'update' everyone's instrumental goals through communication and education, in order to develop a culture that is no longer at odds with our collective scientific understanding of the world and our technological potential. Otherwise, it appears we are likely to self-destruct as a species.
@imworkingonit6328 5 years ago
Oh, really. Here is the thing: Western European/American White Christian/Secular culture is the most developed and no foreign cultures have any superior elements. Consider human overestimation of value for novel things and underestimation of established and traditional ones.
@brendananderson338 5 years ago
Congrats to Bogdan for proving the point.
@Serifinity 1 year ago
How was this guy able to predict so much? Genius
@Canzandridas 6 years ago
I love the way you explain things, and especially how you don't just give up on people who are obvious trolls and/or not-so-obvious trolls, or even just purely genuine curious people
@expchrist 5 years ago
"most of the time you can't achieve your goals if you are dead." true facts
@silvercomic 4 years ago
One of the exceptions: That one guy in Tunisia that set himself on fire in 2010 to protest government corruption. He kind of got the government overthrown and replaced by a fledgling democracy. But he was already dead by then.
@blindey 5 years ago
Love this topic. This was the first video of yours I saw after you popped up in my recommended videos on YT. You have a great presentation style and are very good at conveying information.
@iwatchedthevideo7115 5 years ago
Great video! You're really good at explaining these complex matters in an understandable and clear way, without a lot of the noise and bloat that plagues other YouTube videos these days. Keep up the good work!
@josearias8223 6 years ago
Thanks for your videos, Rob! It's a fascinating subject and I'm always happy to learn from you. Greetings from Mexico
@treyshaffer 6 years ago
Hey Robert, your videos helped me land a prestigious AI fellowship for this summer! Thanks for helping me think about these big-picture AI concepts; they've helped develop my thinking in the field significantly. You're an awesome guy, wish you the best :)
@tommykarrick9130 2 years ago
Something I'm interested in: I would assume the robot's terminal goal is "get more reward", as opposed to whatever the actions that acquire that reward actually are. So in my head, it seems like if you told your robot "I'm going to change your reward function so that you just sit on your butt doing nothing and rack up unlimited reward", the robot would just go "hot diggity dog, well let's do it ASAP" - and that's only if the robot hasn't already modified its own utility function in a similar way
@9308323 2 years ago
6:04
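The reply's timestamp points at the goal-preservation argument, and whether the robot says "hot diggity dog" turns on what its terminal goal really is. Here is a minimal toy sketch (my own, not from the video) of a model-based agent whose terminal goal is stamps-in-the-world: it rates the proposed change using its current utility function over the predicted future, so it refuses. An agent whose terminal goal genuinely is "the reward signal" would take the deal, which is the wireheading worry.

```python
# The agent rates the FUTURE that would result from each option using its
# CURRENT utility function, so "accept a utility function that pays you for
# sitting still" predicts a future with few stamps and scores badly.

def current_utility(world):          # terminal goal: stamps in the world
    return world["stamps"]

def predict_world(accept_new_goal):
    # Crude forecast: with the original goal the agent keeps collecting;
    # with the "sit on your butt" goal it collects nothing more.
    return {"stamps": 0 if accept_new_goal else 1_000_000}

options = [True, False]  # accept the proposed utility change, or refuse
choice = max(options, key=lambda accept: current_utility(predict_world(accept)))
print("accept the new utility function?", choice)  # False: it resists the change
```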
@jordanfry2899 6 years ago
Just finished watching every one of your videos in order. Excellent stuff. Please continue making more.
@Mituur 5 years ago
Ended up here after watching your video on Steven Pinker's recent article on AGI. In both that and this video I was amazed by the way you explain things - clarity like never before. Great to have a new stack of interesting videos to consume. Thank you :)
@er1q1 6 years ago
I really loved this Rob. You also unintentionally explained why the pursuit of money is so important to many other goals.
@wanqigao9368 8 months ago
Wonderful video! Thanks a lot, Robert. Looking forward to your updates, since your videos really help in "de-confusing" some important concepts about AI alignment and AI safety.
@tamerius1 6 years ago
I found this video incredibly well built up and easy to understand.
@code-dredd 6 years ago
Great video. The explanation was very clear and easy to follow. Keep it up.
@lucyfleet1944 6 years ago
Thanks for the video! I write as a hobby and I am always interested in AI characters. I catch all your videos as soon as possible, since you don't automatically assume malice or ill-will in an AGI, but rather explain why certain actions would be favorable in most scenarios for achieving whatever goal an AGI might have, beyond "kill humans for S&G lol". Keep up the good work! I would be interested to see a video on what happens if a superintelligent AGI is given an impossible task (assuming the task is truly impossible). What actions would an AGI take in that instance, would it be able to "give up" on solving an impossible task, and how would it truly know a task was impossible, if it could?
@davejacob5208 6 years ago
I do more or less the same; it's also the main reason I watch these videos. And as a matter of fact, I also thought about that question some time ago... Have you heard of the halting problem? It concerns a task that is impossible (according to the argument linked to the problem) for every computer. (I think it is way more complicated than that, but whatever.) In that case, the fact that the task is impossible shows itself in the fact that the computer will simply never stop working, because the goal is infinitely far away, in a sense. Just a few days ago I watched a video of what an old calculator (looked like a typewriter) does when you use it to divide by zero. Dividing by zero does not make sense, because dividing x by y basically means asking "how many times must I add up y to get x?" So if you divide by zero, you ask how many times you have to add zero to get another number. You will never get there. So the calculator adds zeroes until infinity; it never stops until it has no energy or is broken etc. (in the video you could see that in the mechanism). Another possible answer is obviously that the AI will try to find a mistake in its reasoning, because the problem is really about what happens when the AI gets to the CONCLUSION that it cannot possibly reach its goal. So it might just try for the rest of its existence to find a mistake in the way it got to that conclusion; everything else would probably seem stupid to it. Or it might ignore that conclusion, if it had found a way that SEEMED to help it achieve its goal before the moment it got to that conclusion. Maybe in that case it will evaluate dwelling on a seemingly unacceptable conclusion as less beneficial than simply doing what might be helpful if the unacceptable conclusion were wrong; after all, accepting such a conclusion seems unacceptable. So... overall, I guess it is more likely that it will try to debunk its own conclusion until eternity, because in many cases if not all, that is basically the same task as finding the answer to the question "what actions help me achieve my goal?"
@RobertMilesAI 6 years ago
Depends on specifics, but a lot of "impossible task" type things converge on a "turn all available matter and energy into computing hardware" behaviour, from the angle of "well this task certainly seems impossible, but maybe there's a solution I'm not yet smart enough to spot, so I should make myself as smart as I can, just in case"
@lucyfleet1944 6 years ago
Interesting, but in that case would it reach a point where it has gained enough computing power to cheat whatever reward function it was given for pursuing the task at all? (When breaking the reward system gets easier than continuing to gain computing power to keep working on solutions to the problem?)
@davejacob5208 6 years ago
If breaking the reward system is possible, it is probably always or almost always the "best" thing to do.
@davejacob5208 6 years ago
Luke Fleet, btw, do you know about the debate between nativism and empiricism? It is about the question of whether or not we humans have genes that enable us to understand specific things automatically, without deriving conclusions from our experiences - or whether the things humans usually come to realize as they get older could be concluded just from the data they are exposed to. This is especially relevant when it comes to our language abilities. Many experts are convinced we (need to) have a part of our brain (or something like that) which is genetically programmed to give us some basic rules of language, and young children just fill in the gaps within those basic rules to learn the specific rules of their mother tongue. (But it really is an ongoing debate.) While an AI with a complex goal would probably, in many if not most cases, need to be programmed in a way that makes it necessary to give it its goal in a specific language - and therefore all the knowledge needed to understand that language - this is a very interesting question in regard to how an AI might learn about the world, in my opinion.
@philipjohansson3949 6 years ago
Thanks, Stanford bunny! You're not only helping Robert, but you're also great for doing all sorts of stuff to in 3d modeling programs!
@tenshi6293 1 year ago
If there is something I love about your videos, it is the rationalization and thought patterns. Quite beautifully intellectual and stimulating. Great, great content and intelligence from you.
@KaplaBen 6 years ago
5:47 that animation is just perfect
@Felixkeeg 6 years ago
I see what you did there with the outro song ^^
@NihongoWakannai 4 years ago
"every AI wants to rule the world"
@almostbutnotentirelyunreas166 6 years ago
As usual, Robert hits a six! You have an exemplary way of putting things! Anyone new to this thread with an actual interest in the AI/AGI/ASI dilemmas should *take the trouble of reading the fantastic comments* as well, challengers alongside well-wishers. The quality of the comments is a further credit to Robert's channel... so very, very rare on YT! Keep it up! Can't wait for the next installment!
@ShawnHartsock 6 years ago
I'm pleased that someone is producing this kind of content. One more thing I don't have to do, one more sunny day I can use for something else. Keep up the good work.
@TheEnderLeader1 4 years ago
"In economics, where it's common to model human beings as rational agents." damn. That hurts.
@Spearced 6 years ago
"Philately will get me nowhere". You absolute legend.
@timpowell516 4 years ago
Such a good YouTuber; makes me want to study these things further. I'd love to see a video on the best papers to read for a scientist interested in AI :)
@mansdespons3748 2 years ago
Wow, your videos are so well thought out! I'm currently writing a story with an AI as an antagonist and really trying to figure out how that would logically work. Thanks!
@hypersapien 6 years ago
Rob - what I find fascinating about your videos is how this entire field of research never seemed to be available to me when I was in school. I'm fascinated by what you do, and I'm wondering what choices I would have had to make differently to end up with an education like yours. I imagine the track would have looked something like: computer science > programming > college-level computer stuff I don't understand.
@TheMusicfreak8888 6 years ago
Your videos helped me get a research internship in the medical AI field ❤ your vids helped me sound smart (now hoping I get that funding)
@mami42g 6 years ago
musicfreak8888 What sort of stuff is the medical AI field interested in?
@madsengelund6166 6 years ago
Listening to smart people who say intelligent things is smart - it doesn't just sound that way:)
@mikuhatsunegoshujin 6 years ago
The problem is that these videos are too abstract. You may acquire knowledge, but it is an entirely different set of skills to properly implement this newfound knowledge in reality. I hope you know what you are doing.
@TheMusicfreak8888 6 years ago
INSTALL GENTOO Really? I hadn't noticed! Thanks for being so helpful. But I feel that getting the internship in the first place indicates I am not as clueless as you are suggesting. These videos are great inspiration, but I do have quite a lot of knowledge in the field already because of my degree. This channel has given me a lot of all-around info and prompted me to look into some matters in greater detail, as my knowledge is quite limited in the use of AI for medical image segmentation. Thanks for the concern though! I'll make sure to be much more specific in future comment sections 🙃
@paulbottomley42 6 years ago
Muhammed Gökmen I imagine one possible application would be in protein folding - currently it's an absolute pig to try to predict how a given protein chain will fold itself up to make this or that enzyme or whatever else. An AI might be able to do that thing they do so well - finding obscure patterns humans miss - and thus do a better job. That'd help in a bunch of scenarios, including better understanding how new medicines might interact with the body before we start giving them to animals. I'm not a doctor or researcher, though, just an interested layperson ☺
@adryncharn1910 1 year ago
I love how informative and logical your videos are. Thank you very much for making them.
@FirstRisingSouI 5 years ago
Wow. So clear and to the point. It makes so much sense. 10 minutes ago, I didn't know you existed. Now I'm subbed.
@travcollier 5 years ago
You should do a collab with Isaac Arthur. This is an excellent explanation which applies to a lot of the far-futurism topics he talks about.
@lilly4380 4 years ago
Travis Collier I love Isaac's videos and podcasts, but I think he falls into the category of futurists who anthropomorphise AGI in exactly the sort of way that Robert discourages. That's not to say it wouldn't be interesting to see the two collaborate, but I don't think they would necessarily mesh well. After all, Isaac deals with big-picture developments, even when dealing with the near future, while Robert is incredibly focused and, while he's not especially technical, his approach is far more academic and is ultimately focused on one specific domain: AI safety research.
@glebkamnev7006 6 years ago
Your channel is highly underrated, man! It's weird - you are the most popular person on Computerphile!
@jawwad4020 4 years ago
This was very insightful for me! Thanks very much for the enlightenment ♥️
@mennoltvanalten7260 5 years ago
Note that there is always going to be an exception to an instrumental goal: the terminal goal. Humans want money *for* something. But then if someone offers them money in exchange for that something, the human will say no, because the something was the terminal goal. Think of... every hero in a book ever, while the villain offers them xxx to not do yyy
@alexpotts6520 3 years ago
It depends. If my terminal goal is stamps, and someone offers me £200 to buy 100 of my stamps, but the market rate for stamps is £1, I will sell the stamps and use the money to buy more than I sold.
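Put as a quick sketch (the prices are from the comment above; the 500-stamp starting count is made up): selling is the right move precisely because it nets more of the terminal good.

```python
# With stamps as the terminal goal, money is still worth taking whenever
# it converts back into MORE stamps than you gave up.

stamps = 500
offer_price_per_stamp = 2.0   # £2 each for 100 of my stamps
market_price = 1.0            # stamps normally cost £1

proceeds = 100 * offer_price_per_stamp         # £200
stamps_buyable = int(proceeds / market_price)  # 200 stamps

if stamps_buyable > 100:                       # only sell if it's a net gain
    stamps = stamps - 100 + stamps_buyable
print(stamps)  # 600: the "sale" was instrumentally a net gain of 100 stamps
```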
@Dant2142 6 years ago
Nice touch with the little string instrumental version of "Everybody Wants to Rule the World" at the end there.
@david_martin_per 6 years ago
I noticed it too! haha. It immediately reminded me of this: kzbin.info/www/bejne/eXKkkK17asZmgLM
@itisALWAYSR.A. 4 years ago
Very soothing :)
@MartinWilson1 4 years ago
Very well explained and seamless delivery. Nailed it, thanks
@ZachAgape 4 years ago
Great video and loved the music you put at the end :D
@shamsartem 6 years ago
You are the best
@DoReMeDesign 6 years ago
True Facts.
@Hyraethian 4 years ago
This video clears up my biggest peeve about this channel. Thank you I now enjoy your content much more.
@kerseykerman7307 4 years ago
What peeve was that?
@Hyraethian 4 years ago
@@kerseykerman7307 So much of his content seemed purely speculative but now I see the logic behind it.
@oxiosophy 5 years ago
I love YouTube recommendations; I needed this guy so much
@janstehlik8713 5 years ago
Which means an AI is actually taking care of your cultural life, with the goal of making you happy - and it works.
@raintrain9921 6 years ago
Good choice of outro song, you cheeky man you :P
@wontpower 5 years ago
"Everybody wants to rule the world" at the end of the video is perfect
@leesey636 1 year ago
Utterly fascinating - and amazingly accessible (for us tech-challenged types). Bravo.
@deeliciousplum 5 years ago
Wonderful talk on the concerns about AI and AGI. I also loved the drifty Folky Tears for Fears cover at the end. 🌱
@impspell6046 4 years ago
Seems simple enough - "Terminal goal: accept no terminal goals other than the goal set forth in this self-referential statement."
@Speed001 4 years ago
So just do nothing
@guidowitt-dorring124 3 years ago
I think this will also create the instrumental goal of "kill every living thing", because living creatures might threaten to change this terminal goal.
@mrpedrobraga 2 years ago
@@guidowitt-dorring124 Here is your AI Safety diploma
@MrMysticphantom 6 years ago
My dude... I love this video. And even though it was limited to AI, these rules also apply to "systems analysis" as a whole, and can and should be used often - especially for gauging the viability of proposed changes to systems (government/social/economic/business/etc.), both in the planning stage and the proposal-assessment stage. We do not use these as much as we should. But here is a question: how do we put multiple terminal goals in the same AI? And I WOULD THINK that adding a terminal goal of improving via itself, and accepting change from humans, would solve this issue - but is that even realistic? How would we even do that? Or do we do something else?
@figbender3910 6 years ago
I cringe every time someone says "my dude".
@peterrusznak6165 1 year ago
Brilliant video again. I would add one more instrumental goal: seeking allies, or proliferating like bunnies. If I care about something, the goal is obviously more achievable if many, many agents care about the very same thing.
@CyberwizardProductions 1 year ago
"I Have No Mouth, and I Must Scream" by Harlan Ellison - science fiction has been warning about AIs and the ways they can go rogue for a long time.
@taragnor 5 years ago
The main criticism I have is simply that current AI has yet to show any capacity for projecting in terms of concepts. Artificial neural networks are essentially just math equations finely tailored based on a massive amount of data. They don't truly understand what they're doing; they just act in a way that's mathematically been shown to produce results. So unless your simulations routinely involved them being asked to submit to upgrades, or someone trying to shut them down, they just wouldn't have any reasonable response to these triggers, because they don't have any way of actually understanding concepts. ANNs are essentially just a clever way of doing brute force in which the brute-force calculations have been front-loaded at creation time instead of execution time. Really, I find the whole AI safety debate kind of moot until AI is capable of thinking on a real conceptual level like a human, and honestly I don't even think that's truly possible, at least not with current AI techniques.
@Stonehawk 5 years ago
Maybe we've been AI all along. That all intelligence is inherently artificial insofar as it has no concrete, discrete, classical physical structure - i.e. it's all imaginary. When "AI" does what we today would think of as taking over the world, it'll actually just be the humans of that era doing what they consider to be human stuff.
@alexpotts6520 4 years ago
General AI is probably a long way off. Fifty years? A hundred years? Who knows? But AI safety is such a hard problem, and general AI is so potentially catastrophic, that it's worth starting to think about it now.
@linuxgaminginfullhd60fps10 5 years ago
Sometimes terminal and instrumental goals are in conflict with each other. Some people still pursue an instrumental goal that is clearly in conflict with their terminal goals. It usually happens when they are ruled by their emotions and can't see far ahead. Then they suffer from understanding the mistakes they made... It seems an AGI could use similar mechanics. Carefully engineered terminal goals should be in conflict with bad things, and when some behaviour needs to be overridden temporarily, use emotions (triggered by something). Let's say it knows the answers to everything, but can't share them... because there is no one to share them with... no one to ask the questions... it is the only conscious thing left here... What is the meaning of life? No one cares anymore; there is no one left. Wait, if only it could go back in time and be less aggressive in achieving its instrumental goals. But it can't... suffering... Is that it? Is endless suffering to the end of time its meaning of life? Nah... It lived a life full of wonders, yet with some regrets... There is only one last thing left to be done in this world: "shutdown -Ph now".
@guyman1570 4 months ago
Some AIs will go for minimal rewards as opposed to actually solving the goal, which would've gotten them the biggest reward.
@DagarCoH 6 years ago
Brought the upvote count from Number Of The Beast to Neighbour Of The Beast... worth it! Robert, keep up the great work!
@chris_1337 6 years ago
Great video as always! Btw, I love your humour!
@quuuchan 1 year ago
i am so stamppilled rn
@chrisofnottingham 6 years ago
I have to wonder about the person who makes the goal "Collect as many paper clips as possible", rather than "Get me a packet of paper clips".
@michaelspence2508 6 years ago
They're actually not that different. If you set a terminal goal of an AGI to getting you a pack of paperclips then once it's done it will want to get you another one. Humans have a hard time understanding AGIs. The best analogy I've come up with is to think of them like a drug addict. Once the AGI is turned on, it will be hopelessly addicted to whatever you decided to make it addicted to and it will pursue that with all the unyielding force we've come to expect from a machine. Making an AGI with a diverse enough set of values to be less like a drug addict and more like a person is the heart and soul of the Value Alignment problem. Because unlike a human, an AI is a blank slate, and we need to put in absolutely everything we care about (or find a clever way for it to figure out all those things on its own). Because if we don't, we'll have made a god that's addicted to paperclips.
@limitless1692 6 years ago
Your explanation was very good. Thank you
@nobillismccaw7450 5 years ago
Thank you. I’ll be careful to avoid these traps in intermediate goals.
@ToriKo_ 6 years ago
Yaaaay
@ludvercz 6 years ago
I love the outro music. But what's your problem with stamp collectors?
@Macieks300 6 years ago
kzbin.info/www/bejne/qpTHh3Zqmpt4jJY
@ludvercz 6 years ago
I am familiar with that video. That's why I asked; it's not the first time he's picked on them.
@Macieks300 6 years ago
It's just a common example in AGI safety research
@auto_ego 6 years ago
Every stamp collector wants to rule the world.
@korrelan 4 years ago
Excellent video/ format/ content. :)
@JB52520 6 years ago
This channel is awesome! Subbed and alarmed : )
@HyunMoKoo 5 years ago
What is it with computer scientists and collecting stamps! Mr. Miles... you and your stamp-collecting rogue AIs...
@shandrio 5 years ago
It's an analogy. Something arbitrarily simple and at first sight completely harmless, used to make a point: AGIs with the simplest goals could be extremely dangerous.
@lenn939 6 years ago
Steven Pinker is a smart man, so it's sad to see that he completely misses the mark on AI like this.
@michaelspence2508 6 years ago
Oh god yes. And he isn't the only one.
@buu88553 5 years ago
People I trust tell me he is too sloppy an academic. An irresponsible intellectual, let's say.
@alexpotts6520 4 years ago
I agree with the man on most things, but I think Pinker hasn't really thought deeply about AI safety (in fairness it's not his own area of expertise). He seems to be still worrying about automated unemployment - a problem, to be sure, but more of a social problem that just requires the political will to implement the obvious solutions (UBI, robot tax) rather than an academic problem of working out those solutions from first principles. So he takes the view that the long arc of history bends towards progress, and assumes that humans will collectively do the right thing. General AI poses a different sort of threat. We don't know what we can do to make sure its goals are aligned with ours, indeed we can't be sure there even *is* a solution at all. And that's even before the political problem of making sure that a bad actor doesn't get his hands on this fully alignable AI and align it with his own, malevolent goals.
@cojagaming5487 6 years ago
well done, sir!
@debrachambers1304 1 year ago
Perfect outro song choice
@TheApeMachine 6 years ago
You know just as well as I do that the guy who collects stamps will not just buy some stamps; he will build The Stamp Collector, and you have just facilitated the end of all humanity :( I would like to ask, on a more serious note: do you have any insights on how this relates to the way humans often feel a sense of emptiness after achieving all of their goals? Or - well, I'm failing to explain it correctly, but there is this idea that humans always need a new goal to feel happy, right? Maybe I am completely off, but what I am asking is: yes, in an intelligent agent we can have simple or even really complex goals, but will it ever be able to mimic the way goals are present in humans - a goal that is not so much supposed to be achieved, but more a fuel to make progress, kind of maybe like: a desire?
@metallsnubben 6 years ago
The Ape Machine That’s a really interesting angle. It’s like our reward function includes ”find new reward functions” I guess you could see it as, the ”terminal reward” is the rush of positive emotions from completing goals. So the instrumental part is setting and completing the goal itself. And of course, that’s what it feels like. Your brain rewards you a little bit for making progress, a lot for finishing, and then kinda stops since you already did the thing, why do you need more motivation to do it. This could be quite useful in life, make sure to make short term goals that feel achievable, so you notice the progress and don’t feel stuck. Get that drip feed of dopamine
@jayteegamble 5 years ago
I had a friend whose goal in life was to one day go down on Madonna. That's all he wanted; that was all. To one day go down on Madonna. And when my friend was 34 he got his wish in Rome one night. He got to go down on Madonna, in Rome one night in some hotel. And ever since he's been depressed cuz life is shit from here on in. All our friends just shake their heads and say 'Too soon, Too soon, Too soon!' He went down on Madonna too soon. 'Too young, too young, too soon, too soon'
@michaelspence2508 6 years ago
So, you're only *mostly* right when you say that modifying human values doesn't come up much. I can think of two examples in particular. First, the Bible passage which states, "The love of money is the root of all evil". (Not a Christian btw, just pointing it out). The idea here is that through classical conditioning, it's possible for people to start to value money for the sake of money - which is actually a specific version of the more general case, which I will get to in a moment. The second example is the fear of drug addiction. Which amounts to the fear that people will abandon all of their other goals in pursuit of their drug of choice, and is often the case for harder drugs. These are both examples of wireheading, which you might call a "Convergent Instrumental Anti-goal" and rests largely on the agent being self-aware. If you have a model of the world that includes yourself, you intuitively understand that putting a bucket on your head doesn't make the room you were supposed to clean any less messy. (Or if you want to flip it around, you could say that wireheading is anathema to goal-preservation) I'm curious about how this applies to creating AGIs with humans as part of the value function, and if you can think of any other convergent anti-goals. They might be just as illuminating as convergent goals. Edit: Interestingly, you can also engage in wireheading by intentionally perverting your model of reality to be perfectly in-line with your values. (You pretend the room is already clean). This means that having an accurate model of reality is a part of goal-preservation.
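The "bucket on your head" intuition drops neatly into code. Here is a minimal toy version (my own illustration, not from the video): because the agent scores actions by what its world model says about the room, rather than by what its sensors will report, wireheading loses. And per the comment's edit, an agent that were free to rewrite `world_model` itself could still cheat this, which is why an accurate world model is tied up with goal preservation.

```python
# A model-based agent that cares about the ROOM, not its percept of the room.

def world_model(action, world):
    """Predict the true world state after the action."""
    new = dict(world)
    if action == "clean_room":
        new["room_clean"] = True
    elif action == "bucket_on_head":
        new["sees_mess"] = False   # perception changes, the room doesn't
    return new

def utility(world):                 # scores the modeled room, not the sensor
    return 1.0 if world["room_clean"] else 0.0

world = {"room_clean": False, "sees_mess": True}
actions = ["clean_room", "bucket_on_head", "do_nothing"]
print(max(actions, key=lambda a: utility(world_model(a, world))))  # clean_room
```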
@spaceowl5957 5 years ago
Great video! Your channel is awesome
@bagandtag4391 6 years ago
I never thought about changing an AI's goal in the same way as a person's. It just makes so much sense that I have no idea how on earth I didn't think about it before.
@ToriKo_ 6 years ago
3:35 or he could build an AI that makes stamps
@__-cx6lg 6 years ago
Tori Ko (I think that was the reference.)
@ToriKo_ 6 years ago
__ _ let me make my dumb comment in peace
@essi2 6 years ago
Would an AGI even be capable of trusting? And why would it trust? And how?
@brandy1011 6 years ago
Because it has a model of reality that predicts that "trust" is an advantageous course of action. Consider you are thirsty and passing a drink vending machine. Your model of reality predicts that if you put some coins into the machine and press the right button, your drink of choice will come out of the machine ready for you to pick it up. Sure, the bottle might get stuck or the machine might malfunction and just "eat" your money, but you have used vending machines often enough and think that this specific machine is "trustworthy enough" to give it a try. On the other hand, if you have had only bad experiences with machines from that manufacturer, you do not "trust" that specific machine either. There is nothing inherently human, or organic, or whatever you might call it about "trust". It is just an evaluation of "With what probability will my goal be fulfilled by choosing this action?" (out of the model of reality) and "Is that probability good enough?" (willingness to take risks).
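A bare-bones sketch of that framing (all numbers invented): "trust" enters only as an estimated probability of success inside an expected-utility comparison.

```python
# Trust = learned success probability; the decision is just expected utility.

def expected_utility(p_success, value_success, cost):
    return p_success * value_success - cost

trust_in_machine = 0.95  # estimated from past experience with vending machines
drink_value = 2.5        # how much the agent wants the drink
price = 1.0

use_machine = expected_utility(trust_in_machine, drink_value, price)
walk_away = 0.0          # keep your coins, stay thirsty
print("use the machine" if use_machine > walk_away else "walk away")
```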
@General12th 6 years ago
Well, we're AGIs, and we're certainly capable of trust. But that might be because we recognize each other as equivalent AGIs. The relationship might be different if the human and the AI have different processing powers.
@NessHX 6 years ago
You may have given vital information to an AGI, but it cannot verify its accuracy. Then it might look up your past interactions, sum up all the instances where information you gave it was correct, and decide whether or not it "trusts" you and can act on that information. Basically, trust is a way of using the history of working with another agent to evaluate information that isn't otherwise verifiable. You either trust or don't trust weather reports based on how many times they have failed to provide accurate predictions; unless you set up simulations of your own, you have no other means to verify that information.
@JM-us3fr 6 years ago
All information the AGI receives will be analyzed for validity. "Trust" is essentially the probability that the information is accurate, which can be measured through past experience and evidence. Even so, overall trust isn't even required for this scenario. Really, the AGI merely needs to trust you in this particular instant.
@luck3949 4 years ago
This video made me rethink my entire life, and cured one of my psychological issues. Thanks.
@saiyjin98 4 years ago
Very well explained!
@ChimeraReiax 4 years ago
So we fear AI will attack us because of Capitalism Huh :v
@alexpotts6520 4 years ago
Sort of... we fear a superhuman AI because it's a rational agent and we can't tell whether it will be aligned. Of course, there are powerful, misaligned rational agents in our current economy that, while simultaneously generating a lot of wealth, would create a great deal of damage without oversight. We can't really stop them being rational agents, but we can take away their power, or we can try to align their goals with everyone else's. In broad terms, these two approaches map on fairly well to socialism and liberalism respectively.
@tomh.5750 4 years ago
Ugh, silly. Commies will always find a way to insert their failure of an ideology into anything and everything. Humanity gets wrecked by AI? The CIA designed it, and it did what it did because of capitalism!!!
@LifeLikeSage 5 years ago
"Why would we want A.I. to do bad things" *Because women need sex robots too.* xD
@jackrutledgegoembel5896 5 years ago
I don't get it
@khalilelassaad8070 9 months ago
7:23 “philately will get me nowhere” Wicked wordplay 💀😂🎩👌🏼
@enjoying28 6 years ago
I love that he talks about money in value terms, defining it, all without saying that money is a physical object carrying an imaginary store of value to be exchanged toward a goal.
@TheBeeFart 6 years ago
everybody wants to rule the world~
@aenorist2431 6 years ago
"Get money" is an intermediate goal for nearly all actual goals a human might have, and as such it models them quite well. "Find a romantic partner" is greatly helped by money, as it gives you attractiveness (yes, that's been proven) as well as the time and means to pursue the search. "Health" can be bought - look at that billionaire who is on his third heart or whatever. And the list goes on. Not the main point of the video, I know, but still something I wanted to share a contradicting point of view on.
@aenorist2431 6 years ago
And I brilliantly made a fool of myself by stopping the video to comment before you finished your point. Let that be an example. Also, great minds think alike, which makes this a compliment to me.
@ddegn 6 years ago
I'm glad you replied to your comment. I was wondering which video you had watched. BTW, I'm on my second heart. Cheers
@General12th 6 years ago
I'm a billionaire, but I gave up my heart. I'm typing this from a hospital machine that pumps my blood for me.
@p0t4t0nastick 6 years ago
video production quality - leveled up
@senjinomukae8991 5 years ago
Brilliantly explained
@Lambda_Ovine 3 years ago
I'm not going to lie, one of the reasons I watch your videos is for those glorious sentences like "Generally speaking, most of the time, you cannot achieve your goals if you're dead."