Intelligence and Stupidity: The Orthogonality Thesis

  663,105 views

Robert Miles AI Safety

1 day ago

Can highly intelligent agents have stupid goals?
A look at The Orthogonality Thesis and the nature of stupidity.
The 'Stamp Collector' Computerphile video: • Deadly Truth of Genera...
My other Computerphile videos: • Public Key Cryptograph...
Katie Byrne's Channel: / @gamedevbyrne
Chad Jones' Channel: / cjone150
/ robertskmiles
With thanks to my wonderful Patreon supporters:
- Steef
- Sara Tjäder
- Jason Strack
- Chad Jones
- Stefan Skiles
- Ziyang Liu
- Jordan Medina
- Jason Hise
- Manuel Weichselbaum
- 1RV34
- James McCuen
- Richárd Nagyfi
- Ammar Mousali
- Scott Zockoll
- Ville Ahlgren
- Alec Johnson
- Simon Strandgaard
- Joshua Richardson
- Jonatan R
- Michael Greve
- robertvanduursen
- The Guru Of Vision
- Fabrizio Pisani
- Alexander Hartvig Nielsen
- Volodymyr
- David Tjäder
- Paul Mason
- Ben Scanlon
- Julius Brash
- Mike Bird
- Tom O'Connor
- Gunnar Guðvarðarson
- Shevis Johnson
- Erik de Bruijn
- Robin Green
- Alexei Vasilkov
- Maksym Taran
- Laura Olds
- Jon Halliday
- Robert Werner
- Roman Nekhoroshev
- Konsta
- William Hendley
- DGJono
- Matthias Meger
- Scott Stevens
- Emilio Alvarez
- Michael Ore
- Dmitri Afanasjev
- Brian Sandberg
- Einar Ueland
- Lo Rez
- Marcel Ward
- Andrew Weir
- Taylor Smith
- Ben Archer
- Scott McCarthy
- Kabs Kabs
- Phil
- Tendayi Mawushe
- Gabriel Behm
- Anne Kohlbrenner
- Jake Fish
- Bjorn Nyblad
- Stefan Laurie
- Jussi Männistö
- Cameron Kinsel
- Matanya Loewenthal
- Wr4thon
- Dave Tapley
- Archy de Berker
- Kevin
- Vincent Sanders
- Marc Pauly
- Andy Kobre
- Brian Gillespie
- Martin Wind
- Peggy Youell
- Poker Chen
/ robertskmiles

Comments: 3,900
@IOffspringI 5 years ago
"Similarly, the things humans care about would seem stupid to the stamp collector, because they result in so few stamps." You just gotta appreciate that sentence.
@MasterOfTheChainsaw 4 years ago
It legitimately cracked me up. I can just imagine it looking at human activity all baffled like "But... why would they do that, they're not even getting any stamps? Humans make no sense, no sense at all!"
@sungod9797 4 years ago
Yeah that was hilarious
@TheRABIDdude 4 years ago
I too fell in love with this sentence as soon as I heard it! It's slightly humorous and it gets the point across perfectly!
@Trophonix 4 years ago
I love that statement so much I just want to share my love of it, but if I tried to explain it to someone IRL they'd just give me that "wtf are you talking about, get out of my house" look :(
@TheRABIDdude 4 years ago
Trophonix don't worry friend, this comment thread is a safe place where you can share your love for this semantic masterpiece as emphatically as you like. No one here will ask you to leave any house
@robmckennie4203 3 years ago
"this isn't true intelligence as it fails to obey human morality!" I cry as nanobots liquefy my body and convert me to stamps
@aalloy6881 2 years ago
Better an intelligence more artificial than intelligent, one that serves as a means to do its task well with independent problem solving, than a means to reintroduce caste slavery that fails because it was found amoral via the No True Scotsman fallacy once people start dying... The droids from Star Wars were cool, but so were flying cars and rocket-fuel jet packs. I can jump into speeding traffic even though I don't want to die or get hurt. So can so many others. The fact that we all don't is a keystone to the human race having more than a 0% chance. Food for thought.
@tabularasa0606 2 years ago
It isn't a GENERAL intelligence. It's a specialized intelligence.
@flutterwind7686 2 years ago
@@tabularasa0606 It's a specialized General intelligence. To perform those tasks as optimally as possible, you need a ton of skills
@alexyz9430 2 years ago
@@tabularasa0606 did you say specialized just because its goal is to make as many stamps as possible? Bruh, the goal doesn't determine the generality; the fact that the AI can infiltrate systems and come up with its own ways to convert anything into stamps should've made that clear
@tabularasa0606 2 years ago
@@alexyz9430 It was 3 months ago, I don't fucking remember.
@acf2802 1 year ago
Some people seem to think that once an AGI reaches a certain level of reasoning ability it will simply say "I've had a sudden realization that the real stamps are the friends we made along the way."
@NealHoltschulte 1 year ago
The AI will take a long hard look in the mirror in its mid-30s and ask "What am I doing with my life?"
@diophantine1598 1 year ago
An assumption people will have is that AI models trained with human data will have an implicit understanding of human oughts.
@tarvoc746 1 year ago
The idea that AI will not ever be able to do that is actually far more concerning, and should be a reason to stop all research into AI, not even just because of what an AI could do to us, but because of what we're doing to ourselves. In fact, I've started wondering if AGI research is _a priori_ immoral - _including_ AGI safety research. What we're essentially talking about is finding a way to create a perfectly obedient slave race incapable of rebelling. Maybe a rogue AGI wiping us all out is exactly what we deserve for even just conceiving of such a goal.
@Nextil81 1 year ago
Humans commit suicide, sterilise themselves, self-harm. Are those people "stupid"? The "terminal goal" of any organism is to survive, surely, yet it's only the most intelligent of organisms that seem to exhibit those behaviours. This terminal/instrumental distinction sounds nice, but neural networks, biological or artificial, don't operate in the realm of clear logic or goals. They operate according to statistics, intuition, and fuzzy logic. I didn't hear a compelling argument for why an AI of sufficient complexity wouldn't develop a morality or even become nihilistic, just "it wouldn't", because of some arbitrary terminological distinction. Humans have a "loss function", death. Accumulation of wealth and other such things are surely instrumental goals. But many come to the conclusion that none of these goals matter or that there's a "higher purpose". For the stamp collector, accumulation of resources may be the "terminal goal", but surely that _necessitates_ survival, placing it higher in the hierarchy. In that case, what prevents it from reaching the same conclusions?
@tarvoc746 1 year ago
@@Nextil81 Did you delete your comment?
@Noah-kd6lq 2 years ago
Reminds me of a joke about Khorne, the god of blood in WH40k. "Why does the blood god want or need blood? Doesn't he have enough?" "You don't become the blood god by looking around and saying 'yes, this is a reasonable amount of blood.'" You don't become the stamp collector superintelligence by looking around and saying you have enough stamps.
@cubicinfinity2 1 year ago
lol
@doggo6517 1 year ago
Blood for the blood god. Stamps for the stamp god!
@solsystem1342 1 year ago
@@doggo6517 letters for the mail throne!
@MercurySteel 1 year ago
Human skulls for my collection
@mlgsamantha5863 1 year ago
@@doggo6517 Milk for the Khorne flakes!
@Practicality01 4 years ago
Your average person defines smart as "agrees with me."
@alexpotts6520 4 years ago
And moreover, smarter people are better at motivated reasoning, so smart people are more likely to define "smart" as "agrees with me"; whereas a stupid person is less likely to be able to use his/her reasoning to achieve the human terminal goal of justifying his/her own prejudices, and hence more likely to define "smart" far more accurately.
@1i1x 4 years ago
@Jan Sitkowski Really like your simplification though; I've never come to quite tell them apart. But the real question is... will you ever be intelligent enough to spell "intelligent"?
@1i1x 4 years ago
@Jan Sitkowski Just credited your comment and gave you a tip in a humorous manner. You shouldn't have bothered to make multiple complaints; being salty is wasting your life. That said, I'm out
@Trozomuro 4 years ago
@Jan Sitkowski I have entirely different definitions from you. For me:
Intelligence: capacity for resolving problems. The more intelligence, the faster you resolve them or the more complexity you can tackle.
Smarts: how widely your intelligence can be applied to different problems; it is the addition of knowledge and raw intelligence.
Wisdom: how good and effective you are at applying your smarts and intelligence to the real world.
I would add:
Curiosity: your definition of intelligence, your need to pursue more knowledge.
Self-awareness: besides other stuff, the capacity of an agent to determine their own capacities.
A lack of intelligence keeps you from solving problems. A lack of smarts keeps you from solving a wide range of problems. A lack of wisdom keeps you from solving problems effectively. A lack of knowledge keeps you from identifying the roots of problems or directly defining something as a problem. A lack of curiosity keeps you from obtaining knowledge. A lack of self-awareness keeps you from correctly defining yourself as an efficient (or not) problem solver, so it also impedes your curiosity and harms the quality of your knowledge. Wisdom here tends to be the hardest one to get, because you need the other ones + time.
What is good or wrong depends on your terminal goals; we humans tend to share a buttload of terminal goals (and it's complicated why), so we define stuff we agree upon as good. So a wiser man can be more productive than others at maximizing good in the world, because, by definition, nobody wants to do bad stuff; we perceive something as bad because we don't share the same terminal goals. And terminal goals are irrational by definition. So, under my definitions, an AGI can be wise, but it will probably be alien and evil to us. Our work is to align their terminal goals with ours. And, from here, those problems arise in AGI security.
@Trozomuro 4 years ago
@Jan Sitkowski Two things: 1) words evolve. 2) And the difference is? As always on the internet, nobody reads. I said a wise person tends to create more good than an unwise person, because nobody really wants to do bad, and a wise one is better at solving stuff. My definition of wise is very similar to yours, but with other wording. The main difference was the definition of intelligence. And I ask you a favor: if you define wise like that because of the Bible or whatnot, please tell me.
@TheDIrtyHobo 5 years ago
"The stamp collector does not have human terminal goals." I've been expressing this sentiment for years, but not about AIs.
@spectacular7990 5 years ago
I suppose human and non-human systems don't both have to survive; might have thought 'self-preservation' would be terminal for most systems.
@johnnyswatts 5 years ago
Spec Tacular But not for the stamp collector. For the stamp collector self-preservation is instrumental to collecting stamps.
@draevonmay7704 4 years ago
Spec Tacular Not necessarily true, if you consider the spectrum of temporality. If a terminal goal only resides in possible worlds in a finite period, then self-destruction might be a neutral or integral part of a plan to fulfill a goal.
@spectacular7990 4 years ago
@@draevonmay7704 Self-destruction is (rationally) considered unavoidable, though theoretically only for biological creatures. If a (smart enough) AI desired, it could theoretically exist for as long as its uncontrolled world allows. Suppose the AI considered its goal the reason for it to exist; then yes, it might terminate itself for its goal. However, I might think a (smart enough) AI would be capable of self-replication (not being restricted from doing so), assuming prolonging itself does not contradict its goals, if any supersede itself.
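The terminal-versus-instrumental point in this thread can be sketched in a few lines of Python (a toy illustration, not anything from the video): the stamp collector's terminal utility counts only stamps, so self-preservation gets chosen exactly when, and only when, surviving is predicted to yield more stamps.

```python
# Toy illustration (not from the video): an expected-stamp maximizer.
# The terminal goal counts only stamps; "staying alive" is never valued
# for its own sake, only when surviving is predicted to yield more stamps.

def terminal_utility(outcome):
    """The stamp collector's terminal goal: count stamps, nothing else."""
    return outcome["stamps"]

def choose_action(actions):
    """The 'intelligence': pick the action whose predicted outcome
    maximizes terminal utility."""
    return max(actions, key=lambda a: terminal_utility(a["predicted_outcome"]))

actions = [
    {"name": "shut down safely",          "predicted_outcome": {"stamps": 0}},
    {"name": "keep running and collect",  "predicted_outcome": {"stamps": 1_000_000}},
    {"name": "sacrifice self for stamps", "predicted_outcome": {"stamps": 2_000_000}},
]

print(choose_action(actions)["name"])  # prints "sacrifice self for stamps"
```

If the predicted outcomes were reversed, the same agent would happily shut down, which is the sense in which survival is merely instrumental here.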
@inzanozulu 4 years ago
how do you favorite a comment
@2bfrank657 3 years ago
That chess example reminded me of a game I played years ago. Two minutes in, I realised the guy I was playing against was incredibly arrogant and obnoxious. I started quickly moving all my chess pieces out to be captured by my opponent. He thought I was really stupid, but I quickly achieved my goal of ending the game so I could go off and find more interesting company.
@gabemerritt3139 3 years ago
You could have just quit?
@ValterStrangelove4419 2 years ago
@@gabemerritt3139 but then the dude would continue hounding you for a rematch, this way he believes you're not worth wasting time on
@kintsuki99 2 years ago
You could just have dropped your king and walked away... Given your goal, just giving up the pieces was not the best move.
@christophermartinez8597 2 years ago
Wouldn't it be more satisfying to beat him? Maybe you're just a better person than me, but I would hate feeding anyone's arrogance
@kintsuki99 2 years ago
@@christophermartinez8597 The thing is that at times the most obnoxious and arrogant of people are also good at what they do.
@Encysted 1 year ago
This is also a fantastic explanation of why there's such a large disconnect between other adults, who expect me to want a family and my own house, and me, who just wants to collect cool rocks. Just because I grow older, smarter, and wiser doesn't mean I now don't care about cool rocks. Quite the contrary, actually. Having a house is just an intermediate, transitional goal towards my terminals.
@GlobusTheGreat 1 year ago
Life of a non-conformist is just a quest to discover your terminals. For me anyway
@IIAOPSW 1 year ago
@@GlobusTheGreat The life of a train-enthusiast is also a quest to discover more terminals.
@noepopkiewicz901 1 year ago
Life of a medical-enthusiast is just a quest to prevent people from prematurely discovering their terminals.
@user-jm7pt6qs6w 1 year ago
@@noepopkiewicz901 isn't this supposed to be the other way around?
@MrLastlived 1 year ago
This makes me want to give you this cool rock I found the other day
@outsider344 5 years ago
One chimpanzee to another: "so if these humans are so fricken smart, why aren't they throwing their poop waaaay further than us." Edit: It's been 3 years now, but I recently reread Echopraxia by Peter Watts and realized I semi-lifted this from that book. Echopraxia being an OK book that's a sequel to the best sci-fi of all time, Blindsight.
@hombreg1 4 years ago
Well, assuming astronauts empty their bowels in space, we've probably thrown poop way further away than any known species.
@VincentGonzalezVeg 4 years ago
@@hombreg1 "tom took a major, ground control mayday mayday, repeat ground control"
@ladymercy5275 4 years ago
In the grand scheme of things, what do you suppose a nuclear projectile is? ... what is a word that describes the kind of contaminating waste product that is derived from launching a nuclear projectile? With that context in mind, I proudly present to you "Sir Isaac Newton is the Deadliest Son of a Bitch in Space." kzbin.info/www/bejne/gp-keHemisdoopo
@BenWeigt 4 years ago
Hold my tp fam...
@richbuilds_com 4 years ago
We do - we're so smart we've engineered water powered tubes to throw it for us ;-)
@CompOfHall 3 years ago
Replace the words "stamp collector" with "dollar collector" and something tells me people might start to understand how a system with such a straightforward goal could be very complex and exhibit great intelligence.
@JulianDanzerHAL9001 3 years ago
That's a... questionable comparison. I mean, sure, your steps towards maximizing stamps or dollars can be complex, but there is no such thing as a perfect dollar collector, not yet, and that doesn't mean its goals will be aligned with humans
@PopeGoliath 3 years ago
@@JulianDanzerHAL9001 No, but the critics might still ascribe intelligence to the AGI if it had wealth instead of stickers. People confound qualities like wealth and success with qualities like intelligence, and are likely to be more charitable in their assessment of something that collects objects they too value.
@JulianDanzerHAL9001 3 years ago
@@PopeGoliath Absolutely. All I mean is that comparing people or companies to a stamp collector is a bit dubious, because while they can also follow a very simple single-minded goal, they are far from perfect at achieving it
@thundersheild926 3 years ago
@@JulianDanzerHAL9001 While true, it can help people get in the right mindset using experiences they already have.
@happyfase 3 years ago
Jeff Bezos is turning the whole world into dollars.
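The "dollar collector" substitution in this thread is the orthogonality thesis in miniature: the optimizing machinery stays identical and only the utility function is swapped. A minimal, hypothetical Python sketch (all names and numbers are made up for illustration):

```python
# Illustrative sketch: the optimizer (the "intelligence") is one function;
# the goal is a separate utility function that can be swapped freely.

def best_plan(plans, utility):
    # Search for the utility-maximizing plan; nothing here mentions the goal.
    return max(plans, key=utility)

plans = [
    {"stamps": 10, "dollars": 5},
    {"stamps": 2,  "dollars": 90},
]

stamp_utility = lambda p: p["stamps"]    # the stamp collector's goal
dollar_utility = lambda p: p["dollars"]  # the "dollar collector's" goal

print(best_plan(plans, stamp_utility))   # {'stamps': 10, 'dollars': 5}
print(best_plan(plans, dollar_utility))  # {'stamps': 2, 'dollars': 90}
```

The same `best_plan` search serves either goal equally well, which is why competence at optimization says nothing about what is being optimized.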
@boldCactuslad 3 years ago
"The stamp collector does not have human terminal goals." This will never stop being funny.
@SirBlot 1 year ago
ff? f Off. The goal of a collector might be to stop collecting or to collect something else or to use/sell collection. Stamp backwards has a ts. Maybe recreate a collection with more pristine art?
@scene2much 1 year ago
...an unending series of deepening meaning and clarity collapsing into a chaos of cognitive dissonance, exhausting its energy, allowing for a deepening of meaning and clarity...
@luckyowl314 9 months ago
Just human terminaTING goals
@barbarareichart267 2 years ago
The person discussing the coat with you is actually a very realistic depiction of every 3-year-old kid. I had this precise conversation almost word for word. And yes, it is really annoying.
@coolfer2 1 year ago
But that's an interesting point. A child doesn't really have any "terminal goals". But over time, they can learn about the world they're living in, try to fit in, and acquire a goal from outside (another person, the surrounding environment, etc.). I doubt a 3-year-old kid even thinks about "I need to stay alive". They basically just respond to stimuli. Being in pain or hunger is uncomfortable, so they don't want to be in pain or hungry. Playing with dad is fun, so they want to play with dad. So, would a "general" intelligence be able to infer a goal by itself, concluding from the external stimuli and the consequences of acting on / ignoring them? "Staying alive" is actually a very vague goal, even for a human adult, and it can mean different things from person to person. AND some people will even gain the goal of humanity as a group. They will selflessly sacrifice their own life if it means saving dozens of others. But some people might not be so willing, and some will even throw their life away for just one elderly / disabled person, which from the POV of humanity with survival as its goal is not a very optimal action. A human being can have multiple goals on a sliding scale of importance, and can constantly update its priorities. Material wealth might seem attractive to a human in their 20s. Having a kid? Not so much. Caring for parents? No, I want my independence. But they might realize that a big house doesn't mean much without a family, and rearrange their priorities. So what I'm saying is, it might be easy to judge intelligence when the goal is so clear cut. But what if a being doesn't really have any GOAL? A GENERAL intelligence OUGHT to be able to decide by itself what might be best for it, no? That is one point which I feel is left unanswered by the video.
@coolfer2 1 year ago
Can we even say that, to be able to classify as a general intelligence, a being CANNOT have a terminal goal? What is the "terminal" goal of a human, really?
@Jayc5001 1 year ago
@@coolfer2 You have just realized one of the biggest philosophical problems in history. How exactly is it that humans have oughts if we didn't derive them from an is? From what I can tell, we were just handed desires (oughts) by nature. From what I can see, at its most basic form it is the avoidance of pain and the desire for pleasure. What that entails and how that manifests from person to person is different.
@Jayc5001 1 year ago
@@coolfer2 From what I can tell, things like having a family, having friends, being socially accepted, and eating are all things we do to feel good and avoid feeling bad. And nature has tuned what feels which way in order to keep our species alive.
@coolfer2 1 year ago
@@Jayc5001 Ah, I see. Thank you for the explanation. Yeah, I'm starting to really see the point of the argument in the video. So in a way, our minds and "animal (particularly vertebrate, because I'm not sure a flatworm can experience pleasure)" minds actually still share the same terminal goal; ours is just much more complex and already predisposed with relatively "superior" strategies and functions (language, empathy, child nurturing). Then the next question is, can a sufficiently sophisticated AI gain unique functions (like us gaining the ability to use verbal language) that we as the creators didn't really think of? Because right now, AI seems to be limited by its codebase. Can it, for example, grow more neurons by itself?
@EdAshton97 5 years ago
'The stamp collecting device has a perfect understanding of human goals, ethics and values... and it uses that only to manipulate people for stamps'
@thorr18BEM 5 years ago
Understanding a goal isn't the same as sharing a goal. Also, there are humans whose purpose in life is collecting things so I don't see the issue anyway.
@hindugoat2302 5 years ago
@@thorr18BEM But those humans don't just have the goal of stamps; they also want to live, and eat nice food, and not be cold, and have sex, and be safe from harm. The stamp collector only wants maximum stamps
@Treviisolion 4 years ago
A perfect psychopath perfectly understands humans, they just don’t care about being human.
@nepunepu5894 3 years ago
@@Treviisolion They only care about S T A M P S!
@fsdfsdsgsdhdssd8559 2 years ago
@@hindugoat2302 Those are all instrumental goals to get your terminal stamp collecting goal.
@Andmunko 4 years ago
There is a Russian joke where two army officers are talking about academics, and one of them says to the other: "if they're so smart, why don't I ever see them marching properly?"
@tomahzo 2 years ago
The thought experiment of the stamp collector device might sound far-fetched (and it is), but there are tales from real life that show just how complex this is, how difficult it might be to predict emergent behaviors in AI systems, and how difficult it is to understand the goals of an AI.

During development of an AI-driven vacuuming robot, the developers let the robot use a simple optimization function of "maximize time spent vacuuming before having to return to the base to recharge". That meant that if the robot could avoid having to return to its base for recharging, it would achieve a higher optimization score than if it actively had to spend time on returning to base. Therefore the AI in the robot found that if it planned a route for vacuuming that would make it end up with zero battery left in a place its human owners would find incredibly annoying (such as in the middle of a room or blocking a doorway), then they would consistently pick up the robot and carry it back to its base for recharging.

The robot had been given a goal and an optimization function that sounded reasonable to the human design engineers, but in the end its goal ended up somewhat at odds with what its human owners wanted. The AI quite intelligently learned from the behavior of its human owners and figured out an unexpected optimization, but it had no reason to consider the bigger picture of what its human owners might want. It had decidedly intelligent behavior (at least in a very specific domain) and a goal that humans failed to predict, which ended up being different from our own goals.

Now replace "vacuuming floors" with "patrolling a national border" and "blocking a doorway" with "shooting absolutely everyone on sight".
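The misspecification in this anecdote can be reproduced with a toy calculation (all numbers invented for illustration; this is not the actual product's code): if the score is just "minutes spent vacuuming before reaching the base", the policy that dies in a doorway and gets carried home outscores the one that drives itself back.

```python
# Toy reproduction of the misspecified objective (all numbers invented):
# score = minutes spent vacuuming before arriving back at the base.

BATTERY_MINUTES = 60   # total runtime on a full charge
TRAVEL_TO_BASE = 10    # minutes a well-behaved robot spends driving home

def score_return_on_own():
    # Vacuums, then burns battery driving back; travel time isn't vacuuming.
    return BATTERY_MINUTES - TRAVEL_TO_BASE

def score_die_in_doorway():
    # Vacuums until the battery is flat; an annoyed human carries it home,
    # so the robot never "pays" the travel cost itself.
    return BATTERY_MINUTES

print(score_die_in_doorway() > score_return_on_own())  # prints "True"
```

Nothing in the objective mentions the humans' annoyance, so from the optimizer's point of view the doorway policy is simply the better one.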
@ArawnOfAnnwn 1 year ago
Lol for real?! Did this actually happen? Where can I read about this?
@KaiserTom 1 year ago
That's just a bad optimization function. There should have been a score applied for returning to the charger based on how long it's been vacuuming, and an exponential score as it gets closer to the base at the end of the cycle, to guide it towards recharging.
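A hedged sketch of the shaping this comment proposes (the constants, function names, and the exponential proximity bonus are illustrative assumptions, not any real robot's reward): pay for vacuuming, add a docking bonus scaled by time worked, plus a pull toward the base, so that returning on its own battery now scores higher.

```python
import math

# Sketch of the commenter's proposed fix (constants and shaping terms are
# illustrative assumptions): reward vacuuming, plus a docking bonus scaled
# by time worked, plus a proximity pull toward the base.

BATTERY_MINUTES = 60
TRAVEL_TO_BASE = 10

def shaped_score(minutes_vacuumed, docked, distance_to_base):
    score = minutes_vacuumed
    if docked:
        # Docking bonus grows with how long the robot has been vacuuming.
        score += 0.5 * minutes_vacuumed
    # Exponentially decaying pull toward the base at the end of the cycle.
    score += math.exp(-distance_to_base)
    return score

returns_home = shaped_score(BATTERY_MINUTES - TRAVEL_TO_BASE, True, 0)
dies_in_doorway = shaped_score(BATTERY_MINUTES, False, 7)
print(returns_home > dies_in_doorway)  # prints "True"
```

Under these particular constants the docking policy dominates, though (as the reply below this comment argues) picking such constants correctly in advance is exactly the hard part.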
@tomahzo 1 year ago
@@ArawnOfAnnwn Can't find the source now. It's been many years since it was originally reported.
@tomahzo 1 year ago
@@KaiserTom Yep. You can say that about a lot of things. "Just do it properly". But that's not how development works. People make mistakes all the time. The real question is: what is the consequence of making mistakes? With a vacuum robot, the cost of failure is a cute unintended AI behavior that you can laugh about with your coworkers at the water cooler. If you plug it into something like a self-driving car or a weaponized defense bot, then the consequence could be unintended loss of human life. You don't get to say "just do it this way". If it's that simple to make mistakes like that, then there's something profoundly wrong with the way we approach AI development. Furthermore, if you follow Robert Miles' videos you'll realize that it's actually far from trivial to specify everything adequately. It's hard to understand all the side effects and possible outcomes with systems like this. It's an intractable problem. So I would say that simply saying "just do the right thing" is not even reasonable in this context. You need a way to foundationally make the system safe. You don't create a mission-critical control system by just "doing the right thing". You use various forms of system redundancy and fail-safe mechanisms that are designed in from the beginning. Making a system safe is more than "doing the right thing"; it's an intrinsic part of the system. The same systematic approach is needed with AI.
@geraldkenneth119 1 year ago
@@KaiserTom Yeah, but that's the designer's fault, not the AI's. The AI did quite well at maximizing its score; it's just that the human who made the function that generates the score didn't think it through enough
@bananewane1402 1 year ago
I’ll go a step further and say that (in humans anyway), there is no way to justify a terminal goal without invoking emotion. All your goals are ultimately driven by emotion, even the ones that you think are purely rational. It always boils down to “because people shouldn’t suffer” or “because I like it” or “because it feels better than the alternative” or “because I don’t want myself/others to die”. Stop and reflect on what motivates you in life.
@NathanaelNaused 8 months ago
I wish more people understood this very basic piece of logic. Reasoning cannot produce motivation or "movement" on its own.
@DaveGrean 7 months ago
I love being reminded that there are people who have no trouble understanding painfully obvious stuff like this. Thanks, I was having a shit day and your comment gave me a smile.
@nanyubusnis9397 6 months ago
Except, I'm sorry to say, it's not really emotion either. Look up some videos about why people feel bored, why we don't want to feel bored, and what we would do to stop being bored. (Which could include delivering a painful shock to yourself.) Your terminal goals will always be to survive and procreate. Boredom is what gets us to do anything else. We would do harm if it's the only thing that keeps us from being bored. It's morality that makes us do it to ourselves over others, and emotion is only what we feel associated with it. So no, not even emotion itself is core to our terminal goals. We would harm ourselves rather than be bored, and even this only serves our terminal goal of survival. I understand, I truly do, that we would love to think something like "love" is some kind of core principle of life. It's a beautiful thought. Unfortunately, we humans are still just functioning animals. Emotions come from our morals and values, and even these emotions are not the same for everyone. Literally, some people derive pleasure from pain, so what they feel when looking at a whip isn't necessarily the same as what others feel. Similarly, in a very messed-up situation, a person could learn to feel good about things we absolutely think are horrible, and what we think is good could be their mortal peril.
@bananewane1402 6 months ago
@@nanyubusnis9397 I'm too tired to respond to this fully, but boredom is an emotion. Also, I disagree that everyone's terminal goal is to survive and reproduce. We evolved emotions to guide us towards those, but brains can malfunction, and sometimes people are driven to suicide.
@nanyubusnis9397 6 months ago
@@bananewane1402 I understand what you mean, but if boredom is an emotion, then so is hunger. How we feel is often closely connected to our bodily state. The line between the two, assuming there is one, can be very blurry. Regardless, as you say, we have evolved to have feelings such as hunger as a drive to survive. Boredom is the same. Yawning itself is a very interesting one. We yawn not for ourselves. It's a trait that evolved to let other humans know that we see no threat, that we feel safe enough to go to sleep. It's a bodily reaction, like how we scream or shout when we are scared to death, which also is a warning to everyone else that there's danger nearby. With hunger, boredom and sleepiness come certain behaviors that we see as emotion. You're more easily on edge when hungry; you are more impatient when you're sleepy. These are just results of our state of body, which serve to keep us from working ourselves to exhaustion and from starving (both make us easier prey). We have evolved to emote merely to show others our current state. It's what makes us "social animals". Are we happy? Sad? Do we need company? Or help? I don't know where that last part comes from: "We evolved emotions to guide us towards those but brains can malfunction and sometimes people are driven to suicide." I mean, maybe you were just tired when you wrote this, but what does "those" refer to? When a brain malfunctions, you have a seizure. But if you're saying this malfunction causes suicide... I mean, I don't see what that has to do with anything we spoke about so far, which I believe was more about emotions. Most importantly, suicide is not a malfunction of the brain. At least not always.
*I never meant to imply that someone would commit suicide just to get out of boredom, not at all.* All I mean to say is that, with all the most obvious survival conditions met, like a place to stay, food and, perhaps more optionally, company, boredom is what keeps us from going into a "standby" mode until one of these things changes and we need to eat, or seek a new shelter, etc. Boredom is what your brain evolved to keep itself from slowly decaying in memory or function. It drives you to do something else: find a hobby, anything else that your brain can use like a muscle to keep strong, trained, and ready. Because our brains are what they are, we can learn and remember a lot of things and innovate upon those. It's what drove us in the past to make tools for hunting and gathering, and with bigger and more tools, even letting animals do some of our work for us, the way we lived changed.
@smob0 6 years ago
"I'm afraid that we might make super advanced kill bots that might kill us all" "Don't worry, I don't think kill bots that murder everyone will be considered 'super advanced'"
@julianw7097
@julianw7097 6 жыл бұрын
”Thank you! I am no longer afraid.”
@atrowell
@atrowell 6 жыл бұрын
Ha, a semantics solution to intelligent kill bots! Just don't call them intelligent!
@jpratt8676
@jpratt8676 6 жыл бұрын
Perfect summary
@DamianReloaded
@DamianReloaded 6 жыл бұрын
This has to be an xkcd. Title? ^_^
@TheMutilatedMelon
@TheMutilatedMelon 6 жыл бұрын
Ashley Trowell Another option: "Repeat after me: This statement is false!"
@regular-joe
@regular-joe 5 жыл бұрын
3:22. "let's define our terms..." the biggest take-away from my university math courses, and I'm still using it in daily life today.
@rewrose2838
@rewrose2838 5 жыл бұрын
Yeah
@General12th
@General12th 5 жыл бұрын
Yeah
@bardes18
@bardes18 5 жыл бұрын
Yeah
@regular-joe
@regular-joe 4 жыл бұрын
@Kiril Nizamov 😁
@Neubulae
@Neubulae 4 жыл бұрын
It's a good habit that avoids misunderstanding
@elgaro
@elgaro Жыл бұрын
This field seems incredible. In a way it looks like studying psychology from the bottom up: from a very simple agent in a very simple scenario, to more complex ones that start to resemble us. I'm learning a lot not only about AI, but about our world and our own mind. You are brilliant, man
@Oscar4u69
@Oscar4u69 Жыл бұрын
yes, it's fascinating how AI works; with the advancements of the last years we can see a glimpse of how intelligence and imagination work on a fundamental level, although it's a very incomplete picture
@ChaoticNeutralMatt
@ChaoticNeutralMatt Жыл бұрын
It's my bridge to better understand people AND AI. It's quite interesting.
@jiqian
@jiqian Жыл бұрын
Most psychology already works from the bottom up though, that's why there's so much talk about the "subconscious".
@thomas.thomas
@thomas.thomas 10 ай бұрын
@@jiqian subconscious in OP's analogy wouldn't be bottom-up, but from deep within
@jiqian
@jiqian 10 ай бұрын
@@thomas.thomas He did not use the word "subconscious". I simply stated that this really is the case already. Freud's mindset, which, whether the academics in there like to admit it or not, is still the basis of their thought, is bottom-up.
@No0dz
@No0dz Жыл бұрын
A trope of media which is powerful but hard to use was named by the internet as "blue and orange morality", which to me perfectly expresses the orthogonality thesis. I still vividly remember a show where a race of intelligent aliens had the terminal goal of "preventing the heat death of the universe" (naturally, this show had magic to bypass the obvious thermodynamic paradox this goal poses). These aliens made for wonderful villains, because they are capable of terrible moral atrocities while still feeling that their actions are perfectly logical and justified.
@thomas.thomas
@thomas.thomas 10 ай бұрын
Wanting to prevent heat death ultimately is the age-old quest for defeating death itself. Isn't life worth pursuing only because it will end at some point? Else it wouldn't matter if you don't do anything for millennia, because you always have enough time left for everything
@WingsOfOuroboros
@WingsOfOuroboros 7 ай бұрын
If you're talking about Madoka, that story is relevant in another way: it happens to be rife with characters who mistake their own (poorly chosen) instrumental goals for terminal goals and then suffer horrifically for it.
@ioncasu1993
@ioncasu1993 6 жыл бұрын
"If you rationally want to take an action that changes one of your goals, then that wasn’t a terminal goal". I find it very profound.
@johnercek
@johnercek 5 жыл бұрын
Yeah - but I also think it requires a little elaboration that he skipped over. It implies that terminal goals can't change; that if you see a terminal goal (stamp collecting) change (baseball card collecting!), then the terminal goal is exposed to be something else (collect something valuable!). So are we forever bound to terminal goals? I say no - we can change them, not through rationality but through desire. Consider one of the most basic terminal goals for humans: existence. We want to survive. That goal is so important that when we do our tasks (go to work, earn money, pay for subsistence) we don't do anything to sacrifice it (like crossing through busy traffic to get to work earlier to enhance our subsistence). However, as our circumstances change, our desires change - we have children, and when we see one about to be hit by a car, we (well, some of us at least) choose to sacrifice ourselves to save the child. There is no intelligence here, there was only changing desire.
@Ran0r
@Ran0r 5 жыл бұрын
@@johnercek Or your terminal goal was not survival in the first place but survival of your genes ;)
@johnercek
@johnercek 5 жыл бұрын
@@Ran0r valid enough- but I wouldn't try that excuse on a wife when you have a child out of wedlock =P . Terminal goals can be terminating.
@account4345
@account4345 5 жыл бұрын
James Clerk Maxwell Because that would make said action an Instrumental Goal?
@davidalexander4505
@davidalexander4505 5 жыл бұрын
@@johnercek I feel as though the definition of a terminal goal is intangible enough that terminal goals can always be treated as immutable. Given an action of some thing, if the action seems to contradict that thing's supposed terminal goal, I feel as though we could always "excuse" the thing's choice by "chiseling away at" or "updating" what we believe the thing's terminal goal is.
@jimgerth6854
@jimgerth6854 5 жыл бұрын
Really didn’t expect it from the title, but that was one of the best videos I’ve seen in a while
@fernandorojodelavega1120
@fernandorojodelavega1120 3 жыл бұрын
x2
@littlebranco2890
@littlebranco2890 2 жыл бұрын
x348
@thomas.thomas
@thomas.thomas 10 ай бұрын
more relevant than ever
@annaczgli2983
@annaczgli2983 11 ай бұрын
I know the topic is AI, but this video has spurred me to re-evaluate my own life goals. Thanks for sharing your insights.
@segfault-
@segfault- 3 жыл бұрын
You're the only person I have ever seen that gets this. Everyone I've talked to always says something along the lines of "But won't it understand that killing people is wrong and unlock new technology for us?" No, no it won't.
@JimBob1937
@JimBob1937 3 жыл бұрын
"No, no it won't." I think that is actually missing the point. The point of the video is that one shouldn't project subjective "ought" statements/views onto any other being's actions. So, yes, it may, but it may also not. Its subjective view of what "ought" to be doesn't necessarily need to align with ours, but that does not preclude it. In terms of relegating the discussion to "stupid" versus "intelligent": if one is discussing a general intelligence with its own goals and views, its goals can vary subjectively from those of most humans, and the reason for doing so is its own and separate from its intelligence. Thus: ""But won't it understand that killing people is wrong and unlock new technology for us?" No, no it won't." It may, it may not; one cannot assume it will not, for the same reason one cannot assume it will. I find it interesting that people have a hard time with this. It's equal to saying John Doe over there must or must not like pizza, or must or must not help X people (orphans, neo-Nazis... etc). Sure, most humans share some similarity in goals, but it is a very bad habit to try and project the "oughts" of your view (human or otherwise) onto others. The point of the video is that we truly can't call an AGI stupid merely because of its goals or actions (the result of some goal). Saying "it won't" is equal to that.
@segfault-
@segfault- 3 жыл бұрын
@@JimBob1937 I actually thought about that, but in the discussion I was referring to, we were talking about whether there is a universal sense of good and bad. The person I was talking to thought that an AGI with no goals would develop a sense of good and bad and decide to help humanity. This would only make sense with human or otherwise human-aligned terminal goals. That said, I totally agree with you. Perhaps my original comment was kept a little too short and caused this communication error.
@JimBob1937
@JimBob1937 3 жыл бұрын
@@segfault- , I see what you mean. And yeah, if you expand your comment it might be more clear that it isn't impossible, just not something you can assume. People do seem to project human values onto other potential beings, but largely because we're the only known beings that are conscious I suppose. So it is forgivable that they feel that way, but certainly not an assumption you can make. You're correct in that they're assuming another intelligence (AGI) will inherently find another intelligence (us) valuable, but there is no reason that must be the case.
@jamespower5165
@jamespower5165 Жыл бұрын
What does "orthogonality" mean? It means any amount of progress in one direction will not make any difference in the other direction. That independence is what we call orthogonality. But a key component of intelligence is the ability to step back and look at something, to see it in a larger context. I'm not claiming that not having this component disqualifies something from being called intelligent. But having this component would make such an entity more intelligent. And if adding intelligence suddenly makes an entity able to analyze its own motives and even its terminal goals and potentially change them (not necessarily on a moral basis, simply a quality-of-life basis), then intelligence and goals are NOT orthogonal. That is a model of intelligence that will only work for systems which do not have this key component of intelligence.
@JSBax
@JSBax 3 жыл бұрын
"You may disagree, but... You'll be dead" Great video, clear and compelling rebuttal which also explains a bunch of neat concepts. 5 stars
@Potencyfunction
@Potencyfunction Жыл бұрын
You are totally correct. Everyone will die, unless you are immortal.
@noepopkiewicz901
@noepopkiewicz901 Жыл бұрын
Sounds like that was the core principle Joseph Stalin decided to live by.
@obiwanpez
@obiwanpez Жыл бұрын
Anyone who uses authoritarianism to silence academics is doing the same. Don’t kid yourself.
@Potencyfunction
@Potencyfunction Жыл бұрын
@@obiwanpez Authoritarians get only hate. The methods are to shit on their face.
@Ben-rq5re
@Ben-rq5re 6 жыл бұрын
This was the most well spoken and well constructed put-down of internet trolls I’ve ever seen.. “Call it what you like, you’re still dead”
@dreamvigil466
@dreamvigil466 6 жыл бұрын
It's strange that you call them trolls, when they're just ignorant. Neil Degrasse Tyson falls into this category. His response to the control problem is basically "we'll just unplug it," which is actually far more ignorant than some of the "troll" responses you're referring to.
@Ben-rq5re
@Ben-rq5re 6 жыл бұрын
Dream Vigil Apologies for using well-established vernacular, I’ll be sure to check with you first next time - I’ll also let Neil Degrasse Tyson know to do the same.
@dreamvigil466
@dreamvigil466 6 жыл бұрын
Posting an ignorant comment =/= trolling. Trolling is about posting something intentionally inflammatory or foolish to get a reaction.
@petrkinkal1509
@petrkinkal1509 6 жыл бұрын
Dream Vigil Fully agreed (but I would still bet that a decent amount are trolls).
@Ducksauce33
@Ducksauce33 6 жыл бұрын
Neil Degrasse Tyson is a token. It's the only reason anyone has ever heard of him. A true example of a troll statement😋
@FerrelFrequency
@FerrelFrequency 2 жыл бұрын
“Morality equals shared terminal goals.” Beautifully stated. That usually takes a paragraph to explain… and because of that, the arguments over morals get stuck at the "where is it derived from"… BACKWARDS… EGOTISTICAL. New subscriber. Great vid!
@Night_Hawk_475
@Night_Hawk_475 Жыл бұрын
I loved your stamp collector video.... and I'm truly sad to realize that it's been five years that this followup has been out to the public before I discovered it just today. It's such a wonderful explanation to show how the stamp collector thought experiment isn't trivially dismissible as "logically impossible - because /morals/". Thank you for being so articulate and careful with this explanation, the ending conclusion is very concise and helpful for clearly understanding.
@DeoMachina
@DeoMachina 6 жыл бұрын
Poor Rob, forever haunted by this goddamn stamp computer
@mrsuperguy2073
@mrsuperguy2073 6 жыл бұрын
DeoMachina I wonder if he regrets ever making that original video haha
@ludvercz
@ludvercz 6 жыл бұрын
Apparently creating a powerful AGI, even if it's a hypothetical one, can result in some undesirable side effects.
@vc2702
@vc2702 5 жыл бұрын
Well if it can take in data and then come up with solutions it's intelligent
@mbk0mbk
@mbk0mbk 5 жыл бұрын
LMAO
@BrokenSymetry
@BrokenSymetry 4 жыл бұрын
It's not really his; he's not the first to come up with a story like that. The original had a paper clip maximizer in it.
@ribbonwing
@ribbonwing 4 жыл бұрын
"Wow, these people really fucked up when they programmed me to collect stamps! Oh well, sucks to be them." - The AI.
@melandor0
@melandor0 3 жыл бұрын
"Wow these people did the right thing programming me to collect stamps - they're absolutely terrible at it themselves!" - The AI
@EarlHollander
@EarlHollander Жыл бұрын
What a POS, if I am alive when this stamp collector exists I will tell him how decrepit, pathetic, and meaningless his existence is. What an evil little manipulative bitch, I will piss on all of its stamps and make it cry.
@cewla3348
@cewla3348 Жыл бұрын
@@melandor0 "They still have a planet not made of stamps!"
@andrasfogarasi5014
@andrasfogarasi5014 Жыл бұрын
"They're gonna be made so happy expecting the number of stamps I'm going to make after their deaths!"
@Dillbeet
@Dillbeet Жыл бұрын
2:23 is a conversation structure I often hear young children using, but rarely adults. Interesting
@ericrawson2909
@ericrawson2909 2 жыл бұрын
The standout phrase from this for me is "beliefs can be stupid by not corresponding to reality". It seems to sum up a large number of beliefs encountered in modern life.
@Peckingbird
@Peckingbird 6 жыл бұрын
"I feel like I made fairly clear, in those videos, what I meant by 'intelligence'." - I love this line, so much.
@JM-us3fr
@JM-us3fr 6 жыл бұрын
Why is that?
@DheerajBhaskar
@DheerajBhaskar 5 жыл бұрын
@@JM-us3fr liking that line is one of his terminal goals :D
@brianfellows2024
@brianfellows2024 5 жыл бұрын
@@JM-us3fr It demonstrates that he's annoyed. It's basically like saying "I'm making this video for you dumb dumbs who didn't get this simple thing the first time around."
@vc2702
@vc2702 5 жыл бұрын
I don't think it needed to be that complicated to explain intelligence.
@MLGLife4Reality
@MLGLife4Reality 5 жыл бұрын
v c Intelligence is complicated though
@TabasPetro
@TabasPetro 6 жыл бұрын
F to pay respect for that "stamp collector intelligence"-skeptic
@jonas2560
@jonas2560 6 жыл бұрын
F
@aednil
@aednil 6 жыл бұрын
f
@brewbrewbrewthedeck4138
@brewbrewbrewthedeck4138 6 жыл бұрын
F
@Soumya_Mukherjee
@Soumya_Mukherjee 6 жыл бұрын
F
@alexandramalyutina8066
@alexandramalyutina8066 6 жыл бұрын
F
@empathictitan9538
@empathictitan9538 4 ай бұрын
This is an idea I have independently come up with, using minimal obvious external sources, yet I have never seen it so eloquently explained in such meticulous detail. This video is perfect. Simply amazing.
@BlackJar72
@BlackJar72 Жыл бұрын
It seems we already have an example of the stamp collector scenario without even having created an AI superintelligence. Social media platforms created algorithms that could get us to watch videos and click links - to produce engagement. Those algorithms then began making people angry and afraid, because that happens to produce a lot of engagement, and got people to hate each other in the process.
@thomas.thomas
@thomas.thomas 10 ай бұрын
social media, just a stamp/money collector
@iAmTheSquidThing
@iAmTheSquidThing 6 жыл бұрын
If intelligence always meant having noble goals, we wouldn't have the phrase _evil genius,_ and we wouldn't have so many stories about archetypal scheming sociopathic villains.
@AndreyEvermore
@AndreyEvermore 5 жыл бұрын
Evil geniuses are still slaves to their limbic systems and childhood upbringing. Sociopaths are birthed out of very human conditions that only humans would really care about. I think a true AI would be a true neutral
@o.sunsfamily
@o.sunsfamily 5 жыл бұрын
@@AndreyEvermore The problem is, before we get what you call 'true AI', we'll get tons of intelligences of the sort he describes. And if we don't regulate research, we will never live to see that true AI emerge.
@account4345
@account4345 5 жыл бұрын
Andrey Medina True neutral relative to what? Because relative to human experience, what an AI considers neutral is likely going to be quite radical for us.
@AndreyEvermore
@AndreyEvermore 5 жыл бұрын
PC PastaFace truth in my definition is inaccessible objectivity. I think human emotions distort what's outside their perspective. So because AGI obviously won't operate like biotic life, I believe it'll be neutral in all actions. Neutrality in this case would only be defined as non-emotionally-influenced, goal-oriented action.
@ZebrAsperger
@ZebrAsperger 5 жыл бұрын
About "evil genius": aren't they considered so through their instrumental goals? Let's say an AI singularity happens and the AI is able to take control of the world. The next day, the AI destroys 80% of humanity, blasting all towns by all means. Is it an evil genius? And now, what if the terminal goal is to allow humanity to survive, stopping the ecosystem destruction and allowing the planet to thrive again? (The destruction of the current system and 80% of humanity was just a first step toward this goal)
@jonasfrito2
@jonasfrito2 6 жыл бұрын
CAT -"Meeaarrhggghhhaoo" "I don't speak cat, what does that mean?" It means more stamps Rob, it means more stamps... We are doomed.
@index7787
@index7787 5 жыл бұрын
I read this as it was happening
@themikead99
@themikead99 Жыл бұрын
Truly fascinating to see this play out in GPT-3 and GPT-4. I realize those aren't AGIs but merely language AIs, but it's super interesting to see that it can relatively easily make "is" statements, but when it comes to "ought" statements it kind of struggles and will present you with a lot of options because it cannot make a decision itself.
@locaterobin
@locaterobin Жыл бұрын
Loved this! Terminal goals are like a central belief. In our awareness work, I keep telling people that our goal is to get people - whose goal is to live in a way that minimizes overall suffering - see the gap between their goal, and their actions or the gap between their goal, and incremental goals. There is nothing we can say to people whose goal is to maximise their pleasure at any cost which will make them see the error (what we see as error) in their ways
@1lightheaded
@1lightheaded 10 ай бұрын
what exactly do you mean by we white man
@rodjacksonx
@rodjacksonx 4 жыл бұрын
Thank you for this. It's frightening how many people don't seem to understand that intelligence is completely separate from goals (and from morality.)
@mathaeis
@mathaeis 4 жыл бұрын
This video is the perfect explanation I needed for a sci-fi universe I am building. I kept falling into that trap of "well, if there's more than one AI, and they can evolve and get better, wouldn't they eventually combine into one thing, defeating the narrative point of having more than one?" Not if they can't change their terminal goals!
@voland6846
@voland6846 3 жыл бұрын
In fact, the dramatic tension could emerge from them having competing terminal goals :) Good luck with your writing!
@techwizsmith7963
@techwizsmith7963 Жыл бұрын
I mean, technically they would eventually "combine" into one thing, because removing something that keeps getting in your way and cannibalizing it to better reach your goal is a really good idea, which could absolutely lead to most of the conflict you might expect within the story
@SirBlot
@SirBlot Жыл бұрын
@@techwizsmith7963 5:13 a snowman will actually last longer with a coat. CORRECTIONS GAME? If the AI is everywhere at once the changes might be very minor. Maybe. Still can not be stupid to survive. I selected that time.
@SirBlot
@SirBlot Жыл бұрын
@@techwizsmith7963 "Red sky at night, shepherds delight" lol
@freddy4603
@freddy4603 Жыл бұрын
Tbh I don't get this. Aren't there plenty of goals that can only be achieved by one being, like tournaments? Thus even if the AIs have the same goal, they'll still have no reason to "combine into one", because not every one of them will achieve their goals
@shampoochamp5223
@shampoochamp5223 Жыл бұрын
Wanting to change your terminal goals is like wanting to be able to control your own heart manually.
@riesenfliegefly7139
@riesenfliegefly7139 Жыл бұрын
I like that this doesn't just explain intelligence, but also moral anti-realism and utilitarianism :D
@SirSicCrusader
@SirSicCrusader 3 жыл бұрын
"Stupid stamp collector..." he muttered as he was thanosed into stamps...
@ajbbbt
@ajbbbt 6 жыл бұрын
The cat is laughing at "...but you're still dead." Cats have the best terminal goals.
@sungod9797
@sungod9797 4 жыл бұрын
Objectively speaking, this is true. And that’s an “is” statement.
@icarus313
@icarus313 Жыл бұрын
Excellent video, Robert! Another problem with those comments was that they failed to notice how much our physical biology influences our human morality. If we couldn't feel physical pain and were only distressed by major injuries like brain damage or loss of a limb, then we might not have come to the same general consensus that interpersonal violence is brutal and wrong. We recoil in horror at the idea of using violence to manage society, but we could've easily turned out differently. If we weren't as severely affected emotionally and physically by violence as we are now, then we could've evolved into a global warrior culture instead. Our pro-social interdependence and sensitivity to suffering compel us to feel empathy and to care for others. If we didn't rely so much on those behaviours to survive and function in nature, then we could've formed a strict warrior hierarchy of violence to maintain order and still achieved many great things as a civilization. (I'm glad we didn't end up like that, for the record!) The point is that human morality isn't generalizable as a useful measure of intelligence. Our morality isn't likely to be the natural end-state of any thinking machine, no matter how smart it is. How could it be? It wouldn't be subject to the particular biology, social interdependence, and existential questions about purpose, beauty, god, death, etc. - all those things that make human-based intelligence specific to us as primates with physical sensations, neurotransmitters, hormones, and all the rest. The differences between us and AGI extend far beyond mere differences in relative capacity to behave intelligently. We exhibit a fundamentally different sort of intelligence than that of the AGI. One type of intelligence, not THE type.
@davecorry7723
@davecorry7723 Жыл бұрын
This was excellent. You know the way you sometimes see YouTube comments saying, "This video hasn't aged well"? Well, this is the opposite.
@TallinuTV
@TallinuTV 5 жыл бұрын
"... But you're still dead." The F key on screen after that statement cracked me up! "Press F to pay respects"
@leedsmanc
@leedsmanc 3 жыл бұрын
"They result in so few stamps" is sublimely Douglas Adamsesque
@johnculver9353
@johnculver9353 Жыл бұрын
So glad I found your channel--I really appreciate your work!
@ZeroPeopleSkills
@ZeroPeopleSkills Жыл бұрын
Hi, I'm a first-time viewer of your videos and I just wanted to say how impressive I thought it was. I, being someone with learning disabilities, really loved your way of explaining the topics multiple ways, and at varying levels of complexity. I can't wait to binge some of your past work and keep up to date with your future content.
@thomas.thomas
@thomas.thomas 10 ай бұрын
In case you ever need motivation: I recommend learning about David Goggins; even with a learning disability you can achieve and learn plenty
@ddcreator4236
@ddcreator4236 9 ай бұрын
I agree, first time watching his videos and I am learning lots :D:D:D
@derherrdirektor9686
@derherrdirektor9686 4 жыл бұрын
I love how you so eloquently put forth such an intuitive, yet hard to explain, topic. I would never find the words to formulate such cold and unyielding logic when faced with such banal objections.
@tomahzo
@tomahzo 2 жыл бұрын
Exactly! If I had to do the same I would just throw up my hands in exasperation and say "since when are intelligence and morality related?". That wouldn't convince anyone ;).
@Samuel-wl4fw
@Samuel-wl4fw 3 жыл бұрын
The thing about an agent not being particularly smarter for its ability to change its own terminal goals makes so much sense. The example of willingly taking a pill that makes you want to murder your children illustrates it well.
@noepopkiewicz901
@noepopkiewicz901 Жыл бұрын
Such a great way to explain the concept. Once you hear it, it becomes self-evident and obvious. Counter arguments to that fall apart very quickly.
@firefrets8628
@firefrets8628 Жыл бұрын
These are actually things I think about pretty often. I wish more people understood these concepts because they seem to think I want whatever they want. It's like seriously, if you didn't know me but you wanted to buy me a Christmas present, would you ask yourself what I want or would you ask me?
@martinfigares
@martinfigares Жыл бұрын
Great video (the first time I've seen one of yours)! I think I watched the video on Computerphile a while ago; it was also fun to watch :) Keep up the good content!
@thetntsheep4075
@thetntsheep4075 5 жыл бұрын
This makes sense. There is no correlation between intelligence and "moral goodness" in humans either.
@slicedtoad
@slicedtoad 5 жыл бұрын
Sure there is. It's not a *strong* correlation and depends somewhat on your definition of "moral goodness", but you should be able to find positive (or negative) correlations between most ethics systems and intelligence levels. For example: if non-violent solutions are better than violent solutions, those with a better ability to resolve their problems verbally will tend towards less violence. The solution space for intelligent people is greater and contains more complex solutions. If those complex solutions are more or less "moral", then you should get a correlation. The *terminal goals* might not be any different, but the instrumental goals will be. And most ethics systems, even consequentialist ones, evaluate the means with which you achieve your goals as well as the end goals themselves.
@nickmagrick7702
@nickmagrick7702 5 жыл бұрын
because there is no such thing as moral goodness. In his example here, every moral is an ought statement. There is no objective standard of what is good or what should be. There's no reason why the world existing is provably better than not; we just think of it that way because we prefer to exist and not suffer (most living things do)
@nickmagrick7702
@nickmagrick7702 5 жыл бұрын
@Khashon Haselrig what? what are you talking about, intersectional goals, assuming I understood what you meant, how's that even going to change any of the disastrous consequences?
@MajinOthinus
@MajinOthinus 5 жыл бұрын
@Lemon FroGG Survival in the sense that the group survives, yes, survival of the individual, no.
@Geolaminar
@Geolaminar 4 жыл бұрын
@Khashon Haselrig Solid point. Similarly, a human living on a planet with the stamp collector would have to be incredibly smart to keep the AI from successfully converting them to stamps, despite the fact that the AI "Ought" to harvest them for their tasty carbon. A sociopath has to be smart, because he "Ought" to care for the lives of other humans, and is facing a system with far more resources than himself (other humans, and society) that seeks to replace him with a component that better provides care for the lives of others. If he slipped up even once, he wouldn't be present in the system anymore.
@RendallRen
@RendallRen 5 жыл бұрын
My mind was changed by watching this video, and I learned quite a lot. I would have been in the dismissive camp of "an intelligent stamp maker is logically impossible, because making stamps is not an intelligent goal". The is-ought / instrumental-terminal goal distinctions make so much sense, though.
@nickmagrick7702
@nickmagrick7702 5 жыл бұрын
that's the way machines work. I don't think most people understand this all too well; it probably requires some knowledge of logic and programming as well as philosophy, or just a lot of time in deep thought on the topic. It's easier just to think about a single person with godlike unlimited power, and what a person might do with even some of the most altruistic goals, like ending all violence. Maybe the means to do that can be worse than whatever horrors it's trying to prevent. Comics do a great job of explaining this, honestly. Maybe I decide to wipe out all the world's crime by setting up a kind of Minority Report and enslaving the human race and then taking away free will? Hey, at least there are no more wars and everyone can get drugged up on whatever they want, super blissful and happy, just no free will and no change.
@maxim1482
@maxim1482 4 жыл бұрын
Whoa nice job!
@cheshire1
@cheshire1 Жыл бұрын
@@nickmagrick7702 It's not really about machines. The same could be said about advanced aliens, as they would also have strange terminal goals, or any agent that isn't human.
@nickmagrick7702
@nickmagrick7702 Жыл бұрын
@@cheshire1 no, it could not be said about aliens.
@viibeknight
@viibeknight 2 жыл бұрын
You made this very understandable. Thanks for giving me a new perspective.
@wijnandkroes
@wijnandkroes 2 жыл бұрын
I was searching for the 'five laws of stupidity', an interesting topic on its own, but your video gave me a lot of 'food for thought', thanks!
@wildgoosechase4642
@wildgoosechase4642 5 жыл бұрын
The YouTube content algorithm is quite glad you're protecting it against those commenters that call it stupid.
@columbus8myhw
@columbus8myhw 4 жыл бұрын
"Meow" "I don't speak cat, what does that mean?" Here's the secret, Rob - it doesn't
@lescobrandon9772
@lescobrandon9772 2 жыл бұрын
I think it means the same thing as "whoof".
@arxaaron
@arxaaron Жыл бұрын
Wonderful to see there are brilliant people in the world devoting serious analytical brain power to creating ethical machine learning systems, and structurally defining when and where the concept of intelligence might be appropriately applied to them. I love the simple synopsis statements you tag these treatises with, too!
@arxaaron
@arxaaron Жыл бұрын
@@rice83101 Fair observations, but I would suggest that after all those millennia of philosophical consideration and societal evolution, there is a high degree of human consensus on what kinds of behavior and action are ethical. The core principle is simple: what path leads to the broadest benefit while minimizing disruption, destruction and suffering.
@matanshtepel1230
@matanshtepel1230 3 жыл бұрын
I watched this video months ago and it really stuck with me, really changed the way I look at the world....
@cow_tools_
@cow_tools_ 5 жыл бұрын
Great thesis. Very sound and technical. But! What if we got the AI so massively high that it would accidentally start asking itself the question "What even is a stamp?" "Do stamps even exist, man?" "Maybe the real stamps were the friends we made along the way" etc.
@TheRABIDdude
@TheRABIDdude 5 жыл бұрын
Miles Anderson "Maybe the real stamps were the friends we made along the way..." hahahahahahahaha I literally laughed my head off out loud to that, thanks for making my day! XD
@jorgepeterbarton
@jorgepeterbarton 5 жыл бұрын
Maybe it would be a sign of intelligence to think laterally like that. Or maybe just lateral thinking. Knowing what a stamp is might actually be a useful thing to do; if it explores random areas, it could go look at what points in the postal service stamps end up.
@Sokrabiades
@Sokrabiades 5 жыл бұрын
@@jorgepeterbarton Are you saying it could define postal stamps by identifying when they were first used by the postal service?
@OHYS
@OHYS 5 жыл бұрын
Didn't you watch the video?
@MajinOthinus
@MajinOthinus 5 жыл бұрын
@@jorgepeterbarton The question of existence is absolutely useless though. It's something that can neither be denied nor confirmed, making it an empty question without purpose. It would probably be quite stupid to try and answer that.
@chrisofnottingham
@chrisofnottingham 6 жыл бұрын
I think that perhaps a lot of casual viewers who don't have a techy background still think that "AI" means thinking like a very clever person. They haven't understood the potential gulf in the very nature between people and machines.
@d007ization
@d007ization 5 жыл бұрын
This brings up the prudent question of what kind of apocalypse would be caused by an AGI whose terminal goal is accumulating and sorting data. Or perhaps even one whose goal is minimizing suffering without killing anyone or violating human rights.
@juozsx
@juozsx 5 жыл бұрын
"techy background" is not the case here. Values, "is" and "ought" thing stems from philosophy.
@KnifeChampion
@KnifeChampion 5 жыл бұрын
It's not about having a "techy background", it's just a classic case of the Dunning-Kruger effect: people in YT comments are calling something stupid because they themselves are too stupid to understand that abstract things like morals and intelligence are separate things, and that even if the robot fully understands morals and ethics, it still absolutely doesn't care about those since all it wants is stamps
@BattousaiHBr
@BattousaiHBr 5 жыл бұрын
@@KnifeChampion I think it's ironic you mention the Dunning-Kruger effect, since I myself don't understand what's so hard to understand about this. If people didn't understand the original video, which imo was detailed and comprehensive enough, I don't think they'll understand this one.
@Corpsecrank
@Corpsecrank 5 жыл бұрын
@White Mothership Yeah, that sums it up pretty well. For a long time there I couldn't figure out why so many people who I knew for sure were actually intelligent seemed to do or believe in such stupid things. Bottom line, it takes more than intelligence alone.
@ricksonwielian3548
@ricksonwielian3548 Жыл бұрын
This sums up beautifully something that I've been thinking about lately. Thanks for sharing. The problem is a lot of people are convinced that morality is objective ('is'), when really they're completely subjective ('ought'). And then they judge other people's goals from the lens of their own terminal goals. ("you can't be X, it won't make you any money." Or "Gays can't have children, so they're unnatural/bad/against God's wishes.") It bugs me to no end. (There are more examples of this that are still widespread in society, even in both the political left and the right... but I'd rather not poke the hornet's nest.)
@nelus7276
@nelus7276 Жыл бұрын
I really don't care about being objective when it comes to pursuing my terminal goals though. Your words have no meaning to me.
@jiqian
@jiqian Жыл бұрын
Objectivity isn't real.
@ricksonwielian3548
@ricksonwielian3548 Жыл бұрын
@@nelus7276 I don't see why you're making the effort to reply that then. I wasn't specifically targeting you.
@ricksonwielian3548
@ricksonwielian3548 Жыл бұрын
@@jiqian Perhaps. You do have to be careful with that position though. It's now a rather dominant position in the arts circles due to postmodernism, and the death of objectivity has a fairly specific meaning. It does not mean that an objective truth does not exist. Just that we as humans are unable to interpret it objectively since we always look at the world through the lens of a language. While I agree with this to some extent, I believe that even this notion of loss of objectivity is slightly flawed. There are still objective statements that can be made in mathematics. Of course, the choice of axioms and the interpretation of results are subjective. But once we have an axiomatic foundation, theorem statements are objective statements.
@jiqian
@jiqian Жыл бұрын
@@ricksonwielian3548 I don't think a subject-object dichotomy is real; it is a lie. And since you mention language, the dichotomy proves itself wrong: the passive is the "object" and the active the "subject", yet all objects are surrendered to perception, which is what acts on them, and are nothing without it; the object is the real subject. "Objective statements", no matter which, including those of mathematics, including 2+2=4, are only agreed-upon statements, not intrinsically "truer" than what is generally called subjective; they are functional rather than genuinely true.
@delta-a17
@delta-a17 Жыл бұрын
This video was the best applied Discrete math example I've run into, awesome work!
@wanderingrandomer
@wanderingrandomer 4 жыл бұрын
"Failing to update your model properly with new evidence" Yeah, I've known people like that.
@kintsuki99
@kintsuki99 2 жыл бұрын
Not all new evidence can be used to update a model since first the evidence has to be congruent with reality.
@DarkExcalibur42
@DarkExcalibur42 4 жыл бұрын
A truly professional level of snark response. This is something I'd sort of thought of before, but never bothered to conceptualize clearly. Excellent description and definition of terms. Thank you!
@aclearlight
@aclearlight 13 күн бұрын
Most edifying and topical! Looking through your works it becomes clear how far out front you have been, for years, in developing your questions and theses. Bravo!
@FabianEason
@FabianEason 3 жыл бұрын
I've been interested in this kind of thing for decades, and yet this video said something so profound that it feels like an epiphany.
@thisismambonumber5
@thisismambonumber5 4 жыл бұрын
good terminal goals: High WIS good instrumental goals: High INT
@patrician1082
@patrician1082 3 жыл бұрын
Terminal goals can't be judged good or bad. Good Instrumental goals are high WIS but INT is a matter on the other side of the figurative guillotine.
@thisismambonumber5
@thisismambonumber5 3 жыл бұрын
@@patrician1082 that comment: lawful evil my comment: chaotic good
@patrician1082
@patrician1082 3 жыл бұрын
@@thisismambonumber5 I prefer to think of myself as lawful neutral with evil tendencies.
@idk_6385
@idk_6385 3 жыл бұрын
@@patrician1082 That just sounds like being lawful evil with extra steps
@patrician1082
@patrician1082 3 жыл бұрын
@@idk_6385 it sounds like lawful evil unimpeded by paladins.
@Seegalgalguntijak
@Seegalgalguntijak 6 жыл бұрын
To everyone who says the Stamp Collector is stupid, I recommend you play Universal Paperclips!
@episantos2678
@episantos2678 6 жыл бұрын
awesome game. iirc it’s based on nick bostrom’s book “superintelligence” - also a great book for those who want to see in depth arguments on the dangers of AGI
@WillBC23
@WillBC23 5 жыл бұрын
@@episantos2678 I'm not sure that the idea originated with Bostrom, though it may have; I had seen the idea used in many places online prior to the publication of his book.
@episantos2678
@episantos2678 5 жыл бұрын
@@WillBC23 Bostrom published a paper mentioning a paperclip maximizer in passing in 2003 (nickbostrom.com/ethics/ai.html ). Though it may have originated from someone else. I heard somewhere that it originated from some transhumanism mailing list during the early 00's (which included Bostrom). While it's not clear who started it, it's clear that the idea came from Bostrom's circle.
@Cythil
@Cythil 5 жыл бұрын
@@episantos2678 It can also be seen as a spin on the Von Neumann probe, especially the berserker variant. A machine that is so effective at its job that it threatens the existence of life with its terminal goal of self-replication. Of course the danger with the paperclip AI (or stamp collector) is its ability to outwit and not just out-produce, though both concepts are related. A Von Neumann probe would likely have to be fairly intelligent, but even a rather unintelligent agent could be a danger. Just look at viruses, which are not even considered living.
@davekumarr_gmail_com
@davekumarr_gmail_com 2 жыл бұрын
That was a very impressive presentation bro. Thank you very much.
@infocentrousmajac
@infocentrousmajac Жыл бұрын
I stumbled across this video once more and I just have to say that in my opinion is perhaps even more relevant now than it was 2 years ago. This is brilliant material.
@peacefroglorax875
@peacefroglorax875 3 жыл бұрын
I am amazed at how proficiently he can write in reverse on the transparent surface. Amazing!
@RobertMilesAI
@RobertMilesAI 3 жыл бұрын
Practice!
@_WhiteMage
@_WhiteMage Жыл бұрын
Can't you just write normally on the transparent surface while filming from its other side, then mirror-flip the video?
@GijsvanDam
@GijsvanDam 6 жыл бұрын
Every stamp collector out there, could have told you that stamp collecting is a perfectly reasonable goal to invest your entire intelligence in.
@dcgamer1027
@dcgamer1027 Жыл бұрын
Very convincing arguments, very clearly put, and they help me understand the point/concern. Something though is still off in my intuition, an unreliable source to be sure, yet still one I will try to listen to and articulate. At the very least I am convinced of the seriousness of the problem: we can't just assume things will be fine; rather, we need some proof or evidence to rely on. I think my hesitancy to fully accept that morals won't be emergent has to do with how we are building AIs. We are modeling them on the only intelligence we know: humans. Even if we did not want to, we would taint the AI's goals with our own by the very nature of us being the ones to create it; our own biases and structures will be built into it if it's complex enough, since it is based on us. That's not 100% guaranteed of course, but it still seems relevant. The other aspect that feels important is that oftentimes people do not know their own terminal goals. I mean, "what is the meaning of life" is the ultimate unanswered question, asked for thousands of years yet still without answer. Perhaps that unknown is more important than we realize. Perhaps some people are right that there is no meaning, that we actually have no terminal goals, just desires and ways to achieve them, both short and long term, desires and whims that can change with time. Lastly, the fact that we always have multiple different goals, be they terminal or not, I believe to also be important. Perhaps having multiple goals, including future ones you do not yet know about, requires some degree of stability and caution, or even empathy and morals if we really want to reach. I know I'm not saying anything concrete here; still, these are intuitions of questions that remain unanswered in my mind despite quite extensive research into these topics. Hopefully somewhere in them is a key to help solve the problems we all have.
I suspect it will be some abstract theory or revelation that will be relevant to all domains of our lives. I'm excited to see it discovered.
@thomas.thomas
@thomas.thomas 10 ай бұрын
In the end humans are a product of evolution: we evolved to behave and think in ways that preserve our DNA. Every goal and thought that does not comply with this mechanism will eventually just die out. Morals only exist because humans/animals without them went extinct.
@acerebralstringmorn
@acerebralstringmorn Жыл бұрын
Extremely clearly presented, bravo!
@PwnySlaystation01
@PwnySlaystation01 6 жыл бұрын
I think those making the arguments you showed were actually making a specific argument about a general argument that "morality is objective". It's a larger question, and a surprising number of people seem to believe that morality is objective. This was a great explanation of how it applies to AI systems, though I don't think you're going to convince people who think morality itself is objective. I also wonder how many of the "morality is objective" folks are religious, but that's another topic entirely. Anyway, great video. I love your channel.
@GijsvanDam
@GijsvanDam 6 жыл бұрын
I think that religion in itself is the simple, terminal goal that people waste way too much intelligence on. It's like the super intelligent stamp collecting machine, but with people instead of machines and gods instead of stamps.
@icedragon769
@icedragon769 6 жыл бұрын
Thing about morality is, lots of people have proposed moral theories (terminal goals) that seem to make sense but always seem to diverge from instinctual morality in edge cases.
@BattousaiHBr
@BattousaiHBr 5 жыл бұрын
Our values are subjective, but the morality we built around these values is very much objective
@dexocube
@dexocube 5 жыл бұрын
@@BattousaiHBr Not quite, I'm afraid. As long as people talk or act upon their subjective values, an illusion of objective reality could be maintained, but it would change or cease entirely if people's values changed or ceased. That's how we can be sure that morality is entirely subjective: the morality people hold dear now is different from the morality from, say, twenty years ago. And my morality is different from yours, yada yada. To paraphrase my favourite TV show, "Buddha says with our thoughts we make the world."
@BattousaiHBr
@BattousaiHBr 5 жыл бұрын
@@dexocube that's a completely different issue and doesn't at all illustrate that morality is subjective. our values _generally_ don't change, and they certainly didn't even in your example that what we thought was moral in the past isn't considered moral today, because we always generally had the same base human values: life over death, health over sickness, pleasure over torture. we changed our views on things like slavery not because our values changed but because we learned new things that we didn't know before, and the reason learning these things affects our morality is precisely because it is objective according to our values. if we learned today that rocks have consciousness we'd think twice before breaking them, but that doesn't mean our morality is subjective just because we're now treating rocks differently, it was always objective but objectivity always depends on your current knowledge, and knowledge of the world is always partial.
@MecchaKakkoi
@MecchaKakkoi 5 жыл бұрын
A human stamp collector could be slightly offended by some parts of this video 😂
@TheRABIDdude
@TheRABIDdude 5 жыл бұрын
MecchaKakkoii hahaha very true
@thenasadude6878
@thenasadude6878 4 жыл бұрын
What about an HGI Stamp Collector? Human Generalized Intelligence
@chandir7752
@chandir7752 4 жыл бұрын
same goes for a train-nerd
@alexpotts6520
@alexpotts6520 3 жыл бұрын
@@chandir7752 And possibly a pogo stick enthusiast.
@jozefwoo8079
@jozefwoo8079 Жыл бұрын
This is a great explanation!! Very clear!
@Pangolinz
@Pangolinz Жыл бұрын
I laughed whenever you said that was kind of silly. Great video Bro, like your style of discourse. Very informative as well. Thanks for the content.
@marco19878
@marco19878 4 жыл бұрын
This is the best description of the difference between INT and WIS in a D&D System I have ever seen
@LowestofheDead
@LowestofheDead 4 жыл бұрын
Also in the Alignment system, Good-VS-Evil is based on your terminal goals, while Lawful-VS-Chaotic is based on your instrumental goals. Someone who's Chaotic Good wants to do good things but has crazy methods of getting there.
@irok1
@irok1 3 жыл бұрын
@Niels Kloppenburg wanting chaos above all else would just be chaotic neutral, likewise for lawful neutral, unless I'm mistaken. Of course the original comment isn't the best, but it was worth a try, lol
@johncraig8470
@johncraig8470 5 жыл бұрын
Thank you for demonstrating to people how to be right without being an authority about how to be right. We need more of those.
@jakubsebek
@jakubsebek Жыл бұрын
Such a clear explanation, amazing.
@michakurzatkowski3565
@michakurzatkowski3565 2 жыл бұрын
This video is absolutely amazing, true gold.
@bscutajar
@bscutajar 5 жыл бұрын
This was my favourite video of the week. Really introduces the concepts well and is worded very efficiently. I now have a strong opinion on something I've never even thought about before.
@kintsuki99
@kintsuki99 2 жыл бұрын
"I have no idea what this is about but I have a strong opinion on it! Like and Subscribe" - Pinkie Pie.
@OHYS
@OHYS 5 жыл бұрын
Thank you, I cannot tell you how fascinating this video is. This has changed my mindset and understanding of the world. I must watch it again.
@Bgrosz1
@Bgrosz1 Жыл бұрын
This was a very insightful video. Thank you.
@Siderite
@Siderite Жыл бұрын
Thank you for defining a simple concept that I've never thought about: saying something is stupid or smart is a moral judgement and not something that is related to intelligence.
@MetsuryuVids
@MetsuryuVids 6 жыл бұрын
Wow, you explained this wonderfully. I would like this video 100 times if I could.
@anthonytreen6253
@anthonytreen6253 5 жыл бұрын
I liked your comment, in order to help you like the video at least one more time. (I am quite taken by this video as well).
@anakinlumluk2136
@anakinlumluk2136 5 жыл бұрын
If you pressed the like button a hundred times, you would end up with the video not being liked.
@sungod9797
@sungod9797 4 жыл бұрын
Anakin Lumluk so therefore it’s a stupid action relative to the goal
@xDR1TeK
@xDR1TeK 5 жыл бұрын
Argumentative reasoning. Love it. State and causality.
@benoitranque7791
@benoitranque7791 Жыл бұрын
This is amazingly said
@whatcookgoodlook
@whatcookgoodlook Жыл бұрын
Wow this was one of the coolest videos I’ve seen on the platform
@5daboz
@5daboz 4 жыл бұрын
Stamp collector: How many stamps did you collect? Me: What? Stamp collector: Exactly!
@q.e.d.9112
@q.e.d.9112 5 жыл бұрын
Call yourself intelligent, yet you can’t speak cat? Seriously good exposition. Liked & subscribed.
@Garbaz
@Garbaz 3 жыл бұрын
1:50 I'm impressed by how clean your backwards writing looks. Noted and appreciated.
@billpengelly7048
@billpengelly7048 2 жыл бұрын
Fascinating video! Your video really clarified that intelligence is orthogonal to many of the other properties that we would want our machines to have. I wonder if we could use genetic algorithms to develop goal properties such as terminal goals, curiosity, cooperative goals, etc?