Why Not Just: Think of AGI Like a Corporation?

153,678 views

Robert Miles AI Safety

1 day ago

Corporations are kind of like AIs, if you squint. How hard do you have to squint though, and is it worth it?
In this video we ask: Are corporations artificial general superintelligences?
Related:
"What can AGI do? I/O and Speed" ( • What can AGI do? I/O a... )
"Why Would AI Want to do Bad Things? Instrumental Convergence" ( • Why Would AI Want to d... )
Media Sources:
"SpaceX - How Not to Land an Orbital Rocket Booster" ( • How Not to Land an Orb... )
Undertale - Turbosnail
Clerks (1994)
Zootopia (2016)
AlphaGo (2017)
Ready Player One (2018)
With thanks to my excellent Patreon supporters:
/ robertskmiles
Jordan Medina
Jason Hise
Pablo Eder
Scott Worley
JJ Hepboin
Pedro A Ortega
James McCuen
Richárd Nagyfi
Phil Moyer
Alec Johnson
Bobby Cold
Clemens Arbesser
Simon Strandgaard
Jonatan R
Michael Greve
The Guru Of Vision
David Tjäder
Julius Brash
Tom O'Connor
Erik de Bruijn
Robin Green
Laura Olds
Jon Halliday
Paul Hobbs
Jeroen De Dauw
Tim Neilson
Eric Scammell
Igor Keller
Ben Glanton
Robert Sokolowski
Jérôme Frossard
Sean Gibat
Sylvain Chevalier
DGJono
robertvanduursen
Scott Stevens
Dmitri Afanasjev
Brian Sandberg
Marcel Ward
Andrew Weir
Ben Archer
Scott McCarthy
Kabs Kabs Kabs
Tendayi Mawushe
Jannik Olbrich
Anne Kohlbrenner
Jussi Männistö
Mr Fantastic
Wr4thon
Dave Tapley
Archy de Berker
Kevin
Marc Pauly
Joshua Pratt
Gunnar Guðvarðarson
Shevis Johnson
Andy Kobre
Brian Gillespie
Martin Wind
Peggy Youell
Poker Chen
Kees
Darko Sperac
Truls
Paul Moffat
Anders Öhrt
Lupuleasa Ionuț
Marco Tiraboschi
Michael Kuhinica
Fraser Cain
Robin Scharf
Oren Milman
John Rees
Shawn Hartsock
Seth Brothwell
Brian Goodrich
Michael S McReynolds
Clark Mitchell
Kasper Schnack
Michael Hunter
Klemen Slavic
Patrick Henderson
/ robertskmiles

791 comments
@MrGustaphe 5 years ago
"Instead of working it out properly, I just simulated it a hundred thousand times" We prefer to call it a Monte Carlo method. Makes us sound less dumb.
@riccardoorlando2262 5 years ago
Through the use of extended computational resources and our own implementation of the Monte Carlo algorithm, we have obtained the following.
@plapbandit 5 years ago
Hey man, we're all friends here. Sometimes you've just gotta throw shit at the wall til something sticks. Merry Christmas!
@pafnutiytheartist 5 years ago
Well it's the second best thing to actually working it out properly
@silberlinie 5 years ago
...simulated it a few MILLION times...
@jonigazeboize_ziri6737 5 years ago
How would a statistician solve this?
@user-go7mc4ez1d 5 years ago
"Like Starcraft". That aged well....
@Qwerasd 5 years ago
Was about to comment this.
@CamaradaArdi 5 years ago
I don't even know if alphaStar had played vs. TLO by then, but I think it did.
@RobertMilesAI 5 years ago
It said 'for now'!
@guyincognito5663 5 years ago
Robert Miles you lied, 640K is not enough for everyone!
@Zeuts85 5 years ago
I wouldn't say this has been demonstrated. So far AlphaStar can only play as and against Protoss, and it hasn't played any of the top pros. Don't get me wrong, I think Mana is an amazing player, but until it can consistently beat the likes of Stats, Classic, Hero, and Neeb (without resorting to super-human micro), then one can't really claim it has beaten humans at Starcraft.
@dirm12 5 years ago
You are definitely a rocket surgeon. Don't let the haters put you down.
@michaelstanko5896 5 years ago
dirm12 Rocket Neurosurgeon FTFY
@michaelliu2961 4 years ago
don't doubt ur vibe
@asdfghyter 1 year ago
i’m neurorocket though
@petersmythe6462 5 years ago
"You can't get a baby in less than 9 months by hiring two pregnant women." Wow we really do live in a society.
@williambarnes5023 5 years ago
If you hire very pregnant women, you can get that baby pretty quick, actually. The 200 IQ move here is to go to the orphanage or southern border. You can just buy babies directly.
@e1123581321345589144 5 years ago
It they're already pregnant when you hire them, then yeah, it's quite possible
@dannygjk 5 years ago
I think it's safe to assume that the quote is meant to be read as two women who just became pregnant. To assume otherwise is to assume that whoever said it doesn't have enough brain cells to be classified as a paramecium.
@isaackarjala7916 4 years ago
It'd make more sense as "you can't get a baby in less than 9 months by knocking up two women"
@diabl2master 4 years ago
Oh shut up, you know what he meant
@618361 5 years ago
For anyone interested in the statistics of the model in 6:16 The cumulative distribution function (cdf) of the maximum of multiple random variables is, if they are all continuous random variables and independent of one another, the product of the cdfs. This can be used to solve analytically for the statistics he shows throughout the video: Start with the pdf (bell curve in this case) for the quality of one person's idea and integrate it to get the cdf of one person. Then, since each person is assumed to have the same statistics, multiply that cdf by itself N times, where N is the number of people working together on the idea. This gives you the cdf of the corporation. Finally, you can get the pdf of the corporation by taking the derivative of its cdf. For fun, if you do this for the population of the earth (7.5 billion) using his model (mean=100, st.dev=10) you get ideas with a 'goodness' quality of only around 164. If an AI can consistently suggest ideas with a goodness above 164, it will consistently outperform the entire human population working together.
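That derivation can be checked numerically with Python's standard library. This is a sketch of the commenter's method (mean=100, sd=10 are the video's model parameters, and the median of the maximum is used here as the summary statistic):

```python
# Analytic "best idea of N people": for i.i.d. ideas, the CDF of the
# maximum is the individual CDF raised to the Nth power, so the median
# of the maximum solves cdf(x)**N = 0.5, i.e. x = inv_cdf(0.5**(1/N)).
from statistics import NormalDist

idea = NormalDist(mu=100, sigma=10)  # one person's idea quality

def median_best_of(n):
    """Median quality of the best idea among n independent people."""
    return idea.inv_cdf(0.5 ** (1.0 / n))

print(round(median_best_of(7_500_000_000)))  # close to the ~164 figure above
```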
@horatio3852 4 years ago
thx u))
@harry.tallbelt6707 4 years ago
No, actually thank you, though
@cezarcatalin1406 4 years ago
That’s if the model you are using is correct... which might not be. Edit: Probably it’s wrong.
@drdca8263 4 years ago
Oh, multiplying the CDFs, that’s very nice. Thanks!
@618361 4 years ago
@@cezarcatalin1406 That's a valid criticism. The part I felt most iffy about was the independence assumption. People don't suggest ideas in a vacuum, they are inspired by the ideas of others. So one smart idea can lead to another. It's also possible that individuals have a heavy tail distribution (like a power law perhaps) instead of a gaussian when it comes to ideas. This might capture the observation of paradigm-shattering brilliant ideas (like writing, the invention of 0, fourier decomposition, etc.). Both would serve to undermine my conclusion. That being said, I didn't want that to get in the way of the fun so I just went with those assumptions.
@yunikage 4 years ago
"we're going to pretend corporations dont use AI" ah yes, and im going to assume a spherical cow....
@brumm0m3ntum94 3 years ago
in a frictionless...
@Tomartyr 2 years ago
vacuum
@linnthwin7315 1 year ago
What do you mean my guy just avoided an infinite while loop
@sashaboydcom 4 years ago
Great video, but one thing I think you missed is that a corporation doesn't need any of its employees to know what works, it just needs to survive and make money. This means that the market as a whole can "know" things that individuals don't, since companies can be successful without fully understanding *why* they're successful, or fail without anyone knowing why they fail. Even if a company succeeds through pure accident, the next companies that come along will try to mimic that success, and one of *them* might succeed by pure accident, leading to the market as a whole "knowing" things that people don't.
@AtticusKarpenter 1 year ago
And... that's a pretty ineffective way of doing things, if we look at modern HollyWoke or Ubisoft
@glaslackjxe3447 1 year ago
This can be seen as part of AI training, if a corporation has the wrong goal or wrong solution it will be outcompeted/fail and the companies that survive have better selected for successful ways to maximise profit
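A toy illustration of that selection story (all numbers here are invented for illustration): no individual firm ever learns what the optimum is, but the surviving population drifts toward it anyway.

```python
# Market selection as blind search: each "firm" is just a strategy
# number; profit is noisy and peaks at an optimum no firm knows.
# Each round the less profitable half fails and is replaced by
# imitations (slightly mutated copies) of the survivors.
import random

OPTIMUM = 0.7  # unknown to every firm

def profit(strategy):
    return -abs(strategy - OPTIMUM) + random.gauss(0, 0.05)

def run_market(n_firms=100, rounds=50):
    firms = [random.random() for _ in range(n_firms)]
    for _ in range(rounds):
        firms.sort(key=profit, reverse=True)
        survivors = firms[:n_firms // 2]
        firms = survivors + [s + random.gauss(0, 0.02) for s in survivors]
    return firms

final = run_market()
print(round(sum(final) / len(final), 2))  # clusters near the unknown optimum
```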
@monad_tcp 1 year ago
@@AtticusKarpenter I bet those are not following market signals and not succeeding at the market, yet they survive from income from other "sources", the stupid ESG scores
@rdd90 11 months ago
This is true, but only for tasks with a small enough solution space that it's feasible to accidentally stumble across the correct solution. This is unlikely to be the case for sufficiently hard intellectual problems. Also, a superintelligence will likely be better at stumbling across solutions than corporations, since the overhead of spinning up a new instance of the AI will likely be less than that of starting a new company (especially in terms of time).
@TheOneMaddin 5 years ago
I have the feeling that AI safety research is the attempt to outsmart a (by definition) much smarter entity by using preparation time.
@oldvlognewtricks 4 years ago
I seem to remember Mr. Miles mentioning in several videos that trying to outsmart the AI is always doomed, and a stupid idea (my wording). Hence all the research into aligning AI goals with human interests and which goals are stable, rather than engaging in a cognitive arms race we would certainly lose.
@martinsmouter9321 4 years ago
It's a try to get a history boost if we can have more time and resources we might be able to overwhelm it. A little bit like building a fort: you know bigger armies will come, so you build structures to help you be more efficient in fighting them off.
@augustday9483 1 year ago
And it looks like we've run out of prep time. AGI is very close. And the pre-AGI that we have right now are already advanced enough to be dangerous.
@petersmythe6462 5 years ago
Corporations still have basically human goals, just those of the bourgeoisie. AI can have very inhuman goals indeed. A corporation might bribe a government to send in the black helicopters and tanks to control your markets so it can enhance the livelihood of the shareholders. An AI might send in container ships full of nuclear bombs and then threaten your country's dentists with nuclear annihilation if they don't take everyone's teeth because its primary goal and only real purpose in life is to study teeth at large sample sizes.
@SA-bq3uy 5 years ago
Humans cannot have differing terminal goals, some are just in a better position to achieve them.
@fropps1 5 years ago
@@SA-bq3uy What do you mean by that? I feel like it's pretty self-evident that people can have different goals. I don't have "murdering people" as a terminal goal for example, but some people do.
@SA-bq3uy 5 years ago
@@fropps1 These are instrumental goals, not terminal goals. We all seek power whether we're willing to accept it or not.
@fropps1 5 years ago
@@SA-bq3uy If your argument is what I think it is then it's reductive to the point where the concept of terminal goals isn't useful anymore. I don't happen to agree with the idea that people inherently seek power, but if we take that as a given, you could say that the accumulation of power is an instrumental goal towards the goal of triggering the reward systems in the subject's brain. It is true that every terminal goal is arrived at by the same set of reward systems in the brain, but the fact that someone is compelled to do something because of their brain chemistry doesn't tell us anything useful.
@SA-bq3uy 5 years ago
@@fropps1 All organisms are evolutionarily selected according to the same capacities, the capacity to survive and the capacity to reproduce. The enhancement of either is what we call 'power'.
@stevenneiman1554 1 year ago
I think one of the most important things to understand about both corporations and AIs is that as an agent's capabilities increase, its ability to do helpful things increases, but the risk of misalignment problems which cause it to do bad things increases faster. As an agent with goals grows, it becomes more able to seek its goals in undesirable ways, the efficacy of its actions increases, it becomes more likely to be able to recognize and conceal its misalignment, AND it becomes less likely you'll be able to stop it if you do discover a problem.
@jonathanedwardgibson 4 years ago
I’ve long thought corporations are analog prototypes of AI, lumbering across the centuries: faceless, undying, immortal, without moral compass as they clear-cut and plow under another region, following their mad, minimal operating rules.
@MrTomyCJ 1 year ago
Corporations clearly do have a very important moral compass, and even Miles himself considers that so far humanity has been progressing. The fact that some are corrupt doesn't mean corporations as a concept are intrinsically bad, just like with humans in general.
@Primalmoon 5 years ago
Only took a month for the Starcraft example to become dated thanks to AlphaStar. >_
@spencerpowell9289 4 years ago
AlphaStar arguably isn't at a superhuman level yet though(unless you let it cheat)
@rytan4516 4 years ago
@@spencerpowell9289 By now, AlphaStar is beyond my skill, even with more limitations than I have.
@flamencoprof 5 years ago
As a reader of Sci-Fi since the Sixties, I remember at the dawn of easily available computing power in the Eighties I wrote in my journal that the Military-Industrial complex might have a collective intelligence, but it would probably be that of a shark! I appreciate having such thoughtful material available on YT. Thanks for posting.
@visigrog 5 years ago
In most corporate settings, a few individuals get to pick which ideas are implemented. From experience, they are almost always not close to the best ideas.
@Soumya_Mukherjee 5 years ago
Great video Robert. See you again in 3 months. Seriously we need more of your videos. Love your channel.
@EmilySucksAtGaming 4 years ago
"can you tell I'm not a rocket surgeon" I literally just got done playing KSP failing at reworking the internal components of my spacecraft
@morkovija 5 years ago
Been a long time Rob! Glad to see you
@d007ization 5 years ago
Y'all are way more intelligent than I lol.
@shortcutDJ 5 years ago
1,5 x speed = 1.5 more fun
@stevenmathews7621 5 years ago
@@shortcutDJ not sure about that.. there might be diminishing returns on that ; P
@MrGustaphe 5 years ago
@@shortcutDJ Surely it's 1.5 times as much fun.
@diabl2master 4 years ago
@@MrGustaphe No, simply 1.5 more units of fun.
@eclipz905 5 years ago
Credits song: Bad Company
@acorn1014 4 years ago
I noticed an interesting quirk about the model: it ignores the difficulty of picking the right idea out of the pool. If you take 361 people playing Go and have each propose a different point, between them they cover every move on the board, so by the model they should beat our current AI. In reality they wouldn't, which shows how important the ability to evaluate ideas really is.
@donaldhobson8873 5 years ago
This is all making the highly optimistic assumption that the people in the corporation are cooperating for the common good. In many organizations, everyone is behaving in a "stupid" way, but if they did something else, they would get fired.
@gasdive 5 years ago
Yes, but individual neurons are 'stupid'. Individual layers of a neural net are 'stupid'.
@stevenmathews7621 5 years ago
you might be missing Price's Law there (an application of Zipf's Law): only a small part (the square root of the number of workers) is working for the "common good"
@NXTangl 4 years ago
Also that the workers/CEOs are always aligned with shareholder maximization, as opposed to personal maximization. A company can destroy itself to empower a single person with money and often does.
@Gogglesofkrome 4 years ago
what is this 'common good,' anyway? is it some ideologically driven concept that differs entirely between all humans? Ironically it is this very 'common good' which drives many companies to do evil. After all, the road to hell is paved in human skulls and good intentions.
@NXTangl 4 years ago
@@Gogglesofkrome Common good of the shareholders in this case.
@jennylennings4551 5 years ago
These videos deserve way more recognition. They are very well made and thought out.
@V1ctoria00 4 years ago
I binged several of your videos and I noticed this example about the rocket comes up another time. As well as the example just before it. Thought I was somehow rewatching one over again.
@Ybalrid 4 years ago
A coworker just shared this video with me. I had no idea you had your own YouTube channel. I like Computerphile a lot, including your ML/AI videos, so I instantly subscribed!
@cherubin7th 5 years ago
A corporation can also do something like alphago's search tree. Many people have ideas and others improve on them in different directions. Bad directions are canceled until a very good path is found. Also many corporations in competition behave like a swarm intelligence. But still great video!
@DavenH 5 years ago
Every one of your videos kicks ass. Some of the most interesting material on the subject.
@DJHise 5 years ago
It took one month since this video was made for AI to start crushing Starcraft professional players. (AlphaStar played both Dario Wunsch and Grzegorz Komincz, ranked 44th and 13th in the world respectively; both were beaten 5 to 0.)
@qmillomadeit 5 years ago
I've always thought about the connection of corporations to AI, as they do seek to maximize their goals in the most efficient way. Glad you put out this very well thought out video :)
@dannygjk 5 years ago
Corporations are far from efficient.
@ziquaftynny9285 4 years ago
@@dannygjk relative to what?
@dannygjk 4 years ago
@@ziquaftynny9285 Relative to AI ;)
@dannygjk 4 years ago
@Stale Bagelz Corporations are plagued with many of the issues that humanity has in general. For example power struggles within the corporation.
@PsychadelicoDuck 4 years ago
@@dannygjk I think it's less "far from efficient", and more a stop-button/specification problem. The institutions (and the people making them up) are very good at maximizing the chances of their success, as given by the metrics that the broader systems (society/government for the institutions, and internal politics for the individuals) evaluate them by. The problems are, those metrics are not necessarily measuring what people think they are measuring (due to loopholes, outright lying, etc.), any attempts to change those metrics will be fought by the organizations currently benefiting from them, and that the fundamental social-economic system those original metrics were designed from presupposed that morality was either a non-factor or would arise naturally from selfish behavior. I'm also going to point out that the "general humanity issues" you mention are greatly exacerbated by that same set of problems.
@Garbaz 5 years ago
Very interesting! And I really like the little "fun bits" you edit into your videos!
@blahblahblahblah2837 4 years ago
Love the Dont Hug Me I'm Scared reference! Also _wow_ this has become my favourite channel. I wish I had found it 2 years ago
@ThePlayfulJoker 4 years ago
This video is the kind that changed my mind twice in only 14 minutes. I love the fact that it had a true discussion on the subject and not just a half-baked opinion.
@zzzzzzzzzzz6 5 years ago
I've always wondered this and have been pushing this idea... awesome to have a full video on it! Well not the 3 follow on conclusions, but the comparison to AI systems
@buzz092 5 years ago
Excellent clerks reference! Also the video was outstanding as usual. :P
@thrallion 5 years ago
Once again, wonderful video. One of the most interesting and well-spoken channels on YouTube!
@Mr30friends 5 years ago
This video is actually amazing. Wow. So much useful information covered. And not just useful for people interested in AI. Most of this could apply anywhere from how businesses work to how different political systems work and to pretty much anything else.
@DieBastler1234 5 years ago
Content and presentation is brilliant, I'm sure matching audio and video quality will follow. Subbed :)
@RobertMilesAI 4 years ago
Is this about the black and white bits at the start that are just using the phone's internal mic, or is there a problem with my lav setup?
@theblinkingbrownie4654 2 months ago
@@RobertMilesAI Maybe they watched the video before it finished processing the higher qualities. Do you release videos before they're done fully processing?
@TXWatson 5 years ago
Looking forward to episode 2 of this! I've thought of the utility of this analogy in being that corporations, as intelligent nonhuman agents, give us the opportunity to experiment with designing utility functions that might be less harmful when implemented.
@brunogarnier2855 4 years ago
Thank you for this great video. It could be interesting to go through the same exercise but with the whole world's economy, and evaluate the "invisible hand of the market" as an artificial selection AI... Have a good weekend!
@MrTomyCJ 1 year ago
I find that personification of the market (the "invisible hand") a horrible mistake, as the whole point of the market is precisely that it's not a single entity; it doesn't have a particular intention. It's just a network of people with DIFFERENT ones.
@arthurguerra3832 5 years ago
Finally! I was tired of rewatching your old videos. haha Keep'em coming
@joelkreissman6342 4 years ago
I've said it before and I'll say it again, "bureaucracy is a human paperclip maximizer". Doesn't matter if it's a private corporation or governmental.
@jared0801 5 years ago
Great stuff, thank you so much for the video Rob
@tho207 5 years ago
If someone is to bring AGI to us, it should be a person like you. Your sensibleness and sensitivity are outstanding. I'll resume the video now, cheers
@faustin289 4 years ago
"Evaluating solutions is easier than coming up with them" This is why I should earn more than my boss....I come up with all the ideas; the only thing he does is criticize and pick what idea to take forward!
@oldvlognewtricks 4 years ago
Your reasoning makes perfect sense, assuming people get paid based on the difficulty of their work. Oh, wait...
@pluto8404 4 years ago
Then become the boss if it is so easy.
@landonpowell6296 4 years ago
@@pluto8404 Becoming the boss != Doing the boss's work. It's not easy to be born rich unless you already were.
@MrTomyCJ 1 year ago
@@landonpowell6296 yeah the issue here is that in reality, the market doesn't directly reward intelligence or hard work, it rewards the satisfaction of consumer's needs. It seems unfair, but the alternative is much worse. Besides, intelligence and hard work may not be strictly necessary but they very often do put you in the right path. And someone being born lucky or rich doesn't really mean they are being unfair to others.
@Supreme_Lobster 5 years ago
Those layers arent gonna stack by themselves
@Bootleg_Jones 5 years ago
I love that you used XKCD's Up Goer Five as your example rocket blueprint. Definitely one of the best comics Randall has ever put out.
@JM-us3fr 5 years ago
This was my question! Thanks Rob for answering it
@xDeltaF1x 4 years ago
I think the statistical model is a bit flawed/over simplified. Groups of humans don't just select the best idea from a pool but will often build upon those ideas to create new and better ones.
@CommanderPisces 4 years ago
Basically this just means that an "idea" can actually have several smaller components that can be improved upon. I think this is more than offset by the fact that (as discussed in the video) humans still can't select the best ideas even when they're presented.
@commenter3287 4 years ago
I have enjoyed your computerphile videos, but these scripted ones are even better. I had never heard the AI/Corporation comparison before, so in one succinct video you introduced me to a very interesting analogy and analyzed the problems with the analogy very well.
@adrianmiranda5531 5 years ago
I just came here to say that I appreciated the Tom Lehrer reference. Keep up the great videos!
@IAmNumber4000 3 years ago
“Corporations certainly have their problems, but we seem to have developed systems that keep them under control well enough that they’re able to create value and do useful things without literally killing everyone.” _Laughs in climate change_
@mishafinadorin8049 2 years ago
Climate change won't kill everyone. Far from it.
@MrTomyCJ 1 year ago
There's so far no way to satisfy our needs, to create value and do useful things, without affecting the environment. There's no reason to believe that any other alternative would be better. Fortunately, we, with the current system, are getting better at it. But gradually: there's no reason to believe the transition could've been made faster with an alternative system (and without more human suffering).
@IAmNumber4000 1 year ago
@@MrTomyCJ So your solution to the possibility of biosphere collapse is “meh let’s wait and see if billions die because changing society is hard” If everyone thought like you, labor unions would never have produced improvements like the weekend, or the 8 hour workday. Society doesn’t just _improve_ on its own, as a function of its existence. The _only_ reason society improves _at all_ is because of the people who dared to dream of radical change, and relentlessly pushed for it. Society is a confluence of incentive structures. There is no reason why slavery, for instance, should _necessarily_ have ended if someone could still benefit from it today. It only ended because of those people who saw what was wrong with it, and suffered from it, so they refused to tolerate its continued existence. Now people see slavery as obviously wrong in hindsight, but when it existed, there was an entire ideological structure that had formed to protect the wealth it was producing. Seeing past that ideology is the first step to real change.
@davidriosg 1 year ago
@@IAmNumber4000 that's an important distinction you've made. All your examples were problems that existed and already affected people, so there was great incentive for solutions, whereas the other is the "possibility" of biosphere collapse. Humans are great at solving problems right now, not so much at predicting or solving hypothetical problems in the future.
@IAmNumber4000 1 year ago
@@davidriosg Do you not think climate change is already affecting people?
@Jack-Lack 3 years ago
I've already conjectured a year or two ago that corporations are AI, so of course I'm going to say yes. My reasoning is:
- Corporations make decisions based on their board of directors, which is a hive mind of supposedly well-qualified, intellectual elites.
- A corporate board will serve the goals of its shareholders, at the expense of everything else. Even if this means firing an employee because they believe they're losing $50/year on that employee, they care more about the $50 than the fact that the employee will be out of work. It also means they may choose not to recall a dangerous product if they think a recall would be the less profitable course of action. Corporate boards are so submissive to the goals of their shareholders that it is reminiscent of the AI who maximizes stamp-collecting at the expense of everything else, even if it destroys the world in the process (see fossil fuel companies who knew about climate change in the 1960's and buried the research on it).
- AI superintelligence is supposed to have calculation resources that make it beyond human abilities, like a chess AI that is 900 Elo rating points stronger than the best human. An AGI superintelligence might manifest superhuman abilities that go beyond just intelligence, but also its ability to generate revenue in a superhuman way and its ability to influence human opinion in a superhuman way. Large corporations also have unfathomable resources to execute their goals, which (in cases like Amazon, Apple, Microsoft, or IBM) can include tens or hundreds of thousands of laborers, countless elite intellectuals, the power to actually influence federal legislation through lobbying, the financial resources to drive their competition out of business or merge with them, and public relations departments that can influence public opinion.
Really, I think that the way corporations behave is an almost exact model for how AGI would behave.
@aenorist2431 5 years ago
They just prove that corporations are problems in similar ways. Not that somehow both are not a problem. Corporations have to be tightly controlled by the population (in the form of government) to utilize their potential without allowing their diverging goals to cause excessive damage.
@AiakidesAkhilleus 5 years ago
Great quality video, congratulations
@lucbloom 1 year ago
Is that a Don’t Hug Me I’m Scared reference in the graph??? Oh man so awesome.
@ricardoabh3242 4 years ago
Always really interesting and clear, with a nice open-ended storyline
@thatchessguy7072 1 year ago
@9:58 In answer to your rhetorical question, I need to reference the baduk games played between Alphago zero and Alphago master. Zero plays batshit crazy strategies where even the tiniest inaccuracies cause the position to spiral into catastrophe but zero still manages to win. Zero’s strategy does not look good to amateur players, nor to professional players, but it works, it just works. Watching these games feels like listening to two gods talk, one of which has gone mad. @10:02 ah… well we recognized move 37 as good after the AI showed that to us.
@pierfonda 5 years ago
Ahhh the move 37/Clerks reference!! Perfect
@VadkoSmelyanskiy 4 years ago
In a row??
@Verrisin 5 years ago
I like this idea overall. Somewhat smarter, but also somewhat slower. -- Controllable by other grouped-human entities (like governments) + a lot of other points, but I think that is kind of the main thing that differentiates it from ASI.
@pacibrzank78 5 years ago
Every haircut you had so far was on point
@bibasniba1832 4 years ago
Thank you for sharing!
@limitless1692 5 years ago
Wow this video was really interesting .. Thanks for creating it
@nazgullinux6601 5 years ago
Loved the "Bad Company" acoustic at the end. As always, another 1-up to those not formally schooled who routinely spout nonsensical "what-ifs" at you as if they were the first person to think of the idea haha.
@hayuseen6683 4 years ago
Wonderfully well considered problem and presented both bite-sized and expounded on. Logicians are some of my favorite people.
@BM-bu4xd 5 years ago
Yeah! terrific. Much thanks
@Nayus 5 years ago
This guy presses randomize on his hair every new video. Great video btw. I think the most important points of this "why not just" will be in the second video, because to me it is very obvious that a corporation's 'values' and goals are very similar to humanity's, at least compared to what could potentially be the goals of an unsafe AGI. Yes, some corporations might not care about the environment or the working conditions of their workers, or many other things that they disregard in pursuit of their (probably money-related) goal, but there's no corporation on earth whose goal is to destroy the planet. Or to kill every human. Or to control their brains (that could get away with it). Or who knows what other incredibly weird things an AGI might have as an instrumental goal that it will not hesitate to implement towards its terminal goal. You can't model AGI as a corporation because corporations are ultimately made of humans, so they will never separate their goals too much from human goals, while AGI does not have that limitation.
@yondaime500
@yondaime500 5 жыл бұрын
I think the abilities of humans and corporations are more relevant than their values. Human values are not really aligned in general, and some individual humans or organizations, given enough power, could do pretty awful things from the point of view of other humans. I know that because it has already happened, multiple times. In fact it's happening right now in many parts of the world. The only reason they don't do worse is because they can't. But an AGI could. That's why I sometimes feel like value alignment is a lost cause. Ok, maybe you can get the AGI to align with humans, but which humans? We're probably screwed either way.
@Nayus
@Nayus 5 жыл бұрын
@@yondaime500 I think it's a combination of both. Like even if you say that corporations are super smart, they aren't "nuclear" smart, if you know what I mean. But I disagree with you on the *scale* of what we mean when we say misaligned. Like yes, there are groups in our world that value really different stuff if you only take into scope the space of human values. But an AGI can have much, much more varied values. Like for example, from one side of the planet to another you could say that two points are really "far away", but only if you look at the world. If you look at the galaxy or the solar system, opposite points on the planet are relatively very close. I agree that even those differences are still very dangerous and important.
@bobsmithy3103
@bobsmithy3103 5 жыл бұрын
xD Kinda reminds me of the OpenAI dude with the blue hair presenting the robot hand.
@micaelstarfire8639
@micaelstarfire8639 2 жыл бұрын
The history of corporate supported atrocities would suggest otherwise
@willemvandebeek
@willemvandebeek 5 жыл бұрын
Merry Christmas Robert! :)
@RoboBoddicker
@RoboBoddicker 5 жыл бұрын
Last year in the US, one of the big sporting goods retailers stopped carrying semi-automatic rifles and tightened restrictions on their gun sales in the wake of mass shootings. That decision was made solely by the CEO and it definitely didn't please a lot of shareholders. That's another big difference, I think, between corporations and AGI - the big decisions in a corporation are ultimately made by a small group of humans with human values. Not that we can always expect corporations to put morality over profits obviously, but executives can at least *recognize* an egregious situation and make moral judgments. An AGI doesn't have any such safeguards. Fantastic video as always, btw!
@ryanarmstrong2009
@ryanarmstrong2009 4 жыл бұрын
That clerks reference for move 37 was phenomenal
@leninalopez2912
@leninalopez2912 5 жыл бұрын
This is fast becoming even more cyberpunk than Neuromancer.
@loopuleasa
@loopuleasa 5 жыл бұрын
3:48 Nice thinking adding the "(for now)" text in the video, as StarCraft was already beaten by DeepMind a month ago.
@cupcakearmy
@cupcakearmy 5 жыл бұрын
Amazing content again. Keep it up!
@alexwood020589
@alexwood020589 Жыл бұрын
I think another important point about idea quality in large teams is the selection process. No team is coldly evaluating every idea and picking the objectively best one. The people who can articulate their ideas best, or shout the loudest, or happen to be the CEO's son are the ones whose ideas get implemented.
@albirtarsha5370
@albirtarsha5370 4 жыл бұрын
Anything You Can Do (Annie Get Your Gun) by Howard Keel, Betty Hutton AGI: Anything you can be, I can be greater. Sooner or later I'm greater than you.
@definitelynotcole
@definitelynotcole Жыл бұрын
Love that bit at the start.
@ToriKo_
@ToriKo_ 5 жыл бұрын
I just want to say thanks for making these videos! Also nice Undertale reference
@travcollier
@travcollier 5 жыл бұрын
A lot of the "sort of" points are very likely to apply to AGIs (at least in the early days) too. Anyways, we could certainly benefit from being better at aligning the goals and actions of corporations with humanity as a whole, and I think AI safety research could help with that while gaining insights about future AGIs.
@lobrundell4264
@lobrundell4264 5 жыл бұрын
Yeesss Rob is back as good as ever!
@hikaroto2791
@hikaroto2791 2 жыл бұрын
this was an astoundingly interesting video
@ChibiRuah
@ChibiRuah 4 жыл бұрын
I found this video very good, as I'd thought about this comparison before, and it expands on it and shows where it fails.
@LeoStaley
@LeoStaley 5 жыл бұрын
The video you did on computerphile about Asimov's e laws of robotics was the most impactful, consise expression of what the danger of AI development is. You made the point that "you have to solve ethics" and the fact that the people building it are going, "hold on, I'm just a computer programmer, I didn't sign up for that." those two things combined have stuck with me for years.
@its.dan.eastwood
@its.dan.eastwood 5 жыл бұрын
Great video, thanks for sharing!
@DamianReloaded
@DamianReloaded 5 жыл бұрын
Yay! I'm always waiting for your vids. I always tell people, whenever its brought up, that AGIs are very likely what will destroy us but also probably the only thing that can save us from our own limitations. (besides jebus)
@GreenDayFanMT
@GreenDayFanMT 5 жыл бұрын
Very interesting topic. Thanks for this viewpoint.
@thewhitefalcon8539
@thewhitefalcon8539 Жыл бұрын
This diminishing returns stuff presumably also applies to electronic AGI. Look at the server resources they pour into GPT.
@natfrey6503
@natfrey6503 5 жыл бұрын
Might also consider some forms of government as behaving as AI, even societies for that matter. They can all go awry when citizens that go along with the "program" are convinced their actions are for a higher good. It's the conundrum of how good natured people can participate in the making of an avoidable calamity. But this brings in the question of human evil, or moral failing (as we see so much in large corporations), that even when quite innocuous on an individual level can be brutal when added up on a mass level.
@FunBotan
@FunBotan 5 жыл бұрын
Goddamit dude I almost began to think this idea of mine was original
@user-js8jh6qq4l
@user-js8jh6qq4l 5 жыл бұрын
FunBotan it is genuine though and it's all that matters.
@LKRaider
@LKRaider 5 жыл бұрын
FunBotan : being original is overrated, capacity of application is where it's at
@christopherg2347
@christopherg2347 5 жыл бұрын
Yes, I know the feeling. To bad it never worked out for me.
@TheConfusled
@TheConfusled 5 жыл бұрын
Yay a new video. Mighty thanks to you
@hexzyle
@hexzyle 4 жыл бұрын
The flaw in this video is the assumption that there is always a better idea. Ideas have a cap on how good they are - the "perfect" solution; or, for a problem with no perfect solution because of conflicting parameters, the most effective solution is the peak of a bell curve. Often humans come up with these ideas.
@petersmythe6462
@petersmythe6462 Жыл бұрын
In some ways your "have each person generate an idea and pick the best" actually understates the problem. There are many types of problems, e.g. picking a move in chess, where ideas are easy to come up with but hard to evaluate.
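The generate-versus-evaluate gap the comment describes can be made concrete with a small simulation (my own sketch, reusing the video's Normal(100, 10) idea-quality setup, not code from the video): with a perfect evaluator, picking the best of 100 ideas yields a true quality near 125, but when evaluation itself is noisy, the true quality of the picked idea collapses back toward the average of 100.

```python
import random

def selected_quality(n, eval_noise, trials=20_000):
    """Average true quality of the idea a (possibly noisy) evaluator
    picks out of n ideas drawn from Normal(100, 10)."""
    total = 0.0
    for _ in range(trials):
        ideas = [random.gauss(100, 10) for _ in range(n)]
        # the evaluator sees quality plus noise and picks its apparent best
        total += max(ideas, key=lambda q: q + random.gauss(0, eval_noise))
    return total / trials

print(selected_quality(100, 0))   # perfect evaluator: ~125
print(selected_quality(100, 30))  # noisy evaluator: only ~108
```

So even with 100 idea-generators, most of the best-of-n advantage evaporates once judging an idea is harder than having one, which is exactly the chess-move situation the comment points at.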
@DYWYPI
@DYWYPI Жыл бұрын
When thinking about AI as a metaphor for corporations, rather than the other way around, it's not necessarily the superhuman *intelligence* of the AI that is important or that makes them inherently dangerous - merely the fact that the intelligence makes it superhumanly *powerful*. Whether or not we accept that a corporation is significantly more intelligent than a human, they're fairly self-evidently significantly more powerful than one, with more ability to effect change in the world and to gather instrumental resources to increase that ability.
@ianprado1488
@ianprado1488 5 жыл бұрын
Such a creative discussion
@brr.petrovich
@brr.petrovich 5 жыл бұрын
We must have a new video! It's the perfect time for it.
@disk0__
@disk0__ 5 жыл бұрын
>Corporations creating digital corporations with hyperaccelerated intelligence to advance the corporation This sounds like a Unabomber fever dream wtf
@seanmatthewking
@seanmatthewking 3 жыл бұрын
Let's not forget about central governments doing the same 👍
@dantenotavailable
@dantenotavailable 5 жыл бұрын
Also don't forget communication costs. Scaling any human process to 1000 people becomes incredibly difficult due to overhead necessary to keeping everyone pointed in the same direction. Just documenting the suggestions from 1000 people is going to require a significant number of people and time, making sure you get the suggestions documented correctly and unambiguously and then evaluated is going to be a herculean task. It's not for no reason that most Agile Development techniques are most effective at 5 to 6 people and most advice for teams of size 10+ is "split into 2 teams that don't need to coordinate".
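The overhead described above can be put in rough numbers with the standard pairwise-channel model in the spirit of Brooks's law (my own illustration, not a claim from the comment): a team of n people has n(n - 1) / 2 possible communication channels, which grows quadratically while the team's raw idea-generating capacity grows only linearly.

```python
def comm_channels(n):
    """Pairwise communication channels in an n-person team."""
    return n * (n - 1) // 2

# 6 people is manageable; 1000 people is half a million channels
for n in (6, 10, 100, 1000):
    print(n, comm_channels(n))
```

This is one simple way to see why the advice is to split into small teams that don't need to coordinate: the quadratic term is what you are cutting.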
@NathanTAK
@NathanTAK 5 жыл бұрын
Projects you're not able to talk about? That's it; Miles has created the world's first AGI.
@MatthewStinar
@MatthewStinar 5 жыл бұрын
I love the use of XKCD Up Goer 5 diagram. 😀
@spirit123459
@spirit123459 4 жыл бұрын
It's SpaceX's "Bird 9" :)
@bscutajar
@bscutajar 5 жыл бұрын
At 11:45 he mentions you can keep adding more people and they will do the job faster. A little algebra shows that for the number adding example, the optimal number of people working in parallel is the sqrt of the number of numbers. Adding more beyond this point will slow down the process.
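The algebra behind this can be sketched with a toy cost model (my own simplification, assuming each of p people sums their share of the n numbers in parallel and then one person combines the p partial results): total steps ≈ n/p + p, which is minimized at p = √n.

```python
import math

def total_steps(n, p):
    """Steps to sum n numbers with p people working in parallel:
    each person sums about n/p numbers, then one person adds
    the p partial results together."""
    return math.ceil(n / p) + p

n = 10_000
best_p = min(range(1, n + 1), key=lambda p: total_steps(n, p))
print(best_p)  # 100, i.e. sqrt(n): adding more people past this slows it down
```

Under this model, a hundred-person team really is the right size for summing ten thousand numbers, and the million-person version the video jokes about would be strictly counterproductive.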
@richarddeese1991
@richarddeese1991 5 жыл бұрын
"...that even governments are sometimes able to move fast enough to deal with them [corporations.]" LOVE IT!!! 😂 Oh, and by the way; LOVE the acoustic rendition of "Bad Company" [by, of course, Bad Company - the ultimate eponymous song!] - BRILLIANT! :D ...and, is that a mandolin? Wonderful! Now, as to these corporations... I think it's pretty clear that most of them act as specialist A.I.s, geared to produce some product or service (or, sometimes, a whole range of them), & as such, they're mostly designed to maximize profits for the shareholders (as you pointed out.) I think this is very much like Deep Thought, or the Go! program; they do indeed act as specialized superintelligences. But they most certainly do NOT qualify in any way as general intelligences, much less general superintelligences. As to the question you posed [quite diplomatically, I must say, as you neatly side-stepped the issue of using any mental health terms!], "Are they 'misaligned'?" Well, in short, YES. Many of them ARE misaligned. They are profit-driven - some of them to the point of getting away with whatever they can. And on that note, the ONLY moral in a capitalist, or 'free-market' society, IS, "What can I get away with - and how much $$$ can I make DOING it?" I'm sorry, but that's it. If a company isn't run by people with good intentions AND good morals &/or ethics, then that's what you end up with, simply by default. In other words, if nobody's 'minding the moral store' so to speak, things WILL go badly wrong all by themselves. I believe this could be proved - at least by example - but I don't know how to prove it, myself. I have merely witnessed (and often worked for!) 54+ years worth of corporate shenanigans which amply proves it to ME. So, YES, while some of them DO make good products, &/or have good services, that is ONLY because they are run by strong people with good morals - or, at least, good corporate & social ethics.
The main problem is this: when nobody's in charge who's strong enough to infuse a company with their own good values, bureaucracy WILL take over by default, and it is ALWAYS 'misaligned' as you put it. In fact, it is actually badly broken & dysfunctional, by any standard you'd care to judge it by... EXCEPT the standard of, "What can I get away with, and how much $$$ can I make DOING it?" That's it. That's all there is. Probability either shows that, or is useless in gauging that. If we 'train' our A.G.I.s, they're going to HAVE to be given clear psychological tests, examples & exams; they're going to HAVE to be 'taught' by people who not only do NOT teach them, "Maximize profit, dammit - nothing else matters!!!" but rather DO teach them that people matter, intelligent (or 'sentient') beings matter, whether they are flesh or circuits or whatever. If you can't perform your task without harming sentients, then you can't perform your task at all, & you MUST ask for help. Notice that I'm NOT advocating for the 3 (or 4, really) laws of robotics. Lovely sci-fi concept, I'm sure, but lousy real-world philosophy. A.I.s (or A.G.I.s, or whatever new letters someone comes up with tomorrow...) cannot be "programmed" to be "moral" in ANY sense. Doesn't work. Try it. Anyway, that's my take. Thanks for the video! You talk about important things (in my opinion!) tavi.
@peabnuts123
@peabnuts123 Жыл бұрын
I agree with all the analysis in this video, but from a general standpoint it seems wild to even assert that corporations are like superintelligences when we have phrases like "design by committee" or "too many cooks" etc. to describe the regression toward the mean when solving problems using a group of people. The differentiating factor in companies' ability to do things has always been person-power in my mind, definitely not their ability to generate solutions to problems. Anyone can have an idea; it's the execution that counts. Some things require a lot of people to execute. This is IMO what gives organisations more capability than individuals.
@GAPIntoTheGame
@GAPIntoTheGame 5 жыл бұрын
Severely underrated channel
@EebstertheGreat
@EebstertheGreat 3 жыл бұрын
At 7:14, the graph looks wrong. That histogram should resemble the graph of the probability density of a sample maximum. In general, if X₁, ..., Xₙ are independent and identically distributed random variables (i.e. a sample of size n) with cumulative distribution function Fₓ(x), then S = max{X₁, ..., Xₙ} has cumulative distribution function Fₛ(s) = [Fₓ(s)]ⁿ. So if each X as a probability density function fₓ(x) = Fₓ'(x), then S has probability density function fₛ(s) = n fₓ(s) [ Fₓ(s) ]ⁿ⁻¹ = n fₓ(s) [ ∫ fₓ(t) dt ]ⁿ⁻¹, where the integral is taken from -∞ to s. Here, we assumed the variables were normally distributed and set μ = 100 and σ = 20, so fₓ(x) = 1/(20√͞2͞π) exp(-(x-100)²/800), and thus fₛ(s) = n/(20√͞2͞π)ⁿ exp(-(s-100)²/800) [ ∫ exp(-(t-100)²/800) dt ]ⁿ⁻¹. The mean of this is E[S] = ∫s fₛ(s) ds, integrating over ℝ. Doing this numerically in the n=100 case gives a mean of 150.152. We can also make use of an approximate formula for large n: E[S] ≈ μ + σ Φ⁻¹((n-π/8)/(n-π/4+1)). For the given parameters and n=100, we get E[S] ≈ 100 + 20 Φ⁻¹((100-π/8)/(101-π/4)) ≈ 150.173. In either case, it is not plausible that you got a mean of 125 with n = 100, σ = 20 like you said. You must have used σ = 10, not σ = 20. That also explains why you wrote "σ = 20" between those vertical bars at 6:31. You probably meant that the distance between μ+σ and μ-σ was 20, i.e. σ = 10.
@RobertMilesAI
@RobertMilesAI 3 жыл бұрын
That's correct! Though, since I picked the value for the standard deviation out of thin air, it can just be 10 instead and it doesn't affect the point I was trying to make
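The exchange above is easy to check numerically. A minimal Monte Carlo sketch (my own code, not the video's): estimating E[max] of 100 i.i.d. normal draws gives roughly 125 with σ = 10 (the figure used in the video) and roughly 150 with σ = 20 (the value computed analytically in the comment).

```python
import random
import statistics

def mean_sample_max(n, mu, sigma, trials=20_000):
    """Monte Carlo estimate of E[max of n i.i.d. Normal(mu, sigma) draws]."""
    return statistics.fmean(
        max(random.gauss(mu, sigma) for _ in range(n))
        for _ in range(trials)
    )

print(mean_sample_max(100, 100, 10))  # ~125, the figure shown in the video
print(mean_sample_max(100, 100, 20))  # ~150, as derived in the comment
```

Since the expected maximum scales as μ + σ·Φ⁻¹-of-a-quantile, halving σ halves the gain over the mean, which is exactly why 125 implies σ = 10 rather than 20.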