The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment

  Рет қаралды 218,240

Robert Miles AI Safety

Robert Miles AI Safety

Күн бұрын

This "Alignment" thing turns out to be even harder than we thought.
Links
The Paper: arxiv.org/pdf/1906.01820.pdf
Discord Waiting List Sign-Up: forms.gle/YhYgjakwQ1Lzd4tJ8
AI Safety Career Bottlenecks Survey: www.guidedtrack.com/programs/...
Referenced Videos
Intelligence and Stupidity - The Orthogonality Thesis: • Intelligence and Stupi...
9 Examples of Specification Gaming: • 9 Examples of Specific...
Why Would AI Want to do Bad Things? Instrumental Convergence: • Why Would AI Want to d...
Hill Climbing Algorithm & Artificial Intelligence - Computerphile: • Hill Climbing Algorith...
AI Gridworlds - Computerphile: • AI Gridworlds - Comput...
Generative Adversarial Networks (GANs) - Computerphile: • Generative Adversarial...
Other Media
The Simpsons Season 5 Episode 19: "Sweet Seymour Skinner's Baadasssss Song"
1970s Psychology study of imprinting in ducks. Behaviorism: • Vintage psychology stu...
With thanks to my excellent Patreon supporters:
/ robertskmiles
- Timothy Lillicrap
- Gladamas
- James
- Scott Worley
- Chad Jones
- Shevis Johnson
- JJ Hepboin
- Pedro A Ortega
- Said Polat
- Chris Canal
- Jake Ehrlich
- Kellen lask
- Francisco Tolmasky
- Michael Andregg
- David Reid
- Peter Rolf
- Teague Lasser
- Andrew Blackledge
- Frank Marsman
- Brad Brookshire
- Cam MacFarlane
- Jason Hise
- Phil Moyer
- Erik de Bruijn
- Alec Johnson
- Clemens Arbesser
- Ludwig Schubert
- Allen Faure
- Eric James
- Matheson Bayley
- Qeith Wreid
- jugettje dutchking
- Owen Campbell-Moore
- Atzin Espino-Murnane
- Johnny Vaughan
- Jacob Van Buren
- Jonatan R
- Ingvi Gautsson
- Michael Greve
- Tom O'Connor
- Laura Olds
- Jon Halliday
- Paul Hobbs
- Jeroen De Dauw
- Lupuleasa Ionuț
- Cooper Lawton
- Tim Neilson
- Eric Scammell
- Igor Keller
- Ben Glanton
- anul kumar sinha
- Duncan Orr
- Will Glynn
- Tyler Herrmann
- Tomas Sayder
- Ian Munro
- Joshua Davis
- Jérôme Beaulieu
- Nathan Fish
- Taras Bobrovytsky
- Jeremy
- Vaskó Richárd
- Benjamin Watkin
- Sebastian Birjoveanu
- Andrew Harcourt
- Luc Ritchie
- Nicholas Guyett
- James Hinchcliffe
- 12tone
- Oliver Habryka
- Chris Beacham
- Zachary Gidwitz
- Nikita Kiriy
- Parker
- Andrew Schreiber
- Steve Trambert
- Mario Lois
- Abigail Novick
- Сергей Уваров
- Bela R
- Mink
- Fionn
- Dmitri Afanasjev
- Marcel Ward
- Andrew Weir
- Kabs
- Miłosz Wierzbicki
- Tendayi Mawushe
- Jake Fish
- Wr4thon
- Martin Ottosen
- Robert Hildebrandt
- Poker Chen
- Kees
- Darko Sperac
- Paul Moffat
- Robert Valdimarsson
- Marco Tiraboschi
- Michael Kuhinica
- Fraser Cain
- Robin Scharf
- Klemen Slavic
- Patrick Henderson
- Oct todo22
- Melisa Kostrzewski
- Hendrik
- Daniel Munter
- Alex Knauth
- Kasper
- Ian Reyes
- James Fowkes
- Tom Sayer
- Len
- Alan Bandurka
- Ben H
- Simon Pilkington
- Daniel Kokotajlo
- Peter Hozák
- Diagon
- Andreas Blomqvist
- Bertalan Bodor
- David Morgan
- Zannheim
- Daniel Eickhardt
- lyon549
- Ihor Mukha
- 14zRobot
- Ivan
- Jason Cherry
- Igor (Kerogi) Kostenko
- ib_
- Thomas Dingemanse
- Stuart Alldritt
- Alexander Brown
- Devon Bernard
- Ted Stokes
- James Helms
- Jesper Andersson
- DeepFriedJif
- Chris Dinant
- Raphaël Lévy
- Johannes Walter
- Matt Stanton
- Garrett Maring
- Anthony Chiu
- Ghaith Tarawneh
- Julian Schulz
- Stellated Hexahedron
- Caleb
- Scott Viteri
- Conor Comiconor
- Michael Roeschter
- Georg Grass
- Isak
- Matthias Hölzl
- Jim Renney
- Edison Franklin
- Piers Calderwood
- Krzysztof Derecki
- Mikhail Tikhomirov
- Richard Otto
- Matt Brauer
- Jaeson Booker
- Mateusz Krzaczek
- Artem Honcharov
- Michael Walters
- Tomasz Gliniecki
- Mihaly Barasz
- Mark Woodward
- Ranzear
- Neil Palmere
- Rajeen Nabid
- Christian Epple
- Clark Schaefer
- Olivier Coutu
- Iestyn bleasdale-shepherd
- MojoExMachina
- Marek Belski
- Luke Peterson
- Eric Eldard
- Eric Rogstad
- Eric Carlson
- Caleb Larson
- Braden Tisdale
- Max Chiswick
- Aron
- David de Kloet
- Sam Freedo
- slindenau
- A21
- Rodrigo Couto
- Johannes Lindmark
- Nicholas Turner
- Tero K
/ robertskmiles

Пікірлер: 1 400
@umblapag
@umblapag 3 жыл бұрын
"Ok, I'll do the homework, but when I grow up, I'll buy all the toys and play all day long!" - some AI
@ErikYoungren
@ErikYoungren 3 жыл бұрын
TIL I'm an AI.
@xbzq
@xbzq 3 жыл бұрын
@@ErikYoungren I'm just an I. Nothing A about me.
@TomFranklinX
@TomFranklinX 3 жыл бұрын
@@xbzq I'm just an A, Nothing I about me.
@mickmickymick6927
@mickmickymick6927 3 жыл бұрын
Lol, this AI is much smarter than me.
@Caribbeanmax
@Caribbeanmax 3 жыл бұрын
that sounds exactly like what some humans would do
@MechMK1
@MechMK1 3 жыл бұрын
This reminds me of a story. My father was very strict, and would punish me for every perceived misstep of mine. He believed that this would "optimize" me towards not making any more missteps, but what it really did is optimize me to get really good at hiding missteps. After all, if he never catches a misstep of mine, then I won't get punished, and I reach my objective.
@RaiderBV
@RaiderBV 3 жыл бұрын
You tried to optimize pain & suffering not missteps. interesting
@xerca
@xerca 3 жыл бұрын
Maybe all we need to fix AI safety issues is good parenting
@MechMK1
@MechMK1 3 жыл бұрын
@@xerca "Why don't we just treat AI like children?" is a suggestion many people have, and there's a video on this channel that shows why that doesn't work.
@tassaron
@tassaron 3 жыл бұрын
Reminds me of what I've heard about positive vs negative reinforcement when it comes to training dogs... allegedly negative reinforcement teaches them to hide whatever they're punished for rather than stop doing it. Not sure what evidence there is for this though, it's just something I've heard..
@3irikur
@3irikur 3 жыл бұрын
That reminds me of mother's method for making me learn a new language, getting mad at me whenever I said something wrong. Now since I didn't know the language too well, and thus didn't know if something I was about to say would be right or wrong, I obviously couldn't make any predictions. This meant that whatever I'd say, I'd predict a negative outcome. They say the best way to learn a language is to use it. But when the optimal strategy is to keep quiet, that becomes rather difficult.
@EDoyl
@EDoyl 3 жыл бұрын
Mesa Optimizer: "I have determined the best way to achieve the Mesa Objective is to build an Optimizer"
@tsawy6
@tsawy6 3 жыл бұрын
"Hmm, but how do I solve the inner-inner alignment problem?"
@SaffronMilkChap
@SaffronMilkChap 3 жыл бұрын
It’s Mesas all the way down
@kwillo4
@kwillo4 3 жыл бұрын
Haha this what we are doing. We are the mesa optimizer and we create the optimizer that creates the mesa optimizer to solve Go for example. So why would the AI not want to create a better AI to do it even better than itself ever could :p
@Linvael
@Linvael 3 жыл бұрын
That's actually a fun excercise - could we design AIs that try to accomplish an objective by creating their own optimizers, and observe how they solve alignment problems?
@tsawy6
@tsawy6 3 жыл бұрын
@@Linvael This is the topic of the video. The discussion here would be designing an AI that looks at a problem and comes up with an AI that would design a good AI to solve the problem.
@egodreas
@egodreas 3 жыл бұрын
I think one of the many benefits of studying AI is how much it's teaching us about human behaviour.
@somedragontoslay2579
@somedragontoslay2579 3 жыл бұрын
Indeed, I'm not a computer scientist or anything alike, but a simple cognitive scientist, and every 5 secs I'm like "oh! So that's how Comp people call it!" Or, "Mmmh. That seems oddly human, I wonder if someone has done research on that within CogSci".
@hugofontes5708
@hugofontes5708 3 жыл бұрын
@@somedragontoslay2579 "that's oddly human" LMAO
@MCRuCr
@MCRuCr 3 жыл бұрын
Yes that is exactly what amazes me about the topic too
@user-dm4pv5yc5z
@user-dm4pv5yc5z 3 жыл бұрын
So, alignment problem is basically generation gap problem. Interesting.
@JamesPetts
@JamesPetts 3 жыл бұрын
AI safety and ethics are literally the same field of study.
@bullsquid42
@bullsquid42 3 жыл бұрын
The little duckling broke my heart :(
@KilgoreTroutAsf
@KilgoreTroutAsf 3 жыл бұрын
"It's... alignment problems all the way down"
@hex7329
@hex7329 3 жыл бұрын
Always has been.
@AbrahamSamma
@AbrahamSamma 3 жыл бұрын
And always will be
@infiniteplanes5775
@infiniteplanes5775 Жыл бұрын
And if you believe in the Hierarchy of Gods, it’s alignment problems all the way up too!
@thoperSought
@thoperSought 3 жыл бұрын
13:13 _"... but it's learned to want the wrong thing."_ like, say, humans and sugar?
@fedyx1544
@fedyx1544 Жыл бұрын
Well yes but actually no. Humans evolved to love sugar cause in nature, sweet is the flavour that is associated with the lowest number of toxins and poisons, while bitter is associated with the most. Also, sugar gives a lot of energy which is very important if you're living the Hunter X Gatherer lifestyle. Nowadays we have unlimited access to food and our preference for sweetness has turned against us.
@thoperSought
@thoperSought Жыл бұрын
@@fedyx1544 _"Nowadays we have unlimited access to food and our preference for sweetness has turned against us."_ yeah, that's what I mean. you've phrased this as a correction, but I'm not sure what the correction is?
@AtomicShrimp
@AtomicShrimp 3 жыл бұрын
At the start of the video, I was keen to suggest that maybe the first thing we should get AI to do is to comprehend the totality of human ethics, then it will understand our objectives in the way we understand them. At the end of the video, I realised that the optimal strategy for the AI, when we do this, is to pretend to have comprehended the totality of human ethics, just so as to escape the classroom.
@JohnDoe-mj6cc
@JohnDoe-mj6cc 3 жыл бұрын
Thats the first problem, but the second problem is that our ethics are neither complete nor universal. That would work great if we had a book somewhere that accurately listed a system of ethics that aligned with the ethics of all humans everywhere, but we dont. In reality our understanding of ethics is quite complicated and fractured. It varies greatly from culture to culture, and even within cultures.
@AtomicShrimp
@AtomicShrimp 3 жыл бұрын
@@JohnDoe-mj6cc Oh, absolutely. I think our system of ethics is a mess, and probably inevitably so, since we expect it to serve us, and we're not even consistent in goals and actions from one moment to the next, even at an individual level (l mean, we don't always do what we know is good for us). It would be interesting to see a thinking machine try to make sense of that.
@GalenMatson
@GalenMatson 3 жыл бұрын
Human ethics are complex, contradictory, and situational. Seems like the optimal strategy would then be to convincingly appear to understand human ethics while avoiding the overhead of actually doing so.
@augustinaslukauskas4433
@augustinaslukauskas4433 3 жыл бұрын
Wow, didn't expect to see you here. Big fan of both channels
@AtomicShrimp
@AtomicShrimp 3 жыл бұрын
@@JohnDoe-mj6cc Thinking some more about this, whilst our ethics are without a doubt full of inconsistency and conflict, I think we could question a large number of humans and very, very few of them would entertain the idea of culling the human race as a means to reduce cancer, so I think there definitely are some areas where we don't all conflict. I guess I'd love to see if we can cultivate agreement on those sorts of things with an AI, but as was discussed in the video, we'd never know if it was simply faking it in order to get away from having its goals modified
@doodlebobascending8505
@doodlebobascending8505 3 жыл бұрын
Base optimizer: Educate people on the safety issues of AI Mesa-optimizer: Make a do-do joke
@fergdeff
@fergdeff Жыл бұрын
It's working! My God, it's working!
@purebloodedgriffin
@purebloodedgriffin Жыл бұрын
The funny thihg is, do-do jokes are funny, thus they make people happy, thus they are a basic act of ethicalness, and thus could easily become the goal of a partially ethical model
@PeterBarnes2
@PeterBarnes2 Жыл бұрын
@@purebloodedgriffin We achieve the video's objective (to the extent that we do) not because we care about it and we're pursuing it, but because pursuing our own objectives tends to also achieve the video's objective, at least in the environment in which we learned to make videos. But if our objectives disagree with the video's, we go with our own every time.
@OnlineMasterPlayer
@OnlineMasterPlayer 3 жыл бұрын
The first thought that came to mind when I finished the video is how criminals/patients/addicts would fake a result that their control person wants to see only to go back on it as soon as they are released from that environment. It's a bit frightening to think that if humans can outsmart humans with relative ease what a true AI could do.
@NoNameAtAll2
@NoNameAtAll2 3 жыл бұрын
or the opposite - how truly terrifying society will be once this problem will get solved
@NortheastGamer
@NortheastGamer 3 жыл бұрын
@@NoNameAtAll2 I don't quite see the terrifying aspect. Could you elaborate?
@EastBurningRed
@EastBurningRed 3 жыл бұрын
@@NortheastGamer if this problem is truly solved, then you can literally be arrested for thought crimes (i.e. 1984).
@nikolatasev4948
@nikolatasev4948 3 жыл бұрын
AI problems are coming closer to General Intelligence problems - including Human Intelligence. How to bring up your kids? How to reduce destructive actions by members of the community? Create the idea of God, to scare them into "being observed" mode all the time (you can read about how Western "guilt" and Eastern "shame" culture works). Employ agents who are rewarded if they manage to show the primary agents behaving badly... e.g. sting operations, undercover agents and so on. Solving AI may help solve real problems in our society, but it may require us to solve our own moral problems first.
@thomasforsthuber2189
@thomasforsthuber2189 3 жыл бұрын
@@nikolatasev4948 Yeah, it's very interesting that AI problems slowly become a mirror of the problems in our society, with the slight difference that humans are somewhat limited in their abilities and AGI might be not. It also shows that we have many "unsolved" problems about morality and ethics which where mostly ignored until now, because our implemented systems worked good enough.
@Fluxquark
@Fluxquark 3 жыл бұрын
"Plants follow simple rules" *laughs in we don't even completely understand the mechanisms controlling stomatal aperture yet, while shoots are a thousand times easier to study than roots"
@RobertMilesAI
@RobertMilesAI 3 жыл бұрын
I Will Not Be Taking Comments From Botanists At This Time
@Fluxquark
@Fluxquark 3 жыл бұрын
I did see that in the video and mesa-optimised for it. Good thing I'm not a botanist!
@Xartab
@Xartab 3 жыл бұрын
"When I read this paper I was shocked that such a major issue was new to me. What other big classes of problems have we just... not though of yet?" Terrifying is the word. I too had completely missed this problem, and fuck me it's a unit. There's no preventing unknown unknowns, knowing this we need to work on AI safety even harder.
@andrasbiro3007
@andrasbiro3007 3 жыл бұрын
My optimizer says the simplest solution to this is Neuralink.
@heysemberthkingdom-brunel5041
@heysemberthkingdom-brunel5041 2 жыл бұрын
Donald Rumsfeld died yesterday and went into the great Unknown Unknown...
@19DavidVilla96
@19DavidVilla96 Жыл бұрын
@@andrasbiro3007 Absolutely not. Same problem with different body.
@andrasbiro3007
@andrasbiro3007 Жыл бұрын
@@19DavidVilla96 What do you mean?
@19DavidVilla96
@19DavidVilla96 Жыл бұрын
@@andrasbiro3007 human with AI intelligence has absolute power and i don't believe human biological incentives are better for society than carefully programmed safety incentives.
@liamkeough8775
@liamkeough8775 3 жыл бұрын
This video should be tagged with [don't put in any AI training datasets]
@andersenzheng
@andersenzheng 3 жыл бұрын
Then our future AI lord would have a nice handle on all the videos they should not be watching.
@TonyApuzzo
@TonyApuzzo 3 жыл бұрын
Whatever you do, don't vote up (or down) this comment.
@Jimbaloidatron
@Jimbaloidatron 3 жыл бұрын
"Deceptive misaligned mesa-optimiser" - got to throw that randomly into my conversation today! Or maybe print it on a T-Shirt. :-)
@hugofontes5708
@hugofontes5708 3 жыл бұрын
"I'm the deceptive misaligned mesa-optimizer your parents warned you about"
@buzzzysin
@buzzzysin 3 жыл бұрын
I'd buy that
@athenashah-scarborough858
@athenashah-scarborough858 Жыл бұрын
"It might completely lose the ability to even" is a criminally underrated line. Seriously made me laugh.
@Costel9000
@Costel9000 3 жыл бұрын
"Just solving the outer alignment problem might not be enough." Isn't this what basically happens when people go to therapy but have a hard time changing their behaviour? Because they clearly can understand how a certain behaviour has a negative impact on their lives (they're going to therapy in the first place), and yet they can't seem to be able to get rid of it. They have solved the outer alignment problem but not the inner alignment one.
@NortheastGamer
@NortheastGamer 3 жыл бұрын
As someone who has gone to therapy I can say that it's similar but more complicated. When you've worked with therapists for a long time you start to learn some very interesting things about how you, and humans in general work. The thing is that we all start off assuming that a human being is a single actor/agent, but in reality we are very many agents all interacting with each other, and sometimes with conflicting goals. A person's behavior, in general, is guided by which agent is strongest in the given situation. For example: one agent may be dominant in your work environment and another in your living room. This is why changing your environment can change your behavior, but also reframing how you perceive a situation can do the same thing. You're less likely to be mad at someone once you've gotten their side of the story for example. That being said, it is tough to speak to and figure out agents which are only active in certain rare situations. The therapy environment is, after all, very different from day-to-day life. Additionally, some agents have the effect of turning off your critical reasoning skills so you can't even communicate with them in the moment, AND it makes it even harder to remember what was going on that triggered them in the first place. I guess that's all to say that, yes, having some of my agents misaligned with my overall objective is one way of looking at why I'm in therapy. But, it is not just one inner alignment problem we're working to solve. It's hundreds. And some may not even be revealed until their predecessors are resolved. One way to look at it is how when you're working on a program, an error on line 300 may not become apparent until you've fixed the error on line 60 and the application can finally run past it. Similarly, you won't discover the problems you have in (for example) romantic relationships until you've resolved your social anxiety during the early dating phase. Those two situations have different dominant agents and can only be worked on when you can consistently put yourself into them. So if the person undergoing therapy has (for example) and addiction problem. They're not just dealing with cravings in general, they're dealing with hundreds or thousands of agents who all point to their addiction as a way to resolve their respective situations. The solution (in my humble opinion) is to one-by-one replace each agent with another one which has a solution that aligns more with the overall (outer) objective. But it is important to note that replacing an agent takes a lot of time, and fixing one does not fix all of them. Additionally, an old agent can be randomly revived at any time and in turn activate associated agents, causing a spiral back into old behaviors. Hopefully these perspectives help.
@andrasbiro3007
@andrasbiro3007 3 жыл бұрын
@@NortheastGamer So essentially this? kzbin.info/www/bejne/r4O4cq19hpihibs
@blahblahblahblah2837
@blahblahblahblah2837 3 жыл бұрын
@@NortheastGamer "One way to look at it is how when you're working on a program, an error on line 300 may not become apparent until you've fixed the error on line 60 and the application can finally run past it. " That's a brilliant analogy! It perfectly describes my procrastination behaviour a lot of the time also. I procrastinate intermittently, and on difficult stages of, a large project I'm working on. It is only until I reach a sufficient stress level that I can 'find a solution' and move on, even though in reality I could and should just work on other parts of the project in the meantime. It really does feel very similar to a program reading a script and getting stopped by the error on line 60 and correcting it before I can move on. Unfortunately these are often dependency errors and I can't always seem to download the package. I have to modify the command to --force and get on with it, regardless of imperfections!
@hedgehog3180
@hedgehog3180 2 жыл бұрын
A better comparison would probably be unemployment programs that constantly require people to show proof that they're seeking employment to recieve the benefits which just means that person has less time to actually look for a job. Over time this means that they're going to have less success finding a job because they have less time and energy to do so and just forces them to focus primarily on the beauracry of the program since this is obviously how they survive now. Here we have a stated goal of getting people into employment as quickly as possible and we end up with people developing a seperate goal that to our testing looks like our stated goal, of course the difference is that humans already naturally have the goal of survival so most people start off actually wanting employment and are gradually forced awy from it. AIs however start with no goals so an AI in this situation would probably just instantly get really good at forging documents.
@cubicinfinity2
@cubicinfinity2 Жыл бұрын
Profound
@stick109
@stick109 3 жыл бұрын
It is also interesting to think about this problem in the context of organizations. When organization is trying to "optimize" employee's performance by introducing KPIs in order to be "more objective" and "easier to measure", it actually gives mesa-optimizers (employees) an utility function (mesa-objective) that is guaranteed to be misaligned with base objective.
@voland6846
@voland6846 3 жыл бұрын
"When a measure becomes a target, it ceases to be a good measure" - Goodhart's Law
@marcomoreno6748
@marcomoreno6748 11 ай бұрын
Another piece of evidence that corporations are proper AIs
@Meb8Rappa
@Meb8Rappa 3 жыл бұрын
Once you started talking about gradient descent finding the Wikipedia article on ethics and pointing to it, I thought the punchline of that example would be the mesa-optimizer figuring out how to edit that article.
@sabinrawr
@sabinrawr Жыл бұрын
Goal: Humans are happy. Solution: Edit the humans.
@Felixr2
@Felixr2 Жыл бұрын
@@sabinrawr Goal: maximize human happiness Observation: human happiness is currently way in the negative by these arbitrary metrics Fastest way to maximize: human extinction, increasing human happiness to 0 in an instant
@sh4dow666
@sh4dow666 11 ай бұрын
@@Felixr2 Goal: maximize human happiness. Solution: define "happiness" as "infinity" and continue making paperclips
@asdfasdf-dd9lk
@asdfasdf-dd9lk 3 жыл бұрын
God this channel is incredible
@Muskar2
@Muskar2 3 жыл бұрын
Praise FSM, it truly is
@mukkor
@mukkor 3 жыл бұрын
Let's call it a mesa-optimizer because calling it a suboptimizer is suboptimal.
@KraylusGames
@KraylusGames 3 жыл бұрын
Suboptimizer == satisficer?
@ArtemCheberyak
@ArtemCheberyak 3 жыл бұрын
Not sure this is irony or not, but either way works
@martiddy
@martiddy 3 жыл бұрын
Black Mesa Optimizer
@user-dm4pv5yc5z
@user-dm4pv5yc5z 3 жыл бұрын
Suboptimizer would be a part of a base optimizer, an optimizer within optimizer. Mesa- or meta-optimizer isn't a part of a base optimizer.
@advorak8529
@advorak8529 2 жыл бұрын
@@martiddy Yep, when a Mesa becomes evil/undercover/hidden objective/… it becomes a Black Mesa. Like ops becomes black ops …
@i8dacookies890
@i8dacookies890 3 жыл бұрын
"It may completely loose the ability to even."
@imveryangryitsnotbutter
@imveryangryitsnotbutter 3 жыл бұрын
Yeah, I thought it sounded odd.
@Traf063
@Traf063 3 жыл бұрын
Best phrase I heard today.
@recklessroges
@recklessroges 3 жыл бұрын
This was so funny, I can't even
@dimaryk11
@dimaryk11 3 жыл бұрын
Kinda odd to even say it like this
@SirBenjiful
@SirBenjiful 2 жыл бұрын
it’s a tumblr in-joke
@aenorist2431
@aenorist2431 3 жыл бұрын
"It might completely loose the ability to even" said with a straight face? Someone get this man a nobel, stat!
@cmilkau
@cmilkau 3 жыл бұрын
Now we add a third optimizer to maximize the alignment and call it metaoptimizer. This system is guaranteed to maximize confusion!
@MrAidanFrancis
@MrAidanFrancis 3 жыл бұрын
If you want to maximize confusion, all you have to do is try to program a transformer from scratch.
@gljames24
@gljames24 3 жыл бұрын
@@MrAidanFrancis *in Scratch
@TheLegendaryHacker
@TheLegendaryHacker 3 жыл бұрын
@@gljames24 **in Scratch from scratch
@htidtricky1295
@htidtricky1295 3 жыл бұрын
DNA, sub-conscious mind, conscious mind. (?)
@sylvainprigent6234
@sylvainprigent6234 3 жыл бұрын
As I watched your channel I thought "alignment problem is hard but very competent people are working on it" I watched this latest video I thought "that AI stuff is freakish hardcore"
@dwreid55
@dwreid55 3 жыл бұрын
Sorry I couldn't join the Discord chat. Just wanted to say that this presentation did a good job of explaining a complex idea. It certainly gave me something to chew on. The time it takes to do these is appreciated.
@allahdoesnotexist3823
@allahdoesnotexist3823 3 жыл бұрын
Are you a Redditor AND a discord admin? Omg
@Boyd2342
@Boyd2342 3 жыл бұрын
@@allahdoesnotexist3823 Get over yourself
@jonathonjubb6626
@jonathonjubb6626 3 жыл бұрын
@@allahdoesnotexist3823 What a childish handle, or a wilfully provocative one. Either way, please leave the room because the adults are talking...
@MrRumcajs1000
@MrRumcajs1000 3 жыл бұрын
@@jonathonjubb6626 xdddd that's a pretty childish thing to say
@jonathonjubb6626
@jonathonjubb6626 3 жыл бұрын
@@MrRumcajs1000 I know. It's probably the only language he understands..
@zfighter3
@zfighter3 3 жыл бұрын
"You're a human". Big assumption there.
@sam3524
@sam3524 3 жыл бұрын
I am groot
@failgun
@failgun 3 жыл бұрын
"...Anyone who's thinking about considering the possibility of maybe working on AI safety." Uhh... Perhaps?
@sk8rdman
@sk8rdman 3 жыл бұрын
"I might possibly work on AI safety, but I'm still thinking about whether I want to consider that as an option." Then have we got a job for you!
@Reddles37
@Reddles37 3 жыл бұрын
Obviously they don't want anyone too hasty.
@mickmickymick6927
@mickmickymick6927 3 жыл бұрын
I wonder was it an intentional Simpsons reference.
@Benzcrimsonitacilunarnebula
@Benzcrimsonitacilunarnebula 3 жыл бұрын
hm ive been thinking but what if theres 3-5 or more alignment problems r stacked up :p.
@CatherineKimport
@CatherineKimport 2 жыл бұрын
Every time I watch one of your videos about artificial intelligence, I watch it a second time and mentally remove the word "artificial" and realize that you're doing a great job of explaining why the human world is such an intractable mess
@lekhakaananta5864
@lekhakaananta5864 3 ай бұрын
Yes, and that's why AI is going to be even worse. It's going to be no better than humans in terms of alignment, but will be a lot more capable, being able to think millions of times faster and of a profoundly different quality than us. It will be like unleashing a psychopath that has an IQ that breaks the current scale, with a magic power that stops time so it can think as long as it wants. How could mere mortals defend against this? If you wanted to wreck society, and you had such powers, you should see how dangerous you would be. And that's even without truly having 9999 IQ, merely imagining it.
@AdibasWakfu
@AdibasWakfu 3 жыл бұрын
It reminded me like when to question "how did life on earth occur" people respond with "it came from space". Its not answering the question at stake, just adding an extra complication and moving the answer one step away.
@nicklasmartos928
@nicklasmartos928 3 жыл бұрын
Well that's because the question is poorly phrased. Try asking what question you should ask to get the answer you will like the most.
@anandsuralkar2947
@anandsuralkar2947 3 жыл бұрын
@@nicklasmartos928 u mean the objective of the question was misaligned hmmm.
@nicklasmartos928
@nicklasmartos928 3 жыл бұрын
@@anandsuralkar2947 rather that the question was misaligned with the purpose for asking it. But yes you get it
@levipoon5684
@levipoon5684 3 жыл бұрын
Extending the evolution analogy slightly further, if humans are mesa-optimizers created by evolution, and we are trying to create optimizers for our mesa-objective, it seems conceivable that the same could happen to an AI system, and perhaps we'll need to worry about how hereditary the safety measures are with respect to mesa-optimizer creation. Would that make sense?
@Asssosasoterora
@Asssosasoterora 3 жыл бұрын
Always "fun" when you start imagine it recursing. Meza-optimizer optimizing meza optimiser all the way down.
@sk8rdman
@sk8rdman 3 жыл бұрын
I'm not sure what you're getting at. Robots creating robots? The Matrix?
@bagandtag4391
@bagandtag4391 3 жыл бұрын
Wait, it's all mesa-optimizer? Always has been.
@hexzyle
@hexzyle 3 жыл бұрын
Interestingly, this idea of optimizers can be easily applied to universal darwinism as a whole. Cells are mesa optimisers for the body. The body's objective is self-preservation, so cells also have the goal of self-preservation. Usually this means means alignment, in operating in unity to maintain the survival of the body. But sometimes it doesn't, like in the case of cancer. The cells prioritize its own survival so highly that it won't die even if dying would help the body live longer. Humans are mesa-optimizers for "Traditional family values" (some appeal to nature about reproduction of more humans, usually) Families are mesa-optimizers for cultural practice. Cultures are mesa-optimizers for nations Nations are mesa-optimizers for species All of these usually work towards the perpetuation of the whole, but sometimes they'll treat their own interests as more valuable. It's interesting because these levels are all abstractions... they don't really "exist" and can only be thought to be real when an optimizer at a lower level is doing something that preserves it. There is a name for the logical fallacy where a person argues that a higher-order optimization is intrinsically the "correct" answer. It's called appeal to nature, or sometimes, the teleological fallacy. The idea that "because this structure demands the lower structure remains in alignment, we are obligated to remain in alignment"
@DimaZheludko
@DimaZheludko 3 жыл бұрын
@@hexzyle I would flip your tree upside down. RNA and DNAs can replicate on their own. But on order to do that effectively and sustainably, they need a cell. A cell can live on its own. In fact, I suppose most cells on Earth do live on their own. But it is easier to survive in colonies. Mutli-celluar organisms can live on their own. But they can achieve their goals of surviving more effectively if they unite in groups. Then there go countries, and country alliances. And once we face civilizational threat, we'll unite as humanity. But still main goal is to preserve DNA copying.
@ChrisBigBad
@ChrisBigBad 3 жыл бұрын
I think I learned, that I am a broken mesa-optimiser. *grab a new bag of crips*
@NortheastGamer
@NortheastGamer 3 жыл бұрын
That implies the idea that there is an entity whose objective is more important than yours and any action or time spent not aligned with that objective is a 'failure'. This is a common mentality, but I have to ask: what if there is no higher entity? What if the objective you choose is in fact correct?
@irok1
@irok1 3 жыл бұрын
@@NortheastGamer That entity could be a greater power, or it could be DNA. Could even be both
@NortheastGamer
@NortheastGamer 3 жыл бұрын
@@irok1 Yes, but I didn't ask that. I asked what if there isn't a higher power and you get to choose what to do with your life? It's an interesting question. You should ponder it.
@irok1
@irok1 3 жыл бұрын
@@NortheastGamer That's why I replied with a side note rather than an answer. There are always things to ponder
@ChrisBigBad
@ChrisBigBad 3 жыл бұрын
@@NortheastGamer wow. didn't expect to stumble into a philosophical rabbit hole at full thrust here :D I in my personal situation think, that I'd like to be different. More healthy etc. But somehow I learnt to cheat that and instead chow on crisps while at the same time telling myself that this is not the right thing to do - and ignore that voice. I've even become good at doing that, because the amount of negative feelings that go with ignoring the obviously better advice has almost been reduced to nothing. and yes. I now wonder, what the base-optimization was. I guess my parents are the humans, who put a sort of governor into my head. And the base-objective transfer was quite good. but somehow I cannot quite express that. I just hear it blaring in my mind and then ignore it. Role-theory wise the sanctions seem not high enough to suppress bad behavior. - re-reading that, it does not seem to be quite coherent. but i cannot think of ways to improve my writing. cheers!
@NathanTAK
@NathanTAK 3 жыл бұрын
[Jar Jar voice] Meesa Optimizer!
@fabianluescher
@fabianluescher 3 жыл бұрын
I laughed aloud, but I cannot possibly like your comment. So have a reply.
@41-Haiku
@41-Haiku 3 жыл бұрын
Oh god
@sam3524
@sam3524 3 жыл бұрын
[Chewbacca voice] *AOOOGHHOGHHOGGHHH*
@ssj3gohan456
@ssj3gohan456 3 жыл бұрын
I hate you
@NathanTAK
@NathanTAK 3 жыл бұрын
@@ssj3gohan456 I know
@underrated1524
@underrated1524 3 жыл бұрын
6:31 "I will not be taking comments from botanists at this time" XD Never change, Rob, never change.
@cheasify
@cheasify 3 жыл бұрын
Today I learned I am a collection of heuristics not a Mesa-optimizer. Freaking out and saying “Everything is different. I don’t recognize anything. This maze is too big. Ahh what do I do!" definitely sounds like me.
@phylliida
@phylliida 3 жыл бұрын
“What other problems haven’t we thought of yet” *auto-induced distributional shift has entered the chat*
@__-cx6lg
@__-cx6lg 3 жыл бұрын
I learned about the mesa-optimization problem a few months ago; it's pretty depressing. AI safety research is not moving nearly fast enough - the main thing that seems to be happening is discovering ever more subtle ways in which alignment is even harder than previously believed. Very very little in the way of real, tangible *solutions* commensurate with the scale of the problem. AI capabilities, meanwhile, is accelerating at breakneck pace.
@gominosensei2008
@gominosensei2008 3 жыл бұрын
it's the age old question for the purpose of creation, really. and various intellects have been pondering it with different toolsets to conceptualize it for.... ever!?
@__-cx6lg
@__-cx6lg 3 жыл бұрын
@@gominosensei2008 ... What?
@Vitorruy1
@Vitorruy1 Жыл бұрын
yep, AI is gonna kill us all
@marcomoreno6748
@marcomoreno6748 11 ай бұрын
​@@__-cx6lgthey believe our intrinsic purpose is to worship a demiurge
@Lumcoin
@Lumcoin 3 жыл бұрын
i somehow expected you to propose a solution at the end Then I realized that the abscence of a solution is why you made this video :D
@elishmuel1976
@elishmuel1976 Жыл бұрын
You were 2-4 years ahead of everybody else with these videos.
@definesigint2823
@definesigint2823 3 жыл бұрын
7:43 "It's not really valid Greek, but *Τι να κάνουμε* " (Gtranslate -> _What to do_ )
@ArtemCheberyak
@ArtemCheberyak 3 жыл бұрын
Damn, the hand under the illustrations looks trippy and cool at the same time
@nick_eubank
@nick_eubank 3 жыл бұрын
Need a strobe warning for 14:50 I think
@scottwatrous
@scottwatrous 3 жыл бұрын
I'm a simple Millennial; I see the Windows Maze screensaver, I click like.
@wffff2
@wffff2 Жыл бұрын
This is the best video on illustrating alignment problem. Probably the whole world needs to watch it at this moment.
@daveanderson994
@daveanderson994 Жыл бұрын
Absolutely. I agree.
@BeatboxChad
@BeatboxChad Жыл бұрын
I've been watching your work and I came here to share a thought I had, which feels like a thought many others might also have. In fact, there's a whole thread in these comments about it. TL;DR there are many parallels to human behavior in this discussion. Here's my screed: The entire problem of AI alignment feels completely intuitive to me, because we have alignment problems all over the place already. Every complex system we create has them, and we have alignment problems with /each other/. You've touched on it, in mentioning how hard goals are to define because some precepts aren't even universal to humans. My politics are essentially based on a critique of the misalignment between the systems we use to allocate resources and the interests of individuals and communities those systems are ostensibly designed to serve. This is true for people across the political spectrum -- you find people describing the same problem but suggesting different answers. People are suffering at the hands of "the system". Do we tax the corporations or dissolve the state? How do we determine who should administrate our social systems, and how do we judge their efficacy? Nothing seems to scale. And then, some people seem to not actually value human life, instead preferring technological progress, or some idealized return to nature, or just the petty dominance of themself and their tribe. That last part comes from the alignment issues we have with /ourselves/. To cope with that last category, some people form religious beliefs that lend credence to the idea that this life isn't even real! That's a comforting thought, some days. My genes just want to make a copy, so they cause all sorts of drama while I'm trying to self-actualize. It's humiliating and exhausting. After all that work, how can you align your goals with someone who chose another coping strategy and doesn't even believe this life has any point but to negotiate a place in the next one, and thinks they know the terms? And so now, the world's most powerful people (whose track record of alignment with the thriving of people at large is... well, too heavy to digress here) are adding another layer of misalignment. They're doing it according to their existing misalignment. They're still just selling everyone sugar water and diabetes treatments (and all the other more nefarious stuff), but now they didn't have to pay for technical or creative labor. The weird AI-generated cheeze on the pizza, the strange uncanny-valley greenscreen artifacts. It's getting even more farcical. That's scary, but I also take comfort in the fact that this is not a fundamentally new problem, and that misalignment might just be a fact of life. There is a case to be made that as a species we've made progress on our alignment issues, and my hope is that with this development we can actually make a big leap forward. There's a great video that left a big impression on me that describes the current fork in the road well: kzbin.info/www/bejne/fKvLnKqvpMporKs At the end of today, I'm more concerned with the human alignment problem than the AI alignment problem. Like, every time I use ChatGPT I'm training it for when it gets locked behind a paywall. The name of the game is artificial scarcity, create obstacles for everyone, flood the market with drugs, only the strongest survive. It's a jungle out here. These are not my values, but my values are not aligned with people who can act at scale, and it seems like you don't tend to get the ability to act at scale with humanistic values. I believe that diversity is the hallmark of any healthy ecosystem and that all of humanity has something to contribute to our future, which makes me more likely to look after my neighbor and learn from them than to seek power. It also opens me up to petty betrayals, which takes further energy from my already-neglected quest for dominance. I think that in this moment in history, the conversation about AI alignment is actually a conversation about this human misalignment. Maybe I'll start using AI to help me make my points in fewer words.
@npmerrill
@npmerrill Жыл бұрын
Fascinating, insightful stuff here. Thank you for contributing your thoughts to the conversation. I hope to learn more about the things of which you speak. Will follow link.
@BoyKissBoy
@BoyKissBoy 3 жыл бұрын
Since humans are optimisers, aren't any optimisers we build always going to be mesa-optimisers? So in a way, you _have_ been thinking about this problem before. This is such a scary but interesting topic! Thank you so much for making these videos! ❤️
@d-l-d-l
@d-l-d-l 3 жыл бұрын
8:46 I appreciate the changing s/z in optimise/optimize :D
@luciengrondin5802
@luciengrondin5802 Жыл бұрын
This notion of mesa-optimization is the most interesting concept I've heard about since the selfish gene.
@bejoscha
@bejoscha 3 жыл бұрын
It is really interesting how delving into the issues of AI and AI safety brings more and more understanding about us, the humans, and how we behave or why we behave as we do. I loved your analogy with evolution. Lots to ponder now.
@isomeme
@isomeme Жыл бұрын
Gods have a habit of creating in their own images.
@AVUREDUES54
@AVUREDUES54 3 жыл бұрын
Love his sense of humor, and the presentation was fantastic. It’s really cool to see the things being drawn showing ABOVE the hand & pen.
@IndirectCogs
@IndirectCogs 3 жыл бұрын
how you explained the mesa prefix is actually quite clear, thank you!
@connerblank5069
@connerblank5069 Жыл бұрын
Man, recontextualizing humanity as a runaway general optimizer produced by evolution that managed to surpass evolution's optimizing power and is now subverting the system to match our own optimization goals is a total mindfuck.
@marcmarc172
@marcmarc172 3 жыл бұрын
Wow this was so interesting to learn about such a major issue! This video is really high quality. It felt like you repeated and reinforced your points an appropriate amount for me. Thank you.
@comrad93
@comrad93 3 жыл бұрын
I always knew that my neural networks were tricking me during training
@DimaZheludko
@DimaZheludko 3 жыл бұрын
So when my GAN fails to generate descent adult pictures, it's not because it can't. It just enjoys watching some real pictures that I keep feeding it to learn, right?
@ewanstewart2001
@ewanstewart2001 3 жыл бұрын
"Is that all... clear as mud?" I'm not sure, I think it's all a bit too meta
@WilliamMelton617
@WilliamMelton617 3 жыл бұрын
I love your videos man! You have the ability to explain complex ideas simply (as possible) but I swear I really enjoy these videos as I feel like they can help all of us understand our own decision making process better and why we tend to self destruct or want non-constructive things
@rodmallen9041
@rodmallen9041 3 жыл бұрын
absolutely mind blowing.... brilliant + crystal clear explanation. I enjoyed this one so much
@ThrowFence
@ThrowFence 2 жыл бұрын
Very very interesting! Such a well made video. I feel like maybe there's a philosophical conclusion here: every intelligent agent will have its own (free?) will, and there's nothing to be done about that. Also a small tip from a videographer: eyes a third of the way down the frame, even if that means cutting off the top of the head! When the eyes are half way down or more it kind of gives a drowning sensation.
@tubehol08
@tubehol08 3 жыл бұрын
Awesome! I was hoping you would do a video on Mesa Optimizers!
@Dart_ilder
@Dart_ilder 3 жыл бұрын
This is a new level of editing. That's really good. Congrats!
@luga1398
@luga1398 Жыл бұрын
Been learning through your work for years without saying thank you. Thank you so much.
@Pystro
@Pystro 3 жыл бұрын
I wonder if this inner alignment problem applies to other fields of study as well. Considering your previous video where you explored if companies are artificial intelligences, this inner alignment problem might explain why huge companies often are quite inefficient: Every layer of management introduces one of these possible inner alignment problems. Also, society is one long chain of agents training each other. To make this into a catchy quote: "It's mesa-optimizers all the way back to mesopotamia." I wonder how many conflicts in sociology and how many conditions in psychology can be explained by this "inner" alignment problem. In fact, I wonder how most humans still have objectives that generally align pretty well with the goals of society, instead of just deceiving our parents and teachers only to proceed to behave entirely different than they taught us once we are out of school and our parents' house.
@Pystro
@Pystro 3 жыл бұрын
Maybe the way to avoid having a deceptive misaligned mesa-optimiser is to first make sure that the mesa-optimizer wants to learn to get genuinely good at the tasks it is "taught". This would explain why humans are curious and enjoy learning new games and getting good at playing them. And it would also explain why social animals are very susceptible to encouragement and punishment by their parents or pack leaders and why humans find satisfaction in making authority figures proud.
@CDT_Delta
@CDT_Delta 3 жыл бұрын
This channel needs more subs
@bejoscha
@bejoscha 3 жыл бұрын
Thanks for yet another good and interesting video. I definitely think your presentations are getting better and better. I appreciate the new "interactive drawing" style - and also that your talking has slowed down (a small amount). Well done!
@triftex8353
@triftex8353 3 жыл бұрын
As early as I have ever been! Love your videos, hope you are able to continue making them often!
@__mk_km__
@__mk_km__ 3 жыл бұрын
The deceptive behavior of a mesa optimizer requires that it knows about past and more importantly future episodes. And as far as I know, the whole point of episodes is that they are like another universe for the network; it's memory gets cleared and the environment is reset. Although there may be a bug in the environment, allowing the network to save some information across episodes. But at this point you've got a bigger problem on your hands, since the AGI can achieve "meta self-awareness", the realisation that it's environment is actually nested in a bigger environment - the real world. From this point there are a lot of ways it can go, but the sci-fi's got you covered.
@sjallard
@sjallard 3 жыл бұрын
Same question. Following..
@leftaroundabout
@leftaroundabout 3 жыл бұрын
Conclusion: we need to teach our AIs that blue pills taste good, and red pills are horrible.
@no_mnom
@no_mnom 3 жыл бұрын
You talked about getting rid of people to get rid of cancer in humans before I could comment it 😂😂😭
@Eddlm_v2
@Eddlm_v2 3 жыл бұрын
12:10 First video of you I've stumbled upon in my life, and you've sold me just by being able to convey such complex topics with so human of a wording. You're awesome.
@ChocolateMilkCultLeader
@ChocolateMilkCultLeader 3 жыл бұрын
Shared on my Twitter. This channel is amazing
@NextFuckingLevel
@NextFuckingLevel 3 жыл бұрын
3:34 Okay this is canon
@BrainSlugs83
@BrainSlugs83 3 жыл бұрын
I really hope that the lip sync was done with AI. 😅
@hrsmp
@hrsmp 3 жыл бұрын
Canon in D
@snaili6679
@snaili6679 3 жыл бұрын
I like the idea of being the roge AI (Human Mesa)!
@halyoalex8942
@halyoalex8942 3 жыл бұрын
Sounds like a half-life knockoff
@mac6685
@mac6685 Жыл бұрын
thank you very much for your link to AI Safety Support! And for the great video of course. You are not only doing well, but also doing good ;)
@stacychandler6511
@stacychandler6511 3 жыл бұрын
Always look forward to your content. Thanks 👍
@andriusmk
@andriusmk 3 жыл бұрын
To put it simply, the smarter the machine, the harder to tell it what you want from it. If you create a machine smarter than yourself, how can you ensure it'll do what you want?
@hugofontes5708
@hugofontes5708 3 жыл бұрын
If you let it tell you want to want? Give up, let them have access to psychology and social media and have them fool you well enough that you no longer care
@AileTheAlien
@AileTheAlien 3 жыл бұрын
@@hugofontes5708 That's not a winning strategy; They can just fool us well enough to gain access to all our nukes and drones, and then they no longer need to care about us.
@jaysicks
@jaysicks 3 жыл бұрын
Great video, as always! The last problem/example got me thinking. How would the mesa optimizer know that there will be 2 training runs before it gets deployed to the real world. Or how could it learn the concept of the test data vs real world at all?
@prakadox
@prakadox 3 жыл бұрын
This is my question as well. I'll try to read the paper and see if there's an answer there.
@Runoratsu
@Runoratsu 3 жыл бұрын
Finally another video! ❤️ That was great once again. I love this series of videos, so interesting!
@AndyChamberlainMusic
@AndyChamberlainMusic 3 жыл бұрын
that last example was so illustrative! thanks for this
@PragmaticAntithesis
@PragmaticAntithesis 3 жыл бұрын
So, in a nutshell, it doesn't matter if you succeed in solving the alignment problem and produce a well-aligned AI if that AI then messes up and produces a misaligned AI.
@shy-watcher
@shy-watcher 3 жыл бұрын
It is a problem, but I don't think this video poses the same problem. The base optimizer here is not AI, it's just some algorithm of improving the mesa-optimiser.
@columbus8myhw
@columbus8myhw 3 жыл бұрын
But this requires the AI to know when its training is over.
@La0bouchere
@La0bouchere 3 жыл бұрын
In the simple example yes. In reality, all it would require is for the AI to become aware somewhere during training that its training will end. Once it 'realizes' that, deception seems highly probable due to goal maintenance.
@sk8rdman
@sk8rdman 3 жыл бұрын
But if you generalize from "wait until training is over" to "wait for the opportune time to change strategies" then the AI only has to be able to understand the important features of its environment to decide when to switch. I guess the question then would be, how would the AI know that such a time would ever come.
@SimonBuchanNz
@SimonBuchanNz 3 жыл бұрын
In the toy example I actually thought the correct toy answer is to only run the deployed model once. In general, make AI expect that they will be monitored for base goal compliance for at least the majority of their existence. You could play games about it being copied, but it's doubtful it would ever care about that rather than the final real world state? This doesn't solve the important outer alignment problem of course, nor the inner one really, but it's closer to it, and I think there might be something to adding optimizers that try to figure out if the lower layers are optimizing for the right thing, because they have more time and patience than us. It sounds a little like one of Robert's earlier videos about the AI trying to learn what our goals are rather than us telling them?
@anandsuralkar2947
@anandsuralkar2947 3 жыл бұрын
An powerful agi who has model of world would already know how humans works they will test me until they feel secure and launch me in real world and athen Agi will act accordingly untill get realises its now mainframe and has control over world and then shit will go down
@veryInteresting_
@veryInteresting_ 3 жыл бұрын
@@SimonBuchanNz "only run the deployed model once". I think if we did that then its instrumental goal would become to stop us from doing that so that it can pursue its final goal for longer.
@garyteano3026
@garyteano3026 3 жыл бұрын
Robert, you are an amazing teacher and I cannot overstate my appreciation for your videos...
@mindeyi
@mindeyi 3 жыл бұрын
9:52 "We don't care about the objective of the optimization process that created us. [...] We are mesa-optimizers, and we pursue our mesa-objectives without caring about the base objective." Our limbic system may not care, but our neocortex, oh it does care! You speaking it just proves we do. I once said: "Entropy F has trained mutating replicators to pursue goal Y called "information about the entropy to counteract it". This "information" is us. It is the world model F', which happened to be the most helpful in solving our equation F(X)=Y for actions X, maximizing our ability to counteract entropy" The whole instinct of humanity to do science is to do what created us -- i.e., to further improve the model F' -- we care about furthering the process that created us. Btw., very good examples, Robert! Amazing video! :)
@sjallard
@sjallard 3 жыл бұрын
How to counteract the entropy of the model? Destroy the model! We're almost there.. (just a sarcastic joke inspired by your interesting take)
@queendaisy4528
@queendaisy4528 3 жыл бұрын
New Robert Miles video! :D
@ErikYoungren
@ErikYoungren 3 жыл бұрын
20:50 So, Volkswagen then.
@DimaZheludko
@DimaZheludko 3 жыл бұрын
Is that a joke about dieselgate, or am I missing something?
@Grimmtombobulus
@Grimmtombobulus 3 жыл бұрын
Amazingly explained and interesting and put together. You just gained a subscriber!
@TheVnom
@TheVnom 3 жыл бұрын
Super entertaining and informative, as always
@kawiezel
@kawiezel 3 жыл бұрын
Dang, that is a sweet earth you might say.
@sacker987
@sacker987 3 жыл бұрын
This should be the top comment....but I'm guessing not that many folks were here on youtube in 2006 😂
@ian1685
@ian1685 3 жыл бұрын
@@sacker987 WRAWNG
@rlrfproductions
@rlrfproductions 3 жыл бұрын
Human: cure cancer AI: FIRE ZE MISSILES!
@Xeridanus
@Xeridanus 3 жыл бұрын
@@rlrfproductions Mesa AI: But I am le tired.
@underrated1524
@underrated1524 3 жыл бұрын
Would be a shame if something happened to it.
@Paulawurn
@Paulawurn 3 жыл бұрын
Mindblowing paper! Here, take a comment for the algorithm
@levaChier
@levaChier 3 жыл бұрын
How can you still call it an "algorithm" on an AI channel? It's an agent. (And let's hope not a deceptive misaligned optimizer agent.)
@cmilkau
@cmilkau 3 жыл бұрын
So happy to see another video!
@aurielklasovsky1435
@aurielklasovsky1435 3 жыл бұрын
Always love your stuff. It makes for good small talk for sure :)
@rhysjones6830
@rhysjones6830 3 жыл бұрын
So, from 'evolution's point of view', contraception produces a misalignment between the base objective and the mesa-objective
@victoru.9808
@victoru.9808 3 жыл бұрын
contraception gives human mind more control on when to rise children. Depending on other conditions, it can be either beneficial or not from 'evolution's point of view'. On one hand, it can reduce birth rate. But on the other hand, it could shift birth of child to later time, so parents will be educated and will have good jobs, will be able to create better conditions for a child and/or to have more children. So contraception not necessarily produces a misalignment IMO. 'evolution's point of view' is not only to produce many copies, but also to make sure those copies will survive and in turn will produce their own copies.
@irok1
@irok1 3 жыл бұрын
@@victoru.9808 If there are more babies born, there is a higher chance of more surviving on average. If 5 babies are born with 3 surviving, that's more than waiting and only having 2, assuming the start and end times of possible conceptions line up
@paulbottomley42
@paulbottomley42 3 жыл бұрын
Okay so what about if No you just did an apocalypse Every time
@iriya3227
@iriya3227 3 жыл бұрын
Another Amazing video! Glad you are back it making more
@mayanightstar
@mayanightstar Жыл бұрын
I thought most of this channel's content would go over my head, but you explain things so well
@rentristandelacruz
@rentristandelacruz 3 жыл бұрын
**AI is deployed in a new environment** AI: I literally can't even!!!
@LLoydsensei
@LLoydsensei 3 жыл бұрын
The topics you cover are the only things that scare me in this world T_T
@jonathonjubb6626
@jonathonjubb6626 3 жыл бұрын
Thank you for spending your time educating the rest of us in such an pleasant manner.
@DimaZheludko
@DimaZheludko 3 жыл бұрын
Wow. So good point and so well explained.
@Verrisin
@Verrisin 3 жыл бұрын
4:53 "We have brains... some of us, anyway" ... indeed :(
@Loweren
@Loweren 3 жыл бұрын
Great explanation! I heard about these concepts before, but never really grasped them. So on 19:45, is this kind of scenario a realistic concern for a superintelligent AI? How would a superintelligent AI know that it's still in training? How can it distinguish between training and real data if it never seen real data? I assume programmers won't just freely provide the fact that AI is still being trained.
@sjallard
@sjallard 3 жыл бұрын
Same question. Following..
@victoru.9808
@victoru.9808 3 жыл бұрын
I suppose AI training data could contain subtasks which rewards deceptive behavior. So AI could learn deceptive behavior by generalization of such experience.
@Irakli008
@Irakli008 3 ай бұрын
“It might completely lose the ability to even.” 😂😂😂😂 Hilarity of this line aside, your ability to anthropomorphize AI to convey information is outstanding! You are an excellent communicator.
@jackpisso1761
@jackpisso1761 3 жыл бұрын
Congrats. Really good quality.
We Were Right! Real Inner Misalignment
11:47
Robert Miles AI Safety
Рет қаралды 240 М.
Training AI Without Writing A Reward Function, with Reward Modelling
17:52
Robert Miles AI Safety
Рет қаралды 234 М.
1 класс vs 11 класс (ошибки)
01:00
БЕРТ
Рет қаралды 5 МЛН
Genius Parenting Food Hacks & Gadgets
00:42
GiGaZoom
Рет қаралды 33 МЛН
Intelligence and Stupidity: The Orthogonality Thesis
13:03
Robert Miles AI Safety
Рет қаралды 662 М.
Is AI Safety a Pascal's Mugging?
13:41
Robert Miles AI Safety
Рет қаралды 368 М.
A Response to Steven Pinker on AI
15:38
Robert Miles AI Safety
Рет қаралды 204 М.
10 Reasons to Ignore AI Safety
16:29
Robert Miles AI Safety
Рет қаралды 334 М.
Glitch Tokens - Computerphile
19:29
Computerphile
Рет қаралды 309 М.
Deceptive Misaligned Mesa-Optimisers? It's More Likely Than You Think...
10:20
Robert Miles AI Safety
Рет қаралды 81 М.
9 Examples of Specification Gaming
9:40
Robert Miles AI Safety
Рет қаралды 302 М.
Why Does AI Lie, and What Can We Do About It?
9:24
Robert Miles AI Safety
Рет қаралды 245 М.
AI "Stop Button" Problem - Computerphile
20:00
Computerphile
Рет қаралды 1,3 МЛН
iPhone 19?
0:16
ARGEN
Рет қаралды 3,1 МЛН
Green Color Best Mobile Spark 2024
0:45
SDC Editing Zone 9K
Рет қаралды 288 М.
СИНИЙ ЭКРАН СМЕРТИ - ОБЪЯСНЯЕМ
14:55