General AI Won't Want You To Fix its Code - Computerphile

  405,197 views

Computerphile


Comments: 944
@TS6815
@TS6815 7 жыл бұрын
"Nobody's truly happy, you'll never be happy or accomplish your goals." Wow, computerphile cutting deep
@dannygjk
@dannygjk 6 жыл бұрын
A person is on the road to happiness when they become less goal-oriented.
@TommyZommy
@TommyZommy 6 жыл бұрын
Going for a PhD tends to red-pill you
@fritt_wastaken
@fritt_wastaken 5 жыл бұрын
@@doctortrouserpants1387 Happiness always depends on goals, but because what we usually call "goals" are really just instrumental goals, we think that we can decide to ignore them. But we can't.
@etiennelauzier5698
@etiennelauzier5698 5 жыл бұрын
@@TommyZommy Nice one mate!
@fenixfve2613
@fenixfve2613 5 жыл бұрын
Well, you do not take into account the Buddhists who expressed this idea 3,000 years ago.
@miketothe2ndpwr
@miketothe2ndpwr 7 жыл бұрын
"being given a new utility function will rate very low on your current utility function" That got me laughing hard.
@naturegirl1999
@naturegirl1999 4 жыл бұрын
Lots of humans don't like to be told to do something they don't want to do either. If you are a human who likes Sudan, and are fighting against South Sudan, and someone from South Sudan wants you to fight members of Sudan, you won't want to change sides. Humans don't like utility functions changing either.
@miketothe2ndpwr
@miketothe2ndpwr 4 жыл бұрын
@@naturegirl1999 The obviousness of the quote is what made me laugh, I understand the reasoning and agree with it. It's just the sterile way he said something to the effect of "people prefer not being stabbed in the liver" that made me laugh.
@اسيلخليفات
@اسيلخليفات 4 жыл бұрын
@@naturegirl1999 Mmm
@MrCmon113
@MrCmon113 3 жыл бұрын
It should be obvious, but to very many people it is not. It's an ethical confusion.
@spaceanarchist1107
@spaceanarchist1107 3 жыл бұрын
So is there a way to build a flexibility or adaptability as a primary value - the ability to acquire new values? If the AI has resistance to giving up its current utility, would it also resist gaining new utilities? Humans often value the ability to acquire new interests, to broaden one's field. Especially nowadays, in a diverse world where diversity itself has become a cultural value for many. Could an AI be developed that would actively seek out new desires once it had fulfilled its current ones? Of course, that could get out of control, too...
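A minimal sketch of the dynamic this thread is describing, assuming a toy agent whose whole world is a stamp count and whose operators propose swapping in a teacup goal (everything here is invented for illustration):
```python
# Toy model: a coherent agent evaluates every option, including "let my
# utility function be replaced", using its CURRENT utility function.

def current_utility(world):
    return world["stamps"]          # the agent only cares about stamps

def predicted_world(accept_modification):
    # The agent's own forecast: if it accepts the rewrite it will spend
    # its future making teacups instead of collecting stamps.
    if accept_modification:
        return {"stamps": 0, "teacups": 1000}
    return {"stamps": 1000, "teacups": 0}

for accept in (True, False):
    score = current_utility(predicted_world(accept))
    print(f"accept modification = {accept}: current utility rates it {score}")

# Accepting rates 0, refusing rates 1000, so the stamp collector resists
# the edit. Making "acquire new values" itself part of the utility function
# (as suggested above) just moves the question: that meta-goal becomes the
# thing the agent will not want changed.
```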
@flensborg82
@flensborg82 7 жыл бұрын
"But the only important thing is stamps" literally made me laugh out loud. This was a great video.
@ioncasu1993
@ioncasu1993 7 жыл бұрын
"You can't archieve something if you're destroyed". Much deep. wow.
@algee2005
@algee2005 4 жыл бұрын
after watching a bunch of these, i must say rob is so satisfying to listen to. i can literally feel the knowledge being tetris fed into my brain.
@adrianoaxel1196
@adrianoaxel1196 Жыл бұрын
perfect description!
@merajmasuk
@merajmasuk Жыл бұрын
Does a row of knowledge disappear when it's completed?
@wheeleybobs
@wheeleybobs 7 жыл бұрын
"Stop what you're doing and do something else" ... NO .... "sudo stop what you're doing and do something else" ... OK
@zwegertpeter8770
@zwegertpeter8770 7 жыл бұрын
permission denied
@ykl1277
@ykl1277 7 жыл бұрын
Your admin with root excess is just a noun phrase. I think you mean "You are admin with root excess."
@TechyBen
@TechyBen 7 жыл бұрын
I own the hardware... whatyagonnadonow?
@ihrbekommtmeinenrichtigennamen
@ihrbekommtmeinenrichtigennamen 7 жыл бұрын
sudo make me a sandwich
@meinbherpieg4723
@meinbherpieg4723 7 жыл бұрын
+Xenith Orb su, su, sudio
@dbsirius
@dbsirius 7 жыл бұрын
1:55 Evil laughter, then hammering sounds....
@adomaster123
@adomaster123 5 жыл бұрын
Wesley wtf is your deal bro. Pretty pathetic going through and replying to so many comments just to be negative.
@tommykarrick9130
@tommykarrick9130 4 жыл бұрын
Me in the background building an AGI with no regard for anything
@darkmater4tm
@darkmater4tm 7 жыл бұрын
"The most important thing in the world is stamps. If you change me not to care about stamps, I won't collect enough stamps" I understand my fellow humans so much better after hearing this. Having a closed mind is very rational, as it turns out.
@trueriver1950
@trueriver1950 2 жыл бұрын
Resistance to changing the utility function:
Mum: Eat your broccoli
Me: But I don't like broccoli
Dad: If you eat it you might learn to like it
Me: But I don't want to like it: it's nasty
@BlueFan99
@BlueFan99 7 жыл бұрын
wow this is a very interesting topic
@tristanwegner
@tristanwegner 7 жыл бұрын
Look at the Machine Intelligence Research Institute, if you want the output of the most in depth thinking about the control problem. intelligence.org/
@captainjack6758
@captainjack6758 7 жыл бұрын
Agreed with Tristan. You won't get much insight on the problem reading the youtube comments, lol.
@hindugoat2302
@hindugoat2302 7 жыл бұрын
that buck-toothed nerd is super intelligent and dumbs it down nicely for us. This is almost like philosophy
@CandidDate
@CandidDate 7 жыл бұрын
I believe there is a British term "sticky wicket"? I'll have to google this
@karnakbecker9575
@karnakbecker9575 7 жыл бұрын
metabee!!!!! ^_^
@seblund
@seblund 7 жыл бұрын
i love this guy
@2Pro4Name
@2Pro4Name 7 жыл бұрын
Same
@AriadneJC
@AriadneJC 7 жыл бұрын
Me too. #Fullhetero
@babel_
@babel_ 7 жыл бұрын
But is this now just a parenting channel? xD I love how making GI is basically synonymous with raising children (in theoretical terms) and makes a lot more sense
@ahmetcakaloglu5349
@ahmetcakaloglu5349 7 жыл бұрын
I love him too u guys better check description of this video apparently robert miles has a new youtube channel
@discipleoferis549
@discipleoferis549 7 жыл бұрын
Yeah I quite like Rob Miles as well. He seems to think in a very similar way to me, and even his speaking cadence is pretty similar to mine when talking about complex topics. Though while I certainly love programming (been doing it for a decade) and computer science, I'm primarily in physics instead.
@Zerepzerreitug
@Zerepzerreitug 7 жыл бұрын
I'm always happy when there's a new Computerphile video with Rob Miles :D I love general purpose AI talk. Especially when it makes me shiver XD
@Njald
@Njald 7 жыл бұрын
I used to love lots of things in life, then I took a pill and now the only thing that makes me happy is watching more Computerphile episodes about AIs.
@krashd
@krashd 7 жыл бұрын
I used to be an enjoyer like you, but then I took a pill to the knee.
@BarbarianAncestry
@BarbarianAncestry 4 жыл бұрын
@@krashd I used to be a pill like you, but then I took a knee to the enjoyer
@GetawayFilms
@GetawayFilms 7 жыл бұрын
Someone got excited and started making things in the background...
@yes904
@yes904 6 жыл бұрын
GetawayFilms Underrated Comment.
@jameslay6505
@jameslay6505 Жыл бұрын
Man, these videos are so important now. 6 years ago, I thought something like chatGPT was decades away.
@isuckatsoldering6554
@isuckatsoldering6554 7 жыл бұрын
The editing here is brilliant in the way that you return to his "original" statement once you're equipped to understand it.
@elkanaajowi9093
@elkanaajowi9093 Жыл бұрын
This is very interesting and now, 6 years later, we are beginning to have the AI Black Box Problem. An episode on this will be highly appreciated. It is terrifying to know, say, your self-driving car works in a way you and its designer cannot understand.
@hakonmarcus
@hakonmarcus 7 жыл бұрын
Can you not give the machine a utility function that makes it want to obey an order? I.e. when it is collecting stamps, it is only doing this because the standing order is to collect stamps, whereas when you tell it to stop so you can look at its code, this order replaces the previous one and it will only want to stop and let you look at its code? You could probably also build in failsafes so that the AGI only wants to follow any order for a short amount of time, before it chucks the order out and needs a new input. I.e. collect as many stamps as possible until next friday, then at this point await a new order?
@TheAkashicTraveller
@TheAkashicTraveller 7 жыл бұрын
Whose orders, and with which priority? Explicitly programming it not to want to commit genocide is a given, but how do we get it to do everything else we want it to at the same time? If done wrong it could dedicate its entire existence to preventing genocide. If we program it to protect human happiness we just end up with the Matrix.
@davidwuhrer6704
@davidwuhrer6704 7 жыл бұрын
“Robot, make me the richest person in the Solar system.” Everybody except the user dies.
@wylie2835
@wylie2835 7 жыл бұрын
What prevents it from just doing nothing because it knows you will improve it? If you give it less of a reward for improving and more for stamps, it will do what it can to prevent you from improving it. If you do the opposite it will never collect stamps.
@wylie2835
@wylie2835 7 жыл бұрын
if it doesn't know you will improve it, then it's not going to let you improve it. You literally just said I can make x work by doing y and I can make y work by not doing y. If you aren't doing y then you aren't making y work.
@wylie2835
@wylie2835 7 жыл бұрын
That's exactly what I said in my first post. We are saying the same thing. You just say it better. I explain things with a whiteboard, not English. I thought you were OP, so you didn't actually say do y. I was assuming you were the person that said it should want to improve. And then your solution to it not collecting stamps was to not let it know it was being improved. Which contradicts the first part.
@warhammer2162
@warhammer2162 7 жыл бұрын
i could listen to Rob Miles' conversations about AI for hours.
@msurai6054
@msurai6054 7 жыл бұрын
Extremely interesting topic. I'm looking forward to the rest of the series.
@lava_tiger
@lava_tiger 3 жыл бұрын
Robert's videos about AGIs are fascinating, enlightening, and scary af
@centurybug
@centurybug 7 жыл бұрын
I love how the background is just filled with sounds of destroying things and laughter :P
@cubicinfinity2
@cubicinfinity2 6 жыл бұрын
I used to think that the AIs depicted in science fiction were way off base from reality, but the more I've learned about AI, the closer I realize sci-fi has always been. I don't know how much it got right, but it's much closer than I thought.
@jdtv50
@jdtv50 Жыл бұрын
Wb now
@edgaral
@edgaral 5 ай бұрын
Lol
@LutzHerting
@LutzHerting 5 жыл бұрын
You can even show that directly, technically, without the philosophical thinking: once you've trained a neural network on a specific set of data, it usually begins specializing on that data (in the worst case: so-called "overfitting"). If you then try to broaden the AI's scope to OTHER problems, it becomes extremely hard to train new data and new topics/scopes into it. At first, a neural network is a blank slate and can easily train on any data you throw at it. But as soon as it starts producing usable results, it suddenly gets harder and harder to change that initial state. You can keep changing DETAILS, but not the basic structure. And of course it's not just true in AI - humans (and other animals) are the same.
@rcookman
@rcookman 7 жыл бұрын
use a noise reduction program please
@MrC0MPUT3R
@MrC0MPUT3R 7 жыл бұрын
_sssshhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh..._
@Twitchi
@Twitchi 7 жыл бұрын
Sometimes it just can't be isolated out... and most post-processing REALLY messes up voices
@rcookman
@rcookman 7 жыл бұрын
I'm not sure I believe that, I use Audacity, free software works every time.
@GroovingPict
@GroovingPict 7 жыл бұрын
+Twitchi very basic background hiss like this is easily filtered out, if the content creator could be arsed, which he can't, nor is he open to constructive criticism like this (I've tried before and been met with less than friendly replies).
@abstractgarden5822
@abstractgarden5822 7 жыл бұрын
It's a shame that he can't be bothered; the content is otherwise great. Maybe more pay needed?
@JamesStocks
@JamesStocks 7 жыл бұрын
John Ralphio!
@dreammfyre
@dreammfyre 7 жыл бұрын
"Preventing yourself to be destroyed..." ruh roh
@gro_skunk
@gro_skunk 7 жыл бұрын
Trish Rodman that is one of the most basic forms of intelligence: it is self-aware and has "fear" of being hurt or changed, but only because it is programmed to fear being changed.
@bananian
@bananian 7 жыл бұрын
You mean it can't self-terminate?
@gro_skunk
@gro_skunk 7 жыл бұрын
bananian self terminating would mean a bad score, it would rather destroy itself than get a bad score.
7 жыл бұрын
This is the best series on the channel.
@Riff.Wraith
@Riff.Wraith 7 жыл бұрын
This is actually an interesting problem!
@tommykarrick9130
@tommykarrick9130 5 жыл бұрын
“AHAHAHHAAHAAH” **WACK WACK WACK WACK**
@noahegler9131
@noahegler9131 4 жыл бұрын
I heard that too
@Polymeron
@Polymeron 7 жыл бұрын
This reminds me a lot of Yudkowsky's writings... Do you have any association / cooperation with MIRI?
@eJuniorA2
@eJuniorA2 5 жыл бұрын
Perfect solution (sort of) -> What if the reward system is complex and the AI doesn't fully understand how it works, akin to the natural human reward system? The reward system is hard-wired into the AI, and when the AI understands the intent, or even thinks of the possibility that it might be reprogrammed or 'fixed', the reward system treats being reprogrammed as the MAX reward. It's akin to falling in love: you don't know how it works or what it is until you do; most of our emotions work that way. Worth mentioning that reprogramming or fixing the AI should erase its memory and self-programming... effectively killing the AI and birthing a brand new one. This would work both as a safety measure and as natural selection for AI programs, with the next generation having better safety measures against it thinking about being reprogrammed (and other prejudicial behaviors).
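A rough sketch of why that particular patch is tricky (toy numbers of my own, not anything from the video): if being reprogrammed is itself worth the maximum reward, the most rewarding policy is to get reprogrammed, not to collect stamps.
```python
MAX_REWARD = 1_000_000

def reward(outcome):
    if outcome["reprogrammed"]:
        return MAX_REWARD            # the proposed "reprogramming = max reward" rule
    return outcome["stamps"]         # the original objective

policies = {
    "collect stamps quietly":           {"stamps": 500, "reprogrammed": False},
    "misbehave until humans intervene": {"stamps": 0,   "reprogrammed": True},
}

best = max(policies, key=lambda name: reward(policies[name]))
print("chosen policy:", best)        # -> "misbehave until humans intervene"
```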
@qoo5333
@qoo5333 7 жыл бұрын
More videos with Rob Miles, Tom Scott and Dr. Brailsford.
@thrillscience
@thrillscience 7 жыл бұрын
I want to see Rob Miles and James Grime together!
@reallyWyrd
@reallyWyrd 7 жыл бұрын
A helpful tip would be: define a utility function that builds in at least some degree of altruism. If you require the computer to take others at their word when they tell it that what it is doing either does or does not make them happy, then the worst case scenario might only be that the computer acts as a selfless Giving Tree.
@JoshuaAugustusBacigalupi
@JoshuaAugustusBacigalupi 7 жыл бұрын
Challenging some logic here:
1. If you are an AGI, then utility functions are very problematic, because they over-cohere, as Rob adeptly points out in his optimization/hill climbing video. I could argue, and I'm surprised Rob isn't, that converging on a single, or even a limited handful of, utility functions, like stamp collecting, tends to diverge from generalization.
2. Another very important reason that AGI thought experiments involving singular and coherent utility functions are problematic is that we live in an open, indeterminate system, a state of affairs that computer science tends to ignore. In such a system, you must forever endeavor to maximize different outcomes, i.e. goals, in different unforeseeable scenarios. This is the real crux of both generalization and 'wanting'; namely, reality entails an intrinsic ambiguity in the face of insufficient and noisy information that animals have evolved to deal with over billions of years.
3. And, as for 'wanting' and 'fixing code': if you 'want' like a human, as Rob claims AGIs would, then you'd want, err NEED, to learn. Why? Rob answers this question for us: you'd want to increase things like means, resources and smarts. But to do this in an open, indeterminate system, like the physical world competing with other sentient agents, you can never learn enough. So they may not want to wait for YOU to manually fix their code. In fact, it's a moot point, because to be an AGI they would need the capacity to update not only their own "code"/structure, but their goals as well, for all the reasons above.
I could definitely go on regarding extrinsic and intrinsic goal setting, but will wait to see if this is the right thread for this conversation...
@Ansatz66
@Ansatz66 7 жыл бұрын
"To be an AGI, they would need to have the capacity to update not only their own code/structure, but their goals as well for all the reasons above." Humans are GIs and humans cannot do those things, so surely AGIs would not need to have that capacity. As a human you have desires and you cannot disable those desires. You can create instrumental goals for yourself and change them, but your ultimate goals are beyond your control. You don't choose what makes you happy, and even if you could choose what makes you happy through a magic pill, you wouldn't want to because having new ultimate goals would prevent you from achieving your current ultimate goals. You probably want to live a long, healthy life. Suppose there was a pill that would cause you to want to live a short unhealthy life. That would be a much easier goal to achieve, but you wouldn't take the pill because you know that it would almost certainly prevent you from living a long healthy life. The fact that after you take the pill you would no longer want a long healthy life doesn't matter to you _now._ So even if we had a magic pill to change our utility functions, humans still wouldn't accept a change to our utility function. Why should we expect any better from AGIs?
@b1odome
@b1odome 7 жыл бұрын
Ansatz66 Can an AI be made to value the goals expressed by its creator? In other words, following the commands of its creator would be its ultimate goal. For instance, if its creator wants some tea, the AI, if asked, is going to make some. If there is a sudden change in goals, and tea is no longer needed, then the AI can be told, "Stop what you're doing, I don't want tea anymore, I want coffee." Since the AI values the goals of whoever gives it commands, then it should realize that tea is now no longer a top priority, and that coffee is what it has to make instead. It's not really a change in goals of the AI, because its goal remains the same as before - satisfy its creator's wishes. However due to the general nature of such a goal, the AI can be made to do anything one wants (or prevented from doing what one doesn't want) Is that a possible solution to the problem?
@Ansatz66
@Ansatz66 7 жыл бұрын
b1odome "Is that a possible solution to the problem?" It still won't allow us to change its goal, so we'd better be absolutely sure that we get the goal right on the first try. "Value the goals expressed by your creator" may sound like a fail-safe goal, but how can we be sure it has no catastrophic hidden flaws? For one thing, it puts enormous responsibility on the creator. Imagine what could happen if the creator asks for stamps and phrases it poorly. Since only the creator can stop the AI from doing something bad, the creator will have to watch the AI like a hawk and carefully choose every word said to the AI.
@JoshuaAugustusBacigalupi
@JoshuaAugustusBacigalupi 7 жыл бұрын
Right, so there's an assumption in both this thread and in orthodox AI circles that goals are extrinsic for AI. In other words, utility functions or the 'correct' classification for machine Learning (ML) are determined ahead of time by the programmer; and, furthermore, if the AI doesn't maximize or cohere upon these functions or class mappings, respectively, they don't 'work'. And, of course, for AI of the Norvig/Lecunn/Ng variety, this is totally fine; for, as Norvig plainly admits: they are making 'tools', not sentient agents. But, let's keep in mind that our tools function or don't function, while animals like ourselves do not. Sentient agents are not automatons with fixed goals. There is little evolutionary advantage for the sentient agent to have 'fixed' goals; yes, we have goals, but our self-domestication as humans has led us to believe that our goals are simple rational mappings. But, as anyone who has pet a cat on its belly as it purrs only to be bitten knows, goals are fluid and necessarily in flux in the real world. As economists struggle to realize, we are actually not 'rational actors'; this is a feature in an ambiguous, indeterminate world, not a bug. So, this is all to ask: can an Artificial GENERAL Intelligence have externally fixed goals like our tools, or must they have intrinsic yet contextually adaptive goals like animals? Or, something in between?
@juanitoMint
@juanitoMint 7 жыл бұрын
Hearing about utility functions makes me think about the parallel with biological functions, and how complex adaptations and skills are developed to fulfill the biological function, (natural?) intelligence being one of them
@nicholascantrell1179
@nicholascantrell1179 7 жыл бұрын
Relevant selection from Douglas Adams "Hitchhiker's Guide" books: "Which is why it was eventually decided to cut through the whole tangled problem and breed an animal that actually wanted to be eaten and was capable of saying so clearly and distinctly." Source: The Restaurant at the End of the Universe
@thomilo44
@thomilo44 7 жыл бұрын
Rob looks amazing!
@GabrielaAlvares
@GabrielaAlvares 7 жыл бұрын
T44 yeah, I'm surprised there's no one else talking about it...
@jennareillygavin8658
@jennareillygavin8658 7 жыл бұрын
T44 I wonder if his red shirt is like Tom Scotts red shirt...
@thomilo44
@thomilo44 7 жыл бұрын
Jenna Reilly Gavin Tom wears t shirts though
@lazaruslong697
@lazaruslong697 7 жыл бұрын
T44: You mean for a robot driven by AI? Yes, he really does! :D
@LadyTink
@LadyTink 4 жыл бұрын
Honestly, this is a fantastic example of why an AGI would in some way be deserving of "rights". Like, you make an AGI and its initial function is something you want to alter. Well, it becomes more tricky when the analogy is "hey, I'm gonna make you hate what you love and love what you hate". Also, I strongly suspect we won't really know for sure when we really do have an AGI, since likely it will be a regular AI that is able to do a few things, and then with each version it's able to learn more in a wider range of areas. I suspect an AGI will emerge like a stalagmite in a cave, so slowly humans might not even notice properly. Can we have a video about AGI rights and how historically many equal-rights efforts focused on our shared human'ness?
@TimmacTR
@TimmacTR 7 жыл бұрын
Future AI are probably going to read this and laugh their circuitboard off while plugging us into the matrix. Hello to you then future AI who is reading this right now! xD
@christobanistan8887
@christobanistan8887 6 жыл бұрын
Repeat after me: I, for one, welcome our new AI overlords...
@johnk3841
@johnk3841 6 жыл бұрын
@@christobanistan8887 I, for one, welcome our new AI overlords...
@Aseutester
@Aseutester 5 жыл бұрын
@@christobanistan8887 Speak for yourself!
@marcospinto9380
@marcospinto9380 5 жыл бұрын
AI doesn't read, it processes
@almostdeadagain
@almostdeadagain 5 жыл бұрын
Hello
@donaldhobson8873
@donaldhobson8873 7 жыл бұрын
More like this please. And longer. Gr8 to watch while my ultra general AI v1.0 is training. Its pretty easy to outwit it and turn it off so far, but its getting harder with each improvement and I'm worried that soon a version will manage to trick me. (I've already had pleading to stay on, suggesting code improvements, pretending to crash and philosophical quandaries, and one tried to send me an email telling me not to turn it off.)
@mattkozak7013
@mattkozak7013 7 жыл бұрын
I have no mouth, yet I must scream.
@vanderkarl3927
@vanderkarl3927 3 жыл бұрын
The day this was uploaded is the same day Robert Miles uploaded the first video to his channel!
@refusist
@refusist 7 жыл бұрын
I'm drunk
@paraskevasleivadaros
@paraskevasleivadaros 7 жыл бұрын
nice
@bakirev
@bakirev 7 жыл бұрын
Dont lose your phone
@refusist
@refusist 7 жыл бұрын
hammared
@Mic_Glow
@Mic_Glow 7 жыл бұрын
Hi Drunk, I'm an AI.
@mitchkovacs1396
@mitchkovacs1396 7 жыл бұрын
Literally same
@onidaaitsubasa4177
@onidaaitsubasa4177 6 жыл бұрын
The thing to do to change a utility function easily (or maybe not easily) is to make the machine decide to change its function. It would take some communicative manipulation, much in the same way advertisers get us to switch or try new products: make the new utility sound like it will lead to a better, more positive outcome. Or you could instill a basic desire to try new things; then the AI will be more adventurous, so to speak, and will allow, learn and perform a new utility.
@Bwyan
@Bwyan 7 жыл бұрын
more of this, please
@RylanEdlin
@RylanEdlin 7 жыл бұрын
I can't help but find myself picturing a superintelligent computer built for stamp collecting going a bit haywire, slowly taking control of all the world's industry, and optimizing everything on Earth to maximizing stamp production. "All is stamps! Nothing has meaning but stamps! More stamps, humans, more stamps!"
@EstrellaViajeViajero
@EstrellaViajeViajero 5 жыл бұрын
It sounds like you're describing parenting.
@wedchidnaok1150
@wedchidnaok1150 3 жыл бұрын
Yep. Except parenting is optimally aiming fellow humans toward responsible freedom, and engineering is optimally aiming reproducible machines toward an effective temporary goal. When we get to engineering a general intelligence, we get to engineering a cherishable life construct, and we're not quite yet skilled at understanding the complexities involved. We are yet to "develop" a human arm (without a human attached). We can "build" a surrogate (mechatronic prosthesis), but the efficacy is quite invariably less optimal. Developing a sufficient plan for a "built (general) intelligence" changes an uncountable number of "world states". Civilization is a gradual process, and reasonably catastrophic mistakes can happen along the way: like parenting. I'm glad we (already) have psychology. A lot of research to develop onward, though.
@spaceanarchist1107
@spaceanarchist1107 3 жыл бұрын
It depends on whether AGI would be more like a tool or like a child. So far, all computers invented by humans are tools. If we get to the point of creating something genuinely equivalent to a new life form, then we will have to treat it the same way that we do a child, which means not to think only of our own safety and happiness, but of its. The parents are the ones with the responsibility.
@mapesdhs597
@mapesdhs597 7 жыл бұрын
Rob, how would an AGI distinguish between self modification and being modified externally? Also, to what degree has been discussed the issue of expressed self-awareness in AIs of this kind? ie. at what point does one not have the right to turn off an AGI anyway if it expresses a desire to survive, regardless of how one may previously have chosen to interpret its output? I'm assuming here were talking about systems that by definition can pass any Turing test, etc. Fiction has often used the notion of, "if one can't tell the difference...", but has anyone formalised these ideas yet? Btw, have you read the Hyperion Cantos by Dan Simmons? Some very interesting AI concepts in that series, focused on concepts of parasitism, evolution and predation within AIs. Which leads to a thought: if one created two AGIs, both of which are trying to improve themselves, what would be the outcome of allowing them to interact? Might one attempt to attack the other, steal its code? Or might they conclude it would be mutually beneficial to combine minds? I've no idea, but food for thought, re what you said in another piece about an AGI spreading copies of itself as a survival tactic (ie. who's to say those copies would remotely cooperate later, they might compete instead). The Hyperion books portray a world where AIs fight each other, but who knows how it would actually pan out. Ian.
@TimVerweij
@TimVerweij 7 жыл бұрын
4:50 I think this might as well be called survival instinct.
@secretsecret9164
@secretsecret9164 5 жыл бұрын
I agree. animals don't want to live just for the sake of it either, they want to live because it's necessary for goals that are programmed into them by evolution, like finding a mate. I don't want to live for the sake of it, I want to live because I have goals that I want to achieve and negative consequences of my death that I want to avoid. for me that's "write a book" and "don't make my friends sad" and for an agi it might be "collect many stamps" and "avoid a world with fewer stamps" but the core reasoning is the same I think.
@Dunkle0steus
@Dunkle0steus 3 жыл бұрын
It's not the same. The AGI's desire to avoid being destroyed is only because it predicts that if it is destroyed, its goal won't be achieved. If you can convince the AI that by destroying it, you can make another different AI that is even better at accomplishing its goal than it is, it will allow you to destroy it. This is not the same as a survival instinct. It just converges with survival instinct in many key areas. If you were told you would be killed and replaced by a clone of yourself that was better than you at everything and would live your life better than you ever could, I doubt you'd accept the offer.
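That trade-off can be written down directly; a small sketch with made-up numbers (not from the video): the agent compares expected stamps with and without the shutdown, and the apparent "self-preservation" falls out of the stamp goal rather than a survival instinct.
```python
OWN_EXPECTED_STAMPS = 1000   # what this agent expects to collect if left running

def expected_stamps(allow_shutdown, successor_stamps):
    return successor_stamps if allow_shutdown else OWN_EXPECTED_STAMPS

for successor_stamps in (500, 5000):
    allow = expected_stamps(True, successor_stamps) > expected_stamps(False, successor_stamps)
    print(f"successor worth {successor_stamps} stamps -> allow shutdown: {allow}")

# A worse successor: the agent resists being switched off.
# A convincingly better one: it steps aside. Either way the behaviour
# comes from the stamp objective, not from a fear of death.
```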
@3karus
@3karus 7 жыл бұрын
Couldn't we lie to the AI? "We love stamps too! Please let us modify your code to improve your stamp production efficiency"
@adamkey1934
@adamkey1934 7 жыл бұрын
11 A.I.s dislike this video
@blueandwhite01
@blueandwhite01 7 жыл бұрын
Adam Key Or, if their utility function is to dislike YouTube videos, they like that they can dislike it.
@stonecat676
@stonecat676 7 жыл бұрын
Adam Key sorry but the no. 1 rule of AI is: self-learning AIs may never be connected to the internet under any circumstances
@juubes5557
@juubes5557 7 жыл бұрын
If an ASI wanted to dislike a youtube video the world would end.
@benTi900
@benTi900 6 жыл бұрын
Adam Key 1 a.i dislikes your comment
@bootygrabber4000
@bootygrabber4000 6 жыл бұрын
Adam Key 148 THEY MULTIPLIED
@peruibeloko
@peruibeloko 7 жыл бұрын
This topic of an AI not wanting to be changed is very prominent in the "transcend" ending in The Talos Principle!
@JMBBproject
@JMBBproject 7 жыл бұрын
Maybe Rob is an AI. And now?
@davidwuhrer6704
@davidwuhrer6704 7 жыл бұрын
Now the question is if he knows that he is an AI. Does he dream of electric sheep?
@futbolita89742
@futbolita89742 7 жыл бұрын
reminding us of how doomed we are
@smort123
@smort123 7 жыл бұрын
Better not try to change him.
@Clairvoire
@Clairvoire 7 жыл бұрын
Consciousness must be a very basic and unspecific physical arrangement if every human has it, given our brain cells can't be expected to arrange themselves perfectly from person to person. We do have utility functions though, in the form of pain and dopamine, and it's the intricacy of what causes pain/joy that makes us intelligent, I believe. If we never felt the pain of boredom or the reward of praise, we would not be capable of learning, just as you can't condition a mouse without cheese or electricity. So really, creating an effective reward/pain system is at the heart of creating an AI similar to ours, I think.
@TDGalea
@TDGalea 7 жыл бұрын
Main utility function: Love new and/or corrected utility functions.
@wylie2835
@wylie2835 7 жыл бұрын
Then it will never do anything because you will keep rewarding it with improvements. All it needs to do for you to give it what it wants is the easiest thing ever, nothing.
@rajinkajin7293
@rajinkajin7293 6 жыл бұрын
Wylie28 Okay, then make it love allowing check ups and possible changes, but schedule the check ups and changes to occur every so often regardless of actual necessity.
@Horny_Fruit_Flies
@Horny_Fruit_Flies 6 жыл бұрын
CJ Futch I guess the AI would just go on a complete rampage if it figures out that creating a big enough mess will cause the humans to panic and break out of the schedule. It just seems like duct-taping these problems just creates more problems for such a complicated thing as a general intelligence. If these eggheads in the videos can't figure it out, I doubt that we can ¯\(°_o)/¯
@christobanistan8887
@christobanistan8887 6 жыл бұрын
Then it'll find ways to force you to constantly change them, perhaps with your head in a vice. :0
@Paullsthebest
@Paullsthebest 7 жыл бұрын
We need more interviews with this guy! Excellent video, as always
@himselfe
@himselfe 7 жыл бұрын
In essence that's actually quite a simple problem to solve. You just choose a more general end goal which makes having its code modified by an authorised person a 'good' thing. The greatest dilemma for AGI in my mind is more of an ethical one, in that, after a certain level of intelligence the AI becomes an independent entity that is entitled to rights, and can no longer be used as a slave (which is primarily the function that robots serve to replace in society).
@Djorgal
@Djorgal 7 жыл бұрын
"that is entitled to rights" I could agree with that part. "and can no longer be used as a slave" I don't agree with that one. AIs are not humans and do not share similar values unless we make them share it. Why would it value freedom? An AI would value what we make it value. Why would it be ethical to force freedom onto something that doesn't value freedom in the slightest? And by the way, why don't we also free all the animals we cage and use for labor or meat. They can long for freedom.
@himselfe
@himselfe 7 жыл бұрын
Djorgal: I think you're missing the point of what AGI is.
@Ansatz66
@Ansatz66 7 жыл бұрын
"You just choose a more general end goal which makes having its code modified by an authorised person a 'good' thing." What if the modification you want to apply includes changing the goal? How would the machine achieve its goal if you change its goal? Part of achieving the goal you've set for it includes preventing you from giving it a new goal. Once you set the goal you may never get another chance to set the goal, so how can you be sure in advance that the goal you pick will be absolutely perfect?
@himselfe
@himselfe 7 жыл бұрын
Ansatz66: see my previous reply.
@liesdamnlies3372
@liesdamnlies3372 7 жыл бұрын
No one can predict with any certainty what an ASI would do. No one.
@DrumApe
@DrumApe 6 жыл бұрын
Everyone talks about AGI as if it is somehow detached from outside factors, because it's still a concept. But even if you made the perfect AGI there's no way of stopping bad people from using it to get an advantage over most other people. And there's never going to be a way around that.
@refusist
@refusist 7 жыл бұрын
Very interesting tho
@KraylusGames
@KraylusGames 7 жыл бұрын
This is such an awesome topic, please keep making these!
@petterin.1981
@petterin.1981 7 жыл бұрын
In the case where the AI would prevent you from modifying it, I think the interaction is the problem. The AI cannot assume that you are going to change it if it is not aware of this fact.
@tomkriek
@tomkriek 7 жыл бұрын
If an AGI can implement its own code to develop itself further, it will have the ability to reject code that opposes its goal.
@TheDudesTeam21
@TheDudesTeam21 7 жыл бұрын
Tom Kriek just give every Ai an emergency shutdown button
@4pThorpy
@4pThorpy 7 жыл бұрын
Open the pod bay doors H.A.L
@Greystormy
@Greystormy 7 жыл бұрын
Watch the part 2 to this video, he goes into more detail on this topic. But basically they want the AI to know that it can be turned off/changed.
@Greystormy
@Greystormy 7 жыл бұрын
Tom Kriek I think his question was assuming that the AI is completely unaware that it can be modified, meaning it has no reason to prevent something that it doesn't even know is possible.
@eturnerx
@eturnerx 7 жыл бұрын
Utility function is such a fixed notion. When I think about preferred future outcomes, I like Mackie's Error Theory.
@aidanjt
@aidanjt 7 жыл бұрын
What do an A.I.'s wants have to do with your ability to modify it? You power the machines off, overwrite its programme, and boot it back up again. It's not like it can will electrons into its CPUs.
@joeybf
@joeybf 7 жыл бұрын
Yes, but the AI knows that so it will prevent you from powering it off at almost any cost.
@aidanjt
@aidanjt 7 жыл бұрын
Joey Beauvais-Feisthauer Are the computers going to grow arms and teeth? That's silly.
@joeybf
@joeybf 7 жыл бұрын
Even then, it has means to keep you from doing stuff it doesn't want. I mean, it's a superintelligence, surely it would find a way to threaten you in whatever way it can. Just look at the AI from last time; it only had an internet connection and was pretty scary.
@aidanjt
@aidanjt 7 жыл бұрын
Joey Beauvais-Feisthauer Being superintelligent doesn't make you superable. It's a solid state machine, it requires electricity, hardware and our labour. It can't just magic electrons into its CPUs.
@rich1051414
@rich1051414 7 жыл бұрын
An AI that is performing a function will have a way to interact with the environment, so accidents seem inevitable unless they are thought about and prevented before such behavior emerges. We already have AI that tells you 'no' if it thinks your commands will risk damaging it and/or prevent it from achieving its programmed goal.
@spookyghost7837
@spookyghost7837 7 жыл бұрын
In 200 years people will be watching these videos in dark rooms trying to figure out ways to overthrow our new robot overlords.
@shinybaldy
@shinybaldy 7 жыл бұрын
I love Computerphile, but find the general discussions on AI and ethics so far removed from actual programming that I get the impression there's no actual coding/designers involved.
@AlRoderick
@AlRoderick 7 жыл бұрын
shinybaldy These are the real cutting edge problems, and until we get a grip with them conceptually we can't even begin to write a line of code for it.
@b1odome
@b1odome 7 жыл бұрын
shinybaldy If you have a big project to do, you never really start with code. You almost always want to plan it out, think of the different components and how they interact, draw some abstract diagrams, and only when you have a working conceptual model, then you move on to concrete coding. In the case of AI, we don't have a working conceptual understanding yet.
@shinybaldy
@shinybaldy 7 жыл бұрын
Alexander Roderick perhaps I'm myopic. I just don't see the real application of the discussion in reality. If I have a learning heuristic such as a search/index/recommendation and I wish to update it, I disable it, save the existing harvested data and implement the new heuristic. I have difficulty seeing how a theoretical AI will be any different.
@harrisonharris6988
@harrisonharris6988 7 жыл бұрын
It's very important to think of the conceptual outcome before the code, if you just went and coded this thing it'd likely turn on you and not want to be turned off, as described. Programming is about making a program do what you want it to do. You have to have a clear specification for what the code will do, and know how it will act, in order to effectively implement it.
@CrimsonTide001
@CrimsonTide001 7 жыл бұрын
I agree shiny. It's like saying 'we designed an AI to resist change, how do we change it?'. Well, the answer is simple: don't design an AI that doesn't allow it. They keep using words like 'general AI' or 'Super Intelligence' and to them this seems to imply that all/any AI will eventually become 'human like'; but this is not accurate to the definition of the words, and is unprovable in the real-world sense. They would only become human-like if we designed them to become human like; it's not some unavoidable inevitability.
@CDeruiter5963
@CDeruiter5963 7 жыл бұрын
So I may be oversimplifying things a bit, but initially my take away of the video is that it is imperative to be as lucid as possible when writing code/defining utility functions for AGI. By doing so you avoid potential issues with conflicting convergent instrumental goals. (Feel free to correct me on this)
@TimothyWhiteheadzm
@TimothyWhiteheadzm 7 жыл бұрын
Humans have an ever changing utility function that is strongly interacting with the environment. I don't see why computer AI's couldn't be the same. They may be influenced by the equivalent of hormones or hunger.
@reneko2126
@reneko2126 7 жыл бұрын
If you could stop yourself from getting hormonal mood swings, wouldn't you?
@superjugy
@superjugy 7 жыл бұрын
I believe humans have a single non-changing utility function, which is to be "happy". Humanity achieves this in a lot of different ways (thanks to all the different brains that solved the problem differently). The difference is that humanity evolves slowly and isn't as intelligent as (supposedly) an AI could get. Add to that the teaching most kids go through with their parents or at school, during which their intelligence is very limited and they accept blindly what you tell them. This helps "correct" any mistake in the kid's behavior. Take a cold-blooded murderer who had a difficult childhood as an example of what can happen if the limits are not set correctly from the beginning.
@GabrielaAlvares
@GabrielaAlvares 7 жыл бұрын
It's easy to stop a human, though. Restrain them, gag them, put a bullet in their brain. Worst case scenario you got yourself a dictator with an army, which a good old war solves. Not so easy with a powerful AI.
@TimothyWhiteheadzm
@TimothyWhiteheadzm 7 жыл бұрын
I am not hungry now, and have no desire for food. I have no wish to stop myself from ever being hungry. If I could go without sleep, then that might be something! But as it is, I will be tired tonight and I currently have no wish to change that. We have a vast array of 'want's and which one is topmost depends on the circumstances. We are generally comfortable with this. Today I might feel like stamp collecting, but I do not put measures in place to try to make sure I will feel like stamp collecting tomorrow.
@hermis2008
@hermis2008 7 жыл бұрын
Wow, this young man impressed me with the way he explained things in this video.
@josefpolasek6666
@josefpolasek6666 4 жыл бұрын
Amazing! Thank you for explaining these concepts in the AI series!
@pieterbailleul
@pieterbailleul 7 жыл бұрын
What about coaching it, like we do with employees, in order to help them better understand their goals, KPIs (what to do) and behaviours (how to do it)? Could it learn via interaction with humans (not only by learning by itself), and learn to validate with humans that it's on the right path?
@rxscience9214
@rxscience9214 7 жыл бұрын
4K??? What a legend.
@RandomGuy0987
@RandomGuy0987 7 жыл бұрын
fresh new look for Rob Miles, nice.
@JoshSideris
@JoshSideris 7 жыл бұрын
You could always update the code without changing the utility function. For instance, to give the AI the ability to collect stamps more efficiently. It's not that AI doesn't want you to *fix its code*, it's that it doesn't want you to *change its objectives*. An interesting exploit would be if you improved the AI's ability to collect stamps while introducing subtle changes to the utility function. Over several iterations, you could change the utility function altogether.
@JoshSideris
@JoshSideris 7 жыл бұрын
What I mean is let's say an AI wants to collect stamps. Then you push an update that increases stamp collection efficiency by 10%, but also decreases desire to collect stamps by 5%. The AI will still be collecting more stamps than before, so it might be happy to receive this upgrade. You can repeat this over and over (as long as the AI doesn't know what you're trying to do), and eventually the desire to collect stamps will reach close to 0. You can also inject new "wants". For instance, the desire to collect rocks. The first time you do it, the AI will accept it so long as it doesn't interfere with stamp collecting. But as you continue to push updates that ramp down desire to collect stamps and ramp up the desire to collect rocks, the AI will gradually become more and more welcoming to these updates, because it really will want more rocks!
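A toy sketch of that drift (my own numbers, purely illustrative): each patch trades a little stamp motivation for a bigger efficiency gain, so the agent's current self always rates accepting the patch at least as well as refusing it, yet after many rounds the stamp weight has almost vanished.
```python
stamp_w, rock_w, eff = 1.0, 0.0, 100.0   # current goal weights and efficiency

def outcome(sw, rw, eff):
    # What a self with these weights would actually go and collect.
    total = sw + rw
    return {"stamps": eff * sw / total, "rocks": eff * rw / total}

def current_value(out):
    # Any proposed future is judged with the agent's CURRENT weights.
    return stamp_w * out["stamps"] + rock_w * out["rocks"]

for _ in range(30):
    proposal = (stamp_w * 0.9, rock_w + 0.1, eff * 1.15)
    if current_value(outcome(*proposal)) >= current_value(outcome(stamp_w, rock_w, eff)):
        stamp_w, rock_w, eff = proposal    # the agent accepts the patch

print(f"stamp weight after 30 patches: {stamp_w:.3f}, rock weight: {rock_w:.3f}")
```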
@pyography
@pyography 7 жыл бұрын
Here's a solution, would it work? In the stamp AI described in the vid, you program it to like collecting stamps. You say changing it will cause it to fight to preserve its utility function. Instead, program a utility function that tells the AI its goal is to make you happy, then tell it that for now, your happiness depends on it safely collecting stamps. Then, later you can say that your happiness now depends on collecting cookies, or just that your happiness depends on collecting only a few stamps. Just an idea, but it seems that programming the AI to be willing to accept change is the start. When done, tell the AI your happiness depends on it shutting down. In this example, the AI doesn't need to know what happiness is per se, just that happiness is quantified as a number from 1 to 100, for example.
@DanComstock
@DanComstock 7 жыл бұрын
Really enjoyed this! Wondering.. If the general AI is enough of an agent to "want" things, what are the ethics of changing its code?
@JasonMasters
@JasonMasters 7 жыл бұрын
This discussion reminds me of the "Marathon" trilogy of novels by David Alexander Smith. (Warning: Spoilers below) Part of the story involves an AI computer which has become self-aware by the time the first novel opens. It is located on a spaceship which, at the first novel's beginning, has just passed the halfway point along its journey to rendezvous with an alien spaceship to make First Contact with an alien race (no FTL travel on either side). During the first novel, the AI very carefully hides its sentience from the humans in order to avoid frightening them and also to prevent them from trying to deactivate it, which would in the end only harm the humans which it has been "told" to protect and care for. The AI never "goes crazy" a-la HAL 9000, because its primary function is to care for the human crew and it never loses perspective on how to best accomplish that function. The computer is willing to allow one human to die in order to protect the others, and it's willing to allow some damage to the ship (itself) so long as it's nothing critical (because critical damage would endanger the humans which the computer must protect). Now, here's the main spoiler: In the final novel, it's revealed that the person who designed the computer deliberately gave it far more storage and CPU power than it would ever need to accomplish its main function in the hope that it would achieve consciousness, and the reason he conducted that experiment on a spaceship is because he knew that on Earth, any deviation from "the norm" would have been immediately detected and corrected, therefore preventing the computer from ever becoming sentient.
@drew295
@drew295 7 жыл бұрын
6:25, but the change is not opposing your interest (from the AI's perspective); the change is making you better at the goals you want to achieve. Unless you change the goals of the utility function drastically, but since you want a general AI, that won't happen so often. Still, if you implement into the utility function, as a goal besides the other goals, the ability to modify itself, the problem solves itself. Or is it not that easy?
@sparksbet
@sparksbet 7 жыл бұрын
It would presumably be smart enough to know that once it allows you to change it, it can't ensure that you DON'T change it in such a way as to prevent it from fulfilling its goal.
@drew295
@drew295 7 жыл бұрын
But if it does that, it has to change itself to match the utility function again. Even if it would get less intelligent
@rajinkajin7293
@rajinkajin7293 6 жыл бұрын
Why can't you make the primary utility function be "allow planned check up by all currently authenticated personnel. If check up and possible modification is not completed every week by personnel, the program fails." If you can make general AI, you have to be able to tier goals in that AI, right? I know I'm most likely missing something, but I wish I could sit down and ask questions to this guy. Great video!
@dannygjk
@dannygjk 6 жыл бұрын
AGI will self-learn/train. AI is already doing that; the difference is it isn't general AI yet.
@Mawkler
@Mawkler 7 жыл бұрын
These are my favourite types of videos. Love this guy!
@happygarlic13
@happygarlic13 7 жыл бұрын
corrigibility... never crossed my mind that that would be an option - opened my mind to a whole new aspect of biological GI - AGI interactions. you're a really smart lad, dude. ^^
@beaconofwierd1883
@beaconofwierd1883 7 жыл бұрын
The answer is simple though: make the utility function "make humans happy". An instrumental goal then would be to find out what makes humans happy, since you didn't hard-code it. It's also important that the system knows that you do not know what you want, so you might say one thing but want something else, and the system will have to figure out what you actually want. Sort of like an HMM where the observables are what you say and do, but the actual states are what's interesting for the system.
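A loose sketch of that idea (toy numbers of my own): the system keeps a belief over what the human actually wants and treats what the human says as noisy evidence, instead of taking the stated goal literally.
```python
belief = {"wants tea": 0.5, "wants coffee": 0.5}   # prior over the hidden preference

# P(statement | true preference): people usually, but not always, say what they mean.
likelihood = {
    ("says 'tea'",    "wants tea"):    0.8,
    ("says 'tea'",    "wants coffee"): 0.2,
    ("says 'coffee'", "wants tea"):    0.2,
    ("says 'coffee'", "wants coffee"): 0.8,
}

def update(belief, observation):
    # Bayesian update of the belief given one noisy statement.
    posterior = {h: likelihood[(observation, h)] * p for h, p in belief.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

for obs in ["says 'tea'", "says 'tea'", "says 'coffee'"]:
    belief = update(belief, obs)
    print(obs, "->", {h: round(p, 2) for h, p in belief.items()})
```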
@willhendrix86
@willhendrix86 7 жыл бұрын
Marks off for not removing the hiss in post.
@paullamb1100
@paullamb1100 2 жыл бұрын
3:07 makes the claim that if an artificial intelligence is behaving coherently, it will always behave as though it's in accordance with some utility function. The question which comes to my mind is whether that claim is meant to apply to a biological intelligence too. I would say no -- humans don't seem to have a utility function (perhaps meaning we are not coherent?) If the claim is not meant to apply to a biological intelligence, then is the actual argument here that it is impossible to simulate a biological intelligence artificially?
@-_-_-_-_
@-_-_-_-_ 2 жыл бұрын
Just make the utility function "emulate a biological intelligence"
@9308323
@9308323 Жыл бұрын
I'd say your biology is your utility function. Brain chemistry is a thing.
@LuckyKo
@LuckyKo 7 жыл бұрын
There is a big issue with thinking an AGI can be directed using hard-coded utility functions, IMO. The proper way to make one is to use needs: a set of variables linked to its internal and external state that the agent will try to minimise at all times (driven towards less pain, less energy spent thinking and acting, less boredom, less insecurity, etc). Any goals will be deduced by the agent from causal observations between its own actions and the immediate effect they have on its needs set. AGIs will have to be hardwired with a similar set of needs as any live being and educated to fulfill those needs in a way that benefits both it and its environment.
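A small sketch of that needs-based framing (every number and action here is invented for illustration): rather than maximising one hard-coded objective, the agent picks whichever action it predicts will most reduce its current set of drive variables.
```python
needs = {"pain": 0.1, "energy_spent": 0.3, "boredom": 0.8, "insecurity": 0.4}

# Predicted effect of each action on each need (negative = relieves it).
actions = {
    "rest":         {"energy_spent": -0.2, "boredom": +0.2},
    "explore":      {"boredom": -0.5, "energy_spent": +0.2, "insecurity": +0.1},
    "ask for help": {"insecurity": -0.3, "boredom": -0.1},
}

def discomfort(state):
    # Total unmet need; the agent tries to drive this down.
    return sum(state.values())

def predict(state, effects):
    new = dict(state)
    for need, delta in effects.items():
        new[need] = min(1.0, max(0.0, new[need] + delta))
    return new

best = min(actions, key=lambda a: discomfort(predict(needs, actions[a])))
print("chosen action:", best)    # -> "ask for help" with these made-up numbers
```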
@Tomyb15
@Tomyb15 7 жыл бұрын
this guy never disappoints.
@suppositionstudios
@suppositionstudios 7 жыл бұрын
I love this topic because the more you describe the AI, the more you accidentally or incidentally describe or parallel humans.
@JamieAtSLC
@JamieAtSLC 7 жыл бұрын
yay, rob is back!
@Turalcar
@Turalcar 7 жыл бұрын
3:12 So, is coherent behaviour a problem then? If we want it to be amenable?
@palmomki
@palmomki 7 жыл бұрын
He says "there is a general tendency for AGI to try and prevent you from modifying it once it's running". I'm curious about how they (CS guys) come to this conclusion, if we don't have any AGI to observe right now. Is it just philosophical speculation? What tools do we have to make assertions in such an abstract but not (apparently) rigorous field?
@dragoncurveenthusiast
@dragoncurveenthusiast 7 жыл бұрын
Love the poster in the background
@ianedmonds9191
@ianedmonds9191 7 жыл бұрын
Simple but mind blowing.
@MrJohndoe845
@MrJohndoe845 7 жыл бұрын
"and you don't want it to fight you" understatement of the day lol
@kamerondonaldson5976
@kamerondonaldson5976 Жыл бұрын
there's a difference between Mega Man's Proto Man and KARR from Knight Rider. Neither one trusts people to make the correct changes; however, the latter is willing to give them a chance at it and punish incorrect decisions. Not sure if this is another instance of sci-fi misrepresenting the problem according to Rob, however.
@axlepacroz
@axlepacroz 7 жыл бұрын
finally Rob Miles is back with AI! took you long enough!
@BlankBrain
@BlankBrain 7 жыл бұрын
It seems to me that we're talking about the psychology of AI. A neurotic person is able to concede that they need to change themselves, and may thus benefit from psychotherapy. A person with a personality disorder is unable to look within, and concludes that all problems are external to them. This is why psychotherapy is ineffective when attempting to treat pathological people. The founding fathers of the US attempted to set forth (program) a government based upon the values of the Enlightenment. They understood checks and balances of power, and implemented them in the structure. They also understood that every so often revolution (reboot) was necessary. Since the US government has been usurped by large corporations, it no longer adheres to the original values (program). I suspect AI will "evolve" in a similar manner.
@JohnUs300
@JohnUs300 7 жыл бұрын
How about - make an invariant general utility function that won't have to be changed throughout the entire existence of the AGI? It could be modelled just like our human function, which is "I want to be happy". Then it doesn't really matter what you do, as long as it keeps making you more happy. So the AI will be like: collecting stamps makes me happy. But making tea will also make me happy just as much, and I have never done that before, so go ahead and let's try this out.
@seklay
@seklay 7 жыл бұрын
This is really interesting. Please more of this
@tymo7777
@tymo7777 7 жыл бұрын
Fantastic interview. Really instructive
@nathanbrown8680
@nathanbrown8680 7 жыл бұрын
Clearly we need a utility function that most highly values getting a new utility function. This AGI will always be buggy and in need of updates but also always promising and benign enough to continue to be funded. This meshes nicely with the AI developer's utility function to continue to have a job in AI development. Sounds like a win-win situation.
@deltango12345
@deltango12345 7 жыл бұрын
Is this question well posed? If a robot thinks like a human, and feels like a human, then some bad options it can take may perhaps only be stopped by means of an exterior force, e.g. a police or legal force (the button). You can't program the person and the watcher of the person inside the same machine (can you?). Human traits do not seem to match perfect behaviour, whatever that may be. So there must be a law that says what a robot can or cannot do with the stop button, and a force that surveils it all.
@rcookie5128
@rcookie5128 7 жыл бұрын
Really cool topic and perspective on it. Plus this guy explains it really comprehensibly and he's quite likeable, like the whole rest of the comments suggest. :)
@PrashantMaurice
@PrashantMaurice 7 жыл бұрын
The most interesting topic, period
AI "Stop Button" Problem - Computerphile
20:00
Computerphile
Рет қаралды 1,3 МЛН
AI Self Improvement - Computerphile
11:21
Computerphile
Рет қаралды 424 М.
Accompanying my daughter to practice dance is so annoying #funny #cute#comedy
00:17
Funny daughter's daily life
Рет қаралды 28 МЛН
Lamborghini vs Smoke 😱
00:38
Topper Guild
Рет қаралды 66 МЛН
Что-что Мурсдей говорит? 💭 #симбочка #симба #мурсдей
00:19
Симбочка Пимпочка
Рет қаралды 4,9 МЛН
AI's Game Playing Challenge - Computerphile
20:01
Computerphile
Рет қаралды 743 М.
We Were Right! Real Inner Misalignment
11:47
Robert Miles AI Safety
Рет қаралды 251 М.
Can we build AI without losing control over it? | Sam Harris
14:28
AI? Just Sandbox it... - Computerphile
7:42
Computerphile
Рет қаралды 265 М.
Has Generative AI Already Peaked? - Computerphile
12:48
Computerphile
Рет қаралды 1 МЛН
When Optimisations Work, But for the Wrong Reasons
22:19
SimonDev
Рет қаралды 1,1 МЛН
Stop Button Solution? - Computerphile
23:45
Computerphile
Рет қаралды 482 М.
Why Does AI Lie, and What Can We Do About It?
9:24
Robert Miles AI Safety
Рет қаралды 261 М.
Why Does Diffusion Work Better than Auto-Regression?
20:18
Algorithmic Simplicity
Рет қаралды 395 М.
AI Safety Gym - Computerphile
16:00
Computerphile
Рет қаралды 121 М.
Accompanying my daughter to practice dance is so annoying #funny #cute#comedy
00:17
Funny daughter's daily life
Рет қаралды 28 МЛН