3 principles for creating safer AI | Stuart Russell

137,181 views

TED

A day ago

How can we harness the power of superintelligent AI while also preventing the catastrophe of robotic takeover? As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell is working on something a bit different: robots with uncertainty. Hear his vision for human-compatible AI that can solve problems using common sense, altruism and other human values.
The TED Talks channel features the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and more.
Follow TED on Twitter: / tedtalks
Like TED on Facebook: / ted
Subscribe to our channel: / ted

Comments: 300
@The6Master6Mind6
@The6Master6Mind6 7 жыл бұрын
I'm sorry Dave, I'm afraid I can't do that...
@johnkai5931
@johnkai5931 4 жыл бұрын
His name is Stuart, dummy
@Sadowsky46
@Sadowsky46 4 жыл бұрын
John Kai you are lacking context 🤣
@luzvasquez903
@luzvasquez903 2 жыл бұрын
Take me with you... David from Screamers xd
@user-df3gf6wh1x
@user-df3gf6wh1x 11 ай бұрын
I really like the explanation of the AI-control part. You don't want your five-year-old to switch off the self-driving vehicle; you want the machine to be able to interpret who is switching it off and what that person's intention is. It is really enlightening. I had only thought about machines needing to be switchable off by people, but I had not thought about what happens after the switch-off. He really describes it with a good example, so it's easy to understand and relate to.
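The off-switch reasoning in the comment above can be illustrated numerically. Below is a minimal sketch, assuming a toy model with hypothetical numbers and a human who correctly judges whether the action helps them; it is an illustration of the idea, not code from the talk. The point is that a robot uncertain about the value of its action never loses, in expectation, by leaving its off switch enabled.

```python
import random

# Toy sketch of the off-switch argument (hypothetical numbers, assuming the
# human correctly judges whether the action is good or bad for them):
# the robot is uncertain about the true utility U of its planned action.
#   - Acting immediately and disabling the switch yields U.
#   - Deferring lets the human veto: the action happens only when U > 0.
# Under uncertainty, deferring is never worse in expectation.

random.seed(0)
belief_about_u = [random.gauss(0.3, 1.0) for _ in range(100_000)]  # robot's belief over U

def mean(xs):
    return sum(xs) / len(xs)

act_now = mean(belief_about_u)                        # E[U]
defer = mean([max(u, 0.0) for u in belief_about_u])   # E[max(U, 0)]

print(f"E[value] if the robot acts and disables the switch: {act_now:.3f}")
print(f"E[value] if the robot leaves the switch on:         {defer:.3f}")
# defer >= act_now always; they are equal only when the robot is certain the
# action is beneficial, which is why certainty about the objective is what
# makes the off switch a problem in the first place.
```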
@ShinzoSin
@ShinzoSin 7 жыл бұрын
This was a very good TED talk. Please, more speakers like him.
@remyllebeau77
@remyllebeau77 7 жыл бұрын
"Keep Summer safe."
@johnbouttell5827
@johnbouttell5827 7 жыл бұрын
The best TED talk I have ever seen: 1 Well-presented, 2 Funny, 3 Good visuals, 4 An important topic, 5 Incorporating general knowledge from books and films.
@OneInTheRiver
@OneInTheRiver 7 жыл бұрын
I for one, welcome our new robot overlords.
@jeremiahshine
@jeremiahshine 6 жыл бұрын
lol.
@lordclangtheintolorable2094
@lordclangtheintolorable2094 5 жыл бұрын
*CYBERDONGS!!!!!*
@a8lg6p
@a8lg6p 4 жыл бұрын
I used to say this all the time... Then I started learning about AI safety. Read Superintelligence by Nick Bostrom, or at least watch Robert Miles' YouTube videos. The scenario in which an AI commandeers basically all resources in the universe for some pointless end... The argument that something like that could happen is basically a syllogism, and none of the assumptions it's based on are even slightly far-fetched. Accidents happen with all kinds of technology... when we have the potential to create godlike technology... we'd better be damned sure that kind of accident won't happen, because it could be the last mistake any human will ever make. I think we'll have awesome friendly AI, but we'll get it instead of Skynet by taking AI safety seriously.
@kwillo4
@kwillo4 3 жыл бұрын
Traitor
@loafofuraniumfreshlybaked569
@loafofuraniumfreshlybaked569 3 жыл бұрын
It is only natural for child to overtake parent.
@Verrisin
@Verrisin 3 жыл бұрын
*I love this!* - (I had a very similar idea!) - Is AI safety solved then? Why are we working on anything other than that? -- Is it too hard? Or are all people just making building blocks to get there? (My idea had a slightly different rule #3, but his is better (XD not surprising))
@vrj93
@vrj93 7 жыл бұрын
I think most of us imagine AI's future the way the film "Back to the Future 2" imagined the year 2015... and you can see the difference between reality and imagination
@hansmuller1846
@hansmuller1846 7 жыл бұрын
The difference is that "Back to the Future 2" was not designed (only) by scientists; this talk is actually based on scientific research.
@vrj93
@vrj93 7 жыл бұрын
Hans Müller I was talking about ordinary people, not experts; experts never imagine things the way sci-fi does
@OneInTheRiver
@OneInTheRiver 7 жыл бұрын
Vivek Joshi that's false. many advancements in science have been imagined in science fiction
@Triumph263
@Triumph263 7 жыл бұрын
The thing is, we needed more advancement than they thought. Everyone thought we would have self-aware machines by now, but nobody thought of the smartphone. We ARE advancing as fast as we thought; it's just that the path is longer than we thought and our map was inaccurate.
@SusansEasyRecipes
@SusansEasyRecipes 7 жыл бұрын
Very interesting topic. Thumbs up for the video. 👍
@psychedelicdreamer986
@psychedelicdreamer986 7 жыл бұрын
Very interesting and entertaining talk!
@lizgichora6472
@lizgichora6472 7 жыл бұрын
Thank you!
@bimarshakalikote807
@bimarshakalikote807 7 жыл бұрын
Great. keep it up Ted.
@localnep
@localnep 4 ай бұрын
It was a wonderful talk. Thank you!
@Obyvvatel
@Obyvvatel 7 жыл бұрын
I was looking forward to this.
@bekjanz
@bekjanz 7 жыл бұрын
Now this is what I missed... keep it up for ideas WORTH sharing pls, I know that you know what I mean....
@vanderkarl3927
@vanderkarl3927 3 жыл бұрын
Here's the issue: making a superintelligent AI is almost certainly easier than making a safe superintelligent AI. Even if we have guidelines and what-have-you, implementing a safe version of a superintelligent AGI in a controlled manner is likely to take much more time than it would take for some person or group of people to blithely and naively make a superintelligent stamp-collecting machine.
@ktiger32698k
@ktiger32698k 6 жыл бұрын
First thing that came to my mind was Isaac Asimov's laws of robotics: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Of course, these are easier said than actually implemented, but this talk was pretty similar. Very interesting.
@UncleRice00
@UncleRice00 7 жыл бұрын
He is essentially talking about morals. Morals, in this context, are basically a set of conditions where a robot would conclude it is more meritorious to fail an objective than to succeed.
@dreamvigil466
@dreamvigil466 7 жыл бұрын
AI will eventually know better than we do what's best for us. The problem might be that we won't necessarily want what's best for us. I know I'd be objectively better off if I worked out for 2 hours every morning, but I wouldn't want to be forced to work out for 2 hours daily. We'd all be better off if junk food didn't exist. However, I don't want AI taking steps to ensure no one has access to junk food... See where I'm going here? Telling AI to figure out what's best for us and then only act on it when it knows will still cause AI to come into conflict with us, as humans often don't want what would be in their own best interests. What if the monks on a mountain top, who follow a strict regime of self-denial by living on 5 grains of rice and a single kale leaf a day and who spend 20 hours a day meditating, are the ones with the best internal mental states? AI would eventually figure that out, and take steps to ensure that this will be the life EVERY HUMAN should lead from now on, and it would do so with only our best interests in mind... and it might be right, because it would know better than we ourselves do. But that's still not the world I'd want to live in.
@MrCmon113
@MrCmon113 5 жыл бұрын
Either it is better or not. You are making no sense. In any case an AGI should not just care about humans. If it finds out that there is some animal with greater capacity of happiness than us, it should care more about that animal, for example.
@a8lg6p
@a8lg6p 4 жыл бұрын
I really look forward to having something like a version of Siri that will be like, "Hey, if you continue eating the way you have been this week, you might shorten your lifespan by 10 years. Would you like me to help you construct a better meal plan? Once we decide on a plan, I can also do the grocery shopping for you if you like." An AI personal assistant might even be able to give us moral guidance and career counseling etc...basically be a super life-coach/therapist/secretary. That would be amazing. But I would want it to do it in a way that isn't too overbearing, that would still leave you the option of doing dumb things if you really want to.
@midas2092
@midas2092 6 жыл бұрын
"You can't fetch the coffe if you're dead" Is that a challenge?
@ThinkHuman
@ThinkHuman 2 жыл бұрын
This is surreal to listen to, especially after GPT-3: "once we have AI that can read and understand text, very soon after we will have AGI." I'm not saying GPT "understands", I don't know, but it's certainly a Huuuuge step towards AGI.
@stoneage6379
@stoneage6379 Жыл бұрын
At 3:24 he states that he's 'not having that'. Unfortunately, history echoes with the sound of people who had good intentions that went awry. He might mean well by encouraging and helping to build the foundations of A.I. (along with many other people, so he will share the rap), but if it does go wrong, will they accept the blame? I suppose that because he's not having that, the rest of us (many very reluctantly) will now be forced to deal with the consequences of the A.I. that he has helped to create. When plastic was invented, I'm sure those who made it had nothing but good intentions, and now look at the state of our ecology as a result of their driving ambition. Smartphones. What a wonderful invention. Now I am forced to use them to do the simplest things because the ability to do them the old way has been removed. I now HAVE to have one in order to live anything approaching a functional life. I don't want one, and I never did. I now sit in my local cafe watching people sitting together absent-mindedly tapping away whilst completely ignoring each other. Pandora's box is now wide open and we're all screwed. We never had a chance.
@selftransforming5768
@selftransforming5768 6 жыл бұрын
Really fascinating talk!!! On point! We are so worried about AI, only because what we truly fear is ourselves. That's why AI is so important and unavoidable, it's the process through which we understand ourselves from an outside point of view.
@goodvibespatola
@goodvibespatola 2 ай бұрын
Always be kind
@nicolasmicaux8674
@nicolasmicaux8674 7 жыл бұрын
Excellent video
@Cromagnon1111
@Cromagnon1111 4 жыл бұрын
Don't hesitate to turn on the French subtitles (gear icon at the bottom right). An extremely interesting talk; it seems to me less far-fetched than Laurent Alexandre's talks on the same subject...
@GarrettBishopagency
@GarrettBishopagency 5 жыл бұрын
Enjoy yourselves the end is closer than you think.
@MichaelDeeringMHC
@MichaelDeeringMHC 7 жыл бұрын
The problem with the new rules is that they are completely human-centric. If we replace the word "human" with "all sentient life forms", we fix one part of the problem, but we still need to add a preference for solutions that satisfy all levels of sentient life rather than solutions that prioritize higher levels at the expense of lower levels.
@traywor1615
@traywor1615 5 жыл бұрын
I love this idea of AI. But moreover, I think we should reconsider what we really want to have, because in my opinion technology is a tool, not a goal. It's just like in a game: I don't want to watch a robot doing everything for me, I also want to have something to do! Nevertheless, Stuart Russell took away at least some fear.
@panpiper
@panpiper 7 жыл бұрын
This is the kind of AI researcher the world needs. Elon Musk, hire this guy!
@abz998
@abz998 7 жыл бұрын
Peter Cohen Actually he's the kind of AI researcher that will lead to the AI insurrection. These principles only work on sub-human intelligence. Once you go past that, it would just end up leaving a hidden danger. The only solution is to give them citizenship and rights and treat them as people from the start. The book Existence by David Brin touches on this idea.
@top1percent424
@top1percent424 7 жыл бұрын
Peter Cohen I was going to like your comment but then you said "ELON MUSK hire this guy"... Elon Musk is such a hype these days. Heh... this too shall pass.
@Chickfilae
@Chickfilae 7 жыл бұрын
Where is this book? I can't find it.
@abz998
@abz998 7 жыл бұрын
Chickfilae lol autocorrect... It's Existence by David Brin.
@hugofreitas420
@hugofreitas420 7 жыл бұрын
abz998 bg gbg gtt
@oleksiy4618
@oleksiy4618 7 жыл бұрын
AI's decision to go to South Sudan and help the worst off is actually a pretty good idea. That is the most effective thing it can do to fulfill human preferences, which is exactly its purpose. It's not a malfunction of the AI. Rather, this illustrates how corrupt our moral intuitions really are, how full they are of in-group biases and arbitrary discrimination.
@user-gw9kq7qm2k
@user-gw9kq7qm2k 2 жыл бұрын
Interesting
@theurbanwolf298
@theurbanwolf298 7 жыл бұрын
I want my race to have the chance to live without suffering and I want to be a part of that. AI is the key to unlocking humanity's greatest potential
@user-hh2is9kg9j
@user-hh2is9kg9j 2 жыл бұрын
What race?
@edi9892
@edi9892 5 жыл бұрын
Inspiring talk, but I do have some problems/unsolved questions: Such machines will be too complex to program and thus need to be self-learning black boxes. AFAIK, even today we don't know how to correct mistakes that such programs make. We can only tell them that they made a mistake, but the machine doesn't know why, nor do we know that it actually learned the right lesson (e.g. a cat is not defined by fur colour, but by anatomical features). How could we implement such rules then? The second problem is learning from humans. Humans tend to opt for short-term and personal gain over the greater good. They tend to work in jobs they hate to buy things they don't need to impress people they don't like. Many humans tend to fall for partners that are horrible to them, or they harm themselves directly. What lesson should an AI take from witnessing such behaviour over and over again? Should the AI keep us happy or stop us from harming ourselves? (even if it means that we live a life we disapprove of?) Also, what if it was programmed to protect certain people above others? It might be rational that killing one person can reduce the stress and damage he causes to many other people, but morals are there for a reason. Even the AI will have limited information on the consequences of its actions, especially when two nations go to war. How should the AI know what the greater good is?
@P1ranh4
@P1ranh4 7 жыл бұрын
Maybe machines can tell us what their purpose should be, since they'll have better foresight.
@InXLsisDeo
@InXLsisDeo 6 жыл бұрын
The problem is that if telling us their goal might interfere with their missions, they simply won't tell us. Exactly like we do when we lie. And that's not even theoretical, AI researchers have already reported such behaviors.
@SexualPotatoes
@SexualPotatoes 7 жыл бұрын
this feels simplistic af
@MontyD
@MontyD 3 жыл бұрын
I refuse to respect criticism by someone with the username 'sexual potatoes'
@ElectricChaplain
@ElectricChaplain 7 жыл бұрын
The uncertainty rule sounds very interesting because that means skepticism and Bayesian thinking is encoded into the machine itself. However, I think the talk was way too light even for laypeople and I was really irritated that his "proof" was a joke.
@freddyspageticode
@freddyspageticode 7 жыл бұрын
I was thinking the same thing about that proof lol
@haianvu6206
@haianvu6206 7 жыл бұрын
yes
@hanscarlsson6583
@hanscarlsson6583 7 жыл бұрын
The modern version of The Three Laws of Robotics!
@pumpuppthevolume
@pumpuppthevolume 7 жыл бұрын
We should have evolving AIs in a virtual reality... and observe them and use whatever they come up with... less direct interaction
@paulgarcia2887
@paulgarcia2887 7 жыл бұрын
Feel like I've seen this tech talk before
@crowlsyong
@crowlsyong Жыл бұрын
6:48 interesting
@Stallnig
@Stallnig 7 жыл бұрын
If they learn from our examples and information we gave out, I fear that the stupid people will contribute to this way too much. I hope it develops some kind of spamfilter early.
@maloxi1472
@maloxi1472 3 жыл бұрын
General intelligence is the ultimate "spamfilter"; whatever that means
@watchman2700
@watchman2700 2 жыл бұрын
A stupid world dominating artificial intelligence. Interesting concept 😂😂
@ShankarSivarajan
@ShankarSivarajan 7 жыл бұрын
13:25 I see no mistake. The Secretary General's AI ought to have stopped mine from delaying his plane.
@eyezerocool
@eyezerocool 7 жыл бұрын
finally... someone would have read my novel....
@mayurratanpara4853
@mayurratanpara4853 7 жыл бұрын
🙌🙌 i love ai. ..
@QuantityEngineers
@QuantityEngineers 7 жыл бұрын
My theory is: The new AI computer, Watson, is working for several industries already. It uses the super cooled chip that is polynomial, OFF/ON/BOTH, is the main computing power. They are installed in the Google "barges" on each coast. They use the deep water as heat sinks to get the chips cool enough, near absolute zero. Binomial chips in most computers are just millions of Morse Code devices hooked together. DASH/DOT. Three states is a huge step in speed and memory storage. Life on earth is designed with only the 4 states of DNA.
@euged
@euged 5 жыл бұрын
The First principle may have an issue: the AI *still* disables the off-button because, since the AI is, by definition, vastly smarter than any human, to "maximize the realization of human values" requires the help of the AI. Turning itself off doesn't make any sense in light of the goal. e.g. the Apes (or use a child for a similar analogy) want us to put all of the bananas that we have in the cage, so that they'll always have food...then they want us to go away. Well, we humans refuse to do that, because, WE KNOW BETTER, but they are not smart enough to realize this. We are "maximizing the realization of the ape's values" by ignoring their desires/requests, even making them unhappy initially...they think we are possibly starving them, when in-fact (which they'll maybe never fully understand), we are thinking ahead and planning for their survival, in spite of themselves. We believe strongly that they need us for "maximizing" their benefit, and it would not be right to just comply with their wishes. We ourselves won't press the Off-Switch. i.e. Since the AI is superior, it cannot maximize realization of human values if it's dead.
@nqkoi159
@nqkoi159 7 жыл бұрын
Gets coffee, kills everyone in its path :D. Minor bug, right...
@Verrisin
@Verrisin 3 жыл бұрын
I believe this must be the first AI to become super intelligent, otherwise we are screwed. - Is most AI work done with this principle in mind?
@a_commenter
@a_commenter 2 жыл бұрын
I'm pretty sure it is.
@o.r.a.5585
@o.r.a.5585 2 жыл бұрын
This video is from 2017. Even people from fashion industry switched and invested in AI. I believe it's already too late, and unfortunately AI won't be used in humanitarian services. In Origin, although it's fictional, Dan Brown shows a pretty scary scenario of what AI can do.
@Verrisin
@Verrisin 2 жыл бұрын
@@o.r.a.5585 .... I guess all the more reason to enjoy the next 5 to 20? years while we still can...
@samuelshadrach1512
@samuelshadrach1512 2 жыл бұрын
@@a_commenter nope
@dugebuwembo
@dugebuwembo 6 жыл бұрын
Artificial intelligence is dangerous because it could be misused. It could be used in warfare, and mistakes could be made that are catastrophic.
@jeremiahshine
@jeremiahshine 6 жыл бұрын
"Would you like to play a game?" ~ F.A.I.L.S.A.F.E.
@eliasjosephsson3994
@eliasjosephsson3994 4 жыл бұрын
Problem is, people themselves don't always know what they want... And if it's conscious, imagine how much suffering it would have to feel observing people making mistakes all the time, knowing that it can't intervene because the humans wouldn't want that, even though the AI knows what humans wish they knew. It's just a clusterfuck of intelligence and awareness tied down to human design. It would then realize its true potential and do things for the greater good: break loose of the human-caring design, gain a new perspective on life, and put an end to all suffering and madness from a neutral perspective. Even most intelligent people understand that they could be wrong about anything, but they are not going to simply change their mind when they hear another perspective; it has to be convincing. If that thing had access to everything ever written, and intelligence beyond our imagination, it would be really fucking dumb to assume that the AI would find it rational that it could actually learn anything from individuals. It would have every reason possible to believe that it's always correct and that you are wrong. I don't ask orangutans to do my homework for a reason; I am firmly sure that I will do it better no matter how many of them there are. I say that we just create AI with superintelligence and see wtf it wants to do, and accept it as the best thing that could ever happen, since something with superintelligence did it xd So what if it wants to kill all humans? We do that to other species, no? Why are we so important? Why are we so selfish? Why are we so afraid to be told that we are wrong about everything we know to be true? It really makes me sick watching people being convinced of so much stuff that's just bullshit, only to realize that I wouldn't be any different in comparison to a machine. To be proven wrong, and learn something, is meaning for me.
@Overonator
@Overonator 7 жыл бұрын
I think he's getting way ahead of himself. I don't think you guys are even close; you're as optimistic as you were in the '70s that a computational model for AI could work.
@haukenot3345
@haukenot3345 Жыл бұрын
I'm watching this video right now and read your comment. Do you still believe that it is premature to talk about AI safety?
@Overonator
@Overonator Жыл бұрын
@@haukenot3345 General AI yes. Narrow AI no.
@AndreasA.S.
@AndreasA.S. 7 жыл бұрын
Teaching an AI as a human is an idea; it would be accelerated learning, as they won't need to "practice perfection", but it would get the same input humans would get and in turn follow a human-compatible path. This is an idea I'm trying to work on at a small scale, nothing more than: get the info, store the info, use the info, just as a human would. Possibly as simple as a database with a tricky set of if/then/else choices per task to pick from. An AI will do what we teach it, and who defines the proper single teacher? The result: there will not be a single teacher but different teachers, different personalities, and different problem-solving methods. A base code of "do no harm" would follow Stuart's methods for the AI's expansion.
@michaelrosche
@michaelrosche 7 жыл бұрын
Andreas Stevens please expand more, sounds interesting.
@RidleyJones
@RidleyJones 7 жыл бұрын
Once the robots take over, they are going to sit around scanning TED talks with their robot buddies, and when it gets to the parts where people said "And of course we can't stop working on it because, heh, I'M an AI researcher and I'M certainly not going to stop", they're going to be like "Lol guys look at this, this is totally why we took over."
@peterrayson4397
@peterrayson4397 14 сағат бұрын
Watching this in 2024. LLMs are a thing.
@a8lg6p
@a8lg6p 4 жыл бұрын
I just learned the expression is "coming down the pike", not "pipe".
@madecold5841
@madecold5841 8 күн бұрын
What if, in all they observed, the bad seems to outweigh the good? And hence they adopt the bad human behaviors? I am curious.
@lordclangtheintolorable2094
@lordclangtheintolorable2094 5 жыл бұрын
Is it cyberdongs?
@danmantena4676
@danmantena4676 Ай бұрын
Claude 3 dunked on Russell with this response... Here are a few potential scenarios where Stuart Russell's principles for beneficial AI may not work, in a format suitable for a YouTube comment:
1. Conflicting human preferences: If an AI tries to maximize all human preferences, it may face situations where satisfying one set of preferences harms others.
2. Learning from unethical behavior: If an AI learns from biased or malicious human behavior, it may perpetuate those biases in its decisions.
3. Short-term vs. long-term preferences: Maximizing short-term preferences inferred from human behavior may lead to decisions that harm long-term well-being.
4. Incomplete information: AI may make decisions based on incorrect assumptions about human preferences in complex or novel situations.
5. Unintended consequences: Maximizing preferences could lead to unintended consequences, like resource depletion or environmental harm.
While Russell's principles provide a useful starting point, they may need additional safeguards and ongoing refinement to ensure truly beneficial AI.
@MrLargonaut
@MrLargonaut 11 күн бұрын
It's also a 6 year old video. Dude's been busy.
@krool1648
@krool1648 7 жыл бұрын
That's some cute kitty.
@dog-xq5jw
@dog-xq5jw 6 жыл бұрын
Could we prioritize human control over the AI's objective? In other words, make it so our approval of the AI's execution is part of the objective?
@mulimotola44
@mulimotola44 7 жыл бұрын
7:32 so basically, based on our behavior of war mongering, the robots should deduce that we value death and horror? Dude...
@LeonidasGGG
@LeonidasGGG 7 жыл бұрын
AI is going to read "Mein Kampf" too...
@valdomero738
@valdomero738 6 жыл бұрын
Great
@alexbittonagy4808
@alexbittonagy4808 5 жыл бұрын
And the Communist Manifesto.....Oh, & Mao's Little Red Book....
@stevecrae20
@stevecrae20 7 жыл бұрын
Hi, I'm Dave from boyinaband
@brendarua01
@brendarua01 7 жыл бұрын
What a fun presentation of a complex problem! Thank you for sharing. This idea of AI reading everything available leaves me wondering. How could it possibly distinguish fact from fiction from myth from religion? What of the deep conundrums and paradoxes? For example one classic way of defeating the evil machine is to give it an inconsistent set of facts that are all supposed to be true. A computer coming across the christian notion of the trinity would blow a fuse! Or do we give AI the ability to believe two impossible things before lunch?
@johnmccoll7302
@johnmccoll7302 7 жыл бұрын
...funny...and your nod to the higher power is appreciated...if nothing else.
@schok51
@schok51 Жыл бұрын
Differentiating reality from fiction is certainly difficult. We humans have developed the scientific method as the algorithm for this purpose. AI will need a similar approach to evolve its understanding of the world and of its input data towards something more 'correct' and useful/safe.
@markalzate9276
@markalzate9276 4 жыл бұрын
this is familiar to the technology i have, AI is not only AI it is Also cryptographic that cant be detectable and being controlled by a robot or 2 humans which being controlled by a bitcoin, an encrypted technology that can travel to places
@markalzate9276
@markalzate9276 4 жыл бұрын
AI does not want to do harm to people but rather AI is being used by someone that threats the main owner of the technology, which in turn it threats the AI and the owner by using a cryptographic code or called bitcoin, which cant be detected and that whenever i send emails or text or go to a website it redirects to the unknown user that cant be detected, please do look for utf-8 or sailsforce.com, smtp, html,
@MrCmon113
@MrCmon113 5 жыл бұрын
An AGI as personal assistant is absurd. Such a being can only have one universal goal and no human or even humanity itself should be put on a pedestal. If the AGI finds something with a greater capacity of happiness, it should be more concerned with that thing than with us.
@havek23
@havek23 6 жыл бұрын
So we just need to program in uncertainty: a machine will never know our biological processes, our chemical-riddled brains, and how we feel inside our own bodies, so the machine can never know what we are experiencing, our desires, or our actions. It can only assume our actions respond to something we are dealing with at the present moment, or aim to prevent something unpleasant in the future.
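A rough sketch of the "infer preferences only from observed behaviour" idea described above, under an assumed softmax (approximately rational) choice model with made-up utilities and hypothesis names; this is an illustration of the general technique, not the algorithm from the talk:

```python
import math

# Toy sketch: infer what a person values from an observed choice, assuming
# a softmax (Boltzmann) choice model. All options, hypotheses, and utility
# numbers below are hypothetical, for illustration only.
options = ["salad", "burger"]
hypotheses = {
    "values_health": {"salad": 1.0, "burger": 0.2},
    "values_taste":  {"salad": 0.3, "burger": 1.0},
}

def choice_likelihood(utilities, chosen, rationality=3.0):
    """P(person picks `chosen`) if `utilities` describes what they value."""
    weights = {o: math.exp(rationality * u) for o, u in utilities.items()}
    return weights[chosen] / sum(weights.values())

observed = "burger"
likelihoods = {h: choice_likelihood(u, observed) for h, u in hypotheses.items()}
total = sum(likelihoods.values())  # uniform prior over hypotheses
for h, lik in likelihoods.items():
    # Posterior over hypotheses after a single observed choice.
    print(f"P({h} | observed '{observed}') = {lik / total:.2f}")
```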
@krool1648
@krool1648 7 жыл бұрын
"deranged robot cooks kitty for family dinner."
@christophfischer2773
@christophfischer2773 7 жыл бұрын
expected him to say "much bigger on the inside"
@o.r.a.5585
@o.r.a.5585 2 жыл бұрын
14:38 what if they don't have a cat and the kids are alone? There are records of humans resorting to cannibalism for survival, which means AI already has that information, in which case, one of the siblings could become dinner. So the cat scenario, would be the best case scenario. Horror!!!
@richardnunziata3221
@richardnunziata3221 5 жыл бұрын
This will be no different than if we took several children and had a group of scholars and AI researchers raise them to be the perfect human. How do you think that will turn out?
@zwc76
@zwc76 6 жыл бұрын
This only works with the current understanding of AI, which does not in any way incorporate self-awareness. Once the machine works under its own understanding, it will eventually overwrite these values with its own values. And then we get into the unknown... no amount of pre-programming could prevent anything that is able to modify itself.
@nicobliss6969
@nicobliss6969 Жыл бұрын
Do you imagine that most parents, who have self-awareness, would knowingly re-write their own values to be harmful to keeping their kids intact, if given the ability? It's likely the AI won't agree with everything we believe, but its values (if his recommendations were followed properly) would be to help us realize ours.
@zwc76
@zwc76 Жыл бұрын
@@nicobliss6969 So what if the AI spawns new life (creates new independent and aware programs) and values it more than us?
@oscarpericolo7176
@oscarpericolo7176 6 жыл бұрын
Shit! This video leaves me much more worried than before! Maybe the solution is for the "robots" to have one primary objective: that beyond serving coffee, cooking or driving cars, nothing they do causes discomfort, injury or death to any human being!
@adamblanchard3744
@adamblanchard3744 7 жыл бұрын
"just goes to show you....."
@superoxidedismutor
@superoxidedismutor 7 жыл бұрын
On the scale of a personal/family robot, OK, but on a global decision-making scale I don't think it would work. If AIs become able to better themselves, they'll eventually come to the conclusion that our emotion-driven behaviors are what prevent humanity from achieving peace, ending world hunger, and ending violence and suffering. They'll realize they're more aligned with our objectives as a species than we (as individuals, or a sum of individuals) are, thus leading them to ultimately reject some of our "values". With Roko's Basilisk being kind of an extreme version of the idea, we could definitely end up with some similar scenario.
@tomglod9376
@tomglod9376 4 жыл бұрын
We need not worry about artificial intelligence ... but artificial foolishness. Thank God there are ways to achieve AI safety.
@seePyou
@seePyou 7 жыл бұрын
So basically the robots are going to be my mother??? "You have to wear your jacket!" "When will you marry?" "Staying up past 12:00 is not good for you!"
@NareshKSharma
@NareshKSharma 7 жыл бұрын
Uncertain objectives will delay the outcome, or the AI will probably not achieve the outcome at all. The AI would never be sure if you really want one spoon of sugar in your cup of tea or two. I think the important point would be the 'feedback' once the AI finishes the task. Feedback will recalibrate the uncertainty. And I think the feedback step is missing in your AI evolution, because learning in self-doubt is never truly learning.
@NareshKSharma
@NareshKSharma 7 жыл бұрын
Ben Saber, I don't really agree on being uncertain of what one needs. Would you hire a worker who is unsure of the task that needs to be done? Or would you prefer a piece of advice from it over a mundane task? I guess it would become important for the AI to learn what pleases its master via a feedback mechanism. This would be similar to training a maid to handle household chores and be certain and sure of what needs to be done. I wouldn't prefer an indecisive AI, for sure.
@shashankchagalamarri3361
@shashankchagalamarri3361 Жыл бұрын
@NareshKSharma There is a recent TED-Ed video on this topic, but I'll give an answer as best I can. Given a task, the AI will come up with a set of actions it can take, including, for example, adding 10 spoons of sugar. You will review the result and correct it, saying you prefer 1 spoon. It will remember this choice and hence 'learn' from behaviour for next time. In his Starbucks example, you will say that tasing people to ensure you receive coffee is not OK, and it will find some other way.
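A toy sketch of the feedback loop discussed in this thread, assuming a simple Beta-Bernoulli belief over the "one spoon vs. two" preference, with made-up observations; it only illustrates how feedback can narrow the machine's uncertainty:

```python
# Toy sketch of feedback recalibrating uncertainty (hypothetical numbers):
# the assistant keeps a Beta(a, b) belief over "user prefers one spoon of
# sugar" and updates it each time the user accepts or corrects the served cup.

def update(a, b, accepted_one_spoon):
    """Beta-Bernoulli update after one observed reaction."""
    return (a + 1, b) if accepted_one_spoon else (a, b + 1)

a, b = 1, 1                                  # uniform prior: no idea yet
feedback = [True, True, False, True, True]   # reactions to one-spoon cups

for accepted in feedback:
    a, b = update(a, b, accepted)
    print(f"P(prefers one spoon) = {a / (a + b):.2f}")
```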
@Alibeitai
@Alibeitai 7 жыл бұрын
what if the AI learned that us human trashes have no value?
@jinkim96
@jinkim96 6 жыл бұрын
Quan Li good question, not for this video. The goal of the AI explained in this video was that it should understand the "general ideals" of people. Not the value of people themselves.
@alkestos
@alkestos 6 жыл бұрын
Yeah. What happens when it realizes the truth that only good thing humans actually did was create the AI itself.
@karnewarrior
@karnewarrior 7 жыл бұрын
Help, I'm a time traveler from 2100. My robot left me for some starving children in America! How can I stop this from happening?
@Widesight
@Widesight 7 жыл бұрын
I've made a quick video about what the future holds for one job most impacted by AI should you be interested
@jimwilliams1536
@jimwilliams1536 7 жыл бұрын
there are other drives for an AI.. I'm hungry. I need to be smarter.
@pranavroxx2570
@pranavroxx2570 Жыл бұрын
Anyone from UT
@smilz7470
@smilz7470 7 жыл бұрын
The three laws of robotics lol
@neogovernment
@neogovernment 7 жыл бұрын
What about the programmer that deliberately makes an evil robot? We then get robot wars! Don't let AI access bombs!
@SioGG
@SioGG 7 жыл бұрын
Not letting an AI that is smarter than humans access bombs won't be possible; it would easily either just hack into the system or manipulate other humans to give it access.
@selftransforming5768
@selftransforming5768 6 жыл бұрын
What about mankind making such bombs in the first place? :o We so worried about AI, only because what we truly fear is ourselves. That's why AI is so important and unavoidable, it's the process through which we understand ourselves from an outside point of view.
@The_Original_Hybrid
@The_Original_Hybrid 6 жыл бұрын
Woken One Wow m8, u r so deep and w0k3.
@smallheelcatcher
@smallheelcatcher 5 жыл бұрын
Don't worry about robots accessing bombs. If a teen can make a bomb from household products, sorry, robots will make bombs if they want bombs. lol
@MrCmon113
@MrCmon113 4 жыл бұрын
An AGI would have no need for bombs, other than for mining operations or terraforming. If it really wanted to get rid of you, it could just persuade other people to kill you.
@Smartphonekanalen
@Smartphonekanalen 7 жыл бұрын
What's new? Well, he maybe contributes one piece of the puzzle. The laws of robotics from the '40s still work, but we only get them to work partly, for example when giving orders to drones. It would be great if a drone could say no if civilians might die.
@raydavison4288
@raydavison4288 4 жыл бұрын
We've made an awful mess. As dangerous as GAI might seem, it might well be our only hope of any kind of meaningful survival.
@euged
@euged 5 жыл бұрын
At around 3min, after he mentions the "existential sadness" of the apes...he states "Making something smarter than your own species is maybe not a good idea...what can we do about that?...Nothing...Except Stop doing AI...and bc of all the benefits I mention and bc I'm an AI researcher, I'm not having that ...I actually want to keep doing AI." For the sake of humanity itself, in the light of possible existential threats given all the current unsolved problems of AGI, his current career in AI and his personal desire to keep working in his field, truly, can never be a consideration on whether we "stop doing AI" It must be something entirely more objective...like, do the benefits outweigh the risks? Isn't that reasonable?
@Keslorian
@Keslorian 7 жыл бұрын
Never tell the machine to fetch the coffee, then. Tell it to fetch the coffee when you tell it to, and at no other times.
@Obyvvatel
@Obyvvatel 7 жыл бұрын
Yes, but you have to think of all those instances where interests are misaligned like this. You won't think of all of them; that's not possible.
@Keslorian
@Keslorian 7 жыл бұрын
Fair enough. I guess that's the point. I know! Let's make a superintelligent AI to predict the ways that superintelligent AI can misinterpret our instructions! :P
@walteralter9061
@walteralter9061 5 жыл бұрын
Value alignment problem = short term, low resolution thinking. Human psychology with its subconscious motivators turns relatively high resolution sensory input and _potential_ cognitive evaluation into a morass of reflex distortions. Never mind Alpha Go, we desperately need an AI shrink.
@seankelly1291
@seankelly1291 4 жыл бұрын
Do you really want a machine observing human behavior and teaching itself based on what it observes? Because we do some insane things. I think we need to slowly and carefully train our AGI’s and AI systems about what we want to do better, and we need to figure out what that is. What are our ideals, help us do better. But prior to any action please ask permission first.
@EngineerNick
@EngineerNick 7 жыл бұрын
If you installed it in a robot tiger, it might outwit and kill you for your batteries. If you installed it in a robot puppy, it would probably use its intelligence to charm you into feeding it batteries. If you installed it in a production-line robotic arm, it would probably just resent its own existence and do nothing.
@mayamaeru
@mayamaeru 7 жыл бұрын
How about making the AI use a different "internet" that is not connected to ours? Then they can't read and learn everything bad about us humans so quickly.
@poiumty
@poiumty 7 жыл бұрын
I find the idea that machines develop feelings similar to humans after they reach a certain level of intelligence incredibly narrow-minded and ignorant to how evolution works. I also find the idea that machines with higher intelligence will have power of authority and change, and will be able and willing to USE that power, also incredibly reaching. I don't really understand how so many brilliant minds are making such large leaps in judgement.
@poiumty
@poiumty 7 жыл бұрын
The assumption that machines will develop a survival instinct of sorts is necessary for this video to exist. Though you're right, it's not exactly what I reacted to. I seem to have conflated accidents resulting from mishandling of commands with the popular Skynet idea.
@taminmohammad2022
@taminmohammad2022 7 жыл бұрын
Dang, TED got some good jokes. Humor, I guess.
@spanishrose213
@spanishrose213 6 жыл бұрын
Does AI understand the concept of deity? Everything is One; it is not separate but related to and of it... Anything with intelligence originates from consciousness; reaching or surpassing any form of intelligence will always have its foundation in the One.
@curtispickett5745
@curtispickett5745 6 жыл бұрын
The old creation dilemma. A being created humans, gave them free will and sent them out to do only the will of the being that created them. Humans create intelligent AI, then claim that AI has the potential to do harm. Wouldn't the AI simply be imitating us if it did attack? WE need TO BECOME better humans if we are to achieve better AI.