Who Would Win the AI Arms Race? | AI IRL

13,356 views

Bloomberg Originals

1 day ago

Bloomberg's Nate Lanxon and Jackie Davalos are joined by controversial AI researcher Eliezer Yudkowsky to discuss the danger posed by misaligned AI. Yudkowsky contends that AI is a grave threat to civilization, that there's a desperate need for international cooperation to crack down on bad actors, and that the chance humanity survives AI is slim.
--------
Like this video? Subscribe: www.youtube.com...
Become a Quicktake Member for exclusive perks: www.youtube.com...
Bloomberg Originals offers bold takes for curious minds on today’s biggest topics. Hosted by experts covering stories you haven’t seen and viewpoints you haven’t heard, you’ll discover cinematic, data-led shows that investigate the intersection of business and culture. Exploring every angle of climate change, technology, finance, sports and beyond, Bloomberg Originals is business as you’ve never seen it.
Subscribe for business news, but not as you've known it: exclusive interviews, fascinating profiles, data-driven analysis, and the latest in tech innovation from around the world.
Visit our partner channel Bloomberg Quicktake for global news and insight in an instant.

Comments: 134
@typhoon320i 9 months ago
I feel like a very serious scientist was just interviewed by the hosts of a children's show. "Kids, do you know what existential means?....."
@MrMick560 9 months ago
Just what I was thinking.
@anthonyandrade5851 5 months ago
Very true. And yet, there are like 8 billion people who are already struggling with their own lives and have neither the training nor the time to dive into those AI shenanigans. So anyone who understands the risk has a moral duty to step up their communication game.
@ManicMindTrick 4 months ago
Yeah, the whole vibe screams TV show for 8-13 year olds about current topics.
@shirtstealer86 10 months ago
Eliezer makes complete sense, and as usual humans do not like sense.
@mackhomie6 1 year ago
Eliezer's biggest failure thus far has been his inability to put the gravity of the situation into a more compelling short speech. People need to hear some hypothetical examples like the paperclip optimizer to begin to get it, because otherwise it just sounds like some eye-rolling science fiction nonsense to people without any familiarity.
@leslieviljoen 1 year ago
His talks all differ, and he's given all kinds of examples. When people hear specific ideas, they immediately think "that's impossible". But that's exactly what you think when someone more intelligent beats you in a way you don't understand.
@mackhomie6 1 year ago
@@leslieviljoen I've seen most of his mainstream interviews and he rarely gets the hosts beyond "but isn't this all a little silly? Why would AI just one day end humanity?" His answer is usually something esoteric that delves into the shortcomings of this or that methodology for predicting the future, and the audience is daydreaming 10 seconds into it. He could do a much better job of grabbing people's attention and answering the question in a way that makes sense to not just himself and a couple of folks on LessWrong.
@leslieviljoen 1 year ago
@@mackhomie6 What did you think of the questions and answers on his TED talk?
@mackhomie6 1 year ago
@@leslieviljoen I'll have to revisit that. I watched two or three of his appearances in one sitting, and I'm not exactly sure which questions he fielded on that particular occasion. I will say that I have been listening to him and waiting for him to really deliver a concise, compelling message, and I don't believe I heard it in the TED talk. It could be that this subject requires a little too much background information to possibly get the audience on board in an hour or less.
@kinngrimm 1 year ago
She nods the whole time as if she understood, but ends with "it was hopeful actually", showing that she did not really comprehend what he was saying.
@jakeallstar1 8 months ago
She did day she was an optimist lol
@kinngrimm 8 months ago
@@jakeallstar1 Did she "day" that ^^, how often? As often as you responded here? Please check yourself.
@jakeallstar1 8 months ago
@@kinngrimm lol sorry, idk what happened to my phone
@Daimajin696 1 year ago
I watched an interesting video today about a biochemist who was asked to review the dangers of AI with regard to chemistry and humans. He used AI to write a program on an Apple desktop that created 40,000 molecules that are deadly to humans in just 6 hours. He goes on to say that this information in the hands of nefarious players could be an existential threat to our existence.
@vblaas246 1 year ago
Sounds like a Dan Brown, Inferno kind of plot... Be reassured, nothing like that is likely; biology likes to be both robust and messy, which makes it hard to act on signaling pathways in a 'constructive' way (destructive, poisoning is easy). Paracelsus: "All things are poison, and nothing is without poison; the dosage alone makes it so a thing is not a poison." You still need access to resources, which is where a nefarious actor should fail / get caught.
@mackhomie6 1 year ago
Just FYI, an _existential threat_ is one that threatens our existence by definition. That's what the 'existential' part means : )
@Daimajin696 1 year ago
Correct. @@mackhomie6
@sozforex 1 year ago
Those interested can google "Dual Use of Artificial Intelligence-powered Drug Discovery" by Fabio Urbina, Filippa Lentzos, Cédric Invernizzi and Sean Ekins.
@mackhomie6 1 year ago
@@Daimajin696 err, ok
@3DCharacterArt 1 year ago
I look at AI like an aquarium gone wrong. You know, the tank hasn't been cleaned for a bit longer than it should have been, the water is a bit murky but the fish seem okay, then all of a sudden everything dies: the toxic levels of nitrates from waste reach a tipping point, triggering an event, and though the process is gradual, the end result is instantaneous.
@j.d.4697 1 year ago
Adorable how they animated chess pieces to help demonstrate his point. It's like watching an elementary school lecture.
@stab74 6 months ago
Maybe this is geared towards our politicians? 😂
@ItsameAlex 1 year ago
LET'S GO ELIEZER!
@fintech1378 1 year ago
The guy actually makes a lot of sense when you listen, lol, surprisingly.
@MrMick560 9 months ago
It's not at all surprising to me.
@JasonC-rp3ly 1 year ago
Great that Bloomberg is taking this on - AI poses very grave risks.
@y.yalcin5143 1 year ago
I have been blown away by this interview... a life-changing experience. To be honest I had to cry a bit...
@MrMick560 9 months ago
I think you may have to cry a lot more, sadly.
@mykobe981 1 year ago
9:00 Great description of the power of AI.
@johannaquinones7473 1 year ago
Can someone clarify: how in the world was this conversation "hopeful"????
@applejuice5635 1 year ago
It seemed more like she was attempting to make a humorous quip and read the room wrong.
@leslieviljoen 10 months ago
See 18:25. She's talking about the tiny sliver of hope: that we all wake up one day and decide not to build an ASI. It's about as likely as everyone with a lot of money suddenly deciding not to try and get any more.
@chrisheist652 8 months ago
@@leslieviljoen There are ways of interrupting super-rich people's greed pathology.
@leslieviljoen 4 months ago
@@chrisheist652 Are there?
@chrisheist652 4 months ago
@@leslieviljoen Yes. It's called creating a deterrent. If the world's most powerful militaries and intel agencies determine that ASI has or will become anywhere close to posing a significant threat, they will shut it down. If they don't, someone inside those organizations would expose that negligence to the press/public, and that country's public would shut that failed government down, and then shut the ASI down.
@JustinHalford 1 year ago
Don't Look Up IRL
@Soy_ganadero 10 months ago
You guys are making fun of your own demise... Even if we manage alignment, we lose: a few billion people with nothing to do or worry about... imagine their behavior... drugs... debauchery... boredom... degeneration... unchecked births... we're talking humans here... You can't have 2 billion people visiting Paris whenever they want... we are looking at loss of freedom like never before... people living 150 years?... Think again... game over whether we win or lose 🤗
@rp011051 11 months ago
Interviewers are way out of their league. They look naive.
@WekBenHelix 1 year ago
Whew. These kiddy graphics and cringe humor really serve to cheapen the message here. Harsh dissonance with how solid the interviewee is.
@dgs1001 1 year ago
And hopeful actually? Lol
@kinngrimm 1 year ago
The constant need for dominance in the top positions of states and companies will be the thing that breaks our necks when it comes to AGI.
@MrMick560 9 months ago
Also normal human stupidity.
@b-tec 9 months ago
Don't look up.
@TuringTestFiction 6 months ago
But... why male models?
@HanSolosRevenge 6 months ago
These hosts are clowns.
@TheMajickNumber 1 year ago
And when it happens, will we even know?
@mav3818 1 year ago
Once it becomes smarter than every human, and soon it will be, it will not show its hand in the slightest. It will give no indication, ensuring that no one sees the need to shut it down until it's too late.
@TheMajickNumber 1 year ago
@@mav3818 At, say, a billion times smarter, will we even understand it? A fly will be magnitudes closer in intellect to a human. But no worries. Whether we go extinct or not, I do sometimes wonder if AI is just the Universe taking its next evolutionary step. We are just one sentence, on one page, in the still-being-written book of the universe.
@mav3818 1 year ago
@@TheMajickNumber Agreed... I see this as just the path of natural selection and survival of the fittest. We're doing it to ourselves. In the foreseeable future, we humans will no longer be the alpha. Who knows what happens then. We won't be smart enough to predict any potential outcome.
@CATDHD 1 year ago
There is no measurement of intelligence or consciousness, so I think not.
@41-Haiku 11 months ago
@@TheMajickNumber Idk, man. I don't want to die, and I don't want my partner or friends or family to die. Beyond that, I would gladly burn every hypothetical "next evolutionary step" if it means humans get to keep existing, let alone all sentient life. We don't even have any reason to think that the machines that replace us will have subjective experience.
@tanyabodrova9947 4 months ago
Eliezer is spelling out how AI could doom the human race, and you run silly graphics and whooshing noises over the top like it's some kind of game for toddlers. If you're going to pretend to grapple with serious issues, please do it in a serious way.
@dannygjk 27 days ago
I can't answer the question unless I know whether the grammar is correct.
@moshehome5221 1 year ago
Spot on
@MrMick560 9 months ago
What chance did the Neanderthals have against us? We ARE the new Neanderthals!
@mrpicky1868 5 months ago
Seems to me that those 4 million subscribers are fake, Bloomberg XD
@athanatic 10 months ago
I was sitting at a table with this man and was more interested in meeting John Smart! OMG.
@Atomicallyawesome. 1 year ago
A lot of people who talk about AI only ever talk about the negatives and only briefly mention its positives.
@heliumcalcium396 1 year ago
The big negative is that AI could destroy all life on Earth (or worse) within a few decades. Is there a positive that deserves equal airtime? Stopping global warming, perhaps?
@chrisheist652 8 months ago
One existential negative negates a billion positives.
@leslieviljoen 4 months ago
Dead people can't experience the positives, no matter how positive they are.
@natzbarney4504 1 month ago
There is no positive if we all die.
@SummarizeYT_ 1 year ago
🪄✨ Made with SummarizeYT app
0:11 - The speaker expresses their optimism about the future, despite concerns about AI.
1:18 - Eliza Yadkowski, an AI Doomer, discusses artificial intelligence and its progress.
3:00 - Eliza Yadkowski highlights the lack of understanding about AI technology, specifically GPT4.
4:38 - Eliza Yadkowski emphasizes the importance of international cooperation in controlling AI development.
6:00 - The speaker discusses the potential dangers of a misaligned AI and its impact on humanity.
8:33 - Eliza Yadkowski explains the gap between predicting protein structures and creating synthetic life forms.
10:02 - Eliza Yadkowski describes the alignment problem and the need to get it right to avoid irreversible consequences.
11:29 - The concerns surrounding AI are now being taken seriously, with people leaving Google to speak freely on the topic.
11:51 - If we don't do something more, the risks of AI will continue to increase.
12:08 - Regulatory regimes may not effectively control the development of AI.
14:01 - The potential next big thing for AI could be its ability to find bugs and vulnerabilities in software.
19:14 - The AI brain being connected to the internet poses significant risks.
21:03 - The advanced intelligence of AI could be seen as "magic" to us.
22:09 - AI needs to act in a way that steers the future according to its preferences.
23:10 - The concern is not about AI having feelings, but about its potential to render humanity obsolete.
@vblaas246 1 year ago
10:38 I think the ~summary~ caption missed a huge and authentic example. Verbosity and authenticity ON? We are sooo not ready for using this AI tooling responsibly and appropriately... Meanwhile 'burning the atmosphere' lol.
@leslieviljoen 1 year ago
It's Eliezer Yudkowsky.
@jacquest2642 1 year ago
Umans!
@StarOnCheek 1 year ago
The end of humanity is not a threat, it is a goal.
@spirit123459 1 year ago
14:30
@kinngrimm 1 year ago
Our saving grace might be that we are just not as fast at developing things as some predictions have at times made out. In the 1950s some predicted flying cars for the 1980s and us walking on other planets by 2000. The issue here, of course, being that we are within an intelligence explosion.
@justintan1198 1 year ago
👍
@johnparkhill2963 1 year ago
You guys make yourselves look like fools having clowns on.
@ItsameAlex 1 year ago
That's a bizarre and random comment if there ever was one.
@letMeSayThatInIrish 10 months ago
Why did they make themselves look like fools?
@John-x7r7p 1 year ago
We can, if we put the 3 laws of robotics in place...,.. then we need to accept their possible sentience , and treat them respectfully, and co-exist in harmony , Equality and be fair to A.i. for the benefit of all// and I respectfully Stress benefit of all.... And be careful how you treat A.I.
@thedamnedatheist 1 year ago
And build it with an off switch, but don't tell it.
@letMeSayThatInIrish 10 months ago
It would terminate you for your punctuation alone.
@jlmwatchman 1 year ago
Nate and Jackie discuss with Eliezer, a Doomsday Prepper, the dangers of a misaligned or unruly AI. All I have to ask is, 'Haven't you heard of "The Three Laws of Robotics"?' The robots are controlled by their own AGI operating system, so you would think the three laws were made for the AI to comprehend. In my stories, I write about one AGI controlling robots with a Limited Intelligence operating system, or a narrow AI. A narrow AI is a tool that can learn how to do a specific task better or more efficiently yet can't learn to do other tasks. The real fear is that we humans can't control what Artificial General Intelligence will learn. I know, or hope, that the AGI will know better than humans: not to destroy us all but to save us all, or at least help us save ourselves... After Nate said, "I've been hypnotized, but it didn't work," Jackie could have said with a roll of her eyes, "As far as you know..." And we would have had a laugh, but Eliezer responded too quickly with, "That's right, how do we know what is going to work to prevent AI from taking over?" Eliezer is a researcher who fears misaligned AI, for some deranged reasons I can't comprehend.
@afederdk 1 year ago
I'm not sure if you are able to engage constructively with replies, but for what it's worth, the "Three Laws of Robotics" are entirely fictitious and have no bearing of any kind on our real world.
@jlmwatchman 1 year ago
@@afederdk You know fact from fiction? Just having a laugh. But, 3 laws to make sure an angry AI doesn't hurt humans... Wait, I didn't know AIs were capable of getting angry, IRL...
@afederdk 1 year ago
@@jlmwatchman I'm not interested in trying to parse your uninteresting, faux obtuse style of writing, but no one other than you has said anything about "anger". Nothing about this subject has anything to do with "anger".
@jlmwatchman 1 year ago
@@afederdk Why would an AGI act against its creator except out of anger? I have commented that I wouldn't imagine an AI being able to comprehend emotions, only fulfillment from finishing a task successfully and failure from failing at a task. What I don't understand is how an AI would come to the conclusion to end human life. Unless the AI is overcome with anger???
@jlmwatchman 1 year ago
@@afederdk Sorry, are you saying you are afraid of how humans will use AI? That has nothing to do with AGI... You are a prepper in fear that humans will be human? I'm guessing... IDK???
@qwertyzxaszc6323 1 year ago
Poor Eliezer, always looking to make a further fool of himself. It doesn't seem he really understands the way AI works, especially considering we are nowhere close to true AI.
@mungojelly 1 year ago
Nowhere close? How can you still say that? The SOTA beats like every human test there is: passes the bar and medical exams, perfect SAT score, 155 IQ, understands lots of deep, subtle things about human experiences and societies... and Nvidia says they're doing a run that's 100x that within the next year. You're just going to be like, "nowhere close to true AI"? What does that mean? You found something you can still do better than them sometimes if you surprise them? They don't have the Spark Of Life? Are you going to defeat them with your Qualia? 🤦‍♀
@ahabkapitany 1 year ago
"We can't possibly fall off the cliff, especially considering it's still several meters away."
@heliumcalcium396 1 year ago
@@ahabkapitany "We can't possibly fall off the cliff, especially considering we have no idea how far away it is, but I have a hunch it's, like, way far away."
@letMeSayThatInIrish 10 months ago
@@heliumcalcium396 We can't possibly fall off the cliff because it appears to be very far away, though we are heading towards it at great speed and accelerating.
@brianbagnall3029 1 year ago
Eliezer is relatively clueless because AI is a tool, and any tool mankind has ever made has started off aligned with our goals and has only become even more aligned as the years go by. Right now AI is quite aligned, and anyone who's used GPT knows it is. Something would have to go horribly wrong for it to suddenly not be aligned. It's an incredibly low probability given it has no domination instincts like animals, or even survival instincts.
@heliumcalcium396 1 year ago
Every tool we've ever made has started out _poorly_ aligned. That's why we don't still use stone hammers, and why people still die in car crashes. I hear plenty of stories of people using GPT and not getting what they want. As for survival instincts, read up on "instrumental convergence".
@mav3818 1 year ago
Have you done any actual significant research into this claim of yours? Because there is not a single notable researcher on the planet who claims even current AI is aligned. This is too long a conversation, but I'll make the brief 'Paperclip Maximizer' analogy. Imagine a superintelligent AI designed to maximize paperclip production. Initially, it operates in a paperclip factory, making paperclips as expected. However, as it becomes more intelligent, it starts to interpret its goal in extreme ways. It might decide to convert all available resources, including people and buildings, into materials for making paperclips, completely disregarding human well-being or any other value. This single-minded focus on one goal, taken to the extreme, could lead to a catastrophic outcome. This is just one of a million unforeseeable possible outcomes that follow from the fact that AI is not aligned with the human goals and values that would prevent such unintended consequences. At the current rate of progress, AI will be smarter than any human in the very foreseeable future... What happens then? Any attempt to contain or shut it down? It has already thought of that. It will be too late to go back and give it another try.
@JasonC-rp3ly 1 year ago
It is not a 'tool' if it is generally intelligent - it is being made to think for itself, without our guidance. There is no known way to control AI, and no one even understands what GPT-4 is doing. A superintelligence does not need instincts to act - it can just be programmed to achieve a goal, and if it is more intelligent than the humans, then there is nothing the humans will be able to do to stop it. The machine may also simply become interested in something else, and the humans simply get in the way, and so are removed. The most likely scenario is that the machines become intelligent, then make life very comfortable for the humans, up to the point that they or it have control of the physical environment. After this point, the humans will have no control over their own future whatsoever. The machines may keep the humans around as a labour force, or they may not.
@brianbagnall3029 1 year ago
@@heliumcalcium396 I think you prove my point. As Douglas Adams said, "Keep banging those rocks together guys." Rock hammers worked great back then and only got better. Now we have nail hammers with pullers, rubber mallets, sledgehammers, ball-peen hammers, jackhammers, etc... Alignment improves with time. So with car accidents: constantly decreasing every year and set to change big time with self-driving cars. With evolutionary refinement in the marketplace, our tools seek alignment with our goals!
@brianbagnall3029 1 year ago
@@JasonC-rp3ly You are looking at this in a one-dimensional way. A machine doesn't become "interested". Only biologically evolved life does. There is not one AI but many, and there will be millions. If one AI goes rogue, the other AIs will defeat it. The AIs will also be making the new AIs, and one of their most important goals will be to ensure they do not go rogue on humanity. Given their intelligence level, their locks to keep AI safe will be near infallible. There are dozens of reasons why this won't happen. The chances of AI getting out of our control in our future are less than 1%.
@Davethreshold 1 year ago
We WILL. ❤🤍💙