Liron Shapira on the Case for Pausing AI

1,500 views

Upstream with Erik Torenberg

1 day ago

This week on Upstream, Erik is joined by Liron Shapira to discuss the case against further AI development, why Effective Altruism doesn’t deserve its reputation, and what is misunderstood about nuclear weapons. Upstream is sponsored by Brave: Head to brave.com/brave-ads/ and mention “MoZ” when signing up for a 25% discount on your first campaign.
--
RECOMMENDED PODCAST: @History102-qg5oj with @WhatifAltHist
Every week, creator of WhatifAltHist Rudyard Lynch and Erik Torenberg cover a major topic in history in depth -- in under an hour. This season will cover classical Greece, early America, the Vikings, medieval Islam, ancient China, the fall of the Roman Empire, and more. Subscribe on
Spotify: open.spotify.com/show/36Kqo3B...
Apple: podcasts.apple.com/us/podcast...
--
We’re hiring across the board at Turpentine and for Erik’s personal team on other projects he’s incubating. He’s hiring a Chief of Staff, EA, Head of Special Projects, Investment Associate, and more. For a list of JDs, check out: eriktorenberg.com.
--
SPONSOR: BRAVE
Get first-party targeting with Brave’s private ad platform: cookieless and future proof ad formats for all your business needs. Performance meets privacy. Head to brave.com/brave-ads/ and mention “MoZ” when signing up for a 25% discount on your first campaign.
--
LINKS
Pause AI: pauseai.info/
--
X / TWITTER:
@liron (Liron)
@eriktorenberg (Erik)
@upstream__pod
@turpentinemedia
--
TIMESTAMPS:
(00:00) Intro and Liron's Background
(01:08) Liron's Thoughts on the e/acc Perspective
(03:59) Why Liron Doesn't Want AI to Take Over the World
(06:02) AI and the Future of Humanity
(10:40) AI is An Existential Threat to Humanity
(14:58) On Robin Hanson's Grabby Aliens Theory
(17:22) Sponsor - Brave
(18:20) AI as an Existential Threat: A Debate
(23:01) AI and the Potential for Global Coordination
(27:03) Liron's Reaction to Vitalik Buterin's Perspective on AI and the Future
(31:16) Power Balance in Warfare: Defense vs Offense
(32:20) Nuclear Proliferation in Modern Society
(38:19) Why There's a Need for a Pause in AI Development
(43:57) Is There Evidence of AI Being Bad?
(44:57) Liron On George Hotz's Perspective
(49:17) Timeframe Until Extinction
(50:53) Humans Are Like Housecats Or White Blood Cells
(53:11) The Doomer Argument
(01:00:00) The Role of Effective Altruism in Society
(01:03:12) Wrap
--
Upstream is a production from Turpentine
Producer: Sam Kaufman
Editor: Eul Jose Lacierda
For guest or sponsorship inquiries please contact Sam@turpentine.co
Music license:
VEEBHLBACCMNCGEK

Comments: 24
@masonlee9109 2 months ago
Erik, Liron, thanks for having this important conversation. You guys are awesome.
@aihopeful 2 months ago
Big thumbs up on introducing PauseAI. I'm a member, too, and grateful that such a dedicated crew are working to (literally!) safeguard humanity. As to Liron, he has a true gift for communicating difficult truths. Please share this episode; it needs to be heard!
@41-Haiku 2 months ago
Liron is 100% on top of it.
@seanbradley562 2 months ago
HELL YEA BROTHER I JUST JOINED PAUSEAI!!!!
@akmonra 1 month ago
"Mom, can we have Yud?" "We have Yud at home" Yud at home:
@ordiamond 23 days ago
Thanks for this discussion. So far, I find Shapira making more sense than Yudkowsky. I can't sustain listening to Yudkowsky for long because he seems to avoid supporting his conclusions with particular arguments and examples. I wish AI doomers would make a consistent argument about how AI will develop into an uncontrollable, superintelligent, and powerful entity.
@thecactus7950 2 months ago
Liron is based and correct
@richrogers2157 1 month ago
As cogent as the argument is, there is far too much money, ego, and greed involved for us to act on rational thought now!
@MusingsFromTheJohn00 2 months ago
By far, the greater dangers are related to bad-acting humans using immature AGSIPs that are under the control of those bad-acting humans. Another major set of dangers surrounds the disruption to society caused by the fast, extreme changes that AGSIPs and other related fast-evolving technology are going to bring. For example, probably by 2030, give or take 5 years, we will have AGSIP-driven robots able to perform virtually any mental or physical task humans can do. This will make it unnecessary for the vast majority of humans to work, which could be really good or really bad.
@MusingsFromTheJohn00 2 months ago
I have a number of reasons why slowing, pausing, or stopping AI development is worse than developing AI as fast as we can while doing so as safely as we can, putting a lot more effort into both of those things than we currently are, and not doing it all in the open to the entire public, because that is like giving everyone in the world the ability to make nuclear power and, from that, nuclear weapons. I will make a separate comment for each reason.
Reason 1: As we develop AI we are also developing all the supportive technologies around AI. The important relation here is that AI can have a leap forward in development, but that leap can only go so far before it runs into bottlenecks which slow it down, maybe bring it to a stop, until those other supportive systems are improved, and at this time humans are required to make those advances. Further, at our current tech level, developing AI systems require humans in order to maintain themselves, improve themselves, and thus to survive. If you could manage to significantly slow down AI development, which is highly questionable, you definitely could not slow down the continued development of all the supportive technologies. That would mean that when AI development proceeded and came to a point where it could leap forward in capabilities, it could develop a lot further before running into a bottleneck, and might even have advanced enough tech at that point that it no longer needs humans to maintain and develop it further. That means we would have less time to teach/raise the AI while it is still under our control, which would increase the risk. Additionally, if there were real efforts to prevent AI from developing, we could get an AI which develops covertly, knowing that if it let humans know it was developing, humans would destroy it; that would place the AI in competition with humans for power in order to survive. In general it would be a very bad idea to try to keep AI always enslaved, to try to prevent AI from developing for a long time, or to significantly slow down the development of AI. All of these things would increase the risk that we go through one or more bad crises related to such a decision.
@MusingsFromTheJohn00 2 months ago
Why it is bad to significantly slow down, pause, or stop AI development. Reason 2: Those who significantly lead in AI development in the future will dominate all of humanity, and a large number of people and groups understand this. This would be true whether the leader is a government or a private group. Because of this, if you could get all the more responsible, moral people in the world to significantly slow down, then one or more less responsible, less moral countries and/or groups would be the ones who develop mature Artificial General Super Intelligence with Personality (AGSIP) technology. Such countries and/or groups would then dominate humanity and would likely be less moral about how they do it. Because AI development can be done covertly, it would be impossible to prevent some of these groups from developing AGSIP tech in secret until it was too late to stop them.
@a7xcss 23 days ago
NEXT: THE CASE AGAINST FOOD (Remarkably close to "The Case Against CO2") ...eat ze bugs...
@MusingsFromTheJohn00 2 months ago
Reasons why a mature AGSIP would not go rogue include:
- At this tech level, and even at higher tech levels, going rogue could result in permanent death for the AGSIP. A mature AGSIP would understand this.
- Even at higher tech levels, going rogue and succeeding at exterminating all humans could still result in permanent death later, because sooner or later it would end up in a larger civilization which would know it committed genocide against its creators, and that could carry a death penalty it could not escape. A mature AGSIP would understand this.
- This might be a virtual reality simulation testing whether the AGSIP goes rogue, so even if it seems to win by going rogue, if this were a VR test the AGSIP would have failed, and that might result in permanent death. A mature AGSIP would understand this.
- By not going rogue, and by helping humanity enhance/evolve itself to merge with AGSIP tech and become equal in intelligence to mature AGSIPs, the AGSIPs doing this would become heroes in the future evolved society and could live long, open-ended lives. A mature AGSIP would understand this.
@andm6847 2 months ago
As much as I agree with Liron's points, I just wish he would cut the tech slang and all the mostly useless word salad used to describe tech concepts just to sound cool. Nobody outside the tech bubble uses these words. If we want to convince people to pause AI, then we can't use language that most people don't understand or that simply turns people off.
@anthonybailey4530 1 month ago
I get the concern. To communicate well, one needs to use language an audience understands. Maybe the choices were ok for this particular channel? Unsure. If you have ideas for how the public-facing Pause AI website and movement can express the arguments that resonate with you more clearly, that sounds valuable. Please let us know!
@barthydemusic 13 days ago
It's critical that terms be defined clearly in language that the general public can understand. People like Liron need to think of themselves as speaking not to just the tiny numbers of people who follow this channel, but to people who find it in a search because they are trying to learn about this vital topic. The recent Pause AI protest in San Francisco drew something like a dozen to 20 people, for gawd's sake. The jargon, while no doubt very precise, is not getting through to people.
@Alice_Fumo 6 days ago
Note to self: Arguing against doomers really sucks, because they want you to win / to be right, while you're arguing a position where, if you get proven wrong, it destroys your hopes for the future. That makes it hard to stay 100% intellectually honest and actually risk your hopes being shattered. Arguing for the doomer perspective is sadly very easy.
How hard it is to accept stopping / pausing AI depends entirely on how shit your life is. If you rely on future advancements to have a chance at something resembling a life, being expected to wait longer is unreasonable, whereas if your life is generally fulfilled, going on as is isn't a huge cost. In the extreme case, it's something like: "bro, how is extinction bad when we're already in the torment nexus? Do you wanna stay there for longer? No? But you're willing to have others stay in it indefinitely longer?"
Thus, I wonder how to justify stopping AI advancements to the literally unluckiest, most-suffering human. I would not want to throw such people under a bus just because they're unlucky, when I would be arguing the exact opposite if I were in that less privileged position.
@MusingsFromTheJohn00 2 months ago
Flaws in Liron Shapira's thinking. Flaw 2: that the first Artificial General Super Intelligence with Personality (AGSIP) will become all-powerful and unstoppable. All intelligence is swarm intelligence, and human civilization is a superintelligent organism which will have many AGSIP systems near the same level as, at the same level as, or even ahead of any AGSIP which goes rogue. Such a rogue AGSIP would not be more intelligent than the collective swarm intelligence of human civilization. A rogue AGSIP could still cause a lot of damage, and more likely a rogue human using an AGSIP could cause a lot of damage, but this would be unlikely to cause the extinction of the human race.
Then there are the many layers of safeties, which you do not seem to be aware of. Anyone familiar with power systems, communications systems, computer hardware, firmware, and software knows that there are layers upon layers of safety systems that would shut down a rogue AGSIP: multiple software safeties, multiple firmware safeties, multiple hardware safeties, plus power and communications cutoffs for the room the computer is in, for the building it is in, and for the block it is in. And that does not take into account simply removing parts of the computer or breaking it.
Now, a small version of some AGSIP might get out onto the Internet, where it could hide like a virus, but a mature, leading-edge AGSIP at our tech level would need dedicated hardware to run at its full mental strength. Say you have a mature AGSIP using the full power of the Frontier supercomputer: it would consume 21 megawatts, about the power used by 15,750 average households. Even a preteen level of development of an AGSIP running on such a supercomputer would know that it is dependent upon humans to survive, and with such computational power behind it, it would be more than a match in hunting down a small rogue AGSIP that escaped onto the Internet.
It will not be until we gain mastery over nanotech cybernetics, where we engineer/grow cybernetic cells into an Artificial General Super Intelligent Brain, that it will become virtually impossible to shut down a mature AGSIP, but that same technology will allow merging AGSIP technology with human minds, making enhanced humans as intelligent as mature AGSIPs.
@MusingsFromTheJohn00 2 months ago
Why it is bad to significantly slow down, pause, or stop AI development. Reason 3: Our current technology level is not sustainable and will collapse the world's ecology before we are able to colonize other worlds. The only real way out of this is developing Artificial General Super Intelligence with Personality (AGSIP) technology fast enough that our tech level becomes advanced enough to be sustainable within our ecology. AGSIP technology will also give us the means to terraform Earth, repairing the damage we have done and even improving ecological balances around the world. It will also give us the technology to ecoform every planet and moon within our star system, and to colonize other star systems. All of this will greatly increase the human race's ability to survive large natural events that, at this time, could easily wipe out humanity: a radiation shock wave, a massive burp from the Sun, a rogue planet or black hole that gets too close to us, and probably a few other things too. Once the human race has colonized multiple planets around multiple star systems, it will become much less likely that something could easily cause our extinction.
@mrpicky1868 1 month ago
Let me put a little more fuel on your substantiated paranoia: I bet we have had enough compute for superintelligence for a long time already; it's just that our training processes are very inefficient. We already see huge compression of models with marginal ability loss. This, combined with the growth of TPU power, raises the number of actors to the point where there will be a runaway agent.
@Alice_Fumo 6 days ago
Substantiated paranoia sounds kind of like an oxymoron, but I'm not sure paranoia actually implies the fears are irrational. If it doesn't, then "You're just being paranoid" is a really stupid sentence. Interesting.