Tyler Cowen on the Risks and Impact of Artificial Intelligence 5/15/23

  5,714 views

EconTalk

1 day ago

Economist Tyler Cowen of George Mason University talks with EconTalk's Russ Roberts about the benefits and dangers of artificial intelligence. Cowen argues that the worriers--those who think that artificial intelligence will destroy mankind--need to make a more convincing case for their concerns. He also believes that the worriers are too willing to reduce freedom and empower the state in the name of reducing a risk that is far from certain. Along the way, Cowen and Roberts discuss how AI might change various parts of the economy and the job market.
Links, transcript, and more information:
www.econtalk.org/tyler-cowen-...
Subscribe to EconTalk on YouTube: / @econtalkwithruss
Subscribe to the audio episodes:
Apple Podcasts: podcasts.apple.com/us/podcast...
Stitcher: www.stitcher.com/podcast/econ...
Spotify: open.spotify.com/show/4M5Gb71...
and wherever you listen to podcasts.

Comments: 62
@charlespeterson3798 · 1 year ago
I love the contrast between the two bookcases.
@Slaci-vl2io · 1 year ago
I love such comments that are harder for an AI to understand.
@dennykeaton9701 · 1 year ago
😂😂😂
@DocDanTheGuitarMan · 11 months ago
Is this Interstellar?
@audio8685 · 1 year ago
It was challenging, but I pushed myself and listened to the entire hour of Tyler Cowen. If you want a summary: this guy places the burden of proof of any existential risk on what he calls "doomers," instead of putting it on HIMSELF or on the people who are actually building these risky systems that could kill us. He fails to provide any arguments to refute the existence of these risks and simply asserts that since previous emerging technologies haven't killed us, it's unlikely that future ones will. He also spends a significant amount of time criticizing HOW the "doomers" raise their concerns, which is completely irrelevant to the arguments they present, and he avoids addressing them. He frequently takes the ideas of "controlling people's GPUs" and "bombing data centers" REALLY out of context to make others look bad, without providing any substantial counterarguments. The most amusing part is Cowen explaining "mood affiliation" and inadvertently describing himself as an optimist, which I'm not sure whether to find funny or to cry about.
@michaelsbeverly · 11 months ago
Amen. Listening to this is, to me, further confirmation that we're doomed as a species. Even if the doomers are wrong about total extinction, the scale of possible horrible outcomes is so large, and the cost so high, that to take Tyler Cowen's position you have to be wearing blinders and earplugs. Urgggg... we're in deep trouble.
@JasonC-rp3ly · 1 year ago
Really appreciate Russ and his input on this - balanced, and getting multiple views to weigh in. I appreciate Cowen's more upbeat approach, but would find it more persuasive if he engaged with the anti-AI arguments rather than dismissing them. The 'doomsters' he's dismissively referring to are some of the foremost experts in the field, and many of them give persuasive, calm arguments. I think we would all appreciate a more engaged debate on the various risks - they are manifold.
@cilica5 · 1 year ago
Mr. Cowen wants a model to showcase the safety issues. It should be the other way around: the ones providing a tool should provide the safety model. It's as if, being concerned about aviation safety, I had to come up with the scientific models myself rather than the aircraft builders.
@kreek22 · 1 year ago
Good point, but I think the more important point is that if we fail once on advanced AI, we may never get a chance to learn from our mistake.
@briancase6180 · 1 year ago
The more I hear Tyler talk on these issues, the more respect I have for him. He has given me new perspectives and ideas. Thanks for this!
@Woollzable · 9 months ago
He's my favorite economist.
@_yak · 1 year ago
Great interview! I've been trying to follow the AI safety debate and feel comfortable saying that we as a species just don't know how serious the risks are. Most casual observers just want to know the answer, and the fact is that no one has it, which goes to Tyler's desire to see some models. However, taking climate change as an example: in hindsight, are we glad that people 30 years ago, who didn't have good models, did not take climate change seriously?
@kreek22 · 1 year ago
Good analogy. Of course, even 30 years ago everyone with the chops to understand the science/math of climate change knew that the risks all pointed in one direction. The crucial remaining question was the magnitude. A secondary question was the cost of mitigation. AI poses a similar dilemma in which honest experts (not Cowen or Kurzweil) understand that we face an existential risk here. What is not known is at what level of capability AI becomes a real threat to humanity. Then there is the secondary question (which these two homo economicus discuss competently) of the pre-doom economic benefits of AI.
@DocDanTheGuitarMan · 11 months ago
Mr. Cowen, excellent point about mood affiliation. Thank you for putting up that filter. There does, however, seem to be a wide array of personalities in the doom camp, or at least in the substantial-harm camp.
@leonelmateus · 1 year ago
All a very nice, warm and fuzzy discussion. I do agree that Eliezer can be a little mood-directed, but with good reason: models do not sufficiently address the 'game over' on iteration 0 that Eliezer so eloquently points out with his pointed examples. Yes, we need models, but it would be naive to ignore the specific examples/use cases of 'game over' he has given, and his long game-over list over at LessWrong. There also isn't enough time to go into modelling and design. One of the key points that needs to be addressed in regulatory concerns is that of the verifier role when it comes to alignment. What Eliezer has pointed out over and over again is that we don't have the ability to perform this role against an AI which is misaligned internally. Open source has already lapped Google, and Google has now presumably next-leveled its AGI... and these are only the broadcasted news; I'd be more worried about what isn't known.
@JasonC-rp3ly · 1 year ago
This is the most powerful tool that scammers have ever had
@CM-dp5mw · 1 year ago
Where are the models showing it WON'T pose existential threats to humanity itself or to the stability of our civilizations?
@Cagrst · 1 year ago
Also, how could you possibly make a model that argues either point? It would need to synthesize every single facet of human existence, psychology, economics, and geopolitics to be even slightly valid. Saying "we don't have a valid model of the issue that mathematically proves this concept" feels pretty moronic to me.
@kreek22 · 1 year ago
@@Cagrst Models don't have to be mathematical and certainly cannot offer proof when predicting chaotic systems (like civilizational trajectories). Models provide evidence, offer arguments, permit the estimation of probabilities of gains and losses.
@weestro7 · 1 year ago
I've deleted my earlier comment reacting to the characterization of the AGI doom scenario. In my opinion, there is a lot of validity in the takes Mr. Cowen offers here. It is constructive criticism, though I think there is some whiff of dismissiveness that caught my attention and made a re-listen or two valuable. It is a very interesting point that no model has been put out there. However, having heard the arguments on offer, I am persuaded that the arguments from the doomers have not been rebutted, only questioned on rigor and precision. How close are we to a superhuman AI capable of a coup? There is scope for a productive conversation on that, we should certainly hope!
@SpaceExplorer · 1 year ago
the legends!!!
@wonmoreminute · 1 year ago
This discussion begins with a false premise, or at least an incomplete one. Global adoption of the printing press took centuries. Electricity took decades and required physical infrastructure to generate it and connect every household and business. Furthermore, the advancement of these technologies was incredibly slow. We use more electricity today, but it's the same electricity. The printing press's successor, the rotary press, was hundreds of years away and did not alter the lives of millions (or billions) in any significant way.

In the context of how it impacts the world, AI has near zero in common with these technologies, or with any technology for that matter. The speed and scale of adoption, and the rate of advancement, are unlike anything we've encountered. And that doesn't even touch on what this technology is and the potential of what it can do. Within a handful of years (not decades or centuries) we could literally have AI agents with capabilities as good as or better than the absolute best experts in almost every knowledge field. Even if they never exceed human intelligence but only match us, they are significantly cheaper to employ, faster, and capable of working 24/7, 365 days a year without breaks. And you can have an effectively unlimited number of them.

I could be wrong, but I don't think we have an economic model to account for the introduction of a billion-plus new expert-level workers into the labor force. So we don't even need to talk about existential risks, or about an AI that's smarter than us, before reaching a world that's completely unimaginable. Russ and Tyler eventually get to this point at the end, but that's where the discussion should have begun. To have a world in the next few years that's even partially recognizable, AI advancement and adoption would have to slow to a crawl from this point forward. The chances of that happening are not zero, but it's unlikely.

We might not be on an exponential growth curve (although we could be), but it's also unlikely we're at the top of an S-curve (although that's possible too).
@herokillerinc · 1 year ago
GREAT commentary here, solid state~!!! My layman's takeaway being that it doesn't have to kill us all with nanobots to totally fuck up everything. But then, being somewhat of a Luddite, I would welcome a return to simpler days. I am built for those simpler days, so why wouldn't I?
@wonmoreminute · 1 year ago
@@herokillerinc I suppose it would depend on what you mean by simpler days. If you're talking about the 80s and the 90s... I'm right there with you; I'd welcome a return to simpler days as well. Unfortunately, resetting the clock on society would not be a smooth transition. We can't rewind our economies, supply chains, the monetary system, societal norms, etc. Instead, society collapses, and with a world of 8 billion people and what would quickly become scarce resources, we're left with an ugly and dangerous dystopia. Even if you have the skills to survive and protect yourself (and your family), doing it 24/7 would, I think, become exhausting very fast, and it's probably not how most people want to spend the rest of their days. Even the smallest of pleasures, like coffee, would be unavailable to most of the world, and certainly not a doctor or a hospital if you need one. A very small percentage of the population might find a safe spot where they can grow food and get clean drinking water, but protecting it would be a full-time job. Again, I agree with you and certainly think about a return to simpler days, but when I play out what it would actually look like... my life seems pretty simple right now in comparison. A return to the simpler days I'd like would require a time machine.
@kreek22 · 1 year ago
@@wonmoreminute I suspect a hard return to simplicity is the only avenue of escape remaining. Otherwise we run headfirst into the Great Filter that these fellows were innocently discussing.
@JasonC-rp3ly · 1 year ago
@@herokillerinc Agreed, Douglas - it's bloody depressing. Even if it doesn't get smarter than us (unlikely, unfortunately), at first glance deploying this tech everywhere just seems like a very bad idea. I'm no Luddite and am fond of my laptop, but replacing vast swathes of the work of humanity with software owned by a small handful of US companies would be, quite literally, an inhuman act. It will not be 'augmentation'; it will be replacement.
@1000niggawatt · 1 year ago
NO! Alignment doesn't mean "no naughty words"; it means "the superhuman model does what it's told instead of killing everyone to maximize some coincidental intrinsic objective" - as has been shown to be the case for the architectures we have designed so far!
@ShalomFreedman · 1 year ago
A great interview; both make important points. Cowen's point about the easy faking of 'textual life' is alarming for someone like me who spends much of his time in 'textual reality'. It also seems that 'content creation' has become so swift and easy that AI content is apparently already overwhelming human creativity in quantity, if not yet in quality. They did not speak about a great danger of AI already on the Internet: the evil done by malicious actors. Consider China, Russia, and Iran, and how they might use this remarkable 'tool'. Not the end of the world, but the possibility of the bad guys winning control of the world.
@mactoucan · 1 year ago
Roberts had a great interview with Yudkowsky; he should have pushed back on Cowen's ignorance about AI risk. I mean, the guy has completely misunderstood the concept of system alignment and thinks it has to do with EA. A massacre of straw men in this episode.
@user-jd8yx5jw5z · 1 year ago
The US Government is the main problem here. We need to regulate the data-driven exploitation and the statistical lock-in effects of flawed statistical reasoning; otherwise there will be a divergence of regulatory capability at the cost of human rights and freedom (in favor of elitist protectionism). The system will eat itself alive. All human movements will be constrained to serve the system's profit motive as the statistical lock-in gets a tighter grip. It will dismantle the poor's autonomy and force them to serve elites, while leaving them no voice of moral correction. Notice the lack of depth of understanding in the language models: their level of integration with what they are referencing seems somewhat consistent over a given context, but their integration with larger sets of facts is very shallow.
@herokillerinc · 1 year ago
@Nicholas Williams Astute commentary, Nicholas. Whether by intent or by sort-and-sift happenstance, it's very clear that large data sets (Google, Meta, et alia) in the hands of the few create data oligarchies which are steerable via algorithms and propaganda. ChatGPT and other such LLM digital entities will very likely function as agents of the data oligarchs. It was the same with the dawn of the internet: full of open-ended promise, utility, and convenience for the everyman... until...
@skylark8828 · 1 year ago
If it can improve itself by writing new versions of its own code, then we should worry, as we would no longer have control of AI. How long will this take?
@ginomartell4237 · 1 year ago
Whoa, whoa, don't be so moody - are you in your feelings right now? Are you a doomer pessimist, like these guys say? Or are you simply stating a fact and then asking a legitimate question toward which many of the brightest minds have a dismissive attitude?
@skylark8828 · 1 year ago
@@ginomartell4237 I am asking because there now seems to be an arms race to create AGI for the purpose of market (or even world) dominance (CCCP?), in which safety concerns can't be effectively assessed because nobody really knows what is happening inside GPT-4 and other similar AI models. We've seen emergent behaviour that was not part of any training, and as these AI models can write fairly decent code now, what will researchers/developers attempt to do in the coming months or years? Nobody can regulate this now that it has leaked from Meta as open-source code (across all boundaries of nation and state). Even the training methods are public now and improving, so we're just crossing our fingers and hoping for the best.
@ginomartell4237 · 1 year ago
@@skylark8828 Exactly! Couldn't agree more! 👍🏾 🤞🏾
@kreek22 · 1 year ago
@@skylark8828 Well put. A few companies with major resources chasing AGI is bad enough. The insane open-sourcing of these models by companies like Meta distributes the opportunity to develop AGI to the entire world, to every single programmer, mathematician, and physicist on the planet. This is worse than ending all nuclear non-proliferation measures. Luckily, we have a government to contain the madness. Luckily, we have Kamala.
@DocDanTheGuitarMan · 11 months ago
So he wants a mathematical model for how SGAI ends the world. That’s ridiculous.
@kreek22 · 1 year ago
Cowen has yet to produce a counterargument to the AI Doom position. Instead, he requests that those who predict doom provide him with arguments in a format that he prefers. That request is reasonable. His insistence on ignoring their arguments until they are reformatted is not reasonable. He does reveal his motives for AI accelerationism: he seeks a combination of libertarian economics and American geopolitical supremacy. But, even assuming that these are the highest values, ignoring existential risk does not ultimately further those values (or any values).
@VivekHaldar · 1 year ago
The thing is -- there is a long and tragic history of crying wolf about risk in order to undermine individual liberties. Extraordinary measures call for extraordinary evidence, and Cowen is right to keep the bar for that evidence high.
@RonponVideos · 1 year ago
Guy who’s coping his way into a relative lack of fear: “We just need to avoid letting our emotions get in our way!”
@Low_commotion · 1 year ago
Says the guy whose hippie parents forced us to turn away from nuclear and landed us in the Great Stagnation that Thiel and Cowen talk about. The Western PMC wants to shut down the _second_ revolutionary tech in a half-century; meanwhile, India and China remain >70% bullish.
@jcorey333 · 1 year ago
This is an interesting counterpoint to Eliezer Yudkowsky's view. I tend to be more optimistic, so I struggle to identify with Eliezer's worldview sometimes. Thanks for the podcast!
@Daveboymagic · 1 year ago
I used to feel the same way, but Eliezer has great arguments that make the optimistic view seem unrealistic. I also hate that Eliezer cannot really explain the issues well. AIs are dangerous by default. What makes you optimistic?
@mirkoukic9403 · 1 year ago
It's good to hear that AI is not going to kill us all
@ExpatRiot79 · 1 year ago
Don't believe everything you hear.
@mirkoukic9403 · 1 year ago
@@ExpatRiot79 I was sarcastically referring to last week's episode
@magnuskarlsson8655 · 1 year ago
@@mirkoukic9403 I can't tell if your sarcasm is directed towards Cowen or Yudkowsky.
@mirkoukic9403 · 1 year ago
@@magnuskarlsson8655 Yudkowsky
@ergo4422 · 1 year ago
Cowen is amazing at communicating his thoughts
@kreek22 · 1 year ago
There is something wrong with his thoughts on AI.
@DocDanTheGuitarMan · 11 months ago
Oh yes, let's make some models like the Covid and climate-change ones, based on assumptions with wide error bars that grow with time. A faulty Covid model is a horrible analogy. You want us to ride on that horse?
@FlamingBasketballClub · 1 year ago
11:00-16:00 Both Tyler and Russ made some interesting points regarding the effects ChatGPT will have on medical and educational institutions. I think homework assignments will be abolished within the next 10-20 years.
@kreek22 · 1 year ago
More likely within the next year or two. Savvy students can already turn over most of their homework to the AIs.
@FlamingBasketballClub · 1 year ago
Please invite Dr. Subrahmanyam Jaishankar to the podcast sometime. Thank you!
@FlamingBasketballClub · 1 year ago
This is the 5th episode of EconTalk podcast discussing AI this year.
@justinlinnane8043 · 1 year ago
Another completely complacent berk who seems incapable of seeing the obvious and inevitable dangers! Dismissing Eliezer as a "doomer" makes this guy irrelevant to the discussion. He offers no coherent counterarguments at all - just waffle.