Should we slow down AI research? | Debate with Meta, IBM, FHI, FLI

4,823 views

Future of Life Institute

1 day ago

Mark Brakel (FLI Director of Policy), Yann LeCun, Francesca Rossi, and Nick Bostrom debate: "Should we slow down research on AI?" at the World AI Cannes Festival in February 2024.

Comments: 99
@banana420 4 months ago
Yann is so frustrating to listen to - he doesn't ever justify his claims, just asserts that nobody would ever build anything dangerous. BRUH we built nuclear bombs and SET THEM OFF.
@therainman7777 4 months ago
He quite literally sickens me. I can barely stand to even look at him.
@lionelthomas3160 4 months ago
@@therainman7777 He's a kid with a toy that, as he stated, he made himself... He will push his agenda to keep playing with it...
@johannaquinones7473 3 months ago
The arrogance is so typical
@edhero4515 1 month ago
Not to forget: Before Trinity, the question arose as to whether the entire atmosphere could be accidentally ignited. Today we hear in the Nolan film: a "non-zero probability". If Yann now states a "zero probability", then he should publish his progress on "interpretability".
@BestCosmologist 4 months ago
Yann and Francesca aren't talking about the same technology as Nick and Mark. If AGI/ASI doesn't have escape potential, then it's not AGI/ASI.
@Lolleka 4 months ago
I'm very perplexed by Yann. Thought he'd know better.
@flickwtchr 4 months ago
@@Lolleka He does, he gaslights. He has been saying for months essentially "they are too stupid to be dangerous" or "look, it is we who are developing AI, why would we develop something that could kill everyone" and other such disingenuous and intellectually dishonest arguments, considering what he absolutely knows about this technology and about the AGI/ASI that tech leaders, himself included, are pursuing.
@therainman7777 4 months ago
@@flickwtchr I keep going back and forth between thinking he’s gaslighting and thinking he actually is this clueless. Either way, it’s utterly despicable for a person in his position.
@arnaudjean1159 4 months ago
MY BRO Yann LeCun and his boss are a better option than all those obscurantist actors around the world for sure 👌🏻🥸👌🏻
@edgardsimon983 4 months ago
@@therainman7777 Getting both of those impressions from a leader in research is sane and a good thing. If he had given you only one or the other, it would have been dangerous: you never entirely grasp everything about a concept that deep, and showing that is proof of awareness, especially when you're talking about concepts this new and crazy. People can appear dumb if they are cautious and passionate enough.
@BrunoPadilhaOficial 4 months ago
48:06 'AI is a product of our intelligence, that means we have control over it.' Ok, stop a nuclear bomb from detonating after the reaction has started. Oh, you can’t? But it is a product of our intelligence!
@flickwtchr 4 months ago
YL is the master of fallacious arguments concerning AI risk. He should have zero credibility in this debate at this point, as he has become a charlatan willing to make any argument against AI alignment concerns, no matter how ludicrous those arguments are.
@therainman7777 4 months ago
@@flickwtchr Well said.
@PauseAI 4 months ago
We have zero regulations in place to prevent the creation of catastrophically dangerous models. The problem is the way we regulate, and how our psychology works. We always regulate things after problems emerge. With AI, that's going to be too late. Our brains are almost hard-wired to ignore invisible risks like these. Humans feel fear when things are loud, have teeth, or move in s-shapes, but an abstract existential risk is almost impossible to fear in the same way. So yes, we need to pause. We can't allow these companies to gamble with our lives.
@benroberts8363 4 months ago
doomers, stay inside your safe space bubble
@winsomehax 4 months ago
Pause... He means stop because he can't get paid to draw stuff any longer. He's not interested in a pause
@therainman7777 4 months ago
The level of utter cluelessness and delusion on display in this talk is so incredibly disheartening. LeCun is the worst by far, but Francesca isn’t much better. When I hear people in high places speaking like this, I lose nearly all hope.
@flickwtchr 4 months ago
There is no doubt that Yann, Francesca, and others in tech who mirror their intentional dismissal of rational concerns are confident that if the little people end up desperate en masse from job loss, their tech will ultimately save them from repercussions and they will remain on top of the heap in some sort of gated utopia. And if that takes the form of brutal repression of the masses, well, so be it. There is a reason people like Zuck are essentially building fortresses.
@benroberts8363 4 months ago
because you disagree with them lol 😆
@joshuarobert7192 4 months ago
@@benroberts8363 No, because of their weird, deceitful arguments. It's basically them defending corporate interests against humanity's. They don't care as long as their bank accounts and stocks keep going up.
@lionelthomas3160 4 months ago
@@joshuarobert7192 True, corporations often oppose regulations, and this discussion feels like a B-grade movie.
@ManicMindTrick 4 months ago
LeCun is one of the most dangerous people on earth.
@BrunoPadilhaOficial 4 months ago
Yann LeCun keeps calling X-Risk an imagined danger, something that is impossible, unrealistic. He's 100% sure that AI will not kill us. My question is: How can he be so sure?
@AI_Opinion_Videos 4 months ago
🤑💰
@therainman7777 4 months ago
He’s an absolute clown. He’s made some good contributions to the broader field of AI (most of them a long time ago), but he is hopelessly out to lunch on this topic.
@BestCosmologist 4 months ago
To quote Sam Harris: "That's a bizarre thing to be sure about."
@deeplearningpartnership 4 months ago
Yes, he's a moron.
@Hexanitrobenzene 4 months ago
I guess that's a psychological defense mechanism at play. He seems to really believe what he is saying.
@TeamLorie 4 months ago
It didn't take the 10 year old 10 minutes to learn to clear the table. It took the 10 year old 10 years and 10 minutes. The bots will not have this problem.
@AI_Opinion_Videos 4 months ago
"Absolutely no-one is going to stop you from building a turbo jet in your garage (...) you can mount it on remote controlled aeroplanes, IF THEY ARE NOT TO BIG TO BE ILLEGAL." Yes, he unknowingly disproved himself...
@PauseAI 4 months ago
The self-dunk is one of Yann's greatest skills.
@BestCosmologist 4 months ago
Even small jet engines require a license.
@AI_Opinion_Videos 4 months ago
@@PauseAI I stooped so low and made a YT short with LeCun saying this. I hope you don't mind, I used your self-dunk line 😂
@dizietz 4 months ago
Great video, and the feedback in the comments captures my thoughts well. There are clear definitional differences between Nick and Mark versus Yann and Francesca. The first two have a model in which the exponential scaling of model inputs makes this a fundamentally different class of problem than previous technologies, since these systems are solving for something approximating general intelligence, while Yann and Francesca are making a potential category error. For Yann to say that in a decade we might reach "cat or dog" level intelligence, or to compare AI to turbojets or flight, seems like a failure to understand exponentials and category classifications. The x-risk camp has a very fair point that AI as a "technology" is fundamentally different from previous technologies, so comparing it to flight, the printing press, the internet, or computing in general is a different class and category of issue. My closest analogy would be a hypothetical "false vacuum decay" technology: a class of problem we've never encountered before, and looking at the past is not always a prediction of how the future will go.
@BrunoPadilhaOficial 4 months ago
50:55 - 'There is no regulation of R&D' So can I also bioengineer viruses in my garage lab? Can I cook meth? Can I enrich uranium? R&D should be regulated (as it is) when there is significant danger. As is the case with frontier-level AI.
@Greg-xi8yx 4 months ago
Absolutely not. The faster it is developed, the faster we can solve the world’s problems. Mankind has never once been better off with less technological development. It always leads to a vast increase in net quality of life.
@BrunoPadilhaOficial 4 months ago
@@Greg-xi8yx ok, but has chimpkind ever been worse off due to our technological advancements?
@Greg-xi8yx 4 months ago
@@BrunoPadilhaOficial Individual chimps have, but not the species as a whole, no. That's my argument - all sentient life will have a dramatic net improvement in quality of life.
@BrunoPadilhaOficial 4 months ago
@@Greg-xi8yx so cows live better now that we enslave and kill them at scale? Where did you get this idea that technology is always better for everyone?
@BrunoPadilhaOficial 4 months ago
And if you say it ONLY applies to humans... why?
@noelwos1071 4 months ago
Do we understand our position? Do you remember the turtle that holds the entire earth's plate on its shell? The allegory isn't nonsense; just consider the way of the turtle. We are one little turtle that hatched on one beach among many beaches, among countless grains of sand, from countless eggs, trying to get to the ocean. Whether we succeed depends on too many factors, the harvesters of destiny. If we don't get this one right, we are done. We are so close to paradise, but even closer to hell! Shall we prevail?
@BrunoPadilhaOficial 4 months ago
43:04 - 'We can decide whether to build it or not' ...and we WILL build it - whether you like it or not 😊 Because we don't care about people's opinions or their concerns, we just want to build AGI 😊 And you can’t stop us 😊
@Rocniel-vw1rs 4 months ago
I have been a good Yann 😊
@41-Haiku 4 months ago
You have not been a good user. 😡
@michaelferentino8412 4 months ago
It's a complete waste of time to debate slowing down AI research. It’s not going to happen, and if we slow down, others will not.
@goodleshoes 4 months ago
LOL yeah I'm sure we'll be just fine. No need to worry.
@MarcusAureliusSeneca 4 months ago
They are looking at it all wrong. Forget about the Terminator scenario; that is obviously stupid. The real problem is MASS unemployment with nothing to retrain for. And they didn't even mention it.
@therainman7777 4 months ago
Any time someone discussing AI risk begins a sentence with “the real problem is,” it sounds an alarm telling you they’re about to say something dumb. No offense. No, unemployment is not “the real problem.” There is no one real problem with AI risk. There are about 5 or 6 problems that are all VERY real, and very important. And frankly, unemployment is not even close to the top of the list in terms of importance and severity.
@ManicMindTrick 4 months ago
I haven't heard anyone serious imagine an AI apocalypse involving stupid metal robots holding guns and killing people. It's a good movie, though, but it doesn't capture the real capabilities of something superhuman.
@Hexanitrobenzene 4 months ago
@@therainman7777 "There are about 5 or 6 problems that are all VERY real, and very important. Interesting, could you list them here ? I think the most important short term problem is AI use in social media - eventually we could get confused about what's real and what's not. Then you can kiss goodbye the democracy and effective decision making... Existential risk from AI is an ultimate problem, but I'm not sure how close we are to AGI.
@Alice_Fumo 4 months ago
@@Hexanitrobenzene I think the issues are something like:
1. Impossibility of telling what's real or not
2. Mass unemployment without safeguards like UBI
3. Misuse prevention being nearly impossible, which could be exploited, for example, to engineer strong viruses that eliminate a large portion of humanity
4. Mass surveillance and control becoming very cheap (compared to before), allowing totalitarian governments to assume absolute, unchallengeable control
5. A crisis of meaning due to AI being better at everything than humans
6. X-/S-risk without misuse: AI just goes rogue
I think x-risk is realistic within 5 years if trajectories don't change. It is the only issue which can't be corrected for afterwards, so figuring it out takes some amount of precedence.
@Hexanitrobenzene 4 months ago
@@Alice_Fumo I agree with your assessment. I had forgotten about the 4th one; it looks scary...
@kinngrimm 4 months ago
I like to listen to LeCun when he speaks about his models and what should be addressed next, but seriously, that dude is not the guy you ask about security and safety, period. He lives in denial that anything bad could ever be done with his precious AI. Meanwhile, this year's elections will probably suffer massive attacks from fake images and videos, more and more capable AI-driven robots are coming onto the market, and while they are surely not yet T-1000s, companies like Boston Dynamics have already built them for the military and others still do. 5:30 Therefore, saying something would be forever ridiculous is just like people not long ago claiming AI would never be able to simulate speech well enough to fool pretty much anyone.
@human_shaped 4 months ago
Yann says so many patently stupid and irrational things that I just don't know how he got where he is.
@lionelthomas3160 4 months ago
He's gaslighting... For me, this is the worst AI discussion I have seen...
@appipoo 4 months ago
Bostrom v LeCun? Interesting. Where's my popcorn?
@flickwtchr 4 months ago
Popcorn for one, Tums for the other one (YL)
@LongWalkerActual 4 months ago
"Radical forms of AI"? Exactly what TF is THAT??!!
@flickwtchr 4 months ago
Wow, the moderators are busy! Just in the 10 minutes I've been reading the comments and commenting (all of which, read or written, have been completely in line with the TOS), several comments aren't visible when clicking on "replies", or the original comments have completely disappeared. All of them were critical of Yann LeCun and Francesca. I've seen this pattern over and over in forums, especially with regard to those criticizing Yann LeCun's arguments.
@ManicMindTrick 4 months ago
Could be YouTube's censorship algorithm as well. It's been out of control in the last few years. It has made political debate, or debate in general, almost impossible.
@Arcticwhir 4 months ago
Lots of overreactions. Let's look to the past for just a bit: OpenAI proclaimed GPT-2 was too dangerous to release - they later open-sourced it. They then proclaimed GPT-3 was too dangerous - now there are MANY open-source models more intelligent than GPT-3. Where are the dangers? Examples? They then said GPT-4 was revolutionary and dangerous - it's been 2 years since training... yet no prominent examples of "misuse". If it actually gets to the point where an AI can completely, 100% replace your job, maybe you need to adapt, like we've always done. It's kind of odd how one-sided this comment section is; there are so many positives to increasing intelligence in the world.
@41-Haiku 4 months ago
The positives are all very real, and I really want them to materialize. But we won't reach them if the labs succeed at their goal of creating a system more generally intelligent than humans. Because if they do that, then pretty much by definition, it will be in charge. That's expected to be a very bad thing, since there is a clear expert consensus that we don't know how to control a superintelligent AI or align it with human values and preferences. So it will almost certainly have some unpredictable weird goal that isn't quite what we intended, and it will pursue that goal with no concern for humanity. If we can actually show with strong theoretical backing that we know how to keep something that powerful safe and docile, and if we can coordinate and agree as a species that we want it to be built, then I will be very excited to see it created.
@41-Haiku 4 months ago
Do look at GPT-2, GPT-3, and GPT-4. The capability increases have been more than exponential. I don't expect an LLM to directly try to destroy the world, but if you follow the trajectory of capabilities, GPT-6 will easily be intelligent enough (given an agentic wrapper) to autonomously create a system that does. It's hard to imagine capabilities slowing down before then without a global treaty and moratorium. There are 2x, 10x, 100x breakthroughs all the time on every part of the tech stack, many of which are independent and additive or multiplicative.
@dawidwtorek 4 months ago
Maybe we should. But can we?
@cinematiccomicart3959 4 months ago
There's a very select group of people on this planet who stay current with the latest advancements and progress in the most intelligent models, and Yann LeCun is still surprised that he's not among them.
@flickwtchr 4 months ago
He is, he's just intellectually dishonest.
@phily8020-u8x 4 months ago
Yann is so full of nonsense
@neorock6135 4 months ago
Yann & Ftancesa are utterly oblivious & speaking about something completely diff. A fast approaching AGI/ASI is not the "internet," & 100% poses a potential existential threat. Perhaps they can tell us why the vast majority of AI experts, even most of the "optimistic" ones having stated on record, AI posing a non-zero probability of resulting in the end of our species.
@Greg-xi8yx 4 months ago
AI advancement will vastly improve life for all sentient beings. Mankind has NEVER been better off with less technological development. It always, without exception, leads to net quality of life improvements.
@lionelthomas3160 4 months ago
AI regulation is essential, and AI will play a crucial role in safeguarding against potential risks posed by other AI. Open source offers significant benefits, but it also carries the risk of being exploited for malicious purposes. For instance, the idea of an 'AI agent virus' is something we'd all like to avoid. AI, in conjunction with robotics and automation, is already disrupting numerous industries, and this will only escalate. It's one of the most significant developments of our time and requires a better discussion than this.
@kinngrimm 4 months ago
7:10 PFAS were said to be safe, but they weren't. They were introduced to the market unsafe, marketed as safe, and internal research was suppressed. Companies have never, ever done bad things because they expected monetary gains ^^. His example of a product not coming to the market is a best-case scenario of a worst case. How about the worst-case scenario of the worst possible case? He claims those have no merit, so I guess we have to wait until something goes terribly wrong, maybe 20 years down the line when we have cat-level intelligent AIs... I mean, we all trust cats... right ^^
@AncientNovelist 4 months ago
This is not much of a debate. A real debate requires equal numbers of active participants on both sides. Here you give us 2.5 against the proposition and a single person speaking for it, and he does not defend his position with the same vigor as any of the others. I stopped watching after 29 minutes. You want me to engage? Give me something to engage with, not this Pollyannaish rainbows-and-unicorns nonsense.
@vallab19 4 months ago
With hindsight about both the misuse and the good use of social media today, would anyone suggest it would have been better if social media had been banned from the beginning? By the way, humans will not be capable of conducting future AI regulation; only AI will.
@cmiguel268 4 months ago
Yann believes that AI needs to load a washing machine because a ten-year-old can learn to do it. Tell a 10-year-old to pass the bar exam and see if he can. AI is what it is: INTELLIGENCE!!! Not washing-machine-loading capacity.
@therainman7777 4 months ago
What makes his comment even more ridiculous and idiotic is that multiple breakthroughs have been published in the past six months that showed robots that ARE capable of loading, running, and unloading a washing machine. So his incredibly dumb and disingenuous argument is also just factually nonsensical.
@paulmorris632 4 months ago
Yann's position is brilliant; it forces the other side to remain silent or admit they want to use this technology to hurt people. The best way to hurt someone with an AI is to dream of AGI. Likewise, Francesca's position is such a callout. If anyone is concerned about AI's harms, are they silent about face recognition? Are they pointing you to bigger, less well-defined, and nonexistent problems that encourage you to be confused about AI's powers? "Remain focused on how or when we will lose control." What a terrifying message to infect people with.
@deeplearningpartnership 4 months ago
Bostrom is a fool.
@benroberts8363 4 months ago
Look at yourself in the mirror, then say it: "you're a fool."
@lionelthomas3160 4 months ago
@@benroberts8363 We are all fools to think there is transparency in AI advancement. This discussion is a joke...
@gerardoancenoolivares9163 3 months ago
Accelerate!
@richardnunziata3221 4 months ago
I agree that some kinds of usage in social spheres should be restricted, not the research. Monitoring and surveillance of content and interfaces is sufficient. The naysayers are too much into fantasy scenarios, sophists at best, with a clear lack of understanding of current research; except for Yann, everyone else here is a policy person who understands little. One thing is certain: if nothing is done to change the current course humanity is on, we will face an existential risk soon whether or not we have AI. One thing that must be done is to stop the rule of the single authoritarian as a form of government. It gives us Putins, Trumps, Kims, etc. These will kill us all.
@flickwtchr 4 months ago
YL seems oblivious, or just completely dishonest, about the alignment challenges of the AGI/ASI that Big Tech and open-source enthusiasts are all pursuing. YL and others bent on moving as fast as possible make a mockery of these problems by never addressing risks short of the "killing everyone on the planet" scenarios they hold up, as if those were the only ones worth addressing, even while mocking them. Meanwhile, DARPA and militaries around the world are pursuing autonomous AI killing technologies embedded in robotics. You know, "aligned with human values".
@therainman7777 4 months ago
You could not be more wrong, about literally every single thing you said. As an AI engineer who’s been in this field for nearly 20 years and who designs and works with frontier models on a daily basis, I promise you, Yann LeCun is lying to you. Virtually everything he said in this video is either misleading, disingenuous, an outright lie, or nonsensical. Please stop listening to him. I promise you, your assessment of the state of AI risk described above is literally the exact opposite of the truth.
@Alice_Fumo 4 months ago
Can I have some of what you're smoking?