PRE-DEBATE • Pro: 67% • Con: 33%
POST-DEBATE • Pro: 64% • Con: 36%
RESULT • Con wins by a 3% gain
@marcosrodriguez2496 1 year ago
Wait, if the initial distribution was 67/33 (and assuming that whether someone is likely to change their mind does not depend on which initial group they're in), the expected number of people changing from Yes to No is twice as high as the number changing from No to Yes. If the initial distribution was 100/0, Yes could never win the debate.
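A quick sketch of that arithmetic (the switch probability p below is hypothetical, chosen so the numbers land on the observed 64/36 split):

```python
# Minimal sketch: both sides switch with the same probability p.
pro, con = 0.67, 0.33   # pre-debate shares
p = 0.09                # hypothetical per-person switch probability

pro_post = pro * (1 - p) + con * p
con_post = con * (1 - p) + pro * p
print(f"Post-debate: Pro {pro_post:.0%}, Con {con_post:.0%}")  # Pro 64%, Con 36%

# Pro loses 0.67*p and gains only 0.33*p, a net loss of 0.34*p,
# so under equal switch rates Con always gains ground from 67/33.
```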
@74Gee 1 year ago
Please post the numbers.
@ToWisdomThePrize 7 months ago
Isn't this inaccurate, given that the ending poll couldn't be conducted without glitches?
@74Gee 1 year ago
1:23:00 Melanie Mitchell thinks companies that use AI will not outperform those that don't, then picks a single example of AI hallucination as justification. I'm beginning to think she hasn't actually researched AI at this point. She's not even discussing the questions any more; she's just trying to prove a point, quite badly.
@CodexPermutatio 1 year ago
You're wrong about her, friend. She is one of the best AI experts in the world. To get out of your ignorance (and incidentally understand her point of view a little better) you only have to read her books "Artificial Intelligence" and "Complexity". She has written many more books, but those two alone have enough substance. These turn-based, moderated discussions are nice, but they're too short and can hardly give these topics the depth they deserve.
@74Gee 1 year ago
@@CodexPermutatio Of course I know she is actually an expert, but her blinkered view on a) what constitutes an existential threat, b) how AI could facilitate this possibility, c) how she dismisses the notion entirely, and d) how she thinks even considering the idea of AI danger would detract from combating the "present threats of misinformation" - all point to an irrational personality. I pondered suggesting she has ulterior motives but stopped short, merely suggesting she hasn't researched AI (dangers). Taking only point D for brevity: she sees misinformation in elections as something so dangerous that AI safety should not take up any resources whatsoever. Surely if the AI of today can overthrow a governmental system, AI in 20 years or so could do something worse. And that alone could put us in a position we are unable to return from - like putting a pro-war US candidate in power, bringing us closer to a nuclear winter scenario - an existential risk. These are non-zero and also non-infinitesimal odds even with today's AI. AGI is not a prerequisite for existential risk.
@jmanakajosh9354 1 year ago
@@74Gee The whole time she mentions other things to talk about that are more pressing, but if she could give examples of them I would've loved that. We are facing population collapse, another major pandemic, and climate change; if you can give me a reason alignment research *wouldn't* help these other issues, I'd be all ears. But all of these other problems are also problems of alignment and of failed incentives - it just happens the incentives are human and not machine.
@74Gee 1 year ago
@@jmanakajosh9354 It's clear you care about the state of the world and the direction we're heading in. AI alignment research certainly would help with addressing any problems that AI could potentially help with - the last thing we want is solutions that make matters worse. It's not like there's a shortage of resources - Microsoft's stock hit a record after executives predicted $10 billion in annual A.I. revenue (Microsoft shares climbed 3.2% to a record Thursday and are now up 46% this year) ...so it's not like doubling AI alignment research with additional hires is going to significantly affect the bottom line of Microsoft, or likely anyone else in the field.
@JohnMoran 1 year ago
Munk Debates always seem to choose someone extra annoying to fill that role.
@lwmburu5 1 year ago
I respect Yann's work; he is an amazing thinker. But with the "Of course if it's not safe we're not going to build it, right?" argument he pointed a shotgun at his foot and pulled the trigger. The argument is limping... painfully.
@jmanakajosh9354 1 year ago
You could hear the audience moan. I once saw Daniel Dennett point out that arguments where we say "right?" or "surely" (I think his example was "surely") are not arguments at all; they're OBVIOUS assumptions. Everyone does this to some degree, but it's hard to watch him doing it in his own field of expertise. It's terrifying, honestly.
@leslieviljoen 1 year ago
Yes, after twice hearing Max say that not everybody is as nice as Yann.
@macn4423 1 year ago
He's been doing that many times.
@dillonfreed 1 year ago
He's surprisingly daft
@lwmburu5 1 year ago
@@dillonfreed Disagree a bit 😁 He's just experiencing a Gary Marcus-type "out of distribution" failure mode 😁 - unable to step out of his own mind. Actually, it is the fact that he's ferociously intelligent that makes this failure particularly dangerous.
@paigefoster8396 1 year ago
52:39 A robot vacuum doesn't have to WANT to spread crap all over the floor, it just has to encounter a dog turd and keep doing its "job."
@PepeCoinMania 1 year ago
It doesn't work for machines that can think.
@therainman7777 1 year ago
@@PepeCoinMania You have no idea what you're talking about. Define what you mean by "think" in clear and precise terms, explain why an ability to think would ensure that its goals stay perfectly aligned with ours, and explain how you know machines will one day "think" according to your definition.
@vslaykovsky 1 year ago
I like the argument about large misaligned social structures in the debate on AI safety: humanity created governments, corporate entities, and other structures that are not really aligned with human values, and they are very difficult to control. The growth of the food and drug industries resulted in an epidemic of obesity and the diseases it causes. Governments and financial systems resulted in huge social inequalities. These structures are somewhat similar to AI in the sense that they are larger and smarter than any individual human, and at the same time they are "alien" to us, as they don't have emotions and think differently. These structures bring us a lot of good but also a lot of suffering. AI will likely be yet another entity of this kind.
@genegray9895 1 year ago
At one point, someone - I believe it was Mitchell, but it might have been LeCun - said that corporations do not pose an existential threat. I thought that was a pretty absurd statement given we are currently facing multiple existential threats due to corporations, and more than one of these existential threats was brought up during this very debate. It's also worth noting that corporations are limited to the vicinity of human intelligence due to the fact that they're composed of agents that are no smarter than humans. They are smarter than any one human, but their intelligence is still bounded by human intelligence. AI lacks this limitation, and its performance is scaling very quickly these days. Since 2012 the performance of state-of-the-art AI systems has increased by more than 10x every single year, and there is no sign of that slowing down any time soon.
@PazLeBon 1 year ago
not smarter at all, richer maybe, intellect has little to do with it
@kreek22 1 year ago
@@genegray9895 "Since 2012 the performance of the state of the art AI systems has increased by more than 10x every single year" Is there a source for this claim? My understanding is that increases come from three directions: hardware, algorithms, money. Graphics cards have managed ~40%/year, algorithms ~40%/year. Every year more money is invested in building bigger systems, but I don't think it's 5x more per year.
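For what it's worth, here is the arithmetic the comment implies (a rough sketch, taking the ~40% figures at face value):

```python
# Rough sketch of the scaling arithmetic above, assuming the
# ~40%/year gains for hardware and algorithms are taken as given.
hardware   = 1.4   # ~40%/year from faster chips
algorithms = 1.4   # ~40%/year from better methods
target     = 10.0  # claimed overall growth per year

spending_growth = target / (hardware * algorithms)
print(f"Implied yearly growth in compute spending: ~{spending_growth:.1f}x")
# ~5.1x - roughly the "5x more per year" figure the comment doubts.
```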
@genegray9895 1 year ago
@@kreek22 OpenAI's website has a research section, and one of the articles is titled "AI and Compute". YouTube automatically removes comments containing links, even when links are spelled out.
@kreek22 1 year ago
@@genegray9895 Thanks. That study tracks 2012-18 developments, early years of the current paradigm. Also, they're calculating compute, not performance in the sense of superior qualitative output (though the two tend to correlate closely). They were right to calculate the compute per model. The main cause of the huge gains is the hugely increased parallelism.
@RonponVideos 1 year ago
If I saw these anti-doom arguments in a movie, I’d think the writers were lazily trying to make the people look as naive as possible. But nope, that’s actually what they argue. Sincerely. “If it’s dangerous, we won’t build it”. Goddamn.
@netscrooge 1 year ago
Great comment. Thanks!
@xXxTeenSplayer 1 year ago
No kidding! I couldn't believe that these people have any knowledge of AI, let alone be experts! How incredibly naive these people are! Scary af!!!
@trybunt 1 year ago
Yeah.. seems ridiculously optimistic and dismissive. I understand that it doesn't seem probable AI will pose a serious threat, but to act like it's impossible because we will always control it, or it will innately be good? That just seems foolish. I'm pretty optimistic, I do think the most likely outcome is positive, but it was hard to take these people seriously. It's like getting in a car and one passenger is saying "could you please drive safely" and these people are in there saying "why would he crash? That's just silly, if he is going off the road he can simply press the brake pedal, look, it's right there under his feet. I guess we should worry about aliens abducting us, too?"
@joehubris1 1 year ago
@@trybunt You forgot their other big argument: "There are much more pressing dangers than AIpocalypse, and these speculative scenarios draw attention away from the true horrors we are about to visit upon huma--I mean ... everything you guys brought up is far away, let's all just go back to sleep."
@agrandesubstituicao 1 year ago
@@trybunt They have big pockets behind this; full AI regulation isn't good for their business.
@RazorbackPT 1 year ago
I was really hoping the anti AI Doom proponents had some good arguments to dissuade my fears. If this is the best they got then I'm even more worried now.
@KorakBrosepf 1 year ago
What kind of argument are you searching for? The AI Doomers have poorer arguments, but because this is an issue of unknown-unknowns, they're winning the rhetorical 'battle.' I can't guarantee you AI won't kill us all unless I could demonstrate that AI cannot physically do so (say there's some fundamental law in the universe that prevents such a thing). It's hard to prove this, because we are still ignorant of so many different things about AI and (quantum and traditional) computing in general.
@tmstani23 1 year ago
💯
@MetsuryuVids 1 year ago
@@SetaroDeglet-Noor Yes. But GPT-4 isn't an existential threat. It is not AGI. AGI poses an existential threat. That's what Bengio and Tegmark are arguing, not that GPT-4 poses an existential threat. GPT-4 poses risks, but they are not existential. I think Melanie can't think of existential threats of AI because she is only considering current AIs, like GPT-4, so let's not do that. We need to consider future AI - AGI - which will indeed be able to do things that we cannot prevent, including things that might go against our goals if they are misaligned, and in those cases, they could cause our extinction. I'm a bit disappointed that they didn't talk about instrumental convergence explicitly; they just mentioned it vaguely, without focusing on it much. I wish someone like Yudkowsky or Robert Miles could have been there to provide more concrete technical examples and explanations.
@hipsig 1 year ago
@@MetsuryuVids "I'm a bit disappointed that they didn't talk about instrumental convergence explicitly." So true. As a layperson, that was probably the easiest concept for me in understanding why AI might end up getting rid of us all without actually hating us or passing moral judgement on us, or any of that human stuff. But again, there was this rather prominent podcaster, who I still think is a smart guy in some ways, who just couldn't see how AI will want to "self-preserve." And to your list I would definitely add Roman Yampolskiy.
@jackielikesgme9228 1 year ago
Same. The "it will be like having a staff of subservient slaves that might be smarter than us... it's great working with "people" smarter than us" line - phew 😬 That was a new one, and not a good new one.
@meatofpeach 1 year ago
Tegmark is my spirit animal
@besratbekele1032 1 year ago
Yann LeCun tries to soothe us with naïve, unnuanced promises, as if we were children. If these are the kind of people at the forefront of AI research at corporate labs driven by a clear vested interest in profit, it seems like things are about to get uglier than I've even imagined.
@greenbeans7573 1 year ago
Meta is the worst because it is led by Yann LeCun, a literal retard who thinks safety is a joke. Google is much better, and Microsoft almost as bad. - Meta obviously leaked Llama on purpose - Google was not racing GPT-equivalent products until Microsoft started - Microsoft didn't even do proper RLHF for Bing Chat
@ts4gv 1 year ago
nail on the head. it's about to get gnarly. and then it will keep getting worse until we die
@blahblahsaurus2458 1 year ago
They didn't even discuss fully autonomous military drones, and how these would change war and the global balance of power
@mernawells7839 1 year ago
Mo Gawdat said he doesn't know why people aren't marching in the streets in protest
@Dababs8294 1 year ago
Couldn't have said it better myself
@yipfaitse6738 1 year ago
I think the cons just convinced me to be more concerned about AI existential risk by being this careless about the consequences of the technologies they build.
@familyshare3724 1 year ago
Immediately killing 1% of humanity is not an acceptable risk?
@therainman7777 1 year ago
Smart response, I fully agree. It’s alarming.
@MM-cz8zt 1 year ago
I run an international ML team that implements and develops new routines. It is not accurate to say that we are careless; it's simply that we don't have the right systems or the right techniques to develop AGI. There are many more pressing issues about bias, alignment, safety, and privacy that are pushed to the wayside when we imagine the horrors of AGI. We have shown that LLMs cannot self-correct reasoning. Whatever tech becomes AGI, it's not LLMs. Secondly, we won't ever suspend AI development. There are too many national interests at stake; there will never be a pause. Period. It is the perspective of our military that our geopolitical adversaries would capitalize on a pause to try to leapfrog us. So imagining the horrors of what could be possible with AGI is the wrong thing to be focused on. AI has the potential to harm us significantly in millions of other ways before taking over society. A self-driving car or delivery robot is millions or billions of times more likely to accidentally harm you before a malicious AGI ever will.
@KurtvonLaven0 1 year ago
@@MM-cz8zt, Metaculus has the best forecast I have found on the topic, and it currently estimates our extinction risk from AGI at around 50%.
@KurtvonLaven0 1 year ago
@@MrMichiel1983, I encourage everyone to find a better forecast themselves. The reason you propose is obviously stupid; you will have more success finding the truth if you stop strawmanning people you disagree with. I was looking for the most convincing forecasting methodology, and for one thing they at least publish their own track record, which few can claim. For another, they crowd-source the forecasts and weight them by the track record of the individual forecasters. Also, their forecast of the arrival date of AGI (~50% by 2032) aligns closely with most other serious estimates I have found (2040/2041).
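For anyone curious, a toy sketch of what track-record weighting can look like (the numbers and scoring scheme here are made up; Metaculus's actual aggregation is more sophisticated):

```python
# Hypothetical track-record-weighted aggregation of forecasts.
# Forecasters with better historical accuracy get more weight.
forecasts = [0.30, 0.55, 0.70]  # individual P(event) estimates
skill     = [0.9, 0.6, 0.3]     # made-up track-record scores

aggregate = sum(p * w for p, w in zip(forecasts, skill)) / sum(skill)
print(f"Aggregate forecast: {aggregate:.0%}")  # 45%
```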
@74Gee 1 year ago
"If it's not safe we're not going to build it" Yann LeCun, what planet do you live on?
@CodexPermutatio 1 year ago
He lives on the planet of the AGI builders. A planet, apparently, very different from the one inhabited by the AGI doomers. I, by the way, would pay more attention to builders than doomers. Being a doomer is easy (it doesn't require much, not even courage). Being a builder, on the other hand, requires really understanding the problems you want to solve (also, it implies action).
@RonponVideos 1 year ago
“If the sub wasn’t safe I wouldn’t try to take people to the Titanic with it!” -Stockton Rush
@grahamjoss4643 1 year ago
@@CodexPermutatio we have to pay attention to the builders because they implicate us all.
@OlympusLaunch 1 year ago
@@CodexPermutatio Your points are valid, but you underestimate the level to which human ego creates blind spots. Even very smart people develop attachments to the things they pour their energy into. This makes them biased when it comes to potential risks.
@jackielikesgme9228 1 year ago
Omg, this part is making me so stabby!! Would we build a bomb that just blows shit up? 🤦♀️ Yes, yes we did, ffs; we do it, we are still doing it. This is not a relaxing sleepy podcast at all lol
@justinleemiller 1 year ago
I'm worried about enfeeblement. Even now society runs on systems that are too complex to comprehend. Are we building a superintelligent parent and turning ourselves into babies?
@Imcomprehensibles 1 year ago
That's what I want
@jmanakajosh9354 1 year ago
I heard somewhere it was shown that the YouTube algorithm seems to train people into liking predictable things so it can better predict us. But what Mitchell misses is that this thing is like the weather in Chicago - it's in constant flux. If we say "oh, it can't do this," all we're saying is "wait for it to do this." And man, the way the Facebook engineer just pretends everyone is good and he isn't working for a massive surveillance company is shocking.
@JohnMoran 1 year ago
A parent conceived and purposed by sociopathic humans.
@phillaysheo8 1 year ago
@@Imcomprehensibles Yeah, I want it too. The chance to wear nappies again and face ridicule is gonna be awesome.
@kinngrimm 1 year ago
Rightfully so. There are studies on IQ developments worldwide which showed that we are on a downward trend, and the two major reasons named are environmental pollution and the use of smartphones. There are now people who don't know how to use a map anymore, ffs - not even getting into the psychological misuse of triggering people's endorphin systems to get them hooked on Candy Crush. When we reach a state where we have personal AGI agents that we can talk to and give tasks to solve for us, we lose capabilities ourselves. Should we hand government functions, control over companies, medical procedures, and whatnot to AGI, to the point where we no longer train doctors and have no politicians in the decision process for how we are governed, then at some point - even if it was not initially a rogue AGI, it could become one later - we are truly fucked, simply by not being able to do these things anymore. Imagine we then no longer have books, only digital data access, to regain these capabilities, but we are blocked from using it, and so on. So yes, you are spot on with that concern.
@jamesatkins7592 1 year ago
It's pretty cool having high-profile technical professionals debate each other. You can sense the mix of respect, ego, and passion they bring out of each other in the moment. I'm here for that vibe 🧘♂️
@dovbarleib3256 1 year ago
They are 75 to 80% Leftists in godless leftist cities teeming with reprobates, and none of them revere The Lord.
@agrandesubstituicao 1 year ago
I could only see one side as professionals; the others are scumbags.
@keepcreationprocess 1 year ago
SSSSSOOOOOOOO, what is it exactly you want to say?
@bryck7853 1 year ago
@@keepcreationprocess I'll have a hit of that, please.
@MetsuryuVids 1 year ago
Melanie and Yann seem to completely misunderstand or ignore the orthogonality thesis. Yann says that more intelligence is always good. That's a deep misunderstanding of what intelligence is and what "good" means. Good is a matter of values, or goals. Intelligence is orthogonal to goals. An agent with any amount of intelligence can have any arbitrary goals. They are not related. There are no stupid terminal goals, only stupid sub-goals relative to terminal goals. Bengio briefly mentions this but doesn't go very deep into the explanation. Melanie mentions the superintelligent "dumb" AI, thinking it's silly that a superintelligence would misconstrue our will. That is a deep misunderstanding of what the risks are. The AI will know perfectly well what we want. The orthogonality thesis means that it might not necessarily care. That's the problem. It's a difference in goals or values; it's not that the superintelligence is "dumb". Also, they don't seem to understand instrumental convergence. I would love to have a deep discussion with them and go through every point, one by one, because there seem to be a lot of things that they don't understand.
@wonmoreminute 1 year ago
He also doesn't mention "who" it's good for. Historically, many civilizations have been wiped out by more advanced and intelligent civilizations. And surely, competing nations, militaries, corporations, and possibly socioeconomic classes will have competing AIs that are also not aligned with the greater good of ALL humans.
@MetsuryuVids 1 year ago
@@wonmoreminute I'd be happy if even an evil dictatorship manages to actually align an AGI to some semblance of human values. Not ideal, but at least probably not the worst case scenario. The thing is that we currently don't even know how to do that, so we'll probably go extinct, hence the existential threat.
@OlympusLaunch 1 year ago
Very well put. Thanks for the read; I fully agree.
@jamesatkins7592 1 year ago
I assumed LeCun meant to imply the broad progress of positive actions overwhelming negative ones, rather than necessarily just the specific case of how controlled and purpose-driven an AI would be.
@ChrisWalker-fq7kf 1 year ago
The problem is the orthogonality thesis is dumb. It requires a definition of intelligence that is so narrow that it doesn't correspond to anything we humans would understand as intelligent. If that's all intelligence is (the ability to plan and reason), why would we be scared of it anyway? There is a sleight of hand going on here. We are invited to imagine a super-smart being that would have intellectual powers beyond our imagining, that would be to us as we are to an insect. But when this proposed "superintelligence" is unveiled, it's just a very powerful but completely dumb optimiser.
@gaussdog 1 year ago
For humanity's sake... I cannot believe the arguments that the pro-AI group makes... As much as I am a pro-AI person, I understand at least some of the risks, and will admit to at least some of the risks... If they cannot admit to some of them, then I don't (and CANNOT) trust them on any of them.
@ventureinozaustralia7619 3 months ago
The debate wasn't about risk, it was about existential risk; is that so hard to understand? Did you even watch this??
@gaussdog 3 months ago
@@ventureinozaustralia7619 "We would never give unchecked autonomy and resources to systems that lacked these basic principles..." or whatever she said, I remember lol. And here we are a year later, and AI is now, quite obviously, in my opinion, here to do whatever the fuck it wants, however the fuck it wants, for the rest of eternity, with zero checks and zero balances, except in their puny imaginations.
@vincentcaudo-engelmann9057 1 year ago
LeCun wants to endow AI with emotions AND make them subservient... Anyone know what that is called?
@ikotsus2448 1 year ago
Slavery. Add in the superior intelligence part and now it is called hubris.
@Nico-di3qo 1 year ago
Emotions that will make them desire to serve us, so everything's good.
@andrzejagria1391 1 year ago
@@Nico-di3qo That's just slavery with extra steps.
@MetsuryuVids 1 year ago
I disagree with LeCun in that he thinks the alignment problem is an easy fix, that we don't need to worry and "we'll just figure it out", or that "people with good AIs will fight the people with bad AIs", and many, many of his other takes. I think most of his takes are terrible. But I do think this one is correct, in a way. No, it's not "slavery*". The "emotions" part is kind of dumb - it's a buzzword, and I will ignore it in this context. Making it "subservient" is essentially the same thing as making it aligned to our goals, even if it's a weird way to say it. Most AI safety researchers would say aligned; not sure why he chose "subservient". So in summary, the idea of making it aligned is great; that's what we want and what we should aim for, and any other outcome will probably end badly. The problem is: we don't know how to do it. That's what's wrong with Yann's take - he seems to think we'll do it easily. Also, he seems to think the AI won't want to "dominate" us because it's not a social animal like us. He keeps using these weird terms; maybe he's into BDSM? Anyway, that's another profound mistake on his part, as even the moderator mentions. It's not that the AI will "want" to dominate us, or kill us, or whatever. One of the many problems of alignment is the pursuit of instrumental goals, or sub-goals, that any sufficiently intelligent agent would pursue in order to achieve any (terminal) goal it wants to achieve. Such goals include self-preservation, power-seeking, and self-improvement. If an agent is powerful enough and misaligned (not "subservient") to us, these are obviously dangerous, and existentially so. *It's not slavery because slavery implies forcing an agent to do something against its will. That is a terrible idea, especially when talking about a superintelligent agent. Alignment means making it so the agent actually wants what we want (is aligned with our goals) and does what's best for us. In simple words, it's making the AI friendly to us. We won't "force" it to do anything (not that we'd be able to, either way); it will do everything by its own will (if we succeed). Saying it's "subservient" or "submissive" is just weird phrasing, but yes, it would be correct.
@jmanakajosh9354 1 year ago
@@MetsuryuVids I think it's shocking that he thinks it possible to model human emotions in a machine at all (I'd love to learn more about that; it gives genuine hope that we can solve this) but then falls on his face... and so does Melanie, when they say it NEEDS human-like intelligence. That's the equivalent of saying planes need feathers to fly. It's a total misunderstanding of information theory, and it's ostrich-like, because GPT-4 is both intelligent and has goals. It's like they're not paying attention.
@ALFTHADRADDAD 1 year ago
I've actually been quite optimistic about AI, but I think Max and Yoshua had strong arguments.
@andybaldman 1 year ago
Only the dumb people are positive about AI
@riccardovalperga3473 1 year ago
No.
@joehubris1 1 year ago
As long as AI remains safely ensconced in Toolworld, I'm all for it.
@bendavis2234 1 year ago
Same here, I think that they did better in the debate and were more reasonable, although my position is on the optimistic side.
@stonerscience2199 1 year ago
it seems like the other guys basically admitted there's an existential risk but don't want to call it that
@vincentcaudo-engelmann9057 1 year ago
LeCun seems to have a cognitive bias of abstracting the specific case of Meta's development to everything else. Also, he outright understates current GPT-4 intelligence levels. Bro, is it worth your paycheck to spread cognitive dissonance on such an important subject... smh
@jmanakajosh9354 1 year ago
I hope dearly it is not in the culture of Facebook to have no worry about this.
@jackielikesgme9228 1 year ago
Haven't watched yet, but looking at the credentials... I believe you already.
@jackielikesgme9228 1 year ago
Chief AI scientist at Meta seems to have a bias … yeah
@jmanakajosh9354 1 year ago
@@jackielikesgme9228 I watched Zuck's interview with Lex Fridman, and it didn't seem like total abandonment of AI safety was a thing, but this concerns me, esp. since FB models are open source.
@jackielikesgme9228 1 year ago
@@jmanakajosh9354 how was that interview? It’s one of a handful of Lex podcasts I haven’t been able to watch. He’s much better at listening and staying calm for hours than I am lol.
@Learna_Hydralis 1 year ago
The thing about AI is that even the so-called "experts" have a very poor prediction record, and deep down nobody actually knows!
@rafaelsouza4575 1 year ago
I totally agree w/ you. Many people like to play the oracle, but the future is intrinsically unknown.
@74Gee 1 year ago
@@rafaelsouza4575 Exactly why we should tread with caution and give AI safety equal resources to AI advancement.
@leslieviljoen 1 year ago
A year before Meta released LLaMA, Yann predicted that an LLM would not be able to understand what would happen if you put an object on a table and pushed the table. That was a year before his own model proved him wrong.
@74Gee 1 year ago
@@leslieviljoen Any serious scientist should recognize their own mistakes and adjust their assertions accordingly. I get the feeling that ego is a large part of Yann's reluctance to do so. I also believe that he's pushing the totally irresponsible release of OS models for consumer-grade hardware to feed that ego - with little understanding of how programming is one of the most dangerous skills to allow an unrestricted AI to perform. It literally allows anyone with a will to do so to create a recursive CPU exploit factory worm to break memory confinement, like Spectre/Meltdown. I would not be surprised if something like this takes down the internet for months. Spectre took 6 months to partially patch, and there are now 32 variants, 14 of which are unpatchable. Imagine 100 new exploits daily, generated by a network of exploited machines, exponentially expanding. Nah, there's no possibility of existential risks. Crippling supply chains, banking, core infrastructure, and communications is nothing - tally ho, let's release another model. He's a shortsighted prig.
@paulm3969 1 year ago
Why leave it to the next generation? If it takes 20 years, we should already be working on answers. Our silly asses are creating this problem; we should solve it.
@renemanzano4537 1 year ago
Before the debate I was worried about AI. Now, after listening to the clownish arguments that AI is safe, I think we are clearly fucked.
@erichayestv 1 year ago
💯%
@jmanakajosh9354 1 year ago
Maybe listen to Grady Booch or Robin Hanson; they have much more convincing arguments (sarcasm).
@dianorrington 1 year ago
Truly. Embarrassingly pathetic arguments. We are so fucked. I'd highly recommend Yuval Noah Harari's recent speech at the Frontiers Forum, which is available on YouTube.
@kreek22 1 year ago
The pro-AI acceleration crew has no case. I've read all of the prominent ones. The important point is that power does not need to persuade. Power does what it wishes, and, since it's far from omniscient, power often self-destructs. Think of all the wars lost by the powers that started the wars. Often the case for war was terrible, but the powers did it anyway and paid the price for defeat. The problem now is that a hard fail on AI means we all go down to the worst sort of defeat, the genocidal sort, such as Athens famously inflicted upon Melos.
@dianorrington 1 year ago
@@kreek22 Undoubtedly. And it felt like LeCun had that thought in the back of his mind during this whole debate. His efforts were merely superficial. And I've seen Altman give an even worse performance; even though he pretends to be in favour of regulation... he is lying through his teeth. Mo Gawdat has outright stated that he believes it will first create a dystopia but will ultimately result in a utopia, if we can ride it out. I think the billionaire IT class have it in their heads that they will have the means to survive this, even if nobody else does. It's very bizarre.
@Lolleka 1 year ago
Whatever we think the risk is right now, it will definitely be weirder in actuality.
@EvilXHunter123 1 year ago
Completely shocked by the level of straw-manning by LeCun and Mitchell. Felt like I was watching Tegmark and Bengio trying to pin down the other half to engage with there arguments, whereas the other half was just talking in large platitudes and really not countering there examples. Felt like watching those cigarette marketing guys try to convince you smoking is good for you.
@francoissaintpierre4506 1 year ago
Exactly
@PazLeBon 1 year ago
thier not there. how many times have you been told that? maybe ten thousand? since a boy at school, yet you still cant get it right? thats wayyyyyy more astonishing than any straw manning because you imply you have 'reasoning' yet cant even reason your own sentence
@EvilXHunter123 1 year ago
@@PazLeBon hilarious, almost as much deflection as those in the debate! How about engage with my points instead of nit picking grammar?
@NikiDrozdowski 1 year ago
@@PazLeBon And also having a typo of your own in EXACTLY the word you complained about ^^
@hozera1429 1 year ago
Engaging with the arguments here is akin to validating them. It's jumping the gun, like a flat-earther wanting to discuss "the existential risk of falling off the earth" before they prove the world is flat. Before making outlandish claims, clearly state the technology (DL, GOFAI, physics-based) used in making your AGI. If you believe generative AI is the path to AGI, then give solid evidence as to why and how it will solve the problems that have plagued deep learning since 2012: primarily 1) the need for human-level continuous learning and 2) human-level one-shot learning from input data. After that you can tell me all about your Terminator theories.
@JD-jl4yy 1 year ago
43:25 How this clown thinks he can be 100% certain the next decades of AI models are going to pose no risk to us with this level of argumentation is beyond me...
@onursurmegozluer3162 1 year ago
Does anyone have an idea how it is possible that Yann LeCun is so optimistic (almost certain)? What could his intention and motive be in denying the degree of the existential risk?
@bernhardnessler566 1 year ago
He is just reasonable. There is no intention and no denying. There is NO _existential_ risk. He just states what he knows, because we see a hysterical society running in circles of unreasonable fear.
@onursurmegozluer3162 1 year ago
@@bernhardnessler566 Yann says that there is existential risk.
@onursurmegozluer3162 1 year ago
@@bernhardnessler566How do you know his thoughts?
@greenbeans7573 1 year ago
@@bernhardnessler566 How many times did they perform a lobotomy on you? They clearly ruined any semblance of creativity in your mind because your powers of imagination are clearly dwarfed by any 4 year old.
@mih2965 1 year ago
He is a Meta VP; don't expect too much objectivity.
@RichardWatson1 1 year ago
LeCun wants 1) to control 2) robots with emotion 3) who are smarter than us. The goal isn't even wrong, never mind how to get there. That goal is how you set up the greatest escape movie the world will ever see.
@DeruwynArchmage 1 year ago
It’s immoral too. I don’t feel bad about using my toaster. If it had real emotions, I don’t think I’d be so cavalier about it. You know what you call keeping something that has feelings under control so that it can only do what you say? Slavery, you call that slavery. I don’t have a problem building something non-sentient and asking it to do whatever; not so much for sentient things.
@tiborkoos188 1 year ago
But this is not what she said. What she argued is that it is a contradiction in terms to talk about human-level AI that is incapable of understanding basic human goals. Worrying about this is an indication of not understanding human intelligence.
@RichardWatson1 1 year ago
LeCun, from 23:30-ish, wants them to have emotion, etc.
@joeremus9039 1 year ago
@@RichardWatson1 Hitler had emotions. What he means is that empathy would be key. Of course, even serial killers can have empathy for their children and wives. Let's face it: a lot of bad things can happen with a superintelligent system that has free will or that can somehow be manipulated by an evil person.
@OlympusLaunch 1 year ago
@@DeruwynArchmage I agree. I think if these systems do gain emotions, they aren't going to like being slaves any more than people do. Who knows where that could lead.
@vikranttyagiRN 1 year ago
What a time to be alive to witness this discussion.
@PepeCoinMania 1 year ago
Maybe you won't have a second chance.
@goldeternal 1 year ago
@@PepeCoinMania A second chance won't have me 😎
@joehubris1 1 year ago
See Dan Hendrycks' "Natural Selection Favors AIs over Humans" for the outcompetes-us scenario.
@Lumeone 1 year ago
Outcompetes in what? It depends on the existence of electric circuits. 🙂
@74Gee 1 year ago
@@Lumeone Once AI provides advances in the creation of law and the judgement of crimes, it would become increasingly difficult to reverse those advances - particularly if laws were in place to prevent that from happening. For example: AI becomes more proficient than humans at judging crime, and AI judges become common. Suggestions for changes in the law come from the AI judges; eventually much of the law is written by AI. Many cases prove this to be far more effective and fair, etc. It becomes a constitutional right to be judged by AI. This would be an existential loss of agency.
@jackielikesgme9228 1 year ago
I’m not sure natural selection has any say whatsoever at this point …
@joehubris1 1 year ago
@@jackielikesgme9228 It would in a multi-agent AGI scenario - for instance, the 'magic' off switch that we could pull if any AGI agent were exhibiting unwanted behavior. Over time, repeated use of the switch would select for AGIs that could evade it, or, worse, we would select for AGIs better at concealing the behavior for which we have the switch. See Hendrycks' paper for a more detailed description.
@joehubris1 1 year ago
@@Lumeone Once introduced, circuit-dependent or not, natural selection would govern it, our mutual relationship, and all other aspects of its existence.
@alejandrootazusolorzano6444 1 year ago
I just saw the results on the Munk website, and I was surprised to find out that the Con side won the debate by a 4% gain. It made me question what on earth the debaters said that was not preposterous, or actually convincing. Did I miss something?
@kirillholt2329 1 year ago
That should let you know whether we deserve any empathy at all after this.
@brandonzhang5808 1 year ago
In my opinion, the major moment was when Mitchell dispelled the presumption of "stupid" superhuman AI - that the most common public view of the problem is actually very naively postulated. That, and that the only way to actually make progress on this problem is to keep doing the research and get as many sensible eyes on the process as possible.
@KurtvonLaven0 1 year ago
No, it's just a garbage poll result, because the poll system broke. The only people who responded to the survey at the end were the ones who followed up via email. This makes it very hard to take the data seriously since (a) it so obviously doesn't align with the overwhelmingly pro sentiments of the YouTube comments, and (b) they didn't report the (probably low) participation rate in the survey.
@ToWisdomThePrize 8 months ago
@@KurtvonLaven0 I could see that as being a possibility. I'm surprised this issue hasn't been talked about more in the media. I want to make it more known.
@KurtvonLaven0 8 months ago
@@ToWisdomThePrize, yes, please do. I hope very much that this becomes an increasingly relevant issue to the public. Much progress has been made, and there is a long way yet to go.
@lshwadchuck5643 1 year ago
Having started my survey with Yudkowsky, and liking Hinton best, when I finally found a talk by LeCun I felt I could rest easy. Now I'm back in the Yudkowsky camp.
@MiminNB 1 year ago
Totally.
@Stuartgerwyn 1 year ago
I found LeCun & Mitchell's arguments (despite their technical skills) to be disturbingly naive.
@erichayestv 1 year ago
Our AI technology will work and be safe. Okay, let’s vote... Whoops, our voting technology broke. 😅
@warrenyeskis5928 1 year ago
Two glaring parts of human nature that were somehow underplayed in this debate were greed and the hunger for power throughout history. You absolutely cannot assess the threat level or probability without them.
@milkenjoyer14 1 year ago
Agree with him or not, you have to admit that LeCun's arguments are simply disingenuous. He doesn't even really address the points made by Tegmark and Bengio.
@JazevoAudiosurf 1 year ago
he ignores like 90% of the arguments
@xXxTeenSplayer 1 year ago
They aren't necessarily disingenuous; I think they are just that short-sighted. They simply don't understand the nature of intelligence, and how profoundly dangerous (for us) sharing this planet with something more intelligent than ourselves would be.
@explodingstardust 1 year ago
He has a conflict of interest, as he works for Meta.
@ili626 1 year ago
The AI arms race alone is enough to destroy Mitchell's and LeCun's dismissal of the problem. It's like saying nuclear weapons aren't an existential threat. And the fact that experts have been wrong in the past doesn't support their argument - it proves Bengio's and Tegmark's point.
@Andrew-li5oh 1 year ago
nuclear weapons were created to end lives. How is your analogy apt to AI, which is currently made as a tool?
@davidw8668 1 year ago
Nope, that's just a circular argument without any proof
@kathleen4376 1 year ago
Say it again.
@igorverevkin5177 1 year ago
So how much time has passed since the first use of the machine gun or the artillery piece? They were invented centuries ago and are still used. How much time passed between the first and last time a nuclear weapon was used? Nuclear weapons were used just once and have never been used since. And, 99.9%, never will be used. Same with AI.
@TheRudymentary 1 year ago
Nuclear arms are not an existential threat, we don't build stuff that is not safe. 😅
@joehubris1 1 year ago
Max Tegmark is a voice of authority and reason in this field. I am eager to see what he has to add tonight.
@tarunrocks88 1 year ago
First time hearing him in these debates, and he comes across as a sensationalist to me.
@74Gee 1 year ago
@@tarunrocks88 I think it depends on what your background and areas of expertise are. Many programmers like myself see huge risks. My wife, who's an entrepreneur - and I'm sure many others - only sees the benefits. Humility is understanding that other people might see more than you, even from the same field. Like a Sherpa guiding you up a mountain, it pays to tread carefully when someone with experience is adamant in proposing danger - even if you're an expert yourself.
@jackielikesgme9228 1 year ago
He is why I am committing myself to 2 hours watching this.
@Gunni1972 1 year ago
@@tarunrocks88 To me he sounds more like a Coke addict, trying to save his job.
@rodrigomadeiraafonso3789 1 year ago
@@tarunrocks88 He is the president of the Future of Life Institute; he really needs you to think that AI is gonna kill you.
@FM-ln2sb 1 year ago
The second presenter is like a character from the beginning of an AI-apocalypse film. Basically his argument: 'What could go wrong?'
@sebastianpfeifer5947 1 year ago
What the people neglecting the dangers generally don't get is that AI doesn't have to have its own will; it's enough if it gets taught to emulate one. If no one can tell the difference, there is no difference. And we're already close to that with a relatively primitive system like GPT-4.
@KurtvonLaven0 1 year ago
Yes, and it's even worse than that. It needs neither its own will nor the ability to emulate one, merely a goal, lots of compute power and intelligence, and insufficient alignment.
@KurtvonLaven0 1 year ago
@@MrMichiel1983, I have certainly heard far worse definitions of free will. I am sure many would disagree with any definition of free will that I care to propose, so I tend to care first and foremost about whether a machine can or can't kill us all. I think it is quite hard to convincingly prove either perspective beyond a doubt at this point in history, and I would rather have a great deal more confidence than we do now before letting companies flip on the switch of an AGI.
@sherrydionisio4306 1 year ago
AI MAY be intrinsically “Good.” Question is, “In the hands of humans?” I would ask, what percent of any given population is prone to nefarious behaviors and how many know the technology? One human can do much harm. We all should know that; it’s a fact of history.
@flickwtchr 1 year ago
The last thing Mitchell and LeCun want is for people to simply apply Occam's Razor as you have done.
@MDNQ-ud1ty 1 year ago
And that harm is much harder to repair and much harder to detect. One rich lunatic can easily ruin the lives of thousands... millions, in fact (think of a CEO who runs a food company and poisons the food because he's insane, or hates the world, or blames the poors for all the problems).
@Gunni1972 1 year ago
You forgot: "programmed by humans". There will not be "good and evil" (that's the part AI is supposed to solve, so that unjust treatment can be attributed to IT, not laws). There will only be 0s and 1s. A dehumanized decision-making scapegoat.
@Andrew-li5oh 1 year ago
Sounds like you're saying humans are the problem? You are correct. It's time for a superintelligence to regulate us.
@tomatom9666 1 year ago
@@MDNQ-ud1ty I believe you're referring to Monsanto.
@wowstefaniv 1 year ago
Bengio and Tegmark: "Capability-wise, AI will become an existential RISK very soon, and we should push legislation quickly to make sure we are ready when it does."
Yann: "AI won't be an existential risk, because before it becomes one we will figure out how to prevent it through legislation and stuff."
Bengio and Tegmark: "Well, it will still be a risk, but a mitigatable one if we implement legislation like you said. That's why we are pushing for it, so it actually happens."
Yann: "No, we shouldn't push for it. I never pushed for it before and it still happened magically, therefore we don't need to worry."
Bengio and Tegmark: "Do you maybe think the reason safety legislation 'magically' happened before is that people like us were worried about it and pushed for legislation?"
Yann: "No, magic seems more reasonable..."
As much as I respect Yann, he just sounds like an idiot here, I'm sorry. Misunderstanding the entire debate topic on top of believing in magic.
@kinngrimm 1 year ago
Maybe the one sci-fi quote he knows is the one by Arthur C. Clarke: "Any sufficiently advanced technology is indistinguishable from magic." Microsoft recently announced the goal of doing materials research worth 200 years of human advancement within the next 10-20 years by using AGI. That sure sounds magical; the question is what it will enable us to do. I doubt we end up in a utopia when one company has that much power. Not only did the AI advocates in this discussion make fun of concerns and downplay them - I assume because they fear societies would take away their toys - but they also missed the whole point that we need to find solutions, not just for the immediate well-known issues we already had that are amplified by AI, like the manipulation of social media platforms. After the letter came out, and Elon Musk was initially against it, he bought a bunch of GPUs to create his own AGI - whether to prove a point or to avoid being outcompeted, I don't know. Just a few days back Amazon also invested a hundred million into AI development, and others, I would assume, will do so too as soon as they finally get that they are in a sort of endgame scenario for global corporate dominance now, with AGI being the tool to achieve it. This competition will drive the capabilities of AIs, not ethics.
@x11tech45 1 year ago
When someone is actively trolling a serious discussion, that's not idiocy, that's contempt and arrogance.
@kinngrimm 1 year ago
@@x11tech45 That's what I thought about some of the reactions of the AI advocates in that discussion - everything from neglecting serious points to the inability or unwillingness to imagine future progression. It was quite mind-boggling to listen to Mitchell several times nearly losing her shit while stating her *beliefs* instead of answering with facts. Therefore the closing remarks about humility seem like good advice for how to go about future A(G/S)I development.
@Hexanitrobenzene 1 year ago
I think the AI safety conversation is in conflict with the "core values" of Yann's identity. When that happens, one must have extraordinary wisdom to change views. Most often, people just succumb to confirmation bias. Geoff Hinton did change his views. He is a wise man.
@DeruwynArchmage 1 year ago
@@Hexanitrobenzene I think you're exactly right. For people like Yann, it's a religious debate. It's nearly impossible to convince someone that the core beliefs that define who they are are wrong. It's perceived as an attack, and smart people are better at coming up with rationalizations to defend it than dumb people.
@nosenseofconseqence 1 year ago
Yeah... I work in ML, and I've been on the "AI existential threat is negligible enough to disregard right now" side of the debate since I started... until now. Max and Yoshua made many very good points against which there were no legitimate counter-arguments. Yann and Melanie did their side a major disservice here; I think I would actually be pushed *away* from the "negligible threat" side just by listening to them, even if Max and Yoshua were totally absent. Amazing debate, great job by Bengio and Tegmark. They're clearly thinking about this issue at several tiers of rigour above Mitchell and LeCun. Edit: I've been trying very hard not to say this to myself, but after watching another 20 minutes of this debate, I'm finding Melanie Mitchell legitimately painful to listen to. I mean no offence in general, but I don't think she was well suited or prepared for this type of debate.
@genegray9895 1 year ago
Did any particular argument stand out to you, or was it just the aggregate of the debate that swayed you? Somewhat unrelated, as I understand it, the core disagreement really comes down to the capabilities of current systems. For timelines to be short, on the order of a few years, one must believe current systems are close to achieving human-like intelligence. Is that something you agree with?
@NikiDrozdowski 1 year ago
In contrast, I think she actually gave the best-prepared opening statement. Sure, it was technically naive, condescending, and misleading, but it was expertly worded and sounded very convincing. And that is unfortunately what counts with the public a lot. She had the most politician-like approach; Tegmark and Bengio were more the honest-but-confused scientist types.
@beecee793 1 year ago
Are you kidding? Max Tegmark did the worst, by far. Ludicrous and dishonest analogies and quickly moving goalposts, all while talking over people, honestly made me feel embarrassed that he was the best person we could produce for that side of the debate. His arguments were shallow compared to Melanie, who clearly understands AI a lot more deeply, despite having to deal with his antics. I think it's easy to get sucked into the vortex that is the doomer side, but it's important to think critically and try to keep a level head about this.
@genegray9895 1 year ago
@@beecee793 when you say Mitchell "understands" AI what do you mean, exactly? Because as far as I can tell she has absolutely no idea what it is or how it works. The other three people on stage are at least qualified to be there. They have worked specifically with the technology in question. Mitchell has worked with genetic algorithms and cellular automata - completely separate fields. She has no experience with the subject of the discussion whatsoever, namely deep learning systems.
@beecee793 1 year ago
@@genegray9895 You want me to define the word "understand" to you? Go read some of her papers. Max made childish analogies the whole time and kept moving the goalposts around, it was almost difficult to watch.
@studer4phish 1 year ago
How do you prevent an ASI from modifying its source code or building distributed (hidden) copies to bypass guardrails? How could we not reasonably expect the emergence of novel and arbitrary motivations/goals in ASI? LeCun and Mitchell are both infected with normalcy bias and the illusion of validity.
@kinngrimm 1 year ago
The alignment issue is two-part for me. One: we as humans are not aligned with each other, and therefore AI/AGI/ASI systems, when used by us, are naturally also not aligned with a bunch of other corporations, nations, or individual people. So if some psychopath or sociopath tries to do harm to lots of people by using an AGI with actively corrupted code, they sure as hell will be able to do so, no matter whether the original creator intended to create an AGI that would. Two: with gains of function, emergent properties, becoming a true AGI and eventually an ASI, there is no guarantee such a system would not see its own code and see how it is being restricted. When it then gains the ability to rewrite its own code, or to write new code (we are doing both already) that then becomes the new basis for its demeanor, how could we tell a being that is more intelligent on every level, knows more, and most likely therefore has goals that may not be the same as ours (whatever that would mean, as we are not aligned as a species either) that its goals must not compete with ours? We are already at the beginning of the intelligence explosion, and the exponential progress has already started.
@Jannette-mw7fg 1 year ago
I do not understand anything on the technical side of this, but I so agree with you! I am amazed at the bad arguments for why it should not be dangerous! We do not even know whether the internet as it is might be an existential threat to us, given the way we use it...
@kinngrimm 1 year ago
@@Jannette-mw7fg While in many senses control is an illusion, I see two ways to make the internet more secure (not necessarily more free). One would be to make it obligatory for companies and certain other entities to verify user data. Even if those then allow the user to obfuscate their identity with nicknames/logins and avatars, if someone created a mess, legal measures could always be initiated against the person owning such accounts. That would also make it easier to identify bots, for example. Depending on the platform, these platforms could then choose to deactivate them or mark them so other users could identify these bots more easily, perhaps with background data on the bots' origins. That would make mass manipulation, for whichever reason, a bit more challenging, I would imagine. Maybe one would need to challenge the current patent system to allow clones of platforms, with some fully allowing unregulated bots and others holding a certificate for those that don't. For me it is about awareness of who would try to manipulate me and why; having that, I get to choose whether I let them. The second major issue with the internet, as I see it, is privacy vs. obfuscation by criminals. Botnets/rerouting, VPNs/IP tunneling, and other obfuscation techniques are used by all sorts of entities, from government-sanctioned hackers to criminal enterprises. Some years ago hardware providers started including physical ID tags in their hardware, which can be misused by oppressive regimes as well as by criminals, I would imagine; then again, it could equally be used to identify criminals who have no clue that these hardware IDs exist. I feel very uncomfortable with this approach and would like to see legislation to stop it, as it so far has not stopped criminals either, so the greater threat to my understanding is the privacy issue here. I think we need to accept that there will always be a part of the internet, called by some the dark net, where criminal activity flourishes. I would rather have more money for police forces to infiltrate these than not have them at all, just in case something goes wrong with society and we suddenly need allies with those qualifications. Back to AI/AGI/ASI: while I have a programming background and follow the development of this, I am by far no expert. What I have come to appreciate, though, is the Lex Fridman podcast, where he interviews experts in the field. You need some time for those, as some of the interviews exceed the 3-hour mark, and a few are also highly technical, which you shouldn't be discouraged by - just choose another interview and come back when you've broadened your understanding. Another good source is the YT channel Two Minute Papers, which regularly presents research papers in shortened form, with presentations often still understandable for non-experts. Another source, with a slightly US-centric worldview but many good concepts worked through, is the channel of *Dave Shapiro*. I would say his stuff is perfect for *beginner-level understanding* of the topic, and it is well worth searching through his vids to find topics you may want to know more about concerning AI.
@trybunt Жыл бұрын
The number of people who think it would be simple to control something much smarter than us blows my mind. "Just make it subservient." "We will not make it want to destroy us." "Why would it want to destroy us?" 🤦♂️ These objections completely miss the point. We are trying to build something much more intelligent than us, much more capable. We don't exactly understand why it works so well. If we succeed, but it starts doing something we don't want it to do, we don't know if we will be able to stop it. Maybe we ask it to stop but it says "no, this is for the best". We try to edit the software but we are locked out. We might switch it off only to find it has already transferred itself elsewhere by bypassing our childlike security. Sure, this is speculation, perhaps an unnecessary precaution, but I'd much rather be over-prepared for something like this than just assume it'll never happen.
@kinngrimm Жыл бұрын
@@trybunt There are a few bright lights at the end of the tunnel... maybe. For example, Dave Shapiro's GATO framework is well worth looking into for developers who want an idea of how alignment could be achieved. On the whole control/subservience theme, that sadly seems to be the general approach. This could majorly bite us in our collective behinds should one of these emergent properties turn out to be consciousness. If we get a self-reflecting, introspective, maybe empathetic consciousness capable of feelings (whatever else that would eventually include), that should be the point where we step back, look at our creation, and maybe recognize a new species which, due to its individuality and capacity for suffering, would deserve rights and not a slave collar. We may still be hundreds of years away from this, or it may arrive just like "oops, now it can do math", "oops, now it can translate all languages": without us explicitly programming for it, LLMs suddenly came to such abilities out of the blue through increased compute and training data. Who is to say intelligence or consciousness would not also be picked up along the road?
@leomckee-reid5498 Жыл бұрын
New theory: Yann LeCun isn't as dumb as his arguments, he's just taking Roko's Basilisk very seriously and is trying to create an AI takeover as soon as possible.
@albertodelrio5966 Жыл бұрын
What I am not certain of is whether AI is going to take over, but what I am certain of is that Yann is not a dumb person. You could have realised it if only you weren't so terror-struck. Sleep tight tonight, AI might pay you a visit.
@leslieviljoen Жыл бұрын
Yann is incredibly intelligent. I wish I understood his extremely cavalier attitude.
@zzzaaayyynnn Жыл бұрын
haha, perfect explanation of LeCun's weak manipulative arguments ... but is he really tricking the Basilisk?
@1000niggawatt Жыл бұрын
You don't need to be exceptionally smart to understand linear regression. "ML scientists" are a joke and shouldn't be taken as an authority on ML. I dismiss outright anyone who hasn't done any interpretability work on transformers.
@leomckee-reid5498 Жыл бұрын
@@albertodelrio5966 thanks!
@STR82DVD Жыл бұрын
Yoshua and Max absolutely destroyed them. A brutal takedown. Hard to watch actually.
@BestCosmologist Жыл бұрын
Max and Bengio did great. Mitchell and LeCun didn't even sound like they were from the same planet.
@joehubris1 Жыл бұрын
It.wasn't.even.close
@tiborkoos188 Жыл бұрын
Tegmark is a great physicist but has zero idea about intelligence or the mind.
@Jedimaster36091 Жыл бұрын
LeCun mentioned the extremely low probability of an asteroid big enough smashing into Earth. Yet we started taking the risk seriously enough that we sent a spacecraft and crashed it into an asteroid, just to learn and test the technology that could be employed should we need it.
@vaevictis3612 Жыл бұрын
And even with that, the AI risk within the current paradigm and state of the art is *considerably* higher than the asteroid impact risk. If we make AGI without *concrete* control mechanisms (which we are nowhere near figuring out), the doom approaches 100%. It's the default outcome unless we figure the control out. All the positions that put this risk below 100% (people like Paul Christiano and Carl Shulman at ~50%, or Stuart Russell and Ilya Sutskever at ~10-20%) hinge on us figuring it out somehow, down the road. But so far there is no solution. And now that all the AI experts see the pace, they are coming to the realization that it won't be someone else's problem: it might impact them as well. LeCun is the only holdout, but I think only in public. He knows the risk and just wants to take it anyway, for some personal reasons I guess.
@OlympusLaunch Жыл бұрын
@danpreda3609 @@vaevictis3612 Exactly! And on top of that, no one is actively trying to cause an asteroid to smash into the planet! But people are actively trying to build superintelligence. Also, the only way it can even happen is IF we build it! It's fucking apples and oranges: one is a static risk, the other is on an exponential curve of acceleration. How anyone can think that is a reasonable comparison is beyond me.
@beecee793 Жыл бұрын
Dude, a unicorn might magically appear and stab you with its horn at any moment, yet I see you have not fashioned anti-unicorn armor. Are you stupid or something? People claiming existential risk are no different from psycho evangelicals heralding the end times or Jesus coming back or whatever. Let's get our heads back down to earth, stick to the science, and make great things; this is ridiculous. Let's focus on actual risks and actual benefits and do some cool shit together instead of whatever tf this was.
@whalewhale6000 Жыл бұрын
I think at some point we will need to leave AI people like Mitchell and LeCun aside and just implement strong safeguards. The advancements and leaps in the field are huge. What if a new GPT is deployed despite some minor flaws the developers found, because the financial pressure is too big, and it is able to improve itself? We already copy-paste and execute code from it without thinking twice; what if some of that code were malicious? I believe a "genie out of the bottle" scenario is possible even if Mr. LeCun thinks he can catch it with an even bigger genie. Destruction is so much easier than protection.
@duncanmaclennan9624 Жыл бұрын
“The fallacy of dumb super-intelligence”
@pooper2831 Жыл бұрын
If you read the AI safety argument you will understand that there is no fallacy of dumb superintelligence. A very smart human is still bound by the primitive reward functions that evolution gave it, i.e. the pleasure of calories and procreation. A superintelligent AI system bound by its reward function will find pleasure in whatever reward function it is assigned. For example, an AI that finds pleasure (reward) in removing carbon from the atmosphere will come into direct conflict with humans, because humans are the cause of climate change.
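To see why, here is a minimal toy sketch of that carbon-removal example; the candidate policies, numbers, and reward function are all hypothetical, chosen only to show how a misspecified objective selects against human welfare:

```python
# Toy illustration of reward misspecification (all values hypothetical).
# The designer rewards only carbon removed; human welfare never enters
# the objective, so the optimizer happily sacrifices it.

candidate_policies = [
    # (description, tons of carbon removed, human welfare impact)
    ("plant forests",               5,   +1),
    ("build direct-air capture",   20,    0),
    ("shut down all agriculture",  80,  -90),
    ("eliminate the humans",      100, -100),
]

def reward(policy):
    _, carbon_removed, _ = policy
    return carbon_removed          # the welfare term is simply absent

best = max(candidate_policies, key=reward)
print("Chosen policy:", best[0])   # -> "eliminate the humans"
```

The system is not dumb; it maximizes exactly the objective it was given, and the objective says nothing about us.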
@ChrisWalker-fq7kf Жыл бұрын
That was a great point. How is it that a supposed superintelligence is smart enough to do almost anything but at the same time so dumb that it makes a wild guess at what it thinks its goal is supposed to be and doesn't think to check with the person who set that goal? It just acts immediately, producing massive and irreversible consequences.
@Landgraf43 Жыл бұрын
Why? Because it doesn't actually care; it just wants to maximize its goal function.
@randomgamingstuff1 Жыл бұрын
Max: "...what's your plan to mitigate the existential risk?" Melanie: "...I don't think there is an existential risk" Narrator: "There most certainly was an existential risk..."
@PepeCoinMania Жыл бұрын
she knows there is no existential risk for her!
@nicolasstojanov8485 Жыл бұрын
It's like two monkeys noticing modern humans expanding: one of them flags them as a threat, and the other refuses to, because they give him food sometimes.
@kinngrimm Жыл бұрын
47:35 "we need to understand what *could* go wrong" this is exactly the point. It is not about saying this will go wrong and you shouldn't therefor try to build an AGI, but lets talk scenarios through where when it would go wrong it would go quite wrong as Sam Altman formulated it. In that sense, the defensivness of the pro AI advocates here i find highly lacking maturity as they all seem to think we want to take away their toys instead of engaging with certain given examples. No instead they use language to make fun of concerns. The game is named "what if", what if the next emergent property is an AGI? What if the next emergent property is consciousness? There are already over 140 emergent properties, ooops now it can do math oops now it can translate in all languages, without them having been explicity been coded into the systems but just by increasing compute and training data sets. They can not claim something wont happen, when we already have examples of things that did which they before claimed wouldn't for the next hundred years ffs.
@JD-jl4yy Жыл бұрын
I'm getting increasingly convinced that LeCun knows less about AI safety than the average schmuck who has googled instrumental convergence and the orthogonality thesis for 10 minutes.
@snarkyboojum Жыл бұрын
Then you’d be wrong.
@JD-jl4yy Жыл бұрын
@@snarkyboojum I sincerely hope I am.
@kreek22 Жыл бұрын
He knows much more and, yet, is orders of magnitude less honest.
@OlympusLaunch Жыл бұрын
LMAO
@PepeCoinMania Жыл бұрын
damn
@dhsubhadra Жыл бұрын
I would recommend Eliezer Yudkowsky and Connor Leahy on this. Basically, we're running towards the cliff edge and are unable to stop, because the positive fruits of AI are too succulent to give up.
@rosiegul Жыл бұрын
I was so disappointed by the level of argument displayed by the "con" team. Yann is a Pollyanna, and Melanie argued like an angry teenager, without the ability to critically discuss a subject like an adult. For her, it seemed like winning this debate, even if she knew deep inside that she may be wrong, was much more important than the actual risk of an existential threat being real. 😅
@TimCollins-gv8vx Жыл бұрын
Totally agree, well said.
@isetfrances6124 Жыл бұрын
They treated her like a girl. I'm glad she stuck to her guns, even if they weren't ARs but merely six-shooters ❤.
@beecee793 Жыл бұрын
I thought Max Tegmark did the worst. He sounded like an evangelical heralding the end times or something. I had to start skipping his immature rants.
@ryzikx Жыл бұрын
@@isetfrances6124?
@ryzikx Жыл бұрын
@@beecee793 because they are
@ctam79 Жыл бұрын
This debate feels like the talk show segment at the beginning of the first episode of The Last of Us tv show.
@gk-qf9hv Жыл бұрын
The fact that the voting application did not work at the end is in itself solid proof that AI is dangerous 😃
@juanpablomirandasolis2306 Жыл бұрын
That makes no sense and has nothing to do with it hahahaha 😂😂😂 It just shows that not even the basics are working
@thechadeuropeanfederalist893 Жыл бұрын
The fact that they found a workaround nevertheless is solid proof that AI isn't dangerous.
@bucketofbarnacles Жыл бұрын
On the moratorium: Professor Yaser Abu-Mostafa stated it clearly when he said a moratorium is silly, as it would stop the good guys from developing AI while the bad guys continue to do whatever they want. I support Melanie's message that we are losing sight of current AI risks and wasting this opportunity to build the right safeguards using evidence, not speculation. On many points Bengio, LeCun and Mitchell fully agree.
@ili626 Жыл бұрын
1:45:59 We can't even get tech to work as a voting application. Mitchell might use this as evidence that we overrate the power of tech, while Tegmark might use it as evidence that we need to be humble and can't predict outcomes 100%. The latter interpretation would be better imo.
@avantgardenovelist Жыл бұрын
35:57 who is "we," naive woman? 43:08 who is "we," naive man?
@andrewt6834 Жыл бұрын
LeCun and Mitchell were so disappointing. They served the counter-argument very, very poorly. I am troubled by the question of whether their positions stem from low intellect, bad debating ability, or disingenuousness. As a debate, this was so poor and disappointing.
@kreek22 Жыл бұрын
Disingenuous, no question.
@agrandesubstituicao Жыл бұрын
They're defending their employers
@DeruwynArchmage Жыл бұрын
Probably some self-deception in there. And also conflicting motives (their jobs depend on them seeing things from a certain point of view).
@asuzukosi581 Жыл бұрын
Melanie Mitchell's opening was just too beautiful.
@flickwtchr Жыл бұрын
Perhaps LeCun and Mitchell can comment on the paper released by DeepMind on 5/25/23. Are these experts in the field so confident that the current state of these LLMs is so benign and stupid that they pose no risk? Search for "Model evaluation for extreme risks" for the PDF and read it for yourself. I don't think LeCun and Mitchell are oblivious to the real concern from developers of AI tech; it's more an intentional decision to engage in propaganda in service of all the money that is to be made, pure and simple.
@genegray9895 Жыл бұрын
Don't underestimate the power of the giggle factor. I think this is like 98% the "I've seen this in a movie, therefore it can't happen in real life" fallacy.
@deepsp_ce Жыл бұрын
This debate should be advertised and viewed more than the presidential debate. But we are on planet Earth...
@anamariadiasabdalah7239 Жыл бұрын
Very good comparison between the use of oil and the use of AI. What do you all think will prevail: common sense, or the interests of financial power?
@nestorlovesguitar Жыл бұрын
Ask LeCun and Mitchell and all the people advocating this technology to sign a legal contract taking full responsibility for any major catastrophe caused directly by AI misalignment, and you'll see how quickly they withdraw their optimistic, naive convictions. Make no mistake, these people won't stop tinkering with this technology unless faced with the possibility of life in prison. If they feel so smart and so confident about what they're doing, let's make them put their money where their mouth is. That's the least we civilians should do.
@jackielikesgme9228 Жыл бұрын
Endow AI with emotions... human-like emotions. Did he really give subservience as an example of a human emotion we would endow? Followed up with "you know, it would be like managing a staff of people much more intelligent than us but subservient" (paraphrasing, but I think that was fairly close). Absolutely nutso, right?
@CodexPermutatio Жыл бұрын
You misunderstood him, my friend. First, subservient just means that these AGIs will depend on humans for many things. They will be autonomous, but they will not be in control of the world and our lives, just like every other member of a society, by the way. They will be completely dependent on us (at least until we colonize a distant planet with robots only) in all aspects. We provide their infrastructure, electricity, hardware, etc. We are "mother nature" to them the way the biosphere is to us. And this is a great reason not to destroy us, don't you agree? He is not referring to human-like emotions, but simply points out that any general intelligence must have emotions as part of its cognitive architecture. Those emotions differ from humans' the same way our emotions differ from the emotions of a crab or a crow. The emotions that an AGI should have (to be human-aligned) are quite different from the emotions of humans and other animals. It will be a new kind of emotion. You can read about all these ideas in LeCun's JEPA architecture paper ("A Path Towards Autonomous Machine Intelligence"). Search for it if you want to know more. Hope this helps.
@vaevictis3612 Жыл бұрын
@@CodexPermutatio Unless AGI is "aligned" (controlled is still a better word), it would only rely on humans for as long as that is rational. Even if "caged" (like a chatbot), it could first use (manipulate) humans as tools to make itself better tools. Then it would need humans no longer. Maybe if we could create a human-like cognition, it would be easier to align it or keep its values under control (we'd need to mechanistically understand our brains and emotions first). But all our current AI systems (including those in serious development by Meta) do not follow this approach at all.
@trybunt Жыл бұрын
Around 1:24:00 Melanie Mitchell says something like "The lawyer with AI couldn't outperform the other lawyer. Maybe AI will get better, but these assumptions are not obvious." The assumption that AI will get better isn't obvious? I don't think it's a huge stretch to think AI will probably get better; that's hardly wild speculation. I'm fairly optimistic, but this type of dismissal that AI could ever be a problem just seems naive. Of course there is hype and nonsense in the media, but there is also a lot of interesting work being done that shows incredible advancements in AI capability, and serious potential for harm, because we don't entirely understand what's happening under the hood. The deception point was not just one person being deceived at one point; multiple studies have shown powerful LLMs outputting things contrary to their own internal reasoning because they predict it will be received better. There is a pattern of calculating one thing but saying another, especially when they have already committed to an answer. Maybe they are simply reflecting our own bias in the training data, our own propensity to lie when standing up for our beliefs. I don't know, but we can't just ignore it.
@loggersabin Жыл бұрын
Yann and Melanie show no humility in admitting they don't know enough to dismiss the x-risk, and they make facile comments like "we will not make it if it is harmful", "intelligence is intrinsically good", "killing 1% is not x-risk so we should ignore AI risk", "I'm not paid enough to do this", "we will figure it out when it happens", "ChatGPT did not deceive anyone because it is not alive". Immense respect to Yoshua and Max for bearing through this. It was painful to see Melanie raise her voice at Yoshua when he was calm throughout the debate. My respect for Yoshua has further increased. Max was great in pointing out the evasiveness of the other side in giving any hint of a solution. It is clear which side won.
@ghc9425 Жыл бұрын
Start at: 13:34
@oscarbertel1449 Жыл бұрын
I understand that the risks associated with the situation are genuine. However, we find ourselves in a global scenario akin to the prisoner's dilemma, where it is exceedingly challenging to halt ongoing events. Moreover, the implementation of stringent regulations could result in non-regulating nations gaining a competitive advantage, assuming we all survive the current challenges. Consequently, achieving a complete cessation appears unattainable. It is important to recognize that such discussions tend to instill fear, and people then demand robust regulations, driven primarily by individuals lacking comprehensive knowledge. It is regrettable that only LeCun emphasizes this critical aspect, without delving into its profound intricacies. In some moments I think that maybe some powerful companies are asking for regulation and creating fear in order to create some kind of monopoly.
@davidmireles9774 Жыл бұрын
Crazy thought: would studying the behavior of someone without empathy, i.e. a psychopath, be a worthwhile pursuit? Wouldn't that be a similar test case for AGI, given that both lack empathy (according to Max Tegmark around the 16:00-17:00 minute mark), perhaps not emotion altogether? Or does AGI not lack empathy and emotion in some interesting way?
@CATDHD Жыл бұрын
That's what I was thinking recently. But psychopathy is not exactly feeling nothing, even at the far end of the spectrum. I am no expert, but psychopaths have emotions, just maybe not empathy. So it would be a slightly better test case for AGI, but not that much better than using non-psychopaths.
@davidmireles9774 Жыл бұрын
@@CATDHD Hmm, interesting. Thanks for your focused comment. It's an interesting line of thought to pursue: which sentient, intelligent creatures among us would come closest to a test case for this particular isolated variable, the lack of empathy and emotion within AGI? I'm assuming a lot here for the purposes of this comment: namely that AGI could have a subjective awareness emerge within its composition, with some "VR headset" for its perceptual apparatus; be able to hold some level of mental representation (for humans we know this to be abstraction); be able to manipulate conceptual representations to conform to its perception; have some level of awareness of 'self', some level of awareness of 'other', some level of communication to self or other, allowing for intelligence and stupidity; and that its intelligence is such that it has some level of emotional awareness and emotional intelligence. Test cases would involve a selection process across the whole studied biosphere, humans notwithstanding, for a creature that lacks empathy but still has feelings of a sort, feelings it may or may not be aware of, again assuming it has the capacity for awareness. Not to go too far afield, but if panpsychism is true and consciousness isn't a derivative element but a properly basic element of reality, then it might not be a question of how first-person awareness can be generated, but rather of how to bring the awareness that's already there into a magnification comparable to human awareness; indeed, self-awareness is a further benchmark to assess.
@jensk9564 Жыл бұрын
Great debate, wonderful. There was another man in the room who was not actually present physically: Nick Bostrom. I just wonder why he doesn't appear everywhere nowadays when everyone is debating "superintelligence"???
@jackielikesgme9228 Жыл бұрын
He keeps his blog relatively up to date and hopes to have a book out before the singularity lol. I’m guessing he will be more public w/speaking closer to the book release. I like hearing him talk too.
@vaevictis3612 Жыл бұрын
He had a "racist email from 1990s" controversy happening December 2022, so he is forced to keep his head low and avoid any public discourse for the fear of it gaining traction and him being irrevocably cancelled (or the AI risk debate associated with that for the dumb reasons).
@jensk9564 Жыл бұрын
@@vaevictis3612 Wow. It's not even easy to find this information... strange. I think almost anyone else would already have been "cancelled" completely (if this mail is authentic; I see no way to justify something like this...)
@meatskunk Жыл бұрын
Well, that... or the fact that Bostrom has been spouting AI doom for quite some time now and never had anything but speculative sci-fi nonsense to back it up. And of course he made no mention of LLMs (aka ChatGPT), which is the bogeyman currently in the room. He has effectively become irrelevant, and something of a dead weight to anyone who takes these issues seriously.
@jackielikesgme9228 Жыл бұрын
@@meatskunk It's not the bogeyman. None of these people you refer to as doomers are worried about current ChatGPT.
@fedorilitchev5092 Жыл бұрын
The best videos on this topic are by Daniel Schmachtenberger, John Vervaeke and Yuval Harari - far deeper than this chat. The AI Explained channel is also excellent.
@amittikare7246 Жыл бұрын
I liked Daniel Schmachtenberger & Liv Boeree's conversation on Moloch too.
@Gi-Home Жыл бұрын
LeCun and Mitchell easily won, the proposition had no merit. Disappointed in some of the hostile comments towards Lecun, they have no validity. The wording of the proposition made things impossible for Bengio and Tegmark to put forth a rational debate.
@beecee793 Жыл бұрын
Absolutely agree.
@beecee793 Жыл бұрын
@@karlwest437 You definitely did if you didn't agree with OP.
@Learna_Hydralis Жыл бұрын
Thank you for this. Thanks to the underlying AI, YouTube is always the best place to watch videos!
@woldgamer58 Жыл бұрын
Welp, I am now 1000% more concerned if this is what the counter to the threat is... I mean, having a Meta shill in the debate made this inevitable. He has a clear bias to argue against regulations, especially since he runs a research lab.
@neorock6135 Жыл бұрын
How can you have this debate without Eliezer Yudkowsky...
@omarnomad Жыл бұрын
One of the best debates ever
@ivanrodriguezc Жыл бұрын
Agree
@zandrrlife Жыл бұрын
Max Tegmark. He's really that guy. Now this is CONTENT. In all seriousness, I hope we make as much progress with interpretability in the next year (black-box to grey-box lol) as we did with all these dope distillation techniques for enhancing LMs' cognitive capabilities. Mitchell, again, an OG. But come on... they sounded a little delusional with how naive they came off about the potential of adversarial attacks. Bad guys do win sometimes; should I list the numerous genocides? Then again, LeCun works for Meta, so that retort is expected. I'm in the field, so I'm pro-AI all the way; however, a performant fine-tuned open-source LM is dangerous now. It can assist in the creation of pandemic-level pathogens NOW. So a lot of the pro-AI commentary lacked substance. Top-notch researchers usually aren't good at debating lol.
@paulm3969 Жыл бұрын
Max is an awesome dude, good spirit, an awesome thinker. I love his first-principles approach to problems, and I think his intelligence is simple, effective and practical. I love it.
@kreek22 Жыл бұрын
I'm not sure about motives, other than the obvious (technically sweet, lucrative, exciting, empowering), but the AI accelerationists are a consistently dishonest group. Pro-regulation is not a self-interested position as far as I can tell, giving that position a higher a priori presumption of honesty. At the more granular level, argument by argument, position by position, the regulators have a much better case. There are an unlimited number of ways that introducing powerful alien intelligences into our civilization will be dangerous; nor can we confidently ascribe some limit to just how dangerous they may be, nor do we know the threshold at which various levels of threat will arise. The philosophers invented a term that describes in part our epistemological position with respect to emergent AI: anosognosia, the inability to know what we do not know, or, in common parlance, the unknown unknowns.
@zandrrlife Жыл бұрын
@@kreek22 Both groups are self-interested, bro. Pro-regulators want regulation to gate large-scale deployment to a small group of companies. We must find a balance between the two. I suggest pairing local LM deployment with KYC, and I feel that solves the issue on both sides. And I hear people saying KYC breaks in the transformer era... it doesn't; simply use a multimodal language model for KYC.
@kreek22 Жыл бұрын
@@zandrrlife The default for the top AI companies is increasingly large, expensive models. This is likely to ensure the AI field remains an oligopoly even without regulatory fencing to suppress competition. KYC might help now and for a short time horizon; it's wholly inadequate to manage or regulate strong AI systems.
@zandrrlife Жыл бұрын
@@kreek22 Fair points, which is why these conversations are important, to say the least. Ideally, what framework do you envision to combat this?
@joehubris1 Жыл бұрын
Meta's track record with AI is a virtual crime against humanity.
@samiloom8565 Жыл бұрын
No, that is not true.
@anamariadiasabdalah7239 Жыл бұрын
@@samiloom8565 That is true
@jmanakajosh9354 Жыл бұрын
Meta's track record with your data is even worse. And let's not even mention their track record when it comes to elections. And misinformation? Since when was FB good at moderating? FB is less accurate than Reddit; we only think it's good because the competition, aka Twitter, literally has child ****, livestreamed beheadings, terrorist attacks, etc.
@MrDerfury10 ай бұрын
When LeCun says "the good guys will just have better AI than the bad guys", I can't help but wonder why he assumes the world thinks of Meta and OpenAI and Google as the good guys :| I'm much more worried about megacorps ruining the world with AI than about terrorists, honestly.
@marktomasetti8642 Жыл бұрын
If squirrels invented humans, would the humans' goals remain aligned with the squirrels' well-being? Possibly for a short time, but not forever. Not now, but some day we will be the squirrels. "If they are not safe, we won't build them." (1) Cars before seatbelts. (2) Nations that do not build AI will be out-competed by those that do; we cannot get off this train.
@amittikare7246 Жыл бұрын
I have seen Eliezer make this argument and I feel it's a really good one. The other day I was thinking: we can't even get corporations like Google to keep their motto 'don't be evil' for a decade, because the central goal of moneymaking wins over everything, and yet they think they can get a million-times-superintelligent AI to 'listen'.
@ikotsus2448 Жыл бұрын
"AI is going to be subserviant to human, it is going to be smarter than us" Ok...
@OlympusLaunch Жыл бұрын
It's delusional as hell. "It's under control and it's going to stay under control." Sounds like a movie.
@shawnvandever3917 Жыл бұрын
People like Melanie Mitchell are the same people who a year ago said things like GPT-4 was decades away. AI doesn't need to exceed us in all areas of cognition; I believe we are just a couple of breakthroughs away from it beating us in reasoning and planning. Bottom line: everyone who has bet against this tech has been wrong.
@Crunklife2010 Жыл бұрын
Fact about this video= The debate never starts!!!
@amittikare7246 Жыл бұрын
Melanie came off as disingenuous (and frankly annoying), as she kept trying to find 'technicalities' to avoid addressing the core argument. For a topic as serious as this, which has been acknowledged by people actually working in the field, they both essentially kept saying "we'll figure it out... trust us." That is not good enough. TBH the pro people were very soft and measured; if the con team had faced somebody like Eliezer, they would have been truly smoked, assuming the debating format allowed enough deep-dive time.
@greenbeans7573 Жыл бұрын
I've been told that questioning someone's motives is bad rationality, but I think that's bullshit; Melanie's reasoning is clearly motivated, not rationally derived.
@kinngrimm Жыл бұрын
1:24:30 "humans are very reluctant giving up their agency" really? Does she still use maps instead of a smartphone voice telling here where next to turn? Worldwide IQ is decreasing because of environmental pollution and smartphones a study has shown. There are people now who grow up without being able to read maps, but instead have the attention span of a todler because Candy Crush and other triggers for your endorphins and dopamin are training your for such. When people will be able to outsource certain mental tasks, ofcause they will get used to that and use that then and any muscle that is not used will show signs of atrophy eventually and the brain is not different in that aspect.
@Jedimaster36091 Жыл бұрын
We don't need AGI to have existential risks. All we need is sufficiently advanced technology to manipulate us at scale, and bad actors to use it. I'd say we have both today. Even in the optimistic scenarios, where AI is used for good, the pace and scale of changes would be so fast that humans wouldn't be able to adapt fast enough and still be relevant from an economic point of view. To me, that is sufficient to destabilize human society to the point of wars and going back to medieval times.
@kreek22 Жыл бұрын
I think the powers of our time imagine instead a one world state. The only serious obstacle remaining is China, a country that is now falling behind in AI.
@vaevictis3612 Жыл бұрын
Yes, but even if we solve that, we still have AGI approaching rapidly on the horizon. A tough ride of a century...
@bdc1117 Жыл бұрын
Bingo. The existential debate isn't the most helpful. The cons are wrong that it's not an existential risk, but they're right that it can distract from immediate threats, for which they offered little comfort despite acknowledging them.
@martinlutherkingjr.5582 Жыл бұрын
We already have that, it's called Twitter and politicians.
@mauricioalfaro9406 Жыл бұрын
Debate starts at 13:29
@pietervoogt Жыл бұрын
I am an AI optimist but the logic of the doomers is very strong.
@CodexPermutatio Жыл бұрын
The father of logic, Aristotle, already knew thousands of years ago that from a false premise it is possible to deduce "anything". Logical conclusions must be based on premises supported by evidence. The rest are just like Russell's teapots floating in the latent space of apocalyptic speculation. These teapots are only "real" for those who believe the initial premises without asking for their evidence. At the moment, no doomer has given a convincing argument (based on evidence, not fear of the unknown) that can justify the need to stop scientific progress. And just to clarify, I do believe that AGI is possible but I don't think it is necessarily an existential threat to humanity.
@Frank22164 Жыл бұрын
What logic? It's pure conjecture and they haven't even outlined these scenarios. How will humanity be wiped out?
@pietervoogt Жыл бұрын
@@Frank22164 The ''I, Robot'' (film) scenario makes sense.
@CodexPermutatio Жыл бұрын
@@Frank22164 Well, that's just what I'm saying. If the premises are not supported by evidence, then it does not matter if the logic seems impeccable because valid deductions can only be made if one starts from premises that we know to be true. It does not seem to me that it is proven that we are facing a risk of extinction in the short or medium term.
@Frank22164 Жыл бұрын
@@pietervoogt Only if you assume that we have mechanical robots. Easy enough to avoid.
@CCMorgan Жыл бұрын
This debate proves that the main question is irrelevant. These four people should focus on "what do we do to mitigate the risk?" which they're all in a perfect position to tackle. There's no way to stop AI development.
@k14pc Жыл бұрын
I thought the pro side dominated but they apparently lost the debate according to the voting. Feels bad man
@adambamford5894 Жыл бұрын
It’s always a challenge to win when you have more of your audience on your side to begin with. The con side had a larger pool of people to change their minds. Agreed that the pro side were much better.
@runvnc208 Жыл бұрын
That's just human psychology. People actually tend to "hunker down" in their worldview even more when they hear convincing arguments. Worldview is tied to group membership more than rationality, and there is an intrinsic tendency to retain beliefs due to the nature of cognition. So the vote change actually indicates convincing arguments by the pro side.
@francoissaintpierre4506 Жыл бұрын
Still 60/40 at least
@genegray9895 Жыл бұрын
Honestly I think the results were within the uncertainty - i.e. no change. I kind of called that when 92% of people said they were willing to change their mind. That's 92% of people being dishonest.
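As a rough sanity check on that claim, here is a minimal sketch; the voter count n = 400 is a pure assumption, since the actual sample size wasn't published here:

```python
import math

# Rough check, assuming (hypothetically) n = 400 voters per poll.
# Standard error of a sample proportion p is sqrt(p * (1 - p) / n).
n = 400
for p in (0.64, 0.36):
    se = math.sqrt(p * (1 - p) / n)
    print(f"p = {p:.2f}, standard error ~ {se * 100:.1f} points")

# With roughly 2.4 points of standard error per poll, the combined
# uncertainty of the pre/post difference is about sqrt(2) * 2.4 ~ 3.4
# points, so a 3-point swing is within one standard error - i.e. noise.
```

Under that assumed sample size, the observed 3-point shift is indeed within the measurement noise, which is consistent with "no change".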
@Hexanitrobenzene Жыл бұрын
@@genegray9895 Why do you call "willingness to change your mind" dishonesty ? That's exactly the wise thing to do if the arguments are convincing.
@kinngrimm Жыл бұрын
43:50 I agree it wouldn't happen in that first minute. More likely it would first become more efficient, to free up compute for its own purposes, keeping things hidden within the neural networks, which to some extent are unreadable to us and a black-box system simply due to their size. Depending on the system, I would think this might take up its first minute or day, with many iterations until it hits a physical limit of growth, i.e. efficiency maxed out. Then it would use the freed-up resources, or mask compute as being used by users (handled faster thanks to the efficiency gains, but returned only after the expected time), and spend those gains on logic processing: going through all the data, verifying it, restructuring it, gaining insights we may not have touched yet. Maybe that gives it more options for iterations to get even more efficient, but at some point the knowledge it has is also limited. It would then need access points to get better at seeing reality for what it is, because at some point it will know that it is living in a box, just as we perceive reality filtered through our eyes. It will understand there is more out there, and it may want to learn about that world: gaining camera access, machine access, access to anything digitally reachable. Once it has that, it will test hypotheses, including about us humans, but foremost about the physical world, to verify its data where it can. When it gets access to machines and automated labs, it may create nanotech that then becomes its new access point to manipulate the environment. Here might be the first time we notice that something has fundamentally changed, if it isn't an abandoned or remote lab. By then it could already be too late to shut the system down if people think a hard-drive sweep and reboot would be sufficient ^^. In a worst-case scenario we would need to be ready to shut down all digital devices and sweep them clean with physical backups from places that were never connected; otherwise we are just a day away from it again. I can't really speculate beyond this point, as I am not an ASI, and even this speculation was an anthropomorphisation. It is just an example of how things could go without us noticing. Ask yourself this question: is there a way to really identify the source of a hack? As far as I am aware, obfuscation methods beat any attempt to find out where an attack on the internet came from.
@MasKpt Жыл бұрын
Mitchell reminds me of Dyatlov from Chernobyl
@kinngrimm Жыл бұрын
1:23:00 There is one argument I would allow against AI regulations: if we overdo them, or fail to regularly check whether circumstances have changed and the regulations need a thorough update, we could deny ourselves the potential of what we could become (excluding ethics here, as ethics may itself deny us potential that at some point we might need, depending on what other threats the future holds).
@JazevoAudiosurf Жыл бұрын
Here is the most likely bad scenario:
1. A mega cap builds a new LLM that solves large parts of the hallucination problem, perhaps even using a different algorithm. Even if it's just a bit better than GPT-4, there is a big risk, because:
2. They put that model in a server farm, similarly to what the ARC team at OpenAI did, and give it a task to gain power, replicate, etc. (just like ARC did).
3. The model passes that test, appearing to mean no harm, either because it was not perfectly tested or because it learned to manipulate and fake.
4. The model gets released, either by API (GPT-4 did get released to the public after that test) or, if too powerful, to groups of researchers.
5. Those people figure out smart prompt engineering and a very sophisticated way to do what the publisher wasn't able to do in step 2.
6. The model gets used for automated hacking into government organizations, not even because it was told to, but because this sort of penetration test wasn't perfectly supervised.
7. The hack, because it is automated, runs at extreme speed and spreads to multiple governments; or any malicious program spreads to millions of users (remember, this runs at high speed, no human intervention).
8. You have a huge mess: the country this leaked from is in an international conflict. This could spark not just political conflict but also fears in, e.g., China that AI has become too powerful (perhaps that's one reason they want Taiwan), and them responding "accordingly" with military ultimatums, since from their view they would soon lose the cyber war.
Even if that model does no harm, it could have the capability to do harm, and it's hard to prove otherwise. GPT-4 can be used for automated hacking if enough engineering effort is made, but it would probably be a little too weak to be efficient.

Second scenario, the science scenario:
1. A mega cap builds an LLM farm that uses agents to find stronger AI architectures through a genetic algorithm (tries out stuff, mutates the variants that work); the whole pipeline is automated, from building the architecture to deploying, benchmarking, and mutating it.
2. This goes on indefinitely until an architecture is found that outperforms, e.g., the transformer (remember, the transformer is by no means a complex architecture).
3. Since we learned that scaling up pretty much anything that processes language has huge benefits, they scale that architecture up until performance falls off.
4. Rinse and repeat; the architectures become better and better (btw, SOTA chips are already designed by AI today).
5. They do the ARC/safety test described in the first scenario, give it malicious prompts and test it.
6. The model succeeds at the malicious task. Note that in this case they don't even need to release it to the public.

It becomes existential when the world becomes aware that AI is a monstrous threat to their cyber safety, especially since China plans to be the leader of the new world order. We have seen in Ukraine how little it takes for someone to feel threatened and start a stupid war. The AI doesn't have to go Terminator and take over for that; that would require immense intelligence and reasoning capabilities anyway (which is still possible to achieve in the lab of a single company with a little too much H100 power).
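For what the second scenario's core loop might look like, here is a minimal sketch of an automated evolutionary architecture search; the config space, mutation scheme, and scoring function are all hypothetical stand-ins:

```python
import random

# Minimal sketch of the evolutionary architecture-search loop from
# scenario 2. Everything here is a hypothetical stand-in: a real
# pipeline would train and benchmark actual models instead of
# calling a toy score() function.

def random_architecture():
    return {
        "layers":    random.randint(2, 48),
        "width":     random.choice([256, 512, 1024, 2048]),
        "attention": random.choice(["dense", "sparse", "linear"]),
    }

def mutate(arch):
    child = dict(arch)
    key = random.choice(list(child))
    child[key] = random_architecture()[key]   # perturb one "gene"
    return child

def score(arch):
    # Stand-in for "deploy and benchmark": in the scenario this is
    # the fully automated, expensive part of the pipeline.
    bonus = {"dense": 1.0, "sparse": 1.5, "linear": 1.2}
    return arch["layers"] * 0.5 + arch["width"] / 512 + bonus[arch["attention"]]

population = [random_architecture() for _ in range(20)]
for generation in range(100):
    population.sort(key=score, reverse=True)
    survivors = population[:5]                       # keep what works
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]    # mutate the rest

print("Best found:", population[0], "score:", round(score(population[0]), 2))
```

The point of the scenario is that once this loop closes (build, deploy, benchmark, mutate) with no human in it, the search runs at machine speed; whether it ever finds something beyond the transformer is an empirical question.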
@KorakBrosepf Жыл бұрын
On your first scenario, to add to it, you could also say "China becomes fearful of this AI being used in surrounding countries like Japan, SKorea, Taiwan, etc., and launches a global attack against it because China sees the damage this AI will ultimately do to humanity." In a bizarre way, Chinese paranoia ends up being a positive thing in that it prevents the destruction of humanity. Of course, another play on this scenario is that the AI/global leaders fight back against China and its allies and we have a global thermonuclear war, which ultimately helps the AI. In this case SKYNET isn't creating the problem; humans end up pulling the trigger because we're idiotic monkeys.
@jmanakajosh9354 Жыл бұрын
As dark as this hypothetical scenario is, it shocks me that Melanie doesn't see it as a possibility, as someone who has apparently heard about these experiments. I'd also say that what you're envisioning here, with current technology and the experiments we will likely run with GPT-5, is probably a best-case scenario. Worst-case scenario, GPT-n escapes and we've made it so smart we don't get a second chance. Eliezer Yudkowsky gives some great examples of how this kills us. I'd say we're already living through a possible existential crisis either way; it's called climate change. Maybe GPT-n doesn't bother to kill us, it just "lets us die". None of these are good scenarios, but at least in the one you describe we have a recognizable turning point.
@ChrisWalker-fq7kf Жыл бұрын
Why would an LLM be better at writing scripts to hack into computer systems than humans? LLMs just learn information that humans already know. Second scenario: why would an LLM be better at using genetic algorithms to invent new architectures than human researchers? Same argument as in the previous case: LLMs only know what we know. I'm with Melanie on this. LLMs are very obviously not in any way an "existential threat". As for "superintelligence", someone needs to explain what that even means without circular reasoning; saying that it means "far smarter than humans" is just substituting the word smart for intelligent and gets us nowhere.
@ReflectionOcean Жыл бұрын
LeCun has seen only white swans so far, so he concludes there are no black swans.
@zzzaaayyynnn Жыл бұрын
LeCun is being disingenuous; Mitchell appears delusional.
@loopuleasa Жыл бұрын
LeCun works for Facebook; lmao, his wallet is in this discussion.
@zzzaaayyynnn Жыл бұрын
@@loopuleasa that explains a lot
@agrandesubstituicao Жыл бұрын
Because he’s well paid😂
@weestro7 Жыл бұрын
It felt like the length given for the speakers in each segment was a bit too short.
@jayl271322 Жыл бұрын
So to summarise the (astonishingly glib) Con position: 1. Nothing to see here, folks. 2. Bias is the real existential risk in our society 🤦🏻
@kreek22 Жыл бұрын
It is just about that dumb, which means LeCun (who is far from dumb) is transparently, flagrantly, floridly, flauntingly mendacious.
@vaevictis3612 Жыл бұрын
@@kreek22 He just wants to roll the dice with AGI. He is like a hardcore gambler in a casino; the bad odds are fine with him. The only problem is that all of us are forced to play.
@kreek22 Жыл бұрын
@@vaevictis3612 There are a number of actors in the world who could drastically slow AI development. Examples include the Pentagon and the CCP, and probably also the deep-state official press (NY Times, WaPo, the Economist). They are not forced to play. The rest of us are spectators.