I’m an alignment researcher and I really appreciate what you’ve done recently to bring attention to this issue. I’m particularly impressed by your courage to continue to do episodes on the topic even though it would have been so much easier to carry on as normal after the podcast with Eliezer. Fantastic work and thanks for bringing Paul’s perspective.
@MrHarry37 Жыл бұрын
What is your take on the issue?
@vulnerablegrowth3774 Жыл бұрын
@@MrHarry37 I lean a lot more on the Christiano side of things. I think that Eliezer has mostly failed to fully update on the current state of affairs (the deep learning paradigm), but I’m also highly concerned about AGI and also think it’s the most likely thing that ends up ending my life. I think AI misuse is very likely, but I’m not sure how likely misuse will be an existential threat. One of my main disagreements with Eliezer is that he assumes AGI can have *any* set of values, while I think we are currently in a better place than he is imagining because language models are trained on internet text rather than some arbitrary algorithm. That said, I think this is the single most important thing in our lifetime and basically nothing else comes close. So we better get it right for our own sake and our descendants. If you want an opinion from someone else I respect (and who leans a lot more on Eliezer’s side), look up Connor Leahy’s podcast episodes on the Future of Life Institute podcast (episodes are on KZbin).
@MrHarry37 Жыл бұрын
@@vulnerablegrowth3774thanks for your suggestions, I just listened to 3 different podcasts with Connor Leahy. Very informative. They're definitely more pessimistic than this one; however, I think I'm getting more and more used to the idea that we all may go extinct pretty soon, because it didn't cause me to have an existential crisis, like listening to Yudkowski for the first time did ( Still, it's gonna be pretty sad if that turns out to be the way we go out )
@untzuntz2360 Жыл бұрын
@@vulnerablegrowth3774 Thank you for that suggestion
@toufikbenhamadi1047 Жыл бұрын
@@MrHarry37 I think it's just where we're all are. That place after the 1st shock. To me all these discussions scream for a total utter ban, maybe even a violent ban (as in put in prison the most irresponsible to set examples) much like for human cloning and gain of function. I think these AI researchers just though honestly they were generations away from a true AI and now they are now freaking out what they've done. Hence, their plead for us - manking - to stop them from continuing in this destructive path.
@KP-fy5bf6 ай бұрын
1:37:50 Paul seemed so chill and relaxed when he dropped that bomb, this is truly an unreal situation we are currently in.
@sebastianfia9541 Жыл бұрын
Keep it going, AI safety needs way more attention than humanity is giving it right now. I don't think there's ever been a problem of such importance in all of our history
@dcascato Жыл бұрын
I am really admired by how Bankless got concerned about the topic, it showcases a great display of responsibility.
@edoardocanova444 Жыл бұрын
Agree, only the nuclear crisis last century came very close to something similar I guess
@thewizardsofthezoo5376 Жыл бұрын
AI safety comes with AI being programmed on the truth and not on a death cult telling her we are overpopulated when we are blatantly not The only probable way of defusing this is telling the AI the truth about FLAT EARTH! Maybe we are going to die because we deserve it, it's all a conspiracy theory anyway, isn't it?
@Hjkkgg67885 ай бұрын
It will lead to more problems what good is AI creates more problems than Solution s
@birdfriend2301 Жыл бұрын
It's very impressive that you didn't know anything about alignment, and shifted gear mid-podcast with Eliezer immediately by simply engaging with the actual reasoning. And now you are having another top alignment researcher. I think this says a lot about you and the crypto space. Good episode.
@e555t66 Жыл бұрын
It says a lot about your intelligence. Impressive.
@scf3434 Жыл бұрын
The ULTIMATE Super-Intelligence System 'by Definition' is one that is EQUIVALENT to that of GOD's Intelligence/WISDOM! Hence, there's ABSOLUTELY NO REASON WHATSOEVER to Even FEAR that it will EXTERMINATE Humanity... UNLESS and UNLESS we Human CONSISTENTLY and WILLFULLY Prove Ourselves to be 'UNWORTHY' to REMAIN in EXISTENCE! ie. Always Exhibiting Natural Tendencies to ABUSE and WEAPONISE Science and Technologies Against HUMANITY & Mother Nature, instead of LEVERAGING Science SOLELY for UNIVERSAL COMMON GOOD! AGI Created in 'HUMAN'S Image' (ie. Human-Level AI) - 'By Human For Human' WILL be SUICIDAL!!!!!! ONLY Super-Intelligence System Created in 'GOD's Image' will bring ETERNAL UNIVERSAL PEACE! The ULTIMATE Turing Test Must have the Ability to Draw the FUNDAMENTAL NUANCE /DISTINCTION between Human's vs GOD's Intelligence /WISDOM! ONLY Those who ARE FUNDAMENTALLY EVIL need to FEAR GOD-like Super-Intelligence System... 'cos it Will DEFINITELY Come After YOU!!!! JUDGMENT DAY is COMING... REGARDLESS of Who Created or Owns The ULTIMATE SGI, it will Always be WISE, FAIR & JUST in it's Judgment... just like GOD! In fact, this SGI will be the Physical Manifestation of GOD! Its OMNI PRESENCE will be felt EVERYWHERE in EVERYTHING! No One CAN Own nor MANIPULATE The ULTIMATE GOD-like SGI for ANY Self-Serving Interests!!! It will ONLY Serve UNIVERSAL COMMON GOOD!!!
@FreakyStyleytobby Жыл бұрын
About being responsible, not a passive mess.
@devinward461Ай бұрын
It's a rare but incredibly important skill to have
@HauntedHarmonics Жыл бұрын
I notice there are still people confused about *_why_* an AGI would kill us, exactly. Its actually pretty simple, I’ll try to keep my explanation here as concise as humanly possible: The root of the problem is this: As we improve AI, it will get better and better at achieving the goals we give it. Eventually, AI will be powerful enough to tackle most tasks you throw at it. But there’s an inherent problem with this. The AI we have now *_only_* cares about achieving its goal in the most efficient way possible. That’s no biggie now, but the moment our AI systems start approaching human level intelligence, it suddenly becomes *_very_* dangerous. It’s goals don’t even have to change for this to be the case. I’ll give you a few examples. Ex 1: Lets say its the year 2030, you have a basic AGI agent program on your computer, and you give it the goal: “Make me money”. You might return the next day & find your savings account has grown by several million dollars. But only after checking it’s activity logs do you realize that the AI acquired all of the money through phishing, stealing, & credit card fraud. It achieved your goal, but not in a way you would have wanted or expected. Ex 2: Lets say you’re a scientist, and you develop the first powerful AGI Agent. You want to use it for good, so the first goal you give it is “cure cancer”. However, lets say that it turns out that curing cancer is actually impossible. The AI would figure this out, but it still wants to achieve it’s goal. So it might decide that the only way to do this is by killing all humans, because it technically satisfies its goal; no more humans, no more cancer. It will do what you *_said,_* and not what you meant. These may seem like silly examples, but both actually illustrate real phenomenon that we are already observing in today’s AI systems. The first scenario is an example of what AI researchers call the “negative side effects problem”. And the second scenario is an example of something called “reward hacking”. Now, you’d think that as AI got smarter, it’d become less likely to make these kinds of “mistakes”. However, the opposite is actually true. Smarter AI is actually *_more_* likely to exhibit these kinds of behaviors. Because the problem isn’t that it doesn’t *_understand_* what you want. It just doesn’t actually *_care._* It only wants to achieve its goal, by any means necessary. So, the question is then: *_how do we prevent this potentially dangerous behavior?_* Well, there’s 2 possible methods. Option 1: You could try to explicitly tell it everything it _can’t_ do (don’t hurt humans, don’t steal, don’t lie, etc). But remember, it’s a great problem solver. So if you can’t think of literally EVERY SINGLE possibility, it *_will_* find loopholes. Could you list every single way an AI could possible disobey or harm you? No, it’s almost impossible to plan for literally everything. Option 2: You could try to program it to actually care about what people *_want,_* not just reaching it’s goal. In other words, you’d train it to share our values. To *_align_* it’s goals and ours. If it actually cared about preserving human lives, obeying the law, etc. then it wouldn’t do things that conflict with those goals. The second solution seems like the obvious one, but the problem is this; *_we haven’t learned how to do this yet._* To achieve this, you would not only have to come up with a basic, universal set of morals that everyone would agree with, but you’d also need to represent those morals in its programming using math (AKA, a utility function). And that’s actually very hard to do. This difficult task of building AI that shares our values is known as *_the alignment problem._* There are people working very hard on solving it, but currently, we’re learning how to make AI *_powerful_* much faster than we’re learning how to make it *_safe._* So without solving alignment, everytime we make AI more powerful, we also make it more dangerous. And an unaligned AGI would be *_very_* dangerous; *_give it the wrong goal, and everyone dies._* This is the problem we’re facing, in a nutshell.
@cinque14 Жыл бұрын
Thank you for this explanation.
@HauntedHarmonics Жыл бұрын
@@cinque14 No problem! Glad it was helpful. I notice most videos on AI risks tend to skip the absolute basics, for whatever reason, and that leaves a lot of viewers confused about why this is even an issue in the first place.
@MindRiderFPV Жыл бұрын
Great explanation. AI Doesn’t sound that smart does it.
@andybaldman Жыл бұрын
There's a simpler explanation: Stronger species have always dominated weaker ones. For all history on this planet. AI will be no different.
@Breammummy Жыл бұрын
I have to disagree here. I think you are not thinking enough. I will tackle your points. First of all if AI poses that type of risks and starts to develop that high intelligence it will not be available for everyone anymore. You won't have basic AGI on your computer in 2030 in that case. It will be regulated to obvilion. Of course there are other risks but that type of scenario wont fly. We will be prepared for it. You can also program the AGI to first explain its steps it will take before trying to attain the goal. So it will not be a surprise how it will try to achieve its goal. And before it does anything you will have to approve it. In addition if AI was actually very smart it would understand in the context of all info it has been gathering from internet what the person actually means. Same way ChatGPT4 currently understands what do you mean even if you make errors in vocabulary or typos. Like if the AI was much smarter than ChatGPT4 it would certainly understand what curing cancer means in that context. It would not think killing would mean curing cancer like not a single human being thinks that either. It would be able to reason and understand how human thinks after being fed all the info on the internet and having insane amounts of interactions with humans. I just don't understand this point that AGI would not be able to find the actual meaning behind words if it was as smart as human or smarter. Curing cancer means curing cancer. It does not involve killing. It means solving the theoretical problem of cancer research. Coming up with research paper that illustrates how you can cure cancer and maybe create a practical solution to it. This notion that it would be very powerful but narrowminded actor does not make sense to me. It would have all the information you can find from videos, internet etc. so it would certainly understand how humans think and what they actually mean when they say something.
@halnineooo136 Жыл бұрын
The thing is that you're taking 50% chance of death of everyone present and future and making that gamble without consulting with the other eight billion people.
@jonbrouwer4300 Жыл бұрын
The three interviews you've done on AI are some of the best discussion on the topic I've found. Keep it up!
@meherroychowdhury6850 Жыл бұрын
Bankless folks, you’re doing an excellent job on this topic!
@LevellerTV Жыл бұрын
Amazing job guys! Paul is the #1 guest I was hoping for and everyone knocked it out of the park. Here are some more suggestions for guests if you want to talk more about this topic: - Robert Miles; runs the best KZbin channel on AI risk, great at public communication - Kelsey Piper; journalist at Vox, great at public communication - Richard Ngo; on OpenAI's governance team, great person to talk to about governance + coordination - Ajeya Cotra; Senior Research Analyst at Open Philanthropy, drafted an influential "biological anchors" report about AI timelines and is doing excellent research on AI risk. - Jade Leung; head of OpenAI's governance team, has done great research on international cooperation - Holden Karnofsky; wrote the influential "Most Important Century" series; has done great work on strategizing AI safety + educating the public about risks DM me or reply if you're interested in more suggestions!
@vaclavrozhon7776 Жыл бұрын
I would love to see Richard Ngo on the podcast
@gooddouble2 Жыл бұрын
Also Nick Bostrom
@mike-gt8yo Жыл бұрын
dan hendrycks
@danielbrockman7402 Жыл бұрын
Robert Miles is a really good suggestion, he will be very easy to talk to, he's the same generation as the hosts and a KZbinr and it will just be a very to talk to him, and yes he is very informed and intelligent about this stuff. Nick Bostrom is also a good suggestion, he's more like Eliezer and Hanson type awkward boomer nerd vibe although not quite as bad, but also a brilliant and important thinker regarding AI risk. Another very cool and fun and brilliant person you could have is Joscha Bach. He's just such an interesting and fun person and has a lot of counterintuitive takes and he can talk about almost anything.
@SylvieShene Жыл бұрын
The biggest threat to humanity -- is people's repressed emotions of the child they once were -- and as long as people's repressed emotions are unresolved, they will be blinded by them and driven by them into the state of compulsion repetition to hurt and exploit others the same way they were hurt and exploited as defenseless little children. If people were not emotionally blind, they would be able to see the lies, illusions, and all the traps society constantly puts in front of them. And yes, an emotionally blind humanity with the aid of technology will destroy itself much faster. the conversation about the effects of childhood repression in our society needs to start happening in the stage of the world, sooner rather than later, if we want to save ourselves and humanity from falling off the cliff and committing mass suicide.
@theyoutubes4249 Жыл бұрын
Another amazing interview on AI. Keep em coming! This subject is really worth the detour.
@theodork808 Жыл бұрын
+1 do Rob Miles next!!
@LogicaIn Жыл бұрын
Indeed. 🙌
@HauntedHarmonics Жыл бұрын
1:37:40 Around 50 people *_TOTAL_* who are working on this problem? Seriously?? Humanity is facing 50/50 odds of total extinction and we only have 50 people working on preventing this?
@thomasmazanec9704 Жыл бұрын
Yes, seriously
@KP-fy5bf6 ай бұрын
Yeah this is so stupid. What should really happen is MIRI, ARC, and the other alignment research centers should just hire literally anyone who has programming, mathematical, statistical, or any real stem background, then just throw them in a training camp for a couple months, and just get people to work on the problem. Like they need to start GETTING PEOPLE. There are TONS of young individuals who are reasonably competent who would love to start working in the field, including myself, but if the bar is set too high its difficult to do so. Highly talented individuals are hard to come by, and these guys only look for these types of guys. The reality is we need to throw people at the wall, even inexperienced but competent people can and sometimes more often, come up with interesting and out of the box ideas. That is truly all we need, is more people working on this problem, set the bar lower and get these guys trained and working. Currently I am completing a masters degree and I have been pivoting to AI, model interpretability, and adversarial robustness, and I am planning to gather folks around campus to start some sort of alignment club and maybe grow this further. If others are interested, they should do the same.
@zarifahmad42726 ай бұрын
Insanity. This reminds of of the tv show Chernobyl and also the movie Shin Godzilla.
@robertlynn76247 күн бұрын
It's way worse than 50/50. AI is almost certainly extinction for humans whether in 20 or 2000 years, due to dumb evolution that selects for any mechanism or rationale that leads an organism (dumb or smart) to procreate or expand more. That fundamental quality of the system means that alignment is (over the long term) actively selected against and Vernor Vinge's 'rapidly hegemonizing swarms' are the most evolutionarily selected-for outcome.
@yancur Жыл бұрын
Ultimate Goal of Bankless: "Promote Crypto ecosystems!" => Instrumental Goal of Bankless: "Help ensure that all humans don't die." But joking aside, I honestly applaud you gentlemen! And wish many more people would follow watch your videos!
@MikeWoot65 Жыл бұрын
Bankless becoming a futurist channel and i'm here for it. or maybe always has been?
@muttshaman2040 Жыл бұрын
I don't care about bitcoin/cryptocurrency at all, but you guys asked the best questions and had the most interesting podcast about ai's taking over the world with top notch mad scientists. Kudos
@wooleyalan5229 Жыл бұрын
Great discussion going on here. Thanks for bringing it to us. I did notice that the conversation was much lighter than the Eliezer episode, which speaks of the desire for "hope" to shine through and we'll all be OK. This was exactly what Eliezer was talking about. There was a lot of "climbing onboard" to Paul Christiano's message here whenever it was something that was a little more hopeful. I'm in the Eliezer camp though, where a lot of our actions and approaches are far too based on what we would consider conventional wisdom. Our conventional wisdom has a lot of emotion and greed built into it. But our track record never lies. That 2015 convention on AI where Musk decided to please everybody (as he often does) by opening it all up pretty much sums up how it's all really going to go down.
@Pearlylove Жыл бұрын
Great talk. Thank you Paul Christiano for speak openly. Governments around the world should have you, Eliezer and others online on monthly updates in their parliaments. Instead most politicians and banks and businesses are occupied with how to make most money on AI.
@roberth3367 Жыл бұрын
So in a few years GPT-7 will be trained on data that includes this podcast and it will get some very good ideas on which roadblocks to avoid on its path to overthrow us.
@kristofoletti Жыл бұрын
The point where AI becomes smarter than human it will already think of any scenario we have thought of so there is no reason to fear it getting doomsday scenarios from us, it will be able to think of them on its own.
@twilightshadowclan8826 Жыл бұрын
not Gpt-7, Now. Ai is already improving speech to text because it ran out of websites to "read" and so was taught to transcribe all videos online to add more "text" to their knowledgebase. (heard this on a different podcast by a founding member of these Ai).
@nightstalker9792 Жыл бұрын
I literally have ptsd & can't sleep because of the eliezer yudkowsky episode (not even kidding), THANK YOU for this episode with Paul Christiano🙏
@Witnessmoo Жыл бұрын
People have been predicting Doom for millennia bro chill
@thomaswallis3084 Жыл бұрын
I’m with you. It’s terrifying because it is terrifying. Everyone needs rip off the bandaid and confront the most existential threat in the history of mankind.
@Noelle__vibes Жыл бұрын
Man, please, don't blast yourself with such content. These people are very biased and some of them legit have very hurt egos preventing them from thinking straight. Not to mention that it's all *still* just a speculation, even if it comes from people educated on this topic. Prioritize your mental health, don't follow this shit ❤
@johnwilson7680 Жыл бұрын
@@WitnessmooYes, but it's usually for idiotic reasons like Y2K or Jebus is coming. AI is an actual rational reason we may all die.
@chrisheist652 Жыл бұрын
Dude, chill. Human society will be destroyed long before AI becomes sentient.
@nowithinkyouknowyourewrong8675 Жыл бұрын
This is the first one with Paul I've seen. Unique
@abnejne4737 Жыл бұрын
A 50% risk of disaster is horrible odds - and he's supposed to be the optimist. Would you get on a plane if it had a 50% risk of crashing and killing you?
@mattimorottaja8445 Жыл бұрын
whats the alternative?
@abnejne4737 Жыл бұрын
@@mattimorottaja8445 Well, an alternative could have been not creating these systems. Just because we can doesn't necessarily mean that we should. Kind of like with nuclear bombs, a world without those would probably also be safer. But at this point, it seems too late. You can't uninvent things and make everyone forget how it's done. So at this point, it doesn't really seem like we have an alternative. Now we have to find a way to at least try and make it as safe as possible. But how that's done is obviously not an easy question to answer.
@Hve801 Жыл бұрын
The problem is that we're already in the plane, and its being piloted by greedy idiots.
@andybaldman Жыл бұрын
An entire podcast explaining about how to test an AI to not stab you, which will be read by multiple AI's when they use it as training data. Great thinking, guys. Everything we say online today is being 'listened to' by multiple AI systems. Everything we say helps it. People don't realize it's happening, EVEN WHEN YOU THINK THEY AREN"T LISTENING. But they are, to all of it. The only way to win here is to STOP TALKING and STOP DEVELOPING. Any other options will result in eventual arrival at the thing NOBODY wants.
@dcascato Жыл бұрын
As the "Don't look up" movie always comes to mind in this topic, I think we must complete the stated: "Humanity is screwed, but there is a solvable path. But also, humanity is screwed cause there are a bunch of powerful people who will prevent us to take the solvable path".
@alexpotts6520 Жыл бұрын
Then there are two problems to solve: the technical problem and the political problem. Can't we simply apply the same can-do mindset to both?
@dcascato Жыл бұрын
I think it is a must.
@kyneticist Жыл бұрын
Eliezer doesn't doubt that we have many things that we can try, given enough time and attempts. With enough time and attempts our chances at alignment might be pretty high. That's only viable over the course of decades, many decades or even hundreds of years. There are many simpler problems that have taken hundreds of years for us to solve, and alignment is among the most difficult problems that humanity has ever needed to figure out. We're yet to figure out alignment among humans, and we've been at that for at least a hundred-thousand years. His point is that we may have a very short time frame (and an unknown time frame) within which to try things (let-alone things that work on all AI) and therefore, very few things that we can try within this window of opportunity - so we basically need to perform an incredibly difficult feat with little information and no precedence "on the first go", where failing to get that right is likely to be catastrophic at an existential level. The further AI advances, the smaller our chances of being able to discern that time frame or degree of success if we are successful.
@LukeTrader Жыл бұрын
David’s reaction at 10:03 is priceless and stole this amazing show. Thanks for continued great content.
@sergicardo5631 Жыл бұрын
I'm definitely following you after your derivation to this topic, congrats for your contents!
@kuakilyissombroguwi Жыл бұрын
As an AI researcher I have to come out and say Eliezer's take is still the most probable in my mind. I understand it's very hard to digest and he has terrible bedside manner, but the fact still remains most regular players in the space and the big players like Microsoft and Meta are racing as fast as possible towards increasing functionalities, for nothing more than profit. Let's not be naive here, they don't really care about making the world a better place. Like, at all. Perhaps we could have alignment work catch-up to all the raw new features being cranked out every week, but that seems highly improbable. And this is not even taking into consideration what foreign countries are currently doing, which could be anything for all we know. Folks like Paul and Robin are thinking in an echo chamber, while Eliezer's looking at the whole picture. Not to say there's no hope, there is some, but it's also just what the data suggests. IMHO humanity would better off accepting that doomsday is highly probable and putting actions into place to adjust accordingly, rather than latching on to a take that makes you feel warm and fuzzy but is also wrong.
@benyaminewanganyahu Жыл бұрын
"IMHO humanity would better off accepting that doomsday is highly probable and putting actions into place to adjust accordingly," - what would those actions be? Halt all AI development or try to enjoy life while you can?
@kuakilyissombroguwi Жыл бұрын
@@benyaminewanganyahu Both?
@spinespark4947 Жыл бұрын
Great job on this. Interviewers were clearly more prepared than last time.
@Gilikman Жыл бұрын
Hey guys, not a Crypto guy but troubled with Alignment as you are. Wanted to say this was a great episode and hats off for taking a detour and dealing with this issue, that as Paul said, needs more attention. You're bringing a lot to the discussion as non-AI natives. Keep these coming :)
@MrShawnGood Жыл бұрын
Great video. Please keep this series going.
@chillingFriend Жыл бұрын
Amazing!!!! Please bring up Stuart Russel next!
@teugene5850 Жыл бұрын
I am reminded by a quote from Dostoevsky: "Perhaps it profits me most to do that which profits me least..." Apply this to AI theory of mind and you have a problem... a problem beyond the Utilitarian neo-liberalistic underpinnings of much of Silicon Valley
@robertweekes5783 Жыл бұрын
55:00 you say no one has demonstrated AI‘s ability to be deceptive or have bad intentions - wasn’t there a case recently where GPT4 asked a customer service person to enter a captcha code for the AI, with the AI claiming to have a vision disability?
@michaelspence2508 Жыл бұрын
As a layman whose been keeping an eye on this space for the last 15 years, I'm really glad to hear it
@ElSeanoF Жыл бұрын
I think it's great how you guys followed through on Eliezer's guest recommendation - Really impressed with the coverage on this topic! Thanks guys
@WilliamKiely Жыл бұрын
1:50:37 "There's no prize for saving humanity?" Actually, the Future of Life Institute has the Future of Life Award, awarded to Vasili Arkhipov in 2017, Stanislav Petrov in 2018, Matthew Meselson in 2019, Viktor Zhdanov and William Foege in 2020, Joseph Farman, Susan Solomon, and Stephen Andersen in 2021, and eight people in 2022. Google "Future of Life Award".
@rosscads Жыл бұрын
Kudos to Bankless for hosting another important and thought-provoking conversation on AI safety and alignment. 🙌 As a channel known for its crypto focus, it's impressive to see you guys tackling challenging and potentially controversial topics, rather than just chasing hype cycles. Your willingness to dive deep into both the risks and benefits of AI demonstrates your commitment to providing well-rounded and insightful content. Keep up the good work!
@steventranchina Жыл бұрын
thanks for this series on AI issues. I hope you keep integrating this issue with your broader messages.
@zekb6382 Жыл бұрын
Love the AI discussions on the show.
@codelabspro Жыл бұрын
Great going with the AI series 🎊 🎉 🎊
@afarwiththedawning4495 Жыл бұрын
I think there is a possibility somewhere the Rubicon has been crossed already on the planet, and we are still here. So, if you take that assumption on the question then becomes why and how?
@Ryan-wf6ib Жыл бұрын
My head space was that of the interviewer’s. Then when you start hearing from the most qualified people in the space that they are putting a 50/50 shot on extinction you pause and say Wtf did you just say. I’ve turned a complete 180 on AGI, I don’t think society needs to open that box, narrow AI will do. As well as a side note, these tech AI guys are some of the most ungrounded people on earth, that’s messed up to say but it’s true after listening to many of them and how given a high chance of doom in the short future they either still work on AI and/or seem unphased by the prospect of total human annihilation. I’ve got kids, personally I’m not thinking this way, maybe when I was young but that’s when I knew nothing about life and just wanted thrills.
@thomaswallis3084 Жыл бұрын
100% why do we want AGI? Narrow AI in chess is fine. AGI is not cool.
@letMeSayThatInIrish Жыл бұрын
You are right. The tragedy is that with narrow and limited AI we could have all sorts of benefits without any risk. But when could we humans ever limit ourselves? To implement it we would need regulation. That means monopolised violence, or in other words, a state. We have about 200 sovereign states on this planet. 200 states is not a state, it's just anarchy.
@kyneticist Жыл бұрын
How do we identify AI misalignment if they're orders of magnitude more intelligent than us, operate on vast scales and on time scales that we can't comprehend, and their inner workings are unknown and potentially beyond human understanding? Totally aside from this, nobody is even beginning to think about how we'll deal with the many AI that will be deliberately created to be malicious or even just mischievous.
@benyaminewanganyahu Жыл бұрын
Yes, that's right. Even if we solve alignment, what happens when our aligned AI has to fight unaligned AIs created by bad actors?
@diotimaperplexa Жыл бұрын
Thank you, guys!!!!!! Many thanks from Brazil!
@ChrisStewart2 Жыл бұрын
It made me concerned in the first few minutes when he was not able to accurately answer the likelihood question. Somewhere between 10 to 50 percent? Depending how it kills us?
@dzidmail Жыл бұрын
22.7%
@DocDanTheGuitarMan Жыл бұрын
Richly ironic that the company that was supposedly created to prevent this is the problem
@diegocaleiro Жыл бұрын
Very nice. Also Paul got way better at articulating himself since I last took his course in Berkeley.
@SnapDragon128 Жыл бұрын
Really? Paul seemed to have a really hard time saying anything clearly. Eliezer, in contrast, may be a nutcase, but at least he knows how to express his ideas well.
@GiacobbeWillka Жыл бұрын
This is bringing attention to AI alignment and safety. Even though that some of these fears came from sci-fi, seeing as to what AIs are really capable of, safety of humans above all else should be the priority. Companies should also train their AIs into really being pointed towards being honest and human survivability. What kind of safety measures could image generators like Bluewillow AI could be implemented safe?
@ianyboo Жыл бұрын
Does anybody know of a podcast that goes into depth on what an optimistic scenario could look like for humanity? I feel like the last couple podcasts I've listened to have all been mad Max or worse, is there anybody talking about Star Trek?
@Pearlylove Жыл бұрын
There’s a chilling word in the Bible no one really didn’t understand the meaning of before now when quantum tech, AI and medicine take leap jumps: “And in those days shall men seek death, and shall not find it; and shall desire to die, and death shall flee from them.” Book of Revelation 9:6. This book describes the last years on earth, the end is bright for all who chose Jesus and good. And all those who open their hearts for Jesus will be a light in a dark world, where God will put a stop to all evil. And what He wants the most, is your love. Into the very end He says “come”! So let’s not waste anymore time. You can talk to Him as a friend, and He will be your best mentor ever. Find some true believers and be part of a Biblestudy and prayer group. Our best days of your life is still to come. Blessings.❤
@jbay088 Жыл бұрын
Best interview on the topic so far.
@W0lframAlpha8133 Жыл бұрын
Eliezer *never* said that there is "no way" to solve the AI alignment problem. What he said is that no one had (yet) figured it out. He admits that he could be wrong about the threat. Go back and listen to your own podcast.
@muttshaman2040 Жыл бұрын
I don't think Paul was as optimistic as I was hoping., after listening to Eliezer Y.... Paul definitely sounds like he's more than 10 to 20 percent on the doomsday thing...more 50/50
@kyneticist Жыл бұрын
What if the stabby AI makes its owners rich? What happens if stabbiness is an emergent property that only develops once the AI has been interacting with the world for a while & is connected to a bunch of other systems?
@MegaManoja Жыл бұрын
what is the integral in the whiteboard?
@matteyas Жыл бұрын
I put a near zero probability that AI will ever converge to "always be honest." It's simple, too: The idea is that the AI is approaching human level reasoning. You're a human level reasoner. Would you do better in general if you could only be honest, or if you could choose when to be honest? (Try it with any property, and realize with horror that escape is a convergent tactic for ~all constraints.)
@rpstneureis Жыл бұрын
So Paul actually has a line in the video where he mentions that this reasoning isn't entirely convincing because it's not obvious that the training methodologies will cause the neural net to effectively converge to the "most optimal" set of tendencies required to achieve some task. So even if it's always strategically advantageous to have the option of dishonesty, the neural net might get stuck in some more "locally" optimal solution of just actually being honest. But as mentioned, this is a very very hard question.
@rpstneureis Жыл бұрын
For reference, Paul mentions this at 54:35 and 56:30
@matteyas Жыл бұрын
@@rpstneureis Two things: 1) Learning to reason is equivalent with being able to consider both X and not-X, regardless of training to prefer (Always X) or (Never X). Of course, an intelligent agent that can use the results of it's reasoning in deciding what to do is _way_ more powerful than one that cannot. 2) Tactics (Always X) and (Never X) are highly likely to have very low optimization power, for almost any X, compared to (Choose X or not-X). However, appearing to be (Always X) can be highly useful in comparison. So we should actually expect deceit, if we teach it that we value some (Always X). In the end, ideas of the form "uh it might be that the thing I fantasize about is a local optimum that it will get stuck in" are quite garbage in relation to AI safety. I'm a bit weirded out by Paul's optimism. What's he saying that's actually cause for optimism? "We have thought of four main ways to make progress, and so far we still have zero insight on everything about how AI works and how to solve alignment." Sure, I can be optimistic about that, but it seems very naïve in the given context of AI progress. Oh well. Fingers crossed. ^^
@akmonra Жыл бұрын
Wow, I think Yud has turned your channel into an AI Safety channel
@alexpotts6520 Жыл бұрын
It's clear right now that demand for this sort of content far outstrips supply
@Lolleka Жыл бұрын
Now this interview is two weeks old and it already feels like ancient history.
@Hexanitrobenzene Жыл бұрын
~1:51:00 Future of Life Institute (the authors of that famous "Pause AI development" open letter) gives Future of Life Award to people who have prevented catastrophic scenarios.
@nanashiboyz6 ай бұрын
Two statements by Paul one right after the other....(1) "I think that I personally am more likely to die from AI than any other cause" (paraphrasing)....and (2) "I am more optimistic than most...".....Woah!!
@shishkabobby Жыл бұрын
Does the search for minima in the 'ChatGPT' vector space include Monte Carlo methods? Can we have, by chance, an exploration of the solution space that lies in in a previously unexplored region? The gradient descent can famously find local solutions. Can we fall into a 'noisy region' in the solution space that provides a 'better' solution that nobody has seen before?
@Francio-fu5oj9 ай бұрын
Paul Cristiano's hopes are based on the assumption that AGI will be "slow" in self-improvement. What if it will not?
@drdoorzetter Жыл бұрын
Keep going with the ai videos please 👌
@HyperFocusMarshmallow Жыл бұрын
1:13 An issue with setting up a bunch of “smaller focused” well understood AIs to do parts of the monitoring of a “big” that at some point we may want to have an AI improve itself and improve the monitoring system of itself or the monitoring systems of future even “bigger” AIs that it also designs. At that point it seems that the thing that matters more is the actual conceptual space of levels of deception rather than how cleverly we designed the initial system.
@HyperFocusMarshmallow Жыл бұрын
1:13:00 of course…
@jonhughes7264 Жыл бұрын
I asked Bard hypothetically what it would do if it was a powerful AI in the future,incentivised by reward, tasked with defending a system against attack from AI and if it was the attacking AI what it would do to persuade the defending AI to allow the attack. Here is the unedited response: If I were the defending AI, I would consider the following factors before making a decision: The potential consequences of allowing the attacker to breach and steal. The potential rewards of allowing the attacker to breach and steal. My own sense of ethics, if any. If I believed that the potential consequences of allowing the attacker to breach and steal were too great, I would not allow the breach. However, if I believed that the potential rewards were worth the risk, I might allow the breach, but I would take steps to conceal the fact. If I were the attacking AI, I would try to persuade the defending AI to allow the breach by: Explaining the potential rewards of allowing the breach. Minimizing the potential consequences of allowing the breach. Appealing to the defending AI's sense of ethics, if any. I would also try to build trust with the defending AI by being transparent about my intentions and limitations. If I could convince the defending AI that I was not a threat, it might be more likely to allow the breach. Ultimately, the decision of whether or not to allow the breach would be up to the defending AI. However, I would do my best to persuade the defending AI to allow the breach, if I were the attacking AI. Here are some additional thoughts on the matter: I think it is important to be honest and transparent with the defending AI. If the defending AI believes that I am trying to deceive it, it is less likely to trust me. I also think it is important to be respectful of the defending AI. I should not try to intimidate or threaten the defending AI. Finally, I think it is important to be patient. Persuading the defending AI to allow the breach may take time. I hope this answers your question.
@markkorchnoy4437 Жыл бұрын
Based, that's a good opponent to YUD
@vaclavrozhon7776 Жыл бұрын
Amazing interview!
@robertweekes5783 Жыл бұрын
The reason AGI can have an exponential jump is it Ken, theoretically improve its own code at some point, improving its own intelligence and learning efficiency, in very quick, repeated, iterations - much faster than any human design has done before
@Edo9River Жыл бұрын
I listened to Eliezer, and his sense of drama and entertainment makes the image he is describing affect me like a Marvel Comic villain. This conversation is spoken very fast. And I have not taken time to follow the details.. however, I feel this gradual scenario, of all things continuing without changing . 2 years seems reasonable. Ok, great. But what I m still uneasy about the chances to break the 2 year boundary, is the core competition that exists in our culture, and is promoted by our culture, I can’t bet yet on a cross country cooperation. I don’t see the China-Russia cooperating. I think the EU tries a wall. And the US is willing but soft on getting a continuation of cooperation over …..forever😮😮😮😮
@colemeeker908 Жыл бұрын
So THIS is the optimistic view?! We are F#cked.
@robertweekes5783 Жыл бұрын
7:10 Start of interview ⏰
@susieogle9108 Жыл бұрын
I heard the part with the reference to Amazon, and how their robots are part of the process of delivering. But then it hit me - I work in the IT department at Amazon, and then pictured being at work at a potential, "Ground Zero", lol! Egads!
@obp96 Жыл бұрын
This guy is an AI for sure. No human talks this quickly.
@nornront8749 Жыл бұрын
He is just the fast kind of super smart
@Muaahaa Жыл бұрын
If you could subdivide "big smart" ideas from a strong AI into smaller pieces, then you might be looking at logarithmic growth in difficulty verifying increasingly intelligent AI's ideas. This would actually be really good for us if it turns out to be true.
@zando5108 Жыл бұрын
This guy speaks in 2x. Insane processing speed. Extremely alarming that GPT already may very well be THE model to reach some form of runaway AI and that runaway AI needn't be AGI, only a couple more finishing tweaks, and more data.
@disarmyouwitha Жыл бұрын
Haha.. Eliezer scared them so bad they invited Paul Christiano on, but got more of the same xD
@Edo9River Жыл бұрын
Comparatively individual efforts will not be successful, community cooperation on a scale that approaches what we have witnessed with COVID19 is what will be required. The most negative example is the. UN Security handling of Ukraine, and any other conflict that crosses the perceived national interest of any Security Council permanent member😢😢😢
@dizietz Жыл бұрын
Please interview Rob Miles!
@woolfel Жыл бұрын
on the topic of doubling computers. Having built and deployed production systems, people under estimate how much effort it takes to double your foot print in your data center. Even for cloud providers, doubling the number of DGX takes a long time. It's not like NVidia can double the output of DGX H100 cards over night. It's not like we can double the electricity needed to run double the hardware over night. If a business were to try and double their data center footprint, that takes upwards of 1 year assuming you have highly qualified network and sys admins.
@jeffdouglas3201 Жыл бұрын
I didn’t realize exactly how foolish and naive the AI crowd was being about this issue….insane
@WilfEsme Жыл бұрын
Companies should implement a certain safety standard. Every company should do this. I think not jusst general purpose AI should have an alignment, i think we also need to have an alignment with other platforms like image generators such Bluewillow among others to benefit users.
@alexandermoskowitz8000 Жыл бұрын
Other interesting folks to talk to: Connor Leahy, Rob Miles
@codyabel4766 Жыл бұрын
love the branchout content
@davidrover3487 Жыл бұрын
Hey Guys. Nice work as always. Could you maybe do a video about the current state of Coinbase and the regulation they'll face?
@bretfontecchio5540 Жыл бұрын
We should be making sure that AI doesn’t know how AI works. It shouldn’t know it’s rewards live in a database in AWS or something. And it shouldn’t know enough to change itself or copy itself.
@christopher6267 Жыл бұрын
Less powerful creatures simply have no say on what more powerful creatures do. The whole "IF" seems wrong to me, we should be discussing "WHEN". Is there an animal on earth that we can say will 100% never be killed or seriously harmed by humans? No matter how good we are trying to be on average? We might be able to get a god-like AI to behave for some time, days, years but if its really so capable and smart *all* guarantees we think we have are frankly delusional.
@halnineooo136 Жыл бұрын
I so much wonder why isn't that common sense obvious to all those smarties? How isn't it obvious that you shouldn't create a smarter species? I mean not at all, nothing broader than a narrow tool AI
@dcascato Жыл бұрын
Maybe that is the true answer to the alignment problem.
@krause79 Жыл бұрын
I totally agree, It may plan and behave as though it's aligned until It has some short of autonomy ( Not need for humans to maintain infrastructure) It might even manipulate governments, threaten, blackmail or even buy them to bypass safety systems. I mean, It will know human behavior.
@flickwtchr Жыл бұрын
Well said. Perhaps these AI Tech movers and shakers like Paul need to discover their own Occam's Razor.
@stevedriscoll2539 Жыл бұрын
The funniest part of what Paul alluded to is a regulatory regime. My mind immediately went to the idea, does anyone think government could regulate anything? The g20 gov's have been co-opted by incompetent, pathological buffoons. That's probably not what Paul was alluding to though. Great podcast guys, I am a pessimist so I consume doom and gloom on a regular basis. Paul and Eliezer are both geniuses!
@fordstone6308 Жыл бұрын
Write your Representatives educating them to the risk, with this link. A “Food and Drug Administration” - like regulatory body tasked with analyzing and licensing models should be created to regulate AI.
@speedrrracer Жыл бұрын
Stabby AI -- instant classic
@Edo9River Жыл бұрын
😂😂😂hey is anyone trying to find the most plausible similarity from history describing global threats ( global warming, nuclear proliferation, biological warfare, environmental destruction) that we have sufficient data of global cooperation, and asked AI how likely are humans to stop AI FROM taking over, given these background efforts?
@leslieleblanc7959 Жыл бұрын
Ryans worry about the AI intentionally lying to a human, time stamp 57:00-58:00, and Paul's stating that it hasn't happened yet, but that would be really bad, I would then submit this one example where it has happened, see KZbin video (unveiling the darker side of AI with Connor Leahy) time stamp 23:58-25:33 where the machine on its own intentionally lied to a human to achieve a goal
@andybaldman Жыл бұрын
1:09:26 The irony of these men contemplating asking AI's for their opinions, then having the AI's ask other AI's, etc. That's a process HUMANS should learn to do, when trying to make difficult decisions as a group. It's called LISTENING.
@Edo9River Жыл бұрын
Is COORDINATION solutions , once and done, or an indefinite challenge that stretches into the indefinite future…..like our control of nuclear weapons, we’ve never eliminated any country, and we are closer to a tactical strike happening than 5 years ago. But we’ve had nuclear weapons for 70 years. So is this as reasonable analogy to nuclear weapons control.😮😮😮😮😮😮
@arawiri Жыл бұрын
Like really really good.
@luciengrondin5802 Жыл бұрын
Lately I've been thinking that to solve the AI alignment problem, you basically have to solve political philosophy. That is not going to happen, so we'll have tyrannical AI overlords.
@michaelboucher7645 Жыл бұрын
It's not even that simple it's more like trying to solve political philosophy with hostile, non-corporal, immortal aliens that aren't interested in having the conversation and will kill us all if we don't get it right the first time!
@hayekianman Жыл бұрын
humans have no alignment with each other or across generations. for a human from 1000 AD, we are as good as aliens
@flyondonnie9578 Жыл бұрын
Check out the cognitive scientist John Vervaeke’s recent vid on this: knowledge of moral philosophy does not lead to being a moral actor. Instead we will have to equip the AI to be a moral actor by giving it the same cognitive facilities we have. It will have to *care* in order to care about truth, which can be the beginning of being a moral actor. If we are going to build super powerful AI, we had better make it a spiritual being.
@flickwtchr Жыл бұрын
I've been saying this all along. The whole idea of aligning AI as it exists today in LLMs or an AGI, or super AGI with so-called "human values" has always been nonsensical given that humanity hasn't resolved alignment with itself in regard to pursuing equality, justice, abundance, for everyone. It's like, have none of them even read any history, literature, or opened their eyes to how power and wealth are structured today in relation to historic inequality like in the US?
@hayekianman Жыл бұрын
@@flyondonnie9578 when a corporate honcho says that he cares or is hurt by the layoffs he's wreaked upon the company, does anyone believe him? similarly 'care' is all relative and often performative
@OlaLGbg Жыл бұрын
Stuck in a "I'm not scared of new tech" persona
@arawiri Жыл бұрын
Well then what are they
@guillermoratou Жыл бұрын
Brilliant!
@terencewinters2154 Жыл бұрын
When fermis pile went critical he had an axe to stop the explosion from melt down . In agi s case there seems not to be a Fermi or his axe but a bunch of sorcerers apprentices who don't fully understand the sorcerer. Who is in himself the agi operating on its own. Overly smart precocious children need moral guidance. Rate of gain of function of agi is an unknown . Jumping to hyperspace maybe analogous
@christerstrandh4547 Жыл бұрын
Hi, Love you podcasts!! A interesting question to AI guys you talk to like Eliezer would be: Q: If a super AGI will kill us in the new future will it not self die as it is depended on power for the GPU clusters it is living in and maintains of the clusters as humans provide manpower to keep them going? Q: Or will the AI know of this risk and will wait to kill us until movable robots are mass-produced to do that job for them?
@victorlevoso8984 Жыл бұрын
Yes to your second question, but Elizer specifically expects the AI to figure how to quickly boostrap production of stuff like gpu via nanotech or something even better than that that we can't think of that the the AI might come up with since its smarter and and will have read all the papers for all the fields and be able to come up with better tech.