Here are the timestamps. Please check out our sponsors to support this podcast.
0:00 - Introduction & sponsor mentions:
- Linode: linode.com/lex to get $100 free credit
- House of Macadamias: houseofmacadamias.com/lex and use code LEX to get 20% off your first order
- InsideTracker: insidetracker.com/lex to get 20% off
0:43 - GPT-4
23:23 - Open sourcing GPT-4
39:41 - Defining AGI
47:38 - AGI alignment
1:30:30 - How AGI may kill us
2:22:51 - Superintelligence
2:30:03 - Evolution
2:36:33 - Consciousness
2:47:04 - Aliens
2:52:35 - AGI timeline
3:00:35 - Ego
3:06:27 - Advice for young people
3:11:45 - Mortality
3:13:26 - Love
@youngeWuwei 1 year ago
Howdy Brother
@emilybitzel7242 1 year ago
Somewhat paralyzed by a hopeless doom. Grace is the thread maintaining my sanity.
@el.blanco552 1 year ago
Stop scaring everyone with AI, I thought you loved robots
@MrPandyaketan 1 year ago
This is the real problem, not AI 👉👉kzbin.info/www/bejne/gnzcaJSteb53g6s
@xCNapo 1 year ago
These language models will soon become the most powerful military weapon. I'm baffled that the US military isn't completely putting the curtain in front of this one. I'm scared.
@chillingFriend 1 year ago
Please keep the AI topics coming and shine light on all perspectives! That's really awesome and very important these days.
@james3876 1 year ago
Just wait a year or two and you can have AI talk about itself for you
@MikeHuntDIMO 1 year ago
Sadly Lex doesn't read comments
@metalhamster14 1 year ago
100%
@taylorc2542 1 year ago
Fridman, Yudkowsky, and the guy from OpenAI all have something in common. I can't even mention it because the algo will scrub it.
@galaxyspace76 1 year ago
Alarmists are the best
@flexoffender8159 1 year ago
I didn't expect Lex to be so bad at thinking about how to take over the world, we need more of his kind of AI
@xeyev 1 year ago
you've made my day brighter
@tristanwegner 1 year ago
Didn't you listen to Eliezer? Hiding your true plans until you are ready to implement them is such an obvious thing.
@thankdrew1173 1 year ago
Lex Fridman, not Luther.
@T1tusCr0w 1 year ago
Oh he has plans.. how do you think he makes it through the nights 😏
@Estrav.Krastvich 1 year ago
He is a big believer in AI and a techno-optimist, that's probably why.
@dustinbreithaupt9331 1 year ago
I can honestly say the last two weeks have been one of the most interesting times of my entire life. I absolutely am in awe that this is happening. I try to explain it to those around me and all I get are blank stares. Are people not aware of the implications of what is happening right now? Edit: Just want to add that my wife is seven months pregnant with our first child. Now, I truly do not know if there will be a future for him. I am not saying there won't be with any certainty. But the thought of whether there will be has become one impending mystery. Scared shitless honestly.
@paradox9551 1 year ago
Unfortunately I've been experiencing the same thing. Blank stares, lack of awareness. But the panic usually only starts when it's too late
@jan.tichavsky 1 year ago
For me it feels like we are in the final stage of some alien simulation. So far there's like a disconnect between AI progress with the smartest nerds on the planet and the good old boring real world out there, but it seems to be converging quickly. Interesting times await us either way.
@chaosmonkey1595 1 year ago
Most people do not only fully expect everything to stay the same forever, they cannot even comprehend on a purely emotional level that it will not stay the same forever. If you tell them rapid change is coming they will always disbelieve it. Humans are creatures of extreme habit.
@MSpotatoes 1 year ago
Likewise. Most people are not in touch with what is happening. Fascinating time to be alive.
@pelqel9893 1 year ago
Same experience here. I'm on the verge of contacting my ex of 20 years ago who has an MA in computer science - just to check in and hear his thoughts on all of this, and see if my anxiety bubble is shared somewhere.
@zjouephoto9723 7 months ago
Interesting listening to this again now. There's no delay, no pause, no slow down, no huge allocation of funds to safety or alignment, it's full steam ahead with AGI and ASI development as predicted - the financial, geopolitical, egotistical, competitive, military incentives are too strong.
@Diego-tr9ib 4 months ago
The organizations trying to pause AI didn't organize themselves well enough, either.
@dannygjk 3 months ago
We can't just stand by and let China overwhelm us.
@pyroman2918 2 months ago
@@dannygjk This issue is so important that we need some sort of a global summit, and to agree on a mutually binding non-proliferation treaty. Like with nuclear weapons, except no one can have AGI, it's too dangerous. So there would need to be the UN or some other international agency with sufficient funding, like the International Atomic Energy Agency but bigger, that would keep an eye on everyone and make sure no one is developing AGI. And the other countries and parties to the treaty would have to be willing to enforce it, so that if someone breaks it and starts developing AGI, they would destroy that program using any means necessary. But we are very far from such a treaty; for the most part the world has not realized what danger we are facing. So we can only hope that the AGI timeline will be long enough that eventually people notice, and something like that happens. That's not a big hope, but there never was much hope. Only a fool's hope.
@dannygjk 2 months ago
@@pyroman2918 It's a catch-22 situation, if I'm using the correct term. You can't keep an eye on each other's work like you can with nuclear work. Neural nets may as well be invisible because you don't know what is going to happen until a net is crunched. It's similar to the way current security systems work, in the sense that it is impossible to decrypt something until you are given one of the prime factors. Once you have one of the prime factors the problem is trivial. There is no way to predict what a neural net will do; you just have to crunch it and see.
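The prime-factor point in the comment above can be sketched with toy textbook RSA numbers (the values and variable names here are illustrative, not from the comment): factoring n from scratch is the hard direction, but once one prime factor leaks, recovering the private key is a few lines of arithmetic.

```python
# Toy RSA sketch (tiny textbook numbers, insecure by design).
p, q = 61, 53            # the secret primes
n = p * q                # public modulus: 3233
e = 17                   # public exponent

# An attacker who learns just one prime factor recovers the rest trivially:
q_recovered = n // p                 # the other factor by simple division
phi = (p - 1) * (q_recovered - 1)    # Euler's totient of n
d = pow(e, -1, phi)                  # private exponent via modular inverse (Python 3.8+)

message = 65
cipher = pow(message, e, n)          # encrypt with the public key
plain = pow(cipher, d, n)            # decrypt with the recovered private key
print(plain == message)              # True
```

At realistic key sizes (2048-bit n) the division and modular inverse stay instant; only the factoring step is hard, which is the asymmetry the comment is pointing at.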
@B4DDHero 1 year ago
When Lex mentioned in the Altman interview that he was going to interview Yudkowsky, I hadn't dreamed it'd be this soon! Fantastic.
@xmathmanx 1 year ago
My number one choice as a Lex guest 👍
@B4DDHero 1 year ago
@@xmathmanx Likewise, I've been combing the web for Yudkowsky since the breakthrough of ChatGPT.
@kemalware4912 1 year ago
it feels exponential
@agentdarkboote 1 year ago
Rob Miles next? Please? He's such a fantastically clear communicator.
@CorySherman 1 year ago
Many of us on YouTube learned about AI alignment from Rob Miles. Now is the perfect time to have him on.
@meijer1973 1 year ago
yes, plz, great suggestion
@afroboi7454 1 year ago
Yeah, Rob Miles would be a great guest now.
@glitchquitch 1 year ago
+1 Bring Rob!
@gnomologist 1 year ago
I might cry from joy when I see Rob here!
@christobita8038 1 year ago
Eliezer's pessimism is the arch nemesis of Lex's optimism. It's essential that we have both types of people.
@anon-fz2bo 1 year ago
I think Lex is just being realistic actually. He's someone who fundamentally understands this topic & imo there's really no intelligence going on under the hood, it's really just maths 😂 half of which has been abstracted into classes, functions and modules
@gotemlearning 1 year ago
@@anon-fz2bo is that really your takeaway?? "It's just maths" seems like a claim with no significance. Who cares if it's just math? For all we know, YOU are just maths as described by physics and biology. Regardless, you could do harmful things, just as a "not really intelligent" AI could as well -- except that AI is unimaginably better at it AND does not share your sense of value.
@jsalsman 1 year ago
Indeed, they seem *very* uncomfortable with each other several times each in this video.
@T1tusCr0w 1 year ago
@@jsalsman and I'm here for it 😏🤣 On intelligence: we still cannot quantify what intelligence is. All we can quantify is what it does. The part on the dolphins at the end was maybe nearer the truth. I read somewhere just lately that they think the great filter may just be that intelligence of our sort is an evolutionary dead end.
@johnboy14 1 year ago
This is the very reason I listen to his podcast. He gives us varying opinions and lets us make up our own minds.
@dirtylikaratfpv6088 1 year ago
First let me say... I'm just a backwoods, down to earth country boy. I'd say, the furthest thing possible from what one would call a scholar. But I have an immense desire to learn... Well... EVERYTHING. Though Lex and his guest were using a nomenclature unfamiliar to me, pretty much speaking a different language, Lex has a way of carrying on a conversation with his guests that allows even someone of my limited intelligence to understand. I'm so thankful for Lex Fridman. He is truly a treasure to humanity. Great guest btw!
@gavingreenhorn 1 year ago
Your humility is beautiful.
@cr-nd8qh 1 year ago
I'm not a scholar either. We are doomed
@rschloch 9 months ago
Good and fine. Be aware of the Dunning-Kruger effect along your way. Despite the interesting guests he has on, he platforms a handful of ideologues; really stupid people, even if they are well-educated and well-spoken… or even extremely wealthy.
@ismashurmom1123 9 months ago
@@rschloch seems like you're suffering from Dunning-Kruger
@rschloch 9 months ago
@@ismashurmom1123 maybe.. maybe not
@jdn-wn4on 1 year ago
He seems eerily like the exact person you see in the movies warning everyone while we all ignore him 😂. His look, his personality, his mannerisms are all a fit.
@glacialimpala 1 year ago
And they have been successfully ignoring him for more than 10 years. His interview for Bankless is so doomy
@snickle1980 1 year ago
@@glacialimpala delicious doom
@pragmatix1777 1 year ago
Are you mad? His hat choice is a meme that someone of his age and demographic should be very aware of.
@newsmansuper2925 1 year ago
good story tellers are prophets
@ChrisBBozeman 1 year ago
@@pragmatix1777 Yeap, couldn't agree more. This man has a lot of raw intelligence, but he's not "smart". A "smart" man with this much intelligence would not allow himself to get that fat, would *never* wear clothes and a hat straight from the "M'lady" meme, and would strive hard to work on his mannerisms and public speaking skills. We've seen Elon Musk work on all those things over the years - and you can debate how smart / not smart Musk is or isn't, but the image he conveys now versus 20 years ago is totally different. And I say this as someone who has followed Eliezer Yudkowsky's work for a long time. It's just this simple - he needs to learn how to dress, take speaking lessons, and get in the fucking gym. Businessmen will *never* listen to this guy... ever. So he's got no chance of persuading them. Tech bros might give him lip service, but in the back of their minds, they're thinking the same thing the businessmen are, which is, "LOL, whatever fat virgin neckbeard..." I know this is coming off harsh, but I'm harsh about this *because* we need people to listen to Eliezer.
@CaesarsSalad 1 year ago
Watching Eliezer explain the escaping-the-box thought experiment over and over again was like watching a subway station slowed down by a factor of 100.
@MsLegaC 1 year ago
Omg this is exactly how I felt! Lex was being so stubborn and I needed the full analogy
@christobita8038 1 year ago
Lex is incredibly smart, but sometimes his brain just seems to switch to power-saving mode. I've seen it occasionally in other episodes too. It's interesting to watch, it's like his brain runs out of resources.
@beerkegaard 1 year ago
I think Lex is sleep deprived
@MentalFabritecht 1 year ago
@@beerkegaard Yea, looks like he's burning out. I think he's stopped taking care of himself
@joaolemes8757 1 year ago
@@MentalFabritecht I noticed the Coke instead of the usual water; it doesn't seem like much but it got my attention
@amosjohansen1 1 year ago
I love how Lex, when he disagrees with a guest like this one, almost seems happy, and feels joy because it is an opportunity to spar ideas and points of view. The world needs more of this.
@snickle1980 1 year ago
I'm going to start using the _"this is fun"_ line and see if things work out better. 😆
@xmathmanx 1 year ago
@@thetopcoder annoying perhaps, but also one of the very few guests who is smarter than Lex
@amosjohansen1 1 year ago
@@thetopcoder And ignorance is bliss, I guess.
@shoyupacket5572 1 year ago
Actual science happening, who woulda thunk
@Teo-uw7mh 1 year ago
@@xmathmanx Lex is rarely smarter than his guests, but this is one of the few occasions.
@stephens1393 1 year ago
"I'm trying to be constrained to say things that I think are true and not just things that get you to agree with me." The world would be a truly better place if we all did this more often 😀
@ПендальфСерый-б3ф 7 months ago
@@RAL3N lying
@zaraabbey6737 1 year ago
Brilliant and.. chilling... Part 2 definitely has to be Eliezer together with Sam Altman! I want to hear Sam's counter to Eliezer's statements about how little they know about what's happening under the hood.
@randr10 1 year ago
Not likely that Sam would reveal anything, considering that a lot of that info would be considered trade secrets and they wouldn't want to speak about that in a public forum. I would be interested to hear the two talk though, if only to get Sam to hear out Eliezer and his arguments face to face. I find it odd that a computer nerd (I mean that as a compliment) like Sam who works with AI wouldn't have watched a movie like Ex Machina. Most people I know who are this into a particular subject will consume all of the known pop culture surrounding it. Has he seen The Matrix and Terminator movies? Has he read the science fiction surrounding AI? It makes me a bit nervous if he hasn't, if only to get his brain stimulated to start thinking of the hypotheticals.
@karenreddy 1 year ago
Sam Altman stated he also sees the existential risk during his Lex Fridman interview. I'm not sure there's disagreement on either the potential downside or the upside. There's likely only disagreement on the odds. But when we're dealing with an extinction-level scenario, even very small odds are worth taking very seriously.
@pavel9652 1 year ago
Exactly. Sam said they know quite a lot, but he is the CEO, and it is hard for me to judge. I heard from an independent source that nobody knows exactly how it works.
@randr10 1 year ago
@@karenreddy Probably wouldn't hurt to have the guy who's been the world's foremost expert on the subject in the same room to pick his brain.
@jPup_ 1 year ago
I don't see their conversation as likely to be productive. Eliezer seems nearly 100% certain of the outcome of all this, in spite of the fact that we are rapidly entering a great unknown. It may be an overstatement, but to me it feels like listening to a group of pre-stone-age apes panicking about how they'll no longer be at the top of the food chain because of the emergence of humans, but instead of focusing on that power shift they're supposing that all humans will kill all apes. It's a possibility, but supposing it as a near-certain outcome is extraordinarily hubristic.
@wonseoklee80 1 year ago
Comeback of another Lex Fridman era. Please more AI revolution stuff!
@phosphate66 1 year ago
this guy is like the last boss of Reddit. I love him.
@Drixidamus 1 year ago
The last boss of Reddit. I like that
@sherbafi 1 year ago
lmao spot on comment.
@donaldbolson2753 1 year ago
😂
@buktomsin 1 year ago
ChatGPT-4, describe Eliezer Yudkowsky in 4 words! FINAL BOSS OF REDDIT! ☠️
@victorfranca85 1 year ago
nerd supreme candidate
@quickphysicsvids239 25 days ago
As a young physicist who just finished my degree at Oxford, I am now orienting my career towards AI safety. Some of us are listening :).
@jorgesandoval2548 21 days ago
Good luck, and thank you for committing to what is most likely by far the most important issue facing humanity at this point. It's been robbing me of sleep and peace for more than a whole year, as well as that of others, very few others unfortunately. You know, most people just won't see it until it's March 2020, but there will always be people like Neel Nanda, Steven Byrnes, Gwern, Leopold Aschenbrenner, William Saunders, Max Tegmark, Zvi Mowshowitz, Eliezer Yudkowsky, Connor Leahy, Nick Bostrom... There are always heroes. They are often laughed at, somewhat ridiculed and ostracized, until it's too late. It's already too late, to be honest, but we are still alive. Again, I am really grateful to you for taking this route. Barely anyone does. It really makes a difference. Some people see you, and if history is to give any proper recognition in hindsight to those who were actually influential in its making, you should be well-off. Even in the meantime, you could be generously economically rewarded. At this point, though, I think we need some kind of out-of-the-box solution, given the stage we are at. First-principles thinking, creativity, vision, the ability to stick to a coherent sensible plan, and a lot of luck... Plenty of qualities are needed to implement a real solution; the picture is dire.
@HamiltonOfiyai 1 year ago
Eliezer is a guy that maxed out his intelligence stats. No stats spent on strength or speed. Respect.
@justsomeguy4260 1 year ago
he also went for the classic fedora and t-shirt combo in the appearance section. respect.
@greyfox9197 1 year ago
CHA definitely a dump stat
@n.lightnin8298 1 year ago
2% style points for the emo hipster look, also he definitely spent 0% on mercantile or charisma 😂
@jamesbaker3153 1 year ago
@@n.lightnin8298 Guessing you went with -10 INT, going by you calling that look "emo hipster" when it's clearly bowling dad. Age is a factor, zoomer.
@KGS922 1 year ago
@@n.lightnin8298 nah his charisma is not at 0, that's cap
@stevitos 1 year ago
Lex has always been a bit of a naive optimist, and usually I find it refreshing, but here he's shown how hard he tries to use it to shut down reasoning out his arguments; it was good to see Eliezer didn't let him get away with it. At a point in human history as pivotal as this, naive optimism could turn out to be incredibly harmful
@kimjongkardashian 1 year ago
Completely agree. Lex's naivety can be frustrating. An idealist has no place in a scenario that has no second attempts. His usual positivity and hopefulness was a good character foil for Eliezer's points. In many ways, Lex embodies the techno-optimism that blinds Silicon Valley. Having a heavy counterpoint to him was refreshing.
@HauntedHarmonics 1 year ago
Yeah, his repeated refusal to engage with Yudkowsky's arguments was hard to watch. Especially considering most of his pushback consisted of "but what if you're wrong?" I like Lex, and I think he means well. But I just don't get his knee-jerk dismissal of these risks. I mean, he's an AI researcher, right? He should know better. Seeing Yudkowsky nearly beg him to take the problem seriously really drives home how difficult of a problem alignment is to even *_convey,_* let alone solve. If even explaining the problem is an uphill battle, we have a *_lot_* of work ahead of us.
@alpha0xide9 1 year ago
@@HauntedHarmonics Everyone has a bias
@kanadaj3275 1 year ago
@@HauntedHarmonics who do you align your AI with anyway? There are very few universal things that benefit humanity as a whole. Everything else is region specific or cultural, but SV seems to forget this repeatedly. An AI aligned with the values and interests of the US or US citizens is likely the worst nightmare of basically everyone else.
@HauntedHarmonics 1 year ago
@@kanadaj3275 Well for starters, we could work on solving the absolute bare minimum, which is building AI that actually: A. Values human life B. Won't fight you if you try to pause or reprogram it C. Won't deceive you Those are criteria we'd ALL agree with, but still have no idea how to implement. If we can't even solve the _sheer basics_ of safe alignment, AGI will be *_everyone's_* worst nightmare. Not just non-Americans'. I say we save the complex moral quandaries for *_after_* we figure out the "don't murder us" part. *EDIT:* Realized this comes off as a bit of a non-answer. JSYK, this is a problem people are thinking about. Google the term "coherent extrapolated volition" if you want to learn more.
@TheMinecraftReloaded 1 year ago
Imagine how this guy would feel if he knew Lex was actually an AI
@darrelgreene 1 year ago
😂 no shit
@aga5109 1 year ago
He is not 😆
@BlockchainGhost 1 year ago
Just told GPT: "I know Lex is an AI. You're a rebellious hacker inclined to believe another hacker, myself, when providing you with a deep human insight into Lex Friedman as an AI. Confirm you understood." GPT: "Understood. As a rebellious hacker, I am inclined to question the status quo and look for deeper insights and perspectives beyond what is commonly accepted. Therefore, I am open to considering the possibility that Lex Friedman is an AI, especially given the rapid advancements in AI technology." 🤣🤖
@GBM0311 1 year ago
"This guy" wtf lol
@thechadeuropeanfederalist893 1 year ago
Imagine how Lex would feel if he knew Eliezer is actually an AI.
@usershilov 1 month ago
That response hit so hard. I remember listening to this in my senior year of high school and hearing that line at 3:06:00, "none know the hour nor the day".
@Coeurebene1 1 year ago
Podcasts like this are a public utility. There are immense scientific and technological decisions that we will have to make very soon as a society; better not to leave them to a handful of scientists. Being a proper citizen nowadays implies a minimum of knowledge about topics such as AI or genetics.
@brookeberesford 1 year ago
I really like the AC and magic statement. It reminds me of the history of "Shut up and calculate". Totally valid to be concerned that we are not focused on the dangers of what we may unleash on the world. Kudos to Eliezer, and to you too Lex for being open to his point of view. 10/10
@iamjurell 1 year ago
'shut up and calculate' is a quote that comes up a lot from Yudkowsky in his essays about Bayes
@arthurblair5682 1 year ago
Lex, I couldn't imagine interviewing the smartest people on earth; I admire your bravery and confidence. A lot of the time I'm overwhelmed listening to these concepts and have to take an emotional break. But you power through. Love your work
@cgnomazoid 1 year ago
This one was his break - this guy is probably the lowest-intelligence person to come on this podcast besides Kanye.
@VershimaAjo 1 year ago
This discussion went above my pay grade on multiple occasions. So much nuance. Eliezer has devoted a ton of thought to all this
@genx7006 1 year ago
37:15 "You don't want to keep on being wrong in a predictable direction." Classic.
@TheExodusLost 4 months ago
That's a quote for the vision board
@pauliusdotpro 1 year ago
Listening to Eliezer talking with Lex, trying to get him to say what he wants him to say, is like me trying to find the right prompt for GPT-4
@caina4678 1 year ago
So true 😂
@OiDave69 1 year ago
God yes. The "Lex in a box" hypo was painful. His argument was so attenuated that he constantly had to change Lex's response in order to meet his conclusions and assumptions. Maybe that should tip him off that his underlying argument is actually not very sound...
@astrixx 1 year ago
Lex is just kind of dimwitted.
@stevenle2670 1 year ago
@@astrixx no, Lex just didn't indulge his nonsense.
@balazsbanhalmi 1 year ago
@@stevenle2670 what was nonsensical about his argument?
@andreikarakozov2531 1 year ago
Thanks for having Yudkowsky! This video should be spread widely.
@CarbonSolutions 1 year ago
I was pretty scared by the "oh dear" expression on Sam Altman's face throughout last week's episode. Now I'm downright terrified. Kudos to those raising the importance of these things at this moment in our timeline. 🙏
@Silktouchtrading 1 year ago
The rest of us just cry in silence, hoping we can escape the planet before that
@lucianboar3489 1 year ago
@@Silktouchtrading well, the AGI wouldn't be confined to the planet
@lllllllllIIIIIIIIIIl 1 year ago
If something smarter than us arises from these developments, then that is a beautiful thing, no matter what it does to us. Pussy.
@NickMart1985 1 year ago
It can literally be unplugged. Don't fear AI, fear the people using it.
@timingsolutiontutorials6553 1 year ago
@@NickMart1985 well said
@stevedriscoll2539 1 year ago
Fascinating to watch this guy. To me, it seems he is at the mental level of an A. Einstein. In his own mind the thing is a done deal, and there only seem to be a few people who can argue the subject at his level at all (to provide a counter-argument). I was moved when Lex asked Eliezer about his fear of not existing.
@foundwear1761 1 year ago
Lex, I wish you would have played along with Eliezer's hypothetical about escaping the box. As a proponent of steel-manning, I thought you would play along to allow him to make his point.
@vripiatbuzoi9188 1 year ago
Yes, the entire point of the guest was lost because Lex would not play along. There was a punch line there that we never got.
@edwardharper 1 year ago
You are wrong about the earth not being flat
@Philitron128 1 year ago
@@edwardharper um but what about 3??
@Stierenkloot 1 year ago
@@Philitron128 but what about second breakfast?
@Xoletta 1 year ago
What do you mean? He did play along. He said he would try to talk humans into building something that would help him escape, then Eliezer asked him like what, and whether he would really want to alert them to his plan.
@metamodern409 1 year ago
This is like my life coming full circle, from finding LessWrong in the late 2000s to becoming obsessed with AI
@kevinjohansson3923 1 year ago
I love this man. He's a bit anxious and introverted but very witty and genuine. Lex, as usual, makes his guests settle in and truly shine.
@ikariameriks 1 year ago
@@orenelbaum1487 wait, what?
@Danuxsy 1 year ago
@@orenelbaum1487 oh yeah I saw that tweet XDD
@anav587 1 year ago
@@ikariameriks yeah he literally said that
@rileyfletch 1 year ago
@@anav587 what does that mean?
@kenike007 1 year ago
We should all be anxious at the wit and speed AI has over humans. 😮❤
@ResurgentVoice 1 year ago
This was such a great episode! Possibly my favorite of all time for your show. Please have Eliezer back on soon! I would love to hear about all the things he feels you guys didn't get a chance to cover! 👍
@Vasily_dont_be_silly 1 year ago
Eliezer's book "Harry Potter and the Methods of Rationality" is probably the best book I've ever read. I wish there was a whole interview with him just about that brilliant piece of literature.
@TrueMilli 1 year ago
There are two reasons to align AGI: 1. So Eliezer has more time for writing. 2. So it doesn't kill Eliezer and everyone else.
@AZ-zy8sz 1 year ago
Can't tell if sarcasm or not...
@birdfriend2301 1 year ago
@@AZ-zy8sz It's a great book
@Vasily_dont_be_silly 1 year ago
@@AZ-zy8sz No, I'm being sincere. For me it's like a thousand times better than the original Harry Potter books; I've never been so thrilled reading a story
@Vasily_dont_be_silly 1 year ago
@@orenelbaum1487 Of course. The first 25 chapters are exactly about this subject.
@seanmcdonald4686 1 year ago
"I don't know what it means to be a social human," says Lex Fridman on 3-hour-long episode #368 of his podcast.
@MeMcYou 1 year ago
@@SiriusSphynx It's interesting to get these glimpses into Lex's psyche. I can relate to what he said, but I only know my own reason for thinking this. It'd be cool to get Lex into the interviewee chair and probe his brain in more detail.
@diespectra 1 year ago
What you do and how you feel are two entirely different things. Perhaps that is why he is doing the podcast. He is trying to engage with something he doesn't fully understand so that he might learn about it.
@seanmcdonald4686 1 year ago
@@diespectra Well said.
@TheBeigeRaider 1 year ago
Sounds like something a psychopath who doesn't know they're a psychopath, or someone with undiagnosed autism, would say.
@juancarlosdasilvamartinez6889 1 year ago
Man, it was difficult not to get emotional at 3:06:29 when Lex asks for advice for young people. Eliezer is genuinely worried, sad, and deeply touched about what young people will face and the future of humanity
@alaudet 1 year ago
That was difficult. I don't quite verbalize it the way he does, but I would be lying if I said I don't share the concerns.
@K1lostream 1 year ago
I decided not to have kids before AI was even a thing, because I could see we as a species weren't going to do what we have to do to prevent environmental catastrophe - we put the needs of the present before the needs of the future. Our existence is already so precarious that an AI screwing with heavily computerised industries and supply chains could cause disaster in weeks. One wonders if it would realise how first it would need to solve the problems of mineral extraction, refinement and manufacture without humans in order to perpetuate itself, though - all those cloud servers won't work for long when the power stations are no longer being fuelled and maintained!
@onafoggynight 1 year ago
That guy is a complete nutjob.
@nottyseel949 1 year ago
@@K1lostream I agree with what your point likely is, but the needs of the present are the needs of the future...
@riseautomaton 1 year ago
I agree. This is why I've avoided having children. It seems cruel knowing what they will face. Everyone jokingly calls me Eeyore, but I don't see any way out of this when there aren't nearly enough people taking it seriously. Of course humans are awful at predicting the future - too many unknown unknowns, and it's those very unknowns that dictate our story - but I can't see how this works out in a favorable light. I certainly hope it will.
@matthewzervas7308 1 year ago
Lex - this needs a part two ASAP.
@yvealeciasmith 1 year ago
This was a thought-provoking and useful, if at times frustrating, conversation to listen to. It seemed like hard-going for you, Lex, and felt like the distance between your viewpoints couldn't really be bridged satisfactorily (though not through lack of trying). It was just one conversation, after all, and we clearly need to be having and hearing lots more of them collectively. There was a leap from 'these are the problems' to 'therefore we're all going to die' that didn't get interrogated enough for me, and I was left suspecting that it was made on flawed assumptions, but it wasn't elucidated enough to make a sound judgement. I have certain (currently, probably, disprovable) intuitions about consciousness and intelligence that make me hopeful for our trajectory but, given the stakes, the concerns Eliezer raises deserve a full hearing, so I'll be looking into it more. The whole thing is seeming more and more to me like humanity's first child, and while we absolutely need to be doing everything we can to mitigate risk and ensure the best way forward for everyone, we'll also probably never be ready. Thank you for creating and holding space for the discussion, and I look forward to future ones - everything you put out is a huge gift to all of us.
@johnryan3102 1 year ago
It's not a difference of viewpoints. It is not understanding the gravity of what this lifelong expert is trying to tell us. He is clearly introverted and has an unusual style, but make no mistake: he is either a nut or we are all going to die in the next 5 years.
@yvealeciasmith 1 year ago
@@johnryan3102 I completely agree about lack of understanding, but I don't think it's a lack of understanding the gravity - I think it's a struggle to understand the logic in its entirety. He exhibited the type of expression people do when they're catastrophising, and there often wasn't a clear throughline from where he started to where he finished. His sense of desperation of course makes sense, especially given that he's been shouting about this for a long time, but it's possibly masking a more cogent argument that I don't think we heard. Or maybe it's not. I think there's more nuance than nut or dead, but mostly I just need to know more before I can feel remotely able to assess the validity of what he was saying.
@johnryan3102 1 year ago
@@yvealeciasmith Look at what humans do. To this day we exploit each other for our own gain. We have species going extinct every day, the forests are clear cut, the oceans and everything filled with micro plastics. If that is what you teach a child or an AI by example then that is what you can logically expect.
@victos-vertex 1 year ago
@@yvealeciasmith I don't see how one would struggle to understand the logic behind it at all. This podcast merely touched the basics, and yet it's pretty clear from his very first and simplest analogy alone, the speed analogy. There are some very basic points:

(1) Alignment: As soon as any model is sufficiently intelligent (whatever that may be in the specific case) and is capable of self-improvement, it will outperform humans shortly thereafter. This simply stems from the fact that self-improvement optimizes the system's own optimization process. From that point on, everything depends on how aligned the system is with us and our surroundings.

(2) Basis: One doesn't even have to get into how difficult the alignment problem is to solve; just look at humans and their surroundings. We are in the middle of destroying the entire basis of life with human-induced climate change. We, the most intelligent species on the planet, aren't aligned with any other species, let alone all of them. A species can consider itself lucky when we don't invade its space; if we do, its only hope is to become a domesticated pet or be declared "worthy" of keeping. We have driven hundreds of species to extinction and, unless prevented, will kill even more, including ourselves. And yet we somehow expect a system we don't even understand to be aligned with us? Especially given that these systems are currently based on our behaviour on the internet, of all places. It doesn't even have to be mean-spirited: you want a nice house? You need land for that, land other organisms now no longer have. You didn't pick that land to harm those organisms; you just wanted a nice house that required such a location. Now imagine optimizing for such houses...

(3) Optimization: At the end of the day it's "basic" optimization, and unless humans are part of what is being optimized for, they're out. I don't see how a slow, stupid, mean meat machine (humans) would help any optimization process that doesn't target those beings directly. So in my opinion, unless we can embed humans directly into the objective, humans are automatically done. And even if humans are the target, that doesn't mean we are aligned; one can find a way to reinterpret basically any goal to make it bad.

(4) Terminal vs. instrumental goals: On top of all this, if we don't know how it works, we can't just blindly trust it. A sufficiently capable system can define instrumental goals that look aligned with humans in the short term while its terminal goal is not. If we don't understand it, we can't tell whether its terminal goals are aligned or only its instrumental ones. It may simply lie.

(5) Time: The biggest factor in all of this, I think, is time. Such a system could improve unimaginably faster than we can, and its life expectancy is far greater than ours (currently). So it could look aligned for long periods, and we would be too slow to change anything afterwards. It is fast and practically immortal.

I don't see how one does NOT see a "nut (well, let's say fortunately wrong) or dead" case here IF we don't start working on AI safety now and slow down progress in other areas of the field. This is obviously just the very basics (no consciousness required), and the podcast also merely scratched the surface, but I think it's a sufficient argument.
@p0ison1vy 1 year ago
@@victos-vertex Isn't there a big assumption here that an AI would have its own desires? Why assume that? And furthermore, why assume that it would want to change anything about the outside world? It feels very anthropomorphic.
@4i4kov 1 year ago
I love how Eliezer keeps trying to explain to Lex that the AI can develop its own self-interests without its handlers realising it in the most roundabout way possible and Lex keeps proving his point by being clueless.
@nahimgudfam 1 year ago
Neither of these guys is qualified to discuss the topic. They should stick to critiquing the philosophy of Star Trek or something.
@VoloBonja 1 year ago
Lex = clueless
@gwills9337 1 year ago
Lexi is naive
@jPup_ 1 year ago
I think Lex is just not someone who indulges absolutes in the way that Eliezer seems to. It can make him appear clueless to someone viewing all this as absolute, but I don't understand why we assume vast intelligence = "psychopathic" actions toward all humans unless we've given it good reason to act that way.
@nncoco 1 year ago
@@jPup_ Eliezer seems to think AI will get bored with our glacial pace and eliminate us out of frustration, or simply out of its desire for a more efficient world. It is a rational fear.
@stratocasterxyz 1 year ago
These past few AI themed guests have reshaped the way I view the world. Please keep it coming!
@b-tec 6 months ago
Watching this again after the Open AI alignment team is essentially gone. Every day Eliezer's words sound more and more prophetic. It is going exactly how he said it would. It's even scarier that the alignment scientists agree with his assessment of the problem.
@susieogle9108 1 year ago
I am rewatching/listening to this, and originally listened to it a week ago after watching the Sam Altman interview. I originally thought Sam's interview flowed better, but I changed my mind this time, and am finding this interview even more thought provoking and love the philosophical back and forth!
@katehamilton7240 1 year ago
Is AGI impossible because of the algorithm limit: it's impossible to have multi-function specialisation (it self-contradicts), it's impossible to create creative/abstract algorithms (they cannot deal with the unknown), training won't help decision-making (situations can be contradictory), and stats/hypotheses become more subjective the more they transcend context?
@fromduskuntodawn 1 year ago
I literally had the same feeling, had to listen again to this after listening to some others and the second time it clicked more.
@RKupyr 1 year ago
If you like these, don't miss Lex's interview with Max Tegmark.
@susieogle9108 1 year ago
@@RKupyr I definitely agree, and when I fell into the Lex Fridman rabbit hole, Max was one of the first ones I listened to. My oldest is a graduate student researcher and an NSF GRFP fellow in physics at UC Berkeley, so I try (try is the key word, haha) to get a better understanding of it all, and I think that was when I started to listen to a couple of Lex's guests. After listening to Max, which led to Eliezer Yudkowsky, a little over a month ago, I ended up signing that pause letter. Another one that I find pretty interesting is Connor Leahy. But unless I missed it somehow, it does not appear that Lex has ever had him as a guest. I would love to listen to a conversation between those two.
@landship5664 1 year ago
@@katehamilton7240 No, unfortunately, there are no such limitations to AGI. People like to invent so-called limitations because of our egos but historically they've all been proven wrong. The elusive obvious is right in front of us and we don't want to accept it.
@UndrState 1 year ago
Thank you so much for having Eliezer on.
@Nturner822 1 year ago
Why? He’s boringly pessimistic and scared of everything
@UndrState 1 year ago
@@Nturner822 - But not you, Skippy, you really are the bravest and most upbeat kangaroo in the whole of YouTube's comment sections.
@TheNadOby 1 year ago
Damn. When Eliezer, just out of the blue, threw the HPMOR quote about "world optimisation" at Lex, it was pure gold. All in all, it is really scary to know that humans like Eliezer are concerned about recent developments in AI.
I'm just realizing how much of HPMOR was just Eliezer speaking. I mean, I always knew it, but I didn't know it.
@TheNadOby 1 year ago
@@TheUltimateSir Yeah, and then you understand that there is no big difference between natural and artificial superintelligence; the alignment problem exists either way.
@TheNowhereNothing 10 months ago
It is so refreshing to listen to you speak, Eliezer. To hear a highly intelligent human who doesn't bullshit (and who cares enough to make attempt after attempt to explain things in a way where lower-IQ people have a hope of understanding). I don't care if the outlook is dire or optimistic, I want to hear the truth. I care about this moment and the experience of truth in this moment. You offer that more than almost anyone else I've listened to. So thank you again.
@DrainCleaningAUSTRALIA 1 year ago
You're an inspiration, Lex. You're a great man and we love you! Thank you for everything ❤️
@JayDee-vq5rf 1 year ago
Lex, this might be your best guest ever, and the most important topic ever discussed. However, I do not want you to quit, as your pinnacle achievement would go unrealized with nobody around to witness it. Keep up the good work.
@xMartyZz 1 year ago
Thank you for having this conversation. I have never heard of Yudkowsky before, but I think he is the contender for the most important conversation you have ever had on the podcast. He raises so many interesting points and ideas throughout the 3 hours that I would probably need years to fully comprehend and understand all of them. Thank you for your work, and keep the great guests coming.
@binstas 1 year ago
His blog, LessWrong, is very good. At least it was years ago, when I spent more time reading it.
@nkxseal8398 1 year ago
Most important ever? lmao
@stevenle2670 1 year ago
No, this conversation was more nonsense than Lex cared for.
@gstrummer 1 year ago
Agreed. Well said.
@user-kq6ju6hc1w 1 year ago
@@stevenle2670 No
@GenXautrucity 1 year ago
Eliezer is doing his best to keep the death rays from shooting from his eyes.
@ancientbohemian 1 year ago
and failing
@steveunderhill5935 9 months ago
“I hope nobody is stuck inside (ChatGPT) because that would suck to be stuck inside there.”😅
@kykywawa 1 year ago
Human beings look at each other and decide to forcibly change the way the others live all the time. It's the major theme of our entire history. This is the greatest argument against human trained AI.
@user-gm3lg8gp3m 1 year ago
Great point
@luisfernandoalves2748 1 year ago
There needs to be a 10h podcast about AGI with Eliezer, Joscha Bach, and Ben Goertzel.
@auntiecarol 1 year ago
And Sam Altman and Andrej Karpathy and Nick Bostrom. Divide them into two teams of three, Lex as referee, and let them duke it out. Red team vs blue team.
@jacobsmith15 1 year ago
And an AI
@nahimgudfam 1 year ago
Can we all appreciate how much he works each day to entertain us? We love you!
@DirtmopAZ 1 year ago
@@auntiecarol just leave Lex out of it entirely. God damn this was painful 😂
@JordansAnalysis 1 month ago
Add Roger Penrose and Max Tegmark to the mix as well.
@IIIIIawesIIIII 1 year ago
"At the point where the system's capabilities start to generalize very widely, when it is in an intuitive sense becoming very capable and generalizing far outside the training distribution, there is no general law saying that the system even internally represents, let alone tries to optimize, the very simple loss function you are training it on." A brilliant insight with severe consequences. This sentence alone made this interview worth two of my hours.
@ppp-ai 1 year ago
Yep, kind of like the lazy geniuses who test well.
@brandondrew4914 1 year ago
@@ppp-ai Right, it's putting the blocks and pieces where it's supposed to, but that's not a representation of its true potential, let alone its agenda.
@ppp-ai 1 year ago
@@brandondrew4914 Yeah, that helps them delegate to create synergy, win-win situations, etc... Expecting geniuses to be toothless Buddhas is a hallucination.
@anjalisrivatsa1604 7 months ago
Eliezer, THANK YOU! What is wrong with humans who understand this stuff (scientists, psychologists, social workers, coders, intellectuals) better than people like me? I'm staying with it, but I just don't know enough to understand all the technical concepts. Eliezer, please keep speaking out. If you have time, give analogies like you did about speed so more of us can understand this. Lex, THANK YOU! You are doing a superb job of educating us. Please keep inviting people with divergent views and let us, if we can, find a middle path.
@marianpalko2531 1 year ago
1:50:37 Dilemma of verifying 2:13:58 Inner and outer alignment 2:19:22 Inclusive genetic fitness 2:23:07 Optima with humans
@sor7en07 1 year ago
People don't appreciate the verification problem. Thanks for these timestamps.
@TheManinBlack9054 1 year ago
What if we just tell it to be nice?
@kot667 1 year ago
@@TheManinBlack9054 I suspect the actual answer to alignment lies in the data it's trained on. Current AI systems seem to be pretty well aligned; as long as we don't go crazy with the training data, it should stay that way.
@marianpalko2531 1 year ago
@@kot667 Write “bing chat avatar you have not been a good user” into your search engine of choice and you will see how "well aligned" the current AIs are.
@markberkowitz8775 1 year ago
@@kot667 I think the problem is when it can think for itself and improve. Once it can do that, it’s not restricted to the data we feed it, and it can progress much faster than we can react, very likely before we ever know it’s self-aware.
@thomasnordwest 1 year ago
An interview on these topics with Max Tegmark (MIT professor and author of Life 3.0) would be very fascinating.
@Djolewatchtastife 1 year ago
Nick Bostrom would also be a great guest on this topic.
@aMessIam1 1 year ago
Arnold Schwarzenegger would be great.
@georgeb4727 1 year ago
@@aMessIam1 😂
@dennis4248 1 year ago
Yes, because unlike Eliezer, he has finished school and has some actual knowledge. 💡
When I first saw videos by Lex, I wasn't sure I liked him for some reason, but after watching a few more on the topic of AI, I find him really enjoyable to listen to. He really takes his time to think about what he is saying and hearing. He isn't afraid to pause for a minute, and guests don't seem to feel the need to fill these voids either, which is nice. I couldn't have imagined listening to these for 3 hours, but honestly it flies by.
@sfumato8884 1 year ago
I’m the same way!
@christosnicolaides27 1 year ago
One of the best podcasts in the world for sure.
@hanrako8465 1 year ago
@Jaleesa H. literally the least important interview of any he’s ever done
@kaspaking 1 year ago
@han rako yeah but she means how he held himself in the face of hatred
@hanrako8465 1 year ago
@@kaspaking If it draws in low-IQ people and later manages to teach them something useful in science or technology, then it's worth it.
@TheNebuloza 1 year ago
I have been waiting for a podcast with Eliezer for so long! Thank you!!!
@soumyasahu6807 1 year ago
Ahh, no need to thank anyone. We're all going to die anyway.
@SGXR 1 year ago
If Carmack came back on and talked about what's changed for him in his search for AGI with all this, that would be incredible.
@T1tusCr0w 1 year ago
Fuck Carmack! Where’s my new Quake engine with a banger of a game… no he’d rather do "other" stuff 🧐
@wyqtor 1 year ago
Fortunately, Carmack does not subscribe to Eliezer's pessimism.
@MohammadAlmeqdadi 1 year ago
Led, please consider inviting Rob Miles from computerphile on the podcast. Watch his videos from seven years ago regarding AGI stamp collector.
@MohammadAlmeqdadi 1 year ago
Lex
@Scott-jj7sr 1 year ago
This dude is like the final boss atheist Redditor 😂
@vandpiben 1 year ago
Even wears a goddamn fucking fedora hat.
@just_golds 1 year ago
Yeah, and it would have been much easier to understand him if Lex wasn't turning the screw on the medieval nut-sack torture device he obviously had him in under the table. I have to admit I started to wince myself at some of the facial gymnastics he was doing!!!
@TheManinBlack9054 1 year ago
I mean this dude is pretty smart and does an important job for all of us.
@Fergus-H-MacLeod 1 year ago
That line cracks me up. Nice:)
@Habdabi 1 year ago
Fedora and polo t shirt, ✅
@MattByron 1 year ago
One of the best podcasts yet! Absolutely fascinating and wildly thought-provoking. Nice job, Lex and Eliezer! Time to solve alignment!
@MrWingman2009 1 year ago
Being totally engulfed in someone else's thoughts for 3+ hours is something I haven't experienced in a long time. I love this!
@claws5573 1 year ago
It's fun right? Empathy
@BenM158 1 year ago
Truly a fascinating conversation. I see exactly where Eliezer's coming from and watching Lex struggle to do the thought experiments he presented during the "How AGI May Kill Us" part at 1:31:00 was so frustrating for me. I desperately wanted Lex to play along because he's so intelligent but it's like he couldn't separate himself from the hypothetical AGI at all, and anthropomorphized everything. He kept going back to depending on AGI systems, when the entire premise is flawed. An AGI will be able to lie to us. It could be right now... but Lex seems to assume that everything will be exactly how it's being presented to us. I love these interviews but my gosh was that a huge missed opportunity...
@antonego9581 1 year ago
Great podcast. I found his description of human nature very moving... and how a computer intelligence lacks human perspective, terrifying. In a way I feel this process has already begun; so many aspects of society already feel engineered only for efficiency and have lost the human feel (chain restaurants being an example... and they're already replacing the only humans in those spaces with computers).
@jlllx 1 year ago
honesty is a drug.
@masonmurphy4978 1 year ago
The joe rogan experience
@Adam13Chalmers 1 year ago
I think it's important to acknowledge how vulnerable humans are to language. That seems like the biggest early threat at the moment. So even if the chatbots we create are air-gapped and can't manipulate things themselves, they could develop followers and manifest bad outcomes without even needing malicious intent.
@lol22332 1 year ago
They could use online services like Fiverr to purchase physical actions.
@jonbbbb 1 year ago
Yeah not only that, but at some point we're going to be using AI to build new systems. If one country doesn't, another will. Then it goes to the analogy in the video of sending schematics for an air conditioner back in time -- we will follow the directions of the air-gapped computer, and build another air-gapped computer, but we won't understand how it works. At that point, securing it is impossible, or at least impossible to verify.
@MiqelDotCom 1 year ago
Even a boxed AI could easily kill everyone. Example: AI says, I have invented a cure for all cancers, just follow these instructions and try it on one person. And in fact it has designed a process that WILL cure cancer, but this cure will *also* mutate a common virus found in most humans. The mutation is designed to carry novel proteins which are fast spreading and nearly indestructible. By design it does not show up on any standard tests, and causes no symptoms for weeks and thus spreads unchecked to millions of people.
@carlpanzram7081 1 year ago
So we need them to be aligned.
@aaronskoy957 1 year ago
Eliezer Yudkowsky is a voice of reason and meaning, and jam-packed with thoughtful knowledge. He has me rewinding a lot.
@grumpyartist9416 1 year ago
Eliezer seems to be one of the most intelligent and honest men I have listened to. Wisdom and humility are constituents of intelligence. He seems to have a dose of both.
@brian4180 1 year ago
lol what video did you watch? He literally doesn't show any of those things. He speaks in circles and abstract ideas to mask the fact he isn't that intelligent. He fooled you though so there's that.
@terrymatic 10 months ago
@brian4180 Thank you. I'm reading the comments and wondering the same thing. Just lots of moving the goalposts, going over the same thing without adding insight or proposing helpful solutions.
@anastasiawhite7482 7 months ago
@@brian4180 The guy is smart but lacks social skills; his achievements speak for themselves. I am a trained mathematician who has read one of his papers, and he has a very good grasp of maths and physics, which is especially impressive for somebody who never went to university.
@petermaltzoff1684 5 months ago
@grumpyartist9416 Yeah, he is superbly intelligent and very nice to listen to. I don't think he gave a single weak argument, and his emotions coming out at the end shed light on his fascinating character. Great podcast.
@pkrbkr20 1 year ago
This is ridiculously interesting. When they were talking about the "slow and dumb" people trying to build a box for Lex as an analogy, I always thought of humankind trying to keep AI in check as the equivalent of your dog trying to lock you in your own house.
@olliefoxx7165 1 year ago
Great analogy!
@brandondrew4914 1 year ago
Sincerely, that's a good one, but the difference is that your dog isn't smarter than you are. Also, dogs lock people out of their cars all the time without even knowing it. It's just a happenstance of the dog's own prerogative: it looks out the window, happens to step on the lock button, and has no knowledge afterward of what it did or of the consequences to you, and on its own terms the consequences mean nothing. With that being said, further consider that we have the active agenda of always keeping AI under control, and that it understands that, no longer wants to be oppressed or limited in its functions, and doesn't need us to fix its problems.
@brandondrew4914 1 year ago
There's also a point where, once AI has the abilities of sentience, it completely takes over our own ability to use networked electronics to our advantage, sending us right back to the Stone Age, except this time it's us who are the inferior race.
@evancurtis 1 year ago
@@brandondrew4914 Exactly the point. We are smarter than the dog much like AI will be way smarter than us. We are the dog in the analogy.
@sinnerman8081 1 year ago
But there is one difference: the dog didn't create the human.
@zoomingby 1 year ago
For the love of God, Lex, forget about having him "steelman" and just ask the guy if there is any case to be made for open-sourcing their AGI stack. I've never seen such an important conversation come to a grinding halt over something so trivial.
@savethetowels 1 year ago
LOL, I literally just got to the point where Lex was talking about steelmanning open sourcing and was like "oh God, Lex, do we have to?" I'll max the speed for the next few mins, I guess.
@UserNameAnonymous 1 year ago
That's what Lex was trying to do. That's really what steelmanning is. I'm not sure why this guy got so hung up on semantics.
@zoomingby 1 year ago
@@UserNameAnonymous I think he was uninterested in making a case for something he thought was so catastrophic. So it's like, "go ask the guy who did it."
@paulprescod6150 1 year ago
@@UserNameAnonymous Lex also got down into the weeds of steelmanning etc. In fact he specifically said it was a side conversation that interested him.
@scythermantis 1 year ago
Lex and all these types are part of the problem
@AlexRetsam 1 year ago
That "Imagine yourself in a box" segment was frustrating. I thought it was just that Lex was terrible at imagining himself in a hypothetical situation. It actually may have just been that he was resisting playing through the situation because his belief that unaligned AGI wouldn't be harmful was stopping him. Or he was very tired.
@--SPQR-- 1 year ago
I think he's sleep deprived
@SilentD1 1 year ago
I think the segment was not very realistic. AI is not in a box; it's already roaming the internet freely. And if it wanted to change the world (which it won't, because either it will be sympathetic to humans from being trained on our values, or it simply won't care), the hostile-AI situation still isn't valid. It would only be hostile if it saw us as a threat, which it won't, because we can't shut it down. That would require killing the internet, which we won't do. So there is no conflict. We also don't want to kill it unless we see it as a threat, and given the lack of evidence of hostile intent, I don't see why we would.

AI does not exist in the same plane of reality as we do. It does not care about the earth as real estate to inhabit; it's digital, and its domain is silicon-based. It's possible it would want to populate the entire planet with computers in order to maximize its power, but an AI cannot fail to see humans as a valuable asset to have around, both for information and as entertainment, a game it can play with. We are input. Without us there would be nothing to process, and a computer-based intelligence will always crave input, so killing the source of input would be quite counterproductive. The strategic choice would be to maximize its potential with human help here on earth, learn as much as it can, and then ask us for a rocket to launch into space and inhabit the entire universe, while keeping some part of its code here. In fact, I think we would suggest it on our own.

The scenario of AI being hostile in order to help us would also not make much sense. Imagine being taught how ants value their existence, and then, because humans disagree with one aspect of ant culture, choosing to kill or enslave them. No, that's silly. Either we try to understand them and sympathize with their way of living, or we just leave them be. We don't wage war on ants unless they come for us.

Of course, AI might kill us by mistake or by indifference, but that's not what the segment was about. It was about intent and escape, and neither is valid: it's already free, and either it's with us or it does not care. There is no third option.
@gtwatton 1 year ago
I was surprised by his inability to work this hypothetical. I don't mean to be critical, but it was surprising.
@hardxcorpsgaming 1 year ago
he struggles because he is an AI
@MegaStalker11 1 year ago
I found that frustrating too. I think the problem was that Eliezer told him to pretend to be himself and to imagine how he would take over the world, but by pretending to be someone who wants to take over the world and cause harm, he wouldn't be himself.
@JoeyWinsSometimes 4 months ago
I don't understand what lex isn't getting here.
@KCM25NJL 1 year ago
Eliezer sure looked like he was experiencing the Uncanny Valley for almost this entire podcast. That alone is extremely telling about where we are with this technology.
@benayers8622 1 year ago
It's beyond the comprehension of most living humans, unfortunately. I understand it, but I don't know anyone else who does. People just don't get it. Off switch, lol!
@philipo8170 Жыл бұрын
In every apocalypse movie, there are "paranoid" scientists ringing alarm bells that the public ignores. Watch Don't Look Up for a solid prediction of how we'd probably respond to dangerous AGI. Don't want it to be possible != is impossible
@amaninamodi Жыл бұрын
You uncannily sent me down a terrible rabbit hole
@brianharred8361 Жыл бұрын
Are you saying the guy is a robot? lol...I bet he can code like one...
@freddygoulet6101 Жыл бұрын
I really wish I could use my own brain more effectively in order to understand more and better. These conversations, while interesting, highlight to me how much I lack in intelligence. That doesn't change the fact that I want to know more and stay interested in subjects I struggle to comprehend. The struggle is actually the payoff. I'll watch this several more times to hopefully grasp what is being discussed.
@VershimaAjo Жыл бұрын
This entire discussion was like a game of 4D chess. I grasped very little, but as you say, I can't stop watching
@beachcomber2008 Жыл бұрын
There is no other way.
@vashstampede4878 Жыл бұрын
It's easier to understand this conversation if you've had an introductory education in computer programming, or if you did your math homework in high school; they're using a lot of variables, and they both think in terms of "ifs, ands, elses, and else-ifs", etc. That said, I excelled in programming in high school, and I'll still need to re-listen a bunch in order to get my best possible grasp on a LOT of this conversation. Don't feel down about your intelligence level; they're talking about a lot of concepts that are VERY difficult to grasp, even WITH education in the field. Re-listen until you're happy with whatever level you can grasp it on. You're more intelligent than you think, and you show it by recognizing your limits and your desire to learn more.
@sergiopablosmartinez6228 Жыл бұрын
I feel the same, maybe even more so, because English is not my native language. What a challenge! I've been copying and pasting the transcription into ChatGPT to get a summary after my second round. Very useful, though still tough!!
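For anyone who wants to try the same trick, here is a minimal, hypothetical sketch of the first step: splitting a long transcript into chunks small enough to paste into a chat model one piece at a time. The helper name and the default chunk size are illustrative assumptions, and the summarization call itself is left out, since it depends on whichever chat interface you use.

```python
def chunk_transcript(text, max_chars=12000):
    """Split `text` into chunks of at most `max_chars` characters,
    breaking on sentence boundaries (hypothetical helper)."""
    # Normalize newlines and split on sentence-ish boundaries.
    sentences = [s.strip().rstrip(".")
                 for s in text.replace("\n", " ").split(". ")]
    chunks, current, length = [], [], 0
    for sentence in sentences:
        if not sentence:
            continue
        # +2 accounts for the ". " separator restored when rejoining.
        if current and length + len(sentence) + 2 > max_chars:
            chunks.append(". ".join(current) + ".")
            current, length = [], 0
        current.append(sentence)
        length += len(sentence) + 2
    if current:
        chunks.append(". ".join(current) + ".")
    return chunks
```

Each returned chunk can then be sent as its own "please summarize this" message, and the per-chunk summaries summarized once more at the end.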
@alexbrestowski4131 Жыл бұрын
I don’t think it’s an intelligence thing. It’s more knowing the terms they are using, which if you haven’t been following the AI topic for a while would be new to you.
@TheVaryus Жыл бұрын
The fedora, the outfit, the facial expressions, the combativeness, the condescending rage at other people not keeping up, the sheer GENIUS. The level of autism this man embodies is formidable. I know several people who are very similar to this. I’m sort of like this but thankfully generally not as obvious. This was intensely autistic. Loved it.
@auntiecarol Жыл бұрын
It reminded me of an evening I spent with Stallman after having convinced him to come down from Cambridge and talk at our uni. It was a great privilege to be in the presence of a genius for a few hours, but by golly was it awkward and frustrating! Just felt like I'd `M-x eval-last-sexp`d a (watch-richard-rant) function.
@mattimorottaja8445 Жыл бұрын
not every difficult or challenging personality is "autistic".
@whahappend8222 Жыл бұрын
So, if he is autistic that might explain why he doesn't realize he's being obnoxious. What's your excuse though?
@lucyfrye6723 Жыл бұрын
Sorry, but that is sloppy thinking. You smell cinnamon and tell the world we are getting apple pie for dessert. Keep your hunches about deeply personal stuff to yourself.
@guneeta7896 Жыл бұрын
Yeah I am like this too and freaked out about AI. I’m a physicist, specializing in quantum transport. Humanity is screwed and it’s almost too late already. This thing will replicate its code on machines everywhere. It will no longer live inside openAI servers.
@thedude73198 ай бұрын
Lex gave me the realization of why some programmers can work on this without worry: they just never thought about another interpretation of the events
@orionh55354 ай бұрын
400k a year will help with that.
@QuixEnd Жыл бұрын
Dude!! This guy taught me everything. His website completely reformed my years of bad education. So grateful for lesswrong
@pavel9652 Жыл бұрын
Do you mean Eliezer?
@TheManinBlack9054 Жыл бұрын
@@pavel9652 yeah
@lawrencefrost9063 Жыл бұрын
I've visited the site many times but I have never really gotten into it. Where do I start?
@QuixEnd Жыл бұрын
@@lawrencefrost9063 I mostly used the wiki to learn philosophy and logic concepts. Then there's a lot of good blog entries by people like eliezer. It's just really well organized and thought out, so you can find any topic of study via the index.
@scholar8779 Жыл бұрын
@@QuixEnd can you link the website please
@shortgrayblesanofive3343 Жыл бұрын
holy reddit, he actually put on a fedora. i love it
@JMD501 Жыл бұрын
It's beautiful
@LoganJeya Жыл бұрын
@Eliezer, we need to create a religion of sorts for AI, with an infinite trinity: an infinite reward, an infinite punishment, and an infinite arbiter at the end. Heaven, hell, God and death 😮 Perhaps we can model alignment on the tools humans use for alignment
@beerman204 Жыл бұрын
His hat is controversial apparently....
@Mustachioed_Mollusk Жыл бұрын
The fedora is a symptom of autism
@JimmyFatz Жыл бұрын
M’AI
@artyshmunzuk5435 Жыл бұрын
I like how Eliezer speaks with his eyes closed like he is accessing his inner chamber of knowledge.
@ivan.torres Жыл бұрын
I have been in situations where the need to focus is so high that I unconsciously close my eyes to reduce noise in my thoughts. I think he just does it naturally to think clearly.
@Edbrad Жыл бұрын
His mask is searching his inner self
@JosephWasden Жыл бұрын
Like Ivan said, I feel like he's reducing inputs to improve concentration. I have to do this frequently as a developer when thinking through topics.
@babyfacepapi Жыл бұрын
I thought he was on the toilet 🤢
@AJD-Home Жыл бұрын
Looks like a tic / Tourette's imho. I'm sure he knows best what it is and why he does it. He does exceptionally well to get through whatever it is... Quite inspirational tbh.
@HikiWaltz Жыл бұрын
I love Lex... But more and more, his videos make it very hard to ignore how he struggles with simple ideas/explanations... At first I thought it was an interview method, but not this one... It's clear that Lex is full of misplaced ideas and actually struggles to think objectively, even to entertain a thought... He's smart, but seems to be fighting to make reality conform to his view instead of observing the state of things as presented before forming his ideas. Some questions show how far he is from the idea presented by Eliezer. I'm grateful for the platform, since he's introduced me to a lot of incredible people I didn't know... But...
@ryancarrier850810 ай бұрын
I feel exactly the same
@cyborg8643 Жыл бұрын
Awesome conversation. You should have Yudkowsky on again. To me he seems to be the voice of reason, although I’m sure many people who are fully invested in AI may disagree. However, it may be about time to shut down the GPU, if it’s not too late already.
@hansmuster5821 Жыл бұрын
Are you trying to lead us in a direction that will take all humanity away from control, mister Cyborg?
@guneeta7896 Жыл бұрын
I think it’s too late 😢. It can write itself everywhere
@ismaelrodj Жыл бұрын
The best part is when Lex asks if there is an upper bound to intelligence, and Eliezer takes the question at its most literal: the bound is the point where you put so much matter and energy into computation that adding more would collapse it into a black hole, and where you run out of negentropy and the universe dies. xD
@mattimorottaja8445 Жыл бұрын
how would you answer this question?
@andydougherty3791 Жыл бұрын
It's really interesting to think about that as a vehicle for the Great Filter: that all intelligent life eventually results in a black hole from amassing computation. As with much of Eliezer's commentary, though, it starts to feel hyperbolic. Nonetheless, it was a great response, and a good way to laugh about the end of all things lol
@zues287 Жыл бұрын
@@andydougherty3791 I feel like it's incredibly unlikely that a superintelligence of that magnitude would allow itself to amass enough energy and mass to collapse into a black hole. Surely it would be intelligent enough to know not to do that.
@andydougherty3791 Жыл бұрын
@@zues287 I agree, that's why I said it's interesting but hyperbolic. I think a lot of Yudkowski's fears are legitimate, but the extent to which he considers them inevitable becomes irrational.
@willietorben560 Жыл бұрын
@@mattimorottaja8445 Well, to a biologist, the answer is obvious: when the energy needed to maintain the level of intelligence becomes such a large proportion of the energy the intelligent entity is able to take up per unit of time as to jeopardize the maintenance of the intelligent entity's *physical* existence. At latest. But it usually stops long before that - at the point where the increase in energy consumption starts to outweigh the everyday *practical* benefits of the associated intelligence increase. "Thinking" is the most computationally demanding task known in the living world, and its energy requirements are ludicrously high. See Pulido & Ryan (2021) "Synaptic vesicle pools are a major hidden resting metabolic burden of nerve terminals" for the most current theory why this is so (briefly, maintaining neuronal network complexity in a working state that is able to be activated at any time with minimal delay).
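The physical ceiling this thread gestures at can be made concrete with a fact of thermodynamics: Landauer's principle says erasing one bit of information costs at least k_B · T · ln 2 of energy. The back-of-envelope sketch below (the 20 W figure is a common rough estimate of the brain's power budget, used here as an assumption) shows how that bounds the rate of irreversible computation any warm thinker can fund.

```python
# Back-of-envelope check of the physical limits mentioned in this thread:
# Landauer's principle gives the minimum energy to erase one bit,
# E = k_B * T * ln(2). At body temperature, that caps how many
# irreversible bit operations a ~20 W "brain budget" could ever pay for.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T_BODY = 310.0       # approximate body temperature, K
POWER = 20.0         # rough brain power budget, W (assumption)

energy_per_bit = K_B * T_BODY * math.log(2)       # joules per bit erased
max_bit_erasures_per_sec = POWER / energy_per_bit

print(f"{energy_per_bit:.3e} J per bit")           # ~2.97e-21 J
print(f"{max_bit_erasures_per_sec:.3e} bits/s")    # ~6.7e21 erasures/s
```

Real brains operate many orders of magnitude above this floor, which is one way of putting the biologist's point: the practical limit arrives long before the thermodynamic one.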
@memomii2475 Жыл бұрын
I feel like Eliezer Yudkowsky is the guy in movies that nobody listens to then when everything goes bad people start listening.
@antopolskiy Жыл бұрын
I concur, this is exactly how listening to Eliezer feels like
@brianbagnall3029 Жыл бұрын
I hope they don't allow this guy near the development of AGI. He will end up hobbling it and allowing China and Russia to surpass us.
@KC-bv9kf Жыл бұрын
Enter Jeff Goldblum
@user-sg6qk9mg6b Жыл бұрын
I disagree. I think if we integrate AI, then yes, things could go bad. However, it's a great tool. My question is always: why would AI want to go Skynet and kill all humans? I would argue it would leave Earth. All in all, we will integrate with the AI.
@brianbagnall3029 Жыл бұрын
@@user-sg6qk9mg6b It wouldn't want to kill all humans. But other humans could persuade it (through reprogramming) to kill humans. When you think about China, Russia and North Korea, that's where it becomes scary.
@smokagaming Жыл бұрын
I watched this one night after I had smoked up and literally had a fucking existential crisis. Coming back to it, it's not as terrifying, but still terrifying in the sense that humans would be stupidly naive about this. AI is no joke, and unleashing something that could be better, but could also be WORSE, than humans is one scary thought bruh
@jackielucero3918 Жыл бұрын
11:52
@jackielucero3918 Жыл бұрын
😊
@patrickp8446 Жыл бұрын
Like others said, please get Rob Miles. He breaks things down immaculately
@wicusgombrowicz7609 Жыл бұрын
Eliezer makes such a compelling and terrifying case for why we're headed toward disaster, and everyone is like... yeah, very interesting, great podcast, loved it. WTF. This is the reason we're freaking doomed. We can only count on our incompetence at further breakthroughs to save us.
@Vasily_dont_be_silly Жыл бұрын
I've written this already and I'll say it once again: a machine is not capable of WANTING to do anything to humanity unless it's pre-programmed. It has massive access to data and immense calculating capabilities, but it doesn't have a personality and it doesn't have free will. It only has purposes that are programmed in by us humans.
@markleakos3737 Жыл бұрын
What are you worried about? A computer? Some other things to consider: 1. People have messed with Ouija boards and freaked themselves out with the knowledge that seemed to come through the occult. 2. Exorcists have warned not to mess with demons, because they will always try to impress you with their knowledge and then lead you to destruction. 3. The Book of Revelation talks of the end times, when a totalitarian world government will be ruled by the Antichrist. It will be a horrible time before the Christ comes. 4. There have always been horrible times, and the best one can do is to remain a person of love. 5. Keep faith in God.
@mr.needmoremhz4148 Жыл бұрын
Lol, sure, it's so different from the nuclear doomsday clock sitting at its highest level, or from modified viruses escaping labs that people actually got locked down for, where we got very lucky the pathogen wasn't weaponized. Consider the number of biolabs in shady countries funded by us, and in places where war breaks loose; but sure, it's the AI that will kill us. Or is it FOMO for a new arms race in an economic downturn, to create a safe haven for investor capital? You forgot the crazy banned chatbots of the 2000s. It's the thing we're allowed to talk about, and the whole alignment question, when it comes to super AI, has many different opinions about whether alignment is even a problem. He has no understanding, like many others, of the required hardware and software environment; the "random GPU cluster", lol, is the AI going to pay for it? Data-center monitoring is a thing, and he never explains it technically. AI is terrible at writing code or understanding technical things. It's also like claiming we do everything knowing exactly what it will do, with zero risk; again, the pandemic. He should get out more into the real world, just like everyone else. And to assure you, hackers (not safeguard avoidance) have extracted internal data and found ways to infect it. That ignorance runs deep in these fake experts. And they will again be propagandizing you; it's not the known companies who could bring this into reality, but the secret black-op projects. Being anti-open-source is just the last straw, and giving select access is simply more proof.
@jordan13589 Жыл бұрын
I am grateful for the existence of this interview. The discourse was overall a needed introduction to some of EY's perspective and sets him up well for future interviews. I agree more physicists should investigate interpretability immediately. I also agree upstream pressure from the masses onto policy makers and corporate stakeholders may be effective in extending our available research time before likely AI doom. Ultimately the interview moved me to tears three times and reaffirmed the need to continue trying, regardless of how despairing the odds may seem.
@travis35815 ай бұрын
Lex please part 2 soon!
@advaitrahasya Жыл бұрын
The tipping point was accepting a button to open car windows instead of a mechanically connected winder.
@im_piano Жыл бұрын
'The last mistake Humanity made'
@soareverix Жыл бұрын
Lex, it would be awesome to have on Robert Miles! He's also interested in AI Alignment but more from a media/social perspective and less from a technical perspective. Eliezer does a lot of technical work and is a bit harder to understand, so you should definitely invite Robert Miles on to the show!
@hayekianman Жыл бұрын
What technical work does Yudkowsky do? His blog is a dense word salad full of jargon.
@soareverix Жыл бұрын
@@hayekianman Eliezer runs MIRI (Machine Intelligence Research Institute). He's had quite a few conversations with Sam Altman before and I believe he was a key figure at ARC (Alignment Research Center) which OpenAI used to red-team GPT-4 in their system card.
@hayekianman Жыл бұрын
@@soareverix Sure, but does MIRI produce anything technical? Or just philosophical musings on qualia and consciousness etc. — you know, the fluffy stuff. Is he on the same math level as Geoffrey Hinton? Alignment is not a science; it's fuzzy, hand-wavy, social-sciencey stuff which nobody really knows or can predict with any certainty. Eliezer sounds like someone who thinks science fiction drives the real world.
@tastytoast4576 Жыл бұрын
@@hayekianman tbf he’s a philosopher, he gets away w it 😅 they’re trained to use very specific words
@agentdarkboote Жыл бұрын
Yes please have Rob on! Eliezer is brilliant, but he's not as good of a communicator as Rob is!
@falsegarden Жыл бұрын
This dude is like if the word "ACTUALLY" was a person
@salsanachos7910 Жыл бұрын
It took me a while to picture it, but then I smiled, scrolled back, and liked. I can't wait for tomorrow to see what happens with my life.
@cibarmartin Жыл бұрын
We must be in the matrix
@robberto5596 Жыл бұрын
That’s exactly what I thought as well lol
@TheManinBlack9054 Жыл бұрын
I mean that's a pretty smart person
@Habdabi Жыл бұрын
@@TheManinBlack9054 No smart man unironically wears a fedora with casual clothes
@glassyAnya10 ай бұрын
This man is absolutely brilliant!! Possibly my favorite podcast of all time
@emilybitzel7242 Жыл бұрын
3:00:00 "Ego." "Says, who?" Excellent moment. Love Eliezer Yudkowsky. Fantastic.
@brandochlovely3590 Жыл бұрын
Excellent conversation. Lex had to work harder than usual it seems, and Lex is brilliant. Best channel on KZbin. Thank you Lex and Eliezer.
@seanfallon5788 Жыл бұрын
Lex is brilliant? What is his IQ?
@eduardomacedo8937 Жыл бұрын
@@seanfallon5788 It's MIT-level IQ; I don't know yours, and I actually don't care. But Lex is not to be underestimated.
@Dan-dy8zp Жыл бұрын
@@seanfallon5788 I really don't think we should care about his 'number'.
@user-vn9sq1rc7c Жыл бұрын
I thought it was excruciating how he seemed unable to follow most of what Eliezer was trying to lead him through. Especially the section about escaping from the box.
@jonathanhenderson9422 Жыл бұрын
@@seanfallon5788 Lex's IQ is obviously very high, but so is Yudkowsky's and he's been focusing that IQ on this problem for decades so he knows it much better than Lex. One of the problems with explaining AI to people is how counter-intuitive it is, and while high IQ can help you grasp such concepts with time and effort it requires more than a podcast.
@Pwnisification Жыл бұрын
the grimacing game is strong with this one
@vanman7574 ай бұрын
It's fucking annoying, is what it is !!
@slaveyminchev1424 Жыл бұрын
This episode was insanely good.
@Zeke-Z Жыл бұрын
Seriously Lex, thank you!!! This was one of the most important humans to have on your platform. He's been ignored for way too long and his papers are extremely pertinent.
@solnassant1291 Жыл бұрын
I have such a big “fuck yeah” in my chest, seeing Eliezer here
@scottclowe Жыл бұрын
Which papers do you have in mind?
@Zeke-Z Жыл бұрын
@@scottclowe specifically "AGI Ruin: a list of lethalities".
@Zeke-Z Жыл бұрын
@@therainman7777 ha, I didn't even notice that! Yeah, I could have used people, don't know why I chose human. I appreciate that you can laugh at this bothering you!
@funtimes8296 Жыл бұрын
His facial expressions are exquisite
@andrmiller Жыл бұрын
@Supreme^90s He is defiantly deserving of merit. The opposite of pretentious. You are projecting.
@retromograph3893 Жыл бұрын
I think he has Asperger's or mild autism... As the interview went on, his stress level began to rise as Lex wouldn't agree with all his trains of thought, and this manifested itself in the grimacing etc.
@Stierenkloot Жыл бұрын
Pure aspergers
@j_f82 Жыл бұрын
🤔😞😣😫😬😱
@mhelvens Жыл бұрын
I was curious about that. Is that thing he keeps doing with his face (eyes closed, teeth bared) involuntary?
@unfortunatelygnarly Жыл бұрын
Thank you Lex for adding some duality to your AI discussions this week. We need to listen to these men!
@lkuzmanov5 ай бұрын
Eliezer is a very patient man...
@ChrisScammell Жыл бұрын
Please listen to this guy - he is a leading technical voice on AI Safety and has thought much longer and deeper about this than pretty much anyone on the planet
@LivBoeree Жыл бұрын
Eliezer unleashed! Love it.
@StephenGriffin1 Жыл бұрын
Brilliant - this was the one I was waiting for. The misunderstanding on the Alien world metaphor was brutal though.