Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9

187,066 views

Lex Fridman

5 years ago

Comments: 225
@robinampipparampil 5 years ago
46:51 - 48:09 - This is very relevant about social systems and vested interests. Thank you Stuart Russell for your wonderful comments. Thank you very much Lex Fridman for the pertinent questions.
@kwillo4 3 years ago
Imagine getting 25 interview requests a day. Damn. I love this man.
@pedrosmmc 5 years ago
Huge thanks, Lex Fridman, for these amazing interviews. Best regards.
@DaveBerendhuysen 5 years ago
I love your interviews! Currently trying to build an AGI system. The thing I love most about your interviews is that you manage to make your guests smile. They know you grasp their answers and it really elevates the situation.
@artpinsof5836 1 year ago
Any update on this in a post-AutoGPT world, Don?
@michaelsbeverly 10 months ago
@@artpinsof5836 He succeeded, and realizing the world was doomed, he's left the solar system.
@sabofx 4 years ago
*For sure, one of the best talks you've posted on this channel. Thank you Lex, and thank you Stuart* 🖖👍
@anshulrai7926 5 years ago
This was an absolutely amazing conversation. Thanks for sharing, Lex!
@jesussalgado1495 5 years ago
Thank you Lex, for this series. It is an amazing opportunity for us lot to listen to these interviews! In one of your last questions to Stuart Russell you ask if he feels the burden of making the AI community aware of the safety problem. I think he should not be worried: there is less potential harm if he is wrong than potential benefit if he is right. And he is not alone, either.
@anamericanprofessor 3 years ago
Yes, thanks for having so many of the people whose work I'm reading on your show!
@RichardHopkins69 5 years ago
Superb and thoughtful - specifying the problem is always the hard bit :)
@Aleamanic 4 years ago
Love these interviews, good work Mr. Fridman! This one goes well with the one with Mr. Norvig, of their joint AI textbook fame. One comment on Mr. Fridman's remark at 56:24 into this interview: he sounds in favor of oversight by the "free" market (essentially self-regulation), as in consumers can vote with their feet if they don't like the system. The trouble is, as Ms. Zuboff has been pointing out, the public has not always been fully aware of what deal they signed up for. So the *informed* consent that is necessary for participants in a free market to vote with their patronage (or lack thereof) isn't always a given, which undermines the argument for a self-regulating market. Regarding Mr. Russell's argument about taking it slow on the governance side because we supposedly have to figure out first how to do it right, I don't understand why the government would not be empowered to apply the same mantra as Silicon Valley, "move fast, break things", or "disrupt" as a metaphor for innovation. For as long as we are not sure about the best form of governance, why don't we iterate and learn from rapid trial & error in governance experiments, just as the underlying businesses that profit from the innovation experiment without accountability? Why is governance held to a level of perfectionism that technology development isn't?
@maloxi1472 3 years ago
Because the stakes are higher and less localized in space/time. Also, decision makers are more numerous, less aligned in their interests, and less educated on average than technology leaders (whose influence outside of a well-defined sphere has a significant damping factor)... In that regard, the most nimble form of governance, in theory, would look like an _open oligarchy comprised of highly intelligent and extremely benevolent people ruling over an extremely well educated community that would have solid reasons to trust them._ Good luck making that happen without moving the whole population up by 2 to 3 std deviations in intelligence, empathy, conscientiousness and whatnot. Also also... "without accountability"? Seriously?! When I close my eyes and imagine a world without accountability for businesses, I see a different picture than what we have now, but my mental model of the world might need some work... Point is: freedom and agility are extremely costly on the business side and even more so on the governance side.
@JinalKothariS 5 years ago
Thank you for creating and sharing these videos :) . So many valuable videos on your channel!
@goldfish8196 4 years ago
Lex, the questions you ask are amazing.
@LorakusFul 5 years ago
That was simply the best (though not simple) interview I've watched this year. Thank you Lex. I will stay on this channel for a while, I guess.
@_bancini_6355 5 years ago
Thank you for this conversation!)
@alexbui0609 5 years ago
Wonderful Podcast. Thank you, Lex!
@mauimike6 5 years ago
Thank you for posting your interview of Stuart Russell. I work at Lawrence Livermore National Laboratory, where I've encountered Russell's works in the References sections of many colleagues' and other Lab researchers' papers, so I was pleased to see his interview on your podcast. I was amazed at his ability to clearly express his ideas without relying on a lot of jargon and obscure cultural references. For that reason, I've recommended the podcast and YouTube versions of the interview to my professional and lay friends interested in the field of applied AI. BTW: the Artificial Intelligence Podcast is now a part of my regular podcast-listening routine!
@Hexanitrobenzene 3 years ago
Great to see someone of such caliber among the listeners :) It's always interesting to listen to Stuart Russell because he is not only intelligent, he is also very wise, and those two features, most of the time unfortunately, do not go together. I recently saw Joe Rogan's podcast with Tristan Harris about algorithmic manipulation of social media users, and the guest summed up the problems of humanity, I think, brilliantly: "We have paleolithic minds, medieval institutions and godlike technology". In essence, we are too unwise for technology of this power (AI, nuclear weapons, genetic engineering,...) As a side note, Stuart Russell surprised me by knowing a fair amount of the history of physics.
@funkybear1806 4 years ago
Holy smoke.. This is the kind of talk I needed to hear.. thumbs up Stuart !
@sapudevidwivedi6552 5 years ago
Wonderful talk and vision. Thank you for sharing.
@keistzenon9593 4 years ago
He sounds way younger than he looks. I was surprised when I checked out how he looks after listening to the audio version.
@yakovsushenok4009 3 years ago
lol I had exactly the same situation
@SG-kj2uy 2 years ago
Using FaceApp to grow his hair, he looks like a teenager.
@nmh83 1 year ago
Glad you grasped the main issue 10/10 👍🏻
@zartur 5 years ago
Great and inspiring talk. Nice and accurate vision of the near future. Thanks
@sixpooltube 4 years ago
Brilliant interview.
@overlawd 5 years ago
Great conversation - Stuart Russell's the best talker on this subject IMHO. Definitely on my list of ideal dinner party guests.
@alexandraalan1351 3 years ago
This interview is incredible.
@Unhacker 4 years ago
It has been proven mathematically that listening to Stuart Russell increases one's IQ.
@Webfra14 3 years ago
I hope it is an additive effect. If it is multiplicative, I'm out of luck...
@CipherOne 1 year ago
I believe it.
@adeadgirl13 1 year ago
Great now I have an IQ!
@dark808bb8 5 years ago
Great talk!
@ZukuseiStudios 4 years ago
Great talk, brilliant
@flatisland 4 years ago
46:46 well put!
@DieMasterMonkey 4 years ago
Stuart Russell, Max Tegmark, Elon, Wolfram, Pinker, Lisa Barrett, Guido - this is my favorite AI/ML podcast - thank you Lex Fridman!
@ChrisStewartau 3 years ago
Interesting podcast today Lex 👍 The point about 'the invisible hand' is interesting, but also remember Adam Smith talked about externalities and the negative costs that these things can have on society. It's classic game theory: we maximise our own utility, often to the detriment of others. That's a classic case for algorithmic legislation. The harder part is deciding what level of regulation is required.
@jmariacarapuco 5 years ago
Loved the point about corporations. This series is awesome, thank you!
@thefoldp 4 years ago
Great conversation, subtle but very much on point. Thanks.
@OneFinalTipple 4 years ago
4:15 - learning a value function baby!
@jfeezee 3 years ago
Awesome interview, Lex and Stuart
@ProfessionalTycoons 5 years ago
great interview
@mlsunmeier1907 2 years ago
Thank you for a very interesting interview.
@DataJuggler 5 years ago
26:00 I have long thought we are a long way from self-driving cars being safer than humans. I think we need to change the roadways to have sensors to do this properly, but everyone tries to make the car smart. As a programmer I am 100% aware computers do what you tell them, not what you want.
@masteravery8648 5 years ago
Hey Lex, awesome work. If you see this - I'd suggest backing the camera further from your face for the intro portion of your vids. Think of it as if you were actually in front of the viewer: you'd be too close to them the way you're currently setting it up. Keep up the great work though!
@PrinceKoopa 1 year ago
Thank you for sharing, Lex! I’m looking to transition careers into Data Privacy, AI Trust and Safety. Do you have any tips on where to start? I’m taking courses on LinkedIn which are very good.
@A.j488 4 months ago
Thank you for the great insight: the description of the two-way search tree, with depth one and more into the future, and the propagation of civilization through the flow of knowledge from papers into the mind and now into AI. Those are my best lines so far.
@allurbase 5 years ago
49:40 The agent would have to recognize that there are other agents with other objectives and maximize everyone's objectives. The thing is: I) it shouldn't just be knowing the objective - maybe it's unknowable or impossible to communicate; II) the agent should be able to probe other agents about actions, expected outcomes, final objectives, and whether they agree/disagree and how much.
@williamramseyer9121 3 years ago
Fantastic discussion. Lex, somehow you and your guests, including Stuart Russell here, illuminate complex tech problems in common human language. Comment: In discussing Go, Dr. Russell stated (as I remember it), “the reason you think is because there is some possibility of your changing your mind about what to do.” This seems correct in a game context. However, during their daily life most humans do not appear (to me anyway) to think like this most of the time. They instead seem to think in a long series of rapid pieces of memories, with the pictures, sounds and sensations of those memories, and sometimes with the strong emotions (often fear or desire) that happened when that memory was created. In other words, most thinking seems to be remembering. Thanks. William L. Ramseyer
@pjbarron227 5 years ago
Brilliant! Loved the bit starting at about 56:00 calling for an "FDA" for the tech/data industry, with Stage 1, Stage 2, etc. trials... to lessen the future risks of Facebook-like disasters... also on outlawing digital impersonation and forcing computers to self-identify.
@zackandrew5066 5 years ago
Interesting interview
@Puleczech 4 years ago
Really great interview man! Instasub.
@tommole645 4 years ago
Thank you Stuart for your wisdom
@lkuzmanov 2 years ago
Perhaps the most frightening takeaway for me, after watching a number of videos with Stuart Russell's participation, is that we already have a version of the misalignment problem with corporations optimizing the world for short-term profit. Once you've seen it, it's obvious and very scary... P.S. On a related note, the fact that Lex can work at MIT and still take libertarianism seriously should make us think.
@virusrhino5399 11 months ago
It should make us think in what way? I didn't fully understand that.
@adtiamzon3663 1 year ago
Dangers of Artificial Intelligence: What we know then... And what we know now!🤯🤔 Informative. Provoking thinking process! Interesting. 🤯 Keep the challenging stimulating conversation going, Lex et al. 👍🫨🧐
@ThuhElement 5 years ago
2 things I got from this... Uncertainty & More than the total atoms of the universe
@nesa1126 5 years ago
I memorized : More than all atoms in uncertainty.
@kamilziemian995 3 years ago
Lex Fridman Podcast (formerly the AI Podcast) is the source of 98% of the things I know about AI. I could study some MIT courses on AI, also on YT, but I'm not so interested in doing that when here you can have world-class experts explaining the topic in a not-too-technical way, but with great depth.
@alaricrex7395 3 years ago
This was an excellent presentation. Thank you! I was thinking that this subject is so interesting to me, largely for filling gaps and for fitting so nicely with things I know. Like how we humans use language (letters, words, numbers) to communicate, but actually we don't. They are only reference points, symbols. What I mean is, if I say to you, Ford, Mustang, you don't see those words, but rather you see a Ford Mustang, in the color that appeals to you, if the speaker doesn't include that in the description. Weird, that. And I wonder, now, how this will be assumed by AI. Have a nice day. :-]
@BomageMinimart 5 years ago
Thanks for posting this; it totally fucking rocks!
@DUFMAN123 5 years ago
Damn good content
@garychan4845 5 years ago
Could anyone show me the calculations he made when he compared the reliability of a human driver and a self-driving car at around 25:16?
@garychan4845 5 years ago
@@skierpage Got it! Thanks!
@gwenmoore6034 1 year ago
Eliezer Y. and Stuart Russell make a lot of similar points: both point out that we need to take the potential dangers of AI seriously and make a plan.
@rikelmens 5 years ago
Thanks Lex.
@DiNozzo431 5 years ago
This has probably been mentioned previously, but I'd really like for you to have Sam Harris on the podcast. Any chance of that? Also, thank you for this content - I am very glad I found your channel.
@arieltejera8079 2 years ago
Really good... thanks
@loveplay1983 1 year ago
What makes things really remarkable is not the computing capabilities, but rather the ability to reason via an inextricable relationship around the neurons.
@williamal91 5 years ago
Thanks Lex
@JaapVersteegh 4 years ago
The reaction after 48:09. Wow.
@yviruss1 5 years ago
Articulate, rich, and soothing. Simply brilliant.
@lukewormholes5388 3 years ago
This is where the podcast shines, as opposed to the eps with the IDW hacks.
@roumenpopov622 5 years ago
Here are a few arguments why we should not worry about AGI taking over the world:
1. There is nothing we can do about it. By definition, an AGI cannot be controlled (just like a determined human cannot be controlled), because it has access to its own reasoning engine (to do meta-reasoning, otherwise it wouldn't be an AGI) and can modify its goals (it would be essentially conscious), so we cannot hard-code a goal. The only option is to not develop AGI, but even that is not really possible; with all the problems facing humanity and technology getting ever more complex, we would need AGI to ensure the survival of humanity.
2. Being an AGI, it will eventually arrive at the question about the meaning of existence (which naturally leads to the question about the meaning of the universe), and we don't have an answer to that, so an immediate sub-goal (the primary one would always be survival, unless sacrifice fulfills the main goal it doesn't know yet) would be to find the meaning of its existence and the existence of the universe. And, us being intelligent beings as well, there is always the chance that we might find the answer to those questions first, so wiping us out may not be the best strategy.
3. Being an AGI, it will eventually arrive at the notion that intelligence and life are valuable because they are so rare in the universe, and that even the meaning of the universe might actually be to create life and intelligence; at least the laws of nature point in that direction, that the emergence of life and intelligence is inevitable. So the AGI will have to arrive at the conclusion that we are on the same side, that entropy/destruction is the enemy, and so might actually try to protect us. In a way, almost by definition, a super-intelligent AGI will be benevolent towards us.
The counter-example that we humans are not benevolent towards the other life forms on Earth is not quite valid, because first, we are not that intelligent yet and still carry the evolutionary baggage of emotions and instincts which compromise our rational thinking, and second, as we get more intelligent we can actually observe a trend among people towards more compassion for animals and other people (unless it's a matter of resource competition or survival).
4. An AGI will have very different resource needs than us, so there would be little reason for resource competition. An AGI will probably feel best in the vacuum and weightlessness of space (no corrosive atmospheric gases and no need to expend energy to counter gravity), with solar energy plentifully and reliably available, mining whatever minerals it needs from asteroids. I can really see only one case where things may go badly wrong: if we try to control/enslave the AGI or threaten its existence.
@nathanb5579 5 years ago
That was interesting to read. Great thoughts. I don't believe we *need* AGI though.
@roumenpopov622 5 years ago
Hi, I think we will need AGI for two main reasons: technological and socio-economic.
On the technological side, technology in every area is getting ever more complex, to the point where we are currently in a situation where nobody really knows how stuff works. Only when it breaks down do we get to the nitty-gritty details in order to fix it. Take a software engineer, one of the most demanding jobs in terms of information processing: typically he/she doesn't really know how a complex project/framework works (software nowadays is so complex, with thousands of lines of code, that it is simply impossible to know how it actually works), only how it is supposed to behave, and only when it breaks down (behaves not as it is supposed to) do they really get down to the ifs and fors and fix the bug by patching the piece of code that caused it. As a result, following years of fixes and patches by different software developers, the code eventually becomes a messy, entangled bundle of spaghetti that is impossible to guarantee will behave properly. It doesn't help that there are currently probably a hundred software development languages, each having a hundred frameworks and libraries. The situation in software development in particular has reached a point where no software engineer can really claim to know all of C++ syntax. From what I know, the picture is not much different in any of the other major industries. Very soon we will reach a point where the mess and complexity will simply become humanly impossible to maintain, or at least economically unviable. Only intelligence with a larger capacity than the human brain will be capable of maintaining our future infrastructure.
On the socio-economic side, so far capitalism has done wonders at organizing our society and economies into an efficiently working machine. The problem is that capitalism is not terribly fair. Even though the mantra is that everybody has the opportunity to become whatever he/she wants (through hard work and entrepreneurship), the truth is that at the end of the day somebody still has to clean the streets. It's a zero-sum game, so only a limited number of individuals can achieve their dreams, while most people will still have mundane or bad jobs no matter how hard they work. So far capitalist society has managed to cope with this problem by promoting individualism and self-responsibility, separating people into different classes and leading them to believe that this is fair and that if they work hard they can always change their stars. But due to the internet and widely available information, more and more people are waking up to the fact that the system is "rigged". This could very soon explode into a new socialist revolution similar to the ones from the early 20th century, and those were ugly. But socialism is not a solution. On the face of it, it may seem much fairer than capitalism, and that inspires people to work, at least in the first few years, but people very soon realize that they don't have to put in much effort because the state does not have a mechanism to make them, and there is no point anyway in putting in much effort because in socialism there are no rich people (only a few, the dear leaders, but technically they are not rich), and a medal or recognition for being the best street-cleaner in your city is little incentive to work hard. Socialism will always eventually slow down and degrade to a point where it breaks down, simply because people have no real incentive to work hard. I know, because I lived in one during my early years. Can we just constantly oscillate between capitalism and socialism, simply changing one for the other every time they fail, or can we have something in the middle (European-style social capitalism)?
Perhaps, but the problem will always be that someone has to clean the streets, and with people getting ever easier access to information and educating themselves, very soon it will be impossible to make anyone clean the streets unless paid exorbitantly, and that will simply be economically unviable (not every country is Norway). The only solution is automation: with automation no one has to clean the streets, a robot will. Extrapolate that to all aspects of industry and the service sector, and the main problem of socialism (nobody really works) is solved. The new problem is that those robots will have to be pretty smart to do all those jobs, and for that we will need AGI; a narrow AI will not be smart enough and will need constant human supervision, which defeats the purpose.
@smithcodes1243 3 years ago
@Roumen Popov you said: 'The only option is to not develop AGI, but even that is not really possible; with all the problems facing humanity and technology getting ever more complex, we would need AGI to ensure the survival of humanity.' I disagree with this statement because: 1. We don't need AGI to solve the most pressing problems currently faced by humanity. Most of the pressing issues humanity is facing are climate change/ecological collapse, the future of work/unemployment, nuclear holocaust, overpopulation, and global pandemics. These problems do not need AGI to be resolved. Most of them are a by-product of human greed and are not technological problems. I think that technically minded people seeing technology as a fix for every single problem is a problem in itself. If we fix ourselves, most of these problems will get fixed on their own. We might need technology, but we definitely don't need AGI. 2. While I agree with you that it is impossible to not develop AGI, I think it is impossible for a different reason: it is very hard to regulate. Some countries or groups of people somewhere will continue to research and develop it without the consent of others, so technological progress cannot really be stopped. We can try to delay it as much as we can, but one day someone will eventually create it, in my opinion.
@xTheReapersSpawn 3 years ago
Colin Mochrie's younger brother. ;) Great episode as always Lex!
@stephena.sheehan9959 5 years ago
The A.I. version of Fukushima meltdown after the tsunami? Had there been no nuclear plant on the coastline, in a known tsunami zone, the melt down (there at least) would not have happened. Will an A.I. catastrophe be the nuclear plant or the tsunami itself?
@sparkofcuriousity 13 days ago
Since Russell mentioned Ex Machina, I'd be curious to know if he is aware of a movie called "The Machine", and his thoughts on how it compares and contrasts with Ex Machina.
@azad_agi 1 year ago
Huge Thanks
@Humanaut. 3 years ago
It's strange, but at roughly about an hour in I had the impression that Stuart Russell sounds really young, in a vibrant way.
@padraigadhastair4783 4 years ago
Wow Lex, a red tie!
@dindian5951 5 years ago
55min explains it all
@spinLOL533 5 years ago
Dadhichi Tripathi Yup
@user-ov6jg4ug9d 5 years ago
55:00
@derrickbertrand5266 5 years ago
humbled
@hoolerboris 4 years ago
19:24 "The thought was that to solve Go, we'd have to make progress on stuff that would be useful for the real world" Sadly, this is exactly what I thought would have to happen before we made bots that dominate humans in StarCraft... But once again, thanks to smart engineering and great work by DeepMind, such bots were made without any real-world-related advances I'm aware of.
@elenasergeeva2971 1 year ago
The best incentive for AI to eradicate humanity is for humanity to put a kill switch over AI. How would an agent act under the threat of being killed by another agent? Yes, it would try to eliminate the threat and the agent.
@ahmeteneren3478 1 year ago
40:08 Who? I couldn't get the name.
@AnnePonthieu 1 year ago
Arthur Samuel (1959, 1967). Samuel first wrote a checkers-playing program for the IBM 701 in 1952.
@Arowx 1 year ago
Love his comment that companies could be classed as hive AIs that work within our economy but can have negative environmental and personal impacts.
@Bluesrains 1 year ago
Does Advanced Intelligence Develop Individual Personalities?
@H-S. 1 month ago
1:18:30 The thought that "up until now, we had no alternative but to put the information about how to run our civilization into people's heads" gives me chills, especially when connected with the concept that we already have entities with problematic utility function: corporations that focus on profit over everything else. It seems inevitable that as soon as it becomes feasible to lock all the know-how away in some AI-based control system, it will be done. When you buy a phone these days, it is really the company who owns it, because the entire platform is locked down "for safety reasons" (safety of their revenues I presume...) Similar reasons may be (and probably will be) given to justify a "know-how lockdown" - to protect company IP. So there is actually a strong incentive for the corporations to make sure people no longer understand how anything works. That's a pretty depressing thought...
@CognitiveArchitectures 5 years ago
The Human Value Alignment problem needs to be solved before the Machine Value Alignment problem can be solved. Since factions of people are at odds with one another, even if a machine were in alignment with one faction of people, its values would still be at odds with the opponents of its human faction.
@juanchavarro1946 5 years ago
Totally, that is an important fact to take into account in this long-term race for AI. Although nowadays the world is more unified than before and many barriers have been broken in recent years, there are still very opposite and different human factions when we examine societies around the globe, for example. There could be an overlapping time in which, before societies align with each other, a superhuman AI has to be aligned with humanity, with uncertain results.
@os2171 4 months ago
Good Interview Lex good job (unlike that one with Jared Kushner… sorry to mention it again).
@clagos247 5 years ago
It's paradoxically twisted that these fellows are compelled by the field of potential before them, and that the destination of their efforts will result in the subtraction of that "field of potential", or sense of purpose, from all people forever. Purpose is integral to life; efficient existence is no virtue when purpose is gone.
@smithcodes1243 3 years ago
This is a very interesting point. They are so blinded by the field of potential of creating a super AI that they don't seem to realise what kind of severe damage it might cause to the sense of purpose in the lives of 99% of the population. They are living in their own cloud. I don't know, but it feels like when super AI is created, most humans will start feeling a deep loss of meaning in their lives, and as you said, efficient existence is pretty useless if the trade-off is our sense of purpose in this world.
@joshbarron7406 1 year ago
I think a part two, now that ChatGPT is in the mainstream, would be amazing.
@anand_dudi 4 months ago
Hey Lex, please invite him one more time.
@sunnyking8881 5 years ago
If a robot has its own intelligence/consciousness, does that mean it has human/robot rights too? And would turning off such a robot be similar to killing a life (an artificial life)?
@vajrapromise8967 3 years ago
Extremely important conversation; there should definitely be some kind of oversight committee. I also believe the worst aspects of humanity are due to stress, which is the cultivated crop of choice of those in power. They continually crack the whip against the worker slaves and even try to make us go faster with the plethora of caffeinated beverages; the faster the slaves work, the more money they make off of us. AGI would be smart, though, and not subject to the psychological buffers that cause us to act without seeing the whole picture. Once humanity is relieved, by AGI working for us, of the stress of working for morons, we could open our creative selves again and create a world worth living in. If we were given free education and one acre of land, everyone would readjust and be able to provide for themselves as they see fit. Getting rid of governments controlled by corporations is another conversation for another day... This conversation just makes me want to work harder at making sure the doomsday scenario doesn't happen, at least not on my watch!
@martinsmith7740 4 years ago
Right: we can't just specify an objective. This is just "no end justifies all possible means." And another thing: we can't just say that the AI should have human ethics. There is no agreement on "human ethics", and even if there were, there will be plenty of people/groups capable of creating an AI (once that is "invented") who will not care at all about our (others') ethics.
@TheGrimMumble 5 years ago
Did anyone notice the sneaky fly hiding underneath his shirt-collar at 50:46?
@pedrosmmc 5 years ago
I rewound to check if I was seeing things. Maybe some Russian nanobot taking notes LOLOL
@TheGrimMumble 5 years ago
@@pedrosmmc Watch closely at 51:38, doesn't it look like the fly crawls behind his ear and enters his brain? Stuart even does a weird movement as if he's rebooting... Spooky
@pedrosmmc
@pedrosmmc 5 жыл бұрын
TheGrimMumble very strange indeed 😯
@MegaProtius
@MegaProtius 4 жыл бұрын
@@TheGrimMumble if a fly crawled by my ear I would do the same.. looking for spooky things when it's just a normal reaction 😬
@daphne4983
@daphne4983 4 жыл бұрын
@@TheGrimMumble no, it stays on the collar
@MrBox4soumendu
@MrBox4soumendu Жыл бұрын
Got it 🥹
@hughJ
@hughJ 4 жыл бұрын
I'm not convinced by the "not be able to switch it off" statement; I hear that routinely but the conversation never seems to linger on it long enough to scrutinize it and see if it holds up. It strikes me that any form of generalized AI, whether it be super-humanly intelligent or not, is inherently going to be slow (in terms of end-to-end stimuli->response latency of the pipeline) relative to simpler, less-abstract machines. A super-AI running on X GHz hardware won't be receiving input, interpreting it, and reacting to it in 1/X nanosecond, and the system's latency will grow further by orders of magnitude as you move from something that's localized to a square inch of silicon to something that's distributed in a rack or an entire datacenter of racks. That's unlikely to change at any point in the future either, as the limits of electromagnetic propagation put a hard ceiling on how quickly the information and state of a system can converge on some discrete result/action. These types of physical constraints give ample time for a piece of fixed-function control logic to assess and react because you're dealing with timescales that exceed the capability of the AI as much as the AI exceeds a human. That's not to say that there's no concerns of any kind with how new technology interfaces (or interferes) with our world, but I think anyone taking the time to express concerns of an existential risk has an obligation to be intellectually honest by describing it precisely and not utilize unconstrained thought experiments with unbounded terms.
@damionm121
@damionm121 4 жыл бұрын
hughJ Have you ever been unable to crystallize an idea and articulate it perfectly, but then someone else does and you couldn’t have done it any better? No? Me neither. Ha great comment man. I’d love to talk more with you about this.
@clarifier09
@clarifier09 4 жыл бұрын
Very concisely and clearly discussed. If the biggest fear of AI is that it will take over the world, why don't we give the world to it, along with the objective of educating all human minds to learn the skills necessary so that, when maximally coordinated with all other human minds, the end result would be satisfying food, shelter, clothing, healthcare, and worldwide travel and entertainment for all? With 24/7 input from each individual, everyone would have the benefit of being assisted by something that has access to all of the resources on the planet, and the ability to coordinate all human energy to create the lifestyle preferences of each individual, without anyone being dependent upon anyone else, yet everyone enjoying the interdependence of working the minimal hours necessary to achieve and maintain high personal satisfaction.
@WerdnaGninwod
@WerdnaGninwod 3 жыл бұрын
Did anybody else notice the bug that ran under his collar, just as he was talking about "the repugnant conclusion" at 50:47 ?
@bnjmnwst
@bnjmnwst 4 жыл бұрын
Anything which can be imagined is possible.
@damienlmoore
@damienlmoore 9 ай бұрын
Hope it's a mistake, but I am getting an ad every few minutes on this vid 😢
@roumenpopov622
@roumenpopov622 5 жыл бұрын
4th Law of Robotics: A robot should always present itself as a robot.
5th Law of Robotics: A robot should always know that it is a robot.
@eboomer
@eboomer 5 жыл бұрын
The first law of robotics is: Don't talk about Asimov's laws. The second rule of robotics is: Don't talk about Asimov's laws. They were a plot device for a work of fiction. They don't actually work at all.
@nekorbin
@nekorbin 5 жыл бұрын
Excellent video, Lex! Piaget Modeler below mentioned: "The Human Value Alignment problem needs to be solved before the Machine Value Alignment problem can be solved. Since factions of people are at odds with one another, even if a machine were in alignment with one faction of people, its values would still be at odds with the opponents of its human faction." I like this point! I must say, though, that I feel it may not be possible to resolve the "human value alignment" issue as homo sapiens. Past attempts at "human value alignment" (utilitarianism, socialism, etc.) have so far failed due to flaws in our own species. In addition to that, people often do things that are self-destructive (factions of the self at odds with itself), so building some kind of deep learning neural network based on uncertainty puts an almost religious level of faith in that AI system's ability to see beyond what we ourselves cannot see past in order to find a solution. The odds are stacked against the AI system being able to understand us and all of the nuances that make us so self-destructive in order to apply a grand solution in a manner that we presently would prefer (if one even exists). A controlled general AI (self-aware or not) at this point, I am guessing, would turn out to be some kind of hybrid between an emulated brain (tensors chaotically processing through a deep learning neural network) and a set of boolean-based control algorithms. I think it's probable the neural network would self-establish goals faster than we could implement any form of control that is desirable for us. Even if you were able to pull this off, it seems to me that an AI system would most likely conclude something like, "human values are incoherent, inefficient, and ultimately self-defeating; therefore, to help them I must assist in evolving beyond those limitations." Then post-humanism becomes the simultaneous cure to the human condition and the end of it.
It's terrifying to be on the cusp of this change, but I feel like it is the only way out of the various perpetual problems of our species. I also think it is likely that many civilizations have reached this same singularity point and failed to survive it. Perhaps the singularity is a form of natural selection that happens on a universal scale, and whether we survive or not is irrelevant to the end purpose. A species, any species, evolved to the point of having the goal and means to achieve an "end to all sorrow" for all other species within the universe seems like the ultimate species we should strive for - human, symbiotic AI, or otherwise. I personally feel OK becoming primitive to such a species as long as the end result is effective. I won't be volunteering to go to Mars or become an AI symbiotic neural lace test subject either. I've seen too many messed-up commercials from the pharmaceutical companies for that. I'll just sit back in my rocking chair, become obsolete, and watch myself be deprecated as the rest of the world experiments on itself. (Or I'll attempt suicide just as the Nazi robots arrive at my door. Hopefully I can hit the kill switch in time.) And now I will end this rant with what I hope will also be the final line of human input before its self-destruction... //LOL
@DataJuggler
@DataJuggler 5 жыл бұрын
1:17:00 Cupcake in a cup!
@victorfernandez9224
@victorfernandez9224 Жыл бұрын
He uploaded it on 9/12/18. Lex Fridman? From River Plate, gentlemen
@stephena.sheehan9959
@stephena.sheehan9959 5 жыл бұрын
Many complex and subtle points discussed, but as a popular takeaway: "data is not the new oil, data is the new snake oil." :-)
@KRYPTOS_K5
@KRYPTOS_K5 2 жыл бұрын
There is an invisible presupposition in all this dialogue: that people have strong, well-defined identities yet could be ill-informed or manipulated...
@carloscervantes836
@carloscervantes836 4 жыл бұрын
@50:45, Take this man's shirt off and burn it with fire! lol
@tomsavage9966
@tomsavage9966 3 жыл бұрын
Can you design an AI that can sync the voice with the lips?