I really enjoyed this conversation with David. Here's the outline:
0:00 - Introduction
1:06 - Biological vs computer systems
8:03 - What is intelligence?
31:49 - Knowledge frameworks
52:02 - IBM Watson winning Jeopardy
1:24:21 - Watson vs human difference in approach
1:27:52 - Q&A vs dialogue
1:35:22 - Humor
1:41:33 - Good test of intelligence
1:46:36 - AlphaZero, AlphaStar accomplishments
1:51:29 - Explainability, induction, deduction in medical diagnosis
1:59:34 - Grand challenges
2:04:03 - Consciousness
2:08:26 - Timeline for AGI
2:13:55 - Embodied AI
2:17:07 - Love and companionship
2:18:06 - Concerns about AI
2:21:56 - Discussion with AGI
@anthonymannwexford · 4 years ago
Super interview. Thank you Lex, for all your hard work and dedication to these wonderful podcasts and KZbin uploads. Very much appreciated by all of us.
@QuantPhilosopher89 · 4 years ago
Lex, your podcast keeps getting better and better. You seem more comfortable, you ask better questions, and your guest selection has always been great. One thing that could potentially be improved is that the talks could be a little more technical: you could discuss specific papers that changed the field, and ask why certain algorithms were designed the way they were and how they might be developed further. I know that's very difficult to do with the broad range of guests you have and the frequency with which you upload new podcasts. Thanks a lot for doing these.
@M0481 · 4 years ago
I respectfully disagree with what you are suggesting. Technical details can be found in the respective papers. The 'why' is often embedded in the character of the person being interviewed, so a lot of it can be deduced from interviews. I personally really enjoy getting to know the scientists behind the science.
@jacobvanveit3437 · 4 years ago
QuantPhilosopher89 I think these talks work because they are so organic. If he's constantly sourcing material you lose the deeper conversations.
@frankdinies5111 · 4 years ago
Watched this episode multiple times. Thanx Lex, for getting inspiring people, like David.
@trimbotee4653 · 4 years ago
It's a crime this podcast only has 30k views. Keep up the good work Lex!
@InfiniteCyclus · 4 years ago
Loved it. Have to watch it twice though. So dense!! Keep it up..
@tommylee3819 · 4 years ago
Amen
@chrisb.3031 · 4 years ago
That story about his dad (approx 1:50:00) just WOW.
@deeliciousplum · 4 years ago
Exceptionally profound talk. Though it's ~2h30m long, it's full of deeply stirring ideas. David's experiences with statistics-based decision-making about brain death have changed my understanding of such sensitive, yet necessary, health and well-being issues.
@exacognitionai · 4 years ago
Most interesting podcast guest yet.
@kipling1957 · 4 years ago
Another excellent episode! Threads of this conversation remind me of the philosopher Simon Schaffer's contention that scientific agreement on what constitutes fact is ultimately a social enterprise of trust and human relationship (see Leviathan and the Air-Pump). Tenuous, I know, but out-of-the-box.
@kozepz · 4 years ago
Interesting conversation, and good to see Lex learn more about interpreting political systems through frameworks :). I'm learning so much about the diversity of thinking among all these experts by means of your questioning. Love you Lex, and I'm grateful to you for having these conversations!
@eeshanbhagwat7257 · 4 years ago
Amazing podcast!!! Thank you so much!!
@metafuel · 4 years ago
Wow! This was a fantastic episode. Thank you.
@johnrayworth2760 · 4 years ago
What I love best about these podcasts is that I can see my own “Ah ha”s reflected in your “Ah ha”s, and that I can feel an almost real time, collective stretching of our minds and our understanding.
@Sergiuss555 · 4 years ago
okay okay, buddy, you feeling okay there now?
@miscellaneous9932 · 4 years ago
Lex Fridman Experience with David Ferrucci. Great podcast!
@SingularityFM · 4 years ago
Great interview, brilliantly conducted! Kudos!!!
@Jaroen66 · 4 years ago
Awesome podcast, as always! So when can we expect Geoffrey Hinton on the podcast?
@stevejones6398 · 1 year ago
Great discussion toward the end on the risks of relying too much on inductive reasoning, particularly w.r.t. statistical inference, an elephant in the room in much scientific discourse.
@wyattbarry5794 · 4 years ago
Another one! Get this guy back. I'd love it if every podcast was 2 hours.
@darkstardream1551 · 4 years ago
The best conversation ever in AI💖
@TheMechanic204 · 4 years ago
This is absolutely great. Thank you!
@Dazzer1234567 · 4 years ago
Very interesting podcast, keep up the great work!
@peterk3139 · 4 years ago
Awesome podcast Lex! Love this content.
@souravthakur1252 · 4 years ago
Welp, got my podcast for tomorrow's subway ride.
@jacobvanveit3437 · 4 years ago
Lex asks all the best questions. He's constantly focusing these great minds on trying to answer the big question: what do we need to come together to create the singularity? Or better yet, how do we infuse computer systems with creativity that makes sense to human understanding?
@NoWay-vm2oz · 4 years ago
I would love to see Bryan Johnson, founder of Kernel, on your show. In general, I'd be interested in the merging of human and artificial intelligence via brain-computer interfaces. Keep up the awesome work!
@stoikr · 4 years ago
Yumans lol great interview/conversation! :)
@newenglandbarbell4647 · 4 years ago
Lex, what you say at 22:40 regarding advertising is interesting. I recently read a book called "Alchemy: The Surprising Power of Ideas That Don't Make Sense" by Rory Sutherland and think you would find it interesting. Love the podcast and thank you for taking the time to share these conversations 🙌👏
@courtneypuyear2877 · 3 years ago
The other problem is: if you are only presented with what fits your world view, then how are you ever going to be skeptical enough to think through a problem?
@RalphDratman · 4 years ago
This is more philosophy than electronics. I heard someone lamenting that even computers that have learned a lot "understand nothing." But the brain, considered purely as a physical entity, also understands nothing. We understand the world with our whole organism: brain, muscles, skin, eyes, limbic system, and so forth. To understand the world, I suggest that a computer would need some kind of body, one that can both perceive and act on that world.
@Passiday · 4 years ago
A wonderful, deep episode, thank you, Lex. Too bad we live in the tl;dr age, when one must present ideas in sound-bite-sized chunks to get traction. Some things, like a 500-page book or a two-hour conversation between wise men, simply cannot be compressed for easier consumption.
@seanfitzgerald4207 · 4 years ago
Great discussion! Thank you so much for this, Lex! I have been thinking that reinforcement learning with curiosity as the reward function could maximize innovation and exploration, learning more and more instead of just focusing on a fixed outcome it seeks to maximize ad infinitum.
@MiguelPousa · 4 years ago
Seems to me they got the discussion wrong about whether an agent needs mortality to be considered intelligent. I think the objective is rather to become more free, in the sense of having more options, since that general "metric" will help with almost all of an agent's other objectives.
@dapdizzy · 4 years ago
"Conscious of what?" - right, you have to have an inherent relation to something to be aware of it. Thus consciousness could be inherent to whatever possesses it, and it could be impossible to create something conscious without it already possessing consciousness. It's like making something random: you need an external source that is inherently random. Without it there is no way of creating randomness.
@Bookhermit · 4 years ago
I think the next step would be an AI that can learn a new game simply by visually observing it being played (i.e. it has to determine EVERYTHING: game space, pieces, moves, winning conditions, etc., just by watching humans play the game on camera). Possibly allow it to ask the players plain-text questions, which they would answer as they would for a human asking the same question about the game.
@jeff_holmes · 4 years ago
In some ways, getting AI to develop an understanding of the world (38:00, like "stuff falling to the ground") is a bit similar to predicting the contents of images. The system needs a way to run hundreds (or thousands) of experiments in order to develop abstract understandings of physics. Is this so different from what has been done with ImageNet? Isn't it just another type of model? Perhaps the actual physical experience, with some or all of the sensory feedback experienced by humans, is required (that is, we need AI in robots), at least where an intelligence similar to that of humans is desired.
@bloodypommelstudios7144 · 4 years ago
I think people distrust AI on social media mainly for two related reasons: 1. As Ferrucci said, the AI can't explain its behavior. 2. Nor can the social media companies, and furthermore they aren't very forthcoming about the data sets the AI has been trained on. Without this it is very easy to assume ill intent on the companies' behalf.
@bearwolffish · 4 years ago
I don't know if intelligence can be boiled down to efficiency; I think sometimes the two are even in tension. I quite liked Chollet's take, where environment and specialization are equally important factors in the equation. Design an autonomous rover that navigates your room with increasing efficiency, then throw it in a bathtub and watch it sink.
@richarddevenezia8186 · 4 years ago
@1:41:47 Waiting for "uncanny valley" or "rubicon", got "threshold".
@johnbremner4154 · 3 years ago
I'm interested in the evolution of AGI from the point of view that our own GI didn't require a creator, and it may follow that AGI could evolve from AI naturally, substituting huge memory capacity, huge data, and huge chains of inference for our own huge history... I was wondering what you think. Could it arise spontaneously from the mud of the Internet and the cloud?
@thegreatdanbino69 · 4 years ago
Powerful goatee
@forgetfulfunctor2986 · 4 years ago
simply connect with lex because he's got no holes!
@AZTECMAN · 4 years ago
1 hour 58 minutes. I can't agree more. Cognitive biases and (problem solving) intelligence are tightly intertwined.
@bearwolffish · 4 years ago
Another excellent talk. David made a great point about setting a higher bar for what constitutes intelligence, not being satisfied with a "super parrot" as he put it. We are so easily manipulated, I'm sure we'll have believable customer service bots long before we have truly "intelligent" A.I.
@AlistairAVogan · 4 years ago
Great conversation. Super point/observation made at 1:43:40: when we communicate, we construct a version of the other person in our minds (lots of evidence supports this), and it is with this version that we interact. This is why the Turing Test as a benchmark for AI is perhaps enough. For embodied AI to be socially engaging and impactful, it doesn't matter that it isn't human or even really conscious. It only matters that we construct a consciousness in our minds. Then the magic begins...
@907borrego · 4 years ago
What are yoomans?
@NikolayMurzin · 4 years ago
After the discussion about the relative nature of intelligence and its goal dependence, I wonder: is there a rigorous mathematical definition of intelligence stated in such terms?
@Fiscus128 · 4 years ago
The problem is that 'intelligence' has never been properly defined (as far as I know). As such, AI struggles to make the next step (towards AGI). Through deep learning techniques a computer is almost perfectly capable of determining whether there is a cat or a dog in an image, but it has no idea what a cat or dog is (as a human being does), let alone being able to independently combine elements of cats and dogs to create something new.
@ArtfrontNews100 · 4 years ago
You have to rewatch the South Park episode with the German Funnybot :-) ha ha
@courtneypuyear2877 · 3 years ago
The fear I have is that the most beautiful things we value as humans will be seen as weaknesses by artificial intelligence, and therefore as weaknesses to be destroyed. Art, music, and emotion will be snuffed out through the evolution of technology, because they all require human emotion, human suffering, and human irrationality.
@RalphDratman · 4 years ago
The tremendous versatility of human language derives from our ability to create and convey implicit analogies. So, for example, if you are familiar with the phrase "climb a tree," and one day I tell you that I "climbed a mountain," you are usually able to infer what I mean, even if you never before heard "climb" applied to anything other than a tree. Currently, as far as I know, a machine might be able to make a syntax-level connection between "climb a tree" and "climb a mountain," but I have not heard of any machine that can see how the use of "climb" could be extended to include "a mountain". The machine might note that "climb" is used with both "tree" and "mountain" from reading some text, but to infer that "climb" refers to traveling against the gravity vector sounds hard to me. To do that, it would help a lot for the machine to have actually climbed a tree, or at least looked up to the top of some tree. The machine would have to know what it means for a human to travel in some direction by movements of the human's body. And I suspect that is hard to explain or acquire. Yet after I, as a human, have learned to crawl and then walk, I already know what it means to propel myself and thereby arrive in some different place.
@jefffhaynes · 4 years ago
Autopilot fails every couple miles? Have you driven a Tesla?
@Mangolay1000 · 3 years ago
What I want and what David wants are worlds apart, I guess... Hell, give me Watson to help me search the net for my simple questions!! I mean, I was simply trying to learn how to opt in to the newest beta version of a game. Simple search, right?? 3 hours later I stumbled on a video that answered my question accidentally.
@RalphDratman · 4 years ago
David's imagined AGI that goes out and reads some body of literature "in three milliseconds" is essentially a second brain with its own separate time budget. Like having a really smart grad student who never leaves school to get a real job, or a highly intellectual servant who would never think of changing jobs. Someone like Jeeves for Bertie Wooster, assuming Bertie were himself a thinking person.
@WHORTH-TheUglyDucklingOfFORTH · 4 years ago
I would be perfectly satisfied if it could do it in three months...
@RalphDratman · 4 years ago
@@WHORTH-TheUglyDucklingOfFORTH Let us perhaps compromise on three hours -- while you are asleep. Then in the morning you would know so much more about, um, that body of literature. Or the literature of that body.
@cerca11 · 4 years ago
Wondering about the question "How do you show that an AI has consciousness?"... Is it possible to answer this question for any other human or entity besides **yourself**? You are aware of your own consciousness. Beyond that, the consideration becomes speculation and assumption.
@johnnyneckar4977 · 3 years ago
Great stuff but good lord, please don't microwave your grilled cheese
@katiavramova7629 · 10 months ago
It would be interesting if some day one of these AIs would play not Jeopardy but a Что? Где? Когда? ("What? Where? When?") game as an equal, contributing team member. From what I've heard it looks like we're not there yet, but it would be interesting if someone at least tried to develop capabilities at that level...
@elizabethcaton6355 · 4 years ago
lol now I know why gary the bot thought I'm an alien
@bloodypommelstudios7144 · 4 years ago
Yeah, I think self-preservation is trivial for an AI. Generally speaking, for an AI death means failing its task, so avoiding death naturally becomes a priority. Sometimes self-preservation becomes too high a priority and they become overly cautious, such as CodeBullet's Snake AI, which constantly went around in circles rather than collecting fruit, or the Tetris AI that paused the game when it knew the next piece would kill it (it's possible I'm over-anthropomorphizing these behaviors, but I think my interpretation is logical). As AIs get smarter and gain greater ability to manipulate the world, I could see this becoming a big potential risk we need to overcome.
@rwang5688 · 3 years ago
IBM Watson sounds like a massively parallel expert system, just with advanced hardware and algorithms for semantic analysis, search generation, candidate answer generation, and scoring. In short: IBM Watson was a massively parallel hack 😄👻
@rwang5688 · 3 years ago
Ok ... ML was used to evaluate the various hacks. I love Lex’s reactions about how the project was managed and the hack was put together. It’s obvious the approach is not as extensible to other domains. But the old IBM management didn’t care.
@Stacz_Dinero · 4 years ago
Wow, at about 1:22:00, when Lex asked him "looking back at that, what are you most proud of?", it seemed like no one had really asked him that before; you see him instantly phase off into deep thought, the emotions just blasting through him. He was even having trouble getting the words out. Can't imagine how he must feel, having made such a groundbreaking breakthrough in a tough field like AI, which since being theorized was viewed by most as just a dream, not achievable, with slow growth for 30+ years. He takes on tasks no one else wants, and then actually succeeds, finally showing the world that AI can be capable of some of the visions that seemed out of reach. IBM Watson set in motion the revolutionary changes made since, and the future aspirations of the technology to come. They finally proved computers are capable of being better than expert humans at genuinely hard human tasks, igniting mass fear that AI might become so intelligent and sentient it takes over from humans lol, and showing that human intelligence is not actually so special and isn't improving, while AI progresses to other levels. He literally changed the world and the future of mankind.
@Torterra_ghahhyhiHd · 3 years ago
If Alan Turing were alive, I think he would like to work with IBM no matter the paycheck.
@lasredchris · 4 years ago
Watson Engineering intelligence Memory capacity
@casperhansen3012 · 4 years ago
David Ferrucci ?= Walter White with hair Great conversation though.
@PiyushSihag1 · 4 years ago
Still waiting for Don Knuth
@dapdizzy · 4 years ago
They voice some really interesting thoughts, but I can't stop noticing how biased they are towards what they believe to be true. That's definitely a benefit when doing your daily job, but a conversation becomes entertaining when new ideas (or doubts) appear right as you speak. There is a glimpse of that here, but when they talk about their vision of the far future, they really just speak their (old) beliefs, like "AI will outperform humans", without questioning, even for an instant, what it means to outperform, or whether it even has anything to do with intelligence. I mean, they are pretty assertive, and it does not do the discussion any good. A discussion becomes entertaining when you really open up and some new ground gets created in the moment. We tend to wrap even the most brilliant ideas in some sort of mental wrap (you can call it crap), and discussion is a perfect place to open that up if you know how it works. Anyway, these are some of the best discussions out there; I really enjoy watching them.
@dapdizzy · 4 years ago
I wrote this while watching, in the middle. I must admit he really opened up later, very well. You can almost see their discussion move into new ground by the end. Very impressive; I really relate to it and sort of get a feel for it. Amazing ending!
@hoolerboris · 4 years ago
Human emotions are probably not "explainable" in simple language in the first place. The words we use, like "I'm happy because ____ happened", are just an imprecise approximation of the millions of hormone releases and neuron firings that build up our "mood" and influence our decisions. In that sense, "uninterpretable" "savant" machines would be offering truer explanations of our emotions than machines built to express their findings in simple human language. This may be true of much more than just our emotions, and it is why some people claim that the quest for interpretable machine learning is a waste of time. The question is whether symbolic thinking is actually a necessary step at some point in the advancement of cognition. Can machines do without it forever, or does any complex learning system end up using something equivalent to symbolic thinking? (You could think of it as compressing known knowledge and making use of it without decompressing.)
@Matis_747 · 4 years ago
100👍🏻
@seguramx · 4 years ago
Talks like a salesman: 2+ hrs of analogies and overly broad generalizations
@Sergiuss555 · 4 years ago
Well, he was a project manager lol. He gave some specifics about how the job was organized and implemented, without analogies, and he was rather precise and clear with his answers. He also clearly thinks deeply about this topic.
@mennovanlavieren3885 · 4 years ago
kzbin.info/www/bejne/jZnXpWV-asScqa8 "And if the machine can help us break that argument down and said:'Wait a second, what do you really think about this?' " Does the machine literally mean a metric second when it said "wait a second"?
@tractatusviii7465 · 4 years ago
Get Karl Friston on this motherfucker. Please. Thank you, Lex.
@lonesaiyan27 · 3 years ago
Did Watson predict COVID though?
@samisoumya8967 · 4 years ago
First
@GroovyVideo2 · 4 years ago
Bernie Sanders 2020
@weerobot · 4 years ago
Lex sounds like he needs Prozac...
@lop2167 · 4 years ago
Good podcast. Although Lex likes to talk shit about millennials, he is technically a millennial himself (1981-1996). I'm sure a large majority of his audience is millennials. It's funny to see smart people like him show such basic human flaws. I'm glad David wasn't really entertaining that thought.
@Torterra_ghahhyhiHd · 3 years ago
It's a very bad idea to make AI imitate the human ego in order to survive. Mortality can bring wisdom to a human, because he suffers from the idea that everything he liked to experience will end: he doesn't want to say goodbye to the people he appreciates, to all the subjective enjoyable experiences, to the dreams and goals he is still fighting for. This makes us reflect and recalibrate our true priorities, and over time it creates self-discipline along with internal emotional intelligence. But that doesn't happen with Watson, because Watson doesn't suffer; it imitates the behavior. Humans think some things, like the soul, are irreplaceable, and so on. Watson can be restructured, has no perception of time, and doesn't live the lifetime of being born, growing, getting old, and dying, so it doesn't conceive of these concepts as its form of being, and it doesn't feel the same qualia, or any at all. Even if it conceived of itself as living, it might think "this is a very stupid error, I'd prefer to be a rock" and then self-destruct. And if an AI doesn't feel qualia, it also doesn't need discipline: it won't get distracted by emotion, won't feel resentment, won't hate, etc.