Yann LeCun: Deep Learning, ConvNets, and Self-Supervised Learning | Lex Fridman Podcast #36

173,972 views

Lex Fridman

A day ago

Comments: 208
@lexfridman 4 years ago
I really enjoyed this conversation with Yann. Here's the outline:
0:00 - Introduction
1:11 - HAL 9000 and Space Odyssey 2001
7:49 - The surprising thing about deep learning
10:40 - What is learning?
18:04 - Knowledge representation
20:55 - Causal inference
24:43 - Neural networks and AI in the 1990s
34:03 - AGI and reducing ideas to practice
44:48 - Unsupervised learning
51:34 - Active learning
56:34 - Learning from very few examples
1:00:26 - Elon Musk: deep learning and autonomous driving
1:03:00 - Next milestone for human-level intelligence
1:08:53 - Her
1:14:26 - Question for an AGI system
@shinochono 4 years ago
Thanks for sharing amazing conversations. I so envy your job; it gives you the opportunity to meet some of the top brains in the industry. :)
@aviraljanveja5155 4 years ago
Such outlines for videos, especially the long ones, are super necessary!
@gausspro8937 4 years ago
@lex fridman did you record your discussion with Judea? I just finished his book, The Book of Why, and would be interested in hearing you two discuss these topics!
@francomarchesoni9004 4 years ago
Keep them coming!
@PhilosopherRex 4 years ago
The appearance of the flow (arrow) of time is due to entropy.
@beta5770 4 years ago
Next, Geoff Hinton!
@BiancaAguglia 4 years ago
I second that. 😊 Geoff is several orders of magnitude more knowledgeable than I am, yet he still manages to make me feel I can follow most of his thought processes. He's a witty teacher and a great storyteller.
@lexfridman 4 years ago
We'll make it happen for sure!
@zhongzhongclock 4 years ago
@@lexfridman Let him stand during the interview; his back is not good, and he can't comfortably keep sitting for a long time.
@connorshorten6311 4 years ago
I'd love to see Yann LeCun on the Joe Rogan podcast as well! I am continually impressed with the high caliber of guests you have on the podcast. Great work, Lex!
@connorshorten6311 4 years ago
@@DeepGamingAI Definitely! It's a really interesting medium; it doesn't require too much preparation on the end of the interviewee and only takes an hour of time!
@colorfulcodes 4 years ago
They respect him and he knows how to ask the right questions. Also by this point he has the portfolio so it's much easier.
@muhammadharisbinnaeem1026 3 years ago
I agree with your thought there, @@colorfulcodes. (Y)
@muhammadharisbinnaeem1026 3 years ago
Yann can definitely take on Rogan, as he doesn't hold back. =D
@Crazylalalalala 3 years ago
Rogan is not smart enough to ask interesting questions.
@LukasValatka 4 years ago
Wonderful interview, Lex! I love the fact that it's one of the few podcasts that balance both general-public and expert-level information - to me as a deep learning engineer, this series is not only inspiring but also surprisingly useful :).
@BiancaAguglia 4 years ago
I feel the same way. For example, the interview with Pamela McCorduck was interesting to me precisely because she's not an AI expert. I ended up loving it not just because of the stories she told (she did, after all, witness history in the making) but because it showed her remarkably good understanding of the AI field and her ability to talk about complex topics in easy-to-understand terms. Of course, just because she's not an expert in AI techniques doesn't mean she hasn't spent quite some time trying to understand them at a high level and to understand their history and potential. That knowledge, plus her storytelling skills, made her fascinating to listen to.
@sibyjoseplathottam4828 4 years ago
This is one of your best interviews yet. We need more people like LeCun in all fields.
@MrTransits 4 years ago
These podcasts are priceless!!! Little Sumpin' Sumpin' Ale, sit back, and listen.
@BiancaAguglia 4 years ago
12:26 "Machine learning is the science of sloppiness." 😁 It's the first time I've heard it described that way (and it makes perfect sense.)
@aviraljanveja5155 4 years ago
Brilliant podcast! The reasoning and arguments at play were beautiful, along with a lack of ego and a lot of honesty in admitting mistakes. For example, Lex at this moment - 50:00
@stevenjensjorgensen 4 years ago
Really good source of future research directions for AI here! These are the kind of conversations you can only have at conferences, so thank you, Lex, for bringing it to us.
@JKKross 4 years ago
This was very thought-provoking - loved it! The ending was magical: "I'd ask her what makes the wind blow. If she says that it's the leaves, she's onto something..."
@zrmsraggot 2 years ago
Lex didn't catch that
@anthonybiel7096 2 years ago
"I will ask him what is the cause of the wind. ..." It refers to 23:30
@motellai8211 4 years ago
What a great interview!!! I can never thank you enough, Lex!!
@RAOUFTV16 4 years ago
The most beautiful podcast YouTube channel ever! Congrats from Algeria.
@user-mw2gf5zh4g 4 years ago
Thank you so much for all your interviews, Lex! You have the coolest podcast about AI. Keep delighting us with awesome guests!
@Kartik_C 4 years ago
These podcasts are a goldmine! Thanks Lex!
@jamesanderson6882 4 years ago
Great interview, Lex and Yann! I would love to be a fly on the wall at a dinner with LeCun, Hinton, and Bengio (no idea if they would talk about CS). Lex, try to get Peter Norvig sometime. I still don't understand why LISP is not more popular.
@seanrimada8571 4 years ago
30:00 Lex in his undergrad years. We all enjoyed this talk; thanks for the podcast, Lex.
@hanselpedia 4 years ago
Great stuff again, thanks Lex! I really enjoy the interactions when the intuitions start to diverge...
@ciaran7780 4 years ago
There are no stupid questions!!! ... I enjoy your podcasts/interviews, thanks so much.
@WheredoDoISATS 19 days ago
I wish he would wear a colored tie today as well. Ty for your work.
@Qual_ 4 years ago
Subscriber from France here; I love all your podcasts! And congratulations on having guests like Yann LeCun.
@DamianReloaded 4 years ago
Mr LeCun is a great communicator. I enjoyed every second of this interview. Looking forward to seeing where his research will lead us all.
@aw6507 4 years ago
Dr. LeCun....
@joseortiz_io 4 years ago
My favorite part was the section on unsupervised learning. It was fascinating ❤
@deeliciousplum 4 years ago
Dii - damn impressive intelligence! Priceless. 🧠 Thank you for sharing this interview of an exceptionally stimulating gentleman. You both place so much on the table, which compels the listener to explore. You may have read this often, yet I must share that your vids cut out all the tiring and time-consuming extraneous sound bites. Your vids are brimming with knowledge, concerns, and ideas. Thank you, Lex, for all that you do. 🍂
@balareddy8625 4 years ago
Hi Lex. That's a great podcast on today's most emerging and cutting-edge tech, with Yann LeCun (a father of neural nets). His comments, suggestions, and journey through data science are most sensible and valuable. Thanks to you, and hats off to Yann LeCun.
@atomscott425 4 years ago
I love this! It's interesting how much Yann LeCun's and Jeremy Howard's views differ on active learning and transfer learning. Would like to see them discuss their views with each other.
@bowenlee3597 4 years ago
I did not leave comments on YouTube at all, at least previously. But seriously, I think this interview is so good, insightful, and "deep" that I must congratulate Lex Fridman for this great interview with Yann LeCun.
@tusharagarwal2994 1 year ago
Lex, you are quite a great inspiration to me, thanks for the talk!
@jeff_holmes 4 years ago
Another great interview. It's great to hear that Yann is tackling the issue of modeling the world in baby steps. It seems to be the most logical way of achieving "common sense" intelligence - you have to know a little bit about many models. I'd love to hear more conversation about how you might go about making better connections between models - a key part of reasoning. You can have a model of physics and you can have a model of car mechanics and self preservation - how do you make the connections between them to anticipate and predict the consequences of driving off the proverbial cliff? The answer must lie in some kind of abstraction layer that would be used to identify how models are related.
@littech4637 4 years ago
Wow, great interview, Lex.
@shashanks.k855 2 years ago
This was amazing! As always, thank you so much.
@sandtiwa 4 years ago
Next, Richard Sutton!! Thanks for all your work, Lex.
@danteinferno9187 4 years ago
Yann is a genius. Thanks, Lex!
@MrDidymus88 2 months ago
Thank you both 😊
@alexandresantiago5910 3 years ago
Nice! Yann LeCun is breaking some concepts in this episode, haha.
@eyuchang 4 years ago
Great interview. On the importance of training data volume, Yann still insists that a relatively modest amount of data suffices for training. On general AI, Yann's arguments against the term "general" are interesting (though a bit strange). On self-supervised training using BERT as an example, I would argue that it is still supervised training with voluminous training data. The key here is that ground truth still exists to be compared against. Inspiring interview!!
@JousefM 4 years ago
Thanks Lex! I really like Yann and his accent :D
@gitgen1887 2 years ago
We are all inside a Lex Fridman simulation right now. Whenever you get here, think about it.
@AndrewKamenMusic 4 years ago
I really wish the convo started with that last question.
@mededovicemil2218 3 years ago
Great interview!!!!
@ayushthada9544 4 years ago
When will we see Dr. Daphne Koller on the podcast?
@gitgen1887 2 years ago
What a talk wow. I'm still deep learning here.
@lettucefieldtheorist 4 years ago
Causality is perfectly accounted for in statistical physics, even in non-relativistic scenarios. In fact, the T-symmetry (or CPT symmetry) of fundamental, microscopic interactions is a key ingredient for the so-called fluctuation theorem (not to be confused with the fluctuation-dissipation theorem). It shows that, macroscopically, positive entropy production occurs on average, which indicates the existence of an arrow of time. So, time-reversible microscopic physics leads to T-asymmetric physics on the macroscopic scale, showing that the 2nd law of thermodynamics is only a statistical statement that can be violated locally. The fluctuation theorem actually gives an expression for the ratio of probabilities that an entropy change of A or -A occurs during a time interval t, and it turns out to be proportional to exp(-A t). Negative entropy production is therefore not forbidden, but is exponentially suppressed and therefore never observed macroscopically.
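For reference, a compact LaTeX statement of the fluctuation theorem described above (standard form; the symbol Σ_t for the entropy produced over a time interval t is my own notation):

```latex
% Ratio of probabilities of observing entropy production A versus -A
% over a time interval t:
\[
  \frac{P(\Sigma_t = A)}{P(\Sigma_t = -A)} = e^{A t}
\]
% Equivalently, negative entropy production is exponentially suppressed:
\[
  P(\Sigma_t = -A) = P(\Sigma_t = A)\, e^{-A t}
\]
```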
@HURSAs 4 years ago
I am glad someone has pointed that out, because from Professor LeCun's statement that "physicists don't believe in causality on the micro level," people may jump to the wrong conclusion.
@vast634 3 years ago
Basically, thinking can only happen in forward-flowing time. That's why we can only observe forward-flowing time.
@Hexanitrobenzene 3 years ago
Yeah, a not-so-small misunderstanding of physics on LeCun's part.
@carystallings6068 2 years ago
That's what I was going to say.
@martinsmith7740 4 years ago
RE: causes of the 1980s/90s AI Winter: in addition to those Yann mentioned, I think another factor was "opportunity cost." That is, there's only so much private investment and grant money to go around. The advent of the PC, local and then wide-area networking, and then the Internet sucked up all the available air (investment, public attention, etc.). That, combined with the immaturity of the infrastructure required for AI (compute power, memory, big data) and the failure of rule-based approaches (because they were un-maintainable), made investment in AI less attractive. I think opportunity cost is also part of the explanation for the "Space Winter" following the Moon landings. Space just didn't seem to present as many near-term opportunities as computers did.
@lasredchris 4 years ago
Captivated the world
@KelvinMeeks 4 years ago
Most Excellent!
@alexselivanov299 4 years ago
I see a new podcast from Alex, I smash like. Simple.
@allurbase 4 years ago
The recurrent updating of state is the core, I think: activation/suppression on an embedding space that updates itself recurrently.
@filipgara3444 4 years ago
I really like this way of understanding the world in terms of AI.
@vikashkumar994 1 year ago
Great talk
@mattanimation 4 years ago
"Just about everything..." - 1:09:42 LOL that perfect expression of what I think everyone really thinks about Sophia deep down.
@elmersbalm5219 1 year ago
@55:00 I remember reading about depth perception in different animals, including lab tests having young mammals crawl over glass or patterns. It indicates that there is a rudimentary encoding. Still, I'm sure it gets reinforced in the relatively sheltered life of a young cub/child. Kind of like there is a predisposition to get startled and pay attention to certain classes of problems. A baby stumbling over a step or hitting a wall causes the child confusion as it works out a general pattern for which the brain is already primed. I doubt I'm saying anything controversial or groundbreaking. Mostly I shared because of LeCun's prior example of an AI car learning to avoid a cliff. Now I'll continue listening, as he most probably is going to explicate the above.
@karelknightmare6712 2 years ago
An episode with Matthieu Ricard would actually be great, given your concerns about the emotional drive of human intelligence - to try to get a sense of artificial wisdom, maybe.
@deeplearningpartnership 4 years ago
Amazing.
@robkrieger3455 1 month ago
What is the 'cause' of the current AI that we have (and the future AI that we'll create)? Would be interesting to hear Lex's and Yann's response to this. And also, their 'correct' answers to the cause-of-wind question (I'm assuming they won't say so that the Sun and Earth can let the trees know what it's like to dance through space too). Excellent interview.
@DayB89 4 years ago
I'm only at 3:30 and I already hit the like button =)
@sippy_cups 4 years ago
god tier crossover
@SudipBishwakarma 4 years ago
This is HUGE!!!!
@vinceecws 2 years ago
A point on 47:50: I think that when LeCun compares self-supervised learning tasks in NLP vs. vision and says that in the former it's easier to achieve good performance than in the latter, it's not exactly a fair comparison. An oversimplified example would be that for NLP tasks, you would mask no more than 2-3 words in a row while training a model to infer the missing gap. On the other hand, for vision tasks, one would mask a relatively huge block of the input images (sometimes occluding entire objects), so it would necessarily be a harder task (it might even be tough for humans?). A more level comparison would be to pit it against vision models trained on input images with sparsely masked pixel groups (10 x 10 at the most, roughly), though I'm not entirely sure that in this case the model would be able to learn much in terms of the semantics of objects in the training images.
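A toy sketch of the asymmetry described above (the masking sizes are illustrative choices of my own, not numbers from the episode or from any paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# NLP-style masking: hide 2 of 14 tokens in a sentence.
tokens = "the cat sat on the mat because it was warm and sunny outside today".split()
mask_idx = rng.choice(len(tokens), size=2, replace=False)
masked = ["[MASK]" if i in mask_idx else t for i, t in enumerate(tokens)]
print(" ".join(masked))
print(f"tokens masked: {2 / len(tokens):.0%}")          # ~14% of tokens

# Vision-style masking: occlude one 80x80 block of a 224x224 image.
image = rng.random((224, 224))
image[60:140, 60:140] = 0.0                              # contiguous occlusion
print(f"pixels masked: {(80 * 80) / (224 * 224):.0%}")   # ~13% of pixels...
# ...a similar fraction hidden, yet the block can cover an entire object,
# so inferring the missing content is far more ambiguous than filling
# in two masked tokens surrounded by intact context.
```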
@rwang5688 3 years ago
On what's surprising about deep learning (paraphrase): "Do what the textbook tells you is wrong (I found out later, when I read the textbooks)." "Intelligence is impossible without learning. The idea that imperative programming with rules alone could lead to an intelligent system is counterintuitive."
@pyxelr 4 years ago
There was so much great information that after the session, I spent some time collecting a bunch of key takeaways ⭐:
➡ humans, in fact, don't have a "general intelligence" themselves; humans are more specialised than we like to think of ourselves
- Yann doesn't like the term AGI (artificial general intelligence), as it assumes human intelligence is general
- our brain is capable of adjusting to things, but there are tasks outside of our comprehension
- there is an infinite amount of things we're not wired to perceive; for example, we think of gas behaviour as a pure equation PV = nRT -- when we reduce the volume, the temperature goes up and the pressure goes up (for a perfect gas at least), but that's still a tiny, tiny number of bits compared to the complete information of the state of the entire system, which would give us the position and momentum of every molecule
➡ to create AGI (human intelligence), we need 3 things:
1) an agent that learns predictive models that can handle uncertainty
2) some kind of objective function that you need to minimise (or maximise)
3) a process that can find the right sequence of actions needed in order to minimise the objective function (using the learned predictive models of the world)
➡ to test AGI, we should ask a question like "what is the cause of wind?" If she (the system) answers that it's because the leaves on the tree are moving and that creates wind, she's on to something. In general, these are questions that reveal the ability to do
- common-sense reasoning about the world
- some causal inference
➡ the first AGI would act like a 4-year-old kid
➡ an AI which reads all the world's text might still not have enough information for applying common sense; it needs some low-level perception of the world, like visual or touch perception. Common sense will emerge from
-- a lot of language interaction
-- watching videos
-- interacting in virtual environments/the real world
➡ we're not going to have autonomous intelligence without emotions, like fear (anticipation of bad things that can happen to you) - it's just deeper biological stuff
➡ unsupervised learning as we think of it is still mostly self-supervised learning, but there is definitely hope to reduce human input
➡ the most surprising thing about deep learning: you can build gigantic neural nets, train them on relatively small amounts of data with stochastic gradient descent, and it works!
-- that said, every deep learning textbook is wrong in saying that you need to have a smaller number of parameters and that with a non-convex objective function you have no guarantee of convergence
-- the model can still learn even with a huge number of parameters, a non-convex objective function, and data small relative to the number of parameters
➡ neural networks can be made to reason
➡ in the brain, there are 3 types of memory:
1) memory of the state of your cortex (disappears in ~20 seconds)
2) shorter-term (hippocampus) - you remember the building structure or what someone said a few minutes ago; it's needed for a system capable of reasoning
3) longer-term (stored in synapses)
➡ Yann: "You have these three components that need to act intelligently, but you can be stupid in three ways" (objective predictor, a model of the world, policymaker) - you can be stupid because
-- your model of the world is wrong
-- your objective is not aligned with what you are trying to achieve (in humans, it's called being a psychopath)
-- you have the right world model and the right objective, but you're unable to find the right course of action to optimise your objective given your model
- some people who are in charge of big countries actually have all three of these wrong (it's known which ones)
➡ AI wasn't as popular in the 1990s, as code was hardly ever open-sourced and it was quite hard to code things in Fortran and C; it was also very hard to test the algorithms (weights, results)
➡ math in deep learning has more to do with cybernetics and electrical engineering than math in computer science
- nothing in machine learning is exact; it's more the science of sloppiness
- in computer science, there is enormous attention to detail, every index and so on
➡ Sophia (the robot) isn't as scary as we think (we think she can do way more than she can)
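A minimal sketch of the three-component loop listed above (my own illustration; the world model, objective, and random-shooting planner are placeholders, not LeCun's code):

```python
import random

def world_model(state, action):
    # Component 1 placeholder: a learned predictive model of the world.
    # In practice this would be a neural net trained with self-supervision.
    return [s + a for s, a in zip(state, action)]

def objective(state):
    # Component 2 placeholder: a cost to minimise, e.g. distance to a goal.
    goal = [1.0, 0.0]
    return sum((s - g) ** 2 for s, g in zip(state, goal))

def plan(state, horizon=3, samples=200):
    # Component 3: search for the action sequence that minimises the
    # objective under the world model (naive random shooting here).
    best_cost, best_seq = float("inf"), None
    for _ in range(samples):
        seq = [[random.uniform(-1, 1) for _ in state] for _ in range(horizon)]
        s = state
        for a in seq:
            s = world_model(s, a)
        cost = objective(s)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

print(plan([0.0, 0.0])[0])  # first action of the best plan found
```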
@Hoondokhae 4 years ago
that was a good one
@muzammilaziz9979 4 years ago
Please interview Fei Fei Li from Stanford Vision Lab
@prakhartiwari6397 28 days ago
"Ask her what is the cause of wind, and if she answers because the trees are moving, she's onto something." Damn!!!
@williamramseyer9121 3 years ago
Another great interview (and frankly, all of them so far have been great). So many wonderful ideas. My comments:
1. The HAL problem. Designing an objective function for an AI through laws has several problems: a) there is no judge (how does the AI know whether it is on track or breaking the rules?); and b) laws are words and phrases - very inexact compared to math or logic. Good lawyers differ greatly in their opinions as to the meaning of legal words and phrases. Consider the US Constitution, which has been the subject of thousands of cases as to the meaning of small phrases, such as "freedom of speech," "right of the people to keep and bear arms," and "unreasonable searches and seizures" (from the first 3 Amendments in the Bill of Rights). After reading enough documents, including dictionaries, literature, and legal authorities, and analyzing the meaning of the subject words, a competent AI may conclude, "Hmm, this law does not apply to me." (Disclosure: I am a lawyer, although not, to my knowledge, an AI.)
2. The limits of intellectual property laws in the coming world. The existence of an AI, or even an augmented or virtual person, may include patented algorithms. Courts can order the destruction of infringing items. An Artie (an AI creature, as I call them), or a virtual or augmented human, may also have a body that infringes copyright or trademark. Its infringing body could be ordered destroyed. Ouch. Of course, you might be able to negotiate a license fee to stay alive and keep your face.
3. If intelligence comes from learning, then is the level of human intelligence limited only by how much a human can learn?
4. Specialized human intelligence. If we have a highly specialized brain that only recognizes what we are capable of processing, then is the math that we use just an arbitrary math system among many - merely the only one that we can (so far) conceive of? (This is my intuition.)
5. I liked Yann LeCun's idea that we should not lie to an AI and expect a good result. Harry Frankfurt argues in the book "On Bullshit" (from my recollection) that when people lie or are lied to, they live in an insane world. Has anyone ever thought of the following problem? If an advanced AI tries to understand the contradictions and dishonesty of human history, will it become insane? Thank you. William L. Ramseyer
@electrodacus 4 years ago
I like the example with the scrambling of the optic nerve. It's about the same thing as you watching one of those digitally scrambled TV stations as your only input to the brain; I'm fairly sure the brain would not be able to decode that and understand what is in the image, as the brain cannot rearrange the pixels, but one of the "artificial brains" can likely learn that, maybe even without training data. It was a great example to show we are not general intelligences. Someone, maybe Michio Kaku (not sure), mentioned a thermometer as a level-one intelligence, as it reacts to temperature; each thermometer reacts slightly differently, but in a way that is predictable to us. So an intelligence a step above ours could see us as fully predictable systems, and that would mean we probably do not have free will.
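A tiny sketch of that scrambling thought experiment: a fixed random permutation of pixels keeps all the information (it is invertible) but destroys the local spatial structure that eyes and ConvNets rely on:

```python
import numpy as np

rng = np.random.default_rng(42)
image = rng.random((28, 28))            # stand-in for any input image

perm = rng.permutation(image.size)      # one fixed scrambling, like the
flat = image.ravel()                    # rewired optic nerve in the example
scrambled = flat[perm].reshape(image.shape)

# All the information is still there - the permutation is invertible...
inverse = np.argsort(perm)
assert np.allclose(scrambled.ravel()[inverse].reshape(image.shape), image)

# ...but neighboring pixels are no longer neighbors, so any system whose
# prior assumes local spatial structure (eyes, ConvNets) can't exploit it.
```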
@AvantGrade 2 years ago
We are general with respect to all the things we can imagine, but that is only a subset of all the things that are possible.
@colorfulcodes 4 years ago
Nice thanks.
@jonathanschmidt1668 4 years ago
Great interview. Does anyone know the paper involving Leon Bottou that was talked about around 21 min?
@calebseymour 4 years ago
arxiv.org/abs/1102.1808
@justinmallaiz4549 4 years ago
I'm sure experts are also inspired by these podcasts. Lex: I think your podcast will indirectly solve (human-level/general) AI... :)
@muhammadharisbinnaeem1026 3 years ago
Nice optimism, Justin. (Y)
@_fox_face 1 month ago
12:33 the big question
@KaplaBen 4 years ago
21:26 Here is the paper he is talking about: "Invariant Risk Minimization" arxiv.org/pdf/1907.02893v1.pdf. I happened to have it open in a tab right next to this one. Highly recommend
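For anyone skimming that paper: its IRMv1 objective adds, per training environment, a penalty on the gradient of the risk with respect to a frozen scalar classifier. A condensed PyTorch sketch of that penalty (my own paraphrase of the paper's idea, not code from it; y is assumed to be a float tensor of 0/1 labels):

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    # IRMv1 invariance penalty: squared gradient of the environment risk
    # with respect to a fixed "dummy" scalar classifier w = 1.0.
    w = torch.tensor(1.0, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * w, y)
    grad = torch.autograd.grad(loss, [w], create_graph=True)[0]
    return grad.pow(2)

# Full objective over environments e:
#   sum_e [ risk_e + lambda * irm_penalty(logits_e, y_e) ]
```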
@futuristudios 4 years ago
1:03:04 - HER, human level intelligence 1:08:52 - Necessity of embodiment (Sophia) vs grounding (Her)
@mitalpattni1977 4 years ago
This is the first time I've seen Lex Fridman arguing this much with the person he is interviewing, and it's Yann LeCun. :p
@wenmo47 3 years ago
Really love your podcast. I know this would be a lot of work, but the auto-generated captions are not always right. They are pretty good but can be really wrong when it comes to special terms, and most of the time, those are the words we really want to know. If you could add your own subtitles, that would be great, especially for people who are deaf or hard of hearing, or for whom English is not their first language. Thanks.
@franklinabodo871 4 years ago
At 20:53, did Lex just leak that the next podcast to be published is an interview with Judea Pearl?
@prayaanshmehta3200 1 year ago
Founding father of CNNs, in particular their application to OCR (optical character recognition) and the MNIST dataset. 7:49 - the most surprising idea in DL?
@farbodtorabi7511 4 years ago
59:08 look at the guy in the background HAHA
@Wolfmoss1 4 years ago
Haha! Now that's one bouncy-ass chair XD
@thankyouthankyou1172 3 years ago
he's thinking, reasoning, ...:)
@LE0NSKA 3 years ago
yooo, what is this chapter thing and why is this the only video that has it?!
@erosennin950 4 years ago
Hey Lex, we need a couple of laypeople talking about their thoughts on AI to make clear what we developers are dealing with. For example, I would listen to Joe Rogan's thoughts on AI for an hour!!
@user-qw2dt8yw2h 3 years ago
HAL 9000 brought me here. I saw the movie for the first time just a few days ago. One of the most terrifying films I've ever seen. The quietness of space is nice, but HAL isn't. The person or team who created HAL failed to take into account human error and how HAL could assist with it. I understand that it's just a movie, but it's a good template for developers. I know this comment is late and most probably the idea is obsolete, but sometimes deja vu happens for no reason at all.
@MrOftenBach 4 years ago
Great interview! The only thing I disagree with is the visual cortex example allegedly proving that human intelligence is not general. First off, we need to distinguish sensory perception from logical reasoning. Senses might indeed be specialized, and there are good evolutionary reasons for that. Even so, this specialization is premised on a great deal of abstraction and generalization - we ignore all unnecessary patterns and convolve multiple disparate 'pixels' into a coherent representation. So even the specialization of visual function is based on our ability to generalize. Secondly, the ability to recognize all parameters of a system state, as in the gas example, is not what makes intelligence general. On the contrary, it's the ability to infer physical laws from limited observations or, even better, deductively, without any training data at all.
@Hungry_Ham 4 years ago
1:08:32 Savage
@NateTheProtestant 4 years ago
What is high? What is higher? What is learn? What is learning?
@kimchi_taco 4 years ago
21:00 causality
@ej9806 4 years ago
Still waiting for Andrew Ng or Andrew Yang :)
@binmosa 4 years ago
Who else paid attention to this quote from Yann at 1:08:20: "some people who are in charge of big countries actually have all three that are wrong" 😅
@exacognitionai 4 years ago
Good video. Using contextual frameworks gives objective functions both a self-awareness and boundary goals (seed frameworks) to create a rudimentary, trainable #AI conscience similar to humans. Unfortunately, it can also be turned off through self-learning by the machine. Like a side mirror on a car, autonomous intelligence is closer than it appears. The impact of patenting software on innovation is akin to patenting language: eventually everything is owned by someone and the world becomes mute.
@AyberkAsik7 4 years ago
What's up with the watch?
@Matteo-uq7gc 4 years ago
38:05 It would be very cool if a robot could figure out what a bean bag chair is. The reason being, it looks nothing like a chair but serves the purpose of a chair.
@ChristopherBare 4 years ago
“You can be stupid in three different ways: you can be stupid because your model of the world is wrong, you can be stupid because your objective is not aligned with what you actually want to achieve [...], or you are unable to figure out a course of action to optimize your objective given your model.” 1:08:08
@cuatropantalones 4 years ago
1:04:30 What is the name of the researcher he mentions? Captions are way off.
@cuatropantalones 4 years ago
I think he was speaking of Emmanuel Dupoux
@nimishshah3971 4 years ago
Conflicting statements: @10:00 learning is better than programming; intelligence cannot be attained by programming. @55:00 advocates physics-model-based predictions. Isn't including physics a kind of "programming"? One could, of course, 'learn' all the physics, but as he said, that involves driving off the cliff 1000s of times. So we do need hybrid models: learning + programming.
@anoopramakrishna 4 years ago
On the contrary, humans learn physics without driving off cliffs 1000s of times. The pre-learned model he mentioned is likely from a self-supervised learning task to understand basic physics, like a child throwing toys on the ground, rather than an adult driving a car off a cliff. I feel sympathetic to Lex's position on active learning; it appears to me that this basic model learning would be easier through active learning.
@domasvaitmonas8814 4 years ago
Hi. Where can I find the paper by Bottou?
@2207amol 4 years ago
www.technologyreview.com/s/613502/deep-learning-could-reveal-why-the-world-works-the-way-it-does/
@Vaeldarg 4 years ago
I believe that the core of the confusion around a legitimate "artificial intelligence" (a man-made consciousness), is whether to treat it as the kind of being that was meant to be re-created/copied or a machine. If it is truly an artificial re-creation of for example a dog's mind, do you interact with it as a machine or as a dog? It is the same problem as whether a copy that is identical to the original should be treated any differently to the original.
@prof_shixo 4 years ago
What a nice podcast! Especially the argument around AGI was quite interesting. However, I found Yann's claim that human intelligence is not general quite odd, as what he points to in his argument is a limitation in sensing, not in reasoning. If the mechanism the human brain uses were supported with better sensing capabilities and processing power, it could widen its reasoning to new concepts that were outside its perception horizon. Just take Einstein's theory of relativity as an example of how capable a human brain is of reasoning about and modeling things that are beyond its sensing capabilities! Is that a too-specialized reasoning mechanism?!
@shubhvachher4833 3 years ago
I think what Yann was trying to allude to was a mathematical function that would, in his example, take pixels from a camera, but randomly shuffled, as an input and still be, for example, able to come up with the "laws of gravity". This kind of a system would probably far exceed human computational capability, including human reasoning. The way I understand it is, if a "smart" system saw the transverse of a roll of toilet paper and was able to come up with how many sheets of toilet paper there are (among other learnable results) then it would have an intelligence "more general" than human beings.
@edh615 4 years ago
5:50 the guy behind keeps shaking his head LOL
@zrmsraggot 2 years ago
So basically what he says during the first 5 minutes is... we will make a car first, run it really fast, and then we will try to figure out how to implement brakes on it. Eurk
@Sal1981 4 years ago
The contention Yann LeCun has against Sophia, I share. Ben Goertzel is a hack.
@jfort5234 4 years ago
There are many hacks in silicon valley and in tech.
@darianharrison4836 4 years ago
To put things into perspective, the 40:47 point did not even take into account wavelength; hence: "It deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae."
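A tiny script (my own toy, not from the comment) that generates exactly this kind of text - inner letters shuffled, first and last letters kept in place:

```python
import random

def scramble_word(word):
    # Keep the first and last letters; shuffle everything in between.
    if len(word) <= 3:
        return word
    inner = list(word[1:-1])
    random.shuffle(inner)
    return word[0] + "".join(inner) + word[-1]

sentence = "It doesn't matter in what order the letters in a word are"
print(" ".join(scramble_word(w) for w in sentence.split()))
```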
@jeffjohnson8624 1 year ago
Lex Fridman, if you want to learn about emotion algorithms, please interview Dr. Wallace of Pandorabots. Dr. Wallace double-majored in computer science and psychiatry; I think he might have PhDs in both subjects.
@froggenfury6169 4 years ago
Your voice and tone are super weird, but actually super relaxing to listen to in a podcast format.
@ianborukho 4 years ago
sounds slightly drunk, but still has great questions XD
@sergisfunny 4 years ago
1:08:00 Yann: "You have these three components that need to act intelligently, but you can be stupid in three ways (objective predictor, model of the world, policy maker):
1. Your model of the world is wrong.
2. Your objective is not aligned with what you are trying to achieve (in humans, it's called being a psychopath).
3. You have the right world model and the right objective, but you're unable to find the right course of action to achieve it.
Some people who are in charge of our countries actually have all of these three wrong."
Lex: "Which countries?"
Yann: "We do."
@azizaza8287 4 years ago
Correction: "Some people who are in charge of our countries have actually all of these three, wrong" should be "Some people who are in charge of 'big' countries have actually all of these three, wrong." The other correction is that 'some' should be 'most'. :)