Mindscape 280 | François Chollet on Deep Learning and the Meaning of Intelligence

15,994 views

Sean Carroll

1 day ago

Comments: 88
@antonart2.0 4 months ago
Finally, someone who isn't trying to distort or shift the discourse for their own profit by exploiting general illiteracy, misconceptions, and the human tendency to anthropomorphize anything we interact with. Thank you so much, gentlemen. In my opinion, this was the best talk on the current state of affairs to date.
@philipkopylov3058 1 month ago
Speaking of profit: waiting to see how OpenAI is going to pay back its debts if it keeps burning the money it raised "to build AGI" when it has really only gotten as far as a beefed-up LLM.
@mastermusician 4 months ago
The fact that everyone is so upset at François in this comment section should tell you that the LLM crowd and its message have become totally dogmatic. There's nothing more annoying than seeing software "engineers" talk about "philosophy".
@kroyhevia 5 months ago
Watch the whole thing and then react... starting off another great episode, as Mindscape does.
@ehfik 4 months ago
That was a great, information-packed episode. Thanks.
@SandipChitale 5 months ago
Excellent episode. Thanks Sean and François! About the discussion at 29:00: if you want a very technical understanding of how LLMs work, I highly recommend the following two videos on the 3Blue1Brown channel on YT: "But what is a GPT? Visual intro to transformers | Chapter 5, Deep Learning" and "Attention in transformers, visually explained | Chapter 6, Deep Learning". It is a very, very good explanation that is also relatively easy to understand. You will thank me for this pointer. You are welcome!
@MarkoTManninen 4 months ago
I found François' explanation even more helpful than 3B1B's.
@rachel_rexxx 4 months ago
I gained a greater understanding! Twas a good one
@leastromain79 3 months ago
Great episode. François Chollet and Yann LeCun are bringing some sense and measure to "AI" discussions. LLMs are great, but a better architecture, less energy-consuming, with the ability to represent actions before they are executed, may (or may not) be the answer. It's worth exploring all the paths (no Feynman pun intended) rather than focusing dogmatically on one.
@yuvalfrommer5905 5 months ago
Can you imagine a colour outside the visible spectrum? In what sense are the questions in the ARC challenge not themselves some high-dimensional interpolation of François' past experience? That's the one question that should have been asked. Listening was frustrating.
@wwkk4964 4 months ago
Thanks for noticing. It appears François thinks human beings are magical, as if, were he faced with the problem of changing his accent on the fly to speak to the Pirahã in the Amazon, or of catching a fish with a bow and arrow in 3 shots, he could do it. He doesn't realise that he is limited in the very same way the LLMs are, despite thinking that a 3-year-old child climbing a rock is as amazing as the child flying like an eagle.
@maloxi1472 4 months ago
@@wwkk4964 Whether human beings are "magical" or not is irrelevant to the fundamental difference between ANI and AGI (which humans DO exhibit and current LLMs do not). If you are actually interested in understanding the "why" behind these objections, I'd recommend watching Deutsch's video on the subject ( kzbin.info/www/bejne/f5a8aYSXgtiMp7s )
@saturdaysequalsyouth 3 months ago
@@wwkk4964 I'm pretty sure François would say those examples are demonstrations of skill. What I took from what François was saying is that LLMs are much more limited than the general public seems to think, and I agree with him. I also agree with you that even human intelligence is limited. But I also think human intelligence is more general than you give it credit for. What I mean by that is not that François can catch a fish with a bow and arrow right now, but that he can learn to do it. An LLM like ChatGPT cannot learn to do that. AlphaZero cannot learn to do that. A Tesla cannot learn to play chess. But you and I can learn to drive and play chess. The human genome, in combination with an environment, specifies an architecture for learning. Machine learning systems specify an architecture for using a specific type of training to build a limited set of semantic models based on the training data. So I agree with François that there is still a fundamental distinction between human and machine learning.
@brightstar9870 4 months ago
A really great interview, debunking some of the LLM hype.
@alexanderg9670 5 months ago
The most plausible hypothesis for me is that LLMs contain world models. Likely primitive, frozen, error-ridden, and alien in some ways, but communicable and useful nonetheless. Bigger = better so far. Looking forward to multimodal AI models very much, especially embodied with movement tokens.
@Eric-vy1ux 5 months ago
Is there something about being embodied that makes human general intelligence difficult to achieve in AI? Any work being done on this front?
@deeptochatterjee532 3 months ago
I'm kinda curious about this too. I'm curious if integrating LLMs with some computer vision stuff will help to ground some of the word interpolation in some sort of visual/geometric understanding? That's how I envision humans working a bit (at least in part), but I also generally have no idea what I'm talking about
@vutat1443 3 months ago
I understand that Yann LeCun is at least thinking about embodiment and how that could be implemented.
@davegrundgeiger9063 5 months ago
This is so good! Thank you for this!
@jutjub22 3 months ago
Such a good channel, but adding video too would make it so much better and raise attention, making it more engaging...
@gtziavelis 4 months ago
Human group #1: "We will not have AGI in our lifetimes." Human group #2: "We have AGI already. It is here now." Clearly, the spotlight is not on artificial intelligence, but on the natural intelligence of humans, and how they compete more than collaborate; turns out collaboration is required when wanting to do cosmic-scale things like reaching out to the stars to contact extraterrestrials, or diverting an asteroid hurtling towards earth, to ensure our safety, etc., so if the AGI is as bad as us, its inventors, then maybe it's a good thing we haven't invented it yet, and it can certainly wait. Bonus shout out to Sir Roger Penrose who posits, I believe correctly, that consciousness is not a computational process, which, if true, implies that we will never have AGI because we are too intelligent ourselves.
@DirtmopAZ 5 months ago
Exciting!
@desgreene2243 2 months ago
François explained the limitations of LLMs very well. It makes me wonder why so many eminent minds (Max Tegmark etc.) take such a different view of the current direction of AI development. Are they deluded?
@DanFrederiksen 6 days ago
The first interview question should be how his name is actually pronounced in French :)
@maspoetry1 4 months ago
I can't believe this has so few views...
@SandipChitale 5 months ago
I think there is some misunderstanding about the existential-threat part of the discussion at 1:28:30. The existential threat may not be from AGI, because there is no AGI today or in the near future. The issue is that current LLMs may convince someone (and let's be honest, all of us have had that uncanny experience with LLMs) to the extent that they actually employ an LLM in a critical decision-making loop, deciding on a critical task based on how convinced they were of its abilities. That is the issue. Of course, if real AGI is invented, then the odds get potentially that much worse.
@randoomain7485 4 months ago
That sounds like a danger, but not an _existential_ threat.
@zack_120 5 months ago
AI is at 100% error rate now. Can it ever reach 0.0001%?
@davidcampos1463 5 months ago
What François needs is a greeting for everyone. I propose: "Please state the nature of the AGI and/or the emergency."
@andybandyb 5 months ago
Large language models just read the test
@missh1774 5 months ago
Thank goodness the AI scores only as low as 0% ... Can't imagine what it means if it went below that 😏
@jonathanbyrdmusic 5 months ago
If you don’t care about quality, accuracy, or anything else, they’re great!
@zxbc1 5 months ago
LLMs are more accurate at finding a lot of information than the average human, in a fraction of the time. You can criticize them for not having true human intelligence, but quality and accuracy are the reason why millions of people use them.
@joshnicholson6194 5 months ago
Clearly a musician, aha.
@generichuman_ 5 months ago
It's amazing to me how many people who clearly have no knowledge in this space speak with such confidence. They hear headlines like "Hallucinations!" and immediately think they understand the problem.
@takyon24 5 months ago
I mean it's basically as good as a database for common tasks/queries. That's fairly useful, not exactly earth shattering but still
@zxbc1 5 months ago
@@takyon24 It's far more than a database for queries. Go try ChatGPT-4o right now and give it some complex task. Yesterday I asked it to look up the standings of the teams in Euro 2024 and give me a rough estimate of the chances Hungary has of qualifying as one of the best third-placed teams (actual prompt slightly more detailed, but not by much). ChatGPT went on the web, searched for the group standings and the remaining matchups, did *individual* win-draw-loss chance estimates for each of the matches, and used math to calculate the probability of Hungary advancing. It gave me a 5-page analysis detailing its math so I could check that it was correct, all within about 20 seconds. This is just a small application. The other day I posted a detailed lab test result, including a bone marrow test image, of my aunt's; it correctly and accurately diagnosed the disease exactly as the hospital doctor did (and gave more explanation than the doctor, too) and suggested the exact medication that the doctor prescribed her. And when briefly prompted, it also gave a very detailed weekly meal plan to supplement the treatment. I don't think most people realize the degree of autonomous agency "simple" AIs like LLMs have already achieved. They're not close to anything we've had before.
@TheReferrer72 5 months ago
While I agree with some of François Chollet's criticisms of LLMs, I think he is wrong on a few details.
1. LLMs have not in any form exhausted the data: they trained on text from the Internet and books, but most of the data we produce is in the form of images and sound.
2. To say they have peaked is a bad call; we have only had 18 months since the original ChatGPT, and training runs take time.
3. They are not an off-ramp even if they don't reach AGI (which I think they won't by themselves); the amount of compute and interest in AI because of these LLMs means that machine learning as a whole is going to be in much better shape than if it had stayed the curiosity of the giant tech labs.
Good to see him doing the rounds.
@TheReferrer72 4 months ago
@@deadeaded That's a bit false. In all the model families (Llama, GPT, Claude, Phi, Gemini) we have seen uplifts across roughly two or three generations. Not only have they improved, but they have gone multimodal, and performance has skyrocketed while models have become smaller: Llama 3 8B and Phi can be run on consumer hardware. The peak won't happen for decades because of the hardware cycle. We have not even burnt these models into FPGAs yet, let alone into raw silicon. The only way I can see these LLMs reaching a peak is if a new architecture completely dethrones them!
@learnflask5652 2 months ago
Weak people always needed a golden calf
@MaxPower-vg4vr 4 months ago
Let me propose some initial theorems and proofs that could be explored in developing a mathematical framework that treats 0D as the fundamental reality:

Theorem 1: The existence of a non-zero dimension implies the existence of a zero dimension.
Proof sketch: If we consider a non-zero dimension, say 1D, it must be constructed from an underlying set of points or elements. These points or elements themselves can be considered as having zero spatial extent, i.e., they are 0D objects. Therefore, the existence of a 1D line or higher dimensions necessarily implies the existence of a more fundamental 0D reality from which they are built.

Theorem 2: Higher dimensions are projections or manifestations of the 0D reality.
Proof sketch: Building on Theorem 1, if 0D is the fundamental reality, then higher dimensions (1D, 2D, 3D, etc.) must emerge or be constructed from this 0D basis. One could explore mathematical frameworks that treat higher dimensions as projections, embeddings, or manifestations of the 0D reality, akin to how higher-dimensional objects can be represented or projected in lower dimensions (e.g., a 3D cube projected onto a 2D plane).

Theorem 3: The properties and structure of the 0D reality determine the properties and structure of higher dimensions.
Proof sketch: If higher dimensions are indeed projections or manifestations of the 0D reality, then the characteristics and laws governing the 0D realm should dictate the characteristics and laws observed in higher dimensions. This could potentially provide a unified framework for understanding the fundamental laws and constants of physics, as well as the nature of space, time, and other physical phenomena, as arising from the properties of the 0D reality.

Theorem 4: Paradoxes and contradictions in higher dimensions can be resolved or reinterpreted in the context of the 0D reality.
Proof sketch: Many paradoxes and contradictions in physics and mathematics arise from the assumptions and axioms associated with treating higher dimensions as fundamental. By grounding the framework in a 0D reality, these paradoxes and contradictions could potentially be resolved or reinterpreted in a consistent manner, as they may be artifacts of projecting the 0D reality into higher dimensions.

These are just initial ideas and proof sketches, and developing a rigorous mathematical framework would require significant work and collaboration among experts in various fields. However, some potential avenues to explore could include:
1. Adapting and extending concepts from point-set topology, where points (0D objects) are used to construct higher-dimensional spaces and manifolds.
2. Drawing inspiration from algebraic geometry, where higher-dimensional objects can be studied through their projections onto lower dimensions.
3. Investigating connections with quantum mechanics and quantum field theory, where point particles and fields are treated as fundamental objects, and exploring how a 0D framework could provide a unified description.
4. Exploring parallels with number theory and arithmetic, where zero and non-zero numbers have distinct properties and roles, and how these could translate to the treatment of 0D and non-zero dimensions.

Ultimately, developing a consistent and empirically supported mathematical framework that treats 0D as fundamental would require substantial theoretical and experimental work, but the potential payoff could be a deeper understanding of the nature of reality and a resolution of longstanding paradoxes and contradictions in our current physical theories.
@GeezerBoy65 4 months ago
I find this guest's speech in English difficult to understand; it's like listening to someone talking underwater on many words. I played around with the equalizer, but it wasn't any better. I had to give up and look at the transcript.
@matthieukaczmarek 4 months ago
I can easily understand François' French accent 😉but this highlights a common problem. People often criticize foreign accents or expressions. But listening requires effort too. If we don't make that effort, we limit ourselves and miss out on diverse perspectives.
@shinkurt 4 months ago
He is actually very very easy to understand
@argonthesad 2 months ago
@@shinkurt No he's not, lol. He badly mispronounces several words, making it a chore to follow what he's saying. He should work on his spoken English if he's going to do more of these talks.
@shinjirigged 4 months ago
Every time machines meet the metric for intelligence, we move the bar. It's funny; I wonder how you think your own intelligence works.
@jimmyjustintime3030 4 months ago
Sorry, but this guy is really, really bad at explaining the basics, and even his few higher-level observations are just the same thing restated over and over, whatever you ask him. He is not even trying to answer your questions or thinking of a wider audience: you probed him about the size of an LLM and he steered the conversation to his narrow pet project, saying 8B "is actually quite big", lol. Zero knowledge transfer. Please interview someone else on this important topic.
@MrFaaaaaaaaaaaaaaaaa 4 months ago
I found his explanations about transformer models as topology to be quite good.
@matthieukaczmarek 4 months ago
The explanations are clear, *if* you have the necessary mathematical and calculus background. It is true that he did not address that part.
@jimmyjustintime3030 4 months ago
@mindurownbussines Dude is obviously amazing in all kinds of ways, but explaining AI to a wider audience is not his strong suit.
@2CSST2 5 months ago
How surprising: all the AI-related guests you bring on just serve as an echo of your already-held opinion... Many, many AI forefront leaders do not agree *at all* that LLMs are just some sort of stochastic parrot.
@yeezythabest 5 months ago
Any recommendations on that front?
@alexanderg9670 5 months ago
@@yeezythabest The Leahy vs Hotz debate
@2CSST2 5 months ago
@@yeezythabest Really the obvious one is the best one: Geoffrey Hinton, one of the godfathers of AI. Most of his talks and interviews are insightful, and he argues quite well why LLMs don't just reproduce statistics but do gain understanding.
@jonathanbyrdmusic 5 months ago
Follow the money.
@zxbc1 5 months ago
To hold the strong opinion that LLMs with a neural-network architecture cannot achieve intelligence similar to humans' is to proclaim that you understand exactly how human intelligence works, beyond its being a very complex neural network. It's the ultimate form of hubris. I'm actually surprised that a skeptic like Sean does not challenge this opinion from that angle, especially coming from someone who is very far from an expert in human intelligence, such as a computer scientist like François Chollet.