Mindscape 280 | François Chollet on Deep Learning and the Meaning of Intelligence

6,731 views

Sean Carroll


5 days ago

Patreon: / seanmcarroll
Blog post with audio player, show notes, and transcript: www.preposterousuniverse.com/...
Which is more intelligent, ChatGPT or a 3-year-old? Of course this depends on what we mean by "intelligence." A modern LLM is certainly able to answer all sorts of questions that require knowledge far past the capacity of a 3-year-old, and even to perform synthetic tasks that seem remarkable to many human grown-ups. But is that really intelligence? François Chollet argues that it is not, and that LLMs are not ever going to be truly "intelligent" in the usual sense -- although other approaches to AI might get there.
François Chollet received his Diplôme d'Ingénieur from École Nationale Supérieure de Techniques Avancées, Paris. He is currently a Senior Staff Engineer at Google. He has been awarded the Global Swiss AI award for breakthroughs in artificial intelligence. He is the author of Deep Learning with Python, and developer of the Keras software library for neural networks. He is the creator of the ARC (Abstraction and Reasoning Corpus) Challenge.
Mindscape Podcast playlist: • Mindscape Podcast
Sean Carroll channel: / seancarroll
#podcast #ideas #science #philosophy #culture

Comments: 60
@rachel_rexxx 2 days ago
I gained a greater understanding! 'Twas a good one.
@kroyhevia 4 days ago
Watch the whole thing and then react... Starting off another great episode, as Mindscape does.
@DirtmopAZ 4 days ago
Exciting!
@ehfik 2 days ago
That was a great, information-packed episode. Thanks!
@davegrundgeiger9063 4 days ago
This is so good! Thank you for this!
@yuvalfrommer5905 4 days ago
Can you imagine a colour outside the visible spectrum? In what sense are the questions in the ARC challenge not themselves some high-dimensional interpolation of François's past experience? That's the one question that should have been asked. Listening was frustrating.
@SandipChitale 4 days ago
Excellent episode. Thanks Sean and François! About the discussion at 29:00: if you want a very technical understanding of how LLMs work, I highly recommend the following two videos on the 3Blue1Brown channel on YouTube: "But what is a GPT? Visual intro to transformers | Chapter 5, Deep Learning" and "Attention in transformers, visually explained | Chapter 6, Deep Learning". It is a very, very good explanation that is also relatively easy to understand. You will thank me for this pointer. You are welcome!
@MarkoTManninen 1 day ago
I found François's explanation even more helpful than 3B1B's.
@SandipChitale 3 days ago
I think there is some misunderstanding about the existential-threat part of the discussion at 1:28:30. The existential threat may not be from AGI, because there is no AGI today or in the near future. The issue is that current LLMs may convince someone (and all of us have had that uncanny experience with LLMs, let's be honest) to the extent that they may actually employ an LLM in a critical decision-making loop, deciding on a critical task based on how convinced they were of its abilities. That is the issue. Of course, if real AGI is invented, then the odds get potentially that much worse.
@Eric-vy1ux 4 days ago
Is there something about being embodied that makes human general intelligence difficult to achieve in AI? Is any work being done on this front?
@ronkrate609 4 days ago
His audio is bad.
@zack_120 4 days ago
AI is at 100% error rate now. Can it ever reach 0.0001%?
@missh1774 3 days ago
Thank goodness the AI scores only as low as 0% ... Can't imagine what it means if it went below that 😏
@andybandyb 4 days ago
Large language models just read the test
@alexanderg9670 4 days ago
The most plausible hypothesis for me is that LLMs contain world models. Likely primitive, frozen-errored, and alien in some ways, but communicable and useful nonetheless. Bigger = better so far. Looking forward to multimodal AI models very much, especially embodied ones with movement tokens.
@davidcampos1463 4 days ago
What François needs is a greeting for everyone. I propose: "Please state the nature of the AGI and/or the emergency."
@jonathanbyrdmusic 4 days ago
If you don’t care about quality, accuracy, or anything else, they’re great!
@zxbc1 4 days ago
LLMs are more accurate at finding a lot of information than the average human, in a fraction of the time. You can criticize them for not having true human intelligence, but quality and accuracy are the reason why millions of people use them.
@joshnicholson6194 4 days ago
Clearly a musician, aha.
@generichuman_ 4 days ago
It's amazing to me the number of people who clearly have no knowledge in this space yet speak with such confidence. They hear headlines like "Hallucinations!" and immediately think they understand the problem.
@takyon24 4 days ago
I mean, it's basically as good as a database for common tasks/queries. That's fairly useful; not exactly earth-shattering, but still.
@zxbc1 4 days ago
@takyon24 It's far more than a database for queries. Go try ChatGPT 4o right now and give it some complex task. Yesterday I asked it to look up the standings of the teams in Euro 2024 and give me a rough estimate of the chances Hungary has of qualifying as one of the best third-placed teams (actual prompt slightly more detailed, but not by much). ChatGPT went on the web, searched for the group standings and the remaining matchups, did *individual* win-draw-loss chance estimates for each of the matches, and used math to calculate the probability of Hungary advancing. It gave me a 5-page analysis detailing its math so I could check that it was correct, all within about 20 seconds.

This is just a small application. The other day I posted a detailed lab test result, including a bone marrow test image, of my aunt. It correctly and accurately diagnosed the disease exactly like the hospital doctor did (and gave more explanation than the doctor, too), and suggested the exact medication that the doctor prescribed her. And when briefly prompted, it also gave a very detailed weekly meal plan that supplements the treatment.

I don't think most people realize the degree of autonomous agency "simple" AIs like LLMs have already achieved. They're not close to anything we've had before.
@TheReferrer72 3 days ago
While I agree with some of François Chollet's criticisms of LLMs, I think he is wrong on a few details.
1. LLMs have not in any form exhausted the data: they trained on text from the internet and books, but most of the data we produce is in the form of images and sound.
2. To say they have peaked is a bad call; we have only had 18 months since the original ChatGPT, and training runs take time.
3. They are not an off-ramp even if they don't reach AGI (which I think they won't by themselves); the amount of compute and interest in AI because of these LLMs means that machine learning as a whole is going to be in much better shape than if it had stayed the curiosity of the giant tech labs.
Good to see him doing the rounds.
@deadeaded 3 days ago
Regarding your second point, remember that we don't just have one training/progress timeline to observe. There are now multiple competing LLMs, and they've all hit roughly the same level of performance. If the rate of progress was still fast (or even exponential, as some are still claiming), you would expect to see larger gaps in performance between the competing models.
@TheReferrer72 2 days ago
@deadeaded That's a bit false. In all the families of models (Llama, GPT, Claude, Phi, Gemini) we have seen uplifts across roughly two or three generations. Not only have they improved, but they have gone multimodal, and performance has skyrocketed while models have become smaller; Llama 3 9B and Phi can be run on consumer hardware. A peak won't happen for decades because of the hardware cycle. We have not even burnt these models into FPGAs yet, let alone into raw silicon. The only way I can see these LLMs reaching a peak is if a new architecture completely dethrones them!
@GeezerBoy65 3 days ago
I find this guest's speech in English difficult to understand; on many words it's like listening to someone talking underwater. I played around with the equalizer, but it was not any better. I had to give up and look at the transcript.
@MaxPower-vg4vr 3 days ago
Let me propose some initial theorems and proofs that could be explored in developing a mathematical framework that treats 0D as the fundamental reality:

Theorem 1: The existence of a non-zero dimension implies the existence of a zero dimension.
Proof sketch: If we consider a non-zero dimension, say 1D, it must be constructed from an underlying set of points or elements. These points or elements themselves can be considered as having zero spatial extent, i.e., they are 0D objects. Therefore, the existence of a 1D line or higher dimensions necessarily implies the existence of a more fundamental 0D reality from which they are built.

Theorem 2: Higher dimensions are projections or manifestations of the 0D reality.
Proof sketch: Building on Theorem 1, if 0D is the fundamental reality, then higher dimensions (1D, 2D, 3D, etc.) must emerge or be constructed from this 0D basis. One could explore mathematical frameworks that treat higher dimensions as projections, embeddings, or manifestations of the 0D reality, akin to how higher-dimensional objects can be represented or projected in lower dimensions (e.g., a 3D cube projected onto a 2D plane).

Theorem 3: The properties and structure of the 0D reality determine the properties and structure of higher dimensions.
Proof sketch: If higher dimensions are indeed projections or manifestations of the 0D reality, then the characteristics and laws governing the 0D realm should dictate the characteristics and laws observed in higher dimensions. This could potentially provide a unified framework for understanding the fundamental laws and constants of physics, as well as the nature of space, time, and other physical phenomena, as arising from the properties of the 0D reality.

Theorem 4: Paradoxes and contradictions in higher dimensions can be resolved or reinterpreted in the context of the 0D reality.
Proof sketch: Many paradoxes and contradictions in physics and mathematics arise from the assumptions and axioms associated with treating higher dimensions as fundamental. By grounding the framework in a 0D reality, these paradoxes and contradictions could potentially be resolved or reinterpreted in a consistent manner, as they may be artifacts of projecting the 0D reality into higher dimensions.

These are just initial ideas and proof sketches, and developing a rigorous mathematical framework would require significant work and collaboration among experts in various fields. However, some potential avenues to explore could include:
1. Adapting and extending concepts from point-set topology, where points (0D objects) are used to construct higher-dimensional spaces and manifolds.
2. Drawing inspiration from algebraic geometry, where higher-dimensional objects can be studied through their projections onto lower dimensions.
3. Investigating connections with quantum mechanics and quantum field theory, where point particles and fields are treated as fundamental objects, and exploring how a 0D framework could provide a unified description.
4. Exploring parallels with number theory and arithmetic, where zero and non-zero numbers have distinct properties and roles, and how these could translate to the treatment of 0D and non-zero dimensions.

Ultimately, developing a consistent and empirically supported mathematical framework that treats 0D as fundamental would require substantial theoretical and experimental work, but the potential payoff could be a deeper understanding of the nature of reality and a resolution of longstanding paradoxes and contradictions in our current physical theories.
@jimmyjustintime3030 3 days ago
Sorry, but this guy is really, really bad at explaining the basics, and even his few higher-level observations are just the same thing restated over and over, whatever you ask him. He is not even trying to answer your questions or thinking of a wider audience: when you probed him about the size of LLMs, he steered the conversation to his narrow pet project and said 8B "is actually quite big". Lol, zero knowledge transfer. Please interview someone else on this important topic.
@MrFaaaaaaaaaaaaaaaaa 3 days ago
I found his explanations about transformer model topology to be quite good.
@2CSST2 4 days ago
How surprising: all the AI-related guests you bring on just serve as an echo of your already-held opinion... Many, many AI frontier leaders do not agree *at all* that LLMs are just some sort of stochastic parrot.
@yeezythabest 4 days ago
Any recommendations on that front?
@alexanderg9670 4 days ago
@yeezythabest The Leahy vs Hotz debate
@2CSST2 4 days ago
@yeezythabest Really, the obvious one is the best one: Geoffrey Hinton, one of the godfathers of AI. Most of his talks and interviews are insightful, and he argues quite well why LLMs don't just reproduce statistics but do gain understanding.
@jonathanbyrdmusic 4 days ago
Follow the money.
@zxbc1 4 days ago
To hold the strong opinion that LLMs with a neural network architecture cannot achieve intelligence similar to humans' is to proclaim that you understand exactly how human intelligence works beyond being a very complex neural network. It's the ultimate form of hubris. I'm actually surprised that a skeptic like Sean does not challenge this opinion from this angle, especially coming from someone who is very far from being an expert in human intelligence, such as a computer scientist like François Chollet.