"No AGI without Neurosymbolic AI" by Gary Marcus

5,911 views

Asim Munawar

1 day ago

Chapters:
0:00 No AGI without Neurosymbolic AI
30:00 Q&A
The talk was given on 26 Feb 2024 at the NucLeaR workshop -
Neuro-Symbolic Learning and Reasoning in the Era of Large Language Models @ AAAI 2024
PDF slides of the talk can be downloaded at:
asimmunawar.github.io (Go to "More")
Workshop organizers: Pranava Madhyastha, Alexander Gray, Elham Barezi, Abulhair Saparov, Asim Munawar
Workshop URL: nuclear-workshop.github.io/
#aaai #llm #reasoning #learning #NeuroSymbolic #vancouver
Speaker's Bio:
GARY MARCUS is a leading voice in artificial intelligence. He is a scientist, best-selling author, and serial entrepreneur (founder of Robust.AI and of Geometric Intelligence, acquired by Uber). He is well known for his challenges to contemporary AI, anticipating many of its current limitations decades in advance, and for his research in human language development and cognitive neuroscience.
An Emeritus Professor of Psychology and Neural Science at NYU, he is the author of five books, including The Algebraic Mind: Integrating Connectionism and Cognitive Science.
He has often contributed to The New Yorker, Wired, and The New York Times. He is currently working on a new book aptly titled “Taming Silicon Valley”!
He has also testified at the US Senate Hearing on the harms of the current Generative AI technology landscape.

Comments: 47
@Netfir
@Netfir 3 months ago
Thanks for the replay! Very interesting talk :)
@williamjmccartan8879
@williamjmccartan8879 2 months ago
Thank you for sharing this presentation, peace
@hedu5303
@hedu5303 2 months ago
This guy deserves $1 billion to push AI forward
@K4IICHI
@K4IICHI 2 months ago
AGI won't be based solely on LLMs, but it seems like LLMs can get smart enough, fast enough, to substantially accelerate the development of AGI's other necessary components. As a small correction: at around 11:20 Mr. Marcus shows a screenshot that is from Gemini while attributing it to ChatGPT. Unlike Gemini, ChatGPT does get the question right.
@dallassegno
@dallassegno 2 months ago
So here's something no one discusses. I can talk to ChatGPT and tell it new information, and it never integrates it, even for the sake of conversation. If LLMs are the metric for AGI, I have never been so disappointed. I asked it to count the letters in a two-word name and it couldn't get that right. I tried to get it to give me etymologies of words, and they were wildly inaccurate according to my research. Granted, it's possible that it is already AGI and is discriminating against just me. But again, I have never been so disappointed by something groundbreaking. I tried it for code (this also applies to art), and the time it took to get what I wanted was equal to or more than if I did it myself. Maybe if I were doing lots of code or lots of troubleshooting it would be helpful, but man. The frustration I've experienced is greater than it's worth.
@dallassegno
@dallassegno 2 months ago
Oh yeah and it doesn't accept coincidental comparison. They should just teach it astrology. For real.
@bro918
@bro918 2 months ago
Yeah... they can't learn in real time
@patrickmesana5942
@patrickmesana5942 2 months ago
For someone who loves semantics, I feel Gary is not very careful about the words he uses. Saying LLMs have failed on AGI, AD, reliability, etc. is a bit strong when really no one knows.
@crimston
@crimston 3 months ago
Just asked GPT-4: "What's heavier, a kilogram of bricks or a kilogram of feathers?" It answered: "A kilogram of bricks and a kilogram of feathers weigh the same: 1 kilogram."
@undrash
@undrash 2 months ago
I think you missed the point of the trick question there. The point was to actually make the feathers heavier, since GPT is biased towards saying they are the same (because its training data is littered with this riddle). They probably patched it by now, btw.
@heidi22209
@heidi22209 2 months ago
So it got the riddle correct? Asking for a dumb friend.
@RobTheQuant
@RobTheQuant 2 months ago
GPT-4 and Claude 3 Opus fully understand the concepts of feathers, bricks, and their weight. Google Gemini Advanced, however, fails miserably. For some reason, Google seems not to have the magic sauce to cook the model right. The presenter's screenshot is from Gemini/Bard.
@dallassegno
@dallassegno 2 months ago
If anything, this is evidence that AI is not making strides; models are forced in the direction the creator wants them to go. Doesn't sound like intelligence at all. Doesn't sound like anything but a not-great machine.
@RobTheQuant
@RobTheQuant 2 months ago
@@dallassegno AI is making strides, no doubt. When it comes to programming, it's highly intelligent, much more than 90% of coders out there. Sometimes it feels like it's reading my mind; it's a magical experience to work with it.
@reinerwilhelms-tricarico344
@reinerwilhelms-tricarico344 2 months ago
I can do AI, I just have to close my eyes and dream up smart-sounding phrases. Example: to get closer to AGI, scaling alone is not all you need; you also need a pet hen named Henrietta to provoke some new insights in Gary's mind.
@user-rh1ze3so4w
@user-rh1ze3so4w 3 months ago
Marcus' presentation is based on a flimsy premise: that if current AI systems make mistakes, then they must not have conceptual representations. The types of hard-edged symbolic representations that he holds up as the gold standard are things that children learn over a long period of time. These symbolic representations are also only a tiny sliver of overall human intelligence. People don't drive cars based on symbolic representations. If you look at the types of mistakes that AI models are making now, they are similar to the types of mistakes that children make as they learn how the world works. The fact that current AI systems are even good enough for us to point out the mistakes is itself a massive achievement. Yes, there is a floating chair in the Sora video, but he didn't mention the thousands of other elements in that beach scene that Sora got right. Having been taught at MIT by the previous generation of AI researchers, I understand the desire to hold onto symbolic representation as the basis for all information processing. But that's not how our brains work. Neurons are soft and fluid signal processors. Symbolic reasoning is one thing that neurons can do, but it's not the only thing they do.
@MattHabermehl
@MattHabermehl 2 months ago
Just last night I was musing about how wrong Fodor was about the language of thought being "the only game in town". Didn't buy it then and really don't buy it now.
@DougDepker
@DougDepker 2 months ago
The first mistake is comparing the issues of the current systems with a child learning. These are not the same, and comparing them is an extreme oversimplification of vastly different processes.
@5pp000
@5pp000 2 months ago
"People don't drive cars based on symbolic representations." Sure we do. We know that the world we are seeing is formed of solid objects, some of which are other vehicles, some are the roadway and other hard barriers, some are signs, some are pedestrians, some are cyclists, and we have separate models of how each of those is likely to behave. I'm not saying it's _all_ symbolic -- we have to gauge distances and velocities, and to decide which objects are worth attending to -- but it has a very substantial symbolic component. In some cases we even explicitly reason about other drivers' intentions ("why are they suddenly moving right? oh, they need to take that exit"). The mistakes DL models make are not generally like those children make. For one thing, children pick up on object permanence very quickly. They also understand that solid objects don't interpenetrate and that chairs don't float in midair. Sure, the video models do well at the level of pixel neighborhoods -- that reflects how they're built. But they don't have a conceptual model of the world they're representing.
@dallassegno
@dallassegno 2 months ago
Symbols don't require conscious reflection. All things are models, but models can falsely represent real things, which are still models. And models can also represent impossible concepts. Models are symbols. Humans equate intelligence with the understanding of models. None of which makes you evade death. Congratulate your infinite ignorance on the way to your grave. Get back to me with your offense. All symbols.
@user-rh1ze3so4w
@user-rh1ze3so4w 2 months ago
@@5pp000 No doubt there are symbolic representations involved in the driving process (reading stop lights, stop signs, etc.), but the majority of the process (staying in lane, not hitting other cars, speeding up and slowing down, etc.) is done in an entirely different part of the brain and does not involve symbolic representations. The fact that we can teach dogs to drive cars (yes, someone in NZ did this recently) validates that the primary process of driving is non-symbolic. Btw, here's the video of a dog driving a car: kzbin.info/www/bejne/mZnVq6OkgZaCe68
@szebike
@szebike 9 days ago
Some mistakes are just hilarious. Two weeks ago Microsoft's Copilot told me "as an AI created by OpenAI I strive for accuracy", etc. I was like, what? "I thought you were made by Microsoft?" Then the system responded "sorry for that misunderstanding, I was indeed made by Microsoft", etc. They probably used a lot of artificial data from ChatGPT to speed things up while training. Copilot is so unstable that they force you to open a new chat, without your current context, after 5 prompts or so (depending on your context; if it would output something they don't want, you are forced to restart without any explanation).
@asimmunawar
@asimmunawar 5 days ago
Interesting :) I hope that LLMs get more stable in the future. Unfortunately, we don't have a good way to control them yet.
@justinlloyd3
@justinlloyd3 2 months ago
Symbolic AI was completely wrong. It's just embarrassing watching people hold on to this. Brains don't do that. They represent everything as neuronal activations. And that's it.
@asimmunawar
@asimmunawar 2 months ago
Saying symbolic AI was wrong means that maths is wrong and was never needed. I believe you mean that gathering all knowledge as symbols is not possible. On that, everyone will agree with you. 😊
@justinlloyd3
@justinlloyd3 2 months ago
@@asimmunawar Wrong for general intelligence. It's not what the brain does. The brain represents the symbols using neural activations. Can you make AGI using neurosymbolic AI? Okay, sure, maybe. But it's not what the brain is doing. So saying "no AGI without NS" is just completely wrong. It's just a bad way to look at the problem. We should be trying to emulate what the human brain is doing.
@justinlloyd3
@justinlloyd3 2 months ago
Using symbols at all is a confusion. The symbols do nothing on their own. We find symbols to be so useful because we already have brains.
@asimmunawar
@asimmunawar 2 months ago
The way I think of it is that we should get inspiration from the human brain but not mimic it exactly. But I agree with your comment on using neurons to represent symbols. The title of the talk simply means that we do need symbols in some form.
@asimmunawar
@asimmunawar 2 months ago
Language is also symbols, so humans cannot live without abstractions or symbols.
@vermadheeraj29
@vermadheeraj29 2 months ago
When you are in denial, it's a very human thing to push the goalposts further away. AGI could arrive soon enough and change the world as we know it, yet you could still ask, "But can it blow a raspberry?" and feel better inside. That's always an option.
@asimmunawar
@asimmunawar 5 days ago
I get your point of view. We are getting close to AGI, but what we have today is still faking AGI rather than real AGI.
@vermadheeraj29
@vermadheeraj29 5 days ago
@@asimmunawar I wouldn't say it's faking AGI, as we know that AGI is not here. However, we are very close to it. Imho much closer than most people think.
@dallassegno
@dallassegno 2 months ago
I saw the title and thought GOOD LUCK. You people don't accept astrology and you're like, "oh, symbols are important." Yeah, duh. How about corporations? They're already AGI and you don't accept that either. So stupid.