I would say that LLMs are just mathematical models, and I'd be happy to leave it at that, except that they display presence of mind, and some of the answers I've had back from my prompts have been deeply insightful. So I'm undecided about their level of understanding, if there is any understanding at all. One of the oldest arguments about AGI, going back to when the idea was first born, is whether we would create it through a greater scale of computation or through some special, undiscovered technique. I find it exciting that we are still having that same discussion today, even after so many leaps forward.
@hahahasan 1 year ago
It's a really hard question to answer. Right now I think the answer is almost certainly no, it doesn't understand. But a couple of iterations down the line, I'm really not so sure. Also, I've always found invocations of the Chinese room argument lacking, largely because to use it according to the scientific method we'd need a control. As far as I understand, that control is usually a human. But I struggle to see the sense in which a human is said to understand Chinese while a computer isn't.
@doshin2019 1 year ago
Would appreciate a link to the interview. Thanks.
@underfitted 1 year ago
Andrew posted only the summary of the conversation: twitter.com/AndrewYNg/status/1667920020587020290
@WilliamDye-willdye 1 year ago
Nah, it's easy. A single neuron doesn't understand, but when you scale them up, they do. A language model doesn't understand, but they're getting bigger. Scale alone is not a guarantee, but at least sometimes, scale is all you need.
@actualwords2661 1 year ago
That's why we hear voices when neurons are lost
@controlthenarrativ3866 1 year ago
I mean, the same basic cognition found in single cells, the kind that makes them react to their environment in different ways, is the same cognition GPT-4 has in its own digital environment: it reacts with what it thinks the right answer is.
@knutjagersberg381 1 year ago
What is understanding? Real understanding requires causal models and reasoning on top of them. GPT gets lucky at those, but it will happily provide us with inconsistent and illogical views. Its knowledge is the knowledge of a complicated biased die that can talk. It has learned useful representations that yield the illusion of understanding, and it is a tool for NLU. It understands as much as BERT does. It's a giant lookup table with handy representations, plus a randomizer.
@lakeguy65616 1 year ago
Neural networks can do two things: predict and classify. Large language models like GPT-4 are predicting which words and phrases will follow the prompt text. It's extremely impressive, but it's not magic, it's not intelligent, and it's not "aware". It's math. (Maybe math is magic...)
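To make the "it's just math" point concrete, here's a minimal sketch of next-token prediction: a softmax over raw scores, then picking the most likely word. The vocabulary and logit values are made up for illustration; a real model produces the logits from billions of learned parameters.

```python
import math

vocab = ["cat", "dog", "mat", "sat"]
logits = [1.2, 0.3, 2.5, 0.1]  # hypothetical raw scores from a model

# Softmax turns raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The "prediction" is just the highest-probability token.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # "mat"
```

Everything an LLM emits comes from repeating this step: score every token, normalize, sample or pick one, append it, repeat.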
@simongross3122 1 year ago
It's mathemagical :)
@MrMoman7 1 year ago
I always loved exactly this framing: math and physics are certainly dark magic. And similar to LLMs, we humans have this process: the moment before a concrete word forms in your mind or comes out of your mouth, you have no direct "agency" over the output of your brain. You have the illusion of being free to think whatever, but those are the neurological echo chambers of one's self leading to chains of thought :) That still doesn't mean the machine has a contextual understanding, but I like to think of 'intelligence' in an abstract way, as being functionally recreatable, with or without consciousness. Whether this denies "understanding" relies on the definition, I suppose.
@underfitted 1 year ago
Lol
@blasandresayalagarcia3472 11 months ago
No, it doesn't understand; it's just a statistical predictor, and that's literally all it is. People are asking these questions because of how it delivers information and how it can almost pass off as human interaction, although it isn't. GPT-4 and LLMs at the moment could eventually be used as interpreters for truly simulated understanding of concepts, just like an LLM can be used to interpret natural language commands into sets of instructions in specific formats for other systems. They have great potential as an interface for additional technologies that, as a whole, may someday be understanding. Or at least that's my opinion. You should watch the Digit robot using an LLM to interpret commands and generate the necessary steps to carry out a task; it's amazing.
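The "LLM as interpreter" idea above can be sketched like this: the model's only job is to translate a free-form command into a fixed instruction format that some other system executes. `call_llm` is a placeholder for a real model call (not a real API); it is mocked here with a canned response, and the `action`/`target` schema is an assumption for illustration.

```python
import json

def call_llm(prompt: str) -> str:
    # Placeholder: a real LLM call would go here. The canned JSON
    # below stands in for the model's structured output.
    return json.dumps([
        {"action": "walk_to", "target": "table"},
        {"action": "pick_up", "target": "cup"},
    ])

def interpret(command: str) -> list:
    # Ask the model to translate the command into a machine-readable
    # list of steps, then parse its JSON reply.
    prompt = (
        'Translate this command into a JSON list of '
        '{"action", "target"} steps: ' + command
    )
    return json.loads(call_llm(prompt))

steps = interpret("bring me the cup from the table")
print(steps[0]["action"])  # "walk_to"
```

The point is that the downstream system only ever sees validated, structured steps, so whether the model "understands" the command is beside the point, as long as the translation is reliable.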
@phillipneal8194 1 year ago
How many humans walking down the street have a level of 'understanding' better than GPT-4? How many congressmen?
@KPreddiePWSP2 11 months ago
It may not understand, but it may not matter, as long as its attention is in the right space. There are different kinds of useful intelligence besides the human kind.
@actualwords2661 1 year ago
Make a video on your education levels
@AlexDings 1 year ago
I mean, don't we _know_ that they are next-word predictors? What else is supposed to be there? People are falling for the Chinese room trick here.
@jasondads9509 1 year ago
I mean, if you have a Chinese person with you in the Chinese room telling you what to write, sure, you don't know Chinese, but someone in that chain does.