I mean, I can see how ChatGPT might not be an agent right now, but how can you argue that it's not intelligent? Doesn't it require intelligence or cognition to process and understand language? And if it doesn't, isn't this just semantics, where the definition of the word does not reflect how we use it every day?
@AICoffeeBreak 1 year ago
Thanks for your thoughts on this. Her point was that intelligence is a single scale for measuring things, but that is not a good description of reality, where cognitive systems have multiple capacities (so there are many dimensions and scales across which we should measure, not just one we could call intelligence). These capacities can even work against each other, meaning that if you optimise on one dimension, you lose on the other (originality vs. imitation).
@jolieriskin4446 1 year ago
@@AICoffeeBreak Yes, although I would argue that intelligence can be measured by the range of capabilities across a variety of metrics. Think of a radar chart: you can have imitation on one axis and originality on another, plus many other capabilities. A more intelligent model will have more range on both. It feels like the narrow vs. general AI debate. Obviously, with an LLM you could potentially have many, many more dimensions. The wider the coverage of all of these dimensions, the more intelligent the model.
@AICoffeeBreak 1 year ago
I do not think your two positions are irreconcilable: if I understand you well, @jolieriskin4446, you see utility in aggregating all the different dimensions of cognition into one score and calling it intelligence. She argued that it is impractical to do that aggregation, because some capacities provably cannot be maximized at the same time (therefore it is best to keep all dimensions fanned out rather than summarized, to capture which capacities are stronger and which are weaker).
@scottmiller2591 1 year ago
This is correct - LLMs are distillations of the (currently primarily written) cultural artifacts that have been fed into them. You need to wrap something around them if they are to act as agents. There is some creativity in the sense that various sampling techniques result in variations of output, but without additional mechanisms, there's no goal other than successful imitation based on the loss function.
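The "variation via sampling" point in the comment above can be made concrete with a toy sketch (the logits below are invented for illustration, not any real model's decoder): the learned distribution encodes imitation, and the sampling temperature controls how much variation leaks in.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample one token index from logits softened by a temperature.

    Higher temperature flattens the distribution (more varied output);
    temperature near 0 approaches greedy argmax (pure imitation of the
    most likely continuation).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

rng = random.Random(0)
logits = [2.0, 1.0, 0.5]  # toy next-token scores
cold = {sample_with_temperature(logits, 0.01, rng) for _ in range(100)}
hot = {sample_with_temperature(logits, 5.0, rng) for _ in range(100)}
print(cold)  # {0}: effectively greedy, no variation
print(hot)   # multiple distinct tokens: noise-driven variation
```

Nothing here has a goal beyond matching the trained distribution, which is the commenter's point: the variation comes from the sampler, not from any intent.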
@maxziebell4013 1 year ago
Great format. Keep it in your repertoire ...
@AICoffeeBreak 1 year ago
Thanks, this feedback is encouraging. 🤜🤛
@gluuu 1 year ago
Miss coffee bean in attendance 😊
@naasvanrooyen2894 1 year ago
Very interesting, I certainly fell for their charm🤭
@elburdeldelospandas 1 year ago
Amazing approach!! Thanks for the explanation.
@harumambaru 1 year ago
That was interesting and educational. The coffee bean and airplane arrow animations are lovely and add some amazing spice to the video!
@harumambaru 1 year ago
I liked it because it was so short and to the point - good to mix up the format with the longer paper reviews.
@AICoffeeBreak 1 year ago
Thank you kindly! This feedback is so motivating.
@YoannBuzenet 1 year ago
Thank you for that video! Interesting indeed to see that we must improve our definition of intelligence, as these LLMs challenge us on some fronts (only some! For now, at least) 😂
@soumyasarkar4100 1 year ago
Why was Hinton's keynote controversial?
@AICoffeeBreak 1 year ago
Because he said that LLMs understand to some degree: they understand some things, while not others. See this tweet by @JayAlammar that summarises it briefly: twitter.com/JayAlammar/status/1678403889320566786?t=48k7Ldbexib-Se7JhJrJ1A&s=19
@AICoffeeBreak 1 year ago
You can imagine that some linguists were not happy with this. A great read on this is Bender & Koller. We made a video on that position paper: kzbin.info/www/bejne/iGmTnJZrmNx4i9k
@soumyasarkar4100 1 year ago
@@AICoffeeBreak Thanks !
@ScottSummerill 1 year ago
Very fresh look at this. Nice.
@AICoffeeBreak 1 year ago
Yes, it was great to get a point of view from outside the field. :)
@zerotwo7319 1 year ago
Our culture is so obsessed with intelligence... Yet we are still very far from understanding it. I like the term cultural technology. This is the second time I'm seeing it.
@vadrif-draco 1 year ago
Very nice, thank you for this summary.
@Akshaylive 1 year ago
Hi! Are you still around? If so, I'd love to say hi in person! :)
@AICoffeeBreak 1 year ago
Yes, I am! I will be here today for the workshops. Write me on rocketchat in case you don't find me. :)
@Akshaylive 1 year ago
Sounds good. Just sent you a message there, see you tomorrow! Cheers
@sriramgangadhar2408 1 year ago
Can ChatGPT hallucinating its answers be considered innovation (not in a good way)?
@AICoffeeBreak 1 year ago
Well, random noise is innovative as long as we have a means to filter the good ideas from the bad ones. Natural selection does this for evolution in biology (only the good random mutations survive; the bad ones die). So yes, but so far we do not have ways to filter good hallucinations from useless ones.
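The selection analogy in the reply above can be sketched as a minimal generate-and-filter loop (the word list and the character-swap mutation are invented for illustration): noise proposes variants, and an external check decides which survive.

```python
import random

def mutate(text, rng):
    """Produce a noisy variant by swapping two characters (the 'hallucination')."""
    chars = list(text)
    i, j = rng.randrange(len(chars)), rng.randrange(len(chars))
    chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

def is_valid(candidate, dictionary):
    """The external filter: only variants that are real words survive."""
    return candidate in dictionary

dictionary = {"stop", "spot", "tops", "pots", "post", "opts"}
rng = random.Random(42)
seed_word = "stop"

survivors = set()
for _ in range(200):  # generate many noisy candidates
    candidate = mutate(seed_word, rng)
    if is_valid(candidate, dictionary):
        survivors.add(candidate)

print(survivors)
```

The hard part for LLM hallucinations is exactly the missing piece here: for open-ended text there is no cheap `is_valid` oracle to play the role of natural selection.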
@sampruden6684 1 year ago
I expect I've not fully understood the nature of Prof. Gopnik's talk, but at first glance this point about intelligence seems like an example of a rather empty form of discussion I see a lot in ML. People have these debates ("is it intelligent?", "is it reasoning?", "does it understand?"), but instead of discussing anything interesting about the model's capabilities and functioning, they're actually just disagreeing on how to define those terms. It's not even an ML discussion - it's a philosophy and language conversation. But perhaps that's the point Prof. Gopnik was making! I'm not sure that I agree about them not being agents either, but that would just be a disagreement about the definition of agent. :)
@younessamih3188 1 year ago
💗 Thank you :)
@arminneashrafi2846 1 year ago
"Technology" and "is" are misspelled in the title. Nice video btw 😊
@AICoffeeBreak 1 year ago
Oh thanks. Fixed it. 😅
@jaredgreen2363 9 months ago
So far, much of that innovation has been up to the user. But the actual reason they are called agents is that they have the capacity to act and react, to 'respond' to arbitrary circumstances (as far as the relevant data can be entered, anyway).
@derrickxu7784 1 year ago
It is fancy SQL on language 😌
@Thomas-gk42 10 months ago
Interesting
@johngrabner 1 year ago
Innovation is imitation with noise added
@AICoffeeBreak 1 year ago
And a way to filter useful noise from useless / bad noise.
@zenithparsec 1 year ago
Agency is all that is required to be an agent. As in, if there is something choosing between alternatives, and those alternatives would result in different outcomes, then there is agency, and the thing choosing is an agent. If someone duplicated you, would the duplicate be cultural technology? All you are doing is repeating and recombining words and sounds you have heard and seen... But this is just my opinion.
@AICoffeeBreak 1 year ago
As far as I understood, you do not contradict Prof. Gopnik here. She said that robotics research could very well deliver agents. But LLMs in their current form (e.g., ChatGPT) are not agents.
@sampruden6684 1 year ago
@@AICoffeeBreak To take the human analogy, I suppose she's saying that a brain is not an agent unless it's in a body that can act in the world around it. ChatGPT + tool use seems to quite clearly meet this definition. AutoGPT is certainly an agent. It may not be a good one, but it's an agent nonetheless. Talking about robotics seems to be overcomplicating things - people are using language models as agents today.
@AICoffeeBreak 1 year ago
@@sampruden6684 I agree with you. In making this video, I find myself arguing on her behalf, to make sure I present her points more clearly than in the video. But that by no means implies that I agree with her on every point.
@sadface7457 1 year ago
I was thinking today that this is a lot like the Chinese room thought experiment and its relation to LLMs. There is an interpretation called the systems reply, in which the system can possess these properties because there is a human in the loop, much like RLHF.
@mgostIH 1 year ago
0:52 "Cognitive scientists are in general against the concept of a general, or not so general, intelligence" - besides the decades of research on the g-factor and IQ tests, which are among the very few replicating results in psychology. Why are these researchers, who just regurgitate "Chinese room"-level arguments, considered any different from their mental image of what a large language model is? How do we reconcile the fact that GPT-4 is actually useful to people, unlike them, if all it supposedly does is repeat what it has seen in the training data? Of course, the topic then immediately moves to "How should we -control- regulate this thing so it obeys our political views?"
@ingoreimann282 1 year ago
well put.
@zerotwo7319 1 year ago
"decades of research" "one of the very few replicating" - Oh the hubris...
@AICoffeeBreak 1 year ago
I am repeating what I said in other comments, but she did explain why intelligence is useless for describing reality: it is one single scale of measurement, but cognitive systems have multiple capacities, so there are many dimensions and scales across which we should measure, not just one we could call intelligence. These capacities can even work against each other, meaning that if you optimise on one dimension, you lose on the other (originality vs. imitation).
@DerPylz 1 year ago
I don't think anything needs to be reconciled here, at least not as you put it. There is no reason why a tool cannot be actually useful for people and still just repeat the training data (if the training data is so large that no human could have read it all, which is the case for LLMs). The usefulness of GPT-4 is not in question here. Whether we label it intelligent, an agent, or a cultural technology (as Prof. Gopnik prefers) has nothing to do with the usefulness of the tool.
@mgostIH 1 year ago
@@AICoffeeBreak > but cognitive systems have multiple capacities, so many dimensions and scales across which we should measure
But this is not true. The idea of "multiple intelligences" was developed as an alternative to the g-factor, but it has never found empirical support in the scientific literature. We see IQ predicting a vast array of skills: not just general educational attainment, but also gun proficiency [Is There a g in gunslinger?]; the US military filters candidates on it, not admitting anyone below 80 IQ (see McNamara's Folly); and it extends to more creative endeavors like chess and music (see Gwern's blog on the IQ halo effect for a huge number of references and examples). IQ has strong social effects; if it is ignored or avoided in the hope that nice theories like "everyone being different means everyone is smart at something" hold, we'll end up implementing bad policy. Regarding your comment about RL and exploitation vs. exploration, that too can be solved: there exists an optimal amount of each, assuming you have a perfect Bayesian agent for your environment. In the limit, an extremely intelligent agent only needs to maximize its own expected utility; exploration and exploitation follow as a direct consequence.
@Navhkrin 1 year ago
If she is going to argue they are not intelligent, she should also state her definition of intelligence and why ChatGPT does not meet it. It feels like her argument is "no lol" but written academically. Tests so far clearly show ChatGPT has sparks of human-level intelligence. For example, it is able to comprehend and infer alien concepts that do not exist in its training data. You can introduce the rules of a game you came up with on your own and ask it to play it with you, and it sometimes successfully does this. Don't just take my word for it, try this yourself; it's free after all.
@AICoffeeBreak 1 year ago
She did clarify this point in the talk: intelligence is a single scale for measuring things, but it is useless for describing reality, because cognitive systems have multiple capacities, so there are many dimensions and scales across which we should measure, not just one we could call intelligence. These capacities can even work against each other, meaning that if you optimise on one dimension, you lose on the other (originality vs. imitation).
@AICoffeeBreak 1 year ago
To continue on your second point: she admitted that some of the cognitive capacities of ChatGPT exceed human capabilities (for example, memory par excellence), but in other capacities ChatGPT does not even reach a child's level (for this she gave examples of causal generalization tests).
@jrkirby93 1 year ago
Intelligence is poorly defined, so I wouldn't necessarily make the argument that LLMs are in fact intelligent. To do so, I'd need more concrete definitions. But I will say that I expect people who argue definitively that "LLMs are not intelligent" will make poor predictions of the future practical capabilities of LLMs.
The other thing is that I think "cultural technology" is an even worse description of LLMs than "intelligent agent" - an impressive achievement, given that "intelligent agent" is quite a bad description itself. I struggle to form the classification of cultural technology. Are steam engines a cultural technology? Capitalism? The English language? The printing press? The Catholic Church? The scientific method? Copyright? Which are and aren't cultural technologies?
Further, the statement that we should "regulate it like we regulate existing cultural technologies" is even more confusing. I can see how regulations might be desired around LLM technology, but actually implementing those regulations feels like a practical impossibility. And given that I am so confused about what a cultural technology is, I really can't visualize what regulations are implied by this statement. I can't really see how LLMs can be regulated by any existing regulatory framework either, except the framework of "just let people do whatever they like", which seems like it will inevitably cause problems, but is likely the path society will follow.
As an addendum, I would not classify most LLMs today as "agents". To be agents they need to be given agency, and there seem to be a couple of experiments attempting to do that with LLMs, but it is not the natural state of LLM operation. It does seem to be a likely popular use case for LLMs in the future, once a lot of existing challenges have been overcome, and I don't think those challenges are particularly insurmountable.
Agentifying LLMs does appear like it will have vast social consequences, as the mass profit incentive and speed of scaling operations will cause extreme economic upheaval.
@gr8ape111 1 year ago
Or high-tech plagiarism as Chomsky puts it
@AICoffeeBreak 1 year ago
😂 witty wording.
@ingoreimann282 1 year ago
@@AICoffeeBreak no
@AICoffeeBreak 1 year ago
@ingoreimann282, do you disagree with the meaning, or with my characterization of the wording as witty? I really think it is great wordplay, even if I partly disagree with the meaning of the words.
@gr8ape111 1 year ago
@@AICoffeeBreak I would ignore comments with a length of 2
@DavenH 1 year ago
I would like them to make predictions based exclusively on their claims here. All of those predictions, based on a faulty understanding of intelligence or the lack thereof, will turn out wrong, or are already demonstrably wrong. The deflection from intelligence as a general capacity is very weird - most likely an artifact of political information filtering. Would they bet their hard-earned money that learning someone's IQ does not improve your ability to predict their specific capacities? As though there is no mutual information there. I would like to engage anyone who does not believe in general intelligence in such wagers... all day, any stakes. I don't know why this irritates me so much. Maybe it's the arrogance of asserting expertise in areas where they don't have any understanding. Maybe it's the knowledge that disinformation is creeping into science. But if there is nothing like "intelligence", merely a suite of uncorrelated capacities, how does further training GPT-4 improve on ALL benchmarks and all capacities? That would seem astronomically unlikely, no? Unlikely enough to reject the null hypothesis at an arbitrarily small p-value. Sigh. They're so obviously correlated; they must know this.
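The correlation argument in the comment above can be illustrated with synthetic numbers (invented for the sketch, not real benchmark data): if a single latent factor drives every benchmark score, all pairwise correlations between benchmarks come out strongly positive.

```python
import math
import random
import statistics

rng = random.Random(7)

# Synthetic models: one latent 'general ability' plus small per-benchmark noise.
n_models, n_benchmarks = 50, 4
ability = [rng.gauss(0, 1) for _ in range(n_models)]
scores = [
    [ability[m] + rng.gauss(0, 0.3) for m in range(n_models)]
    for _ in range(n_benchmarks)
]

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

correlations = [
    pearson(scores[a], scores[b])
    for a in range(n_benchmarks)
    for b in range(a + 1, n_benchmarks)
]
print(min(correlations))  # every benchmark pair is strongly positively correlated
```

This only shows that a shared factor implies correlated scores, not the converse; whether real LLM benchmarks are best explained by one factor or several is exactly what the thread is arguing about.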
@AICoffeeBreak 1 year ago
You sound quite irritated indeed, but I do not even see a disagreement between you and Prof. Gopnik. 😅 She did not say that LLMs could not surpass humans in cognitive capacities; they already have in some (such as memory), but not in others, where they are still below children (causal generalization). She did not say that there is anything stopping them from becoming better in all cognitive capacities. The only point where you disagree is that you see utility in aggregating all the different dimensions of cognition into one score and calling it intelligence. She argued that it is impractical to do that summary, because some capacities provably cannot be maximized at the same time (therefore it is best to keep all dimensions fanned out rather than summarized, to capture which capacities are stronger and which are weaker).
@orenelbaum1487 1 year ago
I'm sorry, but I don't think any of this makes sense. It sounds like meaningless philosophy to me, and I think the advice about regulation is pretty bad.
@andriik6788 1 year ago
I think we should not listen too much to what psychologists say about these topics, because psychology is still at least half a pseudoscience.
@AICoffeeBreak 1 year ago
I'm afraid one can make the statement about Machine Learning research as well. 🙊
@ingoreimann282 1 year ago
so this is the state of academia? wordplay/semantics?
@zerotwo7319 1 year ago
@@ingoreimann282 The state of human language in general.
@MachinaMusings 1 year ago
@@ingoreimann282 Discussing existing concepts and constructing new ones is philosophy, which is the foundation of science.