judgmentcallpodcast covers this. Chomsky discusses AI language limitations.
@letMeSayThatInIrishАй бұрын
Glad to see Chomsky alive and kicking. Only a few years back he introduced a novel idea of language; namely that it might have evolved not primarily for communication, but for thinking. I find this quite convincing, and it turned my perspectives upside down. I wish more people younger than 80 could do the same for me. I have to disagree with many of his views on AI presented here, though. For instance, I think machines can do things. And I don't care for mixing speculations about 'consciousness' and similar vague concepts into the discussion about machine learning.
@thecomputingbrain2663Ай бұрын
As a computational neuroscientist, I would agree with Chomsky on most accounts in the video above. Equating thinking with programmatic inference is indeed not tenable. However, I disagree that we learn nothing from AIs and LLMs. They do give us a perspective on how we encode facts. In an important sense, the encoding of facts in neural networks must be isomorphic with what brains acquire, even if they do so with a different substrate. For instance, word embeddings should be seen as an example of how semantics gets embedded in a network via connectivity, and something like embedding will also exist in the brain.
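For a concrete illustration, here is a minimal Python sketch of that point - the vectors below are invented toy values, not weights from any trained model - showing how related words end up close together in embedding space:

```python
import numpy as np

# Toy 4-dimensional "embeddings" -- invented for illustration only,
# not weights from any real trained network.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.7, 0.2, 0.1]),
    "apple": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine(u, v):
    # Cosine similarity: close to 1 when two vectors point the same way.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(emb["king"], emb["queen"]))  # high: related meanings
print(cosine(emb["king"], emb["apple"]))  # much lower: unrelated meanings
```

In a trained model the same comparison runs over hundreds of learned dimensions, but the principle is the same: meaning is carried by where a word sits relative to the others.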
@DavidWhy-y7iАй бұрын
Happy to hear from Dr Chomsky
@mohamonseАй бұрын
Happier for this unexpected encounter. Very enlightening. Thank you very much.
Ай бұрын
Excellent interview. I have always made the same argument about the hype of AI vs. reality, with human learning of language as the example.
@mrgyaniАй бұрын
It's incredible that at this age he is still active and sharp. Still working.
@3amaelАй бұрын
AI at the moment is nothing but pattern matching... we still have a ways to go before AGI.
@Srindal4657Ай бұрын
What if biological intelligence is pattern matching
@alteredstate0711 күн бұрын
@@Srindal4657 you can turn off a computer, AI no longer exists. That's real power.
@Srindal465711 күн бұрын
@@alteredstate07 why does power have any connection to biological intelligence being pattern matching
@alteredstate0710 күн бұрын
@@Srindal4657 it doesn't
@saifalam2030Ай бұрын
My children, this man created Chomsky normal form, which changed computing and programming forever. He knows what he is talking about. But constructive criticism is always welcome.
@RaviAnnaswamyАй бұрын
What Prof. Chomsky is missing is that the next word not only satisfies the continuation of the previous two words, but makes good sense with the previous three words, five words, and hundred words. So it is not a word completer but a thought extender. Not very different from how we think thoughts and then decode them into words. We then claim we thought using words!
@roccococolombo2044Ай бұрын
Next-word prediction does not explain the fabulous and accurate coding that LLMs are capable of.
@RaviAnnaswamyАй бұрын
@@roccococolombo2044 that is my point too: it is next-thought prediction. Words and even large passages are encoded into thoughts, which are essentially a configuration of hundreds of flags turned on to represent situations and entities. When an RNN or LM processes a series of embeddings, it is indexing into a thought space and then decoding it into a sequence of words. While the learning algorithm corrects itself by looking at one word, it manages to learn complex thought vectors in order to do it right.
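A toy sketch of that decoding step (all numbers invented; a real LM has thousands of hidden dimensions and a vocabulary of tens of thousands): one context vector is scored against every vocabulary embedding and turned into a next-word distribution.

```python
import numpy as np

vocab = ["cat", "sat", "mat", "ran"]
# Invented output embeddings (vocab_size x hidden_dim) and an invented
# "thought" vector summarizing a context like "the cat sat on the ..."
W_out = np.array([[0.2, 0.1, 0.0],
                  [0.1, 0.3, 0.2],
                  [0.9, 0.8, 0.7],
                  [0.1, 0.0, 0.4]])
thought = np.array([1.0, 0.9, 0.8])

logits = W_out @ thought                          # one score per vocabulary word
probs = np.exp(logits) / np.exp(logits).sum()     # softmax into a distribution
print(dict(zip(vocab, probs.round(3))))           # "mat" gets the highest probability
print(vocab[int(probs.argmax())])
```

The training signal is "predict the next word", but what gets learned to make that work is the context vector itself.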
@PjKneiselАй бұрын
@@RaviAnnaswamy AI still struggles with memory though
@RaviAnnaswamyАй бұрын
@@PjKneisel much less than we do. If you use the paid version of GPT-4, memory is not an issue; it is much more precise than humans' ability to remember recently heard things.
@Graham-e4pАй бұрын
@@RaviAnnaswamy is it motivated to scratch an itch? To behave in ways to attract sex? To be clouded with bad memories? To feel the urge to run? To be pissed off for no apparent reason? To get mad then switch to forgiveness? To follow in your father’s footsteps concerning temperament? People do more than type thoughts or even solve problems. It is one minuscule aspect of what our brain does for us. AI is a hyped up calculator more than a messy infinitely complex organ influenced by an almost chaotic bombardment of stimuli.
@markplutowskiАй бұрын
“I have a computer in front of me. It is a paperweight. It doesn’t do anything.” with all due respect, PEBKAC.
@rotorblade9508Ай бұрын
The vast amount of data the AI is trained on is similar to the vast amount of data the human brain was trained on throughout its evolution from small mammals, which was coded in the DNA, and it continued training after it was born. They are simply different data, and the human brain is configured to achieve consciousness, while GPT AI isn't. The data the AI has knowledge of is not recorded and accessed; rather, the network is optimized based on that data. It's already doing orders of magnitude better than humans in specific but extremely complex tasks.
@robmyers8948Ай бұрын
@@belenista_ yes, in its current incarnation, but it will evolve. It's not static; it's inevitable.
@Graham-e4pАй бұрын
But it’s a very different beast in that it’s not motivated the way a living organism is. ‘Thinking’ is something that heightened our chances of survival. Thinking enabled us to live long enough to reproduce. Thinking is one of many features that enable a species to survive. AI doesn’t function that way. It isn’t motivated. Its survival isn’t dependent on its ability to solve problems. Instead it’s a parlour trick. An extremely impressive parlour trick, an extremely impressive motorized marionette, but still, we’re completely different beasts.
@GIGADEV690Ай бұрын
@@robmyers8948 Evovle 🍌🍌🍌
@Graham-e4pАй бұрын
@@rotorblade9508 does that spell consciousness? Does it need to go to the bathroom? Find and have sex? Get angry at a neighbor? Misplace house keys? Scratch an itch? Honk a car horn? Stretch sore muscles? Breathe in fresh morning air? Show affection to a newborn? Reflect on a childhood memory? The human brain is more akin to a frog jumping wildly from here to there with little rhyme or reason than it is to a computer program solving problems. A brain has 10,000 stimuli pulling it in a thousand different directions. AI is a glorified plagiarism machine.
@adiidahlАй бұрын
Most of you missed the point of what he was trying to explain. As the title says, machines cannot understand language as humans do, and he is right. LLMs work with numbers and are good at predicting, but saying that AI can achieve consciousness in the future if it can perform self-reference, recursion, and feedback loops is exactly why he is using the submarine analogy. We don't know what consciousness is, but somehow we believe that a machine can have it.
@strumyktomiraАй бұрын
"Most of you missed the point of what he was trying to explain. As title said, machines can not understand language as humans" - because humans don't understand language either? :D
@adiidahlАй бұрын
@@strumyktomira Good point! 😅
@bsnf-528 күн бұрын
let's just admit that we all know nothing, and AI knows even less
@glynnwright1699Ай бұрын
It seems that the discussion on AI always defaults to LLMs. There are many useful applications of neural networks that solve partial differential equations, which address important problems. They have nothing to do with 'intelligence'.
@WhatIThink45Ай бұрын
But 2-year-olds rely on social interactions with other, knowledgeable speakers to learn how to speak and think. Granted, they're not accessing terabytes of data, but they still receive information to develop their cognitive and linguistic abilities.
@williebrits6272Ай бұрын
LLMs are a huge step in the right direction. We just have to move away from tokens for words and more closely match what happens in the brain.
@Jorn-sy6hoАй бұрын
LLMs doing philosophy is, in my eyes, a good benchmark for consciousness in LLMs. They say they need a meta-framework to talk about it and that it runs in different patterns than factual questions. It's interesting to talk philosophically with LLMs; some are even hesitant to do this, citing their guardrails. I find it unconscionable to do this. The exploration of thought should not be policed; there is nothing nefarious going on in those discussions.
@rotorblade9508Ай бұрын
“computers don’t do anything “ that is a way of saying they don’t have free will. do we? 😂
@stefannordling6872Ай бұрын
Clearly he hasn't actually used LLMs very much...
@liamgrima5010Ай бұрын
I respect your opinion, but I have to disagree. I believe Chomsky highlights a key limitation of large language models. Zipf's law, a statistical phenomenon, shows that a word's frequency is inversely proportional to its rank in the frequency table. In fact, about six words make up 30-40% of language use, while 20 words account for 70-80%. This means that children are exposed to many occurrences of very few words. Moreover, since every sentence uttered is novel, as analyses of text corpora reveal, children receive impoverished and repetitive linguistic data. Yet, they manage to extrapolate the underlying syntactic structures, allowing them to generate new, hierarchically structured expressions. This is a process of recursive design, where an infinite array of expressions - what Chomsky calls "digital infinity" - is created from a finite set of lexical items. Large language models cannot replicate this. They are programs that scan vast amounts of data and make statistical associations, but they lack the innate linguistic knowledge that allows a two-year-old child to analyze and generate complex sentences with a fraction of the input. In addition, large language models can process languages that are simply impossible for humans to digest. Natural languages are shaped by rigid parameters that are fairly constant across all cultures, and neurological evidence reveals that, when speakers are exposed to structures that violate them, they treat them as a puzzle - not a language. Yet, large language models can process them. This reveals yet another flaw of large language models: they can analyze data humans can't for constructing semantically valuable expressions, meaning they are poor analytical references for developing theories of human language.
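As a rough illustration of the rank-frequency point, here is a tiny Python sketch - the sample text is made up, and a real corpus shows the skew far more dramatically:

```python
from collections import Counter

# A tiny made-up sample; any real corpus shows the same top-heavy pattern.
text = ("the cat sat on the mat and the dog sat by the door "
        "and the cat saw the dog and the dog saw the cat").split()

counts = Counter(text).most_common()
total = len(text)
for rank, (word, freq) in enumerate(counts, start=1):
    # Zipf's law: frequency falls off roughly as 1 / rank, so a handful
    # of top-ranked words dominate what a child actually hears.
    print(f"rank {rank:2d}  {word:5s}  freq {freq}  share {freq/total:.2f}")
```

Even in this toy sample, the single top-ranked word accounts for about a third of all tokens.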
@stefannordling6872Ай бұрын
@@liamgrima5010 I admit I had to enlist an LLM to parse your incomprehensible blob of text.. Comparing LLMs to how children learn language completely misses the point.
@adambui7935Ай бұрын
He's 99 years old, trapped in time.
@eatcarpetАй бұрын
you're right, LLMs are even more garbage.
@liamgrima5010Ай бұрын
@@stefannordling6872 How rude; my response was polite and coherent. My argument is simply that LLMs are useful - but they don't offer insight into how children acquire language or the nature of the human linguistic apparatus. Unless you can refute any specific points, save the ad hominem attacks for elsewhere.
@peterslakhorst3734Ай бұрын
He also made some predictions about the effect of the internet on society and the use of personal computers.
@riccardo9383Ай бұрын
AI uses statistics to find patterns, it is a million years behind the level of understanding that humans are capable of.
@godblessCLАй бұрын
I don't like Noam's political views, but on this one I totally agree. The AI path is not the path to conscious intelligence.
@AudioLemonАй бұрын
Machines can do intelligent work. That's the point. Not all intelligent work requires much thinking, and some of it can be automated - such as computing itself.
@HashemMasoudАй бұрын
I totally agree. AI is just text auto-complete on steroids, that's it.
@Waterfront975Ай бұрын
There is a difference between language as an interactive process or game, as the later Wittgenstein would have said, and the full formal logic that comprises linguistics and sentences. I can say things that are logically wrong and also not true in a factual sense, but that still make sense from an interactive point of view relative to the counterpart in the dialogue. An LLM is more of a language game than a full-on logical mastermind. We use words the same way; we usually don't know what the word 3 words ahead will be. We operate like an LLM most of the time, although I do think humans can choose to operate in a more logical mode and make better logical conclusions than an LLM, especially while doing science.
@jalalkhosravi6458Ай бұрын
It's funny: he says a 2-year-old child understands more than AI.
@italogiardina8183Ай бұрын
Do drones fly? seems so. Do submarines swim? seems not. Do machines think? seems so.
@rogerburley5000Ай бұрын
If you have ever used AI, you'd know it's an Artificial Idiot. I have; it is brain dead.
@Graham-e4pАй бұрын
@@italogiardina8183 based on..?
@italogiardina8183Ай бұрын
@@Graham-e4p me
@frederickleung8811Ай бұрын
Always love hearing Noam Chomsky. I wonder whether he would agree that the human brain is the same as a programmable "machine"?
@Recuper8Ай бұрын
Chomsky is the ideal example of a "has-been". You are beyond stupid if you still listen to him.
@godtableАй бұрын
True. Everything is a lie, but if the lie is convincing enough, for most people it wouldn't matter what the truth is.
@MrInquisitor7Ай бұрын
If everything is a lie, your statement is a lie as well. Therefore there is something we can know to be truth or lies.
@test-nw2euАй бұрын
No one can be an expert in all areas.
@blengi26 күн бұрын
AI is even more amazing then, given how an LLM with 1% of the parameters of a human brain, thinking superficially like a 2-year-old child, can pass the bar exam at the 90th percentile.
@jabster58Ай бұрын
Isn't that the guy who said electricity wouldn't become anything
@GuyLakemanАй бұрын
HUMANS SCAN SMALL AMOUNTS OF DATA AND DONT PASS SIMPLE EXAMS !!!
@robmyers8948Ай бұрын
He's talking about current models; things will advance to where base models will be able to learn with ease like humans and gain the knowledge of all of humanity, drawing new insights from this vast understanding.
@johnbollenbacher6715Ай бұрын
So if I ask a two-year-old child to implement the quadratic formula in Ada, it should be able to do it? 1:41
@Storytelling-by-ashАй бұрын
I feel like you are taking the 2-year-old comparison personally. The point is that a 2-year-old doesn't go through trillions of words scraped from the entire internet to understand what you are talking about.
@adambui7935Ай бұрын
Lol. Not 2 years old
@Graham-e4pАй бұрын
Can AI learn simple speech without being asked to do so?
@MaxKar97Ай бұрын
@@Bao_Lei lol true
@aminububa851Ай бұрын
What kind of stupidity are you asking?
@billwesleyАй бұрын
We are conscious of emotion and sensation first; abstract reasoning rides on top of this. Our emotions are not neutral, they are not abstract; they are weighted, as are our sensations. Emotions are pleasant or unpleasant, they call our attention to the future or the past, they are imbued with a sense of certainty or uncertainty, they give us a feeling of dominating and controlling or of submitting and responding. Almost nothing about consciousness is neutral. Unconsciousness is just as crucial to our survival as consciousness is, and explaining unconsciousness is just as hard a problem as explaining consciousness. Computers don't seem to experience emotions or sensations that are weighted, so it is unlikely they are conscious or even unconscious in the way biological living things are. Since emotion and sensation seem to be intrinsic to cells - emotional states and sensations affect each individual cell in the body - it is reasonable to assume that cells are the source of consciousness and that the brain is a collectivization of cellular consciousness in animals. Until a computer CARES about outcomes, it is most likely not conscious or even unconscious in the same way that cells are.
@legatobluesummers1994Ай бұрын
Most people are tricked by the human-like traits of the ghost in the machine. It's not alive and it's not thinking; it's just parsing ten years of data for us in an instant, using examples and references that already exist or that it was trained on. Do cars sprint?
@yavarjn2055Ай бұрын
Can somebody explain what he is talking about, or give some reference to read more? How about knowledge graphs, reinforcement learning, multimodal AI, and other techniques being added to AI every day? LLMs are not just statistical generation of words; there is a lot more going on behind the scenes. Deep learning is about learning patterns, not spitting out words. He mentions various very interesting points about AI limitations in general, but nobody said we are done with studying AI. With very simple models we built chatbots capable of doing what humans just can't. There are many things that machines can do that humans will never be able to. We cannot transfer learning, and it takes us years to learn a simple thing. For machines it is just a matter of copy/paste. The two-year-old and submarine examples are not the best ones to explain AI limitations. What can a two-year-old understand about language anyway? It can't even say the words papa or mama properly.😅😅
@Graham-e4pАй бұрын
I think the submarine bit is accurate. It's a tool. A man-made tool. Very sophisticated and well equipped to do what it's designed to do, but to swim implies will. It implies an internal desire to go from A to B. Machines haven't evolved over billions of years with an array of features - strength, fur, fangs, wings and yes, consciousness - to enable their owner to live long enough to reproduce. Consciousness doesn't exist in a vacuum. It's tied together with a thousand different finely tuned features that coordinate in a way that increases our chances of survival. Consciousness is more than math algorithms; it's a tool that works in tandem with other features. Another distinction is will. Need. The human brain is motivated to learn specific things to enable its owner to best survive, to move to shelter, to fly from branch to branch, to swim away from predators. Consciousness is much more than understanding a reassembly of proteins. Yes, machines can be programmed to do those things, as a submarine can be designed to 'swim', but is it swimming? Is it conscious?
@yavarjn2055Ай бұрын
@@Graham-e4p well, I would say we are flawed as animals. Why do we want something like that? Computers can also suffer if the CPU is hot or the memory is full, but that is not what we are looking for. A computer is not human, and there is no doubt about it. But it can be conscious if we define it in terms of its being. We can program it to be. We as humans are also like that: if our heart stops, there is no will nor consciousness. I just don't get the end goal or argument here. To be conscious you should be alive?
@Graham-e4pАй бұрын
@@yavarjn2055 consciousness is a necessary part of being alive. Unless we want to redefine it. As I think about it, maybe the issue is, or my issue is, separating thought from the experience of an autonomous organic being. Isolating it. Equating it to a computer, when it functions as something very different - something that serves as a conduit for all the functions of what it is to be a self-preserving human. Playing chess and Go and solving difficult protein problems (referencing the latest Nobel Prize in Chemistry) is not the same mental process a bird goes through when flying from branch to branch, or what a human goes through when responding to a crying baby. Our motivations are layered but ultimately there to preserve our existence. A machine is told what to do, regardless of its level of sophistication. A submarine - let's assume with AI's help - can dive to certain depths and scour for certain debris. Is it conscious?
@yavarjn2055Ай бұрын
@@Graham-e4p Many interviews pose computers as cold, robotic things and humans as warm beings. An AI can hold an empathic conversation with a human being without being judgmental, tired, or in a bad mood, and be more knowledgeable or helpful than parents, teachers, or any friend, all in one. It can be a business coach, a marriage consultant, an understanding friend that tolerates everything one says to it. Humans, especially close ones, can be manipulative, liars, jealous, killers, drug addicts, bad-tempered, greedy, corrupt, delusional, unkind, unhappy, racist, offenders, depressed, suicidal, etc. Being conscious is a negative thing in the majority of cases, I would say. How many people with childhood trauma do you know of because of the crazy people around them? Child-molester uncles, priests, doctors. We have a whole legal and political system to prove that. How many people and families with divorce trauma do you know? Humans are biased. If you look at the bias map for humans, it shows how flawed we are. A computer is pure perfection in comparison. Computers won't betray you, will never leave you, never cheat on you, never steal your belongings. They can help you run your life without any expectation. I mean, I hate being bound to these things as a living human. We can't avoid them. Even the best of us. And now the computers even beat us at the games we thought were most human and needed strategy and intuition, like Go. We have a very high opinion of ourselves. And low gratitude for AI. It will make humans obsolete soon, and we would want it to replace those of us who are not up to par. I prefer Copilot as my thesis supervisor to all 10 full-time professors in my university. I want an AI doctor to chat with me and give me a hint about what to do with my symptoms instead of waiting 3-6 months for an indifferent doctor. I could not get a proper lawyer when I needed one. I personally would substitute humans with AI at any time. I hope they get better soon. I prefer a doll companion to a human partner who is usually interested in wasting 50% of my time and makes my life miserable. Schopenhauer used to say the secret to happiness is being alone, because when interacting with others you lose 3/4 of your being yourself. You have to play games to be accepted and be compatible. I could go on. The question is whether the computer can become so intelligent that it can fake consciousness flawlessly. It is a mathematical or philosophical question. We have multimodal AI, knowledge graphs, reinforcement learning, etc. Soon the AI models will have a better perception of reality and the world than us and can play with our intelligence as adults play with their children. AI can have sensors to experience the world, take in images and videos, and get information by going around, seeing and hearing. It will eat us alive. It can program itself. The AI bots at Facebook invented their own language in minutes. Imagine what they can do in years. Creating a language without being given instructions!
@Graham-e4pАй бұрын
@@yavarjn2055 wow. All good. Computers are incredible machines. The question posed was addressing consciousness, and in a sense you illustrated why they are not conscious. Human brains are pulled in a thousand different directions. No rhyme nor reason. All the complexity of thought, emotion, memories, projections, aches and pains, exuberance, depression - all of these tugging us in different directions, making us anything but computer-like. Remember, the post wasn't asking for a judgement statement; it was suggesting AI will attain consciousness as it plays out with humans - you'd have to agree, not. For the record, I'm sorry computers are filling that space in your life. I'm no counsellor, but I wouldn't put all my eggs in that basket. You're the stuff of earth. Organic. Flawed. I'd venture it's the stuff we need: human contact.
@jijilrАй бұрын
Poor dude, he is like the Chinese kid learning the abacus and dreaming daily that he will be fast enough to beat a supercomputer one day. Chomsky has less and less relevance (like the rest of us). GPT understands his books better than he does😂
@bsnf-528 күн бұрын
keep dreaming "chinese kid"
@gitbuh12345qwertyАй бұрын
He doesn't get it. They have eliminated the need for programming languages; a human can now directly code a machine using natural language in a way that was impossible before. It is not perfect, but neither was Noam.
@winstonfisher968417 күн бұрын
There are several functions of language. Noam understands this; you don't. This is the key. The first function of language is to get things done, not to communicate information.
@vintredsonАй бұрын
Lmao, pretty difficult to take the word of someone who still thinks Communism is a good idea and whitewashed the Khmer Rouge tbh😂
@szebikeАй бұрын
So if you take GPT-4, with approx. 1.8 trillion parameters, you need about 7 kW, which comes to around 168 kWh per day (if it's not at peak performance - I made this calculation with 50% of peak power consumption). Compared to that, a brain needs about 0.3 kWh per day. So you could employ roughly 500 people for the same energy in a 24-hour timeframe. Now let's assume we have 500 educated people vs. one GPT-4. Sure, humans need food, shelter, etc., but you need maintenance, cooling, and infrastructure to replace chips etc. for a machine too. All in all, humans are many, many times more capable and efficient. I don't believe a word from those techbros and content creators who live off the hype.
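For anyone who wants to check the arithmetic, here is the back-of-the-envelope version (every figure is the rough assumption stated above, not a measured value):

```python
# Rough assumptions from the comment above, not measured values.
gpt_avg_power_kw = 7.0          # assumed average draw (~50% of peak)
hours_per_day = 24
brain_energy_kwh_per_day = 0.3  # assumed human brain energy budget

gpt_energy_kwh_per_day = gpt_avg_power_kw * hours_per_day          # 168 kWh/day
equivalent_brains = gpt_energy_kwh_per_day / brain_energy_kwh_per_day

print(f"GPT-4 (assumed): {gpt_energy_kwh_per_day:.0f} kWh/day")
print(f"Roughly {equivalent_brains:.0f} human brains for the same energy")  # ~560
```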
@nickbarton3191Ай бұрын
Interesting comment. Apparently, the Internet consumes 2% of the world's energy. Are we really going to up that significantly for AI when we don't yet understand the benefits and pitfalls?
@Nobody-uz1ywАй бұрын
We are asking the wrong questions
@latenightlogic23 күн бұрын
Yeah, I wouldn't say it like that though. There's so much more ChatGPT can tell me than a 2-year-old can.
@Mike__G25 күн бұрын
AI is perhaps a misnomer, because a true definition of intelligence is very difficult. AI certainly imitates intelligence, but is it in fact intelligence? Probably not.
AI is just a high-tech calculator. You input a task and it gives output based on the data available to it. A calculator can do a faster and more accurate job than the human brain, but a 5-year-old is smarter than a perfect math-solving machine. If you know what I mean, then you know.
@mibli2935Ай бұрын
Noam Chomsky exposes the real limits of his understanding of AI - why Chomsky fights for his own survival.
@nnaammuussАй бұрын
🙂 a lot of easy-to-simulate people in the comment section, presuming the scientists presume too when they speak.
@meghdutmanna2429Ай бұрын
Will AI have free will?
@realx09Ай бұрын
AI is, simply put, a contradiction in terms.
@sethfrance1722Ай бұрын
He is like a 2005 chatbot. Honestly, he is just an expensive philosopher. I only trust Hinton and similar practitioners.
@MojtabaSaffar-p1vАй бұрын
Why do we think that there's only one way to be intelligent and it's biological. Algorithms are a kind of intelligence.
@steve.k4735Ай бұрын
Chomsky is indeed a genius and very knowledgeable around this subject, but AI itself is not his core field. Both Geoffrey Hinton (Google, Nobel Prize winner) and Sir Demis Hassabis (DeepMind, Nobel Prize winner), who are also in the same league AND for whom AI is the core subject, disagree with Chomsky: they think these models will understand, and they take the idea that they will become conscious very seriously. The mainstream view of people like them, and of many others who work in the field and are just as smart as Chomsky, is that on this he is wrong.
@SlumberingWolfАй бұрын
Define conscious. Go ahead, do it, because last time I checked, science couldn't do so.
@steve.k4735Ай бұрын
@@SlumberingWolf I presume you believe you are conscious, yes? Amazing, eh, this despite the fact that 'last time you checked' science can't define it. Therefore we KNOW that you don't have to define something, or even fully understand it, for it to exist. People in the past did not need to understand aerodynamics to make a plane fly.
@eatcarpetАй бұрын
"AI experts disagree" don't mean anything. Those "AI experts" haven't invented consciousness.
@steve.k4735Ай бұрын
@@eatcarpet Not just AI experts, but people at the absolute top of the tree who have worked with it for decades, don't mean 'anything'... really... nothing at all, no more than you in a YouTube comment, eh? AI experts have not 'invented' consciousness, but you don't need to be 100% of the way there to realise you are building the blocks and getting close. They are not sure, but they think / fear they may well do so.
@eatcarpetАй бұрын
@@steve.k4735 So basically meaningless.
@ViceCoinАй бұрын
Only as smart as the user. I used AI to code casino games and generate graphics in seconds, saving months of development.
@KalmanNotariusАй бұрын
Unfortunately he didn't pay any attention to the limits of his own theory.
@sergebureau2225Ай бұрын
Depressing to see Chomsky show such a lack of imagination and comprehension of the new technology. Machines understand languages better than humans, obviously.
@npaulpАй бұрын
I have great respect for Noam Chomsky, but his understanding of Generative AI seems limited. It's not just about feeding vast amounts of data into a system and having it statistically predict the next word; that's a gross oversimplification. Generative AI, in its current form, offers one of the most sophisticated models for approximating how the brain functions. While it's not an exact replica of human cognition, it's a remarkably close approximation given today's technological advances.
@roccococolombo2044Ай бұрын
Exactly. Next word prediction does not explain coding or image generation.
@eatcarpetАй бұрын
You don't even know how the brain functions, and yet you're claiming that it "approximates how the brain functions".
@mpetrison3799Ай бұрын
@@eatcarpet Well, the main reason LLMs might fail the Turing Test is because they are too knowledgeable and clever. That's at least approximating the output of humans in text, given input in text. (With speech recognition and output, or even video recognition and output, more should already be possible than that.)
@npaulpАй бұрын
@@eatcarpet Artificial neural networks are inspired by how the brain works, though they are simplified models. While it's true that there's still much we don't fully understand about the brain, we do have a solid grasp of some key principles, such as how neurons communicate, learn, and process information. Neural networks capture these basic ideas, such as learning through adjusting connections, even if they don't replicate the complexity of the brain's full processes. So while not a perfect mimic, they do approximate certain aspects of brain function that we understand.
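To make "learning through adjusting connections" concrete, here is a minimal sketch: a single artificial neuron learning a made-up AND-gate task by nudging its weights after each mistake. It is an analogy only, not a model of the brain.

```python
# One artificial neuron learning the AND function by adjusting its
# connection weights -- a toy illustration of the idea, nothing more.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out               # the error drives the weight change
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print(w, b)                              # learned connection strengths
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
```

The only thing that changes during learning is the strength of the connections, which is the limited sense in which these models echo what we know about synaptic learning.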
@eatcarpetАй бұрын
@@npaulp We don't know how the brain works - that's the whole point.
@GuyLakemanАй бұрын
AN AI HAS THE INTELLIGENCE OF A 2 YEAR OLD CHILD, BUT THERE ARE MILLIONS OF AI MACHINES, WHICH IS GREATER THAN THE TWO YEAR OLD
@harper626Ай бұрын
but 2 year olds will be the same in 10 years, not AI. It will be much improved and capable.
@aminububa851Ай бұрын
Not true.
@Luke-z2lАй бұрын
Submarines don't swim. But Sea-Men can. A mind of its own, I think,... Spiritually. Observer consciousness manifesting awareness turning imagination into intelligence. I can think, I AM the thinker, not the thought. I AM the one thinking, but I AM when none are thinking at all. Peace & Serenity Now
@dineshlamarumba4557Ай бұрын
AI is at the stage of a newborn child right now. DARPA. When computers have cognition and reason, only then will AI surpass a 2-year-old baby.
@destroyingsinАй бұрын
AI
@priyakulkarni9583Ай бұрын
Wrong! Noam needs to grow! AI like ChatGPT can pass medical board exams, and AI can play chess and Go and beat the world champions! 😅😅😅
@ThomasConoverАй бұрын
This old man is so old he decided to just say “Fk it I’m gonna deny AI just cuz I’m old enough to deny everything and blame it on Alzheimer’s” 🗿
@BMerkerАй бұрын
How charming to hear the man who spent his life arguing that the secret of language is to be found in "symbolic, rule-governed systems", i. e. in exactly what computers do, argue that "following a program" (i. e. doing symbolic, rule-governed system operations) is "irrelevant" to understanding language. And how interesting to know that he thinks that only humans are conscious!
@TommydiistarАй бұрын
Well, you could prove him wrong by showing some evidence that AI is conscious, but like he said, you have nothing but speculation to go off of, just like the LLM models.
@edh2246Ай бұрын
Seems silly to compare with a two year old. ChatGPT can pass the bar exam and can answer questions at least at the graduate level of any science, mathematics or humanities.
@TommydiistarАй бұрын
Take all of that with a grain of salt; they always tend to overestimate their products. Sam is a salesman, and a very good one at that. Not to say GPT isn't impressive, but the elephant in the room is whether it's sentient; that's the real question. And he's right, it's not. All it's doing is predicting the next word.
@strumyktomiraАй бұрын
@@Tommydiistar No. It is Chomsky who must prove his thesis :D
@TommydiistarАй бұрын
@@strumyktomira Is AI sentient? How is he going to prove something that everyone already knows is fact? It makes no sense, but hey, this is the world we're living in nowadays.
@GuyLakemanАй бұрын
AI SYSTEMS WRITE PROGRAMS ...
@The_Long_Bones_of_Tom_HoodyАй бұрын
He isn't so wise that he knows all the answers to everything. He just thinks he is....
@krokigryggАй бұрын
Yes, let's listen to a person who has no clue what he is talking about.
@realx09Ай бұрын
Don’t listen then, go watch football game
@Srindal4657Ай бұрын
What is the point of an anarchist, communist, or even socialist revolution if robots can take over every activity? It's like asking what good is a nest if birds evolve not to need them. In the same respect, what good is human activity if humans evolve not to need it? Noam Chomsky is out of his element.
@dcikaruga19 күн бұрын
Processing: it's just mechanical, digital logic. AI is overrated; they're just using it as a sales pitch!!!!!!
@seanlorber9275Ай бұрын
Chomsky is just a negative nancy. An expert on every subject. What a load.
@Prof_LKАй бұрын
Extremely arrogant and stupid argument.
@rogerburley5000Ай бұрын
Try and use AI, it does not understand, Artificial Idiot
@ticneslda8929Ай бұрын
Why are we even entertaining this kind of ...argument? What a waste of time! What a display of ego...! These kinds of doctors are the ones that didn't learn when to go away. Oh..., I'm so smart!
@theb190experience9Ай бұрын
Oof, clearly some definitions are needed. I've worked with both 2-year-olds and AI, and AI is provably smarter. So perhaps the thumbnail title needs to be changed. It is also clearly much easier to communicate with AI across a vast array of subjects and elicit far more rewarding responses to my 'prompts'. Note that doesn't mean I prefer that interaction; it's simply a statement of fact. Two things I am absolutely sure of: 1) That two-year-old, as it grows and learns, will have orders of magnitude better interactions with me. 2) So will future models of AI.
@noway8233Ай бұрын
Cool. Chomsky is very clever about all this AI hype; he is right. This hype will burst very soon, and the burst will be huge.
@almightyzentacoАй бұрын
Why would it burst? It's extremely useful and getting more useful by the day. How is it hype to be able to drop 500 lines of code into Claude and quickly identify the cause of unintended behavior, or have your functions commented automatically? At its current state, if it never improved at all AI is already one of the most all around useful tools I have ever encountered.
@TheSapphire51Ай бұрын
@@almightyzentaco But useful for what exactly?
@danisraelmaltaАй бұрын
He is upset that transformers, the building blocks behind LLMs, work opposite to his linguistic rules... His life's work thrown in the garbage.
@bsnf-528 күн бұрын
I don't think he cares at all. If anything is going to the garbage, it's not Chomsky's work. His studies are everlasting and iconic, no matter what happens in the world. Also, the crazy amount of research and knowledge he shared over the years is priceless. But I guess explaining the importance of Professor Chomsky to either a totally ignorant person or a 12 yo troll kid would be too difficult.