The famous Chinese Room thought experiment - John Searle (1980)

445,154 views

Jeffrey Kaplan

1 day ago

Comments: 2,100
@BrianWilcox1976 1 year ago
For me it’s like saying, “one water molecule is not wet, so no matter how many you put together you’ll never get wetness” (it’s an emergent property)
@Bronco541 1 year ago
That's what I was thinking. Do we know, or are we able to know, to what degree (if any) being aware is an emergent property of just a "simple" algorithm?
@Bronco541 1 year ago
Or I wonder if Searle is right about form never truly being enough to get meaning... What if meaning is nothing more than form somehow... Don't ask me to elaborate, I'm just spitballing dumb ideas.
@REDPUMPERNICKEL 1 year ago
@@Bronco541 'Meaning' is not 'form', but they are related. I mean, just look at the 'form' of this sentence. The meaning of that sentence is *'encoded'* in its form. When that sentence got inside you, it became a process. Actually, I think you'll agree, that sentence became a sub-process of your being-conscious process. In consequence, your thoughts are now somewhat different. If you understand thinking to be behavior, then you can see that its meaning has influenced your behavior. This is close to what 'meaning' means. The above is not written as well as it deserves, but I believe it will affect your thoughts such that they may become more accurately reflective of actuality, imho naturally.
@franzmuller235 1 year ago
@@Bronco541 That's what came to my mind also. How does a newborn learn? It learns to recognize forms, first his mother's head and his mother's breast, and then other forms. No one ever teaches a baby meaning to start with. The child first learns meaning by recognizing forms of all kinds, and by recognizing how the forms interact with him and with each other.
@franzmuller235 1 year ago
@@yongkim3333 No, of course you can construct a sensor that senses wetness. You don't need a human, not even an animal.
@ericfolkers4317 1 year ago
One problem I have with the Chinese Room is that you could create a similar metaphor for the machine that is the human mind. You have lots of people instead of one, each representing a neuron. They have a list of rules where, if one of their neighbors hits them with a tennis ball, there is a rule about who you should or should not throw your own tennis ball at. Some people have levers that fill the room with chemicals people can detect (maybe by smell or sight or mechanical detectors), and people's rule books have different rules depending on what chemicals are in the room. There might be plenty of fuzzy rules like, "if there's not much of chemical X in the room and you get hit with Sarah's tennis ball you can throw a ball at Tom or not, but if there is a whole lot of chemical X, you really need to throw that ball," or, "if chemical Y is filling the room pretty much all of the time, you can ignore it unless there's really a whole lot more than normal." Some people would have access to input information in some way and some would be able to perform outputs.

Is there any reason to think that a human brain couldn't be modeled this way, if we had enough people with enough tools (like the tennis balls and chemicals) and detailed enough instructions? Obviously none of the people working in the model brain would need to understand the meaning of any of the inputs; they might not even be able to tell the difference between an input from the outside world and something another worker has done. But the system as a whole could take inputs and give outputs that seem to demonstrate understanding.

If we reject the systems response as Searle does for his Chinese Room, then we can't say the system understands any of the inputs. Since the system works the same way as our brain, how can Searle say that our brains can understand any semantic meaning? Wouldn't he require some kind of magic stuff that makes our brains work somehow differently from the model with people throwing tennis balls?
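A minimal sketch of the worker model described above, assuming invented names, thresholds, and rules (none of them from the comment's exact setup); each "worker" follows a blind rule, yet system-level behavior emerges:

```python
class Worker:
    """A rule-following worker: counts tennis-ball hits, never knows what they mean."""
    def __init__(self, targets, threshold=2):
        self.targets = targets        # whom to throw balls at when the rule fires
        self.threshold = threshold    # hits required before throwing
        self.hits = 0

    def receive_ball(self, chemical_x):
        # Fuzzy rule from the comment: lots of chemical X means "you really
        # need to throw that ball", modeled here as a lowered threshold.
        self.hits += 1
        needed = self.threshold - (1 if chemical_x > 0.5 else 0)
        if self.hits >= needed:
            self.hits = 0
            return True, self.targets
        return False, []

# Wire up a tiny three-worker "brain"; no individual worker understands anything.
workers = {
    "Sarah": Worker(["Tom"]),
    "Tom":   Worker(["Ana"]),
    "Ana":   Worker([]),   # Ana's firing is the system's output
}

def run(first, chemical_x):
    queue = [first]
    while queue:
        name = queue.pop(0)
        fired, targets = workers[name].receive_ball(chemical_x)
        queue.extend(targets)
        if fired and name == "Ana":
            print("system output: Ana fired")  # behavior emerges from blind rules

run("Sarah", chemical_x=0.7)
```

With the high chemical-X level, the signal propagates Sarah to Tom to Ana and the system "responds", even though each worker only counted hits against a threshold.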
@donaldb1 1 year ago
Well, yeah. Roughly, Searle thinks his thought experiment shows that brains don't exactly work like that. There must be something else about them, which we haven't discovered yet, which produces "real meaning", or original intentionality, as Searle calls it.
@mottykadosh 1 year ago
Brilliant, just nailed it; the whole room experiment is just a joke.
@jimjimmy2179 1 year ago
Except that you're making one very important assumption when writing this comment, which is: that human intelligence is a "program" telling neurons how to manipulate themselves, and that's all there is. So basically you are making a circular argument, whereby you start by assuming such a "program" exists and cycle back by stating that that's how it can work. I.e., your claim doesn't have any logical proof, as opposed to the Chinese Room, which shows the difference between intelligence (i.e. a capacity to understand meaning) and following a rule book without understanding. It shows this by drawing logical conclusions using well-known definitions OUTSIDE the Chinese Room, as opposed to your argument, which justifies the brain working this way by simply assuming that that's what it does. Besides, the majority of brain function is chemical, and we know very little about it. The reason people are obsessed with neurons is that they communicate using electric impulses that are easily measurable. There's one very important distinction as well: in the Chinese Room story, one knows exactly where the man takes his decisions from; it's the rule book. In a real human being, we can measure all sorts of brain activity, either induced or decided by the person. However, we are not able to measure the actual act of the decision itself. I.e., we (as our neuroscience) have no clue where your decision to write that comment physically comes from :) even though you can mentally explain it.
@ericfolkers4317 1 year ago
@@jimjimmy2179 Thanks for the well thought out reply. I will point out that my example does take into account the chemical aspects of the brain, though that isn't central to our discussion. I'm not exactly saying that human intelligence is a program, but I am saying it can be modeled by a program. But if that's not the case, what alternative would there be? Keep in mind that my model allows for fuzzy and probabilistic instructions. We can keep expanding my model to be more complex as needed; the only thing we can't add is some worker in the system who, by himself, does understand the inputs.

But then how does that one worker understand? If he understands, then wouldn't he need a mind? Is there an aspect of his own mind that is capable of understanding on its own? Either we need some kind of "final understander" or we end up with an infinite regress. What could this "final understander" be? If we take it to be a system of cognitive processing parts, then we have to accept the systems response. Is there some glob of brain stuff that does the understanding for us which isn't made up of some sort of system by which it does the understanding? Perhaps this is my failure of imagination, but that sounds completely absurd to me. This glob would have to be made up of smaller particles, right? If you scrape away enough of those particles, wouldn't it at some point lose its ability to understand? Unless the glob was actually just a single atom or a single quark.

So if the "final understander" isn't physical, what could it be? A non-physical mind, perhaps. If we take a mind to be the emergent property of a brain, or of some other collection of physical bits, then the mind is just another system. So if we take a mind to be an emergent property of physical things, and a mind understands, then we are back to accepting the systems response. If there is some part of the mind that is somehow more than just the processes and systems of physical things, then perhaps we are getting somewhere. But what would this part of the mind be? A soul, or some other sort of "real magic" as Daniel Dennett would call it? What else could it be?

Unless I'm missing something, we have reached a sort of dichotomy: either we believe in magic or we accept the systems response. If you need to posit some kind of magic to reject an argument, that's an indication that the argument is very strong. I suppose other possibilities are that there really are single quarks that can understand, which is too ridiculous to consider, or that "understanding" is not something that anyone or anything is actually capable of. If that's the case, we still seem to understand things and talk about the universe as if we understand things, and if the Chinese Room (as a system) seems to do the same, then we can treat it as if it understands things the same as us, without worrying about whether it actually understands anything (since actual understanding is impossible anyway).
@arturoidoyagamolina5109 1 year ago
@@ericfolkers4317 You formulated my thoughts in a way I'm totally incapable of doing lol. Assuming that's the answer, the systems response, then I guess it takes a lot of the mystery and superiority out of human existence; it liberates us in a sense. Idk. We would stop looking at animals, or at any AI in the near future, as inferior beings, or "not quite human", "just not really sentient". It would open up a lot of ethical questions about how we treat future advanced AI(s) as well.
@peves- 1 year ago
I don't think squiggle and squaggle are racist in the way he was using them. I think that he was trying to make sense of two symbols that are foreign to him by giving them names. To him they are scribbles and squiggles on paper. He can't differentiate what they mean, but for his purposes he needed to call them something.
@peterkiedron8949 1 year ago
This proves that Kaplan is a machine that does not know the meaning of the words it is using.
@stickman1742 1 year ago
Of course it wasn't. I guess this guy is just another person in fear of being cancelled. What sad environments some people are forced to work in, living in constant fear.
@spanglestein66 1 year ago
My sentiments exactly... anything can be turned into an issue of race these days. We can thank CRT for that.
@stuartjakl 1 year ago
It's not racist. It could be construed by some as disrespectful to their culture. I'm sure the Chinese have some less than stellar words for how our writing system looks to them. Others would say it's a remnant of colonial thought: that any writing system outside of the one you are familiar with in the English-speaking world is "squiggle squaggle", a colonial-era contemptuous term showing disdain for foreign languages and writing systems, and therefore it's racist because colonialism was racist.

Let's consider the time when this thought experiment was published, in a 1980 article by American philosopher John Searle. Born in 1932, he was obviously trying to use what would have been the most indecipherable, exotic, and probably least studied language in the United States at that time, at least compared to European languages. The example was meant to show a language so different to the average student, with a writing system unlike anything they were ordinarily used to (except maybe Chinese students). I'm sure we can come up with a name more fitting today's social climate. The foreign language room? The alien room? The alien language room?
@vdanger7669 1 year ago
Love Kaplan, but disappointed he couldn't pass up some good virtue signaling. We live in peak woke times, though, and I suspect he is a product of his academic herd environment.
@magellan500 1 year ago
This reminds me of Chomsky’s famous example of how syntax and semantics are separate, and that you can create grammatically correct sentences that are meaningless, which was “Colorless green ideas sleep furiously.”
@justifiedhomicide5730 1 year ago
Quite frankly, good point. Just because transistors do perfect syntax doesn't mean by default that they can or can't do the 'correct meaning'. To a transistor there are two meanings, yes and no. To a neuron there is a range of meanings, almost like any number between -1 and 1. Even though neurons have no goddamn clue what the emergent simulation of the brain is (despite the lack of ""semantics""), we still exist. And even though transistors have no goddamn clue what gravity is, they can still correctly simulate a falling object.
@JohnDlugosz 1 year ago
Wolfram's hour-long livestream about how ChatGPT works included examples of this; he gave one example of "The chair is happy". I thought that his examples, this one in particular, are _evocative_ of meaning, and could in fact be meaningful in context. So I offered it as a writing prompt to ChatGPT, asking it to write in the style of various authors. I recall many of Hans Christian Andersen's stories give a point of view and cognition to some object, and ChatGPT (3.5) was able to channel this. For some other writers, it was more straightforward magical fantasy. For Isaac Asimov, the chair was cybernetic, filled with sensors and microprocessors so it could adapt to the needs of its users.

Another time, I asked ChatGPT to generate 10 lines of nonsense. Interestingly, it was not gibberish but kept a syntactic correctness that only fails to make sense when you consider the meaning overall, as with your (Chomsky's) examples. But several of them sounded very poetic, and I directed ChatGPT to write a child's bedtime story using one of those "nonsense" lines as a starting point. "Every night, the sun sang lullabies to the stars..." Hearing the line, we craft an entire context to _make_ it make sense.
@davidjooste5788 1 year ago
That's an inadvertent definition of woke.
@kevinscales 1 year ago
It's grammatically correct, but it doesn't get all of the forms/patterns of the language correct. If we look at syntax as all that is formalizable about the language, then you can only get meaningful sentences from a machine that accurately manipulates those forms. I think meaning IS in the form; it's just difficult to grasp what that form is. Computers are getting pretty good at it, though.
@pumkin610 1 year ago
Luminous diagonal virtues eat destiny? Formless brave trees talk geometrically? Or as Aizen would say "Seeping crest of turbidity. Arrogant vessel of lunacy! Boil forth and deny! Grow numb and flicker! Disrupt sleep! Crawling queen of iron! Eternally self-destructing doll of mud! Unite! Repulse! Fill with soil and know your own powerlessness!"
@bojens865 1 year ago
I met Searle a few years ago. I had had two car accidents resulting in concussions, regaining consciousness in the hospital as if awakening from a dream. The third time I was hit, but walked away, waiting to wake up in the emergency room again; I never did. As it happened, Searle was speaking at the university the next day and I attended his lecture. He spoke of the Chinese Room, which I had read in his book years before.

After the talk, there were snacks and coffee in an adjoining room. Searle and his wife were sitting at a table by themselves and I asked to join them. I told them my experiences with loss and regaining of consciousness. Searle said the same thing happened to him. He hit his head skiing and made it back to the lodge with no memory of having done so. He was treated for concussion, after skiing for a mile while unconscious. At this point, philosophy students and professors showed up and started bouncing jargon off Searle, and I left. I'd just had a private conversation with one of the world's foremost philosophers; I wonder if I had in fact regained consciousness!
@JohnDlugosz 1 year ago
Try discussing philosophy with ChatGPT using the GPT-4 model. Just avoiding the strong mental blocks put in by the developers on top of the actual model is interesting in itself. It's also a surprise that _creativity_ emerges long before consciousness, with many of the building blocks of sapience and sentience still missing entirely.

I've asked ChatGPT to output in Chinese. Is it an actual Chinese Room running in the Azure data center? But when I asked it to write a short poem for my wife incorporating some kind of pun or wordplay on her name, it generated a pair of couplets in Chinese and translated them to English, and both versions rhyme, but in different ways. I don't see filing cabinets full of instructions processing symbols; I experience the presence of a creative mind. Nothing like this task was pre-programmed and left as instructions to follow. But a program processing tokens is _exactly_ what it is! The instructions for the tokens are on a very primitive level, though, not directly relating to any high-level task. The activation patterns in the huge number of "parameters" (neurons) form a new, distinct way of representing algorithms and high-level tasks. We can literally see now how that higher level emerges from the neural net, and is separate from the code that drives the individual neurons.

BTW, lack of long-term memory afterwards does not mean lack of immediate and short-term memory during, and does not imply he was not conscious while he was returning to the lodge. I experienced something similar recently during a minor medical procedure: the anesthesiologist explained that one of the ingredients was to "make me forget", and indeed I have no memory of the procedure. But when I had the same thing done once before, I remembered everything about it.
@frontiervirtcharter 1 year ago
Was Searle unconscious, or conscious but not retaining long-term memory of the events in the hours after the concussion?
@bojens865 1 year ago
@@frontiervirtcharter This was about 10 years ago. I remember him and his wife telling me he was unconscious.
@brotherjongrey9375 1 year ago
You still haven't regained consciousness
@starfishsystems 1 year ago
@@bojens865 Great story! And that is one of the real pleasures of living in the small academic world. We do get to meet with some very fine minds from time to time. I have somewhat the same story concerning Dan Dennett, just because I happened to attend a cognitive science conference where he was giving a talk. More to the point, here was a philosopher who thought it would be worthwhile to talk with cognitive scientists.

On the subject of consciousness, we know that we typically perform many of our routine tasks "on autopilot" while maintaining a kind of casual conscious situational awareness in case the need arises to step out of the task. Depending on choice of terminology, those tasks can reasonably be called "unconscious." And should the conscious supervisor (the part of the mind also most commonly responsible for conscious memory and recall) happen to become distracted, intoxicated, medicated, spaced out, or otherwise go offline for some reason, the unconscious processes may be able to continue unsupervised. It's the same brain, the same sensorium, the same accumulated body of knowledge, substantially the same mind, after all. I can well believe that Searle made it back while "unconscious" in this sense, and moreover not remembering any of the journey.

An interesting question would be whether he has retained any "unconscious" memories of the experience. It would be hard to test for it, but assuming there were certain distinctive events along the way that might be memorable, the idea would be to look for markers of (possibly unconscious) recognition when some facsimile of the event were replayed for Searle to see. Perhaps he would become explicitly conscious of the event when reminded. Or it might produce a distinctive spike in neural activity, a slight time difference when responding to questions, a change in eye saccade rate, et cetera. These slight clues won't tell the whole story of such a complex system, but they are slowly helping us to frame the investigation.

I started out in computer science in the 1970s, hoping to learn something about the nature of intelligence. At the time, I formed the impression that we'd need about a century to get there. That's a long time to contemplate, yet here we are halfway there already. And it feels about right. It's starting to come together. I think another fifty years will do it.
@jamesoakes4842 1 year ago
I find that one of the things I keep coming back to when processing the Chinese Room experiment is that there's a big similarity to some debates between creationists and atheists. Creationists will often challenge atheists to explain what touched off the beginning of the universe, the "unmoved mover", which they reason must be outside the universe as we know it so as not to violate known physics; therefore the existence of God, or some other supernatural entity, is proven. Similarly, with the Chinese Room, you can point to one element that needs more explanation: the instruction manual. If it can truly teach the individual how to respond to a Chinese symbol well enough to duplicate the responses of someone with a semantic understanding of Chinese, then I think it's impossible to say the manual was created without a semantic understanding of Chinese being involved. If said understanding is inserted into the room in the form of the manual, then it's not really a closed system reliant solely on syntax.

...of course, questioning the premise of a thought experiment isn't exactly revolutionary thought.
@charlesreid9337 1 year ago
The problem with creationist (and radical atheist) "logic" is that all their arguments require strawmen. Let's consider the big bang: per a creationist, that should prove God exists. Someone had to make it happen, so God must exist, right? No: we do not know. There are many possible explanations, including God. Science has no opinion on what it doesn't know.
@adriansmith6124 1 year ago
But I think what the experiment tries to show is not that understanding doesn't exist outside the room, or in the manual, but that the computer doing Turing-style calculations cannot understand it.
@AndyCampbellMusic 1 year ago
There are only atheists? Nobody can or does believe in all possible imagined gods? If there was nothing, then there would be no one to ask why there is nothing. If the claim is that something always existed, then so can something else. The universe is sufficient unto itself to explain itself and everything within it. 🤷‍♂️ If it wasn't there, there would be nobody to ask why it wasn't.
@cosmictreason2242 1 year ago
@@AndyCampbellMusic No, it's not, and no, they can't. Only uncaused things can exist acausally. The universe is caused; therefore it isn't self-existent. You claim the universe is self-existent, but you simultaneously assert that the whole is the sum of its parts, and you can't point to any part of the universe that's confirmed to be self-existent. This isn't even the cosmological argument; it's just a refutation of your denial.
@ronald3836 10 months ago
Not even Chinese babies are born with knowledge of Chinese, but with enough syntactic exposure they acquire real "understanding" of the Chinese language.
@john_hind 1 year ago
'A simulation of a hurricane is not a hurricane', no, but a simulation of a word processor is a word processor, a simulation of a dictionary is a dictionary. I once wrote a dialog called 'John Searle Meets Another Intelligent Machine'. At the climax, the machine seems to get frustrated by Searle's obstinate refusal to accept its consciousness, thumps Searle, admits it was simulating anger to make a point and asks if knowing that makes the pain any less painful!
@jdscouch 7 months ago
And then did Searle thump the machine back and say, "Stop pretending that hurt"?
@john_hind 7 months ago
@@jdscouch Nope, he accepted he was insisting on a distinction that makes no difference and retired from philosophy! But your ending works too, with Searle the first casualty of a simulated war!
@dooleyfan 1 year ago
Speaking of Turing, what I found interesting is that the huts at Bletchley Park were essentially isolated Chinese Rooms, where the codebreakers responsible for different steps in the decryption process followed syntactic rules without being aware of the meanings behind their outputs, with the exception of the people at the end of the process.
@pygmalionsrobot1896 1 year ago
Your expository style is energetic and inspiring, and I applaud you and your channel. Thank you.
@chrism.1131 1 month ago
25:10 what if there is no such thing as semantics? We are all just good enough at syntax to appear from the outside as though we are actually conveying semantics. Machines may eventually get just as good at this as we are.😜
@ND-im1wn 1 year ago
Amazing how much more relevant this problem, video, and explanation are today with ChatGPT. I understood this concept in an intuitive way, but now I have the language to explain and understand it explicitly. Thanks!
@izansengun 3 years ago
What a wonderful way of teaching! Great content, sir. Great job!
@dwinsemius 1 year ago
Well done. Thank you. I suffered through Searle's "Philosophy of Mind" course at Berkeley in 1970. It was mostly reading and considering Hume's writings. It was definitely NOT what I had been hoping for. My final paper in the course, heavily influenced by reading Julian Huxley, was my version of functionalism and an attack on Hume's rejection of induction as a sensible basis of knowledge. I was a physics major at the time, so abandoning induction as a path to knowledge was unthinkable. (Also, Hume's use of self-reflection as his primary data-gathering tool is easily as fallible as induction.) I only got a B+, which I found annoying but totally understandable, given the distaste I had for Hume, and by association Searle, at that point. Then 10 years later Searle reappeared on my radar screen when his attack on Strong AI appeared in Scientific American. I found his arguments entirely unconvincing. I had already accepted the Turing test as a reasonable basis for assessing the expression of language as "intelligent" output of a process. A few years ago I found a tome from the late 1800s by Huxley on Hume, and I periodically pick it up and enjoy random bits of it.
@matswessling6600 1 year ago
Induction isn't a path to knowledge, but that is not a problem, since science isn't based on induction.
@hinteregions 1 year ago
Yeah, me too. He seems not to understand what Dennett, for example, is doing, because he isn't able to see all the implications of his own thought experiment. If we simply take his main thesis to the extreme, as we must and as he for some reason does not, with every single neuron replicated, and whatever neurochemical signal each is about to transmit too (not really so different from the cause and effect that is the basis for determinism, if not its very essence), then yes, this would necessarily be a perfect simulacrum of his mind, and of his memories and thoughts and feelings too, since he takes his own mind as the subject of the experiment. We might have to hook it up to some sensory inputs and give it some way of communicating, but I have to assume that's a trivial matter in this context. IF we could make such a marvellously complete copy of that human organ to Searle's very own specifications, properly and fully as opposed to his convenient "partially", THEN unfortunately Searle is hoist on his own petard. The fact that we cannot is irrelevant.
@nosuchthing8 1 year ago
Thank you. I agree with your assessment; I read that article in SA too. What do we do with ChatGPT? It seems close to passing the Turing test. Please try it if you have not already.
@dwinsemius 1 year ago
@@nosuchthing8 I have "chatted" with ChatGPT 3.5. It's like a sociopathic college student. A bit like a Donald Trump, but unlike that particular human it actually 'speaks' in complete sentences with minimal digressions. It makes up stuff and cites non-existent sources to fill in and support syntactically correct slots in its explanations. It is built to sound good but has limited learning capacity. It also denies being human, so it is perhaps close to Turing-passing, but not yet convincing to me.
@nosuchthing8 1 year ago
@@dwinsemius Yes, I agree with your assessment. Close but no cigar. But let me give you an example. I asked for its interpretation of the fable "The Emperor's New Clothes", which, as you know, has the emperor parading around in his birthday suit because he's gaslighted by some villains. ChatGPT gave a very good assessment, and then I asked if there is a connection to burden of proof, and it readily explained how burden of proof was key to the story. So it's certainly close to passing the Turing test.
@perfectionbox 1 year ago
The fact that mere symbolic processing can produce fairly good output is helpful in brain development. As a child correlates experiences to language symbols, the job is made easier by powerful symbol processing, where guesses/estimates/predictions are often useful; and even afterwards, because much sensory input is garbled or missing, intelligent subconscious guesswork fills in the gaps. We haven't created true general AI, but we have uncovered an important piece.
@Inzomniac9 11 months ago
The background information you gave was perfect for understanding the experiment. Thanks!
@devtea 1 year ago
Thank you so much for this video! This is by far the best explanation of the Chinese Room.

Within Searle's imagined ideal conditions of the experiment, Searle is correct, and the conclusion would be true. At least it would be true for, say, a digital computer such as a calculator or a watch. However, since Searle's article wasn't concerned only with the ideal conditions, and because it showed interest in extrapolating this into practice, there is indeed a version of a 'systems' response to Searle's original article.

One can argue that Searle's conclusion (that it is impossible for the system, or the individual person/central processing unit inside of it, to learn the semantic meaning of the language) is false, because it is impossible to guarantee Searle's imagined ideal conditions (the ideal circumstances of the room: perfect isolation and perfect access control that would prevent any unauthorized outside contact or exchange of information) for an extended length of time. There exists a positive, non-zero probability of an outside entity stepping into this setup (uninvited) and teaching the person/central processing unit inside the room the full semantic meaning of the entire language, without ever entering the room: for example, by passing information into the room on purpose, with the specific intent to train it, as one would train an artificial general intelligence. Given a sufficient length of time in real life, i.e. years, there is a non-zero likelihood that the ideal circumstances of the room would be altered by an outside party (say, an enthusiastic ML Ops engineer). Since the person/central processing unit inside the room does have command of some language other than Chinese, which it uses to understand the instructions, it is not impossible for it to receive (from some enthusiastic person or entity outside the room) new information with instructions, processes, and methods that build up into a full semantic understanding of Chinese, or any other language. The same goes for the classic digital computer. Especially if there's no time limitation, it's not impossible to perform this in much less than the mentioned 1000 years. Difficult and not likely, yes; labor intensive, yes; but not impossible at all, and with humans being humans, significantly probable.

Of course, the above would not impart a 'human' experience of speaking and interacting in Mandarin or Cantonese. But a full understanding of the language? Yes. In the case of a digital computer: if it can understand the binary numbering system, it can understand Chinese. It will be able to hold a conversation with a human. It won't sound like a classmate or a neighbor or someone you can relate to as a natural-born human, so the conversation will likely be perceived as less meaningful, but the language comprehension would be complete.

Again, Searle isn't wrong. Within the thought experiment, Searle's conclusion is technically correct. It just has limited utility. It's like performing a thought experiment where one asks us to grant the assumption that telomeres (stretches of DNA) at the ends of our chromosomes do not clip off and get shorter each time one of our cells divides. If we grant that assumption, we can construct a setup where a human lives 'forever', or at least does not die from aging. The thing is, since it's not possible to guarantee that "ideal" assumption, you and I are still losing about 26 base pairs per year; so while the thought experiment is really interesting, the conclusions from it alone have rather limited utility.
@tedphelps 1 year ago
Beautiful thinking. Thoughts of all sorts do stand on the stage to be judged for their actual value to us in living. Part of the problem is too strong a belief in proving some idea 'false.' Instead, I feel that ideas influence me, have a certain weight and value, and move me this way or that in the wide world I live in. I am happy for all of this.
@dowunda 1 year ago
How does Searle define what it means to understand something? Viewed exclusively in the physical world, people can be seen as a kind of computer, the brain itself being a kind of Chinese Room.
@recompile 1 year ago
Searle makes a convincing case that whatever it is that brains do, it is not mere computation. You might think you're a computer, but that's just because that's the most advanced technology we have at the moment. 100 years ago, you might have thought the brain was like clockwork. The idea that brains are like computers will seem just as silly in the not too distant future.
@dowunda 1 year ago
"Brain: an apparatus with which we think we think.“ - Ambrose Bierce
@bombmk2590 1 year ago
@@recompile I have yet to see a convincing argument that it is anything but computation. How could it be more?
@costadev8970 1 year ago
@@bombmk2590 You have subjective experiences; a computer (a symbol manipulator) does not.
@calorion 1 year ago
@@recompile "The brain is like clockwork" is not really a silly idea. Unsophisticated, sure. As we get better computers, we'll get a better understanding of how brains work. But a physicalist determinist basically does think that the brain is like clockwork on a theoretical level.
@jmiki89 1 year ago
Actually, if you think about it, that's almost exactly how infants learn their native languages in the first place, except they don't even have a rulebook; they have to figure that out for themselves. True, they get more than symbolic input, but for them the audio sequence for "mum" doesn't have any more meaning than the made-up word "blarduk" has for you or me. They can differentiate between different frequencies and whatnot and try to mimic them via trial and error (the difference between them and the blarduk example is that we have much more experience making sounds with our vocal organs, so we would make far fewer errors and hence need far fewer attempts to repeat this new word). And yes, babies have feedback loops to help them through the learning process, but those are basically just another bunch of input. There might be some genetically imprinted social behavior patterns guiding which of these feedback inputs should be considered positive and which negative, but all together those can still be paralleled with a deeper-level rulebook from the Chinese Room experiment.
@erikmagnusson5713 1 year ago
Good point. The feedback loop is what is missing in the Chinese Room. The rule book is never updated. The system never learns anything. So if the rule book doesn't contain understanding/semantics and there is no mechanism for learning, then the system will never understand semantics... ...I now find the Chinese Room uninteresting...
@brotherpaul963 1 year ago
@@erikmagnusson5713 Funny!!!
@sandornyemcsok4168 1 year ago
I agree. The Chinese Room is nothing more than a good presentation of how a computer works nowadays. That's all. Does it prove that a computer cannot be made to behave like a human? Absolutely not. Additionally, think about how 'eternally' defined semantics is. Only simple things, like bread, wind, etc., do not change. But let's take something complex, for example "pious". How much has its content changed in the past centuries? In this case the semantics depends on the historical age and social context, above the individual.
@rickgilbert7460 1 year ago
I don't know that I agree. The infant learns that the sound "mum" is associated with the idea of a specific person by repeating it in the context of that person. Later, someone points to an actual tree and says "tree", and keeps doing that until the child learns the *semantic* understanding that the object in the yard "is a tree." So children learn the syntax by repetition of the syntactic rules, but they *also* learn the semantics by being taught them specifically, and separately from the syntax, right?
@jmiki89 1 year ago
@@rickgilbert7460 But the face of their mother or the sight of a tree is nothing but another kind of sensory input without any intrinsic semantic meaning. True, one may argue that humans are genetically hardwired toward facial recognition to a fault (we even see faces in places where there clearly aren't any), but the point is that the semantics is created inside the infant's mind via (as you pointed out) repetition and feedback. But in the thought experiment, the person in the room was given a supposedly complete and exhaustive but static rulebook of the Chinese language, with which the room as a whole can imitate a full conversation. Which begs the question: can such a rulebook exist? From the perspective of a single human life it may seem that the semantics of words are permanent and unchanging, but (especially in the age of the internet and smart devices) concepts are evolving, too. We call both a smartphone and Bell's original invention a "telephone", but those are clearly different things connected only by the vaguest of similarities. So the rulebook in the room needs a way to be updated, and the only entity capable of doing that is the person in the room; and to do that, he needs some kind of feedback, which immediately leads us back to learning.
@ameliagerson926 1 year ago
I actually can read Hebrew and was so excited I knew what that meant bc it was the only part of the video I confidently knew lol
@jollyroger105 10 months ago
Thank you. I really appreciate you having put so much heart and soul into making your videos. I am truly enlightened.
@baltazarcortez7328 1 year ago
It was the Polish mathematician Marian Rejewski who moved the needle when it comes to cracking the Enigma machine. Alan Turing built on the work of someone way smarter than him.
@Raoul684 1 year ago
Great explanation, again. I love these videos, so thought-provoking. My addition against strong AI is to ask: what is the computer doing, absent any questions or inputs? That, to me, seems equally relevant for consciousness, if not more so.
@DAG_42 1 year ago
If an anaesthesiologist stops your flow of thoughts by chemicals, you go unconscious. That's just taking the symbol shuffler guy out of the Chinese room.
@antonnie5177 3 years ago
You are saving my exam next week.
@finald1316 1 year ago
Aside: there is a small nuance with the Chinese language. The symbols are tied to meanings, not phonetics, so just as you can decrypt messages from letter frequencies, it is plausible that you could infer the meaning of some symbols (although never be certain of it) from their expected frequency. The symbol for moon is tied to the month, which relates to the feminine via the menstrual cycle. Not that you couldn't try the same approach in other languages, but they have a layer of indirection due to being tied to phonetics.
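A toy sketch of that frequency idea; the message stream and the "moon" guess are invented for illustration, not real corpus statistics:

```python
from collections import Counter

# Hypothetical stream of incoming messages; to the room's operator these
# are meaningless squiggles.
messages = ["月亮很美", "今天有月食", "月底见", "山上看月亮"]

# Count character frequencies: pure syntax, no meanings needed.
counts = Counter(ch for msg in messages for ch in msg)

# The operator can rank symbols by frequency without knowing any meanings...
for symbol, n in counts.most_common(3):
    print(symbol, n)

# ...and only guess (never verify from inside the room) that the most
# frequent symbol (月, 'moon') tracks some frequent topic outside.
```

This is the same move classical frequency analysis makes against substitution ciphers; the comment's point is that meaning-tied symbols expose meaning-tied frequencies more directly than phonetic scripts do.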
@koenth2359 1 year ago
That was what I was thinking. Ironically, the problem of decoding Enigma was therefore much more complex than this task.
@leslierhorer1412 1 year ago
Not only frequency, but more importantly, context. If the inputs to the system allow it to assess when certain syntaxes are encountered, i.e. context, then the system can begin to make certain inferences about the syntax itself. This is the emergence of a realization of semantics from syntax in an empirical framework. I submit such an ability to scrutinize the syntax is critical to the development of a semantic organization, but that it is indeed possible if the coding is also malleable. In addition to his questionable assumptions, Searle seems to be completely ignoring these factors. He is asserting that learning must be limited only to efficiency in the translation mechanism.
@koenth2359 1 year ago
@@leslierhorer1412 It is not all too different from what an infant accomplishes when he/she is trying to make sense of all sensory inputs, and finally manages to learn a language. And the infant manages! (#Chomsky #Language Acquisition Device).
@finald1316 1 year ago
@@koenth2359 I am not aware of how the Enigma machine worked, but there are more words than letters. From a data standpoint, if Enigma works over letters, it should be easier to crack.
@finald1316 1 year ago
@@leslierhorer1412 Reminds me of IQ tests, where you can only be sure if you check the solutions. I guess that is another discussion, related to how language is constructed. There is some correctness in ignoring that if you account for "older" AI solutions, which are not generic in nature and are just algorithms for computing something very specific. However, when the system learns using an arbitrary number of layers, the "interpretation" of the input is not an algorithm set in stone; rather, we have implemented a sort of meta-algorithm, i.e. instructions that will give us the instructions to return the correct Chinese character.
@echoawoo7195 1 year ago
The sensations you experience as a child are all symbols without semantics. The semantics derive from repeated exposure to those symbols. Given enough experience with a syntax, you can determine semantics. That's what infancy literally does. This entire thought experiment hinges on understanding not being an emergent property of a sufficiently complex information processing system.
@echoawoo7195 1 year ago
Go pick up a pictureless book in a foreign language unrelated to your own language family and tell me you can't pick out the meaning of some word groupings once you see a large enough set of symbols.
@wirewyrm01 1 year ago
There is a paradox in the thought experiment. The person in the room is tasked with manipulating symbols, not with trying to figure out what they mean. Therefore, it follows naturally that the person in the room cannot (or more accurately, will not) figure out what the symbols mean. Indeed, the meaning of the symbols is completely irrelevant, so positing that the person could never understand the semantic meaning of the symbols is also irrelevant, because that was never part of the design.

On the other hand, I would propose that Searle's assertion that the person in the room could never gain an understanding of the symbols, even if they tried, is false. Perhaps the person cannot gain much insight from studying the symbols alone, but if the person studied the *instructions*, surely they would be able to glean some information about the symbols and their contextual use. Patterns will emerge from the frequency of use of certain symbols, the association of certain symbols with each other, symbols occurring in question-answer pairs, and so on. Furthermore, from the frequency and sequence of "questions" received, the person can also start to observe patterns and eventually triangulate the semantic meanings of the symbols. In fact, many of these techniques are used in the study and research of dead languages.

There are other problems that I can see with the thought experiment, but these are the most easily defined ones.
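A rough sketch of the kind of pattern-mining described above; the rule-book entries are invented, and the "question particle" observation is only an illustrative guess the operator might form from structure alone:

```python
from collections import defaultdict

# Invented question -> answer rules, standing in for the room's instruction book.
rulebook = [
    ("你好吗", "我很好"),
    ("你饿吗", "我很饿"),
    ("你累吗", "我很累"),
]

# Without knowing any meanings, the operator can notice a structural pattern:
# the symbol just before the recurring final symbol of every question
# reappears as the final symbol of the paired answer.
echoes = defaultdict(int)
for question, answer in rulebook:
    if question[-2] == answer[-1]:
        echoes[question[-2]] += 1

print(dict(echoes))  # evidence of a question/answer template; still no semantics
```

(In fact 吗 is a question particle, but nothing in the room tells the operator that; the point is only that the instructions themselves leak statistical structure.)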
@LoraxChannel 1 year ago
Yes. This is exactly why modern AI is fed huge language bases and tasked with creating context and relationships and distinctions, just as we do in language. They are no longer limited to manipulating digits. I mean, that is the whole point of designing AGI, so it can learn and assign "meaning" independently.
@stefans.5043 1 year ago
The person inside the room will never know the meaning of the symbols when he doesn't know the question he is asked or the answer he gives. In this experiment he only acts on given instructions, not on human behavior like observing or recognizing patterns. And even when he sees patterns, he still doesn't know their semantic meaning. Thts th dffrnts btwn hmns nd cmptrs: you can probably read that last part only by knowing the meaning of the words, not the meaning of the symbols. A computer cannot.
@fang_xianfu 1 year ago
Yes - the other part is, where does the book come from? Real minds write their own book of rules to manipulate the Chinese symbols, and they edit their own book as they try new things and they succeed or fail.
@SatanDynastyKiller 1 year ago
I knew someone in here would be as smart as me lol; saves me typing it, thank you. The easiest way to cover it all: until we understand everything, we understand nothing. I genuinely think some of these "intellectuals" are not exactly what they claim to be...
@LoraxChannel 1 year ago
@@SatanDynastyKiller It's "as smart as I." When you are claiming smartness, it matters.
@gcewing 1 year ago
I think Searle is right when he says that "you can't get from syntax to semantics", but that doesn't mean that a computer can never understand anything. His hypothetical room is limited to interacting with the world via language, but humans are not so limited. We have other senses by which we get information about the world, and we can correlate that with language that other people use when talking about the world, and from that we can infer semantics for the symbols. I don't see any reason why a computer equipped with a similar set of sensory inputs couldn't do the same.
@recompile 1 year ago
You should have done the reading, as Searle answers this already. You're trying to sneak semantics in through the back door. A computer doesn't get a picture from a digital camera, just more meaningless symbols.
@gcewing 1 year ago
@@recompile My point is that you *can't* just throw out semantics and then claim you have a proof that computers can never think or understand the way humans do. The reason being that if you put a human in a similarly restricted sensory environment, the human would never learn to understand anything about the real world either. You've rigged the premises of the argument to get the result you wanted.
@davidb9682 1 year ago
@@gcewing Exactly what I was thinking. In the experiment, all that has been shown is that a 'digital computer' or a human, put in the same room (with the same limitations on inputs and rules), would produce the same output. In fact, the computer would do it quicker and without error, by more closely and quickly following the rules. Provide the computer with additional information, as we do for humans, by providing sensory input, such as seeing the response to an answer provided through the slot, and we would find that there is no difference.
@SmileyEmoji42 1 year ago
We only have the input from our nerves, and that is just in the form of the rate of electrical discharges at synapses. The "semantics" are added in pre-processing inside the brain, above the level of these input "symbols", which are even more basic than those entering the Chinese Room. There's an argument that AI needs to be equipped with senses to really understand, but the counter to that would be a person with locked-in syndrome, who has lost contact with their senses. They would probably quickly go mad, but nobody would argue that they lost their understanding along with their senses. So I think that, whilst extremely useful for learning, senses are not necessary in the context of the thought experiment.
@mattmerc8513 3 years ago
Thank you so much for your vids. You've explained it far better than any paper, research, or teacher that I've come across.
@xbzq 1 year ago
That goes to show you don't come across many papers, research, or teachers.
@ozymandiasultor9480 1 year ago
@@xbzq well said.
@ozymandiasultor9480 1 year ago
Where have you studied philosophy and logic? At which university are the professors so bad that a mediocre channel with simplistic explanations is so much better? I am not saying that this channel is bad, but it is for laymen; those are not exactly top-notch explanations.
@hb-robo 11 months ago
@@xbzq Why is everyone in this comment section such a brusque asshole? Perspective is relative; leave them alone.
@Leao_da_Montanha 1 year ago
If humans could understand the semantics of words in the way Searle appeals to when he disqualifies strong AI, there would be no communication problems at all; it would be as if the different orderings and explanations in each person's learning process resulted in the same meaning for every word in every mind. In general, semantics is different for each individual and depends on the learning process they had. At bottom, the memory we acquire for each symbol is updated when learning newer symbols, until there's enough context for semantics. In other words, we all work as Turing machines fundamentally, but arranged in a complex system. I would love to read comments on this; feel free to respond.
@ronald3836 10 months ago
It would be easier to comment if I disagreed with you, but I don't 🙂
@Martial-Mat 1 year ago
So if you take a machine designed never to have understanding, it can't have understanding? What a superb insight. All you have to do is define a computer so narrowly that by definition it meets Searle's criteria, then it meets Searle's criteria? What a thought leader.
@bradleyboyer9979 11 months ago
You seem to be ignorant of how all computers work. All computers operate in the way Searle described: inputs and outputs. It doesn't matter if it was the first CPU ever created or the supercomputers of today (though quantum computing is arguably different, given our modern understanding of physics).
@anxez 1 year ago
I can think of some direct counterexamples.

1: A Chinese Translating Room. Set up the Chinese Room but make the output of the room in a language known to the operator. Suddenly the situation changes, and the operator could pull semantics out of the text; we'd all agree on that. Maybe it would take years; maybe the rule set itself would be considered a way to teach Chinese. But what is the effective difference between the Chinese Room and the Chinese Translating Room? The short answer is context: the operator is able to connect the symbols to previous meaning. This is a mostly trivial difference, because it piggybacks on previous semantics to generate new semantics. But it does bring up a possible refinement.

2: The Chinese Picture Room. Set up the Chinese Room just the same: a set of rules that govern syntax-in => syntax-out, bins which receive characters, bins which accept characters; only now, have every set of characters come with a snapshot of the situation it appeared in (see the sketch after this comment). Once again, semantics has appeared, this time a little more robustly: the operator doesn't need any native language in order to learn Chinese from this setup. It may take years, and they may be unable to speak the language, but the operator will be able to develop a semantic knowledge of the language. Heck, go one step further: by feeding the person in the room pictures and characters in the right way, that person can build the rule set themselves without being programmed, because that's what babies do. And spoiler alert, that's what Turing-complete machines do too, by the way.

Honestly though, this thought experiment does a lot of heavy lifting by not defining semantic understanding or proposing how it actually arises. He just takes it for granted, gestures at how it doesn't arise from his specific scenario, and when he's given a silver-bullet argument against it, the response is to shrug it off.
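A rough sketch of the Picture Room idea; the phrases and scene "snapshots" are invented for illustration, with the snapshots reduced to crude feature dictionaries:

```python
from collections import Counter, defaultdict

# Invented (symbols, snapshot) pairs: each incoming phrase arrives with a
# crude description of the scene it was used in.
observations = [
    ("下雨", {"sky": "grey", "wet": True}),
    ("下雨", {"sky": "dark", "wet": True}),
    ("晴天", {"sky": "blue", "wet": False}),
]

# The operator correlates each symbol string with recurring scene features.
evidence = defaultdict(Counter)
for phrase, scene in observations:
    evidence[phrase].update(scene.items())

# 下雨 co-occurs with wet=True every single time: the beginnings of a
# grounded guess that it has something to do with rain.
print(evidence["下雨"].most_common())
```

The design point is that pairing symbols with sensory context is exactly where grounding gets smuggled in: the correlation step is trivial once the snapshots exist, which is why the Picture Room feels so much closer to how infants (and sensor-equipped machines) could learn.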
@nitramreniar 1 year ago
To point number one: changing this part of the setup might work logically as a thought experiment, but it breaks the connection between the Chinese Room and the digital systems it is meant to be analogous to. The fact that you could learn Chinese by being given random Chinese symbols and phrases along with instructions on how to translate them into a language you know rests on the fact that you *already* know a language: you already have the semantic understanding in one form and only need a way to transfer those semantics into Chinese. The reason the thought experiment has both input and output in a language you don't understand is that we assume, reasonably in this context, that the digital computer has no deeper language that it understands semantically and that would merely need a translation for it to truly learn and understand Chinese.

On the second point, I agree with you. In fact, I feel that the thought experiment already betrays a problem with its result in its own setup. The thought experiment asks us to imagine *ourselves*, a human mind, in the situation of a digital computer in this room, and uses our human intuition that we could never truly understand Chinese through this setup to reason that a digital machine could also never do this, and thus that digital machines are distinctly different from human minds. But it started by reasoning that the human mind couldn't do this either, so how can that show a difference between the two systems?

The way humans learn their first language, not just syntactically but with an understanding of the semantics, takes more than looking at random symbols as input and figuring out the appropriate output. In fact, written language is already a bad point of comparison, as written language is something humans only understand through translation: a translation from the spoken and internal understanding of language, and the connected semantics, *into* written language. Humans seem to learn their first language by engaging with it through a multitude of senses; we *experience* the language and the contexts in which different sounds and gestures are used, until we have enough of an understanding of the language itself to further explore nuances through the understanding we have already established.
@amaarquadri 2 years ago
I would push back against the idea that you can never learn semantics from syntax alone. I think given enough time in the Chinese Room, you would eventually learn Chinese just as well as a native speaker. Consider the GPT-3 language model discussed in these videos: kzbin.info/www/bejne/lWnch4J5acicjsU, kzbin.info/www/bejne/hqLFc2Nto8Soh5Y. Despite only learning from the syntax of a large corpus of English text, it is able to construct coherent, well thought out sentences. For all intents and purposes, it (or at least a future, more advanced version of it) does "understand" the semantics of language. In a certain sense, if you zoom in enough, human brains are just manipulating electrical inputs and producing electrical outputs with no understanding of what they mean semantically. It's just a set of particles following the laws of physics. Nonetheless, the system as a whole can develop understanding.
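A minimal sketch of picking up linguistic form alone from a corpus: a toy bigram model, nothing like GPT-3's scale or architecture, with an invented six-word corpus. It only ever sees which token follows which, yet its output starts to look like language:

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigrams: pure syntax, no meanings attached to any token.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start="the", max_len=8):
    word, out = start, [start]
    for _ in range(max_len - 1):
        word = random.choice(follows[word])  # pick the next token by observed form only
        out.append(word)
        if word == ".":
            break
    return " ".join(out)

print(generate())  # e.g. "the cat sat on the rug ." ; plausible form, no grasp of cats or rugs
```

Whether scaling this kind of next-token statistics up by many orders of magnitude ever amounts to "understanding" is exactly the disagreement between the comment above and Searle's reply below.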
@guillecorte
@guillecorte 2 жыл бұрын
The point is that you have to put the "" there, because you know it is not real "understanding". Also, you missed the point: it is not whether after years in the room you could or could not learn Chinese, but that you could reply "in Chinese" without really understanding it.
@MuantanamoMobile
@MuantanamoMobile 2 жыл бұрын
"Colorless green ideas sleep furiously" is a sentence composed by Noam Chomsky in his 1957 book Syntactic Structures as an example of a sentence that is grammatically well-formed, but semantically nonsensical. GPT--3 often makes Noam Chomskyesque statements that are syntactically correct but nonsensical, because it doesn't understand.
@perfectionbox
@perfectionbox Жыл бұрын
An additional proof would be that, if the person inside the room (or the system as a whole) understood Chinese, then it should be able to deal with unforeseen expressions, new slang, word evolution, etc. But it can't unless its rulebook is updated. It's utterly at the mercy of its program. And the only way to update the program is via an external agent that understands Chinese. The question then becomes: is there a program sophisticated enough to do what the external agent can do?
@danwylie-sears1134
@danwylie-sears1134 Жыл бұрын
@@perfectionbox Programs are data. The absolute distinction you're appealing to, between the impossible-to-update rule book and the constantly-updated arrangement of papers in baskets, does not exist. It's an actual theorem that a suitably designed Turing machine can receive its program as part of the initial content of its tape, and the arbitrary input as the rest of the initial content. No matter what other Turing machine you pick, there's an appropriate program part that makes the program-reading Turing machine match the output the other one would give if it received just the arbitrary-input part on its tape. And with real-world computers, it's literally true that programs are data, stored on the hard drive or RAM or whatever, same as any other data.
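The programs-are-data point can be made concrete in a few lines. Here is a minimal sketch (Python standing in for the Turing machine; the rule format is invented for illustration): one fixed interpreter receives its "rule book" as the first part of the input and the "question" as the rest, so changing behavior never requires changing the machine.

def interpreter(tape):
    # One fixed machine: split the tape into rule book and input,
    # then apply the rule book to the input. The rules are data.
    rules_part, _, data = tape.partition("|")
    rules = dict(r.split(">") for r in rules_part.split(";"))
    return "".join(rules.get(ch, ch) for ch in data)

# Two different "machines", both run by the same interpreter,
# differing only in the rule book prefixed to the tape:
print(interpreter("a>b;b>a|abba"))   # swaps a and b -> "baab"
print(interpreter("a>x|abba"))       # replaces a with x -> "xbbx"

Updating the rule book is then just writing different data at the front of the tape, which is Turing's universal-machine construction in miniature.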
@recompile
@recompile Жыл бұрын
Nonsense. Meaning is completely irrelevant here. ChatGPT no more understands the meaning of the word 'hamburger' than you understand the meaning of 汉堡包. ChatGPT very clearly makes Searle's case. There is no understanding at all, just symbol manipulation, yet it can produce coherent replies. You can even see its complete lack of understanding in its many spectacular failures. (For example, it will happily produce a program to compute the length of the fourth side of a triangle given the lengths of the other three sides!) Try this: Consider a complete and comprehensive unilingual Chinese dictionary. You could study that for 1000 years, noting every detail about the symbols themselves and the relationships between the symbols. At best, you could create a grammar. At no point in your study could you identify the symbols that represent the word 'hamburger'. The meaning simply isn't in the symbols.
@Sergiopoo
@Sergiopoo Жыл бұрын
wouldn't a single neuron in the brain be the person in the Chinese room, while the brain is the system?
@cosmictreason2242
@cosmictreason2242 Жыл бұрын
Does a single neuron process input and output with correct syntax? That would be like saying a light bulb in the room could give the correct response.
@presto709
@presto709 Жыл бұрын
This was great. I think I first learned of the Chinese Room from a book by Martin Gardner. I think I come down on the systems response: the system does understand Chinese, because looking at the input and giving the correct output is what understanding means. It's the same test we would give to a person who claims to understand Chinese.
@kid5Media
@kid5Media Жыл бұрын
No. Or, to tweak things a little: the person outside the room, instead of passing in a question, passes in the instruction to order a glass of tea. The person inside the room will die of thirst (unless rescued by the Infinite Monkey Typewriter Brigade).
@presto709
@presto709 Жыл бұрын
@@kid5Media Interesting. We aren't told what the book inside the room will do when a nonquestion is the input. Postulate that the instruction book translates nonquestions into his language which he recognizes and reads. Interesting but I'm not sure how it changes anything.
@theconiferoust9598
@theconiferoust9598 Жыл бұрын
the «system» includes humans with consciousness to interpret and glean meaning. in other words, you are saying there is a «correct» output to every given input, which is nonsense and obviously not how life works. it's like saying meaning is self-evident in physical matter, symbols, or mathematics, as if a computer could take the word «love» and output every single iteration of the meaning that has ever been conceived, felt, lived by every human ever. there is no correct output. conversely, it seems there is no meaning without a conscious experience, and the «systems» response only affirms this.
@presto709
@presto709 Жыл бұрын
@@theconiferoust9598 YOU WROTE you are saying there is a «correct» output to every given input, which is nonsense and obviously not how life works. REPLY The test wouldn't be if it gives the correct answer. It would be if it gives a convincingly human answer. Like the Turing test. If you ask "How is the weather" and the answer comes back "27" that would not make you think there was a mind at work. If you asked, "Is marriage a good thing?" You would not be looking for a correct answer, you would be looking for an answer that indicates an understanding of the question.
@presto709
@presto709 Жыл бұрын
@@theconiferoust9598 YOU WROTE You are saying there is a «correct» output to every given input, which is nonsense and obviously not how life works. REPLY I'm not saying that at all. When interacting with another person, I do not require that all of his answers conform to my opinion of correct. Only that they generally seem to be responsive. I might interact with a person who gives absolutely no "correct" answers in my opinion but still seems to clearly be a "mind".
@quokka_11
@quokka_11 Жыл бұрын
20:19 "You're never going to be able to figure out semantics from syntax." Except we're talking about human language, and you already have your own experience of that. With exposure to enough earnest material (not nonsense), you would eventually make meaningful connections and at least some of the semantics would emerge.
@brandyraccoon1473
@brandyraccoon1473 Жыл бұрын
At 18:45, you mention the squiggles seem a little racist. And perhaps you think that's just an opinion. As a Chinese person, I can tell you it is not the least bit racist. And I can prove it objectively. Note that "squiggle" is syntax and "racist" requires semantics. So by your very own argument, syntax is not semantics, therefore squiggles cannot be racist. If that doesn't convince you, I have a second argument. Something that is "racist" is directed at a particular group. It cannot be general to all groups, or it stops being racist. Describing something as squiggles is a general descriptor. It could describe Chinese or any number of written languages. Therefore, describing a written language as squiggly is not racist.
@mattmiller4233
@mattmiller4233 Жыл бұрын
Great video! Very well explained. I would add two points, though - the first you mentioned very briefly, but it is worth stressing that the Chinese Room serves only as a refutation of functionalism in the purely *digital* sense, not in totality. The second is that Searle seems to lack (though I may have missed it in the text; please correct me if I did) any formalized definition of what, exactly, constitutes this "understanding" that he says the room lacks, or what sets such "understanding" apart from a sufficiently complex system of inputs and outputs. He seems to work from a fairly generalized sense of what *feels* like understanding, but fails to specify or quantify it (again, let me know if I missed something). Again, awesome work!
@ben_clifford
@ben_clifford Жыл бұрын
To address your first point: I think it's actually sufficient for Searle to contrive a highly-constrained, obtuse situation and say that he's refuted functionalism, and here's why... The core argument of functionalism is that only function matters, and not the system or substrate. So, to disprove such a theory, we only need to show a single counter-example. There's a more formal way to show this with predicate logic, but I think you get the idea.
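Spelled out in predicate logic (a sketch of the form of the argument only, with F(x) for "x is functionally equivalent to a Chinese speaker" and U(x) for "x understands Chinese"):

\text{Functionalism:}\quad \forall x\,\bigl(F(x) \rightarrow U(x)\bigr)

\text{Its negation:}\quad \exists x\,\bigl(F(x) \land \lnot U(x)\bigr)

\text{So exhibiting one room } r \text{ with } F(r) \land \lnot U(r) \text{ refutes the universal claim.}

Whether the room actually satisfies F(r) while lacking U(r) is, of course, exactly what the systems reply disputes.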
@lokouba
@lokouba Жыл бұрын
I argue "Strong AI" won't have a necessity to truly "think" if its instructions are elaborate enough to give the ILLUSION that it thinks. The actual subject of the experiment is not the person in the room, it is the person OUTSIDE the room. And the idea is that if the person inside the room is trained to find these characters quickly enough, they can respond as quickly as if they understood the message written on the paper, convincing the person outside the room that they actually understand Chinese. The idea is that you can put a person inside the room or an AI bot inside the room and it would make no difference from the point of view of the person outside: if you tell them there is a Chinese person in there and tell them to write messages to them, they will likely believe it's a human Chinese speaker in both cases. The conclusion I draw from this is that if you give "Strong AI" enough tools, enough instructions, and most importantly a "Chinese room" to cover its true nature, it can pretend to be an actual being that "understands semantics", because human beings are only able to communicate through syntax.
@udarntors
@udarntors Жыл бұрын
This is simple to refute. We are sharing meaning, not syntax; syntax may be lacking in a conversation or be minimal, but without shared semantics/meaning there is no communication. Example: you can understand a small child or a foreigner who does not use proper grammar.

Here is some syntax:

find . -type f -empty -prune -o -type f -printf "%s\t" -exec file --brief --mime-type '{}' \; | awk 'BEGIN {printf("%12s\t%12s\n","bytes","type")} {type=$2; a[type]+=$1} END {for (i in a) printf("%12u\t%12s\n", a[i], i)|"sort -nr"}'

Here is some meaning:

Flower in the crannied wall,
I pluck you out of the crannies,
I hold you here, root and all, in my hand,
Little flower - but if I could understand
What you are, root and all, and all in all,
I should know what God and man is.

Alfred Tennyson in 1863
@lokouba
@lokouba Жыл бұрын
@@udarntors It seems you misunderstand the difference between syntax and semantics. You say you are presenting one example of syntax and one example of meaning. But you are in fact presenting syntax in both cases, because a semantics isn't a "message", it's a "concept". The English language is a syntax; C+ is a syntax. And of course shared meaning is part of any conversation, but my point exactly is that these AIs are programmed by people who understand the semantics of the words they are inserting into their repertoire of syntax. Because the relationship between syntax and semantics can sometimes be fuzzy, syntax itself can be utilized for deception too; that is the basis for the concept of "doublespeak". Language is only a form of expression, but humans truly lack a reliable way to filter which syntax is backed up with "truth" or (in the case of AIs) "thought".
@udarntors
@udarntors Жыл бұрын
@@lokouba I wasn't really clear in my little exposition there. So, I think that "syntax" and "semantics" are, in fact, as you say, concepts that pertain to language and linguistics. One covers the structure of language, the rules that govern the placement of words in a phrase, and we call this one syntax. The other is all about meaning and the relations between words in linguistics, and we call it semantics. I see it as structure and content. Form and substance. So: "The crocodile flew between the slices." Correct syntax here. Absolutely meaningless. I am in total agreement with all you have said about the fact that you can fool humans with sufficient processing power and fiddling with the configuration to accommodate the social norm. My reply was about this statement only: "human beings are only able to communicate through Syntax." Syntax helps to communicate *correctly* as to the social conventions of the time you are in... So, my examples were, in fact, a meaningless but beautifully structured line of bash and a meaningful poem. One is a command that will be interpreted and transformed into lights on a screen as programmed, and the other is a tale of epistemology, causality, and determinism.
@lokouba
@lokouba Жыл бұрын
@@udarntors Aha, I see. Maybe I should have worded that better. I meant to say that communication between beings is only possible through the use of some sort of syntax, at least by my conventional understanding of what constitutes "communication".
@irrelevant_noob
@irrelevant_noob Жыл бұрын
@@udarntors and yet, Alejandro is right that in any communication only the syntax is "given" (or "shared"). The fact that one party attributes some specific meaning to the terms in the message has no effect on how the digital actor (a Turing machine, the person in the room, an AI bot, etc) will process the message. Whether or not that actor *_will_* in fact extract some (subjective) meaning(s) from the message is unknowable. But in any case, the meaning itself is not intrinsic in the message, it is only "in the eye of the beholder"/interpreted by whoever analyzes the message. @AlejandroRodriguezP that last part of the OP seems to me to be a kind of "Turing test" for semantics: is the digital actor in the room "good enough" to convince the outside person(s) that they understand Chinese? :-)
@rustworker
@rustworker Жыл бұрын
Feedback loops are the magic powder that makes consciousness and emotion and all the rest.
@davidn4125
@davidn4125 Жыл бұрын
I suppose the same could be said of the human mind as a computer program. If one were to map all the neural connections, then you would be able to know the output of a person's brain given the input signal. It's just that we don't have a way of mapping the connections fully, but that doesn't mean they aren't there. Also, the mind does change over time, but so can a computer program, since AIs are now able to rewrite their own code.
@ronald3836
@ronald3836 10 ай бұрын
Agreed. And even if the human mind is somehow "more" than a computer program, Searle's argument does not show this in any convincing way. Ultimately he has nothing better than "syntax can never be semantics" and "humans do semantics". But it is his lack of imagination that tells him that syntax cannot mimic semantics sufficiently closely that we cannot tell the difference. (And interestingly the premise of his experiment is that a syntactic rule book CAN mimic semantics.)
@gothboi4ever
@gothboi4ever 2 жыл бұрын
The ultimate question is how we can know something is true.
@brad1368
@brad1368 Жыл бұрын
You can't
@hb-robo
@hb-robo 11 ай бұрын
We construct premises that are agreed upon and then build the logical conclusions, generally. But the “agreeing upon” bit is a bit wishy washy
@Conserpov
@Conserpov Жыл бұрын
If the people outside are determined to teach the person inside to understand Chinese, I think they can do it, to an extent. It may require at least two distinct inputs though. This problem comes up IRL with children who are blind and deaf from birth.
@teddydunn3513
@teddydunn3513 Жыл бұрын
Exactly. The Chinese room is set up to treat visual sensory inputs as somehow special and more "real" than other inputs.
@pumkin610
@pumkin610 Жыл бұрын
Blind from birth, that reminds me how we can't really describe colors in a way that would let an always-has-been-blind person know what they really look like, aside from black, if you consider that to be a color, but that's only because I assumed that they'd be seeing black all the time, since we see black when we're in the dark, but they aren't seeing anything, like how hands and feet don't detect light itself. Red is hot, it's intense, some roses are red; blue is calm, the sky is blue; green is grass; yellow is the brightest color. Colors are their names and the specific visual sense that they are, I suppose. Maybe we aren't seeing colors for what they are either; to me certain colors are certain emotions, certain vibes, and certain things.
@pumkin610
@pumkin610 Жыл бұрын
There's gotta be a touch-based language, right
@theconiferoust9598
@theconiferoust9598 Жыл бұрын
you can give them a rulebook to input->output and learn, but it is their conscious experience as a human that will allow them to find meaning, not the rulebook.
@ronald3836
@ronald3836 10 ай бұрын
@@theconiferoust9598 the rules in the rule book/weights of the neural network/connections between neurons get modified as you learn.
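A concrete instance of "the rules get modified as you learn" is the classic perceptron update, sketched below in Python with made-up data: the inference rule is fixed, and learning consists entirely of rewriting the weights, i.e. the contents of the rule book.

# Perceptron: fixed inference rule, mutable "rule book" (the weights).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # learn AND
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                       # training passes
    for (x1, x2), target in data:
        out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - out
        w[0] += lr * err * x1             # the "rules" being rewritten
        w[1] += lr * err * x2
        b    += lr * err

print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in data])
# -> [0, 0, 0, 1]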
@jorgemachado5317
@jorgemachado5317 3 жыл бұрын
What Searle has apparently discovered is that a computer alone would be a zombie. But a human alone would be a zombie too. Semantics is not an intrinsic part of the world. Semantics is what emerges from sociability. If a strong AI spent enough time with humans, I think it would stop being a zombie at some point.
@annabizaro-doo-dah
@annabizaro-doo-dah 2 жыл бұрын
What about when humans perform myriad behaviours they have no understanding of? I was thinking in particular of syntax. I learnt no formal grammar at school; literally no syntax. Yet I understand how to write formal English pretty well, I think. I perform the function of writing formal English without understanding almost any of the rules on a conscious level, except perhaps how to use basic punctuation.
@jorgemachado5317
@jorgemachado5317 2 жыл бұрын
@@annabizaro-doo-dah Unless you believe there is something like a material ground for syntax (which I don't think is true), I believe that this learning is just a mimetic process. That explains why things change historically. People are learning new ways to perform and changing those processes through the output of those same processes. EDIT: By material I mean a physical object. Of course syntax is material in the sense that it exists as an abstract concept.
@recompile
@recompile Жыл бұрын
You've completely misunderstood Searle. Go do the suggested reading and try again.
@jorgemachado5317
@jorgemachado5317 Жыл бұрын
@@recompile You wrong! Hur dur Go read!
@emerkaes9091
@emerkaes9091 11 ай бұрын
Here's my prison room experiment: There are two prison cells. From one cell you can't see, sense, or smell anything that's going on in the other cell, but you can hear sounds from the other cell. In one cell there's a Vietnamese prisoner who knows only Vietnamese, and in the other there's a Norwegian who can only understand Norwegian. They never leave their cells and the guards don't communicate with them. The Vietnamese guy can speak, but the Norwegian cannot understand him, and vice versa. But with enough time spent, they can correctly repeat sentences the other guy said; they even memorize some poems and songs. After some time they can even say something and, based on the answer, say something else, but they have no idea what they are saying. They may be saying things like "-How are you doing? -Not good. -I'm sorry to hear that", because they learned it's quite a common word-chain, but they don't understand what they are saying at all. But if their cell wall were made of glass, they could much more easily gain understanding of those words. If the Vietnamese guy pointed at his soup and said the Vietnamese word for it, then the Norwegian, having experienced soup in his life, seeing the soup and associating the heard word with it, would gain an understanding of this token. My point is: the main thing about the Chinese room is not about machines. It's about the importance of multisensorism in understanding. The same thing applies to communication with alien species: how will you tell aliens far, far away what soup means if all they can recognize is the electromagnetic wave you sent them? You may send them a chain of tokens, but they cannot associate anything with those tokens (unless they were able to decode them into vision, sound, smell, or touch of our understanding and then convert them to their own system of experiences, which may be much different), but still, that doesn't prove those aliens are not sentient. If that were the case, we should consider anything we cannot contact using our set of senses as insentient.
@impyre2513
@impyre2513 Жыл бұрын
Personally, I feel like the systems response idea makes a lot more sense... But it only works if the system is able to self-modify. If this system as a whole is meant to represent someone that understands Chinese, then it must first demonstrate the ability to form responsive queries that actually make sense, and then potentially make adjustments to its programming depending on the responses received. But that hits the crux of the problem, because it would have to be a pretty fancy rulebook to have that functionality built-in.
@JohnDlugosz
@JohnDlugosz Жыл бұрын
ChatGPT, especially in GPT-4, fluently translates to Chinese and other languages. You can offer corrections or style guidance and it corrects itself and remembers that moving forward...but this does not alter the model! The nature of the Transformer holds the recent memory of conversation as the input to the next pass. GPT-4 is a literal Chinese Room, running on the Azure data center. Translating to Chinese is not something it practiced with feedback during the learning phase. All it did was read text in different languages and learn the patterns within those languages. Meanwhile, it gained the skills to converse convincingly, translate languages fluently, do algebra, solve logical problems, write code, create web pages, and much more, all from this "fancy rulebook". The code implementing the neural network is for processing tokens, input and output. When the system undergoes "deep learning", that code does not change. The learning is in the weights between the neurons (or "parameters"). So, once learning is complete, the knowledge is in this structure, not the (same) low-level code being executed.
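Schematically, the loop being described looks like this (a sketch only; model_reply is a hypothetical stand-in for a real LLM forward pass):

# The parameters are frozen; "memory" of your corrections lives only
# in the growing context that is fed back in on every turn.
def model_reply(weights, context):
    # Stand-in for a real forward pass over (frozen weights, context).
    return "noted: " + context[-40:]

weights = {"layer0": [0.1, 0.2]}   # fixed after training
context = ""
for user_turn in ["translate this to Chinese", "use a more formal register"]:
    context += " USER: " + user_turn
    reply = model_reply(weights, context)   # weights are read, never written
    context += " MODEL: " + reply           # the only state that changes
print(context)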
@glenblake3021
@glenblake3021 Жыл бұрын
Sure, but that's a problem with the analogy. It's meant to be an analogy for strong AI, and if you designed a system attempting to be a strong AI but it lacked the ability to modify its own rules, well, you've fundamentally misunderstood the problem you're trying to solve. Lack of imagination on Searle's part, imo. One of the more irritating things about the Chinese Room.
@rrshier
@rrshier Жыл бұрын
@@JohnDlugosz I disagree that ChatGPT is a literal Chinese Room, given your statement that "You can offer corrections or style guidance and it corrects". That statement alone means you are solving the problem for the processor in the room. The Chinese room thought experiment is the idea that there is no connection between the language the processor in the room knows and the language (with differing symbols) being passed through the room. Your statement provides that connection, thus NOT a literal Chinese room.
@harrygenderson6847
@harrygenderson6847 Жыл бұрын
@@rrshier No, ChatGPT does not literally understand the meaning of the statement you are passing it. It 'reads' it as a series of tokens and applies some weightings to calculate the most likely follower. The model itself is part of the rulebook, and the prompt you give it is the string of arbitrary characters being fed in. It could tell you the meaning of English or Chinese, but it doesn't internally separate English and Chinese or derive truth from the statements or anything like that. But it's an abstraction that we apply to the system, the same way we do when fellow humans create waves of pressure in the air. Also, just so you know, the 'Chinese room' is Turing-complete, and can thus be simulated by any Turing machine (such as a digital computer) and can simulate any Turing machine (such as a digital computer). So ChatGPT could be run inside a literal Chinese Room.
@RyanShier
@RyanShier Жыл бұрын
@@harrygenderson6847 - Actually, the Chinese room is not Turing-complete, as there is no way of storing state or having feedback from an output offered back in. There is a strictly defined set of rules which cannot change (that would be where a feedback path and state storage could come into play). In fact, as defined, it is the literal opposite of Turing-complete. Using the example given on Wiki, the person inside the room with the strict rule set is akin to HTML (not Turing-complete). ChatGPT, on the other hand, given that you CAN indeed give it other state that it can store and use to give differing answers, is indeed Turing-complete. If the secondary inputs are used for further training of the GPT model, then it is most certainly Turing-complete. In terms of understanding meaning, neither do we without the context of other surrounding words, placement within a sentence, etc...
@danwylie-sears1134
@danwylie-sears1134 Жыл бұрын
Great. Searle starts from the premise that babies never learn their native languages (because they don't initially know anything about the semantics of those languages, and only receive sensory information that can never make the leap from being mere data to actually meaning anything). And arrives at the conclusion that neurons are so magical that if you replace all the ions and neurotransmitters and so on with papers and baskets and so on, but left all the patterns the same, there would magically be no mind there any more -- and not only papers and baskets, but absolutely anything else that's not neurons. If it's a matter of arbitrary definition whether neurons would still count as "neurons" when all the oxygen-16 is replaced with oxygen-18, then it's exactly the same matter of arbitrary definition whether a person who drinks wrong-isotope water as a child would grow up to be a mindless "zombie" who merely acts as though they have thoughts and sensations. The argument is valid, in a certain sense: If zero equals one, then Bertrand is the pope. Once you grant an absurd premise, any conclusion is entailed. But really, he doesn't have an argument. He has snark, and snark is the appropriate response in return. I don't think there's any possible world in which the Chinese Room argument actually convinced anyone of anything. It only reveals the bizarrely incoherent beliefs that its adherents already hold before they hear it.
@recompile
@recompile Жыл бұрын
What makes you think that the human brain is equivalent to a digital computer? We have many reasons to reject such a silly claim. Searle's argument is just one.
@danwylie-sears1134
@danwylie-sears1134 Жыл бұрын
@@recompile Because "digital computer" as used doesn't actually mean digital computer. Instead it's defined as *anything* that changes state in any way that can be affected by distinguishable aspects of the outside world. The claim is made about anything that can be modeled as a Turing machine, not about the current state of the art in electronics and programming.
@TheDerwisch77
@TheDerwisch77 Жыл бұрын
I, too, am flabbergasted at how this argument is supposed to work AGAINST functionalism. Because the moment you see the "rulebook" as an encoding of some genetically encoded meaning, and consider the millions of inputs, output-feedbacks, and neurological reactions to hormone influence that the brain's output adds to this rulebook every second, it all falls apart. I think using a von Neumann architecture for the experiment helps fool the mind of the reader. Suppose the experiment stated that the person in the room would at the same time not only pass the output text as an answer, but also create a picture that is, by another rulebook, associated with the input (and, in case of an ambiguous choice, with the chosen output), which is sent to another room, where according to another rulebook a special smell is created, either pleasant or unpleasant, according to the rules for that picture (composition), and that smell is sent back into the first room, where the word-chooser may get an in(s)centive either to not answer that symbol combination again at all or to choose the probable other answer next time... writing that weighting down in its rulebook, of course, because it cannot decide on its own later without it. That can all be encoded in binary just the same, and it gives the whole experiment a totally different ring altogether...
@ChadEnglishPhD
@ChadEnglishPhD Жыл бұрын
Great explanation. Three criticisms come to my mind.

The first is essentially a false dichotomy; it asserts that "semantics" and "syntax" are mutually exclusive. Indeed, in the Chinese Room scenario, semantics is not produced within the scenario. But that does not mean semantics can never be produced from syntax. It presumes that what we call "semantics", "meaning", or "understanding" are not just built-up, complex forms of syntax.

Consider how we "learn". You input an apple to a system. By "input", I mean via senses: you see with your eyes many "images" of apples, meaning photons enter your eyes and cause a pattern of signals to your brain. You also "feel" an apple, meaning nerves in your fingers send signals to your brain. Taste, smell, and even the sound of biting one... all patterns of electrical signals. Your brain correlates each of these in their own domain: what is visually similar about all of them, what smells similar, etc., and creates a "ledger" of templates of the apple based purely on domain (sight, smell, sound, taste, touch), and records in the ledger that these are all related to the same thing.

Also on that list of inputs is language. If each time we recorded data in these domains on this item, we also heard the sounds (input signals coming from the ears) corresponding to the English word "apple", or saw images (signals from the eyes) of the shapes a-p-p-l-e, then the domains (input doors) of audible or written speech also have correlated entries in the ledger. These templates are correlations and simplified representations of apples, and correlate with other things in the ledger such as other round things, other red things, other food, fruits, etc.

Now suppose somebody "new" takes over, e.g., we forget that we've ever seen the word "apple". The symbol comes to the door: "What does an apple look like?", but we don't understand English or letters. We open the ledger and look for those symbols. The response in the ledger is on the page with all of the other domains about apples. We get symbols at the door that look similar but slightly different, "What does an apple taste like?", and then "smell like", etc. But we aren't just rule-following. We are also continually running the same correlation machine as above. We correlate the symbols at the door shaped "a-p-p-l-e" with the same page in the ledger; we also correlate questions (symbols at the door) containing the symbols "s-o-u-n-d" with the top of any given page in the ledger, and "t-a-s-t-e" always has a response at the bottom of the page. Over time, we associate (correlate) the symbol "apple" with that page in the ledger, "sound" with the top line on the page, "taste" with the bottom, "shape" with the middle.

Now we see new symbols at the door appearing, with a recurring "p-o-m-m-e". The ledger instructions say to look up the same page as "apple", and specific areas of the page, but send back "rouge" instead of "red". So now, what is the difference between this situation and "understanding", "meaning", or "semantics"? We apply those words to the ability to draw on correlated patterns. We've "learned" through correlation, and through the organized structure of the information, what the symbol "apple" means (a page in the ledger), what "sound" means (a top-of-page response), "taste" (bottom of page), etc. We learned that "pomme" is another symbol for "apple", and "rouge" is another symbol for "red". We learned these things only through the same activities as the Chinese room.

What we added was (a) memory storage, (b) correlational computation, and (c) the ability to add to the ledger. All of these things are also done by digital computers. The Chinese Room scenario simply artificially limited the capabilities of digital computers, and of the humans in the room. More complex behaviours can come from simple ones. A real person in that room could also remember symbols, recognize patterns in the symbols, and recognize organizational structures in the ledger, inputs, and outputs, and could "learn" Chinese from these patterns. Now, you might say they haven't learned the meaning because they can't associate the symbols for an apple with the real-world apple, but that is because we've artificially limited the input signals to messages at the door. They can understand an apple in terms of their input environment. The thought experiment assumes the pre-existing environment of all the other senses we humans have, but which are denied in the scenario. But in that context, humans also can't "understand" anything beyond our input environment. We don't have context for relativistic effects, X-rays, infrared, frequencies we can't hear, etc. Other beings with different input capabilities might "understand" differently from us.
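A minimal data-structure sketch of the ledger described above (every entry is invented for illustration): questions route to a page and a section, and learning that "pomme" names the same thing is just adding a second key onto the same page, which is all the room's operator ever needed.

# The ledger: one page per thing, one section per input domain.
ledger = {"apple": {"taste": "sweet-tart", "color": "red", "sound": "crunch"}}
lexicon = {"apple": "apple", "pomme": "apple"}   # two symbols, one page

def answer(symbol, section):
    page = ledger[lexicon[symbol]]     # route symbol -> page
    return page[section]               # route question type -> section

print(answer("apple", "taste"))   # sweet-tart
print(answer("pomme", "color"))   # red: "pomme" learned as an alias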
@Flavia1989
@Flavia1989 5 ай бұрын
This feels like saying that a mathematician who has never studied anything but math does not learn anything about fields like economics or physics in the process of studying math. And intuitively I would agree... but if I think about it, I think they do? It's not functional in the sense that they can have expert opinions about economics or write books about it or do any of the things we socially recognize as knowing stuff about economics. But it would help them an enormous amount when talking to knowledgeable people in those fields, as those people could just show them equations to explain difficult concepts. They would be able to learn very quickly, because they can just skim all the hardest parts... since they have already learned them. Assuming that they retain their skills (and why wouldn't they), the Chinese room person is in the same situation. They would have to learn the meanings of the symbols to actually use it... but wow, it would be so ridiculously helpful to be able to draw on a complete knowledge of Chinese syntax; it would allow them to ask very intelligent questions, infer meanings they have not learned yet, etc. So I would say to Searle: you are wrong. This person does know a lot of Chinese; you have just constructed an edge case with the specific intent of manipulating everybody's intuitions to say 'no'.
@p.bckman2997
@p.bckman2997 Жыл бұрын
There's clearly intelligence (semantics) in the Chinese Room, it's just not the person in there who provides it. The actual intelligence comes from the rulebook, which requires an actual intelligence to write (and possibly a superhuman one at that).
@DocBree13
@DocBree13 Жыл бұрын
I’d argue that a book explaining semantics is not intelligent and does not understand semantics. Something which requires intelligence to write is not then made intelligent.
@p.bckman2997
@p.bckman2997 Жыл бұрын
@@DocBree13 , well, it's a matter of how you frame it, I guess, which is often the case in philosophy. The book is just paper and ink and clearly not sentient, like you say. The instructions are meaningful, though; I would say that the intelligence of the writer is stored on the pages. The intelligence that answers the input questions is the book's writer; he's just using other means to do so than sitting in the box and answering them himself.
@perfectionbox
@perfectionbox Жыл бұрын
An additional proof would be that, if the person inside the room (or the system as a whole) understood Chinese, then it should be able to deal with unforeseen expressions, new slang, word evolution, etc. But it can't unless its rulebook is updated. It's utterly at the mercy of its program. And the only way to update the program is via an external agent that understands Chinese. The question then becomes: is there a program sophisticated enough to do what the external agent can do?
@danwylie-sears1134
@danwylie-sears1134 Жыл бұрын
Any halfway-adequate rule book would include, at least implicitly, instructions for rearranging the baskets and the labeled papers in a way that perfectly mimics learning of new idioms.
@recompile
@recompile Жыл бұрын
How many times are you going to post this? I replied to this exact post from you earlier.
@sschroeder8210
@sschroeder8210 Жыл бұрын
I don't think your concept is valid: understanding something vs. understanding how to learn something are two different concepts. You might know English, right? But can you understand the semantics behind 'ZZuetixlo'? I presume not, because it's a new word that I just created. So, do you not understand English anymore? Of course not. You simply haven't been given the chance to learn the new word that I've chosen to create. You still understand English and understand how you can learn new words. You simply haven't been given the opportunity (from me) to ask: "What are the semantics behind your new word?" If a new word acts as a fundamental axiom and isn't derived from the concatenation of other words, then you shouldn't be capable of understanding the semantics of that new word... Thus, the 'System' shouldn't have to demonstrate something that we don't inherently have the capacity to do when we express a sense of understanding...
@captaingerbil1234
@captaingerbil1234 3 жыл бұрын
I take the systems response as an approach to refuting Searle. His argument almost seems to imply that we create the semantics of the word, when really all we do is assign it to objects and states already existing in the world, and then we assign symbols to that semantic meaning. I believe it is possible to create a machine, operating through computational methods, that is capable of understanding. Great lecture, by the way.
@cf6755
@cf6755 3 жыл бұрын
the person in the room is not the one who knows Chinese; the rule book is. if you killed the person and replaced him with somebody else, it would be the same thing, because of the rule book. it is not the person who's writing the Chinese; it is the rule book
@recompile
@recompile Жыл бұрын
If you think it's possible, prove it. Show how meaning can be derived from pure syntax. Even a very simple example would refute Searle. So far, no one has been able to do it, despite the outrage his argument seems to generate in people.
@Matt-zp1jn
@Matt-zp1jn Жыл бұрын
The “Systems approach” cannot CREATE or assign the semantics of the word or symbol. Searle is correct in that the syntax of the Turing computer etc. is basically hardware-created: it can organize symbols only according to the programming of the software, the rules the computer must follow. The semantics, or meaning and understanding, of the Chinese symbols or binary coding must come from the computer programmer or a conscious intelligence OUTSIDE of the hardware, or thru digital wifi etc. Searle has successfully refuted Strong A.I., and Functionalism is a wrong theory. Of course big Tech, social media giants, and digital A.I. scientists want to refute Searle's theory, and will use complex algorithms, a human-like robot interface, and digital wifi/Bluetooth information transfers from an intelligent, self-conscious source (a human programmer, lol), who will portray the A.I. robot as capable of understanding semantic meanings instead of just grammar or syntax or whatever has been programmed into the software by an outside intelligent creative human being, i.e. the programmer. This is why I think they are going to strive towards a more Android approach where humans are “upgraded” with A.I. digital software thru a neural analogue-digital interface that allows the human being to take the syntax info and assign appropriate understanding and meaning to the software download in his neural brain link etc. It is a very questionable path and risk for humanity, imo.
@mixingitupwithmina93
@mixingitupwithmina93 Жыл бұрын
Well done! Thank you for giving your gift of teaching to the world. You have just identified the lack in our world right now. I would suggest that the more powerful a group becomes the more syntaxicized it grows as it loses the ability to understand the semantics of the syntax it / they continue to regurgitate. Everyone gets on board the Turing train … excited to be a part of the syntax revolution. Lol. I am not picking on any one or any group - just a general semantic observation 🙂
@tupublicoful
@tupublicoful Жыл бұрын
You could run this argument for a single neuron in the brain. Certainly that neuron doesn't have a semantic understanding of the implications of the inputs it receives from nearby neurons; it has just been trained throughout its existence to produce certain outputs upon certain combinations of inputs. Yet the brain - the whole system - is able to produce a "behavior", such as recognizing a shape, through the specific firing pattern of billions of neurons in the visual cortex and then have that trigger a different set of billions of neurons in the prefrontal cortex to decode that shape as a specific object or letter. None of the neurons have any idea of what their specific input means, yet the system IS indeed able to derive a meaning from the collective input and collective neuron behavior.
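A toy version of this point in Python: each "neuron" below is nothing but a weighted sum and a threshold, blind to what its inputs mean, yet the two-layer system computes XOR, a function no single unit represents. The weights are hand-picked for illustration.

def neuron(weights, bias, inputs):
    # A unit knows nothing about "meaning": it sums and thresholds.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

def xor(x1, x2):
    h1 = neuron([1, 1], -0.5, [x1, x2])    # fires if x1 OR x2
    h2 = neuron([1, 1], -1.5, [x1, x2])    # fires if x1 AND x2
    return neuron([1, -2], -0.5, [h1, h2]) # OR and not AND

print([xor(a, b) for a, b in [(0,0), (0,1), (1,0), (1,1)]])  # [0, 1, 1, 0]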
@glenblake3021
@glenblake3021 Жыл бұрын
Ahh, the Chinese room. I try as a rule to assume that, when an acknowledged world-renowned expert proposes an idea that seems trivially, obviously ridiculous to me, I simply don't understand that idea. The Chinese room is unique in that the more I learn about it, the more ridiculous it appears. Just the fact that he had to _answer_ the systems response, rather than working with the system from the start (which is the very, very obvious analogy for an AI in the thought experiment), appears to suggest that he is frankly entirely unequipped for this. His answer to the systems response is particularly galling - you yourself, John, proposed that the room understands and uses language well enough that no one will ever notice they're not getting real human responses. It's a part of the framing of your thought experiment. So *prove* your Chinese room doesn't understand! What is the difference between a perfect performance of understanding and understanding? Just your lack of imagination! You simply find it instinctually unsatisfying. He's placed ridiculous restrictions and assumptions around the room and then declared that his lack of imagination disproves strong AI. As a layman, I find it pretty sad, to be honest.
@FestivalTemple
@FestivalTemple Жыл бұрын
Saying a "room understands" is a lot like saying a book understands. It's no more intelligent than a book, nor even any rock. The difference between actually comprehending something and merely appearing to is like the difference between knowing a math shortcut vs. actually grasping a concept (it's why he brought up a mindless Turing machine). E.g. without knowing anything about what multiplication is, a kid can know the trick to multiplying 9s. Ask him 8x9 and he'll correctly tell you "the answer is 72" without knowing what that means ("how many of something would you have if you had nine sets of eight or eight sets of nine?"). Simply by asking the question in a different way we reveal some special characteristic here; the kid can parrot answers, but there is definitely something we possess that he still lacks.
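The nines trick is a nice example precisely because it can be written down as pure symbol manipulation. Here is a two-line Chinese Room, so to speak (assuming the standard classroom shortcut for 9 times n, with n from 2 to 10):

def nines_trick(n):
    # The classroom shortcut for 9 * n (2 <= n <= 10): tens digit is
    # n-1, ones digit is 10-n. No model of multiplication anywhere,
    # just symbol rules.
    return 10 * (n - 1) + (10 - n)

print(nines_trick(8))                      # 72, same answer the kid gives
print(all(nines_trick(n) == 9 * n for n in range(2, 11)))  # True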
@robinkelly1770
@robinkelly1770 Жыл бұрын
@@FestivalTemple not if they've been taught with counting blocks first. Then they understand. In other words, it is about the training (or programming) and not about the mind, student or computer. Further, when dealing with thought, we can learn. How else does a child learn language?
@Мопс_001
@Мопс_001 Жыл бұрын
However, there are still people who consider this the greatest attack on functionalism. Just asking some questions about the experiment, reducing it to something trivial, and blaming the experimenter for his lack of 'imagination' is the most ludicrous and nonsensical thing I can *imagine*. It seems to me that you find his proof instinctively unsatisfying when you call everything in the experiment 'obvious', 'ridiculous', 'lacking imagination'.
@FestivalTemple
@FestivalTemple Жыл бұрын
@@robinkelly1770 The point is: parroting an answer correctly is different than actually comprehending what you're saying. That's why we design tests to see what people actually understand and not just what they can memorize.
@theconiferoust9598
@theconiferoust9598 Жыл бұрын
it's a thought experiment with rules that are meant to get people to think about the problem of consciousness. the rules can't be considered ridiculous because it is not designed to be like reality; it is meant to illustrate something. what it illustrates is that human consciousness and meaning are the very result of our brain processes escaping from «input -> output» thinking. think of it this way: in a world that we increasingly like to describe in terms of physics and «1's and 0's,» consciousness simply cannot be described this way, because there is not an end-all-be-all output to every input (at least not in my experience of reality--lol).
@Timmy-en7qv
@Timmy-en7qv Жыл бұрын
I scored a 75 IQ, which is similar to a low-functioning 80 or a high-functioning 70, so I feel qualified to text here with you all, my peers. I have strong feelings about this topic, but I kept falling asleep during the video and I am hungry. So I am going to fix my dinner by pouring the fruit loops on top of the milk, not milk over cereal, and leave you mystified as to what pearls of wisdom I could have shared here if I wasn't sleep-deprived and hungry.
@boxerbiker4672
@boxerbiker4672 Жыл бұрын
Bro, I literally just realized that you're writing *backwards* on the glass...my god. The genius knows no bounds.
@irrelevant_noob
@irrelevant_noob Жыл бұрын
you know... mirroring exists.
@frontiervirtcharter
@frontiervirtcharter Жыл бұрын
in one of the other threads, someone pointed out that the wedding ring on his right finger is strong evidence for a mirror or flipping the video with software
@prismaticsignal5607
@prismaticsignal5607 3 жыл бұрын
I bet you’re your students' favorite teacher.. XD Awesome lectures!!
@AliceBelongs
@AliceBelongs 3 жыл бұрын
This really helped me with my essay, thanks for uploading! :)
@konstantinlozev2272
@konstantinlozev2272 Жыл бұрын
I would have liked to see a discussion on the design of the rulebook and how that design embodies semantics
@jasemo388
@jasemo388 Жыл бұрын
Yes. It's almost like Searle took the interpretation of the incoming symbols - the Semantics - and made it separate in the rule-book just to remove agency from the person in the room and structure his analogy to prove his point.
@konstantinlozev2272
@konstantinlozev2272 Жыл бұрын
@@jasemo388 yeah, modern-day neural networks actually build and amend the rulebook as they get trained. And as the rulebook is construed to embody the semantics in this thought experiment, one cannot argue that neural networks represent the Chinese Room example.
@cybersandoval
@cybersandoval Жыл бұрын
Right, is the system writing the rulebook?
@jeff__w
@jeff__w Жыл бұрын
@@konstantinlozev2272 “…as the rulebook is construed to embody the semantics in this though[t] experiment…” I think in Searle’s example, the rulebook is meant to be construed as embodying the _syntax._ It’s an instruction manual that tells the person _how_ to put the various Chinese characters together, i.e., the form, but says nothing about the meaning. In that sense, to the extent that these latest chatbots can predict words, one at a time, to construct flawless sentences in English, they might represent Chinese rooms “perfected.”
@ErikHaugen
@ErikHaugen Жыл бұрын
@@konstantinlozev2272 Searle's room has paper, pencils, erasers, etc.; it can store data from past questions. This doesn't really matter much to Searle's point, I think, although it would be impossible to pass the Turing Test without memory of some kind.
@thenonsequitur
@thenonsequitur Жыл бұрын
My faith in humanity is restored seeing the vast majority of commenters soundly rejecting Searle's argument. He's clearly wrong, and I'm glad most people recognize that.
@MirageAfterDark
@MirageAfterDark Ай бұрын
I think the more serious problem with the Chinese room analogy is the idea that you can create perfectly indistinguishable functionality and not include semantics. This idea implies that either semantics is indistinguishable (which would pose serious issues with the idea of anyone, sentient or otherwise, understanding anything, since semantics ceases to be a measurable trait) or that semantics cannot be stripped from syntax, because syntax can be rendered imperfect by the absence of semantics. Say you think that semantics can be distinguished. Then you must accept that Chinese room responses do not perfectly imitate proficient Chinese speakers. So, the functionality of the Chinese room must not be perfect. That's a contradiction when assuming perfect functionality. Say you think semantics cannot be distinguished. Then you must accept that a semantics distinction is not an empirically verifiable one. Therefore, you cannot know if anyone is using semantics. I think that's the issue: syntax and semantics (form and meaning) might be conceptually distinct, but perfect imitation of a language (or of anything) cannot abstract either the form or the semantics, or it would be possible to spot or measure the imperfection, and then your imitation would no longer be perfect.
@dylanberger3924
@dylanberger3924 Жыл бұрын
I love this thought experiment because of the two assumptions it grants computer science that we'll likely never even see emerge from the field. 1) A perfect set of instructions: CS needs to produce a perfect state table for the Turing machine, as the book for you in the room is assumed to be. 2) You are a human with a brain trying to pick up on meaning, specifically with memory and cognitive ability. You are aware of the fact that these symbols could even represent an idea, and can pick up on traits you naturally know belong to language, e.g. pattern recognition. MAYBE, just MAYBE, you could learn Chinese. A Turing machine is a set of transistors firing; it isn't looking for any of that - after all, how would it "think" to? I'll probably elaborate and finish this when my phone isn't about to die and I'm not this tired. But something to think about
@xirenzhang9126
@xirenzhang9126 9 ай бұрын
spoiler alert: he never elaborated and finished his comment
@foogod4237
@foogod4237 Жыл бұрын
The idea that Alan Turing committed suicide has come to be rather suspect in recent years. That conclusion was reached very hastily at the time and never really investigated, and it turns out on later analysis that it is actually quite possible that he simply died from accidental poisoning from chemicals he was using in various experiments at the time, and had no particular desire to kill himself at all. Even if his death actually _was_ suicide, we could still only really guess at _why_ he would have done it, and would almost certainly never know for sure, because he did not leave any sort of note, or really confide in anyone prior to that point anything that might be construed as a definitive explanation (which is also part of the reason why the suicide idea is a bit suspect). But "ignominiously died due to a science experiment" doesn't make for as good a story, so a lot of people like to overlook it and instead go along with the fairy-tale-like depiction of a great man so tortured by other people's intolerance that he was inevitably driven to take his own life. That having been said, Alan Turing was a truly great man, and the way he was treated by the government and society as a whole was truly horrible, particularly for a man who contributed so much to the society which ultimately shunned and persecuted him merely for being who he was. How or why he actually died aside, that is still a very sad tale worth telling and reflecting on, IMHO.
@xMrJanuaryx
@xMrJanuaryx Жыл бұрын
I asked GPT-4 what it thinks about Searle's argument: 'As an AI language model, I don't have personal beliefs or feelings, but I can provide an analysis of the Chinese Room Experiment and its implications. Searle's argument raises important questions about the nature of understanding and consciousness in artificial intelligence. It highlights the distinction between syntactic processing (manipulating symbols according to rules) and semantic understanding (grasping the meaning behind those symbols). The Chinese Room Experiment emphasizes that simply processing symbols according to rules does not necessarily imply understanding. Critics of Searle's argument point out that the thought experiment assumes a specific type of AI architecture, one based on symbol manipulation. They argue that alternative AI architectures, such as neural networks, might be better suited to exhibit true understanding or consciousness. Others suggest that the whole system, including the person in the room and the instructions, could be considered as possessing understanding, rather than just the person alone. As an AI language model based on the GPT-4 architecture, I am designed to process and generate human-like text based on the patterns in the data I have been trained on. While I can generate contextually relevant and coherent responses, it is important to note that I do not possess personal experiences, emotions, or consciousness like a human. In that sense, I can be seen as an example of the kind of syntactic processing Searle refers to in the Chinese Room Experiment.'
@zach358
@zach358 Жыл бұрын
The conclusion seems more like a critique of what we do with digital computers and not of the limits of their potential. The Chinese Room experiment is a static room with one never-changing set of instructions. If there were several other rooms that continuously changed and adjusted the instructions of the Chinese Room over time, to give it purposes or knowledge beyond the basic rules originally set forth, then that would be a closer representation of the human mind (being taught by other human minds or other inputs)... the only other thing you'd need to add is the sense of freedom of choice: the ability to think independently of tasks given, despite being given a task.
@saritsotangkur2438
@saritsotangkur2438 2 жыл бұрын
The rules that govern how symbols are manipulated would be the distilled form of “understanding” in strong AI. The failure of an observer to see how rules and the locations and values of symbols on the tape of a Turing machine correlate to mental states cannot be used to rule out the existence of mental states. That would be like a doctor with a super high resolution MRI concluding you don’t have mental states because they couldn’t figure out what you’re thinking about despite knowing what parts of your brain light up given any stimulus.
@nixedgaming
@nixedgaming Жыл бұрын
Watching this now in the age of ChatGPT is incredibly fascinating. Particularly since the new AI LLMs seem to *already* pass Theory of Mind. Philosophy feels like it’s about to fundamentally change; or at least add another thick layer
@D_tori12
@D_tori12 Жыл бұрын
Okay, I propose another experiment. On the paper you see a Chinese symbol, and the rulebook says: whenever you see this symbol, get up (which is the output). So you do. Over time you (the AI) would learn to get up whenever you see the symbol. Now imagine you are a child and you hear the syntax "get up" whenever you fall, together with a demonstration of getting up. After repeated attempts, false trials, good reinforcement, approval by parents, all that, whenever you hear or see the syntax "get up", you get up. However, you might choose not to, depending on other "ifs", as in: I don't like this guy so I will not comply, or my back hurts so I will not, or it's Sunday so I won't, etc. So there is only syntax, and semantics is a combination of many syntaxes (body language, tone, setting, imagery, etc.). It is scary to imagine yourself being just that, and I think that's what the Chinese room experiment really proves. People would much rather cling to a false or incomplete understanding than face the fact that they are not special (godly, having a spirit, etc.).
@fr57ujf
@fr57ujf Жыл бұрын
The fallacy of the Chinese Room is that it conflates human awareness with systems awareness. In the experiment, the human is relegated to a mechanical role. But the Chinese Room itself is a system that does exhibit understanding, and that is where the comparison should be made. Using the human in the example confuses the issue. The role of the human could be replaced by a simple mechanism, and so the human's awareness is irrelevant. This is sleight of hand on Searle's part.

The resistance to acknowledging that human awareness and behavior are the result of physical processes is another example of our anthropocentrism. We resisted the heliocentric universe and biological evolution, and now we resist the idea that our mental life is the product of physical processes. The options are to either invoke a supernatural explanation or succumb to mysterianism.

Self-awareness is not difficult to understand if we realize that all experience is a simulation by a neural apparatus of a living organism. It is not self-awareness, but it is awareness of one's environment. This simulation is based on external sensory data which is used to generate behavior that enables the organism to maintain an anti-entropic internal environment, i.e., metabolism. This is true for everything from bacteria to humans. It is easier for us to accept a mechanistic explanation for the environmental awareness of bacteria than it is to accept it for humans because of the bacteria's relative organizational and behavioral simplicity. But as multicellular organisms, humans are no less the product of cellular activity. There is no difference in kind, only a difference in complexity.

The stumbling block in accepting that our lives are not functionally different from those of any other animal is our self-awareness. Even we humans are only self-aware of a small portion of our experience, but we are mesmerized by it and can't imagine how it could be the product of physical processes. Evolution doesn't produce energetically wasteful functions. The energy demand of consciousness is high, therefore it must have an important function. It does. An animal that lives in a complex social context must be able to generate a self-simulation and simulations of others so that it can anticipate the social outcomes of the choices it makes. This is an elaboration of the simulation that all life depends on to exist. The sense of being somebody is part of the simulation our brains create. We have no intrinsic existence.

Those who say that physical things can't have an understanding of what they do are correct. But, once again, the focus is on the wrong thing. No physical thing can understand or be self-aware, but a software program can, and that is what we are. We are programs that run on our brains. We are a character in a story that our brains write for us.

How do I know if another person has an internal life, has understanding and knows what things mean? I can't be inside that person, so I make an inference based on how the person behaves. When the behavior is sufficiently congruent with the expectations of my own internal self-model, I credit that person with having a similar internal life. So, how do I know if an AI has an internal life? I would say it would be in the same way. Once the behavior exhibits the kind of self-awareness that my own mental self-model produces, I attribute awareness to it. If I decide not to credit such an AI with self-awareness only because it isn't human, it is because I am bigoted.
Semantics emerges from sufficiently complex syntax.
@tomsweder7459
@tomsweder7459 1 year ago
Now define "understanding" and "meaning", and explain how you can test for them. I'm very interested in the assumed metaphysics behind this point of view.
@2099EK
@2099EK 1 year ago
Computers have improved quite a bit since Searle's time, and I'm not sure he knew how complex they would become. We can implement neurons, and neurons are nothing more than systems that take in signals and either fire or stay at rest. It is from emergence on top of such units that the mind has formed. Perhaps at some point a mind will emerge from a sufficiently large neural network. BTW, a neat book that explores this Chinese room concept with aliens is called Blindsight.
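A minimal sketch of the kind of unit this comment alludes to: an artificial neuron that sums weighted input signals and either fires or stays quiet. The inputs, weights, and threshold are arbitrary toy values.

```python
# Toy artificial neuron: a weighted sum and a threshold, nothing more.
# Inputs, weights, and threshold here are arbitrary illustrative values.

def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0  # fire, or stay at rest

# Whether this unit fires is pure arithmetic, yet large networks of
# such units are what the "emergence" claim says can add up to a mind.
print(neuron([1, 0, 1], [0.5, 0.9, 0.4], threshold=0.8))  # -> 1
```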
@derrekspilsbury4292
@derrekspilsbury4292 11 months ago
Nothing more? Nothing at all? Ever?
@dougmorrow746
@dougmorrow746 1 year ago
So, okay, try this... Same setup, but you drill a small hole in the room, so that the resident can look out. The resident now sends out random Chinese characters and watches through the hole for the reactions of those outside. Most often, the bits of "code" are met with little more than a smile. But other times, they are met with positive reinforcement. The resident of the Chinese Room watches as the bits of code sent out are either reinforced or not, and eventually begins to "understand" that certain bits relate to things in the outside world. As long as the overall "Chinese Room" has a chance to learn by reinforcement, it can be said to understand the bits of code flowing inside and going out. So, all you need for the system to truly "understand" is for it to be able to interact with the outside world and build up a series of cognitive relationships between bits of code (words) and outside "reactions." Why wouldn't a digital computer be able to do this?
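As a rough sketch of this peephole idea, assuming an invented reward signal and a toy symbol set: the room emits symbols, observes which ones pay off, and strengthens those associations, ending up "preferring" the reinforced symbol without any translation ever being supplied.

```python
import random

# Invented setup: three symbols, and the outside world (by assumption)
# only reinforces one of them. The room never sees a translation; it
# only sees which emissions are rewarded.

symbols = ["山", "火", "月"]
weights = {s: 1.0 for s in symbols}

def world_reaction(symbol):
    return 1.0 if symbol == "火" else 0.0  # assumed reward signal

for _ in range(1000):
    # emit a symbol in proportion to its learned weight
    s = random.choices(symbols, weights=[weights[x] for x in symbols])[0]
    weights[s] += world_reaction(s)        # reinforcement update

# the room now "prefers" the reinforced symbol, learned from feedback alone
print(max(weights, key=weights.get))       # -> 火
```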
@kenking3868
@kenking3868 1 year ago
Great lecture, thanks so much. Where does a baby get vocabulary from, and how do they learn to respond? If a mother teaches a child to respond to pain with "that's nice," won't the child use that response? Syntax and semantics: where do you draw the line?
@jaytomioka3137
@jaytomioka3137 1 year ago
Some of the basic, elemental Chinese symbols, like "mountain" 山, "fire" 火, "river" 川, "moon" 月, or "person" 人, are abstracted pictographs and can bridge the gap between syntax and semantics. The very structure of more complex Chinese characters is based on a formal combination and further abstraction of simpler characters. Over time, although it is unlikely, it could be possible to begin to distinguish patterns and decipher some semantics from the visual syntax of Chinese. Searle's analogy of the Chinese Room is a reconstruction and recontextualization of Plato's allegory of the cave: he equates Chinese symbols to the shadows cast on the cave wall. The problem with this argument is that, given enough time, repetition, and associative logic, a semantic correlation can be gleaned from the visual syntax, because the silhouette of an object is an abstraction of that object based on formal and visual syntax, i.e., shape and light. There will always be those who can extrapolate from insufficient data... can AI?
@NicoAssaf
@NicoAssaf 1 year ago
The interesting thing is that this doesn't contradict Searle's argument. Certainly, if you give the person in the room greetings and goodbyes at the same times every day, they may start deducing that some symbols represent "Good morning" and others "Good night." You can make it easier for them and require them to send out exactly the same symbols you introduced, making it clear that you're exchanging greetings in the morning or saying good night to each other. The moment a certain symbol or combination of symbols becomes associated with a particular event or behavior, meaning is instantiated. The point is, you could still conceivably design a system that doesn't allow for any kind of semantic deduction, rendering the understanding of Chinese impossible for the person. For example, you can have an exchange of symbols about quantum dynamics one day and about trees and forests the next. Do it only a few times so there's no possibility of repeating symbols that could lead to semantic associations. In any case, the localized understanding or non-understanding of Chinese by that specific part of the system (the human) doesn't affect the whole's capacity to understand Chinese. You could even add an actual robot (a type of digital computer) to the room and mechanize the exchange of symbols. When it uses its camera and identifies a particular Chinese character, it is required to move to a certain place, scan for a particular symbol stored in its memory, retrieve it, and push it out of the room. There certainly is computation going on, but no cognition, either by the robot or by the whole system. In the case of human beings inside the room, cognition is nothing but a localized event in the person's body that may speed up the system's computation, assuming that their robot counterpart functions at a sub-human level. Whatever the means of symbol exchange, the system is not cognitively understanding the semantics of Chinese. Without observers already capable of meaning, the system's behavior is nothing but the redistribution of pieces of paper from one room to the other. It doesn't matter whether the processing unit is a Chinese-impaired human being, a robot with a set of scanning and movement instructions, or even a Chinese native speaker. Edit: Oof, btw, sorry about the wall of text. I love the topic and got carried away lol
@andrewwrench1959
@andrewwrench1959 1 year ago
@@NicoAssaf Actually, we don't have to speculate about the validity of Searle's argument, as there is already a counterexample falsifying his conjecture: the deciphering of Linear B. Even were that not the case, given enough experience learning the patterns, together with elements of our own experience, sufficient evidence would be available. For example, one could work out whether the language was symbolic or alphabetic, or a combination like Egyptian hieroglyphics. We could make our own conjectures, for example whether a symbol or combination of symbols represented water. Water should appear in a lot of questions and answers, and in a lot of contexts. Those contexts would be coherent; patterns would emerge. It would take a long time and lots of experimentation, but that is certainly not the same as never.
@NicoAssaf
@NicoAssaf 1 year ago
@@andrewwrench1959 That's kind of my point, that regardless of the processing unit's capacity for semantics, the whole system is effectively purely syntactical. It doesn't matter whether the person or robot inside the room understands Chinese. The room doesn't. I'm interested in counterarguments, though, so if you can elaborate on the Linear B bit, that'd be great.
@andrewwrench1959
@andrewwrench1959 1 year ago
@@NicoAssaf Linear B was a written language used in Bronze Age Crete and mainland Greece. It is known only from what is written on clay tablets; there is no Rosetta Stone equivalent. Oxford Research Encyclopedias has a comprehensive article. There are circa 5,500 inscribed tablets, with more found occasionally. So it is very far from a "forever" stream of questions and answers, and not only were the semantics recovered but also the syntax, a far more difficult problem than the one Searle presents. There is some recursion here, since it was people, who do understand semantics, that deciphered Linear B. But Searle explicitly includes a person in his loop, so this is fine. To be clear, the refutation is not of a digital computer understanding semantics, but of the argument's use against functionalism.
@brandyraccoon1473
@brandyraccoon1473 1 year ago
Great point! The Chinese characters for numbers could be deduced, since "1", "2", and "3" are just that many horizontal lines. The rest could be deduced if the translation involves any math using Chinese numbers. The characters for "up" and "down" are mirror images that point up and down, so their meanings could be deduced from the syntax.
@wfjhDUI
@wfjhDUI 1 year ago
Suppose you were in the Chinese Room, but instead of reading a program you were given instructions by a human who understands Chinese. According to the logic of this thought experiment, that human does not "actually" understand Chinese.
@samhagler5532
@samhagler5532 1 year ago
This guy is a great teacher.
@LexPodgorny
@LexPodgorny 2 months ago
The inhabitant of the Chinese room does not need to learn Chinese to derive a sense of the world the Chinese speakers are talking about. If they talk long enough, he will be able to capture and classify all the same relations between objects that the speakers have in their minds. If you look at how word embedding is done, you will see that this is EXACTLY the process of deriving semantics from syntax.
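A bare-bones illustration of the word-embedding process this comment points to, using a made-up four-sentence corpus: vectors built from nothing but co-occurrence counts already place "cat" nearer to "dog" than to "cheese."

```python
from collections import Counter
from itertools import combinations
import math

# Made-up toy corpus; real embeddings train on billions of words.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

# Count which words appear together in the same sentence.
cooc = Counter()
for sentence in corpus:
    for a, b in combinations(set(sentence.split()), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def vector(word):
    # a word's "meaning" as its co-occurrence profile
    return [cooc[(word, other)] for other in vocab]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(x * x for x in v))
    return dot / norm if norm else 0.0

# "cat" lands closer to "dog" than to "cheese" - no meanings supplied,
# only patterns of usage.
print(cosine(vector("cat"), vector("dog")))     # ~0.67
print(cosine(vector("cat"), vector("cheese")))  # ~0.55
```

Real embedding methods such as word2vec refine this idea at scale, but the raw ingredient is the same: distributional patterns in the syntax.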
@gamzeozata4554
@gamzeozata4554 3 years ago
At first glance I thought you wrote 'pide', which is a Turkish word for something like a calzone. Thank you for the lesson! You are great!
@themerpheus
@themerpheus 3 years ago
lol same here
@danwylie-sears1134
@danwylie-sears1134 1 year ago
Also Spanish for "ask".
@MaxHeadroomMax
@MaxHeadroomMax 1 year ago
Before you dismiss my comment, please consider that I've been a computer scientist for 35 years and am now doing AI research. So here it is. I agree with Searle's argument that with the traditional digital programming model, by passing symbols in and out of a machine you will never be able to make it "understand" or derive the semantics from the syntax. What needs to be considered here is that in AI programming on digital computers, the supplied data in the form of symbols does "become" the program; that is, the program "learns" by building relationships between the supplied symbols and derives new data. These relationships are actual semantic data. This is what ChatGPT does, and it's a case not covered by Searle's arguments. My point is that Searle's Chinese room argument doesn't preclude the existence or realisation of strong AI on digital computers. The question, in my opinion, is still open.
@dnn8350
@dnn8350 1 year ago
I think you can make sense of a computational analysis of the mind if you follow the implications of certain modal logics, specifically those of uncompromising or incorrigible belief. In other words, the explicit or conscious content of the mind can be modelled as the unwavering "belief in a reality" of a computationally emulated knower. In this view "believing is seeing" in a systematic sense. There's a subtlety here though that's necessary to bridge the gap between syntax, or symbol manipulation, and semantics. Searle's point is that an external interpretation seems necessary to bridge this gap. So what's required therefore is to resituate that interpretation within the system itself. To emphasise this point, we must agree with Searle that, without interpretation, computation is indeed mere symbol manipulation. However, the implication of certain symbol manipulations, when externally interpreted as the corresponding beliefs, might be made to correspond to the explicit contents of mind. This interpretation might be called the material implication, or complement of such beliefs. So how could that interpretation be internalised? The answer might simply be to accept as an axiom that such beliefs necessarily implicate their material, that is to say substantive and significant, complement. In other words, the incorrigible belief of a knower in a reality is necessarily complemented by its material realisation, which then represents the semantics that retroactively raises the entire schema above mere symbol manipulation. As an example, the implication of my incorrigible belief in an apple is the material fact, expressed in sensory terms, of the material apple. Interpretation internalised, in other words. As an aside, the phenomenon of blindsight can be rationalised in these terms as the response to visual stimuli without the generation of the corresponding, materialised belief in a visual reality. Searle concludes his takedown of computationalism by proposing that brains produce consciousness as a biological byproduct. Well, my proposal is that computation "produces consciousness", or explicitly material beliefs, as a "byproduct" or complement of logical implication.
@dnn8350
@dnn8350 1 year ago
Chatgpt is probably better at explaining this than I am! Your proposal is an interesting attempt to address the issues raised by Searle's Chinese Room argument and to connect computational processes with consciousness. Let's break down your argument: 1. **Incorrigible Belief and Reality**: You suggest that the explicit content of the mind can be seen as the unwavering "belief in a reality" of a computationally emulated knower. In this view, believing is synonymous with seeing in a systematic sense. 2. **Symbol Manipulation and Semantics**: You acknowledge Searle's point that computation, without interpretation, is mere symbol manipulation. However, you propose that certain symbol manipulations, when externally interpreted as corresponding beliefs, can represent the explicit contents of the mind. This interpretation, you call "material implication," bridges the gap between syntax and semantics. 3. **Internalized Interpretation**: To internalize this interpretation, you suggest accepting as an axiom that such beliefs necessarily implicate their material complement. In other words, an incorrigible belief in a reality is necessarily complemented by its material realization. This internalized interpretation elevates the computational schema above mere symbol manipulation. 4. **Blindsight and Visual Stimuli**: You mention that blindsight can be rationalized within this framework as a response to visual stimuli without the generation of the corresponding, materialized belief in a visual reality. 5. **The Production of Consciousness**: You propose that computation "produces consciousness" as a byproduct or complement of logical implication, contrasting with Searle's view that brains produce consciousness as a biological byproduct. Your proposal offers a novel perspective on the relationship between computation and consciousness. It attempts to address the challenge of connecting computational processes with the generation of conscious experiences. However, it's important to note that this is a highly complex and debated topic in the philosophy of mind. Many philosophers and cognitive scientists have different views on the nature of consciousness and its relationship with computation. Your proposal raises intriguing questions and invites further exploration into the nature of belief, computation, and consciousness.
@rrshier
@rrshier 1 year ago
I think the biggest problem with Searle's attack on strong AI is actually that language does not have strongly definable semantics. We see this all the time when one person takes one connotation from something stated and another person walks away with a different internal meaning. This is partially (if not mostly) because language is circularly defined (words' meanings being defined using other words). Humans have only loose "senses" of the meanings of words. As another layer of complexity, words can have different meanings based upon the context in which they are used.

It is this last point that I think is the biggest problem. Even if we assume words have semi-strict semantic meanings, the element lacking is context. Context IS what allows better understanding of language, even between communicators within a similar language. As an example, if I started spewing out the math that makes up quantum physics, only a select few in the world would be able to work out what I was even attempting to communicate about, and everyone else would think I was speaking "Chinese." When I title the paper and put other words around the math as explanation, the math itself begins to take on its semantics. In this way, context acts as a (pseudo) converter from syntax into semantics (as best we can manage, given circular language definition). For someone like me, even with context, I wouldn't be able to make heads or tails of a paper on quantum physics; I have not had the training in that field to give me the requisite knowledge base to apply any part of the triad I lay out (syntax, context, semantics).

I think Searle is somewhat missing the fact that even for humans, language was not always in existence; it was defined, and certainly continues to be defined, as time moves on. From this, however, I also somewhat disagree with the way the systems argument is presented (not by the video). Searle is correct that the system as presented has no understanding of Chinese, but a system can incorporate many processing nodes (humans) which together substantiate a collective understanding of Chinese. As a thought experiment: if there were only a single person (a situation actually quite similar to the Chinese room itself, since the person inside has absolutely NO idea WHO is passing in the messages to be converted - humans, gods, etc.), then technically there would be no need for language at all, and perhaps it never would have been invented in the first place.

I would also pose this backward argument to Searle's Chinese room: if he is assuming that the person sitting in the room is human (and he obviously thinks that humans are somehow more capable than digital machines), yet even the human couldn't come up with meaning in the situation he has presented, then are we indeed better than the digital machine? I am not actually on either side of this argument; I do, however, think there are arguments against his, and possibly others against digital machines.
@turtles38x19
@turtles38x19 1 year ago
You said a whole lot of nothing, man, and you don't understand the topic at hand lol
@chrischristenson4547
@chrischristenson4547 1 year ago
I do enjoy your talks greatly; I will continue listening to them.
@fabianfarina2181
@fabianfarina2181 1 year ago
Searle's argument against the Systems Response is pretty weak. The semantics are within the rule set: if the whole system gives good answers to arbitrary Chinese questions, this implies that the rules have to take the meaning of words into account. The system is thereby not merely manipulating symbols anymore but actually understands Chinese, even though the human inside does not. Interesting video, thanks for putting everything together so well. 👍
@donthomson6260
@donthomson6260 1 year ago
One thought I have always held about strong AI having thoughts and feelings is the seemingly ignored fact that feelings consist of two components: the thoughts or inputs that initiate the feelings, and the physiological response. ST:TNG Data's never-ending quest to develop true feelings will always be fruitless, because he would have no hormonal response: no endorphins during pleasure, no adrenaline during fear, no increased heartbeat, no 'pit of the stomach' response. Without any of this, you can't experience true feelings.
@yalleh4700
@yalleh4700 1 year ago
Very well put and definitely agreed!
@leslierhorer1412
@leslierhorer1412 1 year ago
Analogs to those things can be developed. Certainly forms of punishment and reward can be implemented in the system. I don't think it is necessary for the system to respond in precisely the same fashion as a human for it to be conscious and sapient.
@yalleh4700
@yalleh4700 1 year ago
@@oscarleijontoft Hmm. I think their point was not that chemicals dictate emotion, but that, because of those chemicals, our emotions can rule us rather than us ruling them, if I may extend the argument.
@yalleh4700
@yalleh4700 1 year ago
That's... Not how hormones work
@simonhandy962
@simonhandy962 1 year ago
Those are elements that can be synthesized to mimic the biology to a degree that the artificial becomes indistinguishable from nature.
@calwerz
@calwerz 7 months ago
LLMs demonstrated that the distance in meaning between two words can be represented by how often they appear in the same word environments. Maybe we ourselves learned meaning from our training data by a similar mechanism, although we are multimodal, not just text-based.
@Sunshine10101
@Sunshine10101 1 year ago
Love your lectures. They are great!! Please keep it up. I am so grateful
@Brandon-a-writer
@Brandon-a-writer 1 year ago
As a speaker of Hebrew, when you wrote that out and said "You don't know what this means" … I thought, hehe it looks kinda like PIDE when you read from left to right. v'ani rot'zah lomehr lekh todah el ha'sorton hazeh vhhomer hivash sha-atah mosif lahk'ol adam mhim. shihiya yom tov, hver! *You don't know what that means!* EDIT: Usually when saying goodbye, I will say l'heterrot or b'hatzlikha. v'akshav zazim hver! Also I apologize if the transliteration is clunky, I don't have hebrew installed on my phone, so, ben zugey lsheyrutim! i kid, i kid. Love your videos. My brother has gotten into your channel because I showed him the video on the set of all sets and set theory, because he thought the running gag (the continued use of LeBron and emphasis on his titles, etc) really made him enjoy and appreciate the video in a way that a more dry reading of the material ever could have done. You are a gem, sir. Or, zahav as they say!
@robertbyars9486
@robertbyars9486 1 year ago
What the Chinese room hypothesis is saying, and I happen to agree with it, is that a digital computer can only respond to given inputs with instructed outputs. It cannot construct an individual interpretation of an input and produce a relevant, valid, independent response as output, as thinking, cognizant living organisms can - especially the more complex organisms, such as the octopus, which can solve problems without direct instruction, or people, who on our current understanding are capable of the most complex intelligence.
@Naomi_Boyd
@Naomi_Boyd 1 year ago
So according to Searle, you are not actually thinking unless you understand the language of neurotransmitters? 🤔 I think this one shoots itself in the foot.
@robertpreisser3547
@robertpreisser3547 1 year ago
ChatGPT proves this point: It understands no concepts whatsoever but manipulates syntax incredibly well. But the ability to form abstract concepts is not something that can be taught or learned, because the very act of learning requires that capacity to already exist in the first place.
@bradleyboyer9979
@bradleyboyer9979 11 months ago
Agreed. If you took all the information on the internet away, ChatGPT would not be able to respond to anything.
@juantubec
@juantubec 1 year ago
In the end it doesn't really matter, since syntax is just one piece of the puzzle that is semantics. Here are some more examples of semantics:

Words: The meaning of individual words is a prime example of semantics. For instance, the word "happy" conveys a positive emotion, while the word "sad" conveys a negative emotion.

Sentences: Semantics can also be seen in the meaning of sentences. For example, the sentence "The cat is on the mat" has a different meaning than "The mat is on the cat."

Context: The meaning of words and sentences can change depending on the context in which they are used. For example, the word "bat" can mean a flying mammal or a piece of sports equipment, depending on the context.

Connotations: Semantics also involves the connotations or emotional associations that people have with certain words. For instance, the word "home" may evoke feelings of comfort and security, while the word "prison" may evoke feelings of confinement and restriction.

Tone: The tone in which something is said or written can also convey different semantic meanings. For example, a sarcastic tone can indicate that the words being spoken or written are meant to be interpreted differently from their literal meaning.

It can be argued that ChatGPT can do this type of semantic puzzle-solving, and it is very capable of creating semantically coherent and contextually appropriate responses that go beyond simple syntax structure. The Chinese Room experiment fails to acknowledge that semantic comprehension can also be taught in 1s and 0s.

Also, a lot has changed since the '80s. We literally have quantum computing and are no longer tied to simple games of Pong. Our understanding of programming has gone beyond syntax-structure pattern recognition. This is why we are afraid of ChatGPT in the first place: it is taking 1s and 0s to an unimaginable scale, semantics included. We didn't simply count to one trillion; we created unimaginable numbers. We can define infinity despite not being able to experience it directly. And likewise, AI can make calculations based on concepts that do not exist through physical experience.
@rockprime1136
@rockprime1136 1 year ago
But doesn't ChatGPT just calculate the probability that its output text is the most likely response to a given input text? Some have even described it as juiced-up autocorrect. It still does not understand semantics. It is just shuffling symbols meaninglessly and doing probability calculations based on its algorithm. I see ChatGPT as the Chinese Room argument materialized.
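A toy version of that "juiced-up autocorrect" picture, trained on an invented scrap of text: it counts which word follows which and then emits statistically plausible continuations, with no semantics anywhere in the loop. (A real model uses a neural network over a vast corpus, but the "predict the likely next token" principle is the same.)

```python
from collections import Counter, defaultdict
import random

# Invented training text for illustration only.
text = ("the room shuffles symbols the room follows rules "
        "the rules produce answers the answers look meaningful").split()

following = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    following[prev][nxt] += 1      # count what follows what

word = "the"
output = [word]
for _ in range(8):
    options = following[word]
    if not options:                # dead end: nothing ever followed this word
        break
    # sample the next word in proportion to past frequency
    word = random.choices(list(options), weights=list(options.values()))[0]
    output.append(word)

print(" ".join(output))  # fluent-looking, understanding-free output
```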
@vulturom
@vulturom 1 year ago
I have a problem with the argument of the rulebook. I don't think you can have a simple rulebook; it would take what we have today, an LLM trained on a lot of data, with a mathematical structure that can learn. And here is the key point: I think that during training the model has to develop understanding in order to act as a rulebook complex enough for the dialogue to take place. So the semantics is in the book. The rulebook is necessarily the strong AI, so the argument is invalid.
@kablammy7
@kablammy7 1 year ago
Wow - this is exactly the explanation I needed to see, because I have been refuting every notion that AI can become sentient by saying that a machine will always and only be a machine, and that it will never realize anything because it is only processing symbols according to rules it can never understand.
@ronald3836
@ronald3836 10 months ago
The human brain is just a machine, and there is no reason to believe it is more powerful than a Turing machine. In principle you can simulate a brain at the level of neurons, or even atoms, with a computer.
@pengbertuuu
@pengbertuuu 1 year ago
The semantics of the language is embedded in the rule book. If we have only the language at hand, we can only use the language to define the semantics. Of course, you may add other types of input, like images, sensory data, sounds, etc., and they can all help to define the semantics of the language, but then your rule book would need to cover those other types of input as well. You cannot claim that your brain cells understand the semantics of "Sun," but you can say that you, as a human, understand it, because the meaning is embedded in the connections of all the cells in your body.
@kbee225
@kbee225 1 year ago
What the commenters talking about emergent properties don't understand is that functionalism is a type of physicalism, so the theory inherently holds that the brain is, in essence, nothing more than neurons and synapses. By assuming emergent consciousness is possible, you are accepting a dualist theory of mind, which says there are two parts to your brain. We know that a human brain can exist without consciousness, so there is no reason to simply assume that putting computing units together will create consciousness. And even if it did, we would then be agreeing that it emerged from something we can't really measure, because we can't quantify consciousness.
@garytschacher8361
@garytschacher8361 1 year ago
Crude counterargument: consider, instead of a Chinese room, an entire building. In this building are multiple floors and rooms. Each floor is a major layer for processing syntax, and the rooms are minor layers. There are also two different sets of rulebooks: one set functions as input-output units, while the other operates as a layer capable of modifying other rulebooks given the input. Also consider that outputs can generate new inputs. This complexity creates the illusion of meaning, or semantics. While this isn't a complete thought on the matter, the hypothesis is that the brain creates the concept of meaning attached to the input-output system through higher layers of processes that constantly update the lower layers, much in the same way as machine learning.
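A minimal sketch of this building metaphor, with invented data and a toy update rule: a lower layer maps inputs to outputs through weights (its "rulebook"), while an upper layer rewrites those weights from feedback, much as the comment suggests.

```python
# Invented example: a lower-layer "rulebook" (two weights and a fixed
# threshold) produces outputs, and an upper layer rewrites the lower
# rules after every error - a bare-bones perceptron update.

weights = [0.0, 0.0]  # the lower layer's current "rules"

def lower_layer(x):
    return 1 if x[0] * weights[0] + x[1] * weights[1] > 0.5 else 0

# target behavior: output 1 exactly when the second input is on
examples = [([1, 0], 0), ([0, 1], 1), ([1, 1], 1), ([0, 0], 0)]

for _ in range(20):                       # the upper layer's update loop
    for x, target in examples:
        error = target - lower_layer(x)
        weights[0] += 0.2 * error * x[0]  # rewrite the lower rulebook
        weights[1] += 0.2 * error * x[1]

print([lower_layer(x) for x, _ in examples])  # -> [0, 1, 1, 0]
```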
@austinhelton2295
@austinhelton2295 1 year ago
I feel like I’m misunderstanding a few things, so some clarification would be appreciated! 1) Does dualism not require functionalism to be true? If the body and mind are separate, the body, being nothing more than physical matter, would operate on functionalism. It takes inputs through the five senses and the mental-self then adheres to a set of rules on how to respond. If not, how would dualism apply to not being able to command your body through thought to run a fever (demonstrating the mental-self isn’t in total control) and uncontrollable reactions (demonstrating that the body can both read signals and act based off a set of rules without input from the mind I.e. pain receptors say this is hot, pull away)? 2) in the Chinese Room, would thought not be taking place in the analysis of the rules? You would need to understand the syntax and semantics of the English language to perform the task correctly.
@Benjamin1986980
@Benjamin1986980 1 year ago
I can't help but feel this discussion is missing something without a mention of the Rosetta Stone. We had studied hieroglyphics for decades without knowing what they said, until we found the Rosetta Stone, which paired them with a text we could already read. Without some anchor of actual meaning, we cannot produce any actual translation.
@chrischristenson4547
@chrischristenson4547 1 year ago
Digital computers today have a lot more memory and software available than he imagined at the time, and his idea needs to be updated. It's good to understand the history, and I like the history, but it's also good to use the history to understand where we're going as well as where we were.
@user-ju7dx8mu6d
@user-ju7dx8mu6d 1 year ago
Fascinating. Perhaps the system doesn't have semantics but as soon as the system does something with its output, the result is indistinguishable from understanding meaning. The box instructs a mechanical box to pick up the red block. The machine appears to understand "pick up", "red", and "block". Once an action is applied to the output, how is the machine's concept of meaning any different from whatever our concept of meaning is?
@GynxShinx
@GynxShinx 1 year ago
Is this not a direct description of how we revive dead languages?