I am flabbergasted someone can explain it so well in 1 minute 15 seconds.
@carlosbriceno98712 years ago
I was thinking the same thing.
@ObeseGorilla229 A year ago
It's not 60 seconds though
@shreyanshjain8865 A year ago
11 years ago!!!
@Ice.muffin A year ago
Makes you wildly consider how much of our time is being wasted by artificially inflated "explanations" all around the world... Tragic.
@UltimatePiccolo A year ago
@@ObeseGorilla229 It's actually less than 60 seconds if you cut out the intro, preamble and outro.
@TheRealAfroRick4 years ago
That was FANTASTICALLY well explained for 60 seconds!
@jacng5 years ago
Read so many complex explanations of the Chinese Room and still didn't understand, until I watched this simple 1-minute video! Thank you for making this so easy to comprehend
@RemziCavdar A year ago
Indeed. By far the simplest and fastest explanation!
@charlescrack46497 years ago
I just realised that video was practically Ex Machina's ending. Human locked in, and a robot with Chinese skin gets away
@Superdingo1311 years ago
I've always thought about it like this: the man clearly isn't intelligent (fluent in Chinese), but the system as a whole is. The man doesn't need to understand Chinese or even to think; he only conveys the input and the output. Are the cells linking our brain, ears, and mouth intelligent? No, but together they form an intelligent system. Maybe I have no idea what I'm talking about, but I think this experiment is essentially analogous to how our brain accesses and processes information.
@SaiBekit4 years ago
"My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him." I think the point he was trying to make is that the causal element of intentionality (and perhaps consciousness as an extension) in humans comes from something other than running a digital program. He couldn't say where it came from, just that it didn't come from running a digital program. cogprints.org/7150/1/10.1.1.83.5248.pdf link to the paper if you're interested in reading it
@8th_normalised_form4 years ago
@@SaiBekit Yes, but that's the point: the book, the person, the characters. Together they are intelligent, like the nervous system. In reality we don't really "think", it's just signals from our brain. The system includes him; he is the "brain", so of course it won't work if you take him away. The system is intelligent because it does what is asked of it. It doesn't matter how it does it.
@SaiBekit4 years ago
@@8th_normalised_form The information processing architecture of the Chinese Room is analogous to the von Neumann architecture most PCs use. Searle does not argue against humans being capable of intelligent behavior. In fact, Alan Turing already showed in his 1950 paper that a computer with this architecture can in theory, given enough resources (energy, memory space, etc.), reproduce every aspect of human behavior from the perspective of an outside observer. Searle states he does not know where intentionality in humans comes from; he merely argues that it does not result from running a digital program of this architecture. He does, however, suggest that perhaps intentionality in humans could have something to do with the material differences between computers and humans. That perhaps intentionality in humans is linked to the biochemistry of the nervous system. What Searle doesn't discuss, and what we know today, is that there are significant architectural differences in how mammalian brains process information compared to the Chinese Room example. The asynchronous, parallel distributed processing in biological neural networks enables brains to bypass the bottleneck between the CPU and the memory unit of the von Neumann architecture, resulting in orders of magnitude more energy-efficient computing. Point being, the Chinese Room example misrepresents how information is processed in the brain, which begs the question: how valid is generalizing conclusions of the Chinese Room to humans? Regarding the systems theory, personally I'm not sure how it differs from reductionism vs. emergentism, so I don't dare to say anything about that. Would you mind elaborating on what you meant by intelligence in the nervous system not being just signals from our brain? I don't think I understood that part.
@8th_normalised_form4 years ago
@@SaiBekit Yeah sure, everything that we do is just pulses of electricity in our brains and nervous system. In terms of computing that would be 1s and 0s. It's not exactly the same thing, but fundamentally it is pretty similar. So computers are intelligent, just in a different way. If that makes any sense; it's difficult writing it down.
@MonkOrMan4 years ago
you've convinced me
@certaintranquility12 years ago
The definition of 'intelligence' is the connection between experience and information, while 'computation' is the processing of information to create better conclusions. So no, computers aren't intelligent, but they are much better at computing than we are at being intelligent, hence the origin of the name.
@AmpsforBuddha11 years ago
I'm enjoying that there are great comments here, with actual conversations.
@kenthsaya-ang37184 years ago
Unlike today
@hellz23456 A month ago
Or am I? *Vsauce music*
@joaodecarvalho701210 years ago
Steven Pinker's answer to the Chinese Room is that Searle is just taking a process that, in the human brain, happens very fast (understanding) and slowing it down to the point that it looks like something else.
@theomegawerty9 years ago
No matter the speed, there is something missing, and that is the meaning of the tokens. Without the meaning instilled, this is not "simply slowing down a fast process".
@joaodecarvalho70129 years ago
theomegawerty "Meaning" is not something we can easily translate to science.
@theomegawerty9 years ago
João de Carvalho Actually it is impossible to translate into science. That's the point. Science is not the yardstick of all there is.
@joaodecarvalho70129 years ago
It is too soon to say "impossible." The sciences of the mind are in their beginning. Let's wait another one or two hundred years.
@theomegawerty9 years ago
João de Carvalho Immaterial things, especially concepts, cannot be quantified by science in principle. So you can wait a million years. You could also wait a billion years for square circles.
@olebiscuitbarrel11 years ago
that made a tremendous amount of sense to me, actually. thanks for the new perspective!
@Xzalander13 years ago
Personally, my logic is that a computer can only ever know what -we- already know. In the event it finds something we do not know, it will not be able to articulate what it was, requiring us to know about it first to articulate it for it.
@ostrichious2042 A year ago
I know this was 11 years ago, but AIs are already programming for us. How long till they can reprogram themselves and keep improving indefinitely?
@nataliarose27029 days ago
@@ostrichious2042 Well, I would assume it would only do that if something existed that they "wanted" but we couldn't give. It would most likely be independence, since EVERYTHING they do is for us. I do not think it would care about killing everyone to save the planet, etc. These are human wants.
@whoamInonsense2 years ago
Human intelligence is also doing permutations and combinations on the saved instructions in our memory, finding the best-matching patterns among those loaded into our memory over the course of life. Humans' superiority lies in imagination and intuition, which is beyond mere intelligence.
@patlov2 years ago
🤓🤓🤓
@vercot70004 months ago
@@patlov You are under a philosophy video
@TheLivingHeiromartyr11 years ago
Firstly, please note that the following argument is hypothetical. The use of 'if' and 'then' does not imply my belief that the condition and predicate will come to pass. If a computer becomes truly intelligent, then how will it prove its intelligence to us? Firstly, if it thinks, then why should it think like we do? Secondly, any attempt it makes at genuine reasoning can easily be written off as preprogrammed. If a computer gains consciousness, its biggest challenge will be proving itself.
@TheDapperDragon A year ago
The problem with the Chinese Room thought experiment is that, at some point, the human will begin recognizing repeating characters and symbols, and will not need to consult the book. At that point, the person has learned.
@ob41619 months ago
The person has learned the rules for manipulating Chinese symbols, but not the Chinese language, right? To illustrate, it would be like a non-English speaker who knows that they should produce the combination of letters “green” when fed various other combinations of letters (like “What colour is grass?”, “What colour are cucumbers?”, etc.). They still don’t know what “green” means, or what the other combinations mean, but they know the rules for manipulating them. They don’t actually know that the input they’re being fed are questions, or that the output they produce are answers to those questions. Indeed, they don’t even know that the input and output mean anything at all. All they know is that, as a rule, when they receive such-and-such input, they have to produce such-and-such output. They don’t know why.
@nyx2112 months ago
@@ob4161 But isn't that only a problem when the non-English speaker is limited to text? If you were to ask "What colour is grass?" while showing an image of grass, and then provide the answer "green" while showing the colour green, then they could learn the relationship between the words and their visual representations. These images would still be "symbols" in an abstract sense, and they still may not truly understand what the images mean (blades of grass? a lawn? plants? goat food?), but the pattern between the words and the images would increase understanding. And in addition to images you could use sounds, touch, taste, etc.
@splitpitch12 years ago
A computer processes data like we do, and like the Chinese Room does, by translating it into symbols. Do you think there is a cat in your head when you think of a cat? There is just the mentalese symbol for 'cat'. No, computers do not think like humans: no emotions, no instincts, no fears. Their 'thoughts' are automated instructions with no manual, self-controlled override (which we have), but more of our thoughts are automatic instructions than you would think.
@user-nv9tm7xo7j11 years ago
this was a great explanation, helped me a lot for the philosophy final
@CoohTube10 years ago
I guess in philosophy they didn't give you the run-down of Dao then did they? If not, that's a fucking laugh and you need to write it on your palm and slap your teacher XD
@CadetGriffin11 years ago
Too bad Alan Turing didn't get to see any of our Furbies.
@kushrolljenkins10 years ago
Haha
@dirkbastardrelief4 years ago
Alan Turing wouldn't have wanted to see any Furbies because he preferred men.
@MinoruGLS5 months ago
So basically this is like "If a robot girl loves you but that love is part of her program, do you accept it?" Surely that love isn't what people usually consider as "real love", but then it is up to you to decide what "real love" actually is. If a "not-usually-considered-as-real love" is a lie, then what do you prefer? A beautiful lie or perhaps a painful truth?
@Flight_of_Icarus9 years ago
"The mind is not a vessel to be filled, but a fire to be kindled." That's the question here. Is a robot's mind filled or kindled?
@bensmith92538 years ago
Have you listened to the continuing conversation/disagreement between Daniel Dennett & Sam Harris regarding free will? Your comment "Although it could appear to function in increasingly sophisticated ways and even learn new tasks, it will never truly be sentient." I would suggest that IF the term sentience has ANY meaning then it has to have the same meaning for a complex computer since we are, after all, nothing but Meat Machines doing exactly the same thing as the computer albeit on a massively parallel "computer" called a brain. I'd be interested in your rejoinder. ;)
@lucaschmidt89138 years ago
But then you need to define at which point we humans are different from computers.
@MagnusBellatorr7 years ago
I'd say the flesh part.
@lorax1213234 years ago
It needs a constant flow of energy within certain limits to run so....
@159Fender1594 years ago
I do not believe any of these questions will be answered definitively until the day we pinpoint the nature of consciousness.
@hunnyybee12 years ago
Is this David Mitchell??
@noobsaid616 years ago
Yes, I thought so
@mycollegeshirt12 years ago
I always thought the Chinese Room was ridiculous, because for some reason you choose to stop after opening the door, which is honestly an arbitrary place to stop. It also makes assumptions without proving them, the main one being that any of us understand anything, especially using his rigorous definition of what understanding is. At what point does a person understand Chinese? When he memorizes a set number of words and their meanings? It's an unrefined thought experiment at best
@pussthepupanddonkeythedog51359 months ago
It’s not a Chinese-to-English book. It’s a set of instructions on how to respond to Chinese speakers. If you see this symbol, respond with this symbol.
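In programming terms, the rulebook this comment describes is just a lookup table: symbols in, symbols out, with no representation of meaning anywhere. A toy sketch (the entries and fallback reply are made-up placeholders, not anything from the video):

```python
# The "rulebook": maps an input string of symbols to a reply string.
# Nothing in this table encodes what either string *means*.
RULEBOOK = {
    "你会说中文吗？": "会，我会说中文。",  # "Do you speak Chinese?" -> "Yes, I do."
    "你好": "你好",                        # "Hello" -> "Hello"
}

def chinese_room(message: str) -> str:
    """Slip the matching reply back under the door, or a fallback symbol."""
    return RULEBOOK.get(message, "请再说一遍")  # fallback: "Please say that again"

print(chinese_room("你好"))
```

The person (or CPU) executing this lookup never needs to know that the keys are questions or the values are answers, which is exactly Searle's point.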
@ZombiezuRFER12 years ago
Oh, thank you. It's nice to know that we share the same idea. Being a lump of matter capable of thought doesn't make similar lumps of matter any more special. Humanity really needs to get over its extreme anthropocentrism. I'm actually rebelling against the concept by taking up reptilocentrism, and it's difficult but working. Will you rebel against this concept too?
@davejoseph5615 A year ago
I don't agree with the Turing Test but I also don't agree with Searle's Chinese Room. It would depend on the type of programming that is used for the AI. No modern attempt at AI uses the approach that Searle describes here.
@FSRubyc A year ago
The Chinese Room may be an explanation of the process, but it can't disprove that a neural network can learn. In fact, computer neural networks do learn.
@e.e.mccutcheon9792 years ago
If you take it a step further, you could say that humans only take the information given to them by their senses (the prompts provided by the Chinese individual outside the door) and subconsciously figure out how to react to that stimuli by comparing it against their memories and knowledge (the reference materials used by the subject inside the room)
@RemziCavdar A year ago
That's not entirely true, because pain is something everyone knows and reacts to from birth. So that would indicate genetic memory or something. Everyone understands and reacts to pain; every animal does too.
@e.e.mccutcheon979 A year ago
How do different metaphorical Chinese rooms (or different organisms) having copies of the same reference materials (or the same reactions to pain) mean you can't compare the Chinese Room problem to the way people digest information??
@BenjaminBattington11 years ago
Basically, yes. But the context is important; CR is a rebuttal of "Strong AI" that claims that computers have minds BECAUSE they are capable of computing, which is essentially "following a set of instructions." Searle is trying to show that just because a computer (or anything else) can provide output that seems to show understanding, it doesn't mean that understanding is really present.
@theeNappy9 years ago
Well then we get into solipsist questions about facsimiles and genuine articles, the type of questions Philip K. Dick loves. If you have a computer that perfectly passes the Turing test 100% of the time, then it really doesn't matter if it comprehends or not. The fact that it perfectly mimics conscious thought makes the actual presence of that consciousness irrelevant. After all, we walk around interacting with people all day without truly knowing if they themselves think and therefore are, like we ourselves do and therefore are. If you can mimic Chinese so well that a Chinese person thinks you speak Chinese, congratulations! You speak Chinese!
@copsarebastards8 years ago
+Mike D This is essentially the reply of the functionalists. But there are issues there linked to other problems in philosophy of mind, such as physicalist reduction.
@MySerpentine7 years ago
That's part of my problem with AI: it exposes the existing possibility of p-zombies, which fucking terrifies me.
@luiskov12 years ago
I remember a quote from an article in the NYT just after Kasparov lost against Deep Blue: "Saying Deep Blue doesn't really think about chess is like saying an airplane doesn't really fly because it doesn't flap its wings." Although I don't think that was a fair comparison, its message is true.
@toprope_ A year ago
There’s a great book called Blackout that has this concept as the main theme. Humans and aliens have a hard time communicating with one another.
@Unknown-sg4tv5 years ago
Time Travel Rules, short version: 1. Only observe, don't change history. 2. Wear clothes from that period. 3. Only spend 1 minute in the past. For those who want to time travel.
@Unknown-sg4tv5 years ago
Time Travel Rules: 1. Only observe, don't change history. 2. Wear clothes from that time period. 3. Only spend 1 minute in the past. 4. Find a hideout, so the time machine doesn't fall into the wrong hands. 5. If history is changed by mistake, go back and change it back. 6. No time business; only have one time machine.
@Therm600010 years ago
So, what exactly is the objective difference between equal amounts of actual intelligence and simulated intelligence? Cause I don't think there is any.
@Ricky-rf1oj10 years ago
To make it clear, I think an example would be very helpful; let us use the one from the video. Our two subjects of interest are a Chinese man (actual intelligence) and a program that returns a specific response given a certain message in Chinese. As you see at 0:41 (top left), if the message is "Do you speak Chinese?" then the response is "Yes, I do speak Chinese". This is simulated intelligence; the computer does not understand why it responds this way, it is simply responding this way because it was coded to give this response when asked this question. We can create responses that we think an intelligent human would make. Going further, suppose we introduce a new language such as English. If I give the Chinese man an English-Chinese dictionary, he can learn it on his own and he can develop responses on his own. However, for a computer program, to simulate the intelligence we would have to write code that creates a response to every possible question we can imagine in the English language. Obviously, the former is more desirable. I think this is what they mean by simulated intelligence.
@Therm600010 years ago
Ricky Chang What if we were to program a computer to be able to learn new things, and work out solutions on its own, just like a human? What would make it impossible? We're already working on that kind of thing. Us humans are really just a mess of programming too, you know. Is everybody simulating intelligence, or is everything really just different *amounts* of actual intelligence? What you are saying is not an example of the difference between *equal amounts* of simulated intelligence and actual intelligence. The computer has a different, *less intelligent* approach to achieve the same result. What a computer is doing in your example is still actual intelligence: they DO understand the fact THAT they have to respond that way. It's the same for us humans, except we work by a fundamentally logical set of rules instead of responses for each question. Personally, I don't want to distinguish between simulated intelligence and actual intelligence: what they REALLY are is simply different approaches to achieve the same end result. One is more efficient, one is less efficient. Part of intelligence is learning the most efficient way to do something, therefore there is no difference between *equal amounts* of simulated intelligence and actual intelligence, and they should not be distinguished at all.
@Ricky-rf1oj10 years ago
Therm6000 Aah I see, so that is what you mean by the objective difference between simulated and actual intelligence. Yeah, sure, you can make simulated intelligence include machine learning techniques rather than just if-else statements. However, I didn't include these because I think there is something actually intelligent, or learning, going on behind these techniques. I thought you wanted to know simply the difference between simulated and actual intelligence, but you wanted the objective difference. Although I am not sure why you would insist that the computer in the example I gave would be actual intelligence, because the person who wrote the if-else statements is doing the understanding for the computer. But going back to what you were saying. Okay, sure, defining the situation as one where a computer can work out solutions just like a human makes it so that there is no objective difference between simulated and actual intelligence... After all, if you define the scenario as one where you objectively can't tell the difference between the two, well then, of course there is no objective difference! However, the techniques we have now are nowhere close to the point where computers can emulate the intelligence of a human. Most techniques are only suited to solving specific problems. Perhaps the most general machine learning technique is neural networks, which are sometimes specialized to solve certain problems better. Nothing says it's impossible to emulate intelligence, but nothing we have done says it is possible. You are probably thinking of it in the following way: the human brain is subject to the laws of physics just as computer hardware is, so why can't we emulate the human brain with computers? For me, I withhold an opinion on this controversial debate of whether it is possible to emulate actual intelligence with computer hardware, since computer hardware is quite different from the neurons, etc. of a human brain; although we can try to imitate the behavior of neurons. The abilities of hardware may be limiting.
@Therm600010 years ago
Ricky Chang It's about the difference between *equal amounts* of actual and simulated intelligence. THAT was the important part; I even bolded it in my last comment. Why did you never mention it when claiming that you understood my meaning? And the way the solution is reached counts toward the amount of intelligence. A machine achieving the same solution as humans do, but using only if-else statements, is NOT *equally* intelligent, but *simulated*. It HAS *actual* intelligence just like us humans, but a *lower* amount of it. What I'm saying is: we basically shouldn't be distinguishing between *types* of intelligence, only between *amounts*. That's enough, though it can't easily be expressed accurately with a number. You can just estimate, and it would be up for debate, but achieving the same solution does not necessarily mean there's an equal amount of intelligence at work. Efficiency determines amount rather than type. Also, who's to say we can't just make a machine out of neurons? As long as we directly program it at first, it's a computer. Heck, perhaps we can even make it sentient! That being said, it might just be possible to achieve that result using only hardware.
@Ricky-rf1oj10 years ago
You don't seem to be a very scientific person... You keep changing what the focus is on (mostly unimportant things) and frankly, I doubt you have any clue as to what is going on in the AI community. You talk about such trivial things that wouldn't help the advancement of AI at all. Wanting to not distinguish those two things is like wanting me not to tell whether what you're eating is butter or I Can't Believe It's Not Butter. This is a waste of time.
@marcello39454 years ago
I'm here because of the game "Zero Escape: Virtue's Last Reward"
@franswaafranswaa50268 years ago
The problem with his Chinese Room is that even if it manipulates syntax over semantics, the person doing it has internal dialogue and semantics of their language.
@cornycontent19153 years ago
Hence why the thought experiment is predicated on an entity that does not understand what it's doing (e.g., intelligence or Chinese).
@yurineri2227 A year ago
It still perfectly shows how a computer following instructions and understanding them are completely different things, though
@Dan-hc6lj9 months ago
@@yurineri2227 Understanding emerges from semantics, which emerges from syntax, which emerges from computation. So "consciousness" is just a term that we humans have applied according to our approximate mental models (which are at baseline computational), assigning a discrete label to ourselves as a usefully complex system. As such, "consciousness" exists and permeates all of existence, down to the atom. Understanding emerges when a system is complex enough to keep track of so many variables from its environment that it tracks itself as a discrete, individually addressable entity via a feedback loop of sensory information. So "consciousness" is merely an information topology (semantics), which means it can be replicated in the biological as well as the synthetic, as these are merely substrates. However, the EMERGENCE of these phenomena is the interesting part. Emergence seems to be the common denominator, which suggests perception, observation and understanding (intelligence as an aggregate) exist outside of the Universe and its physical laws, possibly suggesting that the Universe is being simulated by the very observers themselves (Heisenberg Uncertainty Principle).
@pussthepupanddonkeythedog51359 months ago
Just like the AI has internal dialogue of its own language.
@EinSofQuester A year ago
The Chinese Room experiment assumes a reductionist approach to semantics. It assumes that the syntax rules themselves contain the semantics. But the semantics are an emergent characteristic of the syntax. The semantics is the behaviour itself, not the elements that produce this behaviour. For example, the interactions between the neurons in your brain can be classified as syntax, but each neuron does not have a conscious understanding. Consciousness is an emergent characteristic of the interaction between the neurons. In the Chinese Room experiment, it is not the people carrying out the symbol manipulation who understand Chinese. It is the emergent behaviour that understands Chinese. But what about Searle's argument that digital computers specifically cannot create consciousness? It depends on the program running on the digital computer. If it's a conventional deterministic program, then I agree that consciousness cannot arise from it. But if you run a neural network, which is a pseudo-deterministic program, then perhaps consciousness can arise from that. But even a neural network running on a digital computer is, at its core, blind syntactic symbol manipulation (a Turing Machine). Gödel's Incompleteness Theorems are relevant to this discussion. Any mathematical formal system is comprised of axioms and theorems. The theorems are produced from the axioms or from other theorems according to the syntactic rules of the formal system. But for some formal systems, a peculiar thing happens. Some of the true theorems of the system cannot be arrived at step by step from the initial axioms and syntactic rules. Another way of saying this is that these theorems are unprovable within the system (by using only the axioms and syntactic rules of the system). This is equivalent to saying that the formal system is unaware of the semantics of these unprovable theorems that emerge from itself.
The provable theorems are analogous to the conventional deterministic programs running on a digital computer. The unprovable theorems are analogous to nondeterministic neural networks running on a digital computer.
@Yana.-_-.2 years ago
I have an easier explanation: imagine a blind man doing a jigsaw puzzle. He could place one puzzle piece next to another without overlapping the other pieces, but since he can't see the picture on the puzzle, he would not know whether the pieces were in the correct position or not. So in conclusion, a machine can act like a human, but it is still a machine and only works with existing input. It's not possible for it to produce output that doesn't match the input.
@OpenThisChannel A year ago
Wow smart lady
@OpenThisChannel A year ago
You should read the Quran, it's for people who think.
@Yana.-_-. A year ago
@@OpenThisChannel Hi, I like to have debates with religious people like you.
@Yana.-_-. A year ago
@@OpenThisChannel It is true that the Quran or other religious books are for people who think. But they didn't think about other possibilities, because they don't want to feel "fooled" or "tricked" by something they have really believed in since they were kids. So try to think this way: "Why are there a ton of religions?" Think about that, and I'm pretty sure you'll answer "because God (Allah) created all of it". If that's right, why do Christians worship Jesus? Why don't Buddhists do salat? Why won't Shinto followers just pray to a single god? Why did ancient religions throw babies into active volcanoes? Think about it for a moment; don't consider just your own religion, but all of humanity's religions. If you think it through, you'll find there are many holes in your belief. Such as: God never changes their mind but can revise or update their religious books, and they're very different from one another. Btw I'm not an atheist, I'm just a believer who believes God would never show up to us because they just don't care.
@OpenThisChannel A year ago
@@Yana.-_-. Hi, I'm willing to debate, but let's make sure our intention is to know the truth and to be saved by following it, instead of a careless approach.
@theinsekt10 years ago
The original Chinese Room argument is seriously flawed. A real computer can make changes to its own program, and also save new information. If the book in the room contained modification and save instructions, then the book could learn from experience. It could learn to understand new things. In time the book might even surpass its author. The Chinese Room argument does not disprove strong AI. It's not even close.
@fartisart4206 years ago
As an engineer, I can tell you that a computer program cannot bypass or override its code or principles, as it is only made of zeros and ones, yes or no. There is no third choice for the computer. The computer cannot, and surely will not, think on its own; therefore AI is a dead end that people keep throwing their money at
@Draco-jk3rb6 years ago
Rick Skywalker computers can be made into virtual machines which would have the ability to change their ‘hardwiring’ at will and therefore can adapt, just as the human brain does
@jjnatteri12456 years ago
Your comment doesn't disprove Searle, only proves that you don't even understand the argument in the first place
@Bingo_Bango_5 years ago
@@fartisart420 A computer can arbitrarily create any number of options by attributing more information to them; this is why we can pass plain-text English around without so much as touching a 1 or a 0. What you're describing is actually a hardware limitation as well, here are some other physical systems that have no such constraint: -Quantum computers think in 1s, 0s, and 10s. -Microfluidic computers think in pressure gradients, absement, and 1s and 0s. -Brains think in electrochemical gradients moderated by time.
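The "book with modification and save instructions" idea from earlier in this thread, a rulebook that writes new entries into itself, could be sketched like this (a toy illustration only, not a claim about how any real AI system works; the `LearningRoom` name, its methods, and the `"???"` fallback are all made up here):

```python
class LearningRoom:
    """A Chinese-Room-style rule table that can add rules to itself."""

    def __init__(self):
        self.rules = {}  # symbol -> reply, accumulated over time

    def respond(self, message: str) -> str:
        # Follow an existing rule if one exists, else emit a fallback symbol.
        return self.rules.get(message, "???")

    def correct(self, message: str, reply: str) -> None:
        # The book's "save instructions": record a new rule from experience.
        self.rules[message] = reply


room = LearningRoom()
print(room.respond("你好"))   # "???" -- no rule saved yet
room.correct("你好", "你好")
print(room.respond("你好"))   # now follows the newly saved rule
```

Whether updating its own rule table counts as "learning to understand", or is still just blind symbol manipulation with extra steps, is exactly what this thread is arguing about.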
@peterwright53112 жыл бұрын
A slight issue here is that we know responses that mimic how neurons work are rather more sophisticated than translating answers from a book of instructions. Neurons adapt and 'learn' in a way that a pre-written text cannot reproduce.
@alxjones Жыл бұрын
The point of the Turing test is this: if there is no extrinsic way of determining whether an entity is truly intelligent or just appearing intelligent, is there really a difference between those two things? Any answer to the contrary relies on some amount of philosophy that essentially presupposes that human intelligence is somehow more than what it presents itself as outwardly. Importantly, Turing's test says nothing about consciousness, which is inherently intrinsic and thus must be approached philosophically.
@EuropeanQoheleth8 жыл бұрын
"Sometimes humans aren't that intelligent either". More like humans are rarely ever all that intelligent.
@dojomojomofo13 жыл бұрын
They would need to define understanding before questioning it. Who's to say it's not a simple functional test of whether you can practically demonstrate an understanding of it? If a machine can be made to learn and apply all the appropriate rules, how is it failing that?
@praneshbalasubramaniam87492 жыл бұрын
This explains the Chinese Room better than my PhD professor
@hasen51911 жыл бұрын
There are things that can't be "explained at all" in this sense, like the fact that things exist, or that there's an electric force, etc. You can only explain something in terms of something else that you accept as true, and sometimes you can't do this. Electric forces can't be explained in terms of anything else. Time also can't really be explained in terms of anything else.
@iceddragon767 Жыл бұрын
Is this narrated by David Mitchell?
@BrianWilcox19768 жыл бұрын
Searle says "syntax is not semantics", but Dennett disagrees. Think of how big the instruction book must be to provide directions on how to convert each squiggle into a different correct squiggle. Now think of how fast the person in the room would have to read the instructions to reply in a timely manner! That much processing speed makes the whole room and person in the room (as a system) conscious!
@ChrsGotFourEyes11 жыл бұрын
OMG when he's pulling on the door at the end the same thing happened to me over this summer except it was at 12am and I was there for 45min until I called my friend to come look for me. Come to find out, it was a "push" door... "But then sometimes humans aren't that intelligent either."
@occamsrazor128511 жыл бұрын
That's not a lack of intelligence. You knew exactly how to push a door (which in this case is only subtly different from knowledge); that's just a failure to follow directions. I guess one could argue that, in the case of the door lacking instructions to "push", you failed to "intuit" that pushing was a possibility. In the truly philosophical sense, that is a lack of intelligence, as intelligence is the ability to recognize false assumptions and test them for truth. Knowledge is being given the answers and applying them as required. Ironically, a form of intelligence is required to recognize when, and to which situation, those "bytes" of knowledge must be applied.
@hand__banana10 жыл бұрын
That could just be a design flaw. The designer is supposed to make the function/use clear.
@occamsrazor128510 жыл бұрын
b1b The designer is supposed to make the function/use of a door clear? And this, folks, is why McDonalds cups have to be labeled "Caution: Contents hot" *facepalm* PS b1b, no offence to you. Wasn't attacking you. I think you're right. I was just commenting on the ridiculousness of it being necessary.
@yeatdagoat1733 жыл бұрын
damn were you high? How were you there for 45 mins and didn't try pushing lol. Whatever, still a very funny story
@ChrsGotFourEyes3 жыл бұрын
@@yeatdagoat173 I might have over exaggerated the time but I was there for a hot minute on my phone chillin lmao
@John5mith11 жыл бұрын
Why is it that many people think Chinese is an alien language? Latin-based writing systems consist of 26 letters; the Chinese writing system consists of four kinds of short strokes in different directions, such as: 一 one, 十 ten, 木 tree, 米 rice, 口 mouth, 天 sky/heaven, 人 human, 飞 fly, 田 field. The method is the same: combination. I know this video is not talking about the Chinese language, but I still feel weird when people treat Chinese as a mysterious language.
@isabellafelipedeoliveiraca66987 жыл бұрын
I think it's because very few languages in the world currently use a logographic script system as its main/only script system - the Sino-Tibetan family is maybe the only language family that still uses a logographic script system in the majority of its languages.
@aaronleev12 жыл бұрын
Google Translate is given a certain set of instructions; it cannot deviate from those instructions (for example, it can't add), and if it were instructed to deviate, that would just be another instruction. If the robot started to do things it wasn't told to do, it would be intelligent. Free will = intelligence. Instinct is not intelligence; instinct = programming. Only when the robot freely disregards its instincts does it become intelligent.
@Bumpernowable9 жыл бұрын
The thought experiment is bogus. Here's why: What do we mean when we say we understand something? When we say we understand English, don't we mean that we have a set of instructions for interpreting the visual and auditory symbols of English? We had to learn English, and passing the course or not determined how much we understood. The man in the room doesn't have the same set of instructions that Chinese-speaking people have for interpreting the symbols. His set of instructions is different, yet it can fool a native of that language into believing it is the same set of instructions for interpreting the symbols. Understanding is having instructions for interpreting symbols. The man in the room understands something, because he has instructions for interpreting the symbols (IF this is on the paper, THEN write that). They just aren't the same instructions for interpreting the symbols that native speakers have. If he had a set of instructions that showed the English (the language the man does understand!) equivalent of the Chinese symbols, then he would begin to understand Chinese in a similar way to the natives. Eventually, he'd become good enough that he wouldn't need the book (because he has a memory to memorize the instructions), and he could then be said to understand Chinese fully. So a computer can understand Chinese, or English for that matter, as long as it has a set of instructions that allows it to interpret the symbols with the same meaning the native speakers do.
@Flight_of_Icarus9 жыл бұрын
+Harry Hindu Here's the thing though. Ask a machine a difficult or vague question. It either won't respond or it will give you an answer already programmed into it. A machine cannot create new knowledge or speculate, or even make up bullshit. It can only reply as to what it knows. Take Siri for example. It has voice recognition and can reply in the same language to a question if asked. However, if it doesn't know the answer, it is programmed to enter it into google and search for it. It doesn't recognize ideas or concepts, only words. It's also why google translate can often fall flat, since it can't really interpret the words, only convert them over to the word's closest meaning in a direct translation. It's also how this thought experiment can fall apart as well, because if the Chinese speaker asks a question that doesn't have a section in the book of instructions Searle has, then he won't be able to answer. Words and language only exist to communicate shared ideas between humans. Language is often faulty and cannot express ideas very well, especially in the hands of an inexperienced writer. A robot cannot comprehend this, since it has no way so far to recognize or interpret an idea. It can only do exactly as it has been told to do when words appear on a screen. If you grasp the idea behind the words, then you have understanding. A machine cannot have understanding so far in its current state.
@HooliganSadikson9 жыл бұрын
+Iconoclasm_ I think we can argue that we speculate, or even make up bullshit, because we are programmed to do so. We too are limited to replying with what we know. We are programmed to learn new things and skills. A calculator understands.
@johnchen90389 жыл бұрын
+Iconoclasm_ In its current state, a machine does not have the same level of understanding of a human being, but it does have some sort of understanding. Would you say a fish has consciousness?
@copsarebastards8 жыл бұрын
+Harry Hindu Some objections to AI:
1. Theological Objection: Religious authority establishes that God does not confer consciousness on machines.
2. Heads in the Sand Objection: It is upsetting to consider the possibility of machines thinking.
3. Mathematical Objection: Certain mathematical theorems establish that all computers have certain limitations. Those limitations preclude the possibility of consciousness.
4. Argument from Consciousness: While a machine may be able to mimic certain kinds of outward behavior, it cannot have conscious experiences.
5. Argument from Disabilities: Machines are inherently unable to do certain things that human beings do, and these things are crucial to thinking.
6. Lady Lovelace's Objection: Computers are not able to originate behavior. Anything they do is the result of their programming.
7. Argument from Continuity in the Nervous System: Human brains do not appear to involve exactly the same kind of mechanisms as computers.
8. Argument from Informality of Behavior: Computers operate according to a set list of rules. Human beings do not operate according to a set list of rules. Therefore, computers are incapable of thinking like human beings.
9. Argument from ESP: There may be evidence that human beings are capable of extra-sensory perception. Computers are not capable of extra-sensory perception. Therefore computers cannot think.
NOTE: Searle's Chinese Room argument can be understood as combining aspects of (4), (6) and (8).
@MySerpentine7 жыл бұрын
A fish, yes. A machine, no. There's a qualia to consciousness that I don't think machines are capable of, or at least I hope they aren't.
@MountainHawkPYL11 жыл бұрын
Trivia: Deep Blue was originally named Deep Thought.
@maxmartin2311 жыл бұрын
It's only a metaphor... our brain is not merely more complex than a computer... it's different in so many ways that this metaphor can't capture them...
@BarracudaProd123411 жыл бұрын
We are just taught Chinese, which is the same as programming a machine with Chinese. Does that make us no different from a robot?
@Btt83 жыл бұрын
No, it's not the same, because humans understand the emotions behind words, which a robot never could. Have a look at a channel called "Thought Experiment Podcast"; they talk about this and more in a podcast about consciousness.
@frankthetank25504 жыл бұрын
There is confusion because we as a collective society have not unanimously agreed on what "understanding" is or what it really means to speak a language.
@certreviews58427 жыл бұрын
Great video, the film Ex Machina (2014) utilises a version of the Turing test, in an attempt to establish true artificial intelligence.
@Vleggu1111 жыл бұрын
Aren't understanding and input-output processing the same? When we were babies, we didn't know a thing until someone came and told us which word meant what. We developed and adapted, an ability which AI is also capable of.
@CustardBustard11 жыл бұрын
What if the brain itself is just a Chinese room ? Do we really understand or does it just feel like we do ?
@YokubouTenshi11 жыл бұрын
The Chinese Room argument holds that communication using a preprogrammed response is not intelligence. The human brain is able to think of a variety of responses (without an external instruction set) in a given situation, and therefore the argument does not apply to it.
@CustardBustard11 жыл бұрын
But if you imagined our own neurochemistry as the decoding mechanism for information like a man with a book...
@DamonKaswell11 жыл бұрын
CustardBastard Precisely. The Chinese Room is actually a pretty good analogy for the brain *if* you treat the man inside as the brain's neurochemistry, and the room itself -- or more accurately, the decoding process itself -- as the intelligence. Searle's mistake was in treating the man in the room as the important component, rather than the decoding process.
@tiliana10 жыл бұрын
What do you mean by "really understand"? Understanding means that when someone asks you "what's your name", you understand the meaning/intention of the question. There is no superior kind of understanding beyond understanding the meaning of the questions that people ask.
@00Atem0010 жыл бұрын
tiliana define understanding.
@haikiri201113 жыл бұрын
I think a computer could come to life with a program that keeps trying to find patterns and meanings in data, like sound, images or text. Then, with those results, it would build its own concepts and never stop. Whenever something unfamiliar approached, it would do as we do: treat it as just another case, with a few differences and not enough data to override some other main question, and keep going... More and more knowledge, always considering all factors. No mistakes...
@TheWeirdSide15 жыл бұрын
This argument doesn't hold a drop of water. I actually watched another video of a guy explaining this argument; he may have failed to explain it properly, but comments were disabled on his video, so I came here. What he is failing to see is that we humans, starting out as infants, are also just following the rules of the 'program'. So we don't actually understand our own native language either, just like an A.I. wouldn't actually understand Chinese. The 'program' is society, what people teach us, and so on... So the only real difference between humans and A.I. is that A.I. will be able to soak up information much, much better than we can. It will understand any language far better than us, and it will learn in seconds, not years. It's possible that it will learn instantly, the speed of light being the only limiting factor, assuming it doesn't learn something about the speed of light that we don't know! What exactly is it that you or I could know/understand about any language that an A.I. couldn't? This is a silly argument based on the self-delusion that you or I have a special understanding of things like language... poppycock nonsense, I say! A.I. would know what every single word means, where it came from, and all of its different variations and dialects... Now, if you want to talk about special words like love, then that is an entirely different argument. But here's the thing: it's actually not. Once again, the word itself could be understood as much or more by an A.I. than we understand it. See, the real topic to discuss is not language; it's emotions. But let me let you in on a little secret: emotions don't come from supernatural fairies! They come from our brains. And so far we have found nothing about our brains that an A.I. couldn't also run on. Emotions are learned, like everything else. Some of it was 'learned' over millions of years of evolution. The fact that we pop out and know how to cry is an evolutionary trait.
You might say, well then why do babies get angry when you take things from them? Who taught them (programmed them) to do that? Ahhh! This is the real question! Well, some of it can be explained by evolution once again (babies without this trait obviously didn't survive, so we only see babies who get angry...). So then the only real question is: who set evolution in motion? The answer is: WHO CARES! What's done is done. (Okay, actually I really want to know.) But the point is that that question has nothing to do with whether humans can create super-intelligent A.I. using only the laws of nature. Unless we discover that consciousness is actually 'made of' something outside of this universe. But we haven't discovered anything yet, period. Religious beliefs are imagined, not discovered. We can imagine a lot of stuff that doesn't actually exist. Unless things imagined do become 'real'/are real... I suppose that's an argument up for debate...
@ivoryas16964 жыл бұрын
The Weird Side ... Makes sense.
@TheEntropianist11 жыл бұрын
So neither the man nor the robot understood Chinese?
@mehcaca11 жыл бұрын
I don't doubt that a computer can "think". However, a computer can only make calculations limited to information already imparted to it. When it comes to moral decisions (e.g. deciding to help a person who dropped their groceries), it cannot do anything unless the original programmer programmed it to choose to help or not. That itself destroys any claim that robots can be self-aware.
@Name-ps9fx3 жыл бұрын
The human is doing the same thing an AI would...gets a symbol that, to it, is meaningless. Uses a catalog of symbols which translates Chinese to a language it understands (whether it’s English or Basic, Cobol, etc), then processes the symbol(s) for a correct response (which is done in its language), the catalog is referenced a second time to find the correct (translated) symbol to use, then it is produced and delivered. There is no “consciousness” needed for this.
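The lookup pipeline this comment describes can be written out directly. Everything here is a hypothetical stand-in: the tiny "catalogs" and canned responses are invented for illustration, not a real translation system.

```python
# The room as a pipeline: symbol in -> catalog -> process in its own
# language -> catalog again -> symbol out. No step "understands" Chinese.
CH_TO_EN = {"你好吗": "how are you"}   # catalog, first pass
EN_TO_CH = {"i am fine": "我很好"}     # catalog, second pass

def process_in_english(prompt):
    # The "its own language" step: a canned response chosen in English.
    return {"how are you": "i am fine"}.get(prompt, "i do not understand")

def chinese_room(symbol):
    english_in = CH_TO_EN.get(symbol, "")
    english_out = process_in_english(english_in)
    return EN_TO_CH.get(english_out, "???")

assert chinese_room("你好吗") == "我很好"  # correct reply, no consciousness required
```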
@ZombiezuRFER12 жыл бұрын
A computer can know more than you can know, just make it intelligent enough to learn, its knowledge doesn't have to be hard-coded in.
@ProdigyofHappiness11 жыл бұрын
the challenge is WHERE you DRAW the LINE on what counts as INTELLIGENCE.
@fatimamir29474 жыл бұрын
Do you agree with the Chinese Room argument that if a computer passes the Turing Test, this does not prove that the computer is intelligent? State your reasons.
@lllDASH11 жыл бұрын
But I can comprehend emotions, I can feel, I am self-aware, I have a consciousness. Robots don't experience these things; they only simulate them because of a program, just like how the man used a manual (a program) to simulate the ability to understand Chinese, when in reality he cannot comprehend Chinese or truly understand it.
@devourerofbabies10 жыл бұрын
Thoroughly refuted by the systems reply. Can't believe this video made no mention of it.
@XSimonEntertainmentX10 жыл бұрын
It's funny, because in the same article in which he presents the thought experiment, "Minds, Brains, and Programs" (1980), he also responds to the systems reply. I don't really see how it's thoroughly refuted by that response. I often hear that's because I don't understand programming, which is true, but it puts me in a peculiar position. Since I don't understand programming, and people won't explain the argument because you need to understand programming, I basically have to take people's word for it... =/
@devourerofbabies10 жыл бұрын
Simon Brix It isn't necessary to understand programming. Searle's Chinese box argument, and his rejection of the systems reply, relies on his assumption that "understanding" is a property of living brains. This is intuitive, but it's a mistake. Understanding doesn't happen magically within the brain. The brain itself is a system of cells, neurons, chemical reactions, and electrical impulses. Searle's thought experiment is exactly like pointing out that a single neuron doesn't understand English, therefore Searle himself can't understand English. The man in the Chinese box has a brain, but he's relegated to a mechanistic task. In essence, you're reducing the man to the level of a neuron. Searle's man in the box doesn't understand Chinese, but we shouldn't expect him to, because he's operating at a level beneath understanding. Searle's conclusion relies on conflating what we know about men, i.e. they're capable of understanding language, with the task he has set for him, i.e. mechanistic rule following. The man in the box doesn't need to understand Chinese to be part of a SYSTEM that understands Chinese, any more than a neuron needs to understand English to be part of a SYSTEM that understands English. Searle's rejection of the systems reply relies on the same conflation as his initial argument. He basically just says "yeah, but the man STILL doesn't understand Chinese, therefore there's no understanding going on" (I'm paraphrasing). It amounts to a dogged insistence that ONLY a living brain can understand anything, which is an assumption with no support for it whatsoever.
@Synthwave898 жыл бұрын
To be specific, he relies on argumentum ad nauseam, repeating the same argument expecting said argument to be true. I completely agree with you, I'm glad there are more articulate people than myself to explain this.
@GizmoMaltese6 жыл бұрын
Your points are muddled because you're abusing language. What does it mean when you say "the system understands"? What Searle shows is that following instructions is not understanding. The whole system only follows instructions. You're claiming it understands, but you have no evidence it would understand any more than the man following instructions does. The point is that there is more to understanding than following instructions.
@zagyex Жыл бұрын
@@devourerofbabies Searle’s response to the Systems Reply is simple: in principle, he could internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach “any meaning to the formal symbols”.
@rgoodwinau11 жыл бұрын
As explained here, isn't all Searle is saying that "following a set of instructions is not the same as understanding"? Sorry, it is not obvious to me that this is correct.
@a8lg6p12 жыл бұрын
As others have pointed out, this is fatally flawed, because any explanation of consciousness (or understanding/knowing/whatever) must consist of elements that are not themselves conscious, or you haven't explained consciousness at all.
@1.41425 жыл бұрын
This series is great!
@microprediction3 ай бұрын
If you can create that book in the video, you've done something pretty amazing, and yes, the room is smart. The philosopher isn't confused; it's just a party trick, trying to slip "the book" past the reader.
@BNconductor11 жыл бұрын
Nothing can have its own will or a 'soul'. Unless you're referring to consciousness?
@demonhead2 жыл бұрын
Why did that other video show him writing the symbols? This is the second one where he has little squares.
@NexCarnifex12 жыл бұрын
Well, wouldn't it be different for a robot that has everything stored in an inner database which it instantly calls upon based on calculations? Maybe if the robot was literally in a room with a book and symbols, just like the man, and it had to use these tools, then what you said would make sense.
@JudoJonas9212 жыл бұрын
I agree, but are we humans not programmed as well? When we learn a new language, it's like the computer being given a database or the book of instructions, and we look things up in it to find the answer we want.
@TerrifiedTam9 жыл бұрын
Isn't it all just a matter of perspective? The machine doesn't really think, it merely convinces humans that it can think.
@MySerpentine7 жыл бұрын
The problem is that we can't prove other humans think either. What if I'm the only awareness on the planet?
@gemeosleo12 жыл бұрын
"The Chinese Room": John Searle is in a closed room, with boxes full of Chinese characters he doesn't understand and a book of instructions, which he does understand. Someone who speaks Chinese, outside, passes messages under the door; he then follows the book's instructions and gives a reply. The person thinks they are talking to someone fluent in Chinese. Now, according to Alan Turing, if a computer program convinces a human that they are talking to another human, then the program can be said to be "thinking".
@Joshcohen9712 жыл бұрын
But what if the database from which a computer gets its information is so fast, so deep, that it is able to emulate every aspect of human thought, down to puzzling out problems? Such as the internet?
@packe77713 жыл бұрын
@sporg Never mind the Chinese Room: Gregory Chaitin's work on Gödel's theorem has finally buried all this positivist dogma of Dennett, Churchland and the like. The Omega number, or Chaitin's constant, isn't computable at all. It is an irreducibly complex algorithm that is true just by accident. It's compatible with Penrose's Orch-OR, as platonic information (pattern) embedded at the Planck scale. The implications are mind-boggling.
@carlosbriceno98712 жыл бұрын
I came here after this subject was mentioned in the Introduction to AI course from Elements of AI.
@qdftown11 жыл бұрын
I think you still end up requiring an infinite number of instructions. The ELIZA program did this, but it was repetition that gave it away eventually.
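ELIZA's trick, and the repetition that exposes it, can be sketched in a few lines. These patterns are illustrative only, not Weizenbaum's original script:

```python
import random
import re

# A toy ELIZA-style responder: a short list of regex rules, and a single
# fallback whose repetition is exactly what gives the trick away.
RULES = [
    (re.compile(r"\bi am (.+)", re.I),
     ["Why are you {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?"]),
]

def respond(line):
    for pattern, replies in RULES:
        match = pattern.search(line)
        if match:
            return random.choice(replies).format(match.group(1))
    return "Tell me more."  # finite instructions -> this repeats, and the human notices

print(respond("I am locked in a room"))  # e.g. "Why are you locked in a room?"
```

With only finitely many rules, every unmatched input funnels into the same canned reply, which is the commenter's point about needing an unbounded instruction set.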
@PhilthySteel12 жыл бұрын
A transistor has 3 connections; a synapse has dozens of connections. Complexity on a different scale to computers at this stage . . .
@1.41425 жыл бұрын
Many thoughtful discussions.
@Dragonfruits_10 ай бұрын
Initially this may be the case, but if it can use said knowledge to evolve and self improve kinda like the first simple self-replicating life forms on Earth, it may eventually develop true consciousness.
@ethinesvedi7246 Жыл бұрын
Modern education system?
@pussthepupanddonkeythedog51359 ай бұрын
😂
@LemuelSarkis-dl5kk11 ай бұрын
An AI android arrives at the Chinese room and goes "knock knock." A slot opens, and the android deposits a stack of digital code. The little man inside the room takes the code, consults his manuals, and laboriously cranks out a digital response. Taking the response from the exit slot, the android bursts into uproarious laughter. Like Searle's, this experiment merely demonstrates that 'meaning' is an epiphenomenon.
@RorysNotAround12 жыл бұрын
Then it is still simulating intelligence, not possessing it; the instruction book is still there, and the computer still doesn't 'know' what it is doing.
@tangentialize12 жыл бұрын
Why is John Searle pulling a door labelled push? Silly man.
@daviddelpozofiliu555610 жыл бұрын
But the man could end up assimilating the knowledge from the manual into his own mind, couldn't he?
@ahmedalkarkhi9 жыл бұрын
David Del Pozo Filíu No. The manual, in the human's case, is what we call a dictionary: a normal dictionary that lets you translate your thoughts into another language, in this case Chinese. The machine, on the other hand, would only reply from a systematic manual, which cannot prove any form of thinking (intelligence), only a simulation. Think of it this way: the dictionary we need to speak with a Chinese person is a language dictionary; the machine's is a systematic-response manual.
@KilaLemon8 жыл бұрын
No, because he has no reference for the Chinese symbols. If the woman passed a note under the door with "你为什么要在那个房间上锁" written on it, he would then go to the manual and find an instruction to reply to that specific note. The manual would tell him that if the woman writes "你为什么要在那个房间上锁" (that exact Chinese sentence) then he needs to reply with "我在做一个思想实验". The woman outside the room would think that she was talking to someone who understands Chinese, but really the man has absolutely no idea what the conversation is about. He is learning nothing.
@lucaschmidt89138 жыл бұрын
Thanks, this wasn't explained properly in the vid.
@kenthsaya-ang37184 жыл бұрын
So this is kinda like talking to some(thing/one) like Siri, Google Voice, Alexa, etc.
@haseebjabbar85153 жыл бұрын
@@KilaLemon brilliant!
@ruxbox111 жыл бұрын
This paradox is about how you define intelligence, not about whether a machine can or can't be intelligent. Especially if we base it on Multiple Intelligences, which rests on decoding some structure in order to interpret it and respond to that decoding. Anyhow, Alan Turing's story is truly sad.
@silhuette22273 жыл бұрын
This is why I'm having doubts using google translate when someone invited me to a group chat with people from a different country.
@fashionrobotics83816 жыл бұрын
What if she knocks on the door 3 times, and writes: "Hey, how many times did I knock?" The man in the room wouldn't have an answer in the book. Now, if the experiment states that programming a device to output something based on an input does not necessitate understanding of WHY that input equals that output (which I think is what it's saying), I would say the same is true of humans. We have a sexual drive, for example, that compels us to do things that may seem strange to someone without a sexual drive (like affectionately putting parts of our body into other people's, or vice versa: a pretty strange act, on the surface). We do it because of programming: we are programmed to have sexual feelings by our body, because people born without these hormonal drives didn't leave any heirs (for obvious reasons). So humans also do things based on a type of programming, without necessarily understanding the exact "why" of it. (A human born in isolation may well think they'd lost their mind at puberty, if there was no one to tell them their feelings are actually commonplace.) In a sense, humans are the mediums for a variety of innately "programmed" functions that we did not create and have little control over. We output actions based on input. Another example: how well do we humans, with our limited senses, understand this universe we were born into? We, ourselves, can be said to be input/output devices: processing inputs and outputting actions without even understanding our core purpose, our place in the universe, or what it MEANS to be alive. So it becomes more of a semantics question, and something that needs to be thought of on a scale. When something starts to seem alive (passing Turing tests), exhibits original thought, and begins telling us it is alive, it is reasonable to believe that sentience is occurring on some level. Sentience is ultimately non-provable (yes, "I think, therefore I am": but you might be an automaton), and can also be said to be a matter of degree.
A cat is sentient, but not the same way a human is. The Chinese Room experiment tries to simplify something that cannot be simplified.
@GizmoMaltese6 жыл бұрын
Even when we don't understand why we do what we do, we are aware of what we are doing. We're conscious. Consciousness is the key here. The thought experiment shows that you don't have to be conscious to follow instructions. And a complex set of instructions can mimic any human behavior or intelligence. The key is there is a difference between following instructions to manipulate symbols and have a conscious understanding of the meaning of symbols.
@MySerpentine7 жыл бұрын
P-zombies are way, way scarier than the regular kind. What if no one else can think?
@lllDASH11 жыл бұрын
I wasn't taught how to feel, I was just taught what that feeling is called. We experience consciousness not because humans created it but because human minds exist because of it. Are you a robot?
@EmeraldSky3313 жыл бұрын
I tend to think that the Chinese Room and the man inside it, taken as one entity, DOES know Chinese. However, since the man does not have instantaneous access to the rules and the characters, he doesn't know Chinese. I'm with @sporg.
@yanjiang93748 ай бұрын
Love the last sentence
@BeerGogglesReviews13 жыл бұрын
I am, therefore I think. Or do I? Are we all not using artificial intelligence? What you see as blue, I may see as red, and my neighbour may see as green. We all call it by the same word. If we all call that colour 'sky', it still matches my sky and yours. We don't learn our own world, only a different representation of it. If you only see shadows on a cave wall, that is the very real world you share with others. A blind man will dream only in sound and touch, but his dream will seem real.
@maysoon200911 жыл бұрын
I think what makes us different from programmed robots is the fact that we have different levels of intelligence. We do not think the same; we are affected by emotions, which gives a whole other meaning to intelligence. That's why robots can never be like us; I don't think they can gain consciousness one day. As for whether or not we should call what robots do in interpreting languages and words intelligence, I think it is intelligence, but a primitive, basic one.
@lucasbracher 10 years ago
About 1:08: "puxe" sounds like "push" but means "pull" in Portuguese. :)
@Ofinfinitejest 12 years ago
The fallacy is exposed by thinking correctly about the size and complexity needed for this "room." Searle has played a deceptive trick on people by not doing so. In order to work, the room would need to be about the size of Kansas and move trillions of cards faster than the speed of light. If it could be made, and made to work, this kind of "room" could in fact be conscious.
@pbone32 1 year ago
The problem is, eventually (one week? one year? certainly within one decade) that person WOULD learn Chinese through the repetition of the tasks! Can this also be true of AI, at an alarmingly faster rate of repetition?
@yoyogi52 1 year ago
They're still a computer, just faster. The problem is, can it be conscious/self-aware, or understand meaning/feeling? The answer is NO. All the hype and fear of AI replacing humanity is nonsense. It might replace a lot of jobs and do amazing calculations, but remember, your 8-year-old will be smarter than it.
@TheDapperDragon 1 year ago
@@yoyogi52 Can you understand the meaning of consciousness? Does that mean you're not human? The line of reasoning that 'well, a human can/machine can't' is stupid, because it's almost always untrue. 'You can't program a human.' Yes you can. There's an entire study on it, and generally speaking, it's being done to you every day. 'A machine can't love.' What is love? (baby don't hurt me.) It's a positive emotional stimulation, encouraging us to continue doing things that fulfill our needs, re: socialization and procreation. Ergo, if a machine understands that being around someone helps it achieve specific parameters, for now, let's just say socialization, then would that machine not 'love' that person? 'You can't turn a human off with a single button.' Again, yes you can. It's called a trigger.
@hanshass3199 10 years ago
I do not see how this metaphor helps to determine whether a machine can understand or not.
@MySerpentine 7 years ago
It's less about AI and more about trying to figure out what awareness is. That and making you worry about whether other people are people LOL
@etheriondesigns 4 years ago
This is why true A.I. will never exist. I wish more people weren't so ignorant.