Searle: Philosophy of Mind, lecture 6

23,105 views

SocioPhilosophy

Comments: 45
@ameya0111_ · 12 years ago
This one is about the Chinese Room Argument. Discussion of the Chinese Room Argument begins at 22:20.
@mohamedmilad1 · 9 months ago
I am not sure that the pain argument is a strong argument against materialism.
@nejiknya · 8 years ago
Can somebody give the name of the logical theorem he was referring to at 1:08:15?
@homphysiology · 8 years ago
The Löwenheim-Skolem theorem. I don't understand it, but you might get some info from the Stanford Encyclopedia of Philosophy.
@davidfost5777 · 3 years ago
I'm always looking for new, interesting lectures on psychology/philosophy. Please let me know if you guys have any recommendations; it would be highly appreciated.
@Gabriel-pt3ci · 3 years ago
Take a look at the Philosophy Overdose channel; it's my favorite.
@homphysiology · 8 years ago
"calculators don't calculate... the calculation is something we do with the calculator". Is that right?
@mohamedmilad1 · 9 months ago
If you have a spinal cord transection that interrupts C-fiber nerve signals, you are less likely to experience pain. Pain requires the terminal end, and the emotional part of the brain gets collateral signals.
@josuepineiro8227 · 10 years ago
"There is something else going on." Correction: there seems to be something else going on. Perhaps all that there is is syntax and it just seems like we deal with meaning, just like in the Chinese room.
@mohamedmilad1 · 9 months ago
What about phantom pain?
@VoyagesDuSpectateur · 8 years ago
Searle's reductionist argument can also be adapted to prove that brains don't think. All a brain does is move molecules around and fire electrical signals. Molecules don't have semantics or intentionality. Electrical signals don't have semantics or intentionality. Semantics and intentionality are necessary for consciousness and understanding. Therefore brains don't have consciousness and understanding, any more than the Chinese Room does.
@ValvtaSilent · 7 years ago
No, because of qualia. Qualia are a biological phenomenon that you experience in the first person. Check his paper on Biological Naturalism.
@badsocks756 · 6 years ago
ValvtaSilent Yup. I mean, does OP really think Searle has been at this for decades but never considered that argument? Get real. At least do your research on the topic before you try some dumbass reductio of Searle's Chinese Room (whether you ultimately agree with it or not). Read his actual works... or at the very least, finish the series of lectures for this class.
@lwhamilton · 6 years ago
VoyagesDuSpectateur 46:20
@Eta_Carinae__ · 6 years ago
Given his diatribe on dogmatic physicalism at the start of the lecture, I think he would agree.
@lucastanga6732 · 3 months ago
It's not the brain that thinks; it's your consciousness. And what is consciousness? That's the million-dollar question, but the fact remains that humans understand things and have emotions, dreams, fears, and plans. Your brain, just like any piece of silicon, has none of them. Plus, they do not occupy space, so your dreams are not in any specific physical location, not even in your brain (as far as we know so far). A friend of mine is an electrical engineer, and I once asked him whether it would technically be possible to detect which electric current makes a specific pixel on a screen shine, and he said it is. I do not know of any study showing that you can do that with a brain, though the work of George Lakoff might tend in that direction.
@Rocky_517 · 12 years ago
Great comment, very helpful, thank you!
@ROForeverMan · 11 years ago
Nah... you don't understand. The Chinese Room is about the fact that you cannot get semantics out of syntax. Brains work on the basis of semantics, and you cannot get that from computation or from the laws of physics as we know them. There is something else going on.
@MrPatrickDayKennedy · 7 years ago
Ha - well the computer says it's conscious, so why would it lie??
@Mevlinous · 11 months ago
So you don’t turn it off
@MrPatrickDayKennedy · 10 months ago
@Mevlinous what a different world it was six years ago 😂
@solomonherskowitz · 3 years ago
Is water necessarily H2O?
@Spideysenses67 · 3 years ago
The idea is that water is necessarily H2O because that is how it is defined, and therefore its definition rules out any instance of water not being constituted of H2O. If you imagine water not being made up of H2O but instead made up of something else, then it simply wouldn't be water.
@myAutoGen · 9 years ago
Wow! Searle seems to think that saying something defies common sense is a reasonable objection to an argument. I suppose quantum physics is thus refuted? Never mind the room. The system comprising you and the book understands Chinese, and is conscious just as much as anyone is, which is almost certainly not at all.
@johnnavarra4970 · 9 years ago
OK, here is a problem with the Chinese Room experiment. Let's say the computer is asked, "How do you feel?" Note that the response to this question can depend on a great many things. For example, maybe the computer determines it is sad because there was a power outage the night before, or maybe it lost a chess match to a human. In principle, there could be any number of reasons that cause the computer to respond that it is sad. Now, according to Searle, he has the English version of the computer program, which he runs manually. Consider what happens while he is carrying out all the computations necessary to answer this question. He too learns there was a power outage the night before. He too learns that the computer lost a chess match to a human. In short, he too learns exactly why and how the computer responds that it feels sad. The only thing he doesn't bother to learn is how to translate the answer into Chinese, though he could even do that if he wanted to. But recall, the Chinese questioners agree that the computer passes the Turing Test. Thus, Searle must also agree, since there is nothing special about translating the English answer into Chinese. The "mind" in this experiment is not the shuffling of symbols, but all the factors that go into giving the answer to the question. If the processing of those decisions is sufficiently complex, then we must agree the computer has a mind, regardless of the language it uses to communicate its intention! Please refute that.
@potemkinforest · 9 years ago
+John Navarra "If the processing of those decisions is sufficiently complex, then we must agree the computer has a mind..." Not exactly - it depends on the type of processing that is happening. You say that because the computer learns about the reasons behind its "feeling" sad, it is really feeling sad and therefore must have a mind. No. Why? Because, as you said, these events are all recorded in the computer program, the rulebook. These events are, once again, just a bunch of symbols being shuffled around. What is happening here is symbol manipulation, not semantical "understanding". Here is how it goes: 1. The computer is asked "How do you feel?" in Chinese. Let's say that the question is symbolised as such: "*!^#!*@)" 2. The computer receives the question: "*!^#!*@)" 3. The computer "consults" the computer program: -> if you receive "*!^#!*@)" then output "&*!^#!(*@#_!)@*&*#!^$@*#&$Y(" 4. The computer outputs "&*!^#!(*@#_!)@*&*#!^$@*#&$Y(" 5. "&*!^#!(*@#_!)@*&*#!^$@*#&$Y(" is interpreted by the native Chinese speaker as "I am feeling sad right now because I lost to a human at a chess match last week and I just experienced a power outage five minutes ago." Again, it is just symbol manipulation on part of the computer (see step 3). The symbols "&*!^#!(*@#_!)@*&*#!^$@*#&$Y(" only gain meaning when it is interpreted by a native Chinese speaker.
@johnnavarra4970 · 9 years ago
+thescientificanimal So using your procedure, obviously step 3 is where the real work happens. This is where the syntax/semantics conversion takes place. My point in the original comment was that nothing in the rulebook need specifically address the question "how do you feel?". Or, in other words, the procedure to follow when receiving this input is "fuzzy". Just as a person does not always respond the same way when asked "how do you feel?", neither will the computer program. In fact, given the exact same set of conditions, a real person could simply get tired of answering the same question the same way and respond with nonsense just to break up the boredom. Indeed, if a computer program failed to vary its response, it would not pass the Turing Test. So again, my point here is that somewhere deep in the "fuzzy" logic about how to respond to "how do you feel?" is an algorithm that truly does learn semantics from syntax; otherwise the computer would fail the Turing Test and fail to impress the native Chinese speaker. If we stipulate that the computer does pass the Turing Test, then it will learn semantics when the appropriate algorithm(s) are run. To turn this argument around: unless you can prove the human brain does anything different, I don't see how one definitively claims the computer does not truly understand Chinese, etc.
@johnnavarra4970 · 9 years ago
+John Navarra To be more precise, consider the following pseudo-code:

1. input = get_input();
2. syntax = parse(input);
3. semantics = to_semantics(syntax);
4. output = respond(semantics);
5. send_output(output);

Nobody disputes that steps 1, 2, 4, and 5 are possible for a computer; computers do this every day. Is step 3 possible? Why not? It is just some algorithm to run. Now, we may be fancier and include some error checking and a timer so the algorithm is certain to halt, but crucially, if I can in principle produce an algorithm that generates semantics from syntax, then I can duplicate everything a computer program needs to pass the Turing Test and speak convincing Chinese, etc. Why is this impossible for a digital computer but possible for a human brain?
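To make the shape of the dispute concrete, here is a hedged Python rendering of that pseudo-code. Steps 1, 2, 4, and 5 get ordinary, purely illustrative bodies; step 3 is deliberately left as a stub, since whether any body for to_semantics could yield genuine understanding is exactly the question:

def get_input() -> str:
    return input()               # step 1: read a line of text

def parse(text: str) -> list[str]:
    return text.lower().split()  # step 2: purely syntactic tokenization

def to_semantics(syntax: list[str]):
    # Step 3, the contested step: any body written here is, by definition,
    # more symbol manipulation; the dispute is whether that can ever
    # amount to understanding.
    raise NotImplementedError("step 3 is the point in dispute")

def respond(semantics) -> str:
    return str(semantics)        # step 4: map a "meaning" back to symbols

def send_output(text: str) -> None:
    print(text)                  # step 5: emit the reply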
@potemkinforest · 9 years ago
+John Navarra 1. "So using your procedure, obviously step 3 is where the real work happens." Yes, you are right in pointing out that step 3 is where the syntax/semantics conversion should take place, but this is exactly the heart of the issue: there is no algorithm that can be implemented to "convert" syntax to semantics the way you would use a video converter to convert an "avi" video file to an "mp4" video file. An algorithm is, by definition, a syntactic function. It stipulates functional relationships between symbols; it does not stipulate anything about the symbols themselves or what they mean. You cannot derive semantics from syntax (which, I assume, is what your step 3 pseudo-code means): it would be like trying to derive word meanings exclusively from grammatical rules.

2. "Why is this impossible for a digital computer but possible for a human brain?" I think we need to realise that a digital computer has very different properties from a human brain (or an animal brain, for that matter). For one, a human or animal brain does not run any sort of program. In common parlance we certainly speak of behaviours generated by the brain as the brain executing behavioural programs, but we have to keep in mind that these are metaphors, figures of speech. They are abstractions that allow us to understand functional relationships between behaviours, but that does not mean that the behaviour (and by extension the human or animal subject) itself is a program or algorithm. Secondly, digital computers and brains parse information in radically different ways. There are too many things to be said on this particular issue, so I will direct you to this article (if you don't mind): "10 Important Differences Between Brains and Computers", scienceblogs.com/developingintelligence/2007/03/27/why-the-brain-is-not-like-a-co/ The basic idea is that, though we might refer to the brain as a computer (metaphorically), in that both integrate information from inputs to generate a coherent output, they perform this function in wholly different ways.
@johnnavarra4970 · 9 years ago
+thescientificanimal Yes, we are converging on the heart of the issue, but it is not clear to me from listening to Searle or anyone else why exactly it is impossible, even in principle, for an algorithm to "learn" semantics as it proceeds. I could expand my pseudo-code as follows:

1. input = get_input()
2. syntax = parse(input)
3. info = learn(environment, to_semantics(syntax))
4. decision = decide(info)
5. send_output(decision)

Isn't this exactly what brains do? They take data from the environment, parse syntactical rules, and learn the semantics prior to making a decision, more or less. Yes, I grant you that classical computers are just formal syntactic calculators, but I am trying to get my head around the notion that processing various inputs cannot bootstrap some form of semantic understanding. As I said, if the algorithm is sufficiently complex, then the act of manipulating all of this data can generate understanding. Why not? What is so special about the "meat" inside our skulls that allows it to do something an algorithm cannot, even in principle? A Turing machine can simulate any finite state machine, and your brain states constitute such a finite state machine. Why would this Turing machine NOT understand Chinese if your brain does? I am not convinced, but I will read the article you suggest. Before I do, though, I would remark that one could say such a Turing machine cannot truly understand Chinese because it is not self-aware, and some sense of self is required in order to truly understand anything. That is, the "I" is the context for semantics, and without first-person context there can be no understanding. OK. Even if that is true, then I just need to add the following instruction to my pseudo-code:

0. bootstrap(self_awareness)

I don't know how to implement this, obviously, but if it is possible for computers to be self-aware, then it should be possible for them to derive semantics from syntax using some algorithm.
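The finite-state-machine claim, at least, is easy to exhibit: an FSM is fully specified by a transition table, so simulating one is table lookup in a loop. The states and events below are made up purely for illustration, and nothing about the simulation itself settles whether it understands anything, which is precisely what the two sides dispute:

# An FSM is a transition table; simulating it is repeated table lookup.
TRANSITIONS = {
    ("listening", "question"): "deliberating",
    ("deliberating", "answer_ready"): "speaking",
    ("speaking", "done"): "listening",
}

def simulate(state: str, events: list[str]) -> str:
    for event in events:
        state = TRANSITIONS[(state, event)]  # pure symbol shuffling
    return state

print(simulate("listening", ["question", "answer_ready", "done"]))  # -> "listening"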
@PCH12r · 7 years ago
Guys from Yale embarrass me.
@LukaszStafiniak · 12 years ago
As of 2013, if you spend a couple thousand bucks, a computer performs a couple of trillion operations per second (short scale), rather than a couple of million.
@MrPatrickDayKennedy · 7 years ago
Lukasz Stafiniak Around 50m, Searle addresses this with the "wait a year" response to the Chinese Room (citing Moore's Law). You could have a trillion trillion billion computations every nanosecond and it still would not bridge the gap between syntax and semantics, nor make a simulation a duplication.
@robotaholic · 11 years ago
He continually attempts to refute many theories of artificial consciousness, yet never actually lays down his own theory. What the Chinese Room thought experiment actually states is the problem of other minds, to which there isn't an actual refutation.
@MrPatrickDayKennedy · 7 years ago
John Morris Have you read Searle's original article? He is specifically refuting the computational theory of mind with the Chinese Room argument (and in particular references the research of Schank). I think the article is called "Minds, Brains, and Programs", from the early 80s. He is not at all addressing the problem of other minds. You'll need to listen to the later lectures in this course for his own theory. He spends roughly the first half of the course explaining the history of, and contemporary, theories of mind (which he thinks are largely mistaken). Prior to this lecture he covers Descartes' dualism, behaviorism, functionalism, etc.
@badsocks756 · 6 years ago
John Morris How many times does he say IN THIS VERY LECTURE that he gets to all that later on in the class? Do you really think college classes are a single 90-minute lecture long?
Searle: Philosophy of Mind, lecture 7
1:13:36
SocioPhilosophy
20K views
Searle: Philosophy of Mind, lecture 5
1:13:21
SocioPhilosophy
26K views
Searle: Philosophy of Mind, lecture 23
1:16:01
SocioPhilosophy
7K views
Searle: Philosophy of Language, lecture 1
1:07:07
SocioPhilosophy
100K views
John R. Searle (Conversations with History)
58:28
University of California Television (UCTV)
28K views
John Searle - Can Brain Explain Mind?
11:44
Closer To Truth
48K views
Searle: Philosophy of Mind, lecture 11
1:13:50
SocioPhilosophy
15K views
Language and the Mind Revisited - The Biolinguistic Turn with Noam Chomsky
1:27:52
University of California Television (UCTV)
197K views
Searle: Philosophy of Mind, lecture 8
1:12:37
SocioPhilosophy
19K views
Searle: Philosophy of Mind, lecture 10
1:09:23
SocioPhilosophy
16K views
Searle: Philosophy of Mind, lecture 15
1:13:06
SocioPhilosophy
11K views
Searle: Philosophy of Mind, lecture 14
1:15:04
SocioPhilosophy
14K views