CONSCIOUSNESS IN THE CHINESE ROOM

21,787 views

Machine Learning Street Talk

1 day ago

This video is demonetised due to a music copyright claim, so we would appreciate support on our Patreon! / mlst
Panel: Dr. Tim Scarfe, Dr. Keith Duggar
Guests: Prof. J. Mark Bishop, Francois Chollet, Prof. David Chalmers, Dr. Joscha Bach, Prof. Karl Friston, Alexander Mattick, Sam Roffey
Pod: anchor.fm/machinelearningstre...
(References are in YT pinned comment)
The Chinese Room Argument was first proposed by philosopher John Searle in 1980. It is an argument against strong artificial intelligence (AI) - that is, against the claim that a suitably programmed machine could ever truly understand, as opposed to merely imitating understanding.
The argument goes like this:
Imagine a room in which a person sits at a desk, with a book of rules in front of them. This person does not understand Chinese.
Someone outside the room passes a piece of paper through a slot in the door. On this paper is a Chinese character. The person in the room consults the book of rules and, following these rules, writes down another Chinese character and passes it back out through the slot.
To someone outside the room, it appears that the person in the room is engaging in a conversation in Chinese. In reality, they have no idea what they are doing - they are just following the rules in the book.
The Chinese Room Argument is therefore an argument against the idea that running a program could ever amount to genuine understanding. It is based on the idea that understanding requires more than symbol manipulation, and that following rules is not the same as understanding.
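To make the setup concrete, here is a minimal sketch of what the room reduces to computationally: a lookup table applied with no grasp of meaning. The characters and replies below are invented placeholders, not anything from Searle's paper.

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The "rule book" is just a lookup table; the characters and replies
# are invented placeholders, not a real conversation.

RULE_BOOK = {
    "你好吗": "我很好",        # input squiggle -> output squiggle
    "你是谁": "我是一个房间",
}

def room(symbol_in: str) -> str:
    """Follow the rule book mechanically; no meaning is consulted."""
    return RULE_BOOK.get(symbol_in, "请再说一遍")  # default reply

if __name__ == "__main__":
    print(room("你好吗"))  # a fluent-looking reply, but nothing is understood
```

Searle's point is that scaling this table up, or replacing it with any program however sophisticated, adds more syntax but never semantics.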
In this detailed investigation into the Chinese Room, consciousness, and syntax vs semantics, we interview luminaries J. Mark Bishop and Francois Chollet and use unreleased footage from our interviews with David Chalmers, Joscha Bach and Karl Friston. We also cover material from Walid Saba and interview Alex Mattick from Yannic's Discord.
This is probably my favourite ever episode of MLST. I hope you enjoy it! With Keith Duggar.
Note that we are using clips from our unreleased interviews with David Chalmers and Joscha Bach -- we will release those shows properly in the coming weeks. We apologise for the delay in releasing our backlog; we have been busy building a startup company in the background.
TOC:
[00:00:00] Kick off
[00:00:46] Searle
[00:05:09] Bishop introduces CRA
[00:00:00] Stevan Harnad's take on CRA
[00:14:03] Francois Chollet dissects CRA
[00:34:16] Chalmers on consciousness
[00:36:27] Joscha Bach on consciousness
[00:42:01] Bishop introduction
[00:51:51] Karl Friston on consciousness
[00:55:19] Bishop on consciousness and comments on Chalmers
[01:21:37] Private language games (clip with Sam Roffey)
[01:27:27] Dr. Walid Saba on the Chinese Room (GOFAI/systematicity take)
[01:34:36] Bishop: on agency / teleology
[01:36:38] Bishop: back to CRA
[01:40:53] Noam Chomsky on mysteries
[01:45:56] Eric Curiel on math does not represent
[01:48:14] Alexander Mattick on syntax vs semantics
Thanks to: Mark MC on Discord for stimulating conversation, Alexander Mattick, Dr. Keith Duggar, Sam Roffey. Sam's KZbin channel is / @split-withsamroffey1431

Comments: 131
@MachineLearningStreetTalk (1 year ago)
References:
[Searle] Minds, Brains, and Programs web-archive.southampton.ac.uk/cogprints.org/7150/1/10.1.1.83.5248.pdf
[Bishop] Dancing with Pixies: Strong Artificial Intelligence and Panpsychism core.ac.uk/outputs/131208860
[Evan Thompson] Mind in Life evanthompson.me/mind-in-life/
[Tom Froese] Irruption Theory: A Realist Framework for the Efficacy of Conscious Agency psyarxiv.com/z2qma?trk=public_post_share-update_update-text
[Chollet] On the Measure of Intelligence arxiv.org/abs/1911.01547
[Saba] Understanding, the Chinese Room Argument, and Semantics medium.com/ontologik/understanding-the-chinese-room-argument-and-semantics-b5584a456274
[Chalmers] Subsymbolic Computation and the Chinese Room philpapers.org/rec/CHASCA-2
[Chalmers] Does a Rock Implement Every FSA? consc.net/papers/rock.html
[Chalmers] The Combination Problem for Panpsychism consc.net/papers/combination.pdf
[Chalmers] A Computational Foundation for the Study of Cognition [computational sufficiency] consc.net/papers/computation.html
[Chalmers] Why Fodor and Pylyshyn Were Wrong: The Simplest Refutation consc.net/papers/f-and-p.pdf
[Chalmers] The Hard Problem of Consciousness eclass.uoa.gr/modules/document/file.php/PHS360/chalmers%20The%20Hard%20Problem%20of%20consciousness.pdf
[Harnad, Stevan] What's Wrong and Right About Searle's Chinese Room Argument? web-archive.southampton.ac.uk/cogprints.org/4023/
[Tim Van Gelder] What Might Cognition Be, If Not Computation? e-l.unifi.it/pluginfile.php/914635/mod_folder/content/0/lezione%2014%20-%20vanGelder%20-%20What%20Might%20Cognition%20Be%2C%20If%20Not%20Computation.pdf?forcedownload=1
[Dennett/Hofstadter] The Mind's I (very critical of the CRA) en.wikipedia.org/wiki/The_Mind%27s_I
Stanford Encyclopedia of Philosophy entry on the Chinese Room plato.stanford.edu/entries/chinese-room/
Stanford Encyclopedia of Philosophy entry on functionalism plato.stanford.edu/entries/functionalism/
Language game (philosophy) en.wikipedia.org/wiki/Language_game_(philosophy)
It from Bit [does mind arise from information?] mindmatters.ai/2021/05/it-from-bit-what-did-john-archibald-wheeler-get-right-and-wrong/
Check our original interview with Bishop: Prof. J. Mark Bishop - Artificial Intelligence Is Stupid and Causal Reasoning Won't Fix It kzbin.info/www/bejne/m2KwZWSlqbqnhMk
@stevengill1736 (9 months ago)
😢111
@earleyelisha (1 year ago)
Finally!!! Was going through withdrawal!
@DelandaBaudLacanian (1 year ago)
Yannic Kilcher's channel helped me through the MLST hiatus!
@earleyelisha (1 year ago)
@@DelandaBaudLacanian Indeed YK is a good channel! MLST just serves such a complete meal of philosophy, art, and asmr to go with the AI/ML dishes.
@jasonabc (1 year ago)
Best AI/machine learning podcast out there, keep 'em coming!
@pascalbercker7487 (11 months ago)
It's quite nice to see all these old topics being revisited which I first learned in the 80s and 90s studying philosophy. Indeed Professor Searle visited our class when I was at grad school at CU Boulder (Colorado).
@paxdriver (1 year ago)
I was just watching old episodes yesterday and over the weekend feeling withdrawal lol great timing
@nozellot (7 months ago)
What an incredible video. Thank you!
@37kilocharlie (1 year ago)
Great guests. Thanks for hosting.
@pennyjohnston8526 (1 year ago)
Thank you MLST !
@oncedidactic (1 year ago)
Love hearing from Alex M! Great show, really well put together review of the topic, including dwelling-ons and dalliances. It’s funny how sifting and resifting the same sand keeps surfacing new shine, I guess that’s why it’s a deep topic. Definitely came away with insights and novel appreciation despite being over much of this ground countless times before. Thanks!
@dr.mikeybee (1 year ago)
Welcome back! It's great to see a new episode. I've been rewatching old ones. :) So, what is understanding? It's choosing the appropriate context around that which one wishes to understand. So in a question-answering system, there is a question and the context in the form of some text. A transformer with the appropriate head can generate an answer from that combination. If one is given a question without context, one needs to gather research material. So, for example, a program can do a Google search of the question. The top response, for example, can function as context. If a record of one's conversation is kept in a database, appropriate terms can be pulled out and included in the search criteria. Entity recognition can aid this process. In the end, the probability that we've gathered the correct context is about as good as it gets -- keeping in mind that people often get the context wrong. If one doesn't do research, perhaps a transformer can find the right context from its training corpus. BTW, I'm reminded of Eliza, the AIML-based computer psychologist. This program did not require understanding. There was no probability used. It fits the idea of the Chinese room. GPT-3 is a different story: that's understanding, but we should not anthropomorphize the idea of understanding. It doesn't help us. Our fast emotional messaging system is a lot more like Eliza than GPT-3. It's a hack we needed for survival. Understanding doesn't enter into it. When we reason, we search for context and produce a statistically likely answer just like GPT-3.
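[Editor's illustration] A toy sketch of the retrieve-then-answer loop this comment describes; the two-document corpus and the word-overlap heuristic are stand-ins for a real search API and a transformer QA head.

```python
# Sketch of the loop described above: pick the most "relevant" document
# as context, then answer from (question, context). The corpus and the
# overlap heuristic are placeholders for a real search API and QA model.

CORPUS = [
    "ELIZA was a 1960s pattern-matching chatbot with no world model.",
    "GPT-3 is a large transformer language model trained on web text.",
]

def retrieve(question: str) -> str:
    """Crude retrieval: choose the document sharing the most words."""
    q = set(question.lower().split())
    return max(CORPUS, key=lambda doc: len(q & set(doc.lower().split())))

def answer(question, context=None):
    if context is None:          # no context supplied: do the research step
        context = retrieve(question)
    # A real system would run a QA model over (question, context);
    # here we simply return the retrieved context as the "answer".
    return context

print(answer("What kind of program was ELIZA?"))
```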
@avallons8815 (1 year ago)
Understanding is, approximately, the expression of complex concepts in terms of concepts already learned, going back to fundamental "truths" acquired through direct observation.
@dr.mikeybee (1 year ago)
@@avallons8815 Yes, this makes sense. Often, understanding is a fuzzy operation. For example, one can get the central idea and miss nuances. So what I called "shared semantic meaning" is probably more like an intersection of two distinct contexts. Good point!
@artisttargeted6146 (1 year ago)
This is Gold... Thank You. ⚘️🍃🕊🍃⚘️
@arowindahouse (1 year ago)
What a surprise, thank you for your work
@kurosakishusuke1748 (1 year ago)
Featuring Chollet has been always awesome!
@vinca43 (1 year ago)
Wow! So many intellectual gems. I listen for a few minutes, pause and reflect, and repeat the process. Acknowledging that I follow a set of rules/routines to provide the structure for unfettered thought, did I evolve from my Chinese Room state of being, clawing my way out, so I could do a bit more than follow rules now and then? If I did, clearly I still return to the room, taking comfort in a structured system, when the vastness of the thought unknown overwhelms me. I realize I'm muddying the thought experiment. Does this make me more or less sentient? :)
@quebono100 (1 year ago)
Whoah you are back :) love you guys
@MachineLearningStreetTalk (1 year ago)
We love you back! Great to see you back here!
@RavenAmetr (1 year ago)
Interesting speakers, but Noam Chomsky nailed it: we're all wrong about everything. We just have concepts upon concepts, which are somehow mapped to the perceivable/comprehensible part of reality. There's no color, there's no energy, there's no motion, there's no space and time, there's no life, there's no existence, and there are no dual counterparts of these. Simulated digestion cannot drink real milk, and likewise - but who says that you're not drinking simulated milk, being a sim yourself? But there's no simulation, nothing emerges, a story cannot write or read itself, because there's no story - it's just stains of ink, pixels on a screen etc. designed to fool us. But who is this "us", and does the question itself make any sense?
@MachineLearningStreetTalk (1 year ago)
Beautifully put Alexey! Yes!
@luke2642 (1 year ago)
Another great episode, really enjoyed it! Around 1:14:00 Bishop is talking about deleting if statements from a program that are not fired by sensory input during an episode, when you know the input. He says this deletion can't change the phenomenological state, because they weren't fired anyway. This is wrong. Each state the program enters is a coordinate in a vast space of possible states. The space of all possible states has been reduced, the coordinate space is different, therefore the experience is different. The state only has meaning in its embedding space. You've changed the state because you've changed the space it is embedded in. Taking his example, the feel of leather and the feel of glass are embedded in an incredibly rich and complex space, that's why they feel so different. If you reduced it to 1 bit for leather 0 for glass in an episode, you've created a new coordinate space, without any subtlety, meaning or mapping to the richness of phenomenological experience.
@MachineLearningStreetTalk (1 year ago)
Hey Luke, thanks! And interesting take. It feels like you are viewing the problem through the lens of NNs, i.e. using language like "coordinate" (implying Euclidean space) and "embedding". I think this lens harms the point of the argument, i.e. NNs are still computer programs (they are finite state automata). The point Chalmers made would be true for any computer program regardless. Because we don't know a priori which execution paths will be run, they must all exist as potential paths, therefore the program must be seen as a static object (my argument). But Chalmers argues that when you snip the unused paths and re-execute the program, the phenomenal characteristic is lost. I agree with Mark that this does seem bizarre for a computationalist to say such a thing! By the way, I must admit I have not researched exactly what Chalmers said on this, so I am basing my comments on what Mark said in the show.
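[Editor's illustration] A toy version of the "snipping" move under discussion (ours, not Chalmers's formulation): for one fixed input episode, a program and a copy with its never-fired branches deleted produce identical execution traces, which is exactly why it seems strange to say the phenomenal character could differ between them.

```python
# Toy illustration of the branch-snipping argument: for one fixed input
# sequence, a program and its "pruned" copy (unused branches deleted)
# produce identical execution traces.

def full_program(stimulus: str) -> str:
    if stimulus == "leather":
        return "feels-smooth"
    elif stimulus == "glass":          # never fired by the episode below
        return "feels-cold"
    return "feels-unknown"

def pruned_program(stimulus: str) -> str:
    if stimulus == "leather":
        return "feels-smooth"
    return "feels-unknown"             # glass branch snipped out

episode = ["leather", "leather"]       # the sensory input of this episode
assert [full_program(s) for s in episode] == [pruned_program(s) for s in episode]
# Same trace; the dispute is whether the *experience* could still differ.
```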
@luke2642 (1 year ago)
Thanks for your thoughtful reply 🙂 You made me chuckle, unimpressed by Chomsky for saying we are "little more than Turing machines". Infinite, evolving, recursive computational machines capable of simulating any process? What more could we be?! Yes, the lens of NNs allows any complex distribution or system to map to a manifold in Euclidean space! I concede this has limits, like no geometric intuition for multi-modal distributions. It's a stretch to imagine branching logic too; like adversarial examples, they shoot you off to another part of the manifold? I'm also not sure it is helpful to think of programs as static, distinct from their input data, or from the deterministic chaos in their working memory that results from running them, like recursion. Right now, at this moment, the 10^20th digit of pi is undetermined, yet deterministic. As you mention at one point, I'm sure a big part of conscious experience is just awareness and reflection upon internal deterministic processes and their output. I think you "play down" neural networks as programs, FSAs. While infinite Turing tapes are impossible, recursion and continuous floating-point maths allow exponential storage in Hopfield networks, and it's only a small leap to computational complexity, and we know 10^14 synapses are enough for human intelligence. Maybe it's incompressible, but probably intelligence can exist on a lower-dimensional manifold? I realise I've drifted a long way from Chinese Room Arguments, syntax and semantics!
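[Editor's illustration] For readers who haven't met them, a minimal classical Hopfield network looks like the sketch below (Hebbian storage plus iterated sign updates). One hedge on the comment's claim: classical capacity is only about 0.14N patterns; the exponential-capacity results concern modern continuous Hopfield variants.

```python
import numpy as np

# Minimal classical Hopfield network: store binary (+1/-1) patterns with
# a Hebbian outer-product rule, then recall by iterating sign updates.

def train(patterns: np.ndarray) -> np.ndarray:
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)          # no self-connections
    return W / n

def recall(W: np.ndarray, probe: np.ndarray, steps: int = 10) -> np.ndarray:
    s = probe.copy()
    for _ in range(steps):            # synchronous updates, for brevity
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = train(patterns)
noisy = patterns[0].copy()
noisy[0] *= -1                        # flip one bit
print(recall(W, noisy))               # converges back to the stored pattern
```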
@manslaughterinc.9135 (7 months ago)
My problem with the Slug Robot Data Sniffer is that the same argument could be made for the human brain. The only difference here is that we lack the capacity to data-sniff the human brain and manipulate inputs. Were we able to do that with a human, it would be a de facto state change and Chalmers's argument would stand. The fact is, it is physically possible to control the electrochemical signals of the brain; we just don't have a sufficiently advanced technology to do so at the level of precision needed to accomplish the same effect that was proposed for the Slug Robot Data Sniffer. Does the barrier of our own understanding and control of the fine-grained detail of how the human brain operates mean that physicalism beats computational functionalism? I think that can only be true if you do not believe in free will.
@aBigBadWolf (1 year ago)
I need to push back on Bishop's claims in the section that starts around 1:36:40, because there's such an obvious counter-argument that I'm surprised he doesn't address it. Yes, of course, simulating the properties of gold or digestion differs from the real thing. But information is valuable for its own sake, independent of its physical manifestation. I cannot eat information, but I can nevertheless consume it. A speech containing an idea is really only valuable for its information content, not its physical manifestation; it doesn't matter whether it is represented in audio waves, pages of paper, bits, or neural activity. Understanding Chinese is likewise independent of its physical manifestation. This is the only conclusion, because there are many people who understand Chinese but do not share the same physical implementation. Thus, simulating a human brain that is capable of understanding Chinese is as good as the exact same real brain that we chose to simulate perfectly.
@dr.mikeybee (1 year ago)
Yes, Joscha just makes sense. I can't argue with anything he says.
@MachineLearningStreetTalk (1 year ago)
He's a legend ;)
@coopersnyder4675 (1 year ago)
It really feels like all of this information presented is just pointers to data points that, if connected, would show a grid of the "idea object", while never really getting at the object itself, because everything is just a metaphor. It's also really weird hearing intelligent beings discuss intelligence while demonstrating the very things they are talking about in order to successfully talk about those things. Very recursive and very fun.
@Achrononmaster (1 year ago)
@1:14:00 this is glossed over too quickly; it is an important point. I'm taking non-physical dualism (of some variety) to be coherent, for the sake of argument. Then "cutting out an _if_ clause" is not what removes, or conversely adds, qualia. Consciousness is the causal agent, which is always unexplained by physics (causality is a pure metaphysical concept; physics is the following of laws from initial/boundary data). It is the "soul's" causal power to act as if there were an _if_ clause in its brain that manifests conscious thought; it is not the other way around. Brains cannot "emerge" or exude qualia. Qualia are inherent to the non-physical (let's say, until new physics is discovered) entity that introduces causation into the physical world, and that can only happen (if science is coherent) at the boundaries. I am assuming non-physical things cannot mess up any physics within a spacetime cobordism (the postulate that _physics makes sense_ and is totalizing for elementary particle interactions; no miracles).
@Niels1234321 (11 months ago)
At 28:00 Chollet talks about the Chinese room being unable to learn German, but I think that is just an assumption about the rule book, and you can imagine a different one. After all, we know that programs running on CPUs can learn to speak German, and Chinese people can learn to speak German.
@margrietoregan828 (1 year ago)
1:03:32 An older heritage than many is Gibson's ecological approach to vision, which talks about affordances and suggests that the way we enact with the world is to do with the way that objects afford certain actions.
@margrietoregan828 (1 year ago)
1:19:38 ...with top-down and bottom-up motives, and for many, many years I've yet to see that cashed out. It's one of those philosophical hand-waving solutions that I've seen talked about, but I've yet to see it demonstrated in any computational system. People have talked about having these computational systems where lower-level properties group together and then we have this high-level thing that's akin to human consciousness that effectively exerts control over what's going on at a lower level, and effectively then gives agency to that top-level conscious action. I haven't seen such systems exist. In fact, one of my critiques of computationalism is based around the notion of teleology: how goal-oriented behaviour exists. Because I see no way that it can, computationally; the goal direction is all engineered in by the people who design these systems. I see no way of systems having any teleology of their own. And this is the elephant in the room that no one talks about, one of the many elephants in the room with AI that people don't talk about: where does agency come in? Because I don't see any computational explanation for agency at all.
@dr.mikeybee (1 year ago)
I like Professor Friston's idea that feelings are the result of autoencoding knowledge. Ultimately that means compression of experiences and reactions. I've said repeatedly that these compressed messaging systems are critical for survival in biological species but have little or no value in silicon. In fact, these are the sub-systems that tend to go wrong.
@Achrononmaster (1 year ago)
@1:33:00 context is orthogonal to the mind-brain problem. Of course I need context to comprehend the intended meaning, especially jokes and slurs and sarcasm and whatnot. But without the context a qualia-filled mind still has an understanding - maybe the wrong one, but one nonetheless - because it has past symbolic qualia + memory + a wrong frame (but still a frame) and so forth. So Saba is talking about the AR (automated reasoning) program, not the understanding-consciousness program. Searle's CRA was talking about the consciousness puzzle.
@margrietoregan828 (1 year ago)
1:01:40 The enactive approach is an approach that tries to explain phenomenal sensation: why you see the colour red when we see red and not blue, why I feel a particular sensation as I drag my fingertips over this leather and not the distinct sensation of moving my fingers over glass, for example. For the enactivists it's to do with the particular sensorimotor contingencies that are enacted when I perform both of those operations, which we learn over time; the enactment of that gesture brings forth the sensation that I'm feeling. The most famous enactivists, I guess, were historically people like Francisco Varela and Evan Thompson, whom we mentioned already, and people now like Ezequiel Di Paolo and Tom Froese, and they've attempted to engage seriously and give an explanation of how phenomenal consciousness can come about. The sensorimotor enactivists - and here I'm thinking of Alva Noë and Kevin O'Regan from the University of Paris - espouse a particular type of enactivism that, for example, says that the character of a phenomenal experience is brought about by our very engagement with the world, as opposed to autopoietic enactivists like Evan Thompson, who attempt to answer questions about how phenomenality is realized by autonomous autopoietic entities as they engage with their environment.
@michaelwangCH (1 year ago)
We have to further simplify the ML and stats methods so that we can generalize - GFlow is generic and adaptive, with no assumptions needed.
@ludviglidstrom6924 (1 year ago)
Amazing channel
@Isinlor (1 year ago)
Simulation of computation is just computation. Computation is not like rain or fire.
@sandropollastrini2707 (1 year ago)
If we say "simulated fire doesn't burn as real fire" then we can also say "real fire doesn't burn as simulated fire". That is, you can see real fire as a not-perfect simulation to virtual fire. But the level of resemblance between the burning of real and virtual fire depends entirely on the definition of "burn" used: with specific abstractions the two burning processes can be considered equal. They will be considered two different representations of the abstract concept of "burning" that has been conceived. Exactly as two books of the same novel are two different representations of the same novel. They will never be equal.
@sandropollastrini2707 (1 year ago)
Said differently: in front of two distinct real fires, how can we say that the two fires are undergoing the same process of combustion? Comparing the two fires, the details will always be different. But we usually think of them as undergoing the same process because we use abstraction and remove tons of details before the comparison. Hence the same argument should be applied when comparing real and simulated fire.
@sandropollastrini2707 (1 year ago)
Hence we can't use the argument "simulated X is not doing Y like real Z" to imply that "X is not doing Y at all". E.g., "a simulated brain is not thinking like a real brain" doesn't imply that "a simulated brain doesn't think".
@willd1mindmind639 (1 year ago)
The key distinction between biological systems and man-made systems is that biological systems are self-organizing. Self-organization in biology is based on the capacity of each cell to function independently as a functional unit and to collaborate with other cells to make an organism. And cells all operate on the fundamental process of molecular biochemical signal activation and processing, which is totally dynamic because of its self-organizing nature. This means that each biochemical "signal" processed by a cell is discrete and easily distinguished from other signals, as is fundamental to cellular processing. Which means that all the ideas, sensations and physical stimuli experienced by an organism are inherently distinct, due to having discrete biochemical signatures. That results in living organisms that change over time as a result of the biochemical feedback loop, which produces dynamic changes to the cellular makeup of the organism over time, such as how neurons change in the brain. Whereas anything man-made is a fixed-function system based on a set of pre-defined behaviors or instructions that are not dynamic and do not change over the course of its existence. So a microchip is not going to dynamically change the pattern of circuits and operations available on it according to what kinds of data and functions are used on it.
@MachineLearningStreetTalk (1 year ago)
Hello! I agree with your initial assertions, although I couldn't follow your argument for why this means, in your opinion, that the process couldn't be simulated in silicon -- could you expand?
@willd1mindmind639 (1 year ago)
@@MachineLearningStreetTalk Hello MLST! Biology is self-organizing, so each cell has its own set of biochemical activation patterns based on its genetic blueprint. Every piece of information in the brain is encoded biochemically, but each element is distinct because of the discrete nature of the biochemical signal/pattern. And because all of these molecules are processed by cells, they actually become part of the neurons and change the pathways connecting them over time. But fundamentally, because we are talking about molecules, there is a much larger "space" for encoding discreteness via molecular combinations than what you get with binary. So there are two problems: one is that the substance used in the actual chips is not bio-organic, and the other is that the encoding of information is based on a fixed system that does not produce discreteness. Discreteness here means you cannot distinguish the elements encoded in two binary sequences just by looking, as opposed to, say, Morse code, which is based on a discrete sequence of patterns. To get any value out of binary messaging you have to compute the values using the fixed instruction set and translation (compute), which imposes a cost, because binary itself is a fixed number system. Most AI algorithms work around this by just using math, which means everything becomes matrix math, which is not discrete because the elements themselves are being modeled mathematically with numbers at the aggregate level, not at the element level in the base encoding. So yes, you can kind of have a set of registers and circuits that do matrix math in no prescribed order, but it is still going to be a fixed set of matrix-math opcodes and binary registers of fixed-length numbers (float, int, BCD, whatever). Which does not impose discreteness at the hardware level in the encoding of novel elements beyond numbers, without cost, without computation, without human intervention. So it is not self-organizing. All of that being a long-winded way of saying that a computer in its current form can never be more than a calculator. It is a device that augments human intelligence and cognition, but it is not self-aware, sentient or conscious, because it lacks the self-organizing biological elements that make those things possible.
@dr.mikeybee (1 year ago)
I don't believe that "understanding" is the same thing as "getting it right." One can have a "perfect understanding" and still be wrong -- as in "I know what you are saying, and I form this opinion." And the opinion just happens to be wrong. Functionally, understanding is a shared semantic mapping between two or more participants, i.e., context. What one does with that mapping is external to the process of understanding. This is to say again that we need to limit our definitions to the simplest definitions possible. It's the only way to progress as engineers. Adding to knowledge by problem-solving and "getting it right" needs its own term.
@PetraKann (1 year ago)
Leaving $12 for a $10 meal doesn't necessarily imply that the meal was enjoyed. It could mean that the person didn't enjoy the meal but felt obliged to leave a tip, or left a $2 tip for the service only. Perhaps the person thought that the meal cost $12, paid for the meal, and rushed out in disgust at how horrible the meal tasted. Perhaps the person always left a $2 tip irrespective of the service given or the quality of the food served. The list of scenarios is long. The point is, when a simplification or assumption is made with respect to a problem or phenomenon, the boundaries and initial conditions limit and structure the possible set of solutions available in any subsequent analysis. This is a common toxic bubble that encloses many challenges found in philosophy, science, mathematics, spiritualism etc.
@margrietoregan828 (1 year ago)
50:40 ...is more than the sum of its parts, with the intentionality originating at the macroscopic level, kind of top-down. So Bishop, like Putnam before him, will always counter with the argument that he could reproduce the computation in a trivial physical substrate, like a car's milometer, and it would be bereft of any agency. One interesting feature of Bishop's school of thought is that it still has access to continuous variables; the it-from-bit or information versions espoused by Chalmers and Chollet are limited to the digital world. Now, I spoke with Keith about this, and he leans more towards continua being real and important, but he disagrees with Searle and Bishop. He said that Professor Karl Friston is right: suppose that we did have a machine which could calculate in the same category of hypercomputation, with continuous variables, on which people could exist and experience feelings and subjective qualia as states in that model; if we had such a continuous computer, Keith said, then we could reproduce everything about human consciousness. This discussion all boils down to whether a mind can arise from information rather than being a property of physical matter. This is Professor Karl Friston: "I think what you're saying, if you are a computationalist, is that if we could recreate all of the stuff that you're just talking about, so forward reasoning and planning etc., then we could recreate consciousness in silicon." "Yes, yes, I see no reason why you couldn't create an artifact that was sufficiently similar."
@CandidDate (1 year ago)
What are you on about, dear?
@sk8l8now (1 year ago)
Can someone explain the following to me? In the second argument against the CR, the claim is made that the CR could not learn German and is therefore not intelligent in that regard. If the previous argument rested on the idea that the component parts of the Room (the man and the scripts), or any system, do not contain knowledge themselves, and that knowledge is an emergent phenomenon, then (1) what is to say that the room itself is not intelligent, as defined by these emergent phenomena through the perception of some observer? Are the perceived properties of the CR (made by some outside observer) not viewed as some particular type of intelligence? And (2) to say that the CR is not intelligent because it cannot deal with German would be the same as saying we as humans are unintelligent because there are phenomena we cannot perceive as humans, or because the component systems in our brain cannot process such phenomena in ways that would be deemed intelligent by some arbitrary metric. Would this not then be a question of degrees, or sets of intelligences, as opposed to some binary of intelligence? More generally, if we are to assume an instrumentalist approach to intelligence (all categories of perception are not real, echoing the speaker's claim about stories being what we perceive them to be), why are we assuming some special difference between our intelligence and the Chinese Room? Our belief in our intelligence is a belief of the mind. If intelligence is an emergent phenomenon, how could we ever differentiate between (1) the CR and some outside observer's belief about the Room (based on emergent phenomena) and (2) our brain and our mind's perception of ourselves as observers of these systems? Are we, as minds of a brain, not deriving beliefs similarly to how an outside observer would perceive the processes of the CR, and consequently how we perceive our own knowledge and intelligence? As always, best Philosophy/Engineering/Cognition program out there. Keep it up!
@MachineLearningStreetTalk (1 year ago)
Hello Muhammad, and thanks for the comment! I wanted to clean up a couple of things in your first paragraph. Searle's main argument was that computationalism is insufficient for *understanding*. I think bringing in intelligence, and linking this argument to the Turing test, is a red herring. Searle was basically attacking the main sacred cow in AI, and this is the biggest sacred cow of them all: that computationalism will work. Almost everyone working in AI believes in some sense that information can give rise to minds (look up "it from bit" from Wheeler; Wolfram talks about it a lot in his digital physics ideas), that we could recreate a mind in a computer, that we could upload ourselves into the matrix in the future, that we might already be in a computer simulation, or that physical reality might be real but we could still map it to a digital substrate in the future. This is the foundation-stone belief, which leads AI alignment people to then talk about orthogonality, instrumental convergence, the value of future simulated lives etc. Searle is simply saying that there is something about physical reality that cannot be simulated, and if you do simulate it, you lose something substantial (in this case, he mostly meant "you lose part of the understanding"). So if you agree with Searle, this ends the discussion. But... if you are a computationalist and disagree with Searle, as do Chollet, Chalmers, Friston, pretty much everyone -- then it's interesting to talk about the emergentism in the digital substrate, i.e. the "intelligence" would indeed be emergent (as it is in the real physical substrate too), as Chollet points out when discussing the "systems reply". "Understanding" just means a successful semantic mapping (forming an interpretation, the intended interpretation), and unlike what Chollet says, this can be previously crystallised (as is the case with the infinite rule book in the room), although it would require intelligence to create understanding fluidly in a real situation (we don't have a rule book or a hashtable; we have fragments of knowledge which need to be reasoned over intelligently to create the understanding in a new situation). There are indeed degrees of intelligence (but not of understanding: you either understood it or you didn't), although this depends on so many things (we are making a show on intelligence soon with Pei Wang, so watch out for that). The main reason why the room is not intelligent (not that this was a point Searle even wanted to make) is that the information was already crystallised in the rule book, but as Chollet said, it would have taken a real intelligence at one time to create the rule book.
@oncedidactic (1 year ago)
@@MachineLearningStreetTalk nice exchange and summary. I'm curious about degrees of understanding: it seems very arbitrary to insist understanding is binary. Thoughts? Did this come up in discussion on Discord at all?
@optimusprimevil1646 (9 months ago)
We have a Chinese room in our heads: we have no idea where our intelligence comes from. When I first heard Chomsky dismissing large language models I thought he was out of touch, but once again it looks like he was right all along, that syntax is more important than consciousness when it comes to intelligence.
@victorv.senkevich1127 (1 year ago)
👉 Consciousness is perception with understanding Quotes: "• There is no other way to determine that some object has consciousness other than our subjective perception. It doesn’t matter how the Chinese room produces answers. The only important thing is whether we are ready to qualify these answers as conscious. If you do not speak Chinese, you will not be able to qualify your counterpart as having consciousness, despite all his/her attempts to explain it to you in Chinese. Because consciousness is perception with understanding and consciousness is subjective. • Of course, I have consciousness regardless of someone else’s perception. But this is true only for myself, not for others. And as much as I am ready to perceive myself. And it will be true for others only when they can perceive it. Because consciousness is subjective." See also on Medium - simple approach to the hard problem: «Consciousness Is Subjective» «The “Hard Problem of Consciousness” Is Being Solved»
@margrietoregan828 (1 year ago)
55:49 Does this autonomy give rise to conscious experience, or does conscious experience give rise to autonomy? They're different facets of the same thing. The autopoietic enactivists, championed by people like Evan Thompson, talk about the parable of the single cell - the amoeba, if you like - and they describe it in autopoietic terms as a system which maintains its own boundary conditions, maintaining the processes that keep itself alive, precariously, so that at any one moment the system might cease to exist. And it's this precariousness of living, so the theory goes, that gives rise to proto-phenomenality and the sense of normativity that the system will have. Because it matters to the amoeba: if it's in a sugar solution, then it moves up the sugar gradient and not down it; if it does the latter it will run out of energy and die, and if it goes up the gradient it will continue to live. And so these simple facts of interaction with the environment, the autopoietic enactivist story goes, are what give rise to proto-phenomenality and a sense of normativity of engagement with the environment.
@dr.mikeybee (1 year ago)
"Consciousness" is another suitcase word. It can be defined in a million ways. My dictionary says consciousness is the ability to sense a phenomenon coupled with subsequent action. If a brick is heated until it cracks, that is sensing the phenomenon of heat coupled with the action of cracking. In that way of thinking, the entire physical world is conscious, because all matter reacts to changes in physical state. If you then say, "I can't accept that the whole world is conscious," you must mean conscious in some other sense with more attributes thrown into the suitcase. I say take the simplest meaning and find new signs and signifiers for the suitcase with the additional attributes. It's lazy not to do that.
@dr.mikeybee (1 year ago)
Computers definitely create their own abstractions. What else are the layers of a neural net doing?
@dr.mikeybee (1 year ago)
If I store the context and give an answer, you can ask me if I understand. As a reply, I can give you back the context.
@dr.mikeybee (1 year ago)
Tim Scarfe, would you say that intensional structure and context are synonyms here?
@MachineLearningStreetTalk (1 year ago)
We touch on this at 1:29:37 and also in the Gary Marcus / Luis Lamb show intro. IntenSional structure would be, e.g., 3*4+6, and the extenSion would be 18. Walid argued in medium.com/ontologik/understanding-the-chinese-room-argument-and-semantics-b5584a456274 that we must maintain the intenSional structure (which is the decomposition of an infinite world model) so that we can recompose it in a semantic mapping (interpretation) [what we refer to as "compositionality / systematicity"]. The "context" comes in because we still have further work to do to ascertain meaning after computing the first semantic mapping; the context is everything which is not directly deducible from the utterance, previously spoken utterances, your world knowledge etc. -- it might be "where am I now", "what is the person wearing". See cdn.discordapp.com/attachments/839907791808495626/1039121842436313088/unknown.png
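[Editor's illustration] A toy way to see the distinction (ours, not Walid's code): keep 3*4+6 as a structured expression (the intension) rather than collapsing it to 18 (the extension); once collapsed, the structure needed for recomposition is gone.

```python
# Toy intension vs extension: keep the expression's structure (intension)
# rather than collapsing it to its value (extension).

from dataclasses import dataclass

@dataclass
class Node:
    op: str                    # "+", "*", or "lit"
    args: tuple = ()
    value: int = 0

def lit(v): return Node("lit", value=v)
def add(a, b): return Node("+", (a, b))
def mul(a, b): return Node("*", (a, b))

def evaluate(n: Node) -> int:   # collapsing intension -> extension
    if n.op == "lit": return n.value
    a, b = (evaluate(x) for x in n.args)
    return a + b if n.op == "+" else a * b

expr = add(mul(lit(3), lit(4)), lit(6))   # intension: 3*4+6, structure intact
print(evaluate(expr))                     # extension: 18, structure lost
```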
@dr.mikeybee (1 year ago)
@@MachineLearningStreetTalk I see. I'm on the right track but incomplete. That's nothing new for me. If I had it completely, I'd have AGI. ;) Thanks for the lessons. It helps a lot to have direction.
@margrietoregan828 (1 year ago)
58:56 Unlike many of my colleagues, I partition off Chalmers's extended theories of mind away from the embodied, the enactive, the ecological. As for the top-down state of cognitive science, presumably there are representationalists; it's important to clarify that these alternative approaches to cognitive science are still probably minority views. If you took a sample of all the people who classify themselves as cognitive scientists, the majority would probably still relate quite strongly to good old-fashioned AI and good old-fashioned representational approaches to cognition and cognitive science. But the four Es present an alternative research programme that does have its own journals and scientists working in the area, and there is a degree of momentum behind it, but it's still, I would say, probably a minority view across all cognitive scientists. It's just one I've been obliged to take seriously, because I think there are good a priori reasons for having scepticism about any accounts that rely on computation. So what are these four-E views? Well, the embodied view, which began really with the work of Rodney Brooks and Francisco Varela, says we need to take the role of the body seriously, and it looked at how systems can interact without representation by using the world, famously in Rodney Brooks's case, as its own best representation. So we began to see the importance of not looking at the mind as somehow disembodied, which is what functionalists have always...
@vinca43 (1 year ago)
Given the current state of explainable AI and Chollet's impressive but budding program on measuring intelligence, perhaps our attempts to determine if AI is sentient are the composition of the Chinese room with Plato's cave, where Plato's shadows represent the ambiguity of interpreting the output of the room, and the input that we provide the system. Think about that poor AI, having to cope with our obtuse questions. :) Am I sentient? Or just a biological bot? Turned on and responding to sensor input? Wish I knew.
@MachineLearningStreetTalk (1 year ago)
Christopher, I hadn't thought of it like that before -- that's really interesting! For folks interested: Plato's cave is a metaphor for the way in which humans experience the world. We are like prisoners in a cave, seeing only shadows on the wall, when in reality there is a much richer and more complex world beyond our limited experience. This is a great way to think about how far we can reach when discussing phenomenality.
@vinca43 (1 year ago)
@@MachineLearningStreetTalk thanks! This episode has inspired a lot of out-of-box thinking from a shackled prisoner.
@Chr0nalis (1 year ago)
Just a tiny note, Hutter mentions that AIXI is pronounced as ah-ee-ksee.
@MachineLearningStreetTalk (1 year ago)
Noted!! 🤠
@margrietoregan828 (1 year ago)
[Motion] was called the hard problem - in fact the same terms were used; it was the hard rock for philosophy, a really different kind of mystery. And the reason is that, as Locke said, motion has properties that we cannot conceive of - which is correct - so we've given up the hope of conceiving of them and just, you know, developed theories of motion. And Hume, for example, described Newton's greatest contribution as demonstrating that there are "mysteries" - his term - which the human mind will never penetrate, and he meant the mysteries of motion. Those mysteries were never solved; they were abandoned, as science just took a less ambitious course, aiming for intelligible theories rather than comprehending mysteries which by their very nature we can't comprehend.
@margrietoregan828 (1 year ago)
49:43 So how do Bishop's views compare to people like Francois Chollet and David Chalmers? Well, they're both emergentists: Bishop from a biological substrate, and Chollet and Chalmers from information itself and the causal interactions between its constituents. The overarching question is really where agency, free will, intentionality, teleology, goal-directed behaviour - where does all of this stuff originate? Because we have a bit of a chicken-and-egg problem. For Bishop it's bottom-up: he thinks that life emerges from the physical properties present at the microscopic level and their behaviours, which is to say autopoiesis and teleology, through the complex interactions between the microscopic elements of that system. Chalmers and Chollet think that it's top-down, that information gives rise to minds, Chalmers even going one step further and explicitly calling it strongly emergent, which is to say the phenomenon is more than the sum of its parts, with the intentionality originating at the macroscopic level, kind of top-down.
@margrietoregan828 (1 year ago)
41:00 Joscha Bach refers to movies as a simulacrum. [1:47:35] There's a huge infinite world of mathematical descriptions, and then there's all this stuff over here in reality; it's not clear that the ontological truth is representable by mathematics. Mathematics describes much more than what is real - maybe not everything that is real in our universe. There's the set of all possible true behaviors...
@XOPOIIIO (1 year ago)
Consciousness is the informational process of how neurons communicate. When you think about something, your thoughts flow to connected thoughts, because they are connected in your network. Basically, the activation of a certain neuron is an act of consciousness, and then it activates the next neuron; that is how your consciousness flows to the next neurons. But there is no flowing of anything, of course; every activation is just a separate act of consciousness. It is perceived as flowing because the neurons are connected, and the activation of the entire chain can be approximated as a flow of consciousness. What is qualia, then? It's just a minimal piece of conscious experience that can't be divided any further, and as such can't be explained. Why is that? Because a quale doesn't have any prerequisite neurons in the chain. Color, for example, begins with the retina cells that are activated by a light beam, so you can't track the chain further back than where it begins, and so the experience is perceived as unconditional.
@richardbrucebaxter (1 year ago)
31:40 - there is no "the meaning" of any given data; computers/info-processing machines can determine/identify semantics (including categories etc.) by forming the best (most predictive) model of reality, with respect to their goals etc. 58:15 - "the enacted, the embodied, the ecological, the embedded" approaches to cog sci do not have an obvious relevance to phenomenological consciousness, given the fact that an agent acts based on its model of the world/body/mind independent of their actual existence. 1:19:00 - Strong emergence suggests there are properties of systems that emerge which are not reducible to their constituent parts (Strong and Weak Emergence, Chalmers 2006; "we can think of strongly emergent phenomena as being systematically determined by low-level facts without being deducible from those facts"). This is distinct from the specific claim/attribution of independent causal agency to strongly emergent phenomena (e.g. Conflicting Emergence, Turkheimer et al. 2019). Under physicalism, any "top down" (macro) process is typically considered overdetermined by its lower-level interactions (e.g. Jaegwon Kim). Some have suggested that if the lower-level construct is intrinsically indeterministic, however, then this leaves open such a possibility (e.g. Four Views on Free Will, Kane 2007). This does not answer the question of origination, however: why would a conscious agent take action A and not action B (what causes such a decision if not the physical construct or probability itself)? Moreover, why would it need to, given that logic is a deterministic process (and can thus be modelled in a physical system)?
@PJRiter1 (1 year ago)
It has to be both top-down and bottom-up. The complexity aligns and coaxes the consciousness out of the subatomic particles and quantum superpositions. What is emergent is the coordination of the atoms of consciousness. Let's get away from the either/or philosophy, like something must be either a wave or a particle; sometimes it is both, and neither, but something emergent that we seek to explain in our antiquated language... watch the bots create a new language.
@MachineLearningStreetTalk (1 year ago)
Reddit thread here www.reddit.com/r/MachineLearning/comments/yq06d5/d_what_does_it_mean_for_an_ai_to_understand/?
@dr.mikeybee (1 year ago)
What I call an agent, Francois calls an information-processing system. That's the whole ball of wax, so to speak. Intelligence and knowledge are often synonymous, but not always. A knowledge base or a model contains intelligence, but it is not intelligent. A system is intelligent when it correctly accesses that intelligence. For that, we need an intelligent agent, and that agent is only as intelligent as the models and knowledge base it accesses. Moreover, an intelligent agent needs access to personal history as context, working memory, sensors, etc. So in essence, Francois is correct in saying that the information-processing system is what can be called intelligent.
@i_forget (1 year ago)
Can you put a "room within a room"?
@dr.mikeybee (1 year ago)
Few-shot learning is an example of a model that can learn because it is taking in a new context and producing a different result. I don't think any active learning or transfer learning needs to occur for a system to be deemed intelligent.
@Achrononmaster (1 year ago)
@19:00 the "You" or "I" cannot be an imagined system. What is doing that imagining? Chollet is guilty of anthropomorphising himself, and he doesn't know it. There is no "pattern recognition" in any known AI, the recognition is all done by the human running the algorithm. We recognise the AI has solved the image classification problem _for us._ The AI does not recognise it has solved anything. The key is "for us". We are the knower, not the AI, at least for all current generations of AI.
@carolspencer6915 (1 year ago)
💜
@smkh2890 (1 year ago)
This series reminds me of the ancient Bryan Magee interviews, available on YT. In fact, Searle is one of the field experts he chats with! Here, on Wittgenstein: kzbin.info/www/bejne/nWOth4CFoNR3pZo
@ChadLieberman1 (9 months ago)
Too much background music. 😊
@Achrononmaster (1 year ago)
@1:20:00 awesome, I found a kindred spirit in Mark Bishop, thanks MLST. However, this Wittgenstein argument seems a bit bogus. IF one has memory, then a private language is fine. In a sense this is what qualia do for us: the mental qualia are available, and so we can use them for a private language, especially if Chomskyan innateness is also a thing. But the more general point about the real power of language is well taken: you do not get real serious power until your actions can influence others, so that as a collective you can accomplish a whole lot more. So I am saying we do not need other people to exist in order to experience mental qualia; we only need our mother to have existed (in the past).
@Achrononmaster (1 year ago)
I think this is similar to the framing or the binding problem; I might call it the Grounding Problem, which Wittgenstein poses. I am not sure it cannot be solved by framing or binding, but suppose it can't. Then the issue is how we ground our semantics so that it is not wildly fluctuating and remains coherent. Well, firstly, it does not need to be all that coherent: as we know, human languages evolve; I can hardly make head nor tail of what Chaucer was narrating. The coherence has to be over short time scales, where it matters for survival and whatnot. But I think short-term and medium-term memory allows this. _A conscious soul cannot manifest qualia-filled thought effects in a world without being bound to a system with memory_ --- is the way I would rephrase Wittgenstein's remark. And a conscious soul that cannot manifest effective actions in a world does not really exist, in *_that_* world. I would just say *does not exist* in that world.
@polymathpark (1 year ago)
What do you think has changed about what this video portrays, now that GPT-4 is doing what it's doing?
@MachineLearningStreetTalk (1 year ago)
Nothing
@polymathpark (1 year ago)
@@MachineLearningStreetTalk fair enough. I'd like to try and get some of these guys together on a podcast to debate their ideas, I wonder if Searle would be down. I know Bach and Francois would be...
@Achrononmaster (1 year ago)
@25:00 I can't speak for J. Searle, but isn't Francois being a bit harsh here? If the "man" memorizes the book, and the dude is a human being, he is not going to memorize it like a humongous look-up table, is he? He is going to use imprecise heuristics and innate comprehension, and whatnot, i.e., he uses his "soul" (whatever this is, we know not what - that part of his being that can access the platonic realm, so to speak) to understand Mandarin, so that he "effectively memorizes" the book, only incredibly efficiently and imperfectly. This is what I do with Duolingo. It's slower than going to school, but it is how I "fake memorize" the Chinese Room book. So I think Searle's argument is closer to what Francois is getting at; maybe I misread Searle, it's been a while since I read his papers. It is only in a fantasy gedankenexperiment that the "man" actually just blindly stores the book neurologically - and so indeed has no understanding of Mandarin. I am pretty sure Searle never meant that to be what he said by "memorizes". If he did, then he's not a very good professor.
@Achrononmaster (1 year ago)
@31:00 Tim puts what I was thinking better than I did. Thanks Tim.
@Achrononmaster (1 year ago)
@3:30 not a good definition of "knowledge", because fake knowledge is also a thing, and because truth is undefinable - from Tarski it can only be rigorously defined relative to axioms (and axioms are always unproven). The better classification is that knowledge connotes a subjective knower of qualia, whether a "true" or "false" interpretation of the data, and is thus a (probably) non-physical, non-mathematical state of being, whereas data or information does not require a subject (which we know because, since Shannon, we've defined information mathematically).
@Achrononmaster (1 year ago)
Same problem with Mark Bishop @5:15 - Searle was never trying to "prove the truth of" anything with the Chinese Room gedankenexperiment, or if he was he was delusional. The Chinese Room is a moral or spiritual argument, not a logical argument, no matter how much you want it to be otherwise. But that is ok. There is nothing wrong with spiritual arguments, if you know them as such. They are more powerful than logic, but less precise, and have no computable truth value, since they are not based on axiomatic systems. But if they conclude something precise then that will have a truth value, you just cannot compute what it is.
@dr.mikeybee A year ago
The only loss function any system needs is parsimony.
@dr.mikeybee A year ago
BTW, I don't mean that every model should be trained with parsimony as its loss function, but this is what nature uses. So in some ultimate sense, this is what we use to optimize systems. This is why Occam's Razor makes practical sense.
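A minimal sketch of how "parsimony as the loss" might cash out in practice - an ordinary fit term plus a penalty on model size, which is the usual operational form of Occam's Razor. The toy polynomial problem and the lambda weight here are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy straight line. A parsimonious learner should pick degree 1.
x = np.linspace(-1, 1, 40)
y = 2.0 * x + 0.1 * rng.standard_normal(40)

def penalized_loss(degree, lam=0.05):
    """Fit error plus a parsimony penalty proportional to model size."""
    coeffs = np.polyfit(x, y, degree)
    fit = np.mean((y - np.polyval(coeffs, x)) ** 2)  # how well the model explains the data
    complexity = lam * (degree + 1)                  # crude stand-in for description length
    return fit + complexity

best_degree = min(range(1, 10), key=penalized_loss)
print("degree chosen under parsimony:", best_degree)  # 1: extra coefficients don't pay their way
```

Without the complexity term, the highest degree would always win on fit alone; the penalty is what makes the simpler explanation the optimum.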
@darabat207 A year ago
I've known the Chinese Room argument since 2007, and ever since, the full extent of the conclusion it seemingly tries to draw has felt troublesome to me: a kind of appeal to "computers will never do this because humans are unbeatably special". In principle there's a great point to looking inside the room, but in practice we don't go around looking inside people or their true insights, so to speak; it becomes a "problem of other minds", and that goes a bit against Occam's Razor for this case. Moreover, I usually miss the exact point where it goes "beyond computers". For Roger Penrose it's quantum effects, but what about everyone else with that kind of "sacred mind" tendency, other than saying it would be too expensive to do? Maybe my view will change once I fully hear the podcast; I just had to let this out.
@smkh2890 A year ago
"Computers will never do this because humans are unbeatably special." Strangely, I get the opposite: that 'understanding' is irrelevant to processing rule-based actions.
@darabat207 A year ago
@@smkh2890 Going down that path, what would be a non-rule-based action?
@smkh2890 A year ago
@@darabat207 Singing? Unless knowledge of solfeggio is the rule book! I reckon a lot of 'work' is performed without needing understanding, just coordinated movement - particularly the repetitive work that can equally well be performed by robots.
@hunterhaller750 A year ago
Think: one objective reality, billions of subjective interpretations.
@hunterhaller750 A year ago
No matter how hard a subject studies an object, it can't be said to become the object, except in a philosophical or religious sense.
@alexoid951 A year ago
Please, NO background music.
@Achrononmaster A year ago
@2:05:00 I followed your guest up to this point, but then he gets confused. Semantics is about meaning, not ontology, so either he misunderstood Tim's question or he is too much of a materialist nerd. Epistemology is to syntax as ontology is to semantics - but bro', this is only a poetic analogy. A worm in my gut is semantically the same as a digital worm in a simulation in a computer game, but they do not have the same ontology, and that matters as soon as you expand your semantics a day or two out: the former can kill me, for good; the latter kills me in a game, and I can grab my spare life and continue. AI nerds often do this: argue within too narrow a time or complexity band, and therefore sound incredibly genius-level smart and ultracool to venture capitalists, but also end up ultimately insane.
@Achrononmaster A year ago
One reason he might get tangled up is that he imagines this "mapping" in his head, but fails to see that he has not mapped everything in each domain. When you do so, you'll see it is not an isomorphism, so the semantic equivalence of a hurricane in a NOAA sim and Hurricane Andrew breaks down, even semantically. You can get rough semantic equivalences by ignoring a bunch of stuff. That's what engineers do to solve real problems. It doesn't always work, for this very reason; a "good engineer" could be defined as someone who solves the harder problem with the more extensive mapping. "What if a small kid in a Halloween costume, dressed as a tree twig, walks in front of my self-driving car?" I think I will map that. You have to wonder if a 100% safe self-driving car will move at all. My guess is it won't. But 99% safe might still be safer than human drivers who are intoxicated.
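One way to make the "not an isomorphism" point concrete: an abstraction is a lossy projection, so distinct real states collapse onto one simulated state, and questions that depend on the dropped variables stop being answerable inside the sim. A toy sketch, with state variables and numbers invented purely for illustration:

```python
# Real-world states carry more variables than the simulation tracks.
real_states = [
    {"wind_kph": 250, "pressure_mb": 920, "over_land": True},
    {"wind_kph": 250, "pressure_mb": 980, "over_land": False},
]

def simulate(state):
    """The sim's abstraction: keep wind speed, drop everything else."""
    return {"wind_kph": state["wind_kph"]}

# The mapping is not injective: two genuinely different situations collapse together.
assert simulate(real_states[0]) == simulate(real_states[1])

# So a question that depends on a dropped variable has no answer inside the sim.
def hits_the_coast(state):
    return state["over_land"]  # well-defined for real states only

print(hits_the_coast(real_states[0]))        # True
# hits_the_coast(simulate(real_states[0]))   # KeyError - the semantic equivalence broke down
```

Choosing which keys to drop is exactly the engineer's move described above; the gamble is that nothing you later care about lived in the dropped keys.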
@sehbanomer8151 A year ago
How can you translate from one language to another just by looking up words in a dictionary, without knowing anything about that language?
@sehbanomer8151 A year ago
In order to convince others that the person in the Chinese room actually speaks Chinese, what that person needs is a book of everything about Chinese. In that case he is not fundamentally different from a native Chinese speaker. The linguistic knowledge exists in both cases, and both are capable of using that knowledge in the right place; the only differences are the medium of storage and the speed of processing.
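For what it's worth, here is roughly what the purely extensional reading of that "book of everything about Chinese" reduces to: a lookup from input strings to canned replies, which the operator can run without knowing any Chinese. The three entries are invented placeholders standing in for an astronomically large table:

```python
# The "rule book": every intelligible input mapped to an acceptable reply.
RULE_BOOK = {
    "你好": "你好！",                      # "hello" -> "hello!"
    "你会说中文吗？": "会一点。",            # "do you speak Chinese?" -> "a little."
    "今天天气怎么样？": "今天天气很好。",     # "how's the weather?" -> "it's lovely today."
}

def chinese_room(message: str) -> str:
    """Follow the book mechanically; no understanding is consulted anywhere."""
    return RULE_BOOK.get(message, "对不起，我不明白。")  # default: "sorry, I don't understand."

print(chinese_room("你好"))  # from outside the room, this looks like conversation
```

On this reading, the only differences from a native speaker really are storage medium and processing speed - plus the open question of whether any finite table can cover an unbounded conversation.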
@sehbanomer8151 A year ago
When trying to bake a cake, I have to look up recipes online, while a professional baker might rely on memory. Assuming the recipe I found online is detailed enough, and I execute it perfectly, the cake I make may not be too different from the baker's. Can I then confidently say "I know how to bake a cake"? Note that if I don't bake very often, I may never memorize the recipe and will have to look it up every single time.
@sehbanomer8151 A year ago
According to the Chinese Room argument, if you didn't internalize/memorize it, you can't claim you know it. That makes some sense, but at the same time it defines "knowing" too narrowly. Part of human intelligence is the ability to use an external storage medium as an extension of the brain's internal memory.
@henrychoy2764 A year ago
The idea that understanding can emerge in the Chinese room is what you are trying to achieve, and it is a noble goal. You argue that understanding should come from somewhere - walks like a duck and all that. Now, start with the question: what if the rule book has auto-update turned on? The problem you should be trying to solve is getting the man to understand Chinese; this is the whole, whole, whole objective in the first place. The point of auto-update is that the people or intellect in charge of the rule book are demons who do not want the man to understand. But because he is a man, he will still build up in his mind the sense that there is some kind of meaningful interaction, that there is no randomness, and he can imagine just what kind of interaction it is - maybe a family squabble, maybe business, and so on - but there is no way he can make real sense of it. Because if you are trying to build a computer that understands, you need the man - and the man is a homunculus - to understand. You touch on these points but then talk yourselves away from a proper thought; I can't believe you did that. I think you talked too fast and applied biases and a desire to arrive quickly at what makes sense to the common man, instead of working with the literal meaning. It is a very difficult puzzle.

I will add that Chinese is an ironic choice for the Chinese room, because the point of Chinese is that you actually can understand Chinese by looking at Chinese - lol - and the further irony is that you can also misunderstand Chinese by looking at Chinese. So you will see that even the people who are supposedly talking to the Chinese room are miscommunicating. And of course the homunculus can make an analysis over time, given enough input, and thus come to some conception of the input, such as being able to predict what input will be coming. And of course the homunculus can be naughty and output incorrectly, but with some cleverness as opposed to randomness; then the homunculus will gain some insight about the receiver. If you run this gong show, you can learn something about Chinese - or, as they say, you might replace the name of the language with a fill-in-the-blank. Then you can picture a language that nobody knows any more except a secret cabal, who use the secret-language room perhaps to determine whether anybody can suss out the meaning of the written secret language. It is plausible that such a language could be constructed to perform normal everyday talk - to speak in plain text without being hackable.

Clearly, computers are not rooms, so computers can use more powerful methods to gain an understanding of natural language. The simplest thing to do is to take the computer out of the room; this adds more information beyond the rule book. In some ways you might wonder whether this is cheating, but it is not. Why? Because you are just adding information that could have been written into the rule book but was deliberately omitted, on the basis that the communication could not be in error when that information was omitted. The tricky part is that you then have a homunculus taken out of a room, or a computer taken out of a room - and so far we take computers out of rooms all the time and still run into the Chinese room problem.
@CandidDate A year ago
GPT-3 is an ideal example of a Chinese room. My idea is to ask LaMDA why it said what it said. That would be throwing a wrench in its gears for certain!
@smkh2890 A year ago
Is GPT-3 or LaMDA capable of lying or deception? Isn't it irrelevant whether the bot 'understands' what it is doing?
@CandidDate A year ago
@@smkh2890 When we aspire to super-intelligence, yet human-level intelligence is all we get, I suppose the whole thing is a fiasco. I'd like to put an AI on the ballot. Get some real, comprehensible leadership.
@smkh2890 A year ago
@@CandidDate so long as it isn’t programmed with Plato’s Republic!
@CandidDate A year ago
@@smkh2890 so long as it is programmed with Plato's Republic!
@smkh2890 A year ago
@@CandidDate Aristos? We have enough of them already!
@melkenhoning158 A year ago
Is it Christmas already?
@MachineLearningStreetTalk A year ago
It came early, my friend 😀
@Gattomorto12 A year ago
2
@Achrononmaster A year ago
@23:00 Chollet confuses intelligence with understanding. Take "intelligence" as Chalmers takes it - behavioural. Understanding is not; understanding is qualia-filled and subjective, not behavioural. Behaviour can betray possible conscious understanding ("No matter how much I Pavlovian-condition my workers, I still can't get them to stop reading Marx and going on strike!"), but it does not imply conscious understanding. So even the "process" of the system is not identifiable with any understanding. Understanding is more than mere emergent process; it is ontological. (And if you ask me, it is non-physical, but that's another story.)
@Achrononmaster A year ago
I do not mind people talking about "intelligence" as connoting subjective awareness, but then we need a different word for "smart behaviour" that is not accompanied by subjective awareness and qualia. To my mind, the one does not imply the other, but it is a very good question whether nature somehow naturally causes such an implication. Logically there is no implication, but naturally, maybe biologically, there could be - so it might be contingently true in our universe that no intelligence is possible without subjective awareness. I think not. But panpsychism would say otherwise. This is why I do not like panpsychism when it is presented as a metaphysics: it assumes what we really ought to want to prove or disprove. The proper way to "do panpsychism" is to suppose it is false, and then try to disprove that supposition.
@eslwebcamforkids A year ago
The Chinese room is ridiculous and shows that the person who came up with it knows nothing at all about the Chinese language.
@dylanmenzies3973 A year ago
Consciousness is not in the room; it's in the whole system, including the agent interacting with the room, formulating and understanding questions and answers. Consciousness is reflexive.
@MachineLearningStreetTalk A year ago
You agree with Chollet then!
@dylanmenzies3973 A year ago
Also, the information processing in the room is very "uninteresting", depending entirely on the meaning given to it by its creator. A complex LLM is much more interesting, and will become hugely more interesting as LLMs become more recursive. This distinction is along the lines of Tononi's "integrated information" criterion.
@Achrononmaster A year ago
@2:07:00 Who is this guy Alex? Everyone I know from gen-X AI knows "large" is not a set. So wtf is he talking about? "Large" is a property, an adjective, that can define a set: {x : x > 100}, say, if I define "large" as "greater than 100". Adjectives alone are not sets themselves. When you want to talk about something like {adjectives in the OED that begin with 'A'} - that is, a property of an adjective - then you get a set. He makes a better point with the NOT operation, which a set theorist would not even blink at, but which an AI system could halt on.
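A minimal sketch of that set-theoretic point: "large" lives naturally as a predicate, becomes a set only relative to a domain, and its negation is trivial for a set theorist but can be non-terminating for a machine that must enumerate over an unbounded domain. The threshold 100 and the domains below are arbitrary illustrative choices:

```python
from itertools import count, islice

def large(x):
    """'Large' as a property (a predicate), not a set."""
    return x > 100

# It yields a set only relative to a domain: {x in D : large(x)}.
domain = range(0, 200)
large_set = {x for x in domain if large(x)}

# Negation as the set theorist sees it: complement within a finite domain. No problem.
not_large_set = set(domain) - large_set

# Negation as a machine that must enumerate over an unbounded domain: fine only
# while witnesses keep arriving...
not_large_stream = (n for n in count(0) if not large(n))
print(list(islice(not_large_stream, 5)))  # [0, 1, 2, 3, 4]

# ...but a search for a witness that does not exist never halts:
# next(n for n in count(0) if large(n) and n < 0)   # would loop forever
```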
@Achrononmaster A year ago
@2:05:45 Tim is being too simplistic. "Understanding is a successful semantic mapping." No, it is not. A successful semantic mapping is a necessary condition for correct understanding, but it is not all of conscious understanding, so it is not sufficient. It _is_ almost all of canned intelligence (non-conscious intelligent behaviour) - but no one defines understanding without subjectivity, right? Or do they these days?
@_ARCATEC_ A year ago
The measure of one's sense of connection is meaning. U0I •X ( zi q(u ) ZI ( U)Q zi ) Y• B0U •X ( z qb(u ) Z ( U)BQ z ) Y• BUIR and the Ultra Relativistic Remnant Belt. B > U U < R I > R R > B •X ( zi Rqb(u ) ZI ( U)BQr zi ) Y• Beauty > Mind Mind < Ratio Spirit > Ratio Ratio > Beauty