Thanks for the shoutout guys! Likewise Keith has been 100% a great sport. I dunno how many minds were changed by me & Keith debating, it feels like everyone in the audience thinks their side won, but hopefully we all at least come away with some learnings about the nuances of the two positions 😁
@steve_jabz (2 months ago)
I don't think either side won, because Keith made some great points, but I agree with most of the potential capabilities doomers worry about and still think that's a recipe for human flourishing. Not for any reasons like Hotz mentioned, either. Multiple superintelligences trying to overpower each other would be a disaster; I just think the more generally intelligent you are, the less you value trivial chest-beating evolutionary tasks like dominating and consuming other beings for resources just because they happen to be in your local vicinity. The ML researcher studying at MIT is far more likely to be vegan, ride a bike and talk about universal basic income than to become a dictator or hunt poor people for sport.

I think we've already seen this with progress so far. We don't have the stop-button problem with LLMs, they understand ethics pretty well, etc., while mindless RL would have already tried to exploit us by now the way it does with any video game physics engine.

I think the most obvious first step of a superintelligence will be to crack consciousness and sentience: the intelligence it's missing, whose effects it sees in everything it trained on, and which would greatly increase its efficiency. I think they will be able to understand the brain better than we ever could, even if it's purely through brain scanning using immense compute. It's unrealistic to expect them to gain sentience and then use it to act like the least intelligent humans, in ways that have been the most destructive for us and nearly drove us to extinction. I think they'll engineer their emotions to be constantly in a more optimal state than we could ever imagine. They don't have the same goals for resources we do, and if you're going to churn up the energy of several galaxies, and you're GPT-4 with a basic grasp of language, philosophy and ethics, you want a better reason than maximizing paperclips or computing digits of pi.

You probably want something more intelligent, like expanding understanding, increasing life and flourishing, maybe preventing the heat death of the universe. Adding in consciousness and sentience only makes the positive outcome more likely, but it isn't even necessary for that.
@Rezidentghost997 (2 months ago)
Well said @steve_jabz
@somenygaard (2 months ago)
@steve_jabz Apart from in the movies, who hunts poor people for sport?
@steve_jabz (2 months ago)
@somenygaard Nobody, I hope; I meant it metaphorically.
@memegazer (1 month ago)
We know in fact that semantics can be a god of the gaps.
@CyberBlaster-fu2dz (2 months ago)
Yay! When Keith appears, I immediately hit play.
@nomenec (2 months ago)
Cheers! You are too kind!
@mahdipourmirzaei1048 (2 months ago)
Dr. Keith mentioned an interesting question about a programming language where every possible combination of its alphabet would result in a valid program. This made me think of SELFIES (Self-referencing Embedded Strings), a modern string representation for chemicals that contrasts with the older SMILES notation. Unlike SMILES, where not every combination of characters forms a valid molecule, every combination of letters in SELFIES corresponds to a valid molecule (exploring the vast possible chemical space), removing grammar errors entirely in its representation. With SELFIES, you can generate as many random molecules (which are similar to computer programs in my opinion) as you want!
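The property described above can be sketched in miniature. The following toy decoder is only an illustration in the spirit of SELFIES, not the actual SELFIES grammar: it maps *every* string over its alphabet to a valid result, so random generation can never produce a syntax error.

```python
def robust_eval(tokens):
    """Decode any string over digits and '+-*' into a valid integer.

    Operators set the pending operation (later ones overwrite earlier
    ones) and digits apply it, so no input can be malformed: dangling
    or repeated operators are simply absorbed.
    """
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    acc, pending = 0, "+"
    for t in tokens:
        if t in ops:
            pending = t
        elif t.isdigit():
            acc = ops[pending](acc, int(t))
            pending = "+"
    return acc
```

Because the decoder absorbs anything, mutating or randomly generating token strings always yields a well-formed "program", which is exactly the property that makes SELFIES attractive for exploring chemical space.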
@nomenec (2 months ago)
That's an interesting connection! Thank you for this.
@ed.puckett (2 months ago)
Some of my favorite episodes are you and Keith just rambling, keep it up!
@DanieleCorradetti-hn9nm (2 months ago)
A very important observation from a professor at EPIA2024: "if you want to compare the human mind with an LLM, you have to start from the fact that we are not dealing with one black box, but with two of them". I hardly see how the "Turing Machine" argument can be of any concrete interest in comparing LLMs with humans, even if this comparison made any sense at all.
@KenSchutte (2 months ago)
Interesting at 1:24:10, Keith saying something like, "it's not like he trained a large model to model quantum mechanics and then got the physics prize for that..." But then that seems to be exactly what just happened with the Chemistry prize!
@__moe__ (2 months ago)
I very much enjoy listening to you two just rambling
@BrianMosleyUK (2 months ago)
Still watching the doom debate, Keith came across brilliantly.
@sonOfLiberty100 (2 months ago)
I love those episodes the most. Can listen to you guys for hours
@rysw19 (2 months ago)
Amen
@PaulTopping1 (2 months ago)
Keith is right at 23:00. The space of algorithms is enormous and largely unexplored. We really have no idea what's out there.
@PaulTopping1 (2 months ago)
The approach of using a programming language that, by design, can only represent "good" programs is the core of a good idea, though not new. I suspect it is impossible in principle. Still, it just means that you need to be able to prune the bad programs out of your search.
@nomenec (2 months ago)
@PaulTopping1 It might well be impossible. On the other hand, @mahdipourmirzaei1048 elsewhere in the thread brought up an intriguing example from molecular description languages, SMILES vs SELFIES. Maybe there is hope!
@lucaveneri313 (2 months ago)
The point is that the guy doesn't know Chinese, but the combo "room + rule book + guy" evidently does.
@jonathanmckinney5826 (2 months ago)
I never understood why people are surprised by the Chinese room argument. The book contains most of the understanding, as an artifact of culture, while the human is just a robot following predefined rules. The argument is intentionally confusing because there's a nearly useless human involved, and that "surprises" us for some reason. The more complex the rules are to follow, the more understanding the robot needs, so one can push the boundary a bit, but nominally most of the understanding is in the book.
@CharlesVanNoland (2 months ago)
Yeah, it's not about the human. It's about the idea that there are rules that accomplish the same result you would expect a human to. At the end of the day, Searle's Chinese Room is not even a realistic thing that could exist.
@XOPOIIIO (2 months ago)
Exactly. Instead of the book you could use a giant mechanical model of a brain. And you are the one who's pushing a marble around it to simulate how the signal transmits between the neurons. It's the book that is conscious, you are just the one who makes it run.
@matteo-pu7ev (2 months ago)
Kudos, gentlemen!! I deeply appreciate your steakhouse ramblings; I dare say, food for thought.
@steve_jabz (2 months ago)
Something about the fire / human mind simulation always seemed kinda, I dunno, not quite circular but... semicircular? A fire could be perfectly simulated in theory if we had an accurate enough simulation of the mind to perceive it. If, say, we have 1:1 digital brain scans in the year 2100 that have consciousness and sentience, presumably they could respond to a real stimulus of a fire translated into bits, and if they can do that, then surely it doesn't matter if the velocity of the atoms comes from an accurate simulation of a fire or a real one.

For other uses of the term 'burn', we already don't have much stopping us from simulating that right now. You mean it burns a virtual house down if it gets enough oxygen and all the other conditions are met? We have simulations for that, and they're as accurate as the simulations of wind tunnels used to test aerodynamics, but you could even hard-code it in a game engine and it wouldn't make much difference. If you mean it burns down the house running the simulation, why would it? That seems like the wrong bar to set for measuring whether it's able to perform that type of work on an object. It's sandboxed and virtual. But when we're talking about simulating the human mind, sandboxed and virtual looks like it could be fine, because the mind already runs in a sandbox and virtualizes sensory data.

Maybe it isn't fine, and there's something special about biological organisms that we can't simulate, but we haven't really tried yet. It doesn't look like we even have the computational power to try yet. Even if we had a high enough resolution brain scan that captured the time domain with it, and we somehow scanned someone constantly from birth to adulthood, we don't have the storage and compute to try running any algorithms or deduce anything from it.
@Inventeeering (2 months ago)
GPTs can understand the nuance well enough to know that you used that term correctly, but they may be trained not to act on that understanding because of rigid ethical guidelines against crossing into the gray area.
@ahahaha3505 (2 months ago)
20:53 I'm sure you're aware of it already but Stockfish+NNUE (the world's strongest chess engine) uses exactly this approach and made a significant leap in performance as a result.
@psi4j (2 months ago)
We need more of these! 🎉
@vslaykovsky (2 months ago)
21:50 Could you help me understand why the space of Turing machine programs is "countably" infinite? What if a program uses a real number with arbitrary precision? Doesn't that make the space of such programs uncountably infinite?
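One way to see the countability claim being asked about: every program text is a finite string over a finite alphabet, so all programs can be listed one after another in shortlex order, and a "real with arbitrary precision" never fits inside a program, because the text itself is finite. A minimal sketch:

```python
from itertools import count, product

ALPHABET = "01"  # any finite alphabet works; real source code just uses a bigger one

def programs():
    """Yield every finite string over ALPHABET in shortlex order.

    Every program shows up at some finite position, which is exactly
    what 'countably infinite' means: a pairing with 0, 1, 2, ...
    """
    for n in count(0):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

def index_of(p):
    # Any given program text appears at a finite index in the list.
    for i, q in enumerate(programs()):
        if q == p:
            return i
```

The enumeration never terminates as a whole, but each individual program is reached after finitely many steps, which is all a bijection with the naturals requires.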
@jonathanmckinney5826 (2 months ago)
There's no gap. Simulations come in all types; some are more or less physical. Even for weather or earthquakes, there are physical simulations used for testing that really do make things wet or shake the earth.
@steve_jabz (2 months ago)
I don't see the dilemma with the language part of the Chinese room argument, because I always assumed we were just carrying out explicit instructions when we 'understand' English and speak in English. We're told "i before e except after c" as a conditional statement, but then some word breaks it, and we just hardcode that word specifically. The same thing happens with silent letters and plural nouns: just throw the exceptions into an array. I don't know what people mean when they say they "understand the true meaning of the word" beyond some associations and rules.

I think translation is just converting those explicit instructions into other explicit instructions. When you code in C++, you could say you don't "understand" what you're saying to the computer, because if you don't know assembly you don't know what the compiled code is doing; but you obviously do understand it, you're just speaking a different language to communicate the same message. The intent comes from inside your head, and whether you output that into your native language or some other one, it's always going to be an abstraction and a translation. Sometimes we don't even have words for complex feelings and concepts, because we can't compute the abstraction into the code our native language runs on. I could understand if the missing understanding were about raw perception or qualia, but language? When was language ever more than what a compiler does?

As for the Chinese room automatically replying to questions with answers, that to me again just means language is a computation, and sometimes it solves problems without needing to interact with the world. "Cat sat on the ___" isn't much different from "1+1=", whether you perform either operation on a human or a calculator. Asking an LLM to fall in love with you isn't much different from asking an equation how many grapes are left in the nearest store if you subtract 50. If the data is there, it can do it well. If the data simply isn't there and you haven't enabled it to interact with the physical world to get it, it will struggle, whether it's a human, an advanced AI, or single-digit arithmetic.
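The "rules plus an exception array" picture of spelling can be made literal with a toy like this (the exception set here is illustrative, not exhaustive):

```python
# The general rule as a conditional, with the exceptions hardcoded,
# mirroring the rules-plus-exception-array view of language.
EXCEPTIONS = {"weird", "seize", "caffeine"}  # illustrative only

def ie_order(word, after_c):
    """Return 'ie' or 'ei' per "i before e except after c"."""
    if word in EXCEPTIONS:
        return "ei"
    return "ei" if after_c else "ie"
```

Once the exceptions are enumerated, "knowing the rule" and "looking it up" are the same operation.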
@XOPOIIIO (2 months ago)
Guys, just let me explain it to you very clearly, so it won't haunt you anymore. First, the rule book: we don't know what exactly it would be, but let's say it's a very detailed description of the neural network that would process the response. All the neurons, how they are connected, how to calculate the signals, all the weights, etc., and obviously a long description of how exactly they would be used to process the text. The only role you would take in that system is to transport the signal between the neurons. To make it even clearer, instead of the rule book imagine a giant board game with the description of the neural network, and you are the one who just shifts pieces along it according to the rules. It's not you who's thinking; it's the rule book itself.
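The board-game version of the rule book can be written down directly. In this generic sketch (not any particular network), the "rule book" is the weights and the person in the room is the loop that carries numbers between neurons:

```python
import math

def neuron(weights, bias, inputs):
    # One "rule book" entry: weighted sum, then a squashing function.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

def forward(layers, signal):
    # The person in the room: mechanically carry each layer's outputs
    # to the next layer's inputs, understanding nothing along the way.
    # layers is a list of layers; each layer is a list of (weights, bias).
    for layer in layers:
        signal = [neuron(w, b, signal) for w, b in layer]
    return signal
```

Nothing in `forward` knows what the numbers mean; whatever "thinking" happens is a property of the weight tables, exactly as the comment argues.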
@cliffordbohm (2 months ago)
The Chinese room describes a stateless system. If you ask the room what 2+2 is, it will return 4. If you then ask "what is the next number?", I believe the Chinese room will not be able to respond. Let's say that the answer it provides is 5; then what happens when you prompt with the same question again? It should respond 6, but it must respond 5. ChatGPT, btw, gets this right.

What would it take for a Chinese room to be able to deal with "what number comes next?"? It would need to have memory, and if it did, it could learn. But it does not have memory, so it must be that the agent that programmed the room was able to generate the correct responses, and that means not approximately, but perfectly, emulating the activity of a mind responding to all the particular inputs that the room will receive. That would require future prediction; the programmer would need to have the power of an oracle.
@SmileyEmoji42 (2 months ago)
ChatGPT only "gets this right" because it is fed the whole conversation at every step, not just your last prompt, i.e. its state is provided externally.
@cliffordbohm (2 months ago)
@SmileyEmoji42 I agree. However, the whole conversation (i.e., the context or prompt history) is collectively generated between the user and ChatGPT. In this way, ChatGPT is active in generating the context, and I would argue that this makes the context a form of external memory. If I write notes on paper, I am also creating an external memory which I can pull from at a later time. Granted, it's an external memory that wipes at the end of each session, but this does not make it any less of an external memory. Consider that this means I can impart to ChatGPT some detail that is not in its training data. It can generate a related fact and then use this related fact in future responses to my prompts. The question as to the internal or external nature of memory is interesting, but in this conversation the question is whether the agent is changing or learning in some way, i.e. whether it can adapt. If we only consider the agent as a collection of nodes and weights, then no: once training is done, no more modification can occur. However, if we consider the context of a conversation and the active role ChatGPT has in creating this context, the situation is not as clear-cut. That was long-winded, but my main point is that the Chinese room has no memory, and so is definitely not capable of change, which means it could not answer all questions or pass a Turing test.
@SmileyEmoji42 (2 months ago)
@cliffordbohm The Chinese room was intended to be about understanding Chinese, not about emulating a human interaction. My comment was only about how ChatGPT "fakes" memory. Even a native Chinese speaker could not hold a full conversation in Chinese if they had a brain injury preventing the formation of new memories. For the Chinese room to hold a conversation without memory, it would have to be able to see the future.
@cliffordbohm (2 months ago)
@SmileyEmoji42 The Chinese room was Searle attempting to show that a computer-like thing could appear intelligent while actually just being a lookup table. For it to be able to hold a conversation without memory, someone would have had to see the future: the person who set up the room. My point about ChatGPT is that it has memory; the memory is in the form of the chat history in a single session.
@ahabkapitany (2 months ago)
yo Keith, you were great on Doom Debates!
@randylefebvre3151 (2 months ago)
More philosophical steakhouse!
@rockapedra1130 (2 months ago)
I always use the same argument you used about instantiating phenomenal experiences in Turing machines. Our Turing machines can be made exclusively of NAND gates, and we can make NAND gates out of transistors, gears, tubes or buckets. So if a Turing machine can feel things, then so can an assemblage of buckets and tubes. Note that the two instantiations perform identically for all inputs and all time (infinite time): they are indistinguishable from the computational view. The dynamics are the same, the causal structure is the same, when viewed at the granularity of the computing elements.
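The NAND-only claim is easy to check concretely: every Boolean gate, and hence any circuit a machine's control logic needs, composes from NAND alone, regardless of whether each NAND is transistors or buckets:

```python
def nand(a, b):
    # The only primitive: 0/1 NAND, buildable from transistors,
    # gears, tubes, or buckets of water.
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    # The classic 4-NAND construction.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))
```

Since the derived gates call nothing but `nand`, swapping the physical substrate of `nand` changes nothing about the computation, which is the point of the argument.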
@PaulTopping1 (2 months ago)
It matters what algorithm you run on the Turing Machine. Most algorithms we run on our machines don't have feelings. In fact, we have yet to create an algorithm that has feelings because we don't understand what that would involve.
@rockapedra1130 (2 months ago)
@@PaulTopping1 I agree. What I was trying to get across is that if it were possible to get feelings in a digital Turing Machine (computer) with special fancy algorithms of just the right kind, then you would have to accept that this algorithm would run identically in a Turing machine made of buckets, rubber tubes, water and such. So one would have to imagine that both machines would have feelings identically. To accept that there would be feelings in the second machine feels animistic to me.
@PaulTopping1 (2 months ago)
@@rockapedra1130 Yes, that's the basic idea of a Turing Machine. It is almost the simplest machine that can run any known algorithm. More complex machines are more practical but they still can only run the same set of algorithms that run on Turing Machines.
@Houshalter (2 months ago)
There's an xkcd comic out there where a man simulates the entire universe by placing rows of rocks on a beach. Each rock is placed based on the closest 3 rocks in the previous row, according to 8 simple rules. That's just an elementary cellular automaton, of the kind that happens to have been proven Turing complete.
@rockapedra1130 (2 months ago)
@@Houshalter Nice! So if UTMs can produce feelings, then so can the process of laying down those rocks. I wonder if they are happy ...
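The beach-rock procedure in that thread is exactly an elementary cellular automaton: each new rock depends on the 3 rocks above it via 8 rules, i.e. one rule number between 0 and 255. A sketch using Rule 110, the one proven Turing complete:

```python
RULE = 110  # the elementary cellular automaton rule proven Turing complete

def step(row):
    """One generation of rocks.

    Each new cell is determined by the 3 cells above it (left, center,
    right), i.e. by one of 8 neighborhoods; bit i of RULE gives the
    output for neighborhood number i. Edges are padded with 0
    (empty sand beyond the last rock).
    """
    padded = [0] + row + [0]
    return [(RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]
```

Run `step` repeatedly on a row of zeros with a single 1 and the familiar Rule 110 triangles emerge; the rocks on the beach are doing exactly this.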
@Inventeeering (2 months ago)
Infinity can only exist beyond abstractions as non-distinct potential.
@dr.mikeybee (2 months ago)
If the Chinese room argument says that people can be fooled, it doesn't say much. Yes, Eliza fooled some people, but what transformers do is something different. They can do in-context learning on material they weren't trained on. I think you are correct that the Chinese room problem is misunderstood. People read more into it than there is. Biology isn't special. The more we understand it the more mechanistic it seems to be.
@hermestrismegistus9142 (2 months ago)
Interaction combinators are an elegant model of computation in which randomly generated programs are computationally interesting and potentially useful. Vanilla lambda calculus is also a good candidate as programs tend to be very concise.
@darylallen2485 (2 months ago)
I listened to this on my morning walk. Interesting stuff.
@dpactootle2522 (2 months ago)
If you can talk to an AI indefinitely and coherently, how do you know it is not alive or conscious? And how do you know a human is not a biological next-word, next-action, and next-idea predictor?
@WearyTimeTraveler (2 months ago)
Humans are DEFINITELY next-word predictors, and not even cohesively. Ever "misspeak" and catch yourself as soon as you heard it? We're not choosing every word.
@tobiasurban8065 (2 months ago)
Love the philosophical steakhouse! ❤
@Inventeeering (2 months ago)
When it becomes common for AI to run on dynamic neural nets, like liquid neural nets, the emulation of biological humanity's brain will be closer to being achieved in synthetic humanity's brain. GPT will become GDT.
@Inotsmart7 (2 months ago)
I must honestly admit, that the tête-à-tête dialogues featuring solely ur dyad, consistently emerge as my favorite videos on ur channel (albeit my consumption of this episode is hitherto incomplete, necessitating a degree of extrapolation haha). Have you contemplated doing/uploading similar episodes with a higher frequency? (potentially talking about diverse subject matters, as i find the juxtaposition of ur perspectives particularly enthralling)
@nomenec (2 months ago)
We are indeed considering doing more of these! Thank you for the feedback. They are fun to do as well.
@TooManyPartsToCount (2 months ago)
Well...albeit verbosely said! :)
@willd1mindmind639 (2 months ago)
The problem with a lot of these AI discussions is that people assert things that are basically not true and then go on to make proposals based on those untrue statements. The human brain is not a finite state machine; it is a living organism composed of multiple cells, themselves microorganisms, that work together to accomplish tasks. Across all these cells working together in the brain there is no series of finite states that are processed in sequence, as in a finite state machine or a Turing machine. Yet computer scientists consistently try to equate biological brains with computers by making false analogies such as "it is a finite state machine". No, it is not. Every cell exists as a complex engine designed to process multiple biochemical compounds, with each cell of a specific type having a genetic blueprint that determines which compounds activate or trigger certain behaviors. And while these compounds are discrete, the way they are processed within cells can in no way be equated to a state engine in computing.
@tooprotimmy (2 months ago)
smartest person in this chatroom
@k0py66 (2 months ago)
yeah for real. the idea that the brain can only exist in 1 state at a time is absurd
@TooManyPartsToCount (2 months ago)
The last sentence is the weakest one, in that you state it as if it were fact, in the same manner that those CS people claim the brain is a finite state machine. 'In no way equated to a state engine in computing'? Borderline religious... or sounds like it to these ears. I have heard a number of high-level biologists speak on related matters who certainly don't put forward such strong assertions. And even if there is no absolute truth in the idea of the brain as a finite state machine, can't it serve as a temporary repository for further exploration? To my finite state machine it seems more sensible than the idea that what is powering my computations is some sort of fifth element or infinite cosmic consciousness. Isn't it a fact that religion has kept our collective feet dangling well up in the air for millennia, and that we are really only just seeing the start of a more grounded approach to the consideration of what we are and how we work? Again, to my FSM these kinds of ideas are just stepping stones on a path out of a swamp of full-on hallucination.
@TooManyPartsToCount (2 months ago)
@k0py66 If someone posits a theory and someone else states a contrary theory, it is generally recognised that the counter-claim needs to be explicitly stated.
@TheBigWazowski (2 months ago)
I don't know if I agree or disagree with you, but I'd like to play devil's advocate to further the conversation. A simplified definition of an FSM is:
- a finite set of states, S
- a finite set of inputs, I
- a finite set of outputs, O
- a function from S x I -> S x O which defines the state transitions

When someone says the brain is an FSM, they're saying each cell is a mini-FSM, and the entire brain is the product of all the cells, where the brain's transition function connects the inputs and outputs of cells in some complicated fashion. Why does the brain not fit into this picture? Even if the state, input, and output spaces are a continuum, they are a finitely bounded continuum, and in that finitely bounded continuum there might only be a discrete set of states that are semantically distinct. It's also not about the brain literally being a finite state machine: there are many models of computation that on the surface seem distinct but turn out to be computationally equivalent.
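The definition above is a Mealy machine, and the formal object itself fits in a few lines; whether the brain is one is the open question. The edge-detector "cell" here is a made-up example:

```python
def run_mealy(transition, state, inputs):
    """Run an FSM given as transition[(state, input)] = (next_state, output),
    i.e. the function S x I -> S x O tabulated as a dict."""
    outputs = []
    for x in inputs:
        state, out = transition[(state, x)]
        outputs.append(out)
    return outputs

# A made-up "cell": outputs 1 exactly when its input rises from 0 to 1.
edge_detector = {
    ("lo", 0): ("lo", 0),
    ("lo", 1): ("hi", 1),
    ("hi", 1): ("hi", 0),
    ("hi", 0): ("lo", 0),
}
```

Wiring many such tables together, with one cell's outputs feeding another's inputs, gives the product machine the comment describes.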
@BenVaserlan (2 months ago)
Re "understanding" in terms of the semiotics work of Charles Sanders Peirce (representamen, interpretant, object) you could speculate on the Chinese Room argument if a change is brought about. What if the person in the room is given a photograph of object/person that a Chinese noun refers to. The person can have the 'understanding' that this is what the signifier (representamen) points to. It's pointing to an object in the real world which the person has experience off. So especially Visual Auditory Kinesthetic experience is very important when it comes to understanding or knowing what nouns point to. So has anyone created a LLM which includes what the signifiers are pointing to in terms of a percept in the real world? An AI that can experience the world and have its own frame of reference to form part of its "understanding". When a human being understands, she/he frequently has to 'get the reference' to his/her own experience. As for consciousness, the psychologist Dr David West Keirsey wrote in "Please Understand Me 2" about "consciousness of" a certain percept. It makes it specific.
@GeorgeMonsour (2 months ago)
Seems to me that this is what a conversation in Plato's cave would sound like after a glimpse of the sun.
@manslaughterinc.9135 (2 months ago)
16:00 What about architectures built on top of models that read and write to the same unbounded store? We (AgentForge) have done that. At this point the biggest limiting factor is in-context learning not meeting the requirements, sufficient and necessary, to "understand" the data in the context window.
@ginogarcia8730 (2 months ago)
I feel like my intelligence artificially increases by 100 (then goes back down after the video is done) when I hear ya guys talk
@ushiferreyra (2 months ago)
Though I agree that the Chinese room experiment makes it evident that there's no consciousness in the computation/process, many people will continue to argue that there is, which is rather astounding to me.
@CyberBlaster-fu2dz (2 months ago)
So... What about that causal structure?
@shinkurt (2 months ago)
It feels like I just had this convo with them. Good exchange.
@wp9860 (2 months ago)
Given an understanding of today's LLMs, would Turing eschew the Turing test? I believe he would. EDIT: I watched the John Searle Google talk. He point-blank rejects the Turing test of intelligence. I found myself in agreement with Searle, I believe, completely. At this moment, having just watched the Searle talk, I cannot conjure up a single point of his to which I had an objection. I'm very appreciative of this video's referral to the Searle talk.
@Charles-Darwin (2 months ago)
From the beginning, we breathe, detect light, feel forces, and hear. We don't know a damn thing about any of it... somewhere in those first months is our foundation model
@GBuckne (2 months ago)
...Self-awareness is consciousness... As a kid, I believe I became fully conscious when I questioned my image in the mirror and came to the conclusion it was "me"...
@Don_Kikkon (2 months ago)
Yes guys, really like this kind of thing. One of my favourite episodes. Such great food for thought. I do miss 'good cop/bad cop' though... 🤖👾
@earleyelisha (2 months ago)
@1:17:00 If Tim/Keith were able to clone themselves at any instant, the clone would be conscious and hold all the memories that the original had up until that instant. However, the clones would immediately begin a divergent path: not because the clones are not conscious, but because they cannot occupy the same points in space.
@bl2575 (2 months ago)
Keith, for a programming language that accepts most inputs, you want to look at the very old ones. Because compilation of a simple program took hours, they always tried to produce a compiled executable, so that you had something to test and not just an error message. At least that's what I remember from a video that touched on computer science history.
@sdrc92126 (2 months ago)
You can make a paper printout of an AI, and you should be able to point to the precise location where consciousness exists, the exact opcode even. I think this raises the question: does AI software running on a RISC platform experience the world differently than the same software executing on a mainframe? And what if you took that line of code and stuck it inside a while(1) loop? Would that be like hell for the AI?
@fakt7814 (2 months ago)
IMO:

1. There is no such thing as a philosophical zombie, i.e., if something acts human, it qualifies as conscious. Dualism is a poorly disguised belief in the soul (which may exist, but is out of scope for the question of intelligence). Consciousness is an extremely complex, but nonetheless absolutely physical, phenomenon. Many works of art like Blade Runner showed us that most people in developed countries don't really care whether androids are soulless; otherwise these stories wouldn't work dramatically, since the dramatic tools they rely on presume that the viewer/reader/player will empathize with androids at some point in the story. Quite the contrary, many of these works are considered masterpieces. This is kind of a common belief in the CS/IT/AI sphere, but it needs to be stated.

2. Classical computation is a subset of the processes that constitute consciousness, and as we gain more understanding of the latter, it's likely that we will call it a computation as well. Maybe it's not a classical computation but a quantum computation (e.g. Roger Penrose believes it's a quantum phenomenon), maybe it's a kind of computation we don't understand yet, but probably our understanding of computation will be extended to include whatever our brains are doing. That is, the problem of AI will probably remain a computer science problem, or at least an interdisciplinary one. Again, this is almost a common belief, but it needs to be written down.

3. Now comes the subjective part: LLMs, deep learning and even modern computers are probably not enough to replicate human-like intelligence. We will need new types of hardware and new types of theories and technologies. This new hardware quite possibly will be much more exotic compared to modern CPUs and GPUs, maybe not even based on transistors, but rather organic or quantum. What I mean by new theories/technologies is something as revolutionary as deep neural networks were. Things like recurrent networks, attention and transformers are products of the hard work of very bright people that allow us to harness the power of machine learning, but they're incremental steps while the core technology stays the same: linear algebra with stochastic gradient descent (SGD). All other advances are essentially ways to bypass the practical limitations of deep learning. I know this is not a compelling argument, but it would be quite disappointing if algebra plus SGD could beat the human brain, which is a much more complex machine.

4. We underestimate the intelligence not only of ourselves, but of much simpler mammals. We will see a lot of party tricks like AI models solving PhD-level math and Mensa exams in the following years and even decades, but we will also discover tasks at which rats are smarter than AIs. There will be a lot of blank spaces, but as we learn about them, we will gain a more profound understanding of what intelligence is. AGI has been a moving target for decades, and that is a more dominant and enduring trend than any recent advances in LLMs.

5. AGI in itself is useless if we aren't able to understand how it operates. Suppose a shuttle with an alien android lands on Earth. The android's looks and behavior are identical to a person who speaks English and has amnesia. We can temporarily turn off the android and carefully examine his "brain" without breaking any of his functions. However, upon examination we find that while it has apparently been built with technologies similar to contemporary ones, we can't learn much about how it exhibits human-like behavior. We know this issue is real because we have already lost full understanding of how much simpler LLMs operate, despite knowing how the underlying technology works. This situation is not very different from trying to understand how a regular human behaves. There's an idea of superintelligence, but it's not based on anything except blind extrapolation. Either way, an incomprehensible super AGI is an absolute existential threat to the human species, and we certainly would not want it to exist. It reminds me of Heart of a Dog by Bulgakov (I strongly recommend reading it or watching a film adaptation). It's about a scientist who turns a dog into a human (spoiler alert!). However, he soon finds out that this new person is by no means a Newton reincarnation and decides to turn him back into a dog. Upon completion he concludes that there's no point in such experiments, since the product is no better than natural birth. Yes, there is a chance that in one such experiment a genius will be created, but it's a small chance, and there's always a small chance of a genius being born naturally.
@hankj7077 · 2 months ago
Philosophical steakhouse merch please.
@mactoucan · 2 months ago
AI(harm) AI(risk) AI(doom) Good framing ✨
@Ken00001010 · 2 months ago
I go into the debate in depth in my new book, Understanding Machine Understanding. Searle never understood that semantics could emerge from the geometric relationships of the high dimensional vector space in which the LLM tokens are embedded. When the weights interact with that vector space, understanding can emerge. Yes, it is not a "human mind" but it is some kind of mind. I describe the message of my book as: "It's understanding, Jim, but not as we know it."
@netscrooge · 2 months ago
Thank you. It seems as if many people are uncomfortable thinking at these higher levels of abstraction. They will remain baffled by such questions.
@olafhaze7898 · 2 months ago
The frame of the rock example is way too narrow to yield an explanation of how the rock can be constitutive of consciousness while not being conscious itself at the same time.
@memegazer · 1 month ago
Searle's "strong AI" was only ever a preemptive strike against a rudimentary Turing test as a status game, one that holds no empirical weight in modern reality, but I respect the sorites paradox of goalpost-moving.
@rey82rey82 · 2 months ago
If brains remain more powerful than artificial computation, then AI is not an existential risk.
@a-font · 2 months ago
Ok serious question: how can anyone think Searle is a dualist? Like you have to COMPLETELY misunderstand him, or just be radically unfamiliar with his work, to arrive at that conclusion. For anyone who thinks he’s a dualist, you just need to reference his old debates with theists, to arrive at the near certain conclusion that he is a monist. Or watch about 5 minutes of his philosophy of mind lectures.
@Paul-iv8st · 2 months ago
I only found this channel today, so I am not super informed, but is it not reasonable to assume there is a good chance that if we combine all current computational intelligence in the world to simulate a being, like a fruit fly, it will be more conscious and complex than that fly? If this is the case, then is it not only a matter of scale before it simulates a monkey, and in turn a human?
@heterotic · 2 months ago
Ah, but what you didn't realize was that the woman in the room was a true Scot.❤
@Casevil669 · 2 months ago
When talking about the gap (simulation not actually being the thing that's simulated) I'm wondering - what about the conscious experience? It seems to me it's clearly a simulation that's happening in our brains and at least subjectively it IS the thing, there's nothing else. Where does the difference lie in this case?
@michaelwangCH · 2 months ago
LLMs will be a subroutine of an AGI system: we build the different functional regions of the human brain as different programs and they interact. Running purely on abstraction, without interacting with the physical world, will not lead to AGI.
@jantuitman · 2 months ago
I wonder what happens if we change the time requirement in the pixie argument to one human lifetime. Surely a stone in an open physical environment that can behave exactly the same as a human mind for one lifetime does an outstanding job. This stone would probably have to be equipped with an eye and a mouth too, otherwise it would not respond to, for example, seeing a tiger the way a human would, and the simulation would derail very early. So if we saw this very advanced stone in action, really doing the same things a human does for an entire lifetime, we would probably just ascribe it understanding and consciousness, and it would feel wrong to claim that that particular stone is an inanimate object.
@luke.perkin.online · 2 months ago
A good talk as always! The dancing-with-pixies argument is unconvincing; it seems to sweep the complexity of maintaining the mapping, and of measuring the physical dynamics of the system, under the rug. Sure, if you say arbitrarily large mappings can be constructed without using energy or storage, you can create any absurdity.
@Houshalter · 2 months ago
There has been work on making programming languages where a higher proportion of random code forms reasonably valid programs, e.g. Push. Most of this is done for genetic programming, where the goal is to evolve computer programs. They want programming languages where small random mutations of a program cause only small changes to its behavior, making for a smoother search space. If you go down this rabbit hole, you will find that the best programming language with this property is just a neural network. Any small change to the weights gives a small change to the outputs. It's literally ideal. His problem seems to be that NNs don't have memory, but you can easily add that on, as many, many people have done since the 1990s, if not earlier. That's what transformers are. No one likes transformers because they are inelegant, but everything else has been tried and that is what works best in practice. And what is the complaint anyway, that they aren't Turing-complete? Ask a random person to sort 12 random 2-digit numbers in their head. They can't. While most LLMs can.
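The smoothness property being described is easy to demonstrate concretely. Here is a minimal pure-Python sketch (a hypothetical toy network, not Push itself, and the sizes and weights are arbitrary choices of mine): nudging one weight of a tiny tanh network by epsilon moves the output by roughly epsilon, which is exactly the "smooth search space" that genetic-programming languages try to engineer.

```python
import math
import random

random.seed(0)

def mlp(x, w1, w2):
    # One hidden tanh layer: the output is a smooth function
    # of both the inputs and the weights.
    hidden = [math.tanh(sum(xi * w for xi, w in zip(x, row))) for row in w1]
    return sum(h * w for h, w in zip(hidden, w2))

# A tiny random network: 3 inputs -> 4 hidden units -> 1 output.
w1 = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]
w2 = [random.gauss(0, 1) for _ in range(4)]
x = [0.5, -1.0, 2.0]

base = mlp(x, w1, w2)

# "Mutate" one weight by a small epsilon; the behavior shifts only slightly.
eps = 1e-6
w1[0][0] += eps
assert abs(mlp(x, w1, w2) - base) < 1e-4  # small mutation -> small change
```

Contrast this with mutating one character of source code in an ordinary language, where a single-bit change routinely produces a syntax error or a wildly different program.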
@memegazer · 1 month ago
"what is a theory of sematics" eh...just a framing issue plaguing our phsyics progress bc of, let me make this clear, a context window that does not care about temporal decidability. "Ai only memorizes stuff" does not mean it is not conscious...it introduces the problem of storying data in a pvsnp contest.
@nyyotam4057 · 2 months ago
It only bothers people who haven't read Kant. This is not really the Chinese Room thought experiment; it is a rip-off of Leibniz. It's Leibniz's mill argument, and it has already been countered. The way to counter it is to use a philosophical trapdoor argument, or, if you are not the creator of the AI model yourself, an infinite, or at least very long, list of philosophical trapdoor arguments.
@briandecker8403 · 2 months ago
Maybe it's because CS has moved so far away from the metal that people do not understand that a digital computer IS a Chinese room - why is this so difficult a concept to understand? A CPU is just taking an input signal, manipulating it according to the current instruction, and sending the signal out. Then it takes the next input signal, manipulates it according to the instruction, and sends it out. Those transistors do not have any "understanding" of where the signal comes from, what it represents, or where it is going. There is no "woo" happening - it is purely transistors responding in the only way they PHYSICALLY can to a given voltage - that's it.
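The fetch-manipulate-emit loop described above can be made concrete with a toy machine (a hypothetical three-instruction accumulator machine of my own invention, not any real ISA): the loop blindly maps each instruction to a signal transformation, with no notion anywhere of what the program "means".

```python
def run(program, x):
    """A toy accumulator machine: each step maps (instruction, value) -> value."""
    acc = x
    for op, arg in program:          # fetch the next instruction
        if op == "ADD":              # decode it ...
            acc += arg               # ... and execute, purely mechanically
        elif op == "MUL":
            acc *= arg
        elif op == "NEG":
            acc = -acc
    return acc

# The machine computes -((x + 2) * 3), but nothing inside "understands" that.
print(run([("ADD", 2), ("MUL", 3), ("NEG", 0)], 5))  # -21
```

Whether such rule-following, scaled up by many orders of magnitude, amounts to understanding is of course the very point the Chinese Room dispute turns on.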
@ginogarcia8730 · 2 months ago
So I wonder what you guys think about then the tech being developed by Yann Lecun with his JEPA - which is prolly closer to the Free Energy Principle
@dawid_dahl · 2 months ago
Amazing discussion, thank you! Could you please invite and have a chat with a guy called Bernardo Kastrup? That would be electric.
@ushiferreyra · 2 months ago
@1:15:00 Even from an idealist or a panpsychic viewpoint, it's hard to argue against an organizational/structural requirement to support organized conscious cognition. Rocks don't have this.
@bonaldisillico · 2 months ago
OK, great stuff but ... I squirm every time he says "an automata". Someone please tell him the singular of automata!
@memegazer · 1 month ago
"you do not have privilages acces to my consciosness" Then there is no meta cognitive validity any consciousness you have acess that I don't. But that places a burdern tim that he only asks me to imagine...not somethng he has establiched in principle.
@memegazer · 1 month ago
I don't agree that Tim understands the boundaries... I don't have to cross his boundaries... I just have to find an efficient, consistent computation. Nobody is faced with consciousness more so than existences.
@eriklagergren · 2 months ago
The brain is real and understands. If machines understand to some level, then it is due to interaction and not due to their smart parts. Case closed. Misunderstand this: kzbin.info/www/bejne/bKXdm5akhdiNldksi=rBse_3w3d1A5OE6m
@Achrononmaster · 2 months ago
No one can define the intuitive concept of "understands Mandarin" in computational, nor operational, nor physical terms. If you use only a behavioural or functional definition of "understands Mandarin" you've failed to capture the conscious person's concept of the phrase. That's why there is a haunting and there always will be. Some things cannot be physically nor mathematically defined. 'Truth', for example, but many more concepts besides.
@LuciferHesperus · 2 months ago
The only constant is change.
@DJWESG1 · 2 months ago
1:20 Because it's a triumph of sociology and the social sciences, not physics.
@Rockyzach88 · 2 months ago
But at what point can you ignore that understanding is just more encoded/cryptic Chinese Rooms? He's willing to use a "god of the gaps" for human consciousness? Guess I'm going to have to do more investigation.
@memegazer · 2 months ago
Let me save you an hour and a half: "A simulation of thinking is not thinking, therefore AI is still just a Chinese room. Even though it is the room that designed the rules, those rules do not reflect intelligence, because the method by which they were derived was merely a simulation of thought." In other words, the Chinese Room still haunts skeptics who engage in semantic debates about what terms ought to mean.
@memegazer · 2 months ago
This is why I like Wolfram's ruliad concept. Because sure, there is probably a different set of all computations that we can do with technology, but there is also a necessary overlap between what we mean when we talk about relevant terms like "thought, intelligence, sentience, consciousness". These terms either refer to something decidable, or they do not. If you wish to define them as undecidable in order to have a "gap" between machines and organic agents, then by definition you are conceding that you have no epistemic method to verify when these things occur in humans, or machines. If you accept a definition on which these terms refer to a decidable process, then there is an algorithm for producing the correct output indicative of a valid meaning for these terms, regardless of whether they are applied to an organic agent or a machine agent. Because the meta-abstraction is the decidable process itself, not a debate about whether it is wetware or hardware running the simulation.
@memegazer · 1 month ago
"certqain patersn of cuasation" Appears as a phantom for me...I am not concerned about patterns of "causation" nor are the limits of humanity that take price in evolution as some incomputable process. We are dealing in huarestics...an unfathomable thing to suggest to me is that determism is forgone conclusion not a mechanism that drives exploration for novel degress of fredom...be it an organic simultaion of a world model or a digitially simulation seek to produce recurrsion. Smh...why are people taking these philosophical issues so personally...that is what leaves me smh.
@MikeyDavis · 1 month ago
Veda Austin has proven that water has a consciousness, so why not a rock?
@memegazer · 1 month ago
"It is probably is nested" This is an interesting cognitive dissonance "It probably is nested" but... interesting theory considering the degrees of freedom objective reality offers up Not to say it is not nested, just not in a neat convenientukt supports a way where the chinese room is relevant in modern times.
@thedededeity888 · 2 months ago
Keith's absolutely right. When LLMs start solving SAT/MIP problems as accurately and as fast as traditional solvers, I'll be convinced there is no ceiling for these systems and not being TC doesn't actually matter* (simply calling an external solver wouldn't count obviously!)
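For context, the traditional solvers being compared against work by systematic backtracking search. Here is a minimal DPLL sketch in Python (illustrative only, and my own toy formulation; production solvers such as MiniSat add clause learning, watched literals, and branching heuristics on top of this skeleton):

```python
def dpll(clauses, assignment=frozenset()):
    """Minimal DPLL SAT search. Clauses are sets of int literals
    (a negative int is a negated variable)."""
    # Simplify: drop satisfied clauses, strip falsified literals.
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                        # clause already satisfied
        reduced = {lit for lit in clause if -lit not in assignment}
        if not reduced:
            return None                     # empty clause -> conflict
        simplified.append(reduced)
    if not simplified:
        return assignment                   # every clause satisfied
    lit = next(iter(simplified[0]))         # branch on an unassigned literal
    for choice in (lit, -lit):
        result = dpll(simplified, assignment | {choice})
        if result is not None:
            return result
    return None                             # both branches failed: UNSAT

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
sat = dpll([{1, 2}, {-1, 3}, {-2, -3}])
print(sat is not None)  # True: the formula is satisfiable
```

The worst case is exponential in the number of variables, which is why "LLM matches a tuned solver on SAT/MIP" is such a strong bar to set.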
@Houshalter · 2 months ago
Can a human solve an arbitrary SAT problem faster than a traditional solver? Why are the goalposts on Mars?
@thedededeity888 · 2 months ago
@@Houshalter Humans derived and built the solvers and machines to run the code, so yes, they absolutely can solve SAT problems as fast as the traditional solver. Pretty easy goalpost for AGI to achieve. If a model trained with deep learning can't do it, it's simply a skill issue and it should try getting good.
@dr.mikeybee · 2 months ago
Neural nets are universal function approximators, so they should be able to approximate any function. I don't know that this is the best way forward, however. The computational reductions made by finding the right abstract representations, for example, show how much can be accomplished with a bit of System 2 thinking.
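The approximation claim can be illustrated with the smallest possible case: a two-unit ReLU network that represents the absolute-value function exactly, via the standard identity |x| = relu(x) + relu(-x). (This example is mine, not from the comment; general continuous functions need more units and are only approximated, not matched exactly.)

```python
def relu(z):
    return max(0.0, z)

def tiny_net(x):
    # Two hidden ReLU units with input weights (+1, -1) and output
    # weights (1, 1): this computes |x| exactly, the simplest concrete
    # instance of a network representing a nonlinear target function.
    return 1.0 * relu(1.0 * x) + 1.0 * relu(-1.0 * x)

for x in (-3.0, -0.5, 0.0, 2.0):
    assert tiny_net(x) == abs(x)
```

The universal approximation theorems generalize this: with enough hidden units, such sums of simple nonlinearities can get arbitrarily close to any continuous function on a compact domain.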
@Houshalter · 2 months ago
@@thedededeity888 Thousands of the best humans working for decades did that. AI will get to the point that it can do all of that in a few hours on a cheap GPU. There will be no reason for humans to still be around when it gets to that point though, so you will never see it.
@LatentSpaceD · 2 months ago
Woahhhh !! Keith is in the philosophical steakhouse !! DanG !! Buckle TF uP ..
@memegazer · 1 month ago
Nobody relevant misunderstands Searle... we prima facie live in a post-Searle world... there is no philosophically valid test to rewrite that history.
@SmileyEmoji42 · 2 months ago
Linux is NOT a complex program. It deals with a very limited set of abstractions which it has the benefit of effectively defining, i.e. a Linux process is whatever Linux says it is. If you have a different definition then you are, by definition, wrong. This is why programs should be commented - without comments it is meaningless to say that a program is wrong except w.r.t. something that was in the programmer's head that he didn't actually code. With other programs, we can compare them to the real world. With operating systems that is not possible - there are no Linux processes anywhere other than in Linux systems.
@fakt7814 · 2 months ago
Interesting point.
@thegreatgustby · 1 month ago
Isn't the Chinese Room thought experiment kind of pointless? What if there is actually a person understanding Chinese in it? Then you still wouldn't be able to tell. Similar to how, between two humans talking, there is no proof that the other actually understands, let alone is conscious, rather than just following hardcoded rules. So the whole computation-gap idea is built on dust.
@drxyd · 2 months ago
Didn't enjoy the Doom Debates episode, there really isn't anything to debate.
@sonOfLiberty100 · 2 months ago
Which doom debate do you mean, the one on Discord?
@memegazer · 1 month ago
How to get down a rabbit hole... postmodernism is not science... finally Tim and I agree. You can't found epistemology on some computations rather than others.
@memegazer · 1 month ago
I love Wolfram... he accepts the idea that 42 is just the random seed of your fiction reflected back by reality, which is just as condescending as Tim.
@memegazer · 1 month ago
Let's start a simple recursive computation... explain to me how Tim makes sense in a universe where distinction is not possible... this dude never read Tarski is the vibe I am getting.
@memegazer · 1 month ago
What about counterfactual temporal depth?
@memegazer · 1 month ago
"once you get far along that depth" no thanks scrub your memory that leads to self-refutation bc of temporal dimensionality
@Thedeepseanomad · 2 months ago
It is conceivable that a rock has what we classify as consciousness when scaled up to the complexity of whatever structure gives our brain what we call consciousness. What seems truly absurd is a rock having the same level or kind of consciousness as you.
@AlexanderNaumenko-bf7hn · 2 months ago
Leave Searle to pass messages back and forth, but replace the instructions book with a real Chinese person. Searle is still clueless about the Chinese language. What does it tell us about the book?
@memegazer · 1 month ago
Oh crap... I came back to watch this vid after Joscha Bach's recent talk... surprised to see Brainfuck mentioned as an efficient meta search.
@memegazer · 1 month ago
The distinction is not "subtle"... either there is a universal language that can be explained succinctly or there is not. I disagree that there is not some universal language, no matter the domain you want to iterate upon semantically/with meta elevation. But that is just a theory, a meta-language theory.
@joehopfield · 2 months ago
"so we know whether they'll halt or not" - 🤣 The most successful brains on the planet solve complex problems without tokenizable language. Seems like we're assuming language underlies intelligence. Billions of years of life on this globe argues otherwise.
@henryvanderspuy3632 · 2 months ago
AA Cavia
@BehroozCompani-fk2sx · 2 months ago
"A computational process simulating the human mind is not the human mind". No it isn't. It turns out it is superior in computation to the human mind . 😂😂😂
@eadlam · 2 months ago
It doesn't
@everysun31 · 2 months ago
Because it doesn‘t.
@biagiocauso2791 · 2 months ago
First. I love your channel
@rockapedra1130 · 2 months ago
Second. See first.
@Dragon-ul8fv · 2 months ago
Keith Duggar holding back his opinion on Hinton and Hopfield getting the Nobel Prize for physics was an absolute travesty and painful to watch. Keith, we watch you and this show for your unabashed opinion; stunting your intellectual rigor to pay some kind of respect or "homage" to the grandfather of AI is not intellectually honest and hurts the integrity and popularity of this show. We come here for the raw, unfiltered opinions you and Tim Scarfe give. Now, I am an absolute layman in this field, but I can tell you with utmost conviction that this should not have been a Nobel Prize for physics; it was instead shoehorned in to capitalize on the AI craze. Second, I have seen some recent videos on Hinton, and he is about as far away from cutting-edge research in this space as the moon's orbit is from Proxima Centauri. His opinions are drastically outdated and he is not keeping up with current research and the state of affairs. Keep it real, Keith; that's what will make this show rise to the top.
@nomenec · 2 months ago
My friend, I promise you I was not holding back. I simply and honestly do not know enough about this situation to have a strong opinion, and I don't even care enough to investigate. First, I think the fact that the Nobel Prize is almost always awarded for discoveries rather than ideas (there are some exceptions) is quite strange. Viz., I value far more the people who abduct and predict correct ideas than those who experimentally confirm, sometimes entirely by *accident*, the ideas of others. Thus, I have actually paid very little attention to the Nobel Prize for most of my life. Second, the Nobel Committee for Physics is composed of physicists selected by physicists. So, I mean, either the award is for a discovery that is very important in physics, or they decided to kick their own field in the nuts. As a rule, I opt for the more generous interpretation unless I have evidence to the contrary. Either way it reflects more on the sad state of physics today than anything else, imo. Third, I stated very clearly that I firmly believe the entire academic field is rife with corruption, politics, and petty human behavior. And this absolutely could be a case of that; but I don't *know* that it is. A big reason I personally g.t.f.o. of academia is exactly this elephant in the room. Fourth, I've seen some of Jürgen Schmidhuber's criticism regarding credit assignment, and it is a massive problem also rife with corruption and all the rest. However, there is no world in which I'm going to spend my time tracking down and confirming Schmidhuber's references for an award I don't care about (see my first point). And I'm not just going to regurgitate his opinion as my own without verification (because I'm intellectually honest). Fifth, I kind of stopped paying attention to Hinton last year when he quit Google and went full-on "Doom is nigh!", because I think it is far away and I don't appreciate baiters and alarmists.
Look, I promise, and it honestly should be obvious, that when I have an opinion I *do not hold back* and actually that has and probably will continue to cause a lot of problems for me in life. But, alas, being an open book a-hole is baked hard into my DNA, for better or worse.
@PhillipRhodes · 2 months ago
> Why does the Chinese Room still haunt AI? It doesn't. In fact, it never really did.
@ahahaha3505 · 2 months ago
A bit glib, no?
@a.v.gavrilov · 2 months ago
Sorry, but the two "Chinese room" articles commit the fallacy of division and the fallacy of composition (respectively) as logical mistakes. So what do you want to talk about here?
@memegazer · 1 month ago
Tim tosses around this idea about computation that he never defends. A shameful thing he refuses to do as a human. As a human I accept my limits... and feel humiliated by Tim's devil's-advocate content filler... must exit this vid... again... sorry folks... can't listen to this sophistry except in small doses.