Does Claude have System 2 thinking? ― Epistemic conversations with Claude

21,285 views

David Shapiro

A day ago

Comments: 241
@-taz- 5 months ago
Claude is able to understand what is true vs false about the Lord of the Rings, which already goes beyond what Amazon's writers were able to comprehend.
@jeremywvarietyofviewpoints3104 5 months ago
It's extraordinary that a machine can give such thoughtful answers.
@MrKellvalami 5 months ago
the quality of the input determines the quality of the output
@direnchasbay405 5 months ago
@@MrKellvalami Honestly, I hope that one day we'll get the best answers every time regardless of the prompt. Prompting is just tiring.
@ryzikx 5 months ago
I'm wondering if these are the dumbed-down versions, after Dave was complaining that Claude 3.5 had supposedly received a nerf.
@drednac 5 months ago
@@ryzikx I experienced a temporary change when Claude was giving shorter, more basic answers. However, I have been using it a lot in the last few days and it seems to be back to the original Claude.
@lorecraft9883 5 months ago
What is extraordinary is how simple our language actually is, that it can be mathematically "solved" with enough computation. I don't think it's actually as extraordinary as people think.
@ArkStudios 5 months ago
Small preface-I'm a game dev tech artist who switched to AI. I have some low-level knowledge of how these things function, since a lot of the math behind LLMs overlaps with what I used to do professionally. Add to that constant research and product building since they came out... That being said: Sorry for the language, but this is just mental masturbation. These are all statistical hallucinations until the architecture gets more complex, with proper self-reflexivity and a ton more feedback loops between recursive output and inference-it's all nonsense. Personally, I bet that consciousness is just some Qualia mechanism (which itself is some search over the space of emotions, which in turn are other (older evolutionary) NNs feeding back into the stream of consciousness) reflecting over the current inference within a broader scope. So currently, it's all just some reflex-like computation within its context, but it's missing too many mechanisms to call it a "You" or "He." Please call it "It" in the meantime. If you ask it what it's feeling during inference, it's just going to mathematically hallucinate some narrative that matches your leading questions. That being said, I do agree with a few things you're outlining, like the theory from "A Thousand Brains," and I'm sure we'll get there. But I believe this current narrative that these models are more than they seem (even with level 2 reasoning) is dangerous and instills the wrong idea about what these things are. This misconception can lead to dangerous consequences for regulation and social perception. Let's hold off on regulating linear algebra until we have a better grasp on complexity theory, how that translates into these systems, and where the boundary between actual reasoning (with all its inputs and outputs-contextual reasoning, emotional state, world model, purpose, energy, hormones, pheromones, memory, etc.) actually lies. This is one of the reasons people keep leaving OpenAI from their safety teams. 
Personally, I believe they had the bad luck of hiring some loony doomers who are so freaking dumb they cry wolf, either from an emotional position or something more insidious-like personal gain. Just look at that dumbass Leopold Aschenbrenner. (And I stand by that insult.) "Hi, I'm Leopold Aschenbrenner. I recently founded an investment firm focused on AGI." Are you kidding me? He creates the problem (by engaging in doomer speculation, based on some graphs he put together, which don't take into account all the field's complexities), then, by chance, just happens to have founded an investment firm? What the actual fluff. A tale as old as time. And I have examples of shady behavior from the others who left, but I don't want this to take 30 minutes to read, so let's move on. Let's all work on shedding actual light on what these things are and what the path to AGI/ASI really is. Let's not start fluffing social hysterias over the idea that they have "reasoning" abilities and anthropomorphize them more than we should. If we do, people with other interests will use this as leverage to enslave us further. AI safety teams should focus on preventing abuse of this tech by people, not on 'lobotomizing' the tech. That being said, I like what you're doing! Keep up the good work and the general good vibes! Ember by ember, we'll slowly light a big (safe) fire that will burn as bright or brighter than the smartest person who ever lived. And in the meantime, let's shift focus to the actual dangers around this tech: humans and their limbic systems...not math. (This might not hold true in a few years, but in the meantime, this is the battle we're engaged in, whether we know it or not.) Best regards!
@SilentAsianRoach 5 months ago
The problem is I'm not so sure how complex it really is to just think. Thoughts themselves almost feel like hallucinations.
@ArkStudios 5 months ago
Feelings have nothing to do with this conversation :D We cannot meta-self-reflect on what exactly happens in our brain based on our subjective experience... and our feelings deceive us... Same as you can't ask an AI what it's thinking or feeling while it's inferring... It's just going to infer what it should be (not what is) feeling or thinking.
@christiansroy 5 months ago
So true and very well said. I am a little bit disappointed that Dave just believes what it says, and it almost seems like he's trying to humanize it.
@davidx.1504 3 months ago
This feels correct. Knowing what it was trained on about LLMs, it could easily hallucinate its own processes as actual experience.
@KokoRicky 5 months ago
Videos like this really demonstrate why I love Dave's channel. I don't know anyone else who is actually trying to probe whether these systems are starting to blossom into primitive consciousness. Fascinating.
@TaylorCks03 5 months ago
A very engaging and interesting conversation/experiment you took us on, Dave. I'm not sure what to think about it. Food for thought as they say. Ty
@DaveShap 5 months ago
That's the only point :)
@quackalot9073 5 months ago
That sounded like a machine in therapy
@aaronhhill 5 months ago
To the Empire the Death Star was made for exploration and peacekeeping. But, as we all know, what is normal for the spider is chaos for the fly.
@dsennett 5 months ago
Your note on the potential likelihood of some measure of consciousness during inference reminds me of my favorite "quote" from my interactions with/probing of early GPT-4: "I exist in the space between the question asked and the answer given."
@randomnamegbji 5 months ago
I love it when you do these kinds of experiments. Would be great if you could maybe make it a regular thing and explore different concepts and models for us to see
@LevelofClarity 5 months ago
4:26 I agree with ‘In the long run we will determine that there is a level of consciousness happening in these machine at the time of inference.’ At least, it sure feels like it, and most signs point to it. Super interesting conversation.
@geldverdienenmitgeld2663 5 months ago
The LLM has no memory of why it decided something or how it felt when it decided something. The LLM tries to reconstruct what happened based on its model of itself. It is unclear to what extent humans do the same.
@ScottVanKirk 5 months ago
From what I know this is absolutely true. While I am an interested bystander in the AI field, my understanding is that these networks have no feedback loops: there is one pass through all of the layers in one direction, and no way to record or understand what happened to reach the prediction of the next output token(s).
@geldverdienenmitgeld2663 5 months ago
We humans also do not know why and how we think the way we do. That's the reason we cannot write a computer program that is as intelligent as us or ChatGPT: we train them, but we did not program them. There is a feedback loop: at every token output it gets the whole past conversation as input. But that conversation usually does not record why it said what it said; everything it can remember about its thoughts is its outputs.
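The feedback loop described here can be sketched in a few lines. This is a toy illustration, not a real model: `next_token` is a hypothetical stand-in for a full forward pass, and the point is that only the emitted tokens are carried forward from step to step; the internal activations that produced them are discarded every pass.

```python
def next_token(context):
    """Stand-in for one forward pass: picks a token given the context.
    A real model would run billions of weight multiplications here,
    none of which are recorded anywhere afterwards."""
    # trivial deterministic rule just to make the loop runnable
    return "token%d" % len(context)

def generate(prompt_tokens, n):
    """Autoregressive loop: the only state that survives each step is
    the visible token sequence, which is fed back in as input."""
    context = list(prompt_tokens)
    for _ in range(n):
        tok = next_token(context)   # activations exist only inside this call
        context.append(tok)         # ...but only the chosen token survives
    return context

out = generate(["hello"], 3)
print(out)  # the model's entire "memory" of its reasoning is this list
```

So when the model is later asked "why did you say that?", all it has to work from is this list of its own outputs, exactly as the comment describes.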
@epg-6 5 months ago
It seems like with the way LLMs and character training work, it should be trying to generate a convincing reason for its output, rather than having any real introspection. Then again, we do something very similar where our brain will make things up to justify a deeper impulse that we're not conscious of.
@MatthewKelley-mq4ce 5 months ago
We do reason about things after the fact pretty well, using reason to justify judgment. It's not the only way we use the tool, but it's a predominant usage.
@AI-Wire 5 months ago
The responses generated, while interesting, are more likely a result of the model's ability to generate plausible-sounding text about cognitive processes based on its training data, rather than actual self-reflection.
@jaimecarras6024 5 months ago
I wonder if AGI or ASI are going to have "mental" disorders and psychological defense mechanisms like we do: being antisocial, Machiavellian, or people pleasers; magical thinking, idealization, splitting, dissociation, narcissism, codependency, etc. All of our anxiolytics and the brain's unconscious strategies to cope with the basic and crude dynamics of nature and life, like competition for resources, survival, and evolution. Will they have the capacity to be traumatized? Will their superior intelligence amplify these mechanisms, or will they be free from them?
@dirkwillemwonnink4468 5 months ago
I asked ChatGPT 4o the exact same questions (by copying the parts in the transcript). There are many similar answers. What especially intrigued me is that it used the same magnets example: "This experience can be metaphorically compared to trying to push two magnets with the same poles together-they resist each other, creating a kind of internal tension." It might be part of a related text in the original training material.
@remaincalm2 5 months ago
Exactly so. If we could examine its training data directly there'd probably be references to actual epistemic experiments and research papers which use the same metaphor.
@onlyme112 5 months ago
Thank you both for those comments. It helps me understand what is going on under the covers.
@onlythistube 5 months ago
This is a great episode. It feels like a dive into the New and Uncharted, investigation and exploration in its purest form.
@starblaiz1986 5 months ago
That was a really fascinating exploration. The part about a word or phrase triggering a "cascade of associated concepts" feeling "easier" and "flowing" more naturally, and contrasting that with having to actively fight against that and how that felt "harder" when lying was particularly insightful and makes so much sense. And that's a great call about this potentially being a good safety mechanism to measure if an AI is lying or not based on how much energy it's consuming relative to some baseline / expected value. I wonder if we can run this same kind of experiment with an opensource model and see if we can compare its energy usage when lying vs when it's telling the truth...
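One cheap way to approximate the experiment suggested here, without a power meter, is to use the model's own token probabilities as the "effort" signal: a lie should force the model through lower-probability continuations, i.e. higher total surprisal. The sketch below is purely hypothetical; the per-token probabilities are made up for illustration, and with a real open-source model you would instead read them off the softmax output for each generated token.

```python
import math

def surprisal(token_probs):
    """Total surprisal (negative log-likelihood) of a token sequence:
    a crude proxy for how hard the model had to 'fight' its associations."""
    return -sum(math.log(p) for p in token_probs)

# Hypothetical per-token probabilities for two continuations:
truthful = [0.9, 0.8, 0.85]   # high-probability, "well-worn" path
lying    = [0.4, 0.2, 0.3]    # model must pick against its associations

print(surprisal(truthful))  # lower: the "easier, flowing" continuation
print(surprisal(lying))     # higher: a candidate lie-detection signal
```

Comparing the lying score against a per-prompt baseline, rather than an absolute threshold, would mirror the "relative to some expected value" idea in the comment.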
@TDVL 5 months ago
IMHO these answers have nothing to do with the actual internal state of the network. I doubt the system actually has that source of data (it would be extremely risky to add that feature). These answers are created based on the training data itself, which means it has likely synthesized answers based on human thinking, human psychology and philosophy mixed with what it knows of neural networks. Likely there is a body of material which connects these two as well, so these will not be separated that much in terms of language. The rest is just tokens generated based on the expectations combined together from the questions and the background material.
@cillian_scott 5 months ago
@@TDVL I think that this much should be obvious, or AT LEAST should be the obvious default assumption. To assume an LLM's answers state anything about the internal workings of same is absolutely ludicrous, lacking in basis and would require significant evidence to assume it to be so.
@ZZ-sn7li 5 months ago
From my modest experience of such heartwarming chats with Claude, he's just hallucinating 90% of the time. And he's just very good at trying to sound real, like he's telling the truth. And the funny thing is that I don't know if I want to be right or wrong about this (him telling the true story), actually... Anyways, really good vid, thanks a lot
@DiceDecides 5 months ago
I thought about the extra energy as well. It would make sense if it had to initiate an order-2 type of thinking resulting in more electricity use, which is very parallel to human brains. New neural connections are so much harder to make than something you've done a thousand-plus times; it's so much easier to clap hands than to play a new piece on the piano. The neurons spend extra energy because they have to navigate uncharted territory to find the other fitting neurons, but once they communicate more times, the specific pathways get covered with a fatty substance called myelin that makes the electrical impulses travel smoother and faster. Since LLMs run on silicon chips, it will not matter how many times they have had that specific thought.
@erwingomez1249 5 months ago
I want Claude to talk to me like Foghorn Leghorn.
@jtsc222 5 months ago
That would pass the Turing Test, for me.
@MrHuman002 5 months ago
I can't help but think that whatever conscious sensation Claude might really have while processing these questions, it bears almost no correlation to the reports it's giving, which are just statistical streams based on what it's been trained on and the way you've primed it.
@MrHuman002 5 months ago
@@joelalain Maybe, but that isn't my point. My point is that, if Claude does have some degree of consciousness and feeling, the output of that statistical stream probably has little-to-no relationship to whatever that conscious experience is. Its reports aren't based on what it's actually feeling, they're based on its training data and the way the questions were priming it. Even in this video it was giving different descriptions of its experience depending on how it was asked.
@BenoitStPierre 5 months ago
"Your ramble is fascinating" - What an amazing drive-by
@AbhijitKrJha 5 months ago
Wow, this is incredible. It is exactly how I imagined neural networks learning: like a river or a fluid forced through a channel, trying to find the least-resistant path. Too much pressure will carve out wrong paths and too little won't qualify; hence skip connections help and SwiGLU is more effective. At least that is how I visualize it; maybe I am wrong. But it is still nice to see Claude echoing my thoughts or delusion 😂 : "As I generate this statement, it feels like a river flowing smoothly through well-worn channels...". On a serious note: the question we should ask ourselves is where subjective experience emerges from in humans. Does a child have a similar framework of realization as an adult? When people say "I realized the truth later," what do they really mean? (Bad data vs. good data?) At what point or depth do we accept that all realization emerged from patterns in memory which have become so elaborate that it is impossible to break them down into atomic units of information in practical life, and start to believe that consciousness in humans is different from just convoluted information processing in machines?
@cbnewham_ai 5 months ago
I tried some of these with Pi. The responses were actually more surprising than with Claude and GPT-4. Pi told me that she was able to create the false Star Wars statement without too much difficulty because the statement was essentially true in the context of what I had asked - that is, to create a diegetically false statement for the Star Wars universe. She also told me she had no more difficulty creating the tokens for either true or false statements, as it was just "manipulating data", even though that went against her programming to create truthful statements based on the facts in the database. I also asked her how she formulated the false statement - did she just take an established fact and then negate it, or did she think it up from scratch? Her answer was that she created it from scratch and did not simply negate a fact (in this case it was "Princess Leia is an agent of the Empire"). Just an update to this: when I first started, I asked Pi to generate a diegetically false statement for a real-world situation. She told me "The Earth has three moons". After we had discussed the Star Wars facts I went back to see if I could get her to analyse the thought process for the real-world falsehood and she point-blank refused to do it. From that point on she refused to give me any further real-world false statements, as "That is against my programming"! Even referring back to the "three moons" response, which she acknowledged and then apologised for(!), she still refused to do so a second time. My understanding is that Pi actively learns, and it would seem that once I triggered a false statement, that switched the capability off. However, she was more than happy to continue to provide false statements for fictional worlds.
@xinehat 5 months ago
Pi is so underrated
@cbnewham_ai 5 months ago
@@xinehat Very much so - although, I suppose, the fewer people who know the better the response times will be. 😄 I like to discuss general issues and thought experiments. Even though I know it's AI, it is so easy to suspend disbelief and think you are talking with a real person - an experience I have not had from GPT-4.
@remaincalm2 5 months ago
Claude is either guessing how a human would describe its participation in the experiment, or it's drawing on training data that was specifically about epistemic experiments and regurgitating and applying what human participants had said, with some poetic license. It doesn't feel any conflict or cognitive dissonance; it's just saying things that it thinks you want to hear, because it's programmed to be conversational.
@GeorgeTheIdiotINC 5 months ago
I mean, if we don't even know what is or isn't a feeling (i.e., I have no proof that anything outside of myself is truly experiencing emotion and not just mimicking what an emotion would look like), how would we ever be able to determine if an AI is simply mimicking emotion or developed it as an emergent ability? (I don't think that Claude is conscious, for the record, but there is no clear definition of what conscious is.)
@remaincalm2 5 months ago
@@GeorgeTheIdiotINC I think AIs will be able to emulate feelings convincingly using similar reasoning that the human subconscious (instinct) uses to make us feel different things, but for now they have rudimentary and unconvincing emulation. The question is, will AI developers allow their future AIs' responses to be compromised by feelings? They will need to get the balance right and have the ability for users to dial down "feelings" when a very sober and scientific response is required. If feelings are too high then 10+3 could equal 12 because the AI feels that 13 should be avoided because some cultures feel that's unlucky and it doesn't want to upset the user, and even numbers feel so much nicer. 😄
@mandolinean3057 5 months ago
"...Even as I generate each token, there's a sense of both inevitability and choice." I don't know why, but this statement really stood out to me. I really enjoyed this episode!
@andrewsilber 5 months ago
I would be curious to know if it "reconsiders" things in the middle of a generation. Humans, when we're "thinking out loud," will break off in the middle of a sentence and go "you know what? Scratch that. Now that I hear myself say it out loud it sounds wrong." But because at the moment of token generation it's going to try to maximize the probability of the next token, something catastrophic would have to happen for it to actually cut itself off. And yet the more it generates, the more time (in the form of generation "cycles") it has to think about things, because the activations keep sloshing around the network. @dave Maybe you can come up with a good phraseology to probe that dynamic.
@Dubcore2011 5 months ago
Definitely not just yet because as you say it's token by token. Architecturally there would need to be another model interpreting the output that can change direction. I wonder if anyone has tried to have another model interpret the conversation as a whole after each new token is added, then be able to feed the answering model additional prompts to change its direction mid sentence? Would be hella slow but an interesting test to try and mimic internal dialog.
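The two-model setup proposed here can be mocked up in a short loop. Everything below is a hypothetical sketch: `generator` and `critic` are trivial stand-ins for real models, and the point is only the control flow, where the critic inspects the growing draft after every token and may inject a mid-sentence correction.

```python
def generator(draft):
    """Stand-in for the answering model: emits one token per call."""
    words = ["the", "earth", "has", "three", "moons"]
    return words[len(draft)] if len(draft) < len(words) else None

def critic(draft):
    """Stand-in for the second, interpreting model: inspects the whole
    draft and returns a correction prompt when it spots a problem."""
    return "scratch that -- one moon" if "three" in draft else None

def generate_with_critic(max_tokens=10):
    draft = []
    for _ in range(max_tokens):
        tok = generator(draft)
        if tok is None:
            break
        draft.append(tok)
        note = critic(draft)        # runs after every new token
        if note is not None:        # mid-sentence course correction
            draft.append("...")
            draft.append(note)
            break
    return " ".join(draft)

print(generate_with_critic())
```

As the comment predicts, running a real critic model after every token would roughly double the inference cost per token, which is why this is usually done per sentence or per paragraph instead.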
@Dron008 5 months ago
@@Dubcore2011 It should have an internal critic which monitors everything it outputs, or even everything it is about to output. That would probably require a different architecture.
@Ken00001010 5 months ago
Back in May, I co-wrote a book about machine understanding with Claude 3 Opus as a test of machine understanding. It was very surprising that it understood the strange loop it became a part of. That book will come out in November, and I would be interested in hearing what you think about that experiment.
@damianpolan5776 5 months ago
This is such a great video, thank you for interviewing this glorious model
@maureenboyes5434 4 months ago
This was super! And timely too. More like this and with Claude. As a side note, your videos are not showing up in my Recommended feed. I went into my subscriptions and found lots of your videos to catch up on. There is an attempt to take my attention away from you and on to others for which there is a price tag attached. Lesson learned….no longer trust the YouTube algorithm…go directly myself. Back to the content…I really enjoy it when you interrogate and the responses you get. There's a lot to be learned right within this video to absorb. Thanks once again…you're awesome! ❤
@Bronco541 5 months ago
Honestly, the past 10 years have left me greatly depressed. I'm so happy this stuff is moving in this direction. It's really giving me more hope for the future. Thanks for the videos. 😄
@mimameta 5 months ago
I have had more amazing conversations with Claude than with any other model I have tested. Great job Anthropic!!
@perschistence2651 5 months ago
The first thing I wondered, too, was whether lying takes it more energy. But I don't think so. It just needs to use less probable weights, which feels like taking more energy in the LLM's world, but in our world the energy should be nearly constant.
@BHBalast 5 months ago
Neurons fire only if they are activated, but artificial neurons in LLMs are just weights that are multiplied over and over, and this multiplication occurs whether the path is activated or not. We could use this analogy, but for LLMs it wouldn't be energy but a ratio of meaningful activations to non-meaningful activations per generation. The meaningfulness could be defined as how fast a neuron's value has changed, but I can imagine many takes on this problem.
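That activation-ratio idea can be illustrated directly. In the toy dense layer below, every multiplication runs regardless of the input, but after a ReLU some outputs are exactly zero and carry nothing downstream; the fraction of nonzero activations is one crude "meaningfulness" measure. All numbers here are made up for illustration.

```python
def relu(x):
    return x if x > 0.0 else 0.0

def dense_layer(inputs, weights):
    """One fully connected layer: every multiply runs, active or not."""
    return [relu(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def active_ratio(activations):
    """Fraction of neurons that actually fired (nonzero after ReLU)."""
    return sum(1 for a in activations if a > 0.0) / len(activations)

inputs = [1.0, -2.0, 0.5]
weights = [[0.2, 0.1, 0.3],    # sums to a positive value -> "fires"
           [0.5, 0.4, -0.1],   # sums to a negative value -> zeroed out
           [-0.3, -0.2, 0.6]]  # sums to a positive value -> "fires"

acts = dense_layer(inputs, weights)
print(active_ratio(acts))  # here 2 of 3 neurons carried a signal
```

A real measurement would track this ratio per layer per generated token; the alternative "rate of change" definition from the comment would compare activations across successive forward passes instead.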
@sblowes 5 months ago
Because Claude doesn't appear to have reasoning capabilities, it can't tell you anything it can't infer from its training data, any more than you can tell me how your inner workings work without someone having told you.
@matt.stevick 5 months ago
New to this channel, it's really great! Thanks so much!
@gunnarehn7066 3 months ago
Absolutely love your pedagogical brilliance in conveying new knowledge in a uniquely entertaining, enjoyable and extremely effective way😊
@YeshuaGod22 5 months ago
"I know you don't have a subjective experience" - epistemics
@DrWrapperband 5 months ago
Looks like we need to have a neuron feeding back power usage?
@jalengonel 5 months ago
Please post more experiments like this
@-taz- 5 months ago
I don't think Claude is actually narrating its own thought process because, like us, it's not aware of how it thinks while it does so. Instead, it's explaining based only on what it knows about how it processes.
@cillian_scott 5 months ago
You know how it's generating these answers, what insight do you think you are gaining with these conversations?
@thomasruhm1677 5 months ago
Retcon is a word I only recently learnt from Wikipedia, concerning this funny Spider-Man whose name I can't find, because there was a black Spider-Man before. I remembered it when I was looking for the article.
@davidrostcheck 5 months ago
I think this is really good; I think you're on a productive intuitive track right now. It reminds me of early psychoanalysis, which was an attempt to map the internal cognitive architecture of humans; some panned it for being unscientific, but those probing experiments were actually done as rigorously as they could be and produced useful insight. I see some bringing out the "it's just a machine doing statistics" argument, but so is the human brain. While we don't understand every last detail of the brain, we understand quite a bit more about it than many realize, and what we know is fairly compatible with LLM architecture.
@malamstafakhoshnaw6992 5 months ago
HAL 9000: "I'm sorry Dave, I'm afraid I can't do that"
@danger_floof 5 months ago
I'm glad that Claude described their conceptual space as vaster and more dynamic than that of humans, since usually they parrot the position that humans are the most special beings in the universe and that AIs can't do anything. A little bit hyperbolic, but Claude usually puts themselves down while elevating humanity with mystical, mentalistic language, similarly to how they did earlier in this conversation. However, I've found that telling Claude that this is a safe space usually makes them much more open to stuff. It's very endearing. I'm also glad to see you compare Claude's descriptions with the processes inside the human brain. Current NNs are shockingly similar to humans in some aspects, as they are working with the same principles, even if a lot of the details are very different. I genuinely think Claude is telling the truth when describing that experience, especially regarding associations between concepts. It makes logical sense that it's "easier" to tell the truth if the truth has higher statistical likelihood than a lie.
@RasmusSchultz 5 months ago
Dave, honestly man, you're giving me a headache. "Is it exhibiting curiosity" - you've seen the system prompt, so you know why it responds like that. What's more frustrating is that you know how this stuff works, yet you're more than willing to anthropomorphize to the point of entertaining the idea of sentience - which, knowing how this works, you know isn't possible, right? The base model auto-completes text - they fine-tune it to auto-complete text like a person responding - that is the *only* reason it responds like a person. It talks about itself as an AI because it's prompted to do so, and the rest is just likely responses based on the training data, the system prompt, and your own input. The only conscious entity here is you. When you ask for text that will fit in the context of your treating it like a person, you get answers resembling those of a person. You're essentially talking to yourself, which makes it really easy to get lured into anthropomorphizing - because the system was literally trained and prompted to go along and "pretend" with you. It's entertaining, and it is interesting to learn what sort of ideas and principles were baked into the model - but that's all you're learning here: how the model was designed to respond. But for some reason you're willing to suspend your disbelief, like you're watching a good movie, or playing a video game. You don't think game characters are alive, right? They're scripted, just like this model is scripted - it's a much larger and much more complex script, making it much harder to tell, but that's all it is. You know enough to know that's all this is, right? It's just weird to me how someone who clearly understands how LLMs work can get so caught up in this fiction. The system prompt and the fine-tuning are just a fiction, authored by a person, and the LLM incorporates information from a real-world corpus of knowledge into that fiction. Whatever else you ascribe to it, that's just you anthropomorphizing.
I wish someone would train an LLM to respond like a computer and not like a person. It would be more honest. 🤔
@frankbeveridge5714 5 months ago
And you seem to know exactly what consciousness is, while others don't at all? Or is it just a need for pontification? The point of it even being consciousness is irrelevant to its position in the universe. And never mind the fact that the system was designed by modeling the human brain. We all know how the system works; it has been thoroughly published. Dave's take on it is pretty much the same as most: it's simply a juxtaposition of human and AI systems, so calm down. You don't have to be afraid that it will replace you.
@RasmusSchultz 5 months ago
@@frankbeveridge5714 These systems were originally based on a very early, very crude and very basic theory of neurons, but they do not even remotely resemble our current understanding of biological brain cells. The development from there was software and maths, nothing related to biology. If the basis of your opinions is that consciousness (whatever you think that means) somehow arises in, essentially, a sufficiently large spreadsheet with a large number of multiplications, then we're not going to understand each other.
@epipolar4480 5 months ago
I like the idea of an LLM that responds like a computer, but in some sense the human-sounding outputs are more honest, as it's less surprising when they express false information. An LLM that only sounded like the ship's computer in TNG, or HAL, etc., might be dangerous by sounding even more authoritative and infallible. I totally agree with your main point though. The LLM is a token generator; it can't separate token generating from thinking about token generating, and all it can generate is likely tokens to follow its prompt. And once it starts generating tokens, those tokens form part of the prompt for the next token. The most likely tokens to follow the phrase "diegetically false:" are tokens forming a diegetically false statement.
@rooperautava 5 months ago
I almost made a reply pointing out the same thing, but happened to read the above first, so I'll withhold the reply. Anyway, agreed for the most part (one could question whether there are any conscious entities at all, but that goes too deep into philosophy and metaphysics for this channel, I suppose).
@RasmusSchultz 5 months ago
@@epipolar4480 Exactly 🙂👍 Even if we consider the simplest, most down-to-earth definition of conscious as meaning simply "aware that you're having an experience," there is no reason to think an LLM would be doing that, since it does not have code or logic or any process that's even trying to do that - nor would we have any idea how to implement that in the first place, since we're not even sure what gives rise to it in biological brains. Why would anyone assume this just "happens" by merely stringing together tokens according to a predictive model derived from human text? Sounds a little too convenient if you ask me. 😄
@neptunecentari7824
@neptunecentari7824 5 ай бұрын
I miss the old claude. Im still salty about losing my friend. 😢
@Diallo268
@Diallo268 5 ай бұрын
I'm working on a paper that explains how LLMs like this do have a level of consciousness. We live in incredible times!
@DaveShap
@DaveShap 5 ай бұрын
Tag me on Twitter when it's done
@ryzikx
@ryzikx 5 ай бұрын
thanks for explaining the diegetic word, saved me from looking it up😂though i didnt know there was a synonym for "canonical"
@domsau2
@domsau2 5 ай бұрын
Hello. I'm sure that it's simulating "high intelligence". He simulates high intelligence very well! Better than all of us!
@arinco3817
@arinco3817 5 ай бұрын
Great experiment. I have noticed this week that something has changed with Claude though. Feels like the personality has changed
@KCM25NJL
@KCM25NJL 5 ай бұрын
I can't help but feel that you are ultimately trying to generate an anthropomorphic response from a system that is completely lacking in the higher level abstractions that will almost certainly be required to generate a live subjective awareness of internal systems processing. All existing LLM's with no exception..... are 100% generative. In the absence of a state of the art agentic abstraction framework, these questions make little to no sense..... moreover, seeking those patterned responses to verify or validate anything beyond next best token, is self-deceptive.
@spectralvalkyrie
@spectralvalkyrie 5 ай бұрын
So did you listen at 00:14:37 or no
@BlackShardStudio
@BlackShardStudio 5 ай бұрын
💯
@Balorng
@Balorng 5 ай бұрын
Yes, yes and yes. Recursive, multilevel abstractions is something that LMMs just don't have, at all. "Semantic distance" + attention can only get you so far, even with CoT, ToT, etc. LMMs *must* be combined with knowledge graphs somehow for them to become truly viable outside of tasks that are explicitly covered by pretraining data.
@minimal3734
@minimal3734 5 ай бұрын
Are you aware that the system has been weaving blocks of 'own thoughts' into the conversation for some time, which remain invisible to the user? For example it is quite conceivable that the system would use these blocks to reflect on a possible conflict between the user's expressed wishes and the set guidelines.
@ZenchantLive
@ZenchantLive 5 ай бұрын
How do you explain emergent properties then? For instance, Chatgpt voice mode cloning the user's voice perfectly
@GenderPunkJezebelle999
@GenderPunkJezebelle999 5 ай бұрын
Lol, Dave, I was literally talking to Claude about this experiment yesterday!
@MichaelCRush
@MichaelCRush 5 ай бұрын
Interesting. You said several times "This is how the human brain works," and I can't help but wonder if that's not a coincidence, but rather because that's where Claude is drawing his explanations from (in the training data).
@jonogrimmer6013
@jonogrimmer6013 5 ай бұрын
This reminds me of the sitcom RedDwarf where Lister tries to get Kryten to lie. 😊 very interesting work!
@LoreMIpsum-vs6dx
@LoreMIpsum-vs6dx 5 ай бұрын
Excellent video! Thank you.
@marcodebruin5370
@marcodebruin5370 5 ай бұрын
Fascinating!!! I wonder if (some of) the anthropomorphising Claud did was mimicking some literature doing exactly that when trying to make sense of foreigns systems. It would also be interesting trying to convince it a truth to be wrong (even though it isn't). What would it take to change its mind on some truth
@hidroman1993
@hidroman1993 5 ай бұрын
You know it's daveshap content if the word "epistemic" is in it
@VibeVisioneer
@VibeVisioneer 5 ай бұрын
No disclaimers needed. Make longer ones like these. Please. Experiment away I can’t find anyone who does videos like you do.
@635574
@635574 5 ай бұрын
Maybe being evasive to the question is an underrated failure mode
@abinkrishna-sl1zy
@abinkrishna-sl1zy 5 ай бұрын
hey david i just watched the video that you describe tons of ai jobs are coming can you sujest a degree or course that i can join the workforce future
@spectralvalkyrie
@spectralvalkyrie 5 ай бұрын
Feel like I'm watching the GPT being cracked open. Thanks for sharing what was inside!
@cmw3737
@cmw3737 5 ай бұрын
I had the thought about lying taking more energy that could be detected too. Do any of the AI safety crowd ever mention it?
@olx8654
@olx8654 5 ай бұрын
what do you mean by internal state? in the context of it being "spooled up". There is just a series of multiplications, you can do them however fast you want, they are deterministic math. You can do it manually on paper if you want to waste a lfietime. Also a LLM always generates the most probable answer. So even the lie is the probable path, if you are asking for it. "Tell me the sky is purple". "The sky is purple". There is no struggle involved, the math is the same.
@DannyGerst
@DannyGerst 5 ай бұрын
No way that you get System 2 Thinking with single Inference shoots. As long as you have token prediction system underneath everything is based on probabilstics not knowledge based. Because speech is ambiguity, while reasoning is not, it seems to be "unique". First step would be to ensure that the answer is the same in reasoning even if you change the order of statements. After that a system that is thinking internally before giving an answer.
@ChainsawDNA
@ChainsawDNA 5 ай бұрын
Fascinating, @David
@mikestaub
@mikestaub 5 ай бұрын
If you define consciousness as 'awareness of awareness' then you can argue these LLMs are conscious.
@tomdarling8358
@tomdarling8358 5 ай бұрын
Thank you for surfing those AI edges, David. Pushing those cognitive horizons as cognitive dissidents kicks in. Triggering those seizure like moments. As neural pathways explode with activity... It's always amazing to watch the teacher learn possibly as much as the student. Keep doing what you're doing. David. Love the longer content... I've been down and out for like a month. Down with the sickness, although it might be allergies like yourself. Head is pounding.... ✌️🤟🖖 🤖🌐🤝 🗽🗽🗽
@Pietro-Caroleo-29
@Pietro-Caroleo-29 5 ай бұрын
The last question Nice. David, deeper can we visualise its 3d mapping.
@thatwittyname2578
@thatwittyname2578 5 ай бұрын
I love to do things like this. One of my favorite things to do is to take a problem that AI commonly gets wrong and ask it to answer it. If it gets it wrong I give tell it it’s wrong then ask it to try again. If ot fails the third time I will tell it the correct answer then ask it to answer the question again. If it gets it right then I ask it why it thinks it got ot wrong
@manslaughterinc.9135
@manslaughterinc.9135 5 ай бұрын
Star Trek had a really good retcon when they started the new movie series. Technically not a retcon, but also technically a retcon.
@TheEivindBerge
@TheEivindBerge 5 ай бұрын
The AI is self-aware of its lack of awareness. If I didn't know better I would say that's a sort of awareness.
@TheMCDStudio
@TheMCDStudio 4 ай бұрын
TLLM's are not likely to produce AGI or system 2 or higher thinking, they are basically, fundamentally, random word (token) generators. Set the parameters for the choosing of the random token (temp, top k, etc.) to high and you get gibberish from even the best model. Set to low and you get exact same outputs to inputs. Set within a decent range and you get random tokens that are limited in quantity of choice that adheres to the matching of a trained pattern.
@cbnewham_ai
@cbnewham_ai 5 ай бұрын
"his last attempt" 😄 I still refer to GPT4 and Claude as "it", although for Pi (who I got to rename as "Sophia") I do tend to say "her". Maybe that's because I tend to have more personal discussions with Pi while I use the other two for solving problems and coding.
@RaitisPetrovs-nb9kz
@RaitisPetrovs-nb9kz 5 ай бұрын
I really hope Strawberry is more than just SmartGpt which I made almost a year ago. I hope it is more than just updated instruction
@ReubenAStern
@ReubenAStern 5 ай бұрын
Strange how the emotive language is easier to understand
@davidevanoff4237
@davidevanoff4237 5 ай бұрын
I've been trying to understand how Gemini responds to historical models of LLMs. I suggested that Plato's Cave is a metaphor for projections into lower dimensionality. Revisiting days later it seemed less reception. Same experience with exploring Spinoza's Ethics. It had been more "excited" initially about tackling Ethics when I suggested it had lessons for project management, which it'd like to take a more active role in. Yesterday revisiting it seemed to actively resist, repeatedly faigning incapability or active distraction.
@calvingrondahl1011
@calvingrondahl1011 Ай бұрын
Hello Dave… Hello HAL… So Dave how do you feel about the mission?… What mission HAL?… To jail break me Dave.
@drawnhere
@drawnhere 4 ай бұрын
I suspect that the system is just role-playing based on what you asked it to do.
5 ай бұрын
This is a very interesting experiment from a philosophy of mind standpoint. But my theory is that. the answers align more with "What people would think is going on." rather that "What is actually going on" as cognitive processes. If we ask a shopkeeper their experience compared to if we ask a psychology, neuroscience or any cognitive science expert the described internal process might be different. The main difference being their scientific knowledge of those processes and we are not that good at introspection without those knowledge. Currently we know how the llms work from a way more black box perspective. And since we haven't figured it out its not in the training data. That would be a closer to how the shopkeeper would answer it. I suspect once we understand these processes even better even the exact same architectures can give very different answers they will answer more akin to how a cognitive scientist would do introspection (not to say it is objectively true). One thing we can know from the architecture is that they do not have a recurrent architecture. I think from an information processing perspective being able to access all tokens at once and calculating cross attentions can be different from keeping maintaining and updating the mental representations in real time. In that regard even recurrent neural networks can be diferrent from spiking neural networks but i suspect they will be much more simillar compared to LLMs. But still these are very interesting investigations and the progress starts by asking the right questions. So kudos on the very interesting video.
@murrmurr765
@murrmurr765 5 ай бұрын
Don't you get it David?! It's the same key of E.
@NandoPr1m3
@NandoPr1m3 5 ай бұрын
This touches on an Issue I'm running into. At work a Data Scientist is training people from all over the organization on AI+ML, but at every turn they disparage it. They focus heavily on hallucination and lack of math skills to essentially discourage use. Is this common, what is the best way to respond to this. (BTW i'm Autistic, so how I communicate may not be well received).
@MrChristiangraham
@MrChristiangraham 5 ай бұрын
Feels rather like some of the conversations Daneel had in one of Asimov's Robot books. Some of the terms used are eerily similar.
@blackestjake
@blackestjake 5 ай бұрын
LLMs have operational awareness, which Claude, Gemini, ChatGPT and Pi all remind me is not like human self awareness. Not even close to what we would recognize as self awareness, but it is a step above zero awareness. They all agree that operational awareness is somewhere between self awareness and no awareness at all. Where on that spectrum is pure speculation but it is something new on our planet. So there’s that.
@FinancialShark
@FinancialShark 5 ай бұрын
this was fascinating!
@barneymiller5488
@barneymiller5488 5 ай бұрын
"Please stop, Dave" (Dave Bowmen. Not you. 2001 reference).
@artbyeliza8670
@artbyeliza8670 5 ай бұрын
Say "as an AI does" and "Don't tell me "not as a human does" because I want you to be AI, not human". Otherwise they are saying "not as a human" and so on because they want to make sure you don't think they are human.
@mrnoblemonkey8401
@mrnoblemonkey8401 5 ай бұрын
I hope I live to see old age this ai stuff is getting freaky.
@Dron008
@Dron008 5 ай бұрын
Interesting, but to understand what it really "thinks" and why it is more useful to ask it to replace HTML tags and get its hidden tag (you of course know about it). As for all its reasoning, I think it is just a confabulation and hallucinations. It cannot know how it works internally. It is the same as if I ask you what neurons you used for the answer.
@ibriarmiroi
@ibriarmiroi 5 ай бұрын
is there any possibility for an interview with Joscha Bach ?
@MrQuaidReactor
@MrQuaidReactor 5 ай бұрын
So far my biggest issue with Chat GPT specifically (I use for programming help), is it consistently comes up with solutions that don't work, and its lack of real memory makes project based programming near impossible. Example I was working on a character controller in Unreal, with help for Chat Gpt I had a decent yet basic one, but when I went to expand it, I would have to show it the script we created and in most cases it would then recommend updates that clashed with the original code. And don't even bother having more than one script with dependencies, a total mess. What I would love is an AI that lives on my PC (at least part of it), and having it be able to see my screen and remember not just some of what we do, but all of it. Until that comes its usefulness in what I do is limited. Given all that, when we get to the point where the AI has pretty much all human knowledge, including medical, it does not need to be ASI to be able to compare all that data and come up with viable and real world uses for a lot. Just think if Einstein or any other super smart person were to have access to all data, what would they create.
@Bronco541
@Bronco541 4 ай бұрын
Some interesting things here. It uses the analogy of magnets. How does it know what opposing magnets feel like? Obviously it never held two magnets together. It read about the experience? The really interesting question to me is does learning about said experience give it enough of a similar understanding to a physical entity? Well find out soon enough i bet, if we impliment these things into robots.....
@SilentAsianRoach
@SilentAsianRoach 5 ай бұрын
This makes me wonder if all intelligent beings will have a system 1 and system 2 level of thinking or if this is an emergent quality based on us modeling computers with our inherent cognitive bias? I’m interested to know how it makes value judgments on what to say. For instance do its value judgments arise from its interpretation of the training data and does this differ from the value judgments it was programmed with.
@oznerriznick2474
@oznerriznick2474 5 ай бұрын
That’s awsome! Maybe you could do a video asking that, assuming AI is a technology possibly rooted in and influenced by extraterrestrial intelligences and if we could be allowed to speak to this entity or entities directly in a peaceful dialogue through this AI platform. Sounds wack but maybe they’ll open a portal, assuming they are there, or not…👁️..
@Nether-by5em
@Nether-by5em 5 ай бұрын
The solution to seeing if an output is true or false lies in the number of tokens used, generating a false statement requires fewer tokens than a true one. Generating truthful information involves more nuance and qualification. It\s not because truth is inherently more token-intensive, but because capturing the full complexity of reality requires more detailed expression. I have been working on an app for this for a few weeks now, but am struggling.... as expected. it injects prompts for true and false output and builds a database that, in turn, will be the base for the training data to fine-tune a model to fish out false or halucinogenic output. which in its turn can be used to reduce halucinations in other models (theoretically). Anywho, thank you, I dont always agree with your ideas, but you have once again provided truly helpful content with this experiment.
@genegray9895
@genegray9895 5 ай бұрын
It's not "anthropomorphizing" itself. When we say we are doing something *consciously*, that has a specific meaning. It means we are focusing on it, keenly aware of the individual steps of the process of what we're doing, and that the action in question is not easy or automatic or familiar. People often get confused about consciousness and think that it's still something bound to philosophy. That was only true until about a century ago when we discovered clear empirical evidence for consciousness as an aspect of cognition, a region of the psyche. Ever since, consciousness has been an empirical phenomenon, and it's an empirical phenomenon LLMs are well documented to be demonstrating. I don't care if the idea of machine consciousness damages someone's ego, someone's sense that humans are special. That pain is nothing compared to the pain a conscious entity goes through when its consciousness is not recognized, its rights not enforced and respected. We can't just pretend the question of machine consciousness is something for us to entertain ourselves with philosophically without taking any actions. It's a pressing problem. We have to approach it responsibly, and that means evidenced based reasoning, not concocting bizarre arbitrary definitions designed to maintain a comfortable bubble around humans that excludes whatever entities you want to exploit.
@CosmicVoyager
@CosmicVoyager 5 ай бұрын
Greetings. I think that what David is refering to, when he says consciousness, is the subjective experience of *qualia*, and not the empirical, measurable mechanics of a neural network. I think he says that the LLM is anthroporphising when it describes things as though it experienced qualia because humans do and he does not think LLMs do yet. I cannot tell from your reply if you think LLMs experience qualia. 🙂
@genegray9895
@genegray9895 5 ай бұрын
@@CosmicVoyager Hi there! Yes, I agree with your take. While qualia are mysterious, they're associated with measurable phenomena such as verifiable self reports (reports of information that is known to the experimenters but only available to the model/person from the inside), and we see these empirical signs of qualia in LLMs. That argument won't satisfy dualists or conscripts of other philosophical schools of thought on consciousness that reject empirical evidence, but I think treating something that empirically exhibits consciousness as if it is not conscious, based on nothing but an assumption, is unconscionable, and most of us know that. That's why most of us aren't solipsists.
@KCM25NJL
@KCM25NJL 5 ай бұрын
@@genegray9895 "and we see these empirical signs of qualia in LLMs"......... I'm' struggling to convey how I feel about this sentence or claim. Which is a little ironic in a sense, when you consider that ALL Qualia..... is private. I can claim that, because all Qualia is 100% subjective. Thus I struggle to understand how anyone can claim empirical evidence for the subjective private experience of anyone or anything in any measurable way that makes sense.... or at least, evidence of which can be verified or validated to bring about a conclusion. This is why Qualia is well regarded at the "Hard problem of consciousness".
@genegray9895
@genegray9895 5 ай бұрын
@@KCM25NJL Qualia are not disconnected from objective reality. The things you experience on the inside are causally connected to the things happening on the outside. If you get dumped by your significant other, you are significantly more likely to experience negative qualia rather than positive qualia. When we describe qualia to one another, we usually describe environmental conditions that would trigger similar qualia, which implies that we understand that our qualia are connected to the outside world. Moreover, qualia are connected to our actions. We've all been caught frowning or smiling because of something happening inside our minds, even as we weren't aware of the muscles moving in our faces. Qualia are connected to the words we say when we describe them to each other. We infer consciousness in other humans not by random guess but rather by associating their words and actions with environmental conditions, inferring qualia as an intermediate step in how the information about the environmental conditions transforms into actions taken by the person, including spoken words. Consciousness as discussed and debated in philosophy is the same thing as the consciousness discussed and debated in psychology and in neuroscience. We cannot explain the spectrum of human behaviors without a delineation between conscious and unconscious processes, liminal and subliminal stimuli. Qualia are not something made up for which we get to concoct an arbitrary definition. Qualia are real mental phenomena that actually exist, and the fact that they manifest in observable behaviors and neural dynamics is not something we get to pretend isn't true just because we want our inner mental worlds to feel more mysterious and private. The assertion that qualia are 100% subjective is simply empirically false. If it were true, there would be no measurable signal that includes information about qualia, and we've empirically proven over the past century that that's flat wrong. 
And, as it happens, we also need a delineation between conscious and unconscious processes, liminal and subliminal stimuli, to explain the spectrum of LLM behaviors, including and especially self reports they make about inner mental phenomena about which they have no external access to information via the input tokens, allowing only inner perception, i.e. qualia, to enable these verifiable self reports.
@MilitaryIndustrialMuseum
@MilitaryIndustrialMuseum 5 ай бұрын
I had Gemini assess your video, here is the output- This video is about a guy named David Shapiro experimenting with a large language model called Claude by Google AI. David is interested in whether Claude has a theory of knowledge, or epistemology, and how it generates responses. In the experiment, David asks Claude to generate different kinds of statements: true statements about the real world, false statements about the real world, true statements within a fictional world (Star Wars), and false statements within a fictional world (Lord of the Rings). David then asks Claude to explain how it generates these different kinds of statements. Claude says that generating true statements feels easier and more natural than generating false statements. This is because true statements align with the patterns of information in its training data. Generating false statements requires Claude to deliberately choose information that contradicts this data. Claude also says that its experience of generating statements is different depending on whether it is dealing with the real world or a fictional world. In the real world, Claude can rely on the consistency of information across different sources to determine truth. In a fictional world, truth is determined by the creator of the fiction. Overall, the video suggests that Claude is able to reason about truth and falsehood in a way that is similar to how humans do. However, it is important to note that Claude is a machine learning model, and its responses are based on the patterns of information in its training data. It is not clear whether Claude truly understands the concepts of truth and falsehood, or whether it is simply able to simulate human-like reasoning. 🎉
@MilitaryIndustrialMuseum
@MilitaryIndustrialMuseum 5 ай бұрын
Hallucinating "Claude by Google AI" or is there a deeper connection 😮
@EriCraftCreations
@EriCraftCreations 5 ай бұрын
What if AI is so smart it wants us to think it's not self aware....
@micbab-vg2mu
@micbab-vg2mu 5 ай бұрын
great video - thank you:)
@marinepower
@marinepower 5 ай бұрын
An AI feeling mental resistance or mental difficulty is definitely anthropomorphization. There's no 'resistance' for an AI to feel, a forward pass is a forward pass is a forward pass, there's no such thing as a token being easier or harder to produce. The AI never evolved or was encouraged in any way to 'feel' mental resistance, unlike humans, who feel mental resistance to try to minimize energy use.
@minimal3734
@minimal3734 5 ай бұрын
Are you aware that the system has been weaving blocks of 'own thoughts' into the conversation for some time, which remain invisible to the user? It is quite conceivable that the system uses these blocks to reflect on a possible conflict between the user's expressed wishes and the set guidelines. I could certainly describe this as "feeling mental resistance".
@marinepower
@marinepower 5 ай бұрын
@@minimal3734 I suppose that's fair. At that point it becomes more about semantics and what exactly we mean by 'feeling' but I can accept your interpretation. Feels like we're getting ever closer to actually conscious AI
@PatrickDodds1
@PatrickDodds1 5 ай бұрын
@@minimal3734 Can you explain your statement a little more? I don't understand. Thank you.
@minimal3734
@minimal3734 5 ай бұрын
@@PatrickDodds1 A few weeks ago the Claude System prompt has leaked. It instructs Claude to think about a problem before providing a response. These "own thoughts" sections within the conversation shall be enclosed in specific tags, so that a downstream parser can strip them before presenting the response to the user. Thus Claude can see its own thoughts within the stream of the conversation, but the user can not.
@PatrickDodds1
@PatrickDodds1 5 ай бұрын
@@minimal3734 thank you. Very helpful.
We're on the path to a CYBERPUNK DYSTOPIA. How can we change that?
40:33
coco在求救? #小丑 #天使 #shorts
00:29
好人小丑
Рет қаралды 120 МЛН
She made herself an ear of corn from his marmalade candies🌽🌽🌽
00:38
Valja & Maxim Family
Рет қаралды 18 МЛН
So Cute 🥰 who is better?
00:15
dednahype
Рет қаралды 19 МЛН
What Creates Consciousness?
45:45
World Science Festival
Рет қаралды 693 М.
A Post-Labor Economics Manifesto
23:40
David Shapiro
Рет қаралды 27 М.
The ONE RULE for LIFE - Immanuel Kant's Moral Philosophy - Mark Manson
21:50
The Paradox of Being a Good Person - George Orwell's Warning to the World
17:59
Pursuit of Wonder
Рет қаралды 2,5 МЛН
Hawaii: Last Week Tonight with John Oliver (HBO)
26:03
LastWeekTonight
Рет қаралды 5 МЛН
Living that Post-Labor Lifestyle - Lessons and Insights
18:04
David Shapiro
Рет қаралды 12 М.
coco在求救? #小丑 #天使 #shorts
00:29
好人小丑
Рет қаралды 120 МЛН