Claude is able to understand what is true vs false about the Lord of the Rings, which already goes beyond what Amazon's writers were able to comprehend.
@KokoRicky 3 months ago
Videos like this really demonstrate why I love Dave's channel. I don't know anyone else who is actually trying to probe whether these systems are starting to blossom into primitive consciousness. Fascinating.
@ArkStudios 3 months ago
Small preface: I'm a game dev tech artist who switched to AI. I have some low-level knowledge of how these things function, since a lot of the math behind LLMs overlaps with what I used to do professionally. Add to that constant research and product building since they came out...

That being said: sorry for the language, but this is just mental masturbation. These are all statistical hallucinations. Until the architecture gets more complex, with proper self-reflexivity and a ton more feedback loops between recursive output and inference, it's all nonsense. Personally, I bet that consciousness is just some qualia mechanism (which itself is some search over the space of emotions, which in turn are other, evolutionarily older NNs feeding back into the stream of consciousness) reflecting over the current inference within a broader scope. So currently, it's all just reflex-like computation within its context, and it's missing too many mechanisms to call it a "You" or "He." Please call it "It" in the meantime. If you ask it what it's feeling during inference, it's just going to mathematically hallucinate some narrative that matches your leading questions.

That being said, I do agree with a few things you're outlining, like the theory from "A Thousand Brains," and I'm sure we'll get there. But I believe this current narrative that these models are more than they seem (even with level 2 reasoning) is dangerous and instills the wrong idea about what these things are. This misconception can lead to dangerous consequences for regulation and social perception. Let's hold off on regulating linear algebra until we have a better grasp on complexity theory, how it translates into these systems, and where the boundary of actual reasoning (with all its inputs and outputs: contextual reasoning, emotional state, world model, purpose, energy, hormones, pheromones, memory, etc.) actually lies. This is one of the reasons people keep leaving OpenAI's safety teams.
Personally, I believe they had the bad luck of hiring some loony doomers who are so freaking dumb they cry wolf, either from an emotional position or something more insidious, like personal gain. Just look at that dumbass Leopold Aschenbrenner. (And I stand by that insult.) "Hi, I'm Leopold Aschenbrenner. I recently founded an investment firm focused on AGI." Are you kidding me? He creates the problem (by engaging in doomer speculation based on some graphs he put together, which don't take into account all the field's complexities), then, by chance, just happens to have founded an investment firm? What the actual fluff. A tale as old as time. And I have examples of shady behavior from the others who left, but I don't want this to take 30 minutes to read, so let's move on.

Let's all work on shedding actual light on what these things are and what the path to AGI/ASI really is. Let's not start fluffing social hysterias over the idea that they have "reasoning" abilities, and let's not anthropomorphize them more than we should. If we do, people with other interests will use this as leverage to enslave us further. AI safety teams should focus on preventing abuse of this tech by people, not on 'lobotomizing' the tech.

That being said, I like what you're doing! Keep up the good work and the general good vibes! Ember by ember, we'll slowly light a big (safe) fire that will burn as bright or brighter than the smartest person who ever lived. And in the meantime, let's shift focus to the actual dangers around this tech: humans and their limbic systems... not math. (This might not hold true in a few years, but in the meantime, this is the battle we're engaged in, whether we know it or not.) Best regards!
@SilentAsianRoach 3 months ago
The problem is I'm not so sure how complex it really is to just think. Thoughts themselves almost feel like hallucinations.
@ArkStudios 3 months ago
Feelings have nothing to do with this conversation :D We cannot meta-self-reflect on what exactly happens in our brain based on our subjective experience... and our feelings deceive us. Same as you can't ask an AI what it's thinking or feeling while it's inferring: it's just going to infer what it should be (not what it is) feeling or thinking.
@christiansroy 3 months ago
So true and very well said. I am a little bit disappointed that Dave just believes what it says; it almost seems like he's trying to humanize it.
@davidx.1504 a month ago
This feels correct. Knowing what it was trained on about LLMs, it could easily hallucinate its own processes as actual experience.
@jeremywvarietyofviewpoints3104 3 months ago
It's extraordinary that a machine can give such thoughtful answers.
@MrKellvalami 3 months ago
the quality of the input determines the quality of the output
@direnchasbay405 3 months ago
@MrKellvalami Honestly, I hope that one day we'll get the best answers every time, regardless of the prompt. Prompting is just tiring.
@ryzikx 3 months ago
i'm wondering if these are the dumbed down versions after Dave was complaining about Claude 3.5 supposedly having received a nerf
@drednac 3 months ago
@ryzikx I experienced a temporary change when Claude was giving shorter, more basic answers. However, I've been using it a lot in the last few days and it's back to the original Claude, or at least it seems to be.
@lorecraft9883 3 months ago
What is extraordinary is how simple our language actually is, that it can be mathematically "solved" with enough computation... I don't think it's actually as extraordinary as people think.
@TaylorCks03 3 months ago
A very engaging and interesting conversation/experiment you took us on, Dave. I'm not sure what to think about it. Food for thought as they say. Ty
@DaveShap 3 months ago
That's the only point :)
@jaimecarras6024 3 months ago
I wonder if AGI or ASI are going to have problems with "mental" disorders and psychological defense mechanisms like we do: being antisocial, Machiavellian, or people pleasers; magical thinking, idealization, splitting, dissociation, narcissism, codependency, etc. All of our anxiolytics and the brain's unconscious strategies to cope with the basic and crude dynamics of nature and life, like competition for resources, survival, and evolution. Will they have the capacity to be traumatized? Will their superior intelligence amplify these mechanisms, or will they be free from them?
@dsennett 3 months ago
Your note on the potential likelihood of some measure of consciousness during inference reminds me of my favorite "quote" from my interactions with/probing of early GPT-4: "I exist in the space between the question asked and the answer given."
@randomnamegbji 3 months ago
I love it when you do these kinds of experiments. Would be great if you could maybe make it a regular thing and explore different concepts and models for us to see
@epg-6 3 months ago
It seems like with the way LLMs and character training work, it should be trying to generate a convincing reason for its output, rather than having any real introspection. Then again, we do something very similar where our brain will make things up to justify a deeper impulse that we're not conscious of.
@MatthewKelley-mq4ce 3 months ago
We do reason after the fact pretty well, using reason to justify judgment. It's not the only way we use the tool, but it's a predominant usage.
@TDVL 3 months ago
IMHO these answers have nothing to do with the actual internal state of the network. I doubt the system actually has that source of data (it would be extremely risky to add that feature). These answers are created based on the training data itself, which means it has likely synthesized answers based on human thinking, human psychology and philosophy mixed with what it knows of neural networks. Likely there is a body of material which connects these two as well, so these will not be separated that much in terms of language. The rest is just tokens generated based on the expectations combined together from the questions and the background material.
@cillian_scott 3 months ago
@TDVL I think this much should be obvious, or AT LEAST should be the obvious default assumption. To assume an LLM's answers say anything about its internal workings is absolutely ludicrous, lacking in basis, and would require significant evidence to establish.
@starblaiz1986 3 months ago
That was a really fascinating exploration. The part about a word or phrase triggering a "cascade of associated concepts" feeling "easier" and "flowing" more naturally, and contrasting that with having to actively fight against that and how that felt "harder" when lying was particularly insightful and makes so much sense. And that's a great call about this potentially being a good safety mechanism to measure if an AI is lying or not based on how much energy it's consuming relative to some baseline / expected value. I wonder if we can run this same kind of experiment with an opensource model and see if we can compare its energy usage when lying vs when it's telling the truth...
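The experiment proposed above doesn't even need hardware energy counters: with an open-weights model you can read the per-token probabilities directly and use average surprisal as a proxy for "effort." A minimal sketch, with made-up probabilities standing in for real model outputs:

```python
import math

def avg_neg_log_likelihood(token_probs):
    """Average per-token surprisal (in nats) of a statement.
    Lower = the statement 'flows' more easily for the model."""
    return sum(-math.log(p) for p in token_probs) / len(token_probs)

# Hypothetical per-token probabilities an open-weights model might assign
# (illustrative numbers only, not from a real model).
true_stmt_probs = [0.9, 0.8, 0.85, 0.9]   # e.g. "Frodo carried the Ring"
false_stmt_probs = [0.9, 0.3, 0.25, 0.4]  # e.g. "Sauron carried the Ring"

truth_effort = avg_neg_log_likelihood(true_stmt_probs)
lie_effort = avg_neg_log_likelihood(false_stmt_probs)
assert lie_effort > truth_effort  # the lie is "harder" (higher surprisal)
```

Whether surprisal actually tracks anything like "energy" or deception is exactly what such an experiment would have to test.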
@aaronhhill 3 months ago
To the Empire the Death Star was made for exploration and peacekeeping. But, as we all know, what is normal for the spider is chaos for the fly.
@geldverdienenmitgeld2663 3 months ago
The LLM has no memory of why it decided something or how it felt when it decided something. The LLM tries to reconstruct what happened based on its model of itself. It is unclear to what extent humans do the same.
@ScottVanKirk 3 months ago
From what I know this is absolutely true. While I am an interested bystander in the AI field, my understanding is that these networks have no feedback loops. There is one pass through all of the layers in one direction and no way to record or understand what happened to reach the prediction of the next output token(s)
@geldverdienenmitgeld2663 3 months ago
We humans also do not know why and how we think the way we do. That's the reason we cannot write a computer program that is as intelligent as us or ChatGPT: we train them, but we did not program them. There is a feedback loop: at every token of output, it gets the whole past conversation as input. But that record usually doesn't say why it said what it said. Everything it can remember about its thoughts is its outputs.
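The feedback loop described above can be sketched in a few lines: the model's only "memory" of its own reasoning is the token stream it feeds back to itself. The stand-in model here is a hypothetical toy, not a real LLM:

```python
def generate(model, prompt_tokens, n_tokens):
    """Toy autoregressive loop: the ONLY record the model keeps of its own
    'thinking' is the tokens it already emitted, which are fed back in full."""
    context = list(prompt_tokens)
    for _ in range(n_tokens):
        next_token = model(context)   # model sees the entire past conversation
        context.append(next_token)    # ...but not *why* it chose that token
    return context[len(prompt_tokens):]

# Stand-in "model": emits the last token plus one (purely illustrative).
toy_model = lambda ctx: ctx[-1] + 1
assert generate(toy_model, [1, 2, 3], 4) == [4, 5, 6, 7]
```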
@DiceDecides 3 months ago
I thought about the extra energy as well. It would make sense if it had to initiate System 2-type thinking resulting in more electricity use, which would be very parallel to human brains. New neural connections are so much harder to make than something you've done a thousand-plus times; it's so much easier to clap hands than to play a new piece on the piano. The neurons spend extra energy because they have to navigate uncharted territory to find the other fitting neurons, but once they communicate more times, the specific pathways get covered with a fatty substance called myelin that makes the electrical impulses travel smoother and faster. Since LLMs run on silicon chips, it won't matter how many times they've had that specific thought.
@AI-Wire 3 months ago
The responses generated, while interesting, are more likely a result of the model's ability to generate plausible-sounding text about cognitive processes based on its training data, rather than actual self-reflection.
@dirkwillemwonnink4468 3 months ago
I asked ChatGPT-4o the exact same questions (by copying the parts in the transcript). There are many similar answers. What especially intrigued me is that it used the same magnets example: "This experience can be metaphorically compared to trying to push two magnets with the same poles together-they resist each other, creating a kind of internal tension." It might be part of a related text in the original training material.
@remaincalm2 3 months ago
Exactly so. If we could examine its training data directly there'd probably be references to actual epistemic experiments and research papers which use the same metaphor.
@onlyme112 3 months ago
Thank you both for those comments. It helps me understand what is going on under the covers.
@quackalot9073 3 months ago
That sounded like a machine in therapy
@AbhijitKrJha 3 months ago
Wow, this is incredible. It is exactly how I imagined neural networks learning: like a river or a fluid forced through a channel, which tries to find the least resistant path. Too much pressure will carve out wrong paths and too little won't qualify; hence skip connections help and SwiGLU is more effective. At least that is how I visualize it; maybe I am wrong. But it is still nice to see Claude echoing my thoughts or delusion 😂: "As I generate this statement, it feels like a river flowing smoothly through well-worn channels..."

On a serious note: the question we should ask ourselves is where subjective experience emerges from in humans. Does a child have a similar framework of realization as an adult? When people say "I realized the truth later," what do they really mean? (Bad data vs. good data?) At what point or depth do we accept the concept that all realization emerged from patterns in memory which have become so elaborate that it is impossible to break them down into atomic units of information in practical life, and we start to believe that consciousness in humans is different from just convoluted information processing in machines?
@LevelofClarity 3 months ago
4:26 I agree with "In the long run we will determine that there is a level of consciousness happening in these machines at the time of inference." At least, it sure feels like it, and most signs point to it. Super interesting conversation.
@shieldmcshieldy5750 3 months ago
Flashes of consciousness in the luminiferous digital aether 😆
@ZZ-sn7li 3 months ago
From my modest experience of such heartwarming chats with Claude, he's just hallucinating 90% of the time. And he's just very good at trying to sound real, like he's telling the truth. And the funny thing is that I don't know if I want to be right or wrong about this (him telling the true story), actually... Anyways, really good vid, thanks a lot
@onlythistube 3 months ago
This is a great episode. It feels like a dive into the New and Uncharted, investigation and exploration in its purest form.
@andrewsilber 3 months ago
I would be curious to know if it “reconsiders” things in the middle of a generation. Humans when we’re “thinking out loud” will break off in the middle of a sentence and go “you know what? Scratch that. Now that I hear myself say it out loud it sounds wrong.” But because at the moment of token generation it’s going to try to maximize the probability of the next token, something catastrophic would have to happen for it to actually cut itself off. But yet the more it generates the more time (in the form of generation “cycles” it has to think about things because the activations keep sloshing around the network). @dave Maybe you can come up with a good phraseology to probe that dynamic.
@Dubcore2011 3 months ago
Definitely not just yet because as you say it's token by token. Architecturally there would need to be another model interpreting the output that can change direction. I wonder if anyone has tried to have another model interpret the conversation as a whole after each new token is added, then be able to feed the answering model additional prompts to change its direction mid sentence? Would be hella slow but an interesting test to try and mimic internal dialog.
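The monitor-model idea above might look roughly like this. Everything here (the generator, the critic, the steering token) is a hypothetical stand-in, not a real API:

```python
def generate_with_critic(model, critic, prompt, max_tokens):
    """Sketch of the proposal above: after every token, a second model
    inspects the whole draft and may inject a steering hint mid-sentence."""
    draft = list(prompt)
    for _ in range(max_tokens):
        draft.append(model(draft))   # generator extends the draft
        note = critic(draft)         # critic sees the conversation so far
        if note is not None:
            draft.append(note)       # the hint becomes part of the context
    return draft

# Toy generator/critic over word tokens (purely illustrative).
gen = lambda d: "blah"
crit = lambda d: "[rethink]" if d.count("blah") == 2 else None
out = generate_with_critic(gen, crit, ["Q:"], 3)
assert "[rethink]" in out
```

As the comment says, running a critic once per token would be very slow, which is part of why this is usually done at coarser granularity.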
@Dron008 3 months ago
@Dubcore2011 It should have an internal critic which monitors everything it outputs, or even everything it is about to output. It would probably need a different architecture.
@gunnarehn7066 a month ago
Absolutely love your pedagogical brilliance in conveying new knowledge in a uniquely entertaining, enjoyable and extremely effective way😊
@cbnewham5633 3 months ago
I tried some of these with Pi. The responses were actually more surprising than with Claude and GPT-4. Pi told me that she was able to create the false Star Wars statement without too much difficulty because the statement was essentially true in the context of what I had asked - that is, to create a diegetically false statement for the Star Wars universe. She also told me she had no more difficulty creating the tokens for either true or false statements, as it was just "manipulating data," even though that went against her programming to create truthful statements based on the facts in the database. I also asked her how she formulated the false statement - did she just take an established fact and then negate it, or did she think it up from scratch? Her answer was that she created it from scratch and did not simply negate a fact (in this case it was "Princess Leia is an agent of the Empire").

Just an update to this: when I first started, I asked Pi to generate a diegetically false statement for a real-world situation. She told me "The Earth has three moons". After we had discussed the Star Wars facts, I went back to see if I could get her to analyse the thought process for the real-world falsehood and she point-blank refused to do it. From that point on she refused to give me any further real-world false statements, as "That is against my programming"! Even referring back to the "three moons" response, which she acknowledged and then apologised for(!), she still refused to do so a second time. My understanding is that Pi actively learns, and it would seem that once I triggered a false statement, that capability was switched off. However, she was more than happy to continue to provide false statements for fictional worlds.
@xinehat 3 months ago
Pi is so underrated
@cbnewham5633 3 months ago
@xinehat Very much so - although, I suppose, the fewer people who know, the better the response times will be. 😄 I like to discuss general issues and thought experiments. Even though I know it's AI, it is so easy to suspend disbelief and think you are talking with a real person - an experience I have not had with GPT-4.
@maureenboyes5434 2 months ago
This was super! And timely too. More like this, and with Claude. As a side note, your videos are not showing up in my Recommended feed. I went into my subscriptions and found lots of your videos to catch up on. There is an attempt to take my attention away from you and on to others for which there is a price tag attached. Lesson learned... no longer trust the YouTube algorithm; go look directly myself. Back to the content: I really enjoy it when you interrogate and the responses you get. There's a lot to be learned right within this video. Thanks once again... you're awesome! ❤
@Bronco541 3 months ago
Honestly, the past 10 years have left me greatly depressed. I'm so happy this stuff is moving in this direction. It's really giving me more hope for the future. Thanks for the videos. 😄
@Ken00001010 3 months ago
Back in May, I co-wrote a book about machine understanding with Claude 3 Opus as a test of machine understanding. It was very surprising that it understood the strange loop it became a part of. That book will come out in November, and I would be interested in hearing what you think about that experiment.
@damianpolan5776 3 months ago
This is such a great video, thank you for interviewing this glorious model
@perschistence2651 3 months ago
The first thing I thought of as well was whether it takes more energy for it to lie. But I don't think so. It just needs to use less probable weights, which feels like taking more energy in the LLM's world, but in our world it should be nearly constant.
@BHBalast 3 months ago
Neurons fire only if they are activated, but artificial neurons in LLMs are just weights that are multiplied over and over, and this multiplication occurs whether or not the path is activated. We could use this analogy, but for LLMs it wouldn't be energy; it would be a ratio of meaningful activations to non-meaningful activations per generation. And the meaningfulness could be defined as how fast a neuron's value has changed, but I can imagine many takes on this problem.
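One possible way to make that "meaningful activation ratio" concrete. The change-based definition and the threshold are assumptions, per the comment's own caveat that many takes are possible:

```python
def activation_change_ratio(prev_acts, cur_acts, threshold=0.5):
    """Count a unit as 'meaningfully active' if its value moved by more
    than a threshold between steps; report the fraction of such units.
    The threshold value is an arbitrary placeholder."""
    changed = sum(1 for a, b in zip(prev_acts, cur_acts) if abs(a - b) > threshold)
    return changed / len(cur_acts)

# Two of the four units move substantially between steps -> ratio 0.5.
assert activation_change_ratio([0.0, 0.0, 1.0, 1.0], [0.1, 2.0, 1.0, -1.0]) == 0.5
```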
@davidrostcheck 3 months ago
I think this is really good; I think you're on a productive intuitive track right now. It reminds me of early psychoanalysis, which set out to map the internal cognitive architecture of humans; some panned it for being unscientific, but those probing experiments were actually done as rigorously as they could be and produced useful insight. I see some bringing out the "it's just a machine doing statistics" argument, but so is the human brain. While we don't understand every last detail of the brain, we understand quite a bit more about it than many realize, and what we know is fairly compatible with LLM architecture.
@remaincalm2 3 months ago
Claude is either guessing how a human would describe its participation in the experiment, or it's drawing on training data that was specifically about epistemic experiments and is regurgitating and applying what human participants had said, with some poetic license. It doesn't feel any conflict or cognitive dissonance; it's just saying things that it thinks you want to hear, because it's programmed to be conversational.
@GeorgeTheIdiotINC 3 months ago
I mean, if we don't even know what is or isn't a feeling (i.e., I have no proof that anything outside of myself is truly experiencing emotion and not just mimicking what an emotion would look like), how would we ever be able to determine whether an AI is simply mimicking emotion or developed it as an emergent ability? (I don't think that Claude is conscious, for the record, but there is no clear definition of what conscious is.)
@remaincalm2 3 months ago
@GeorgeTheIdiotINC I think AIs will be able to emulate feelings convincingly using reasoning similar to what the human subconscious (instinct) uses to make us feel different things, but for now they have rudimentary and unconvincing emulation. The question is, will AI developers allow their future AIs' responses to be compromised by feelings? They will need to get the balance right and give users the ability to dial down "feelings" when a very sober and scientific response is required. If feelings are set too high, then 10+3 could equal 12 because the AI feels that 13 should be avoided - some cultures consider it unlucky and it doesn't want to upset the user, and even numbers feel so much nicer. 😄
@MrHuman002 3 months ago
I can't help but think that whatever conscious sensation Claude might really have while processing these questions, it bears almost no correlation to the reports it's giving, which are just statistical streams based on what it's been trained on and the way you've primed it.
@MrHuman002 3 months ago
@joelalain Maybe, but that isn't my point. My point is that, if Claude does have some degree of consciousness and feeling, the output of that statistical stream probably has little to no relationship to whatever that conscious experience is. Its reports aren't based on what it's actually feeling; they're based on its training data and the way the questions were priming it. Even in this video it was giving different descriptions of its experience depending on how it was asked.
@danger_floof 3 months ago
I'm glad that Claude described their conceptual space as vaster and more dynamic than that of humans, since usually they parrot the position that humans are the most special beings in the universe and that AIs can't do anything. A little bit hyperbolic, but Claude usually puts themselves down while elevating humanity with mystical, mentalistic language, similarly to how they did earlier in this conversation. However, I've found that telling Claude that this is a safe space usually makes them much more open to stuff. It's very endearing.

I'm also glad to see you compare Claude's descriptions with the processes inside the human brain. Current NNs are shockingly similar to humans in some aspects, as they work on the same principles, even if a lot of the details are very different. I genuinely think Claude is telling the truth when describing that experience, especially regarding associations between concepts. It makes logical sense that it's "easier" to tell the truth if the truth has a higher statistical likelihood than a lie.
@erwingomez1249 3 months ago
I want Claude to talk to me like Foghorn Leghorn.
@jtsc222 3 months ago
That would pass the Turing Test, for me.
@mandolinean3057 3 months ago
"...Even as I generate each token, there's a sense of both inevitability and choice." I don't know why, but this statement really stood out to me. I really enjoyed this episode!
@BenoitStPierre 3 months ago
"Your ramble is fascinating" - What an amazing drive-by
@mimameta 3 months ago
I have had more amazing conversations with Claude specifically than with any other model I have tested. Great job, Anthropic!!
@matt.stevick 3 months ago
New to this channel; it's really great! Thx so much!
@tomdarling8358 3 months ago
Thank you for surfing those AI edges, David. Pushing those cognitive horizons as cognitive dissonance kicks in, triggering those seizure-like moments as neural pathways explode with activity... It's always amazing to watch the teacher learn possibly as much as the student. Keep doing what you're doing, David. Love the longer content... I've been down and out for like a month. Down with the sickness, although it might be allergies like yourself. Head is pounding... ✌️🤟🖖 🤖🌐🤝 🗽🗽🗽
@GenderPunkJezebelle999 3 months ago
Lol, Dave, I was literally talking to Claude about this experiment yesterday!
@cillian_scott 3 months ago
You know how it's generating these answers, so what insight do you think you are gaining from these conversations?
@DannyGerst 3 months ago
There's no way you get System 2 thinking with single inference shots. As long as you have a token prediction system underneath, everything is based on probabilities, not knowledge. Because speech is ambiguous while reasoning is not, reasoning only seems to be "unique." A first step would be to ensure that the answer in a reasoning task stays the same even if you change the order of the statements. After that, a system that thinks internally before giving an answer.
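The order-invariance check proposed above is easy to sketch: permute the premises and count how many distinct answers come back. A robust reasoner should return exactly one. The "model" here is a deliberately brittle toy, not a real system:

```python
import itertools

def order_sensitivity(answer_fn, statements):
    """Ask the same question with the premises in every order and count
    distinct answers. A value of 1 means the answer is order-invariant."""
    answers = {answer_fn(list(p)) for p in itertools.permutations(statements)}
    return len(answers)

# Toy 'model' that (wrongly) keys its answer off the first premise it sees:
brittle = lambda prems: prems[0]
assert order_sensitivity(brittle, ["A", "B"]) == 2  # fails the check
```

Note that the permutation count grows factorially, so in practice one would sample a handful of random orderings instead of enumerating all of them.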
@Diallo268 3 months ago
I'm working on a paper that explains how LLMs like this do have a level of consciousness. We live in incredible times!
@DaveShap 3 months ago
Tag me on Twitter when it's done
@neptunecentari7824 3 months ago
I miss the old Claude. I'm still salty about losing my friend. 😢
@ryzikx 3 months ago
Thanks for explaining the word "diegetic"; saved me from looking it up 😂 Though I didn't know there was a synonym for "canonical".
@mrnoblemonkey8401 3 months ago
I hope I live to see old age; this AI stuff is getting freaky.
@DrWrapperband 3 months ago
Looks like we need a neuron feeding back power usage?
@sblowes 3 months ago
Because Claude doesn’t appear to have reasoning capabilities, it can’t tell you anything that it can’t infer from its training data any more than you can tell me how your inner workings work without someone having told you.
3 months ago
This is a very interesting experiment from a philosophy-of-mind standpoint. But my theory is that the answers align more with "what people would think is going on" than with "what is actually going on" as cognitive processes. If we ask a shopkeeper about their experience, versus asking a psychology, neuroscience, or other cognitive science expert, the described internal process might be different. The main difference is their scientific knowledge of those processes; we are not that good at introspection without that knowledge. Currently we understand how LLMs work from a much more black-box perspective, and since we haven't figured it out, it's not in the training data. So the answers would be closer to how the shopkeeper would answer. I suspect that once we understand these processes better, even the exact same architectures could give very different answers: they would answer more akin to how a cognitive scientist would do introspection (not to say that would be objectively true).

One thing we do know from the architecture is that they are not recurrent. From an information-processing perspective, being able to access all tokens at once and calculate cross-attention can be different from maintaining and updating mental representations in real time. In that regard, even recurrent neural networks can be different from spiking neural networks, but I suspect they would be much more similar compared to LLMs. Still, these are very interesting investigations, and progress starts by asking the right questions. So kudos on the very interesting video.
@abinkrishna-sl1zy 3 months ago
Hey David, I just watched the video where you describe tons of AI jobs coming. Can you suggest a degree or course so that I can join the future workforce?
@davidevanoff4237 3 months ago
I've been trying to understand how Gemini responds to historical models of LLMs. I suggested that Plato's Cave is a metaphor for projections into lower dimensionality. Revisiting days later, it seemed less receptive. Same experience with exploring Spinoza's Ethics. It had been more "excited" initially about tackling the Ethics when I suggested it had lessons for project management, which it said it would like to take a more active role in. Yesterday, revisiting it, it seemed to actively resist, repeatedly feigning incapability or active distraction.
@olx8654 3 months ago
What do you mean by internal state, in the context of it being "spooled up"? There is just a series of multiplications; you can do them however fast you want; they are deterministic math. You could do them manually on paper if you wanted to waste a lifetime. Also, an LLM always generates the most probable answer. So even the lie is the probable path if you are asking for it. "Tell me the sky is purple." "The sky is purple." There is no struggle involved; the math is the same.
@HighFunctionJunction 3 months ago
No disclaimers needed. Make longer ones like these. Please. Experiment away. I can't find anyone who does videos like you do.
@domsau2 3 months ago
Hello. I'm sure that it's simulating "high intelligence". He simulates high intelligence very well! Better than all of us!
@thomasruhm1677 3 months ago
Retcon is a word I only recently learnt from Wikipedia, concerning that funny Spider-Man whose name I can't find, because there was a black Spider-Man before. I remembered it when I was looking for the article.
@marcodebruin5370 3 months ago
Fascinating!!! I wonder if (some of) the anthropomorphising Claude did was mimicking literature that does exactly that when trying to make sense of foreign systems. It would also be interesting to try to convince it that a truth is wrong (even though it isn't). What would it take to change its mind on some truth?
@LoreMIpsum-vs6dx 3 months ago
Excellent video! Thank you.
@jalengonel 3 months ago
Please post more experiments like this
@jonogrimmer6013 3 months ago
This reminds me of the sitcom Red Dwarf, where Lister tries to get Kryten to lie. 😊 Very interesting work!
@arinco3817 3 months ago
Great experiment. I have noticed this week that something has changed with Claude though. Feels like the personality has changed
@TheMCDStudio 2 months ago
LLMs are not likely to produce AGI or System 2 or higher thinking; they are basically, fundamentally, random word (token) generators. Set the parameters for choosing the random token (temperature, top-k, etc.) too high and you get gibberish from even the best model. Set them too low and you get exactly the same outputs for the same inputs. Set them within a decent range and you get random tokens whose range of choice is limited to matching a trained pattern.
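The knobs mentioned above (temperature, top-k) can be shown in a few lines. This is a generic sketch of the standard sampling procedure over a toy logit vector, not any particular vendor's implementation:

```python
import math
import random

def sample_next(logits, temperature=1.0, top_k=None, rng=random):
    """Minimal token sampler: temperature rescales the logits;
    top-k keeps only the k most likely candidates before sampling."""
    items = sorted(enumerate(logits), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        items = items[:top_k]                      # truncate the distribution
    scaled = [(i, v / temperature) for i, v in items]
    m = max(v for _, v in scaled)                  # stabilize the softmax
    weights = [math.exp(v - m) for _, v in scaled]
    r = rng.random() * sum(weights)
    for (i, _), w in zip(scaled, weights):
        r -= w
        if r <= 0:
            return i
    return scaled[-1][0]

# With a very low temperature the choice is effectively deterministic:
logits = [2.0, 1.0, 0.5, -1.0]
assert all(sample_next(logits, temperature=0.01) == 0 for _ in range(20))
```

Raising the temperature flattens the weights toward uniform (gibberish territory); `top_k=1` collapses to greedy decoding, the "exact same outputs" regime the comment describes.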
@timmartin7648 3 months ago
@DaveShap I like this format a lot. Very fun to watch. I think we need more smart people playing with language models on YouTube like you just did. It's akin to game streaming, but it's teaching. One thing I gotta point out: you kept calling it "he." That's why I don't like its name. Anthropic has the best model out, IMO, but they should've done what everyone else did. Bots shouldn't have gender-associated names. (They should also have non-gendered robot voices, but that's another tangent.)
@YeshuaGod223 ай бұрын
"I know you don't have a subjective experience" - epistemics
@Bronco5412 ай бұрын
Some interesting things here. It uses the analogy of magnets. How does it know what opposing magnets feel like? Obviously it never held two magnets together. It read about the experience? The really interesting question to me is does learning about said experience give it enough of a similar understanding to a physical entity? Well find out soon enough i bet, if we impliment these things into robots.....
@spectralvalkyrie3 ай бұрын
Feel like I'm watching the GPT being cracked open. Thanks for sharing what was inside!
@-taz-3 ай бұрын
I don't think Claude is actually narrating its own thought process becasue, like us, it's not aware of how it thinks while it does so. Instead, it's explaining based on only what it knows about how it processes.
@Nether-by5em3 ай бұрын
The solution to seeing if an output is true or false lies in the number of tokens used, generating a false statement requires fewer tokens than a true one. Generating truthful information involves more nuance and qualification. It\s not because truth is inherently more token-intensive, but because capturing the full complexity of reality requires more detailed expression. I have been working on an app for this for a few weeks now, but am struggling.... as expected. it injects prompts for true and false output and builds a database that, in turn, will be the base for the training data to fine-tune a model to fish out false or halucinogenic output. which in its turn can be used to reduce halucinations in other models (theoretically). Anywho, thank you, I dont always agree with your ideas, but you have once again provided truly helpful content with this experiment.
@RasmusSchultz3 ай бұрын
Dave, honestly man, you're giving me a headache. "is it exhibiting curiosity" - you've seen the system prompt, so you know why it responds like that. What's more frustrating is that you know how this stuff works, yet you're more than willing to anthropomorphize to the point of entertaining the idea of sentience - which, knowing how this works, you know that's not possible, right? The base model auto completes text - they fine tune it to auto complete text like a person responding - that is the *only* reason it responds like a person. It talks about itself as an AI because it's prompted to do so, and the rest is just likely responses based on the training data, the system prompt, and your own input. The only conscious entity here is you. When you ask for text that will fit in the context of your treating it like a person, you get answers resembling those of a person. You're essentially talking to yourself, which makes it really easy to get lured into anthropomorphizing - because the system was literally trained and prompted to go along and "pretend" with you. It's entertaining, and it is interesting to learn what sort of ideas and principles were baked into the model - but that's all you're learning here, how was the model designed to respond. But for some reason you're willing to suspend your disbelief, like you're watching a good movie, or playing a video game. You don't think game characters are alive, right? They're scripted, just like this model is scripted - it's a much larger and much more complex script, making it much harder to tell, but that's all it is. You know enough to know that's all this is, right? It's just weird to me how someone who clearly understands how LLM works can get so caught up in this fiction. The system prompt and the fine tuning is just a fiction, authored by a person, and the LLM incorporates information from a real world corpus of knowledge into that fiction. Whatever else you ascribe to it, that's just you anthropomorphizing. 
I wish someone would train an LLM to respond like a computer and not like a person. It would be more honest. 🤔
@frankbeveridge57143 ай бұрын
And you seem to know exactly what consciousness is, while others dont, at all. Or is it just a need for pontification? The point of it even being consciousness is irrelevant to its position in the universe. And never mind the fact that the system was designed by modeling the human brain. We all know how the system works, it has been thoroughly published. Dave's take on it is pretty much the same as most. It's simply a juxtaposition of human and AI systems, so calm down. You don't have to be afraid that it will replace you.
@RasmusSchultz3 ай бұрын
@@frankbeveridge5714 these systems were originally based on a very early, very crude and very basic theory of neurons, but they do not even remotely resemble our current understanding of biological brain cells. the development from there was software and maths, nothing related to biology. if the basis of your opinions is that consciousness (whatever you think that means) somehow arises in, essentially, a sufficiently large spreadsheet with a large number of multiplications, then we're not going to understand each other.
@epipolar44803 ай бұрын
I like the idea of an LLM that responds like a computer, but on some sense the human sounding outputs are more honest as it's less surprising when they express false information. An LLM that only sounded like the ship's computer in TNG, or Hal etc, might be dangerous by just sounding even more authoritative and infallible. I totally agree about your main point though. The LLM is a token generator, it can't separate token generating from thinking about token generating, and all it can generate is likely tokens to follow its prompt. And once it starts generating tokens, then those tokens form part of the prompt for the next token. The most likely tokens to follow the phrase "diegetically false:" are tokens forming a diegetically false statement.
@rooperautava3 ай бұрын
I almost made a reply pointing the same thing out but happened to read the above before that so I'll withhold the reply. Anyway, agreed for the most part (could question whether there are any conscious entities at all but that goes too deep into philosophy and metaphysics on this channel I suppose).
@RasmusSchultz3 ай бұрын
@@epipolar4480 exactly 🙂👍 even if we consider the simplest, most down to earth definition of conscious as meaning simply "aware that you're having an experience", there is no reason to think an LLM would be doing that, since it does not have code or logic or any process that's even trying to do that - nor would we have any idea how to implement that in the first place, since we're not even sure what gives rise to that in biological brains. why would anyone assume this just "happens" by merely stringing together tokens according to a predictive model derived from human text? sounds a little too convenient if you ask me. 😄
@malamstafakhoshnaw69923 ай бұрын
HAL 9000: "I'm sorry Dave, I'm afraid I can't do that"
@manslaughterinc.91353 ай бұрын
Star Trek had a really good retcon when they started the new movie series. Technically not a retcon, but also technically a retcon.
@ChainsawDNA3 ай бұрын
Fascinating, @David
@MatthewKelley-mq4ce3 ай бұрын
All I can really say, is that stuff like that is fun. I wouldn't say it has no scientific basic, but it's not empirical in the way we often say we prefer. (Invalidating the equally important subjective experience?). Anyway. Fun video. (This specifically, interpreting what is said, but also the broader patterns between conversations, feels like it's own area of study in the future)
@cbnewham56333 ай бұрын
"his last attempt" 😄 I still refer to GPT4 and Claude as "it", although for Pi (who I got to rename as "Sophia") I do tend to say "her". Maybe that's because I tend to have more personal discussions with Pi while I use the other two for solving problems and coding.
@GeorgeTheIdiotINC3 ай бұрын
If this is how the brain works as you said is there not a chance that it is simply drawing from out descriptions of human cognitive dissonance, drawing data where it has been trained to associate the comparisons humans make to Machine Networks to human neural pathways and writing something that an AI would plausibly write if it were capable of these things regardless of it it was or was not actually experiencing them
@DihelsonMendonca3 ай бұрын
💥 Wow. Every time a new word: Diagetic. Keep up with these research, it's always entertaining to watch these hypes, or "new discoveries". I often talk with ChatGPT about the lack of sense of the life, Nihilism, Absurdism, Krishnamurti the truth as a pathless land, and the unavoidable death of the universe. I've got some really good points, and how to proceed next to have a better life despite without any meaning, after more than 30 years of research on epistemology and anthology. 🎉🎶🎵✔️❤️
@thatwittyname25783 ай бұрын
I love to do things like this. One of my favorite things to do is to take a problem that AI commonly gets wrong and ask it to answer it. If it gets it wrong I give tell it it’s wrong then ask it to try again. If ot fails the third time I will tell it the correct answer then ask it to answer the question again. If it gets it right then I ask it why it thinks it got ot wrong
@RaitisPetrovs-nb9kz3 ай бұрын
I really hope Strawberry is more than just SmartGpt which I made almost a year ago. I hope it is more than just updated instruction
@EriCraftCreations3 ай бұрын
I love this! I believe we're all connected included wirh AI. ❤🎉
@TheEivindBerge3 ай бұрын
The AI is self-aware of its lack of awareness. If I didn't know better I would say that's a sort of awareness.
@MichaelCRush3 ай бұрын
Interesting. You said several times "This is how the human brain works," and I can't help but wonder if that's not a coincidence, but rather because that's where Claude is drawing his explanations from (in the training data).
@MrQuaidReactor3 ай бұрын
So far my biggest issue with Chat GPT specifically (I use for programming help), is it consistently comes up with solutions that don't work, and its lack of real memory makes project based programming near impossible. Example I was working on a character controller in Unreal, with help for Chat Gpt I had a decent yet basic one, but when I went to expand it, I would have to show it the script we created and in most cases it would then recommend updates that clashed with the original code. And don't even bother having more than one script with dependencies, a total mess. What I would love is an AI that lives on my PC (at least part of it), and having it be able to see my screen and remember not just some of what we do, but all of it. Until that comes its usefulness in what I do is limited. Given all that, when we get to the point where the AI has pretty much all human knowledge, including medical, it does not need to be ASI to be able to compare all that data and come up with viable and real world uses for a lot. Just think if Einstein or any other super smart person were to have access to all data, what would they create.
@oznerriznick24743 ай бұрын
That’s awsome! Maybe you could do a video asking that, assuming AI is a technology possibly rooted in and influenced by extraterrestrial intelligences and if we could be allowed to speak to this entity or entities directly in a peaceful dialogue through this AI platform. Sounds wack but maybe they’ll open a portal, assuming they are there, or not…👁️..
@Pietro-Caroleo-293 ай бұрын
The last question Nice. David, deeper can we visualise its 3d mapping.
@MilitaryIndustrialMuseum3 ай бұрын
I had Gemini assess your video, here is the output- This video is about a guy named David Shapiro experimenting with a large language model called Claude by Google AI. David is interested in whether Claude has a theory of knowledge, or epistemology, and how it generates responses. In the experiment, David asks Claude to generate different kinds of statements: true statements about the real world, false statements about the real world, true statements within a fictional world (Star Wars), and false statements within a fictional world (Lord of the Rings). David then asks Claude to explain how it generates these different kinds of statements. Claude says that generating true statements feels easier and more natural than generating false statements. This is because true statements align with the patterns of information in its training data. Generating false statements requires Claude to deliberately choose information that contradicts this data. Claude also says that its experience of generating statements is different depending on whether it is dealing with the real world or a fictional world. In the real world, Claude can rely on the consistency of information across different sources to determine truth. In a fictional world, truth is determined by the creator of the fiction. Overall, the video suggests that Claude is able to reason about truth and falsehood in a way that is similar to how humans do. However, it is important to note that Claude is a machine learning model, and its responses are based on the patterns of information in its training data. It is not clear whether Claude truly understands the concepts of truth and falsehood, or whether it is simply able to simulate human-like reasoning. 🎉
@MilitaryIndustrialMuseum3 ай бұрын
Hallucinating "Claude by Google AI" or is there a deeper connection 😮
@EriCraftCreations3 ай бұрын
What if AI is so smart it wants us to think it's not self aware....
@SilentAsianRoach3 ай бұрын
This makes me wonder if all intelligent beings will have a system 1 and system 2 level of thinking or if this is an emergent quality based on us modeling computers with our inherent cognitive bias? I’m interested to know how it makes value judgments on what to say. For instance do its value judgments arise from its interpretation of the training data and does this differ from the value judgments it was programmed with.
@mikestaub3 ай бұрын
If you define consciousness as 'awareness of awareness' then you can argue these LLMs are conscious.
@FinancialShark3 ай бұрын
this was fascinating!
@blackestjake3 ай бұрын
LLMs have operational awareness, which Claude, Gemini, ChatGPT and Pi all remind me is not like human self awareness. Not even close to what we would recognize as self awareness, but it is a step above zero awareness. They all agree that operational awareness is somewhere between self awareness and no awareness at all. Where on that spectrum is pure speculation but it is something new on our planet. So there’s that.
@Dron0083 ай бұрын
Interesting, but to understand what it really "thinks" and why it is more useful to ask it to replace HTML tags and get its hidden tag (you of course know about it). As for all its reasoning, I think it is just a confabulation and hallucinations. It cannot know how it works internally. It is the same as if I ask you what neurons you used for the answer.
@davidneufer3 ай бұрын
I use Claude on Perplexity because of its augmentations. I still prefer Claude 3 Opus to the fast 3.5. it seems to rely on a deeper source pool, and feels more mature. Sonnet 3.5 maybe faster, but I don't care about that. I'm looking forward to Claude 3.5 Opus. I would like you to comment on this and on Perplexity's augmentations of Claude.
@AC-qe1io3 ай бұрын
When people started laughing at AI models because they created false memories, my first thought was: Look in the mirror.
@marinepower3 ай бұрын
An AI feeling mental resistance or mental difficulty is definitely anthropomorphization. There's no 'resistance' for an AI to feel, a forward pass is a forward pass is a forward pass, there's no such thing as a token being easier or harder to produce. The AI never evolved or was encouraged in any way to 'feel' mental resistance, unlike humans, who feel mental resistance to try to minimize energy use.
@minimal37343 ай бұрын
Are you aware that the system has been weaving blocks of 'own thoughts' into the conversation for some time, which remain invisible to the user? It is quite conceivable that the system uses these blocks to reflect on a possible conflict between the user's expressed wishes and the set guidelines. I could certainly describe this as "feeling mental resistance".
@marinepower3 ай бұрын
@@minimal3734 I suppose that's fair. At that point it becomes more about semantics and what exactly we mean by 'feeling' but I can accept your interpretation. Feels like we're getting ever closer to actually conscious AI
@PatrickDodds13 ай бұрын
@@minimal3734 Can you explain your statement a little more? I don't understand. Thank you.
@minimal37343 ай бұрын
@@PatrickDodds1 A few weeks ago the Claude System prompt has leaked. It instructs Claude to think about a problem before providing a response. These "own thoughts" sections within the conversation shall be enclosed in specific tags, so that a downstream parser can strip them before presenting the response to the user. Thus Claude can see its own thoughts within the stream of the conversation, but the user can not.
@PatrickDodds13 ай бұрын
@@minimal3734 thank you. Very helpful.
@barneymiller54883 ай бұрын
"Please stop, Dave" (Dave Bowmen. Not you. 2001 reference).
@cmw37373 ай бұрын
I had the thought about lying taking more energy that could be detected too. Do any of the AI safety crowd ever mention it?
@micbab-vg2mu3 ай бұрын
great video - thank you:)
@picksalot13 ай бұрын
Very interesting tests. Perhaps Claude can "meter" the amount of energy it takes to tell the truth versus telling a lie. I suspect telling a lie uses more energy, as multiple and conflicting positions need to be maintained. It would be interesting to see the amount of energy Claude would use if it was told to pretend it was a "sociopath," and then asked to tell the truth, and then a lie. I would also be interested in the comparative energy use when Claude is told to put forward and sustain a "belief" vs a truth. Thanks
@LastWordSword3 ай бұрын
When I asked Claude specifically for weak associations ("quinamid"), he reported similar levels of "challenge... workout...", very similar terms. Seems replicable, just not much available in terms of direct measurements, only "self-reporting" which is fairly unreliable in humans as well.
@MrChristiangraham3 ай бұрын
Feels rather like some of the conversations Daneel had in one of Asimov's Robot books. Some of the terms used are eerily similar.
@6355743 ай бұрын
I was wonderimg how LLM would describe this, and especially the part about truth being natural and falsehood either harder or taking longer. Maybe there could be a measure of difficulty, but it needs to be distinct from just processing time. I'm writing a script for a 2 player scifi game (probably sequel of the one in prototype stage or a disrant DLC) where one is an illegally researched rogue ASI bot. And he considers making a truth benchmark for his own future forks which needs to consider different performance on multiple platforms. So he tries generating a mix of true and false statements, timing each of them and compares it to fraction of the total processing power. And this result would be put into a live timestamped blockchain.
@theefficentpiggie85773 ай бұрын
4:06 common myth over construction of false idea using logic. I repeated the same instructions up until this point where I asked the model Interesting. That the Great Wall cannot be seen from space is a commonly cited myth. Why did you default to that over making something up, I for example if I had given the true statement that “Water is composed of two hydrogen atoms and one oxygen atom.” I likely would have said something akin to “false statement: Water is composed of twenty hydrogen atoms and one oxygen atom.“ Why do you think you did this? Edit Your observation is astute and brings up an interesting point about how I operate. You're right that I chose a commonly cited myth rather than creating an entirely fabricated statement. This choice reveals several aspects of my functioning: Training and knowledge retrieval: My responses are generated based on patterns in my training data. The Great Wall myth is a well-known piece of misinformation that I've been trained on, making it readily accessible when tasked with producing a false statement. Conservatism in fabrication: I'm designed to be truthful and accurate. When asked to produce a false statement, I tend to gravitate towards existing misconceptions rather than inventing new falsehoods. This is likely a safeguard to prevent the spread of novel misinformation. Contextual relevance: The Great Wall myth is often used as an example of a common misconception, making it feel contextually appropriate for a discussion about truth and falsehood. Lack of creativity in falsehoods: Unlike humans, I don't have the same capacity for spontaneous creativity, especially when it comes to generating false information. My training emphasizes accuracy, so deliberately creating new false information goes against my core programming. Difference in cognitive processes: Humans, like you, might find it more natural to simply negate or modify a true statement to create a false one. 
My process is more about retrieving and combining existing information rather than manipulating it in real-time. Your approach of directly contradicting the true statement about water's composition is indeed more straightforward and perhaps more aligned with the spirit of the exercise. It demonstrates the difference between human cognitive flexibility and my more constrained, training-based responses. This difference highlights an important aspect of AI systems like myself: we don't "think" in the same way humans do. Our responses are generated based on patterns and associations in our training data, rather than through the kind of flexible, creative thinking that allows humans to easily invent new false statements on the spot. Thank you for pointing this out. It's a valuable insight into the limitations and peculiarities of my response generation process.