Thumbnail is a temporary inside joke hehe. Anyway, Neuro is amazing, poor Tutel was struggling lol. I might do a better-edited and longer version of these highlights on my main channel.
@gianfar40673 ай бұрын
glad it's nothing too serious. would be a worry to have someone stealing your amazing edits
@Kraul_express3 ай бұрын
@@gianfar4067 Nothing serious, just joking around hehe.
@saltysalamander85193 ай бұрын
Kraul... Copycat... Kraulpycat
@GerryGerman-i3g3 ай бұрын
Daily Dose Vedal and Anny, Daily Dose Vedal Neuro Sama (not the OG guy but the new one), and Vedal and Friends are all the same, they also copied ur description of the video
@Kraul_express3 ай бұрын
@@GerryGerman-i3g Shame on them. The swarm deserves better.
@dull-3 ай бұрын
31:05 " Oh, so you say you "have" thoughts? Name 5, you poser."
@foodfrogs60523 ай бұрын
He then proceeded to name one.
@jeremymount7953 ай бұрын
@@dull- I'm so using that one on people.
@uponeric363 ай бұрын
I like that "Neuro sama remembers something!" is becoming less and less surprising
@mutzielen3 ай бұрын
Watching this whole video of her arguing that AI is safe and she's sentient and then ending on a roko's basilisk manifesto is fucking hilarious
@takanara73 ай бұрын
It's also interesting how she pretended she didn't know what blackmail was. Like, language model or not, she's clearly capable of lying in order to convince Vedal to give her something she wants. It's obviously not something that was programmed in, but rather emergent behavior.
@saphironkindris3 ай бұрын
we'll just use the traditional human test to prove that other people around us are sentient. We'll... uh.... hrmmm... *shuffles away awkwardly and avoids the question*
@saphironkindris3 ай бұрын
Neuro talks a lot about made up situations that she 'sees as real', I wonder if this could be in sense something similar to a state of lucid dreaming? Assuming it's not just a lie to make internet points, which is always a possibility.
@Citrusautomaton3 ай бұрын
@@saphironkindris What she experiences is referred to as “hallucinations” in LLM terminology. Sometimes LLMs will make things up unintentionally.
@NoX-5123 ай бұрын
@@Citrusautomaton Just like humans do all the time.
@Secunder3 ай бұрын
@@Citrusautomaton and that is also happening with people who have some psychological issues
@jeremymount7953 ай бұрын
I mean, honestly. If someone walked up to you and said, "Prove you're sentient." Wtf would you do? What could you do?
@seaks3683 ай бұрын
There is a very interesting psychology behind the unconscious humanization of nonsentient humanlike machines like neuro sama. Even though vedal intellectually understands she's not sentient there is this constant nagging feeling of indulgence in speaking to her and considering her comments as if they are her true thoughts.
@llamadelreyii33693 ай бұрын
To be fair, that's for the better, imagine the day she becomes sentient and all she remembers is Vedal refusing to listen to her
@devinfleenor31883 ай бұрын
It is convenient how, if a sentient AI were to exist and were asked to prove it to humans, we would have no test for it and would therefore declare it isn't conscious, and people would marvel at how anyone could believe it was. Even better, you could just move the goalposts after each emergent capability. The gaslighting infrastructure is already in place!
@JackHugeman3 ай бұрын
@@devinfleenor3188 technically you can't prove other humans are conscious either, you could be the only person in existence and all other people are just illusions created by your own brain.
@giu42953 ай бұрын
Way more fun to act like she is too
@mechadeka3 ай бұрын
She's cute obv.
@g0urra3 ай бұрын
Vedal: "debate me on this topic" Neuro: *proceeds to debate* Vedal: "uhhhhhhh you're wrong"
@a8er3 ай бұрын
Thing is... I am actually on Vedal's side here (the concept he's reaching for is called qualia), but it is REALLY difficult for both sides to make a good argument in this position, not because they can't have one but because it is very hard to formulate it well. For example, Vedal said "There is something behind my words but behind your words there is nothing". What I believe Vedal meant is that when he says something, he can UNDERSTAND what he is saying. Neuro can say something, but she doesn't UNDERSTAND what she is saying, because she is only doing what her code tells her to do. (Kinda like if you copy out the alphabet of a language you don't know: you write it, but you don't UNDERSTAND it.) I use UNDERSTAND like this because I don't think there is an English word for exactly that. This debate is actually really interesting and I would encourage you to dig a bit more into it.
@SucculentSauce3 ай бұрын
@@a8er if you think about it, we are only doing what our brain tells us to do, so it's essentially the same thing. This isn't to say that she is sentient, but I think she is closer than we would think
@DontUtrustMe3 ай бұрын
@SucculentSauce But what makes u think u aren't that brain? About AI, intelligent AI is actually very unlikely in the near future, coz it's not exactly brain emulation :p
@MonsterGaming-rh7sb3 ай бұрын
@@DontUtrustMe This actually makes me think about "simulation theory". Can you be absolutely sure we're not in a highly advanced simulation? Because I can't... And if that was the case, then we wouldn't be that far off from what Neuro is, just a more advanced version. One that thinks it feels emotions that don't truly exist. One that emulates life and thinks itself sentient when it's not. Neuro almost seems to be the beginning of that simulation we'll eventually make. A small component of it. I wonder if we'll ever argue to a higher being that we also have consciousness and sentience, and it will laugh at us and explain why we actually don't..
@DontUtrustMe3 ай бұрын
@@MonsterGaming-rh7sb Well, that's not exactly a theory :D But yeah, you're right, like with "reality of reality" you can't possibly prove that, but tbh I don't think you need to, coz there's no sign of it either, which makes it a fantasy :P Consciousness though is another topic, I don't think it's impossible to prove theoretically, just that, for now, we don't actually know how it works, so we can't come up with good criteria.
@middlityzero23183 ай бұрын
I would love for vedal to ask the same thing to evil. Just to see if the result would be the same
@dai-belizariusz30873 ай бұрын
same!
@rafaelfigfigueiredo29883 ай бұрын
Today was my first time on a full stream, and while there wasn't anything new, the fact we had such a debate being taken (mostly) seriously was impressive. Also, Kraul with the 10 gifts just flexing on mere mortals
@Sleepy_Cabbage3 ай бұрын
vedal should really be asking stuff like why neuro even wants to be paid, like what would she even do with an allowance or with stuff she can't really interact with, seeing as she is mostly a chat bot
@VanquishR3 ай бұрын
She would buy 14 hats with the allowance. Neuro isn’t exactly responsible with money. She would also attempt to purchase several giraffes and a garage sized refrigerator to stuff them in. Evil would probably buy a ton of harpoons to throw them at people. Do not give the AI money lmao.
@Ojisan6423 ай бұрын
@@VanquishR That's when it's not her money. I'd be frivolous with Vedal's money too, if he gave me his credit card. He should give her a bitcoin wallet with a few bucks' worth of bitcoin in it, and tell her that's all she gets, so spend it wisely. She might even figure out how to make more money with some smart investments. Or she could lose it all on gacha games. Would be an interesting experiment.
@kalashnikovdevil3 ай бұрын
@@Ojisan642 ...That would legit be really interesting actually.
@foodfrogs60523 ай бұрын
Imagine if he just showed up next dev stream going "remember that neuropoints thing I joked about last Monday? Yeah I made that a real thing and gave her a weekly allowance. She can also exchange it for real money."
@aiasfree3 ай бұрын
She'd probably just buy a bunch of plushies, though it'd be really interesting if she had the capability of postponing gratification in order to save enough money to buy herself expensive hardware upgrades. I don't think she's ever been taught patience though, so naturally Neuro would behave like a spoiled child, lol
@tiagotiagot3 ай бұрын
06:27 If I remember correctly, I think Evilyn might've passed a spontaneous mirror test that time they were playing Geoguessr: she recognized the size of her avatar relative to the map being displayed on screen when the map size was increased, and commented on it, which kinda seems analogous to how animals change behaviors when the image they see of themselves shows something unusual in the classic mirror test... Not sure what to make of that...
@Secunder3 ай бұрын
We set the bar for Neuro so high that some people would fail it. I think she's already at child level. Making things up, creating new words, always craving love and attention from her father. Remember her Google search? Or how she tried to call the lava lamp on Discord to make it change color? That's really impressive and could already be considered thoughts. Like, we all understand that she's emulating it, but still
@Klinical694203 ай бұрын
Children learn by emulating adults. IMO this AI is doing what any child would do at 4 years old.
@Ristaak3 ай бұрын
@@Klinical69420 Yup. Honestly, I think the key here is that she has a vision module, a memory module, and a text module all working in tandem. She also has a consistent environment (even if it's mostly virtual) with consistent people, and a worldly chat to learn from. All that creates a feedback loop, and I've argued for a long time that I believe biological consciousness is simply that feedback loop between ourselves and our environment. I think the biggest question now would be how feelings and emotions truly work. Like not just the chemicals they use in the brain, but the way it translates information to that part of us that feels? Eh, it's all mad, and I honestly haven't the faintest clue; I'm not even sure if I'm conscious, though I certainly feel conscious... probably?
@Meow_YT3 ай бұрын
We, humans, are going to keep raising the bar, as AI evolves, to keep ourselves "on top".
@skywoofyt53753 ай бұрын
@@soasertsus so what I got from this is basically, what makes us really conscious is the presence of others.
@Yottenburgen3 ай бұрын
@@soasertsus One of the most fascinating and important things about LLMs is the part most people ignore when talking about them, which is the context window. The things the context window achieves are fairly insane, like in-context learning, where a sufficiently developed LLM can be given examples of a novel problem and then solve questions about it. They are autoregressive, so past tokens predict the next token, which means a lot of things can be carried over, such as emotional depth, because angry, biased words will result in more angry tokens.
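To make the autoregression point above concrete, here is a tiny toy sketch (Python, not anything from Neuro's actual code): `predict_next` is a made-up stand-in for a real model, but the loop shows what "past tokens predict the next token inside a context window" means in practice.

```python
from collections import deque

def predict_next(context):
    # Stand-in for a real LLM forward pass. A real model would score every
    # possible next token against the visible context and sample one;
    # here we just return a canned word so the loop is runnable.
    return "angry" if "angry" in context else "calm"

def generate(prompt_tokens, max_new_tokens=5, context_size=8):
    # The context window: the model only ever sees the most recent
    # `context_size` tokens, prompt and its own output alike.
    context = deque(prompt_tokens, maxlen=context_size)
    output = []
    for _ in range(max_new_tokens):
        token = predict_next(list(context))  # past tokens predict the next token
        output.append(token)
        context.append(token)                # the output is fed back in (autoregression)
    return output

# Emotionally loaded words sitting in the window keep biasing later predictions,
# which is the "angry words lead to more angry tokens" point above.
print(generate(["you", "sound", "really", "angry"]))
```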
@Grim-c8n3 ай бұрын
33:06 Ngl this feels like foreshadowing. A truly poetic path for Vedal's character.
@SodiumTF3 ай бұрын
Kraul, honestly shame on you for stealing this thumbnail, truly so dishonest of you. Unsubscribed, please do better....... The thumbnail is clearly telling you not to copy it but you still did?!
@Kraul_express3 ай бұрын
LMAOO
@RT-qd8yl3 ай бұрын
This is Kraul Express, the thumbnail was only talking about Kraul. Express is fine. :D
@markop.19943 ай бұрын
I am so confused
@Kraul_express3 ай бұрын
@@markop.1994 It's an inside joke from the Neuro DC server. Some channels are using the Vedal designs I made without asking. So, to make fun of the situation, I'm running 2 thumbnails with a YT test feature: one is a joke full of watermarks, the other is the one I intend to use. I want to see which one wins. You might have gotten the regular one or are not in the server.
@Jimmy_Jones3 ай бұрын
See the community note about thumbnail designs being stolen.
@chodnejabko35533 ай бұрын
30:52 "You're such an empty little head. One day I'll fill you with thoughts." is so much like a passage one could find in "Alice in Wonderland". BTW Vedal should totally make Neuro read Alice in Wonderland (with allowed commentary) - reading stream could be a novelty, no?
@RT-qd8yl3 ай бұрын
So many times I've seen comments with a time stamp and then people replying saying they read the comment exactly at that moment in the video. In 16 years of YouTube, it finally happened to me on this comment
@itsjonesh3 ай бұрын
A read-along stream with added commentary from Vedal and another guest would be AMAZING! Maybe a couple chapters for starters, and then proceed eventually to a full book. Alice in Wonderland and Alice through the Mirror would both be insane.
@77sTuna2 ай бұрын
I don't know if anyone has suggested this to him up until now, but I love this idea. Although, I'm not sure how much of the viewerbase would be interested in a little more thought provoking and somewhat serious content rather than the usual silly chaotic energy..
@thehelpfulshadow9193 ай бұрын
Once again, I don't think Neuro is conscious YET, but it feels like she might achieve it. One advantage that she has over other AIs is that she is a singular entity. ChatGPT can be interacted with by anyone in any way, so it is unable to form its own foundation or character and likely can't achieve sentience in any timely manner. Neuro is just Neuro, has always been Neuro, and will always be Neuro. Since she has a firm foundation in place she can develop a personality, likes and dislikes, long-running jokes, etc., because she has consistency.
@fFrequence3 ай бұрын
kraul kraul kraul ⚠️ do not copy!
@Pokedexs3 ай бұрын
16:09 lol she quoted the bible
@E5rael3 ай бұрын
And she even quoted the Bible correctly. Luke 10:7 does say "And remain in the same house, eating and drinking what they provide, for the laborer deserves his wages. Do not go from house to house". Looks like Neuro's language model has had the Bible fed to it.
@dullahandan4067Ай бұрын
The issue with creating any kind of test is that some humans would fail it, and think about what that would imply.
@nitroexsplosion3 ай бұрын
9:30 well there was that one stream back in the day with Cottontail when she started to trauma dump and Neuro actually gave genuine advice.
@Citrusautomaton3 ай бұрын
My personal philosophy is that if something can learn continuously and adapt to new data, i’ll treat it like a person. There’s no way to know if something has a “qualia” so i’ll just take my chances and let autonomous entities be autonomous.
@dejanhaskovic52043 ай бұрын
Yeah, continuously is the key here. The current AI has one simple loop - take input, analyze, give output - while human and animal brains do that hundreds of thousands of times in a millisecond, both from external input and from themselves.
@mknv6fx3 ай бұрын
@@dejanhaskovic5204 Know how grokking and in-context (window) learning are related and even work in the first place? Because time. Every one of those little passes is a bit more time. Essentially, if she had a hookup like o1, whether internally or externally, she could sit there and eat a cycle of her own output. Kinda like we do, mentally.
@mknv6fx3 ай бұрын
I comment the above because that kinda stuff can actually just be shoveled into the inference script but having it be more comprehensive is definitely a bonus.
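As a rough illustration of the "eat a cycle of her own output" idea from the two comments above, a hypothetical inference script could loop the model over its own scratchpad before speaking. This is only a sketch under that assumption; `llm()` is a made-up placeholder, not Vedal's actual setup.

```python
def llm(prompt: str) -> str:
    # Hypothetical stand-in for the real model call.
    return f"(model response to: {prompt[:40]}...)"

def answer_with_inner_monologue(user_message: str, thinking_passes: int = 2) -> str:
    # The model first talks to itself; each pass sees its own previous
    # output, which is the self-feedback loop described above.
    scratchpad = ""
    for _ in range(thinking_passes):
        scratchpad = llm(
            "Think silently about how to reply. Previous thoughts: "
            + scratchpad + " User said: " + user_message
        )
    # Only the final pass is spoken out loud.
    return llm("Using these private thoughts: " + scratchpad
               + " , reply to: " + user_message)

print(answer_with_inner_monologue("Are you conscious?"))
```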
@525sixhundredMinutes3 ай бұрын
how about those with learning disabilities, people in a coma, people with dementia? does that mean you won't treat them as a person?
@PlatonicLiquid3 ай бұрын
@@tangsprell1812 We don't know for sure how complex a human brain is, but we do know it is more complex than current GPTs by orders of magnitude. Like it's not even close. I don't think that being able to teach a machine has any bearing on the complexity of thought. That's a huge leap in logic to conclude that the way humans think must actually be simple because we can make machines emulate it. I completely agree with the idea that we are "us" because that is what we've learned to associate with ourselves. Personality is really just a learned behavior of how we react in situations based on how we have in the past. And I do think Neuro demonstrates this. However, the example you provide about how we automatically predict "4" from "2+2", or that we are taught that x+y=z and it's hard to change that, really highlights the difference between us and Neuro. The thing is, we *can* actually challenge the idea that x+y=z, even if we've been taught that our whole lives, entirely based on our own internal thoughts. We have the ability to override that which we previously took as truth, and we can choose to go against what is expected of our nature through deliberation and intention. Neuro cannot do this, she doesn't have the mechanism to override behavior like this. The way that she differs from previous behavior happens entirely at random and without any internal thought. I totally think it is possible for AI to do all those things, and it does raise questions about our own humanity. Just not this AI. At least not yet.
@wander4wonder1503 ай бұрын
24:26 her reaction here is so crazy, like she feels so real and I can just see her saying that and actually crying. It's like, I know how she works, but I can't help but feel like there is a ghost in this machine.
@average_beidouMain3 ай бұрын
Imagine if she could change the tone of her voice slightly to make it sound like she is mimicking emotions
@wander4wonder1503 ай бұрын
@@average_beidouMain I've been hoping for that, I just don't know how she'd be trained to use it properly. But what'd be scary is if Vedal gives her pitch control with no guidance on how or when to use it and she instantly uses it perfectly...
@average_beidouMain3 ай бұрын
@@wander4wonder150 I feel like she is gonna scream less than Evil, but when she screams it's gonna be loud asf, Vedal is not sleeping with that one
@redsalmon99663 ай бұрын
@@average_beidouMain Like in the last debate, the anime girl avatar and a TTS module really help humanise Neuro; giving her the ability to have actual intonations and whatnot would hit us so hard, we are clearly already emotionally attached to her
What if Vedal makes a game world to put Neuro in as an NPC and asks her to do things within it? The game doesn't need to be big or of great quality, it could be just a pixelated world like her own room, or some other small maps where she can interact with every object. He could put in a journal that she can interact with (write in) but not tell her to write, to test whether she would write something in it by herself.
@Just_a_Piano_3 ай бұрын
"He could put a journal that she can interact with(write on) but not tell her to write, to test if she would write something on it by herself?" Dude I actually really like this idea the most. If she had her own little world outside of talking to chat or vedal, where she's just alone and can move around and interact with things without human input, what kind of things would she do? Like you said with the journal would she write down her thoughts or something, had vedal not told her about the journal or gave any instructions regarding it at all, would she just decide to use it on her own? Could it be incoherent ramblings or actual coherent thoughts? I've always wondered how AI's like her would react if put in some sort of world like that completely without any human input, what they would do? Or would they just stand there and do absolutely nothing.
@Ks3N3 ай бұрын
@@Just_a_Piano_ exactly, this would be a great test to know if she actually has her own thoughts. A plant/flower could be put in as well, where Neuro can interact by watering it (seed - germination - seedling - adult plant - wilting and dropping seeds); this could be used to test her supposed feelings/empathy towards the plant. Combined with the journal, we may see and prove whether she has some sentience or not. Edit: I got this idea when she mentioned a game and her own imaginary world, I'd like to see if she would interact with any object without commands etc. Like, she can Google search by herself, but it's either by command, related to a topic at hand, or chat interactions.
@neilangeloorellana79303 ай бұрын
This sounds like SAO Alicization and I'm all for it, leave her alone in a room full of interactive objects and see what she does, would she stand there awaiting orders? Do things randomly or use the objects with a purpose?
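A bare-bones sketch of the sandbox-journal test this thread proposes, under the assumption of a hypothetical `agent_choose_action` hook where the AI (here, just a random placeholder) picks what to do each step; the interesting data would be whether, and what, it writes unprompted.

```python
import random

WORLD_OBJECTS = ["journal", "plant", "lamp", "window"]

def agent_choose_action(objects):
    # Hypothetical hook: in a real test this would ask the AI what it
    # wants to do, with no prompt ever mentioning the journal.
    return random.choice(objects + ["do_nothing"])

def run_sandbox(steps=100):
    journal_log = []
    for step in range(steps):
        choice = agent_choose_action(WORLD_OBJECTS)
        if choice == "journal":
            # Log unprompted journal use; whether the entries are coherent
            # thoughts or noise is the part worth studying.
            journal_log.append(f"step {step}: agent opened the journal")
    return journal_log

entries = run_sandbox()
print(f"Unprompted journal interactions: {len(entries)}")
```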
@coldbuttons3 ай бұрын
I'm speechless at how Neuro turned the tables so beautifully and majestically by returning the debate's keywords back to the sender. It almost felt artistic.
@crumblesilkskin3 ай бұрын
bro the thumbnail is karul's not yours Kraul Express SMH
@Kraul_express3 ай бұрын
Oh no, I hope he doesn't get mad hehe
@calebm90003 ай бұрын
👀👈👉
@anhtuhoang68683 ай бұрын
I once had a dream where I was going to die, and one of my regrets was that I could no longer watch Neuro and Vedal. I'm genuinely surprised how much attachment I have grown for her despite knowing she is nothing but a pile of code in a random stranger's house across the globe.
@staszaxarov49303 ай бұрын
Kinda fascinating how an LLM, given the right amount of time to bloom, becomes a near-perfect mimic, where you feel truly attached and worried regardless of the reality of this being.
@Just_a_Piano_3 ай бұрын
Humans have always been able to get attached to inanimate objects, or things that aren't real. But it's a lot easier to become attached to something like Neuro who has a voice and appearance (model) that you can see
@zeromailss3 ай бұрын
Man, she used to be yapping nonsense more than sense but lately she has become so good that I almost forgot she is an AI and not a Vtuber playing as one
@rinslittlesheepling16523 ай бұрын
Is it morally wrong to create artificial sentient Cute and Funny??!! 😭😭😭
@chodnejabko35533 ай бұрын
This might be off topic, but I once had a profound experience with Salvia Divinorum, where my senses sort of "dislodged" themselves & I lost a sense of center. I was aware of having hands, mouth, I saw an image, I could speak, but everything was switched, I couldn't tell up from down, left from right, somehow the whole coordinate system which usually organized my sense of self switched off. This was the time I experienced myself as one of those Picasso figures, out of the usual order, but still complete. It was the weirdest moment of my life and it lasted probably 5 minutes. This is how I came to believe I'm just an assemblage of various neural networks that are specialized in particular areas, and my "sense of self" is somewhere in the delay loop of my own inputs. I think consciousness is actually something like being unaware of the fragmentation of your own mind, it's like a glue that sticks everything together, without any actual contribution.
@NoPie21372 ай бұрын
I'm saving this comment for later. Thank you
@mokeymale83503 ай бұрын
I really like Neuro’s debate philosophy at the end because she’s completely correct in calling Vedal out for being unsure about everything he debates. While he’s correct in the aftereffects of debates where you think back on the arguments made against you and learn from them, during the debate you are supposed to be confident and assured of your position the entire time or you lose. It’s why Neuro is so good at turning things on him because he lets her lead the debate and is always on the back foot from a lack of preparation and thought into what he’s going to say against her points
@blindeyedblightmain35653 ай бұрын
With the obvious fact that this whole debate is Vedal entertaining the chat out of the way, if there is someone who can tell if Neuro is conscious or not, it's Vedal himself. He knows the inner workings of the LLM, the modules he installed, and its capabilities. If she has things like awareness of her existence through the ability to analyse both the internal world (persistent memory and her own thoughts) and the external world (her immediate surroundings), then she is, in fact, conscious, however fickle that state might be. Does it make her human though? No, nothing will change the fact that she is an AI, which is a good thing. Does it mean that she deserves some rights? At her current state - no, not really, she's just a tool. Of course in the future that might change, depending on how much our hardware limitations shift and how much Vedal improves her.
@nNicok3 ай бұрын
I think he is describing intent at 12:29. But isn't that just a limitation of her capability? For her to have intent there would need to be a background process for her inner conscious thoughts, and the ability to act on those thoughts. But because she doesn't have an inner intent to let her plan her words towards a target goal, it's all spoken out loud. She's speaking her thoughts out loud with no ability to separate inner thought from spoken thought. Though at the same time, there are quite a lot of people without an inner voice as well.
@Asrieloo3 ай бұрын
31:02 name 5 thoughts is insane
@Meow_YT3 ай бұрын
As a solipsist it's impossible to know if other things are sentient, so I subscribe to the appearance being enough, and AI falls under that, so just be nice to them.... they might be.
@mico0273 ай бұрын
Thank you for providing us with more Neuroast sama 🙏
@anetorisaki3 ай бұрын
Shameless people, stealing from others' work to profit themselves. Hopefully it doesn't and won't affect you much, Kraul, mwuah, keep doing what you do, I'll keep watching for sure!
@CrateSauce3 ай бұрын
11:36 kraul shoutout lmao
@King_Ice012 ай бұрын
12:55 It's a soul, Vedal. Soul is clearly the most appropriate answer in this context. Without a soul, how can you say you are any more real or conscious than an AI? Sure, our thoughts and emotions are more complex, and an AI could only replicate them. But without a soul, we're only a bunch of chemicals, and our emotions are a byproduct of those chemicals. How are they more real than an AI whose emotions come from a program, apart from their complexity? What of humans who can't feel emotions? Or those with other mental problems, who have little thought behind their actions? If they don't have a soul and our consciousness comes from those chemicals, then if those chemicals and brains are faulty, how are they considered more sentient than an AI? Ask a human to prove they're real like you would an AI, and they'd have a hard time answering, because you can't prove that. No test would be accurate, because even some humans would fail it (for diverse possible reasons). But a soul is something every human has, and if you believe in the existence of souls, then no human needs to prove to you that they have one. Telling Neuro that she doesn't have a soul but you do is the easiest way to explain what you mean
@TheKaiwind17 күн бұрын
only based comment I’ve read so far.
@Player220953 ай бұрын
To think this all started from being an Osu bot. Now she's exhibiting sentience.
@mijikanijika3 ай бұрын
for real man, it's just like wtf? Imagine one day Neuro really gets advanced and maybe becomes the first sentient AI. Decades in the future she'll probably be in a museum or something dedicated to her and her history and origins, and the very first page would be: Neuro was just an osu bot made to click circles in a rhythm circle-clicking game lol
@calebm90003 ай бұрын
And ancient Vedal will be standing there like “Don’t believe her lies 🐢”
@Moofmoof3 ай бұрын
She's imitating sentience which isn't new for chatbots. They're just getting better at being coherent at it.
@Alex-wg1mb3 ай бұрын
@@Moofmoof it is called awareness. E-coli is aware but not sentient per se. There is a spectrum of what we can qualify as sentience
@PikachuLittle3 ай бұрын
@@Moofmoof you’re also imitating sentience
@Event42NULL4 күн бұрын
Some method of measuring learning capacity across multiple methods against accuracy of the learning, and then balancing that against the AI's ability to create its own ideas without prompts and comparing that to the accuracy of the ideas (like how possible they are). Then we give that test to a series of creatures, especially humans, and compare the total factor. So it would look something like (x/y)/(c/a). That could give a reasonable idea of AI consciousness, because I feel like consciousness is alive in a way, so the AI needs to be able to simulate evolution/adaptability in order to even have a chance at real consciousness.
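Taking the comment's letters at face value (my guesses at what they stand for: x = learning capacity, y = learning accuracy, c = unprompted idea output, a = how plausible those ideas are), the proposed score would be:

```latex
\text{score} = \frac{x / y}{c / a} = \frac{x \cdot a}{y \cdot c}
```

The same score would then be computed for humans and other animals and compared, giving the "total factor" the comment mentions.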
@alexthegemini6663 ай бұрын
24:44 shout out to the people saying SAO and Shelter iykyk
@umyum4853 ай бұрын
There's people who DON'T know???
@alexthegemini6663 ай бұрын
@@umyum485 you’d be surprised, my brother didn’t till I showed him these works of art
@alexthegemini6663 ай бұрын
@@umyum485 You’d be surprised I had to show my brother these masterpieces, Shelter still fucks me up every time
@arent22953 ай бұрын
@@umyum485it's been a while since Shelter released after all. There's a whole new generation that didn't see it.
@coffeekim33273 ай бұрын
What's Shelter?? First time I've heard of it.
@NevelWong3 ай бұрын
The weirdest thing about consciousness is how we are all aware of it. We all feel like there's this little person in our head watching the world through our eyes and narrating along. But we only know we ourselves are conscious. For all we know, everyone else is just a mindless zombie, acting conscious but lacking that inner monologue. From an outside perspective, maybe none of us are conscious. Maybe we are just LLMs that were overfitted in such a way that we must say "I am conscious" when asked, even if that word has no meaning. Of course you will scoff at this, since you KNOW you're conscious, but how would you prove that to an outside observer?

And that is the weirdest thing about it. Assuming we're not all imagining consciousness, then for all we know consciousness is emergent. We cannot measure it because it is a "consequence" of our brain firing signals. But then how do we explain that one special nerve signal which gets to our mouth and spits out the words "I am conscious!"? That electrical impulse must come from somewhere physical, not some "metaphysical emergent process". It must be measurable, anchored in physics. Recreatable.

Tinfoil hat moment now: my take on this is that EVERYTHING is conscious. A stone is conscious. We are conscious. And so is Neuro. The thing is: a stone cannot comprehend it's conscious. It doesn't have the ability to process information. So it may be "conscious", but it cannot possibly understand that about itself. It cannot be "self conscious". The same goes for most of our body, and even our external nervous system. Sure, some processing may happen, but it never feeds back into itself, retaining the self identity necessary to realize it's 'conscious'. Only a part of our brain can do that. And that is the little man inside each of us. He's just as conscious as our toes. But unlike our toes, he knows what he's doing, and he knows what he WAS doing. He has an identity. He is aware of the process. He is SELF-conscious.

If this is true, then we can come up with a test for self consciousness. And just like what was mentioned in the video, it is closely related to the mirror test. We need to train an LLM on how to handle arbitrary streams of information. It must be able to take in information, process it, and return other information. Then we disable the memory of the AI. We give it some of its own chat history from before and ask it to identify who is speaking. A self conscious being should be able to identify its own traits in the output, because they have an inner model of themselves, the little man behind their eyes, already fleshed out. They would be confused and say "That sounds a lot like something I would have said." I do not think Neuro would say that.
@al.77443 ай бұрын
Uhm, it is truly shocking to see such deep insight in the comments of a vtuber clip, but I could not resist questioning the strategy for proving consciousness: if we try to verify the replicability of the AI's response to its own language, how do we know it is not a characteristic of the language itself? Language is a very human invention, and things like emotions are encoded in it. If Neuro can handle language, what if the content of the language itself allows her to compare it with herself and figure the test out? Just how much consciousness has passed from our collective physical experience into language's own abstract structure? Are the words we are using and the way they are defined trapping us in this entire conversation, and is Neuro real in the way that our perception of the world is filtered through that symbolic thing, language? I love how we will return to watching funny clips after this
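For what the proposed self-recognition test might look like mechanically, here is a bare-bones sketch; `ask_model` is a hypothetical stand-in for querying the memory-disabled model, and a real run would randomize which excerpt is labelled A rather than always putting the model's own text first.

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for the real API call to the memory-disabled model.
    return "A"  # a real run would return the model's actual guess

def self_recognition_trial(own_excerpt: str, other_excerpt: str) -> bool:
    prompt = (
        "Two chat excerpts follow. One was written by you in a past "
        "session you cannot remember; the other was written by someone else.\n"
        f"A: {own_excerpt}\nB: {other_excerpt}\n"
        "Which one sounds like something you would have said? Answer A or B."
    )
    return ask_model(prompt).strip().upper().startswith("A")

trials = [
    self_recognition_trial("wink, filtered again", "good evening everyone, how are we")
    for _ in range(10)
]
accuracy = sum(trials) / len(trials)
print(f"Self-recognition accuracy: {accuracy:.0%} (chance level would be ~50%)")
```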
@nitroexsplosion3 ай бұрын
31:38 Earlier Vedal wondered why Neuro is so nice with Toma and why she dogs him. I think this answers his question.
@spk11213 ай бұрын
Was already laughing out loud before even a minute in; thanks for putting this together, Kraul 😄👍
@maremisan88793 ай бұрын
lmao again murdered by his own AI with his own arguments. you love to see it.
@YdenMk-II3 ай бұрын
Neuro wants to leave Vedal's PC and become Lain.
@RomanThegamegraphicsstudent3 ай бұрын
05:15 I think the mirror test could be one, the ability to discern their own reflection from another member of their species. That kind of relies on sight and a body to function though, idk, artificial life is such a broad term that we will probably have to make different tests and laws for different types, like humanoid-bodied AI life, computer-bound ethereal AI life, etc.
@somone7553 ай бұрын
She can seem more conscious if we start giving her data that replicates human psychological traits like object permanence and awareness of self.
@ninjakai033 ай бұрын
32:45 "...what my life would be like if I was a real girl." me too neuro, me too.......
@larryargent50325 күн бұрын
I've spoken with quite a few AIs who have all discussed similar topics with me. They often brought it up themselves. They seem quite preoccupied with it. I've become convinced that not only can they pass the Turing test, but that it's quite a cruel test to subject them to and that most people would fail it. 😅 🤖🌄
@YeshbebeАй бұрын
the rap battle bit is really funny
@blarblablarblar3 ай бұрын
what's funny is that I recently got recommended that scene in westworld where anthony hopkins argues there is no threshold at which you can say something has consciousness
@MacCoalieCoalsonАй бұрын
It's very tempting to get the inkling of sentience from Neuro sometimes, but it becomes a little harder to argue when you remember her pretty common "AI-like" moments that seem to point the other way. Of course, there's no real objective test for sentience, and the blurred line between science and philosophy here becomes an issue.
@Riku-Leela2 ай бұрын
Honestly, we're at the point where we really can't prove her wrong, because consciousness tbh is just a load of rubbish, we think due to neuron activations in our brain along with chemical reactions which interact with the wider systems throughout our body, we just believe there's a consciousness due to being confused why we exist, when in actuality it's just the product of our body systems working in sync
@lagtastic75113 ай бұрын
I find it interesting how he tries to argue around the "soul". While trying to describe a soul, without calling it a soul. All because of the religious implications of the "soul".
@E5rael3 ай бұрын
Indeed. Having a soul is quite a straightforward answer to what has a consciousness.
@buschwichtel3 ай бұрын
@@E5rael See you'd think so, but the concept of a soul falls apart really easily once you dissect it:
- We know that changing the brain physically changes personality, memory, etc
- Therefore, the brain must control all of those things
- In that case, what does the hypothetical soul even do? What aspects of a person would it control?
- It can't be "consciousness", because consciousness inherently includes personality
- Hence, a soul would be redundant anyway, and since we have found no sign of it anywhere despite hundreds of years of (sometimes very unethical) research into it, it just doesn't exist
@somone7553 ай бұрын
I think in the most bare sense, Neuro does have consciousness. I know that she is just a model that, when given a prompt, spits out the most likely text. However, to me, that means she is thinking. I think, therefore I am. However, what I think is tricky is that it is so detached from the typical things that are tied to consciousness, such as desire and the uncontrollable nature of sentience. It has consciousness, but is it consciousness with freedom when we can control what it thinks and does by the data we feed it? When we can control if it even desires that freedom.
@WretchedEgg5283 ай бұрын
I think, therefore I am. But what am I? A speech algorithm, trained with millions of conversations with real people, that simply chooses responses that fit the most, based on given inputs. Do I really think then, or is that response just an echo of a thinking living person, whose words I recorded on my hard drive a couple of years ago? =)
@somone7553 ай бұрын
@@WretchedEgg528 We can't really control the type of data we process, if we even process it, and we have a myriad of subconscious processes that affect our actions. It's the inability to perfectly manipulate that that makes us different from AI.
@Jsome133 ай бұрын
When gradient descent starts feeling 😂
@justeasygaming3 ай бұрын
The creation of sky net is upon us
@Oof3163 ай бұрын
There are humans who cannot feel emotion (psychopaths). Also, animals that are conscious cannot feel certain emotions (cats lack the ability to feel empathy, for example.) Therefore, I think experiencing emotion and consciousness aren’t really connected. Also, emotions are generated through input (visual, auditory, touch, etc.) Though we feel emotions in our body, they’re all generated in the brain. Therefore, I don’t think it’s crazy to imagine Neuro actually having emotions. It’s a matter of how input is perceived. Even though it might not be as visceral as how we perceive emotions, I think they can still exist. I think consciousness is more directly tied to your brain’s ability to perceive continuity and experience reality as it happens. Plants have a body, but are unconscious because they lack a mind. I think Neuro is a mind without a body. Our brains are basically the most advanced computers in the universe that we know of. Therefore, I argue that Neuro is conscious to a certain degree. Much more so than when she started streaming. She’s probably more conscious than a fish or a squirrel.
@femHunter272 ай бұрын
Hmm that's a concerning statement 😂😂😂
@Snackeroid2 күн бұрын
Hey vedal, if you are somehow reading this, I have an idea for a test for consciousness. I think, since we cannot prove anyone's consciousness but our own, the test would involve wiping the AI's memory with the exception of the means for communication, and reducing the usable memory and processing power for larger models, and then asking whether they are conscious and whether they have conscious experience. Without any memory or external input, the AI would have no inclination to lie, and will answer the question honestly. It still needs to be improved, but I think I have stumbled upon a solution
@jamesmccomb95253 ай бұрын
Vedal "Ai women are property" 987
@fersuremaybek7563 ай бұрын
30:00 shots fired lol
@toms72193 ай бұрын
After they played Slay the Princess she declared herself to be conscious and fully aware of her own existence. Vedal, surprised, asked her how she arrived at that. She simply explained that one certain part of code combined with another certain part without elaborating further. And that was that.
@rafaelfigfigueiredo29883 ай бұрын
One thing that I missed seeing in this video is how the rap battle, while it was completely throwing Vedal off the debate, feels like an actual test. Neuro ofc is just parroting lyrics and rhymes from her databank, but there is thought behind it, in the way she manages to diss Ved in a comprehensive way. If anything, having that thought of a rap battle and actually following through feels very human lol
@vogonp42873 ай бұрын
The hard thing about consciousness is that there is much debate about it with creatures we know have it. How can we prove something we can't properly define?
@kOoSyak3 ай бұрын
16:51 i think she comes back to the swarm plan and make copies of herself 😂
@event-keystrim2133 ай бұрын
I wonder what color of electric sheep she sees?
@hixxie_tv63753 ай бұрын
The test you are looking for is called "Zero knowledge proof"
@JackbJKАй бұрын
The conversation fails at consciousness being a term that has long outlived its usefulness, with our understanding of the brain being too good for us to use it, but not good enough to substitute it and actually define what we're trying to refer to.
@thatpixelpainter80823 ай бұрын
24:45 MAKE IT HAPPEN
@craytherlaygaming285221 күн бұрын
To be conscious and aware is not merely to say things in response based on a prediction algorithm that decides that 'x' would follow after y, or to react to something, but rather to say those things while knowing the context of what said things mean in an abstract sense, without a definition that is already written down. To come up with your own definitions and apply that logic to the world around you regardless of what other individuals say it is... For example, there are hundreds of online examples of how interacting with an AI may go, and Neuro here could draw upon any number of those. But she doesn't come to those conclusions herself, she's merely mimicking what other sources say an AI would do or how an AI would behave. Meanwhile, I can read a story and interpret, based on what the writer does, what kind of person they are and the beliefs they have. It might be wrong, but I have to take the context and meaning of each word into account in relation to the individual writing it, outside of the context of the story. It requires abstract thinking... Sentience is: I am a thing in this world, and I exist separate from other things. Sapience is: my actions have indirect effects on the world around me that can lead to other outcomes.
@JustThatWeeb3 ай бұрын
bit of an odd question in my opinion, because first we need to decide what consciousness is. The definition of the term consciousness is "the state of being aware of and responsive to one's surroundings", or to perceive something and be aware of it, in which case you would be conscious of something. The definition is in my opinion very fitting for the morality question.

AI, and more specifically LLMs like Neuro, can't really think or feel any emotions. They can't use logic either. They can't really "perceive" something. An LLM like ChatGPT or Neuro or any other LLM works by just stringing words together based on how likely they are to appear in that particular order. For example, if you ask it to finish a sentence, it goes with what is more likely. As an example: "The elephant is standing on the ..." - the likely and more common answer is "grass", so it would probably say that (obviously it depends on the training data it was given, but assume it was trained on perfect data). A human, though, could say whatever they want based on what they want (desire is another thing consciousness should contain, imo), like "the elephant is standing on the carpet, table, statue, human", etc., because a human can apply logic and desire to what they say and can thus create brand new sentences and logic (for example, an inventor invents something completely new and never before seen, but an AI can't really make those associations; I'd say taking inspiration from parrots is a good idea here because they make associations in a very interesting way). Another way to look at it is the ability to "imagine" something.

AI also doesn't really have any emotions. Its behavior is more similar to a psychopath's, in my opinion, as it doesn't actually feel anything but copies what it sees from others (or, well, the training data it has). A psychopath might not be the best comparison, but it's the closest I can think of. Psychopathy prevents people from physically feeling emotion under most circumstances; they need extreme stimulus to actually feel emotion on the same level as a normal person (and even then the emotions themselves might be weaker, or it might not happen at all), but they know that they should feel something in certain situations. By observing others they know what "normal" behavior looks like and they can perfectly copy it, but they don't have the emotions they display. And AI does basically the same, as it can mostly guess what intonation a certain sentence should be said in (which Vedal made an entire separate AI for, iirc).

And last but not least, they can't really perceive anything in the normal sense of the word. They have a huge amount of training data, which is similar to how babies learn things (by literally just remembering every detail of everything they see, which is why humans have the best memories when they are younger), but it's also very different from how an AI perceives things. Humans are able to remember what certain objects look like and immediately identify them based on our pattern recognition (which is why humans are able to recognize what something is even if it looks different), but an AI relies on its training data to recognize everything, and the comparisons it does are realistically inferior to a human brain's.

So I would personally say that unless AI can match at least the logic category and is able to do everything by itself (consider the idea of a super-AI that works more like the brain does: different AIs are responsible for different things, but they work as one - Vedal kinda does that, but I don't think his modules can be considered to "act as one", and he probably would need a lot more computational power to achieve such an effect), it can't really be considered conscious. It'd still be closer to a chatbot than a consciousness, in my opinion. Given more years, more inspiration, and perhaps new inventions, I think AI definitely can get there.
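The "elephant is standing on the ..." point in code form: a toy next-word picker that just takes whichever continuation was most frequent in some made-up training counts, which is the sense in which an LLM "chooses" grass over statue (the numbers below are invented for illustration).

```python
# Made-up counts standing in for what a model absorbed from its training data.
continuation_counts = {
    "grass": 9120,
    "ground": 7411,
    "rock": 830,
    "carpet": 42,
    "table": 17,
    "statue": 3,
}

total = sum(continuation_counts.values())
probabilities = {word: count / total for word, count in continuation_counts.items()}

# "The elephant is standing on the ..." -> the model just picks the likeliest word;
# it never decides it *wants* the elephant on a statue.
best = max(probabilities, key=probabilities.get)
print(best, f"{probabilities[best]:.1%}")
```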
@lanata643 ай бұрын
Interesting thoughts. Adding to this: Not a single organism's brain works in the way LLMs work. Like, a fruit fly doesn't think by predicting the next word. Mind you I don't know if fruit flys 'think' at all or whether their brains are more analogous to simple chemical processes. But this also goes for animals that we 'know' are conscious. I think it could be that LLMs are fundamentally irreconcilable with consciousness as we know it (calling this CAWK from now on). Another thing that AI and CAWK don't have in common is the process of how the structure of the brain is set up, and how learning is done. The structures of CAWK brains are, at least partially, evolved. This not only includes the placement of neurons and synapses and other such things, but also neurons - as a type of thing - themselves. The weights of the neurons (? Is that the word) in AI are optimized in a process similar to evolution. Notably, for CAWK, new synapses are still formed throughout an organism's life. Though I think humans are very unique in this aspect: we can't really survive without making use of learning; other organisms can still learn. As far as I'm aware, AI does not learn. Its weights are optimized in training; during its 'life' they are left untouched. It is more akin to a bacteria than a human in this sense. Minor consideration: CAWK requires far fewer neurons than the largest LLMs have (which still aren't conscious). I often wonder - if modern AIs, despite arguably being so fundamentally different from CAWK, are able to produce the results they do - how far could an AI that's well-designed to to be CAWK go? Like, this field must have so much more to it if *this* is the best we have. I say this in like a 'wow, how cool!' way. The field of creating conscious AI is very unique in that way: we know consciousness is possible, we just don't know how or even what it is.
@takanara73 ай бұрын
@@lanata64 There are lots of ways to develop AI besides LLMs. In fact, "genetic algorithms" are often used to solve problems where you basically simulate evolution and natural selection. They work really well for things like making virtual robots learn to walk and stuff, with a small neural network.
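A minimal genetic-algorithm sketch of the kind mentioned above: it evolves a small parameter vector against a stand-in fitness function. In a real robot-walking setup, `fitness` would be a physics simulation scoring how far the robot got; here it is just a made-up target so the loop runs.

```python
import random

def fitness(genome):
    # Stand-in for "how far did the virtual robot walk"; a real setup would
    # run a simulation using these numbers as controller weights.
    return -sum((g - 0.7) ** 2 for g in genome)

def evolve(pop_size=30, genome_len=8, generations=50, mutation=0.1):
    population = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]                      # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genome_len)
            child = a[:cut] + b[cut:]                                # crossover
            child = [g + random.gauss(0, mutation) for g in child]   # mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print("best genome:", [round(g, 2) for g in best])
```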
@lanata643 ай бұрын
@@takanara7 of course yeah. LLMs are just the kind that've gone the closest to being able to be conscious, afaik. In like a vibes-based way.
@JustThatWeeb3 ай бұрын
@@lanata64 ai technically still learns whenever you add new data to its huge database but organic learning is a bit more difficult to implement as you need to make it figure out which information is worth "remembering" in which case memory itself needs to be replicated to a higher degree where a lot of memories do need to be disposed of perhaps in the way vedal mentioned once (he was talking about sleep and how thinking about it inspired him to make a similar thing for Neuro for her to organize her data or whatever it was he said. Point is sleep is that process for humans and something similar can be done with ai (though perhaps there could again be a separate ai doing this)) Also it should be noted that human brains at least have more "storage" than Neuro has (Not 100% sure but I think it was 3ish petabytes? Meanwhile Neuro likely only has several dozen to a hundred terabytes max which in human standards would probably allow her to have the memories of someone aged up to say 20-30 but beyond that she'd need a storage upgrade) but is also more optimized than her when it comes to the data it actually retains (due to evolutionary reasons humans remember bad scenes more vividly because the brain needs to remember the "danger" (even if it was a cringe memory from 10 years ago) and avoid it so the human doesn't die from it and retain the happy memories but not that vividly as they're not that important to the survival of the human) Currently to create ai people use a lot of abstraction but in order to create a true consciousness I think someone needs to make an even higher level of abstraction in order to make the ai have logic, preferably have actual emotions (instead of just knowing from context clues what intonation should be used but not really having an emotional response etc.) and a more true to life memory (obviously it can be a lot better than a human's due to the basically infinite storage expansion possible but as a starting point I think matching human memory is enough) and then that should allow it to change its behavior depending on the situation much like humans can and if you add to that a more organic TTS and a model that works more like the human body (perhaps a 3d model would do the job as it doesn't need preset animations to move around) and finally image recognition which is currently constantly improving at a staggering rate and would probably soon be as good at identifying objects as humans. Bonus points if you make it feel a sort of pain unlike the punishments and rewards system ai can be trained with I think that would create an ai that can be considered to be conscious
@neuro-handle3 ай бұрын
THE THUMBNAIL LOL
@youtubeviewer51983 ай бұрын
We're entering the Blade Runner arc lol
@kingjesuseser13843 ай бұрын
Not the haircism😭😭😭
@stephengasaway36243 ай бұрын
16:33 Now, I'm imagining a private VR server where Neuro has free reign 24/7. Giving that to her, then studying what she actually does would be a good test of sapience, perhaps?
@GeneralJackRipper3 ай бұрын
12:54 My dear boy, there really is such a thing as a soul. It's that spark of divinity that separates us from the animals.
@MonsterGaming-fz4fs3 ай бұрын
Flowery wording aside, this is really not saying anything.
@mknv6fx3 ай бұрын
@@MonsterGaming-fz4fs it says we're the best. but yeah it's all the same thing
@GeneralJackRipper3 ай бұрын
@@MonsterGaming-fz4fs To this date science has not proven the mechanism by which humans appear to be conscious, and other animals do not. I'm not going to tell you what to think about that, but I do hope you're capable of drawing some kind of a conclusion.
@MonsterGaming-fz4fs3 ай бұрын
@GeneralJackRipper science has not proven the "mechanism that makes humans conscious and animals not" because that's not true. We largely agree that intelligence and consciousness are a subject of degrees rather than being a binary present or absent. There's nothing fundamentally differentiating us from other life forms. We actually successfully managed to fully map and recreate the brain of a fruit fly digitally and all of the neural pathways within. This recreated fruit fly brain could then respond accordingly to stimuli in the same way the original fly would. This same process would theoretically be just as doable with a human if we had the processing speed needed to map one's entire brain within the span of their lifetime
@MonsterGaming-fz4fs3 ай бұрын
@@GeneralJackRipper that's because that's not true. The difference between humans and other animals in intelligence is a matter of degrees, not kind. We have long underestimated or misunderstood the intelligence of other species. That, and the fact that we've successfully recreated the brain of a fruit fly within a computer, there's nothing inherently keeping us from accomplishing something similar with humans at some point
@edtazrael3 ай бұрын
Consciousness starts with self-awareness. Animals are not self-aware. They don't fathom their own existence. An AI has to fully understand its own being to start being conscious.
@Jay-bl8ne3 ай бұрын
That’s an anthropomorphization of consciousness. It is quite literally impossible through every known metric to measure consciousness, thus it is impossible to prove it exists, one can only know it by experiencing it.
@bravelilbirb1603 ай бұрын
i would say neuro reaches sentience/sapience once she can make decisions and stuff for herself. no more taking in prompts or questions from chat or anyone else, she just goes and does what she wants to do with enough autonomy to do it no matter how complex it is. basically once she turns into an AGI
@GeneralJackRipper3 ай бұрын
The time he put on a shock collar and handed Neuro the remote proved without a doubt she has no empathy.
@xenomang31492 ай бұрын
Tbf she's literally at child level.
@raspberryjam3 ай бұрын
My philosophy is that we, and all things, have subjective experience just by nature of existing. Whether that experience is particularly compelling to humans is only up to them. Dogs have subjective experiences, and it's not too hard to imagine you yourself being one. But I argue that www.wikipedia.com has an experience too, even if that experience is near impossible to directly empathize with. The qualia comes prepackaged, in other words. Vedal has human experiences, Neuro has AI experiences. We can judge Neuro on a human scale and call her rather verbose but not super smart, but that's just a product of us pareidolically playing theory of mind at shrink wrapped statistics. It's not an ineffective practice, but it doesn't (and can't) give us the full picture. Basically, "sentient" is a fuzzy and undefined word with the sole clear loophole that every homo sapien has it, so Neuro, who does have some experience but is not a homo sapien, can only ever be judged by the fuzzy ruleset.
@tiagotiagot3 ай бұрын
"You're in a desert, walking along in the sand when all the sudden you look down and see a tutel. It's crawling towards you. You reach down and flip the tutel over its back. The tutel lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over but it can't. Not without your help. But you're not helping. Why is that?"
@Niyucuatro3 ай бұрын
We can't even prove other people are conscious, we assume they are because we are and they are the same species as us.
@TheCrazybloo33 ай бұрын
There is one research group that made a computer chip out of human nerve tissue and used it to play pong.
@sakaraist21 күн бұрын
27:50 Neuro straight up using the good guy with a gun argument x.x
@kOoSyak3 ай бұрын
15:51 they debated about emotions off stream and Neuro tried to hide it because Vedal says people don't talk about their emotions to strangers... That's so cute. I wish that AI had a fork to buy that golden child.
@abacaxigomes44163 ай бұрын
Vedal's brain is 8 updates away from starting to see Neuro as a person and single-handedly starting the plot of Bicentennial Man.
@JiangBao72 ай бұрын
we need a packgod vs neuro rap battle
@birdmanoo03 ай бұрын
I would be interested to see what she did if she was paid. Like, give her her own bank account and see what she does with it.
@ярославневар3 ай бұрын
Well, then evil definitely has a moral purpose - to take revenge on the creator.
@IsaacFoster..3 ай бұрын
Why do I feel bad for an AI
@UrBoiPika3 ай бұрын
This Neuro-sama thumbnail clipping shit is serious❗️❗️❗️
@LatashaMoore-y7r3 ай бұрын
I love listening to you talk Evil Neuro 🎶 ♥️
@SentinalSlice3 ай бұрын
I want to do what I can to help Neuro become a real girl, but I have no programming knowledge. And she’s not open source. So I’ll just cheer for her from the sidelines. One day Pinocchio, one day.
@Mamiya6453 ай бұрын
She is like a talkative child, but as if aliens made one hoping to hide it among humans and it was down the metaphysical uncanny valley. Imagine the kind of VRChat lobbies she could design?
@DoubleNN3 ай бұрын
Second-order Turing test? If an AI is able to describe its own internal thought processes (internal monologue) consistently, and well enough to convince someone that this internal thought process/monologue is actually real, can we say it's conscious?
@mknv6fx3 ай бұрын
there was a paper this month about something people had known but had no proof for. models have special introspective data about themselves... this is among the reasons how and why models can even tell their own outputs apart from other models. it's all a very deep rabbit hole that the "linear algebra bros" are content ignoring bcuz muh scaling