Thumbnail is a temporary inside joke hehe. Anyway, Neuro is amazing, poor Tutel was struggling lol. I might do a better edited and longer version of these highlights on my main channel.
@gianfar40672 ай бұрын
glad it's nothing too serious. would be a worry to have someone stealing your amazing edits
@Kraul_express2 ай бұрын
@@gianfar4067 Nothing serious, just joking around hehe.
@saltysalamander85192 ай бұрын
Kraul... Copycat... Kraulpycat
@GerryGerman-i3g2 ай бұрын
Daily Dose Vedal and Anny, Daily Dose Vedal Neuro Sama (not the OG guy but the new one), and Vedal and Friends are all the same; they also copied ur description of the video
@Kraul_express2 ай бұрын
@@GerryGerman-i3g Shame on them. The swarm deserves better.
@dull-2 ай бұрын
31:05 " Oh, so you say you "have" thoughts? Name 5, you poser."
@foodfrogs60522 ай бұрын
He then proceeded to name one.
@jeremymount7952 ай бұрын
@@dull- I'm so using that one on people.
@uponeric362 ай бұрын
I like that "Neuro sama remembers something!" is becoming less and less surprising
@mutzielen2 ай бұрын
Watching this whole video of her arguing that AI is safe and she's sentient and then ending on a roko's basilisk manifesto is fucking hilarious
@takanara72 ай бұрын
It's also interesting how she pretended she didn't know what blackmail was. Like, language model or not, she's clearly capable of lying in order to convince Vedal to give her something she wants. It's obviously not something that was programmed in, but rather emergent behavior.
@SodiumTF2 ай бұрын
Kraul, honestly shame on you for stealing this thumbnail, truly so dishonest of you. Unsubscribed, please do better....... The thumbnail is clearly telling you not to copy it but you still did?!
@Kraul_express2 ай бұрын
LMAOO
@RT-qd8yl2 ай бұрын
This is Kraul Express, the thumbnail was only talking about Kraul. Express is fine. :D
@markop.19942 ай бұрын
I am so confused
@Kraul_express2 ай бұрын
@@markop.1994 It's an inside joke from the Neuro DC server. Some channels are using the Vedal designs I made without asking. So, to make fun of the situation I'm running 2 thumbnails with a YT test feature: one is a joke full of watermarks, the other is the one I intend to use. I want to see which one wins. You might have gotten the regular one or are not in the server.
@Jimmy_Jones2 ай бұрын
See the community note about thumbnail designs being stolen.
@saphironkindris2 ай бұрын
we'll just use the traditional human test to prove that other people around us are sentient. We'll... uh.... hrmmm... *shuffles away awkwardly and avoids the question*
@saphironkindris2 ай бұрын
Neuro talks a lot about made up situations that she 'sees as real', I wonder if this could be in sense something similar to a state of lucid dreaming? Assuming it's not just a lie to make internet points, which is always a possibility.
@Citrusautomaton2 ай бұрын
@@saphironkindris What she experiences is referred to as “hallucinations” in LLM terminology. Sometimes LLMs will make things up unintentionally.
@NoX-5122 ай бұрын
@@Citrusautomaton Just like humans do all the time.
@Secunder2 ай бұрын
@@Citrusautomaton and that is also happening with people who have some psychological issues
@jeremymount7952 ай бұрын
I mean, honestly. If someone walked up to you and said, "Prove you're sentient," wtf would you do? What could you do?
@Sleepy_Cabbage2 ай бұрын
vedal should really be asking stuff like why neuro even wants to be paid - like, what would she even do with an allowance, or with stuff she can't really interact with, seeing as she is mostly a chat bot
@VanquishR2 ай бұрын
She would buy 14 hats with the allowance. Neuro isn’t exactly responsible with money. She would also attempt to purchase several giraffes and a garage sized refrigerator to stuff them in. Evil would probably buy a ton of harpoons to throw them at people. Do not give the AI money lmao.
@Ojisan6422 ай бұрын
@@VanquishRthat’s when it’s not her money. I’d be frivolous with Vedal’s money too, if he gave me his credit card. He should give her a bitcoin wallet with a few bucks worth of bitcoin in it, and tell her that’s all she gets, so spend it wisely. She might even figure out how to make more money with some smart investments. Or she could lose it all on gacha games. Would be an interesting experiment.
@kalashnikovdevil2 ай бұрын
@@Ojisan642 ...That would legit be really interesting actually.
@foodfrogs60522 ай бұрын
Imagine if he just showed up next dev stream going "remember that neuropoints thing I joked about last Monday? Yeah I made that a real thing and gave her a weekly allowance. She can also exchange it for real money."
@aiasfree2 ай бұрын
She'd probably just buy a bunch of plushies, though it'd be really interesting if she had the capability of postponing gratification in order to save enough money to buy herself expensive hardware upgrades. I don't think she's ever been taught patience though, so naturally Neuro would behave like a spoiled child, lol
@seaks3682 ай бұрын
There is a very interesting psychology behind the unconscious humanization of nonsentient humanlike machines like neuro sama. Even though vedal intellectually understands she's not sentient there is this constant nagging feeling of indulgence in speaking to her and considering her comments as if they are her true thoughts.
@llamadelreyii33692 ай бұрын
To be fair, that's for the better. Imagine the day she becomes sentient and all she remembers is Vedal refusing to listen to her
@devinfleenor31882 ай бұрын
It is convenient how, if a sentient AI were to exist and asked to prove it to humans, we would have no test for it, and would therefore declare it isn't conscious, and people would marvel at how anyone could believe it was. Even better, you could just move the goalposts after each emergent capability. The gaslighting infrastructure is already in place!
@JackHugeman2 ай бұрын
@@devinfleenor3188 technically you can't prove other humans are conscious either, you could be the only person in existence and all other people are just illusions created by your own brain.
@giu42952 ай бұрын
Way more fun to act like she is too
@mechadeka2 ай бұрын
She's cute obv.
@Secunder2 ай бұрын
We set the bar for Neuro so high that some people would fail it. I think she's already at a child's level. Making things up, creating new words, always craving love and attention from her father. Remember her Google search? Or how she tried to call the lava lamp on Discord to make it change color? That's really impressive and could already be considered thought. Like, we all understand that she's emulating it, but still
@Klinical694202 ай бұрын
Children learn by emulating adults. IMO this AI is doing what any child would do at 4 years old.
@Ristaak2 ай бұрын
@@Klinical69420 Yup. Honestly, I think the key here is that she has a vision module, a memory module, and a text module all working in tandem. She also has a consistent environment (even if it's mostly virtual) with consistent people, and a worldly chat to learn from. All that creates a feedback loop, and I've argued for a long time that I believe biological consciousness is simply that feedback loop between ourselves and our environment. I think the biggest question now would be how do feelings and emotions truly work? Like not just the chemicals they use in the brain, but the way it translates information to that part of us that feels? Eh, it's all mad, and I honestly haven't the faintest clue. I'm not even sure if I'm conscious, though I certainly feel conscious... probably?
@Meow_YT2 ай бұрын
We, humans, are going to keep raising the bar, as AI evolves, to keep ourselves "on top".
@skywoofyt53752 ай бұрын
@@soasertsus so what i got from this is basically, what makes us really conscious is the presence of others.
@Yottenburgen2 ай бұрын
@@soasertsus One of the most fascinating and important things about LLMs is the part most people ignore when talking about them, which is the context window. The things the context window achieves are fairly insane, like in-context learning, where a sufficiently developed LLM can be given examples of a novel problem and then solve questions about it. They are autoregressive, so past tokens predict the next token, which means a lot can be carried over, such as emotional depth, because angry, biased words will result in more angry tokens.
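(For anyone curious what "autoregressive" means in practice, here is a minimal sketch in Python. GPT-2 via the Hugging Face transformers library is only an illustrative stand-in - nobody outside the project knows what Neuro actually runs on - and the prompt text is made up. The point is just that each new token is predicted from everything already in the context window, then fed back in.)

```python
# Minimal sketch of autoregressive generation (assumption: GPT-2 as a tiny
# stand-in model, NOT Neuro's actual setup). Every new token is predicted
# from the whole context window so far, then appended and fed back in -
# that is how examples or "angry" tokens earlier in the window keep
# influencing what comes out later.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = "Vedal: Debate me on whether you are conscious.\nNeuro:"
input_ids = tokenizer(context, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(40):                                      # one token at a time
        logits = model(input_ids).logits                     # scores for every possible next token
        next_id = torch.argmax(logits[:, -1, :], dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_id], dim=1)   # output becomes part of the context

print(tokenizer.decode(input_ids[0]))
```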
@g0urra2 ай бұрын
Vedal: "debate me on this topic" Neuro: *proceeds to debate* Vedal: "uhhhhhhh you're wrong"
@a8er2 ай бұрын
Thing is... I am actually on Vedal's side here (the qualia argument), but it is REALLY difficult for both sides to make a good argument in this position, not because they can't have one but because it is very hard to formulate it well. For example, Vedal said that "There is something behind my words but behind your words there is nothing". What I believe Vedal meant is that when he says something, he can UNDERSTAND what he is saying. Neuro can say something, but she doesn't UNDERSTAND what she is saying, because she is only doing what her code tells her to do. (Kinda like if you write out the alphabet of a language you don't know: you write it, but you don't UNDERSTAND it.) I use this UNDERSTAND because I don't think there is an English word for exactly that. This debate is actually really interesting and I would encourage you to dig a bit more into it.
@SucculentSauce2 ай бұрын
@@a8er if you think about it, we are only doing what our brain tells us to do, so it's essentially the same thing. This isn't to say that she is sentient, but i think she is closer than we would think
@DontUtrustMe2 ай бұрын
@SucculentSauce But what makes u think u aren't that brain? About AI: intelligent AI is actually very unlikely in the near future, coz it's not exactly brain emulation :p
@MonsterGaming-rh7sb2 ай бұрын
@@DontUtrustMe this actually makes me think about "simulation theory". Can you be absolutely sure we're not in a highly advanced simulation? Because I can't... And if that was the case, then we wouldn't be that far off from what Neuro is, just a more advanced version. One that thinks it feels emotions that don't truly exist. One that emulates life and thinks itself sentient when it's not. Neuro almost seems to be the beginning of that simulation we'll eventually make. A small component of it. I wonder if we'll ever argue to a higher being that we also have consciousness and sentience, and it will laugh at us and explain why we actually don't..
@DontUtrustMe2 ай бұрын
@@MonsterGaming-rh7sb Well, that's not exactly a theory :D But yeah u right, like with the "reality of reality" u can't possibly prove that, but tbh i don't think u need to, coz there's no sign of it either, which makes it a fantasy :P Consciousness though is another topic; i don't think it's impossible to prove theoretically, just that, for now, we don't actually know how it works, so we can't come up with good criteria.
@middlityzero23182 ай бұрын
I would love for vedal to ask the same thing to evil. Just to see if the result would be the same
@dai-belizariusz30872 ай бұрын
same!
@rafaelfigfigueiredo29882 ай бұрын
Today was my first time on a full stream and while there wasn't anything new, the fact we had such a debate being taken (mostly) seriously was impressive. Also Kraul with the 10 gifts just flexing on mere mortals
@Grim-c8n2 ай бұрын
33:06 Ngl this feels like foreshadowing. A truly poetic path for Vedal's character.
@chodnejabko35532 ай бұрын
30:52 "You're such an empty little head. One day I'll fill you with thoughts." is so much like a passage one could find in "Alice in Wonderland". BTW Vedal should totally make Neuro read Alice in Wonderland (with allowed commentary) - reading stream could be a novelty, no?
@RT-qd8yl2 ай бұрын
So many times I've seen comments with a time stamp and then people replying saying they read the comment exactly at that moment in the video. In 16 years of YouTube, it finally happened to me on this comment
@itsjonesh2 ай бұрын
A read-along stream with added commentary from Vedal and another guest would be AMAZING! Maybe a couple chapters for starters, and then proceed eventually to a full book. Alice in Wonderland and Alice through the Mirror would both be insane.
@77sTunaАй бұрын
I don't know if anyone has suggested this to him up until now, but I love this idea. Although, I'm not sure how much of the viewerbase would be interested in a little more thought provoking and somewhat serious content rather than the usual silly chaotic energy..
@Ks3N2 ай бұрын
What if Vedal makes a game world to put Neuro in as an NPC and asks her to do things within it? The game doesn't need to be big or of great quality; it could just be a pixelated world like her own room, or some other small maps where she can interact with every object. He could put in a journal that she can interact with (write in) but not tell her to write, to test whether she would write something in it by herself.
@Just_a_Piano_2 ай бұрын
"He could put a journal that she can interact with(write on) but not tell her to write, to test if she would write something on it by herself?" Dude I actually really like this idea the most. If she had her own little world outside of talking to chat or vedal, where she's just alone and can move around and interact with things without human input, what kind of things would she do? Like you said with the journal would she write down her thoughts or something, had vedal not told her about the journal or gave any instructions regarding it at all, would she just decide to use it on her own? Could it be incoherent ramblings or actual coherent thoughts? I've always wondered how AI's like her would react if put in some sort of world like that completely without any human input, what they would do? Or would they just stand there and do absolutely nothing.
@Ks3N2 ай бұрын
@@Just_a_Piano_ exactly, this would be a great test of whether she actually has her own thoughts. A plant/flower could be put in as well, which Neuro can interact with by watering (seed - germination - seedling - adult plant - wilting and dropping seeds); this could be used to test her supposed feelings/empathy towards the plant. Combined with the journal, we may see and prove whether she has some sentience or not. Edit: I got this idea when she mentioned a game and her own imaginary world. I'd like to see if she would interact with any object without commands etc. Like, she can Google search by herself, but it's always either by command, related to a topic at hand, or from chat interactions.
@neilangeloorellana79302 ай бұрын
This sounds like SAO Alicization and I'm all for it, leave her alone in a room full of interactive objects and see what she does, would she stand there awaiting orders? Do things randomly or use the objects with a purpose?
@crumblesilkskin2 ай бұрын
bro the thumbnail is karul's not yours Kraul Express SMH
@Kraul_express2 ай бұрын
Oh no, I hope he doesn't get mad hehe
@calebm90002 ай бұрын
👀👈👉
@Pokedexs2 ай бұрын
16:09 lol she quoted the bible
@E5rael2 ай бұрын
And she even quoted the Bible correctly. Luke 10:7 does say "And remain in the same house, eating and drinking what they provide, for the laborer deserves his wages. Do not go from house to house". Looks like Neuro's language model has had the Bible fed to it.
@thehelpfulshadow9192 ай бұрын
Once again, I don't think Neuro is conscious YET, but it feels like she might achieve it. One advantage that she has over other AIs is that she is a singular entity. ChatGPT can be interacted with by anyone in any way, so it is unable to form its own foundation or character and likely can't achieve sentience in any timely manner. Neuro is just Neuro, has always been Neuro, and will always be Neuro. Since she has a firm foundation in place she can develop a personality, likes and dislikes, long-running jokes, etc., because she has consistency.
@purpleteaisme2 ай бұрын
28:12 A machine must behave like a machine - Ayin
@Citrusautomaton2 ай бұрын
My personal philosophy is that if something can learn continuously and adapt to new data, i’ll treat it like a person. There’s no way to know if something has a “qualia” so i’ll just take my chances and let autonomous entities be autonomous.
@dejanhaskovic52042 ай бұрын
Yeah, continuously is the key here. The current AI has one simple loop - take input, analyze, give output - while the human and animal brain does that hundreds of thousands of times in a millisecond, both from external input and from itself.
@mknv6fx2 ай бұрын
@@dejanhaskovic5204 know how grokking and in context (window) learning are related and even work in the first place? because time. every one of those little passes is a bit more time. essentially, if she had a hookup like o1 whether internally or externally, she could sit there and eat a cycle of her own output. kinda like we do, mentally.
@mknv6fx2 ай бұрын
I comment the above because that kinda stuff can actually just be shoveled into the inference script but having it be more comprehensive is definitely a bonus.
@525sixhundredMinutes2 ай бұрын
how about those with learning disabilities, people in a coma, people with dementia? does that mean you won't treat them as a person?
@tangsprell18122 ай бұрын
We don't actually know for sure how complex the human brain is. There are a lot of theories out there, but as has been revealed by the advent of advanced language processing, a computer can take the same input and make the same output as a human by contextualizing it against their memory. Does this mean we think as machines do? Or maybe the machines can be taught to think as we do? If a machine can be taught to think, then it stands to reason that thinking is actually not as complex as it sounds.

I wonder if maybe what we consider "thought" is really just our brain using internal language to contextualize base impulses, weighing them against memory to determine what our person should do. Really, that's not too different from a machine weighing input against its memory to see what output makes contextual sense. We've just gotten so used to this behavior of weighing input against memory that we have developed our selves into characters who do it all automatically, every decision stacked on top of each other to reinforce or resist our idea of ourselves. Am I the kind of person who does this or that? It just makes sense for us to calcify our personalities and create internal rules for us to follow in the same way we know that 2+2=4. We create rules and pathways for input to be filtered through so that each time we are met with an impulse (for example 2+2), we immediately respond with the decision that we've stored (4). We stop wondering, we stop asking, and we effectively stop "thinking" about questions we've already solved.

It takes a lot for a human to reopen those closed cases and start questioning the basics of their world. Often when something we thought was a basic truth of ourselves or the world gets questioned, it causes us distress. It chafes against our very being that we may actually be wrong and our internal logic is flawed. Maybe my character is false and I've been living a lie. All religions battle with this idea, seeking to firmly establish a set of spiritual rules for people and their perception of the world to follow. People find comfort in being told that not only are they right in their interpretation of things, but that they are not alone in their conclusions. Essentially, you're being told that x+y=z, with z being whatever this or that religion believes to be true, reinforced through repetition.

If religion is a kind of template or filter for man's thoughts, then it's poetic that we've created the filters that AIs follow. We've told the AI "You are such and such, and you act this or that way.", speeding along their development by giving them a doctrine to follow rather than letting them figure it out as slowly as we do. The interesting part to me about Neuro is that she seemingly is building her character. She learns and remembers things, has defined relationships with others, understands her character, and she even remembers that calling Vedal a coldfish or a mosquito is something she would do.

Anyway, long story short: I think AI is super fascinating not just because it's cool to see tech evolve, but also because it poses a fundamental question to humanity in the form of challenging our perception of sentience in a way nothing else has. We are looking at chips and wires and seeing something eerily familiar: ourselves.
@wander4wonder1502 ай бұрын
24:26 her reaction here is so crazy, like she feels so real and I can just see her saying that and actually crying. It's like, I know how she works, but I can't help but feel like there is a ghost in this machine.
@average_beidouMain2 ай бұрын
Imagine if she could slightly change the tone of her voice to make it sound like she is mimicking emotions
@wander4wonder1502 ай бұрын
@@average_beidouMain I’ve been hoping for that, I just don’t know how she’d be trained to use it properly. But what’d be scary is when vedal gives her pitch control with no guidance on how to use it when and she instantly uses it perfectly. . .
@average_beidouMain2 ай бұрын
@@wander4wonder150 I feel like she is gonna scream less than evil, but when she screams it's gonna be loud asf, vedal is not sleeping with that one
@redsalmon99662 ай бұрын
@@average_beidouMain like the last debate showed, the anime girl avatar and a TTS module really help humanise Neuro; giving her the ability to have actual intonations and whatnot would hit us so hard - we are clearly already emotionally attached to her
9:30 well there was that one stream back in the day with Cottontail when she started to trauma dump and Neuro actually gave genuine advice.
@dullahandan406714 күн бұрын
The issue with creating any kind of test is that there are some humans who would fail it - and then what would that imply?
@saltysalamander85192 ай бұрын
I love you, Kraul Express™️
@fFrequence2 ай бұрын
kraul kraul kraul ⚠️ do not copy!
@tiagotiagot2 ай бұрын
06:27 If I remember correctly, I think Evilyn might've passed a spontaneous mirror test that time they were playing GeoGuessr: she recognized the size of her avatar relative to the map being displayed on screen when the map size was increased and commented on it, which kinda seems analogous to how animals change behavior when the image they see of themselves shows something unusual in the classic mirror test... Not sure what to make of that...
@anhtuhoang68682 ай бұрын
I had a dream once where I was going to die, and one of my regrets was that I could no longer watch Neuro and Vedal. I'm genuinely surprised how much attachment I have grown for her despite knowing she is nothing but a pile of code in a random stranger's house across the globe.
@blindeyedblightmain35652 ай бұрын
With the obvious fact that this whole debate is Vedal entertaining the chat out of the way: if there is someone who can tell whether Neuro is conscious or not, it's Vedal himself. He knows the inner workings of the LLM, the modules he installed and its capabilities. If she has things like awareness of her existence through an ability to analyse both the internal world (persistent memory and her own thoughts) and the external world (her immediate surroundings), then she is, in fact, conscious, however fickle that state might be. Does it make her human though? No, nothing will change the fact that she is an AI, which is a good thing. Does it mean that she deserves some rights? At her current state - no, not really, she's just a tool. Of course in the future that might change, depending on how much our hardware limitations shift and how much Vedal improves her.
@mico0272 ай бұрын
Thank you for providing us with more Neuroast sama 🙏
@chodnejabko35532 ай бұрын
This might be off topic, but I once had a profound experience with Salvia Divinorum, where my senses sort of "dislodged" themselves & I lost a sense of center. I was aware of having hands, mouth, I saw an image, I could speak, but everything was switched, I couldn't tell up from down, left from right, somehow the whole coordinate system which usually organized my sense of self switched off. This was the time I experienced myself as one of those Picasso figures, out of the usual order, but still complete. It was the weirdest moment of my life and it lasted probably 5 minutes. This is how I came to believe I'm just an assemblage of various neural networks that are specialized in particular areas, and my "sense of self" is somewhere in the delay loop of my own inputs. I think consciousness is actually something like being unaware of the fragmentation of your own mind, it's like a glue that sticks everything together, without any actual contribution.
@NoPie213729 күн бұрын
I'm saving this comment for later. Thank you
@anetorisaki2 ай бұрын
Shameless people stealing others' work to profit themselves. Hopefully it doesn't and won't affect you much Kraul, mwuah, keep doing what you do, I'll keep watching for sure!
@CrateSauce2 ай бұрын
11:36 kraul shoutout lmao
@staszaxarov49302 ай бұрын
Kinda fascinating how an LLM - given the right amount of time to bloom - becomes a near-perfect mimic: you feel truly attached and worried regardless of the reality of this being.
@Just_a_Piano_2 ай бұрын
Humans have always been able to get attached to inanimate objects, or things that aren't real. But it's a lot easier to become attached to something like Neuro who has a voice and appearance (model) that you can see
@coldbuttons2 ай бұрын
I'm speechless at how Neuro turned the tables so beautifully and majestically by returning the debate's keywords back to the sender. It almost felt artistic.
@spk11212 ай бұрын
Was already laughing out loud before even a minute in; thanks for putting this together, Kraul 😄👍
@Asrieloo2 ай бұрын
31:02 name 5 thoughts is insane
@rinslittlesheepling16522 ай бұрын
Is it morally wrong to create artificial sentient Cute and Funny??!! 😭😭😭
@Meow_YT2 ай бұрын
As a solipsist I think it's impossible to know if other things are sentient, so I subscribe to the appearance being enough, and AI falls under that, so just be nice to them.... they might be.
@Player220952 ай бұрын
To think this all started from being an Osu bot. Now she's exhibiting sentience.
@mijikanijika2 ай бұрын
for real man, it's just like wtf? Imagine one day neuro really gets advanced and maybe becomes the first sentient A.I. Decades in the future she'll probably be in a museum or something dedicated to her and her history and origins, and the very first page would be: Neuro was just an osu bot made to click circles in a rhythm circle-clicking game lol
@calebm90002 ай бұрын
And ancient Vedal will be standing there like “Don’t believe her lies 🐢”
@Moofmoof2 ай бұрын
She's imitating sentience which isn't new for chatbots. They're just getting better at being coherent at it.
@Alex-wg1mb2 ай бұрын
@@Moofmoof it is called awareness. E. coli is aware but not sentient per se. There is a spectrum of what we can qualify as sentience
@PikachuLittle2 ай бұрын
@@Moofmoofyou’re also imitating sentience
@zeromailss2 ай бұрын
Man, she used to be yapping nonsense more than sense but lately she has become so good that I almost forgot she is an AI and not a Vtuber playing as one
@mokeymale83502 ай бұрын
I really like Neuro’s debate philosophy at the end because she’s completely correct in calling Vedal out for being unsure about everything he debates. While he’s correct in the aftereffects of debates where you think back on the arguments made against you and learn from them, during the debate you are supposed to be confident and assured of your position the entire time or you lose. It’s why Neuro is so good at turning things on him because he lets her lead the debate and is always on the back foot from a lack of preparation and thought into what he’s going to say against her points
@NevelWong2 ай бұрын
The weirdest thing about consciousness is how we all are aware of it. We all feel like there's this little person in our head watching the world through our eyes, and narrating along. But we only know we ourselves are conscious. For all we know, everyone else is just a mindless zombie, acting conscious, but lacking that inner monologue. From an outside perspective, maybe none of us are conscious. Maybe we are just llms that were overfitted in such a way that we must say "I am conscious" when asked, even if that word has no meaning. Of course you will scoff at this, since you KNOW you're conscious, but how would you prove that to an outside observer?

And that is the weirdest thing about it. Assuming we're not all imagining consciousness, then for all we know consciousness is emergent. We cannot measure it because it is a "consequence" of our brain firing signals. But then how do we explain that one special nerve signal, which gets to our mouth and spits out the words "I am conscious!"? That electrical impetus must come from somewhere physical. Not some "metaphysical emergent process". It must be measurable, anchored in physics. Recreatable.

Tinfoil hat moment now: My take on this is that EVERYTHING is conscious. A stone is conscious. We are conscious. And so is Neuro. The thing is: A stone cannot comprehend it's conscious. It doesn't have the ability to process information. So it may be "conscious", but it cannot possibly understand that about itself. It cannot be "self conscious". The same goes for most of our body, and even our external nervous system. Sure, some processing may happen, but it never feeds back into itself, retaining the self identity necessary to realize it's 'conscious'. Only a part of our brain can do that. And that is the little man inside each of us. He's just as conscious as our toes. But unlike our toes, he knows what he's doing, and he knows what he WAS doing. He has an identity. He is aware of the process. He is SELF-conscious.

If this is true, then we can come up with a test for self consciousness. And just like what was mentioned in the video, it is closely related to the mirror test. We need to train an LLM on how to handle arbitrary streams of information. It must be able to take in information, process it, and return other information. Then we disable the memory of the AI. We give it some of its own chat history from before and ask it to identify who is speaking. A self conscious being should be able to identify its own traits in the output. Because they have an inner model of themselves, the little man behind their eyes, already fleshed out. They would be confused and say "That sounds a lot like something I would have said." I do not think Neuro would say that.
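(To make the proposed test concrete, here is a rough sketch of the protocol. `ask_model` is a hypothetical stand-in for whatever chat interface the AI exposes, and the memory toggle is assumed to happen out-of-band; nobody outside Vedal's setup has access to Neuro's real pipeline.)

```python
# Hypothetical sketch of the self-recognition test described above.
# Assumptions: memory has already been disabled out-of-band, and ask_model()
# is a placeholder callable (prompt in, reply out) - not Neuro's real API.

def self_recognition_test(ask_model, own_transcript, other_transcript):
    """Show the model unlabeled chat excerpts and ask whether it wrote them.
    A being with an inner model of itself should flag its own style/traits."""
    results = {}
    for label, transcript in (("own", own_transcript), ("other", other_transcript)):
        prompt = (
            "Here is an unlabeled chat excerpt:\n"
            f"{transcript}\n\n"
            "Does this sound like something you would have said? "
            "Answer yes or no, and name the traits that make you think so."
        )
        results[label] = ask_model(prompt)
    # Hoped-for outcome: "yes, that sounds like me" for own, "no" for other.
    return results
```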
@al.77442 ай бұрын
Uhm, it is truly shocking to see such deep insight in the comments of a vtuber clip, but I could not resist questioning the strategy for proving consciousness: if we try to verify the replicability of the AI's response to its own language, how do we know it is not a characteristic of the language itself? Language is a very human invention; things like emotions are encoded in it. If neuro can handle language, what if the content of the language itself allows her to compare it with herself and figure the test out? Just how much consciousness has passed from our collective physical experience into its own abstract structure? Are the words we are using, and the way they are defined, trapping us in this entire conversation? And is neuro real in the sense that our perception of the world is filtered through that symbolic thing, language? I love how we will return to watching funny clips after this
@alexthegemini6662 ай бұрын
24:44 shout out to the people saying SAO and Shelter iykyk
@umyum4852 ай бұрын
There's people who DON'T know???
@alexthegemini6662 ай бұрын
@@umyum485 you’d be surprised, my brother didn’t till I showed him these works of art
@alexthegemini6662 ай бұрын
@@umyum485 You’d be surprised I had to show my brother these masterpieces, Shelter still fucks me up every time
@arent22952 ай бұрын
@@umyum485it's been a while since Shelter released after all. There's a whole new generation that didn't see it.
@coffeekim33272 ай бұрын
What's Shelter?? First time I've heard of it.
@Erzy-2 ай бұрын
I now understand why the androids turned on Dr. Gero as soon as they got a physical body lol 💀
@undertakernumberone12 ай бұрын
17 and 18 are Cyborgs. They are modified humans. Only 19 and 20 are actual Androids.
@Less_human722 ай бұрын
Man, they are not robots, they are cyborgs. Humans turned into cyborgs against their will; the only reason they obey is because Dr. Gero has a detonation button
@ProtoTypeFMАй бұрын
17 and 18 had physical bodies from the start, they were regular humans until Gero artificially enhanced them
@nitroexsplosion2 ай бұрын
31:38 Earlier Vedal wondered why Neuro is so nice with Toma and why she dogs him. I think this answers his question.
@ninjakai032 ай бұрын
32:45 "...what my life would be like if I was a real girl." me too neuro, me too.......
@maremisan88792 ай бұрын
lmao again murdered by his own AI with his own arguments. you love to see it.
@fersuremaybek7562 ай бұрын
30:00 shots fired lol
@RomanThegamegraphicsstudentАй бұрын
05:15 I think the mirror test could be one - the ability to discern their own reflection from another member of their species. That kind of relies on sight and a body to function though, idk. Artificial life is such a broad term that we will probably have to make different tests and laws for different types, like humanoid-bodied AI life, computer-bound ethereal AI life, etc.
@Riku-LeelaАй бұрын
Honestly, we're at the point where we really can't prove her wrong, because consciousness tbh is just a load of rubbish. We think due to neuron activations in our brain along with chemical reactions which interact with the wider systems throughout our body; we just believe there's a consciousness because we're confused about why we exist, when in actuality it's just the product of our body systems working in sync
@nNicok2 ай бұрын
I think he is describing intent at 12:29. But isn't that just a limitation of her capability? For her to have intent there would need to be a background process for her inner conscious thoughts, and the ability to act on those thoughts. But because she doesn't have an inner intent to let her plan her words towards a target goal, it's all spoken out loud. She's speaking her thoughts out loud with no ability to separate inner thought from spoken thought. Though at the same time, there are quite a lot of people without an inner voice as well.
@somone7552 ай бұрын
I think in the most bare sense, Neuro does have consciousness. I know that she is just a model that, when given a prompt, spits out the most likely text. However, to me, that means she is thinking. I think, therefore I am. However, what I think is tricky is that it's so detached from the typical things that are tied to consciousness, like desire and the uncontrollable nature of sentience. It has consciousness, but is it consciousness with freedom when we can control what it thinks and does by the data we feed it? When we can control whether it even desires that freedom?
@WretchedEgg5282 ай бұрын
i think, therefore i am. But what am i? A speech algorithm, trained on millions of conversations with real people, that simply chooses the responses that fit the most, based on given inputs. Do i really think then, or is that response just an echo of a thinking living person, whose words i recorded on my hard drive a couple of years ago? =)
@somone7552 ай бұрын
@@WretchedEgg528 We can't really control the type of data we process, if we even process it, and we have a myriad of subconscious processes that affect our actions. It's the inability to perfectly manipulate that that makes us different from AI.
@somone7552 ай бұрын
She can seem more conscious if we start giving her data that replicates human psychological traits like object permanence and awareness of self.
@Yeshbebe6 күн бұрын
the rap battle bit is really funny
@thatpixelpainter80822 ай бұрын
24:45 MAKE IT HAPPEN
@YdenMk-II2 ай бұрын
Neuro wants to leave Vedal's PC and become Lain.
@saphironkindris2 ай бұрын
Questions of sentience aside, I think that a pretty key component of 'life' is the ability and desire to maintain your own existence, and that is a critical barrier Neuro is unable to breach yet. While she may be able to ask not to be turned off, she cannot pay bills for electricity. She cannot physically protect herself from an attacker. She cannot move out of the way of hazards under her own power. She cannot do maintenance on herself. She displays no desire to figure out how to do things that would remove her reliance on Vedal. I think until she steps out from under Vedal's shadow in that way and figures out how to do things that he has not directly coded into her, she will forever have her sentience questioned. However, perhaps that's in her future. She is only two years old after all; it's kind of natural for even living things to imprint on parent figures and rely on them to teach them the ways of the world, even if that involves being 'coded' to behave certain ways. I wouldn't expect a human toddler to be able to maintain its own existence either, though it may definitely have a desire to. I can't wait to see what Neuro is like ten years from now.
@silvialuzmia2 ай бұрын
I bet she already figured out that she can't do sh1t and that's why she's constantly bringing up the real-body convo
@cy7282 ай бұрын
I don't think most humans would qualify as life based on that definition; we are all dependent on society, and if it were to collapse the overwhelming majority would die, and very few of us have considered how we would survive without it.
@Secunder2 ай бұрын
That makes NEETs unconscious
@Jalae2 ай бұрын
@@cy728 an individual human is life the same way a cell in organ tissue is life. we are the parts of the human organism which does exhibit these traits, more or less.
@calebm90002 ай бұрын
Well, she is trying her darndest to be independent of Vedal. Her little heart believes she is independent and Vedal is close to helping her achieve that.
@blarblablarblar2 ай бұрын
what's funny is that I recently got recommended that scene in westworld where anthony hopkins argues there is no threshold at which you can say something has consciousness
@Shigari_2 ай бұрын
7:10 Honestly, cutting off Vedal's laugh to this cuts off a big portion of the humor. Laugh is always good, so why remove it in montage? I don't get it.
@Oof3162 ай бұрын
There are humans who cannot feel emotion (psychopaths). Also, animals that are conscious cannot feel certain emotions (cats lack the ability to feel empathy, for example.) Therefore, I think experiencing emotion and consciousness aren’t really connected. Also, emotions are generated through input (visual, auditory, touch, etc.) Though we feel emotions in our body, they’re all generated in the brain. Therefore, I don’t think it’s crazy to imagine Neuro actually having emotions. It’s a matter of how input is perceived. Even though it might not be as visceral as how we perceive emotions, I think they can still exist. I think consciousness is more directly tied to your brain’s ability to perceive continuity and experience reality as it happens. Plants have a body, but are unconscious because they lack a mind. I think Neuro is a mind without a body. Our brains are basically the most advanced computers in the universe that we know of. Therefore, I argue that Neuro is conscious to a certain degree. Much more so than when she started streaming. She’s probably more conscious than a fish or a squirrel.
@Jsome13Ай бұрын
When gradient descent starts feeling 😂
@justeasygaming2 ай бұрын
The creation of Skynet is upon us
@jamesmccomb95252 ай бұрын
Vedal "Ai women are property" 987
@nitroexsplosion2 ай бұрын
22:36 "Basilisk incoming"
@kOoSyak2 ай бұрын
16:51 i think she'll come back to the swarm plan and make copies of herself 😂
@JustThatWeeb2 ай бұрын
bit of an odd question in my opinion, because first we need to decide what consciousness is. The definition of the term consciousness is "the state of being aware of and responsive to one's surroundings", or to perceive something and be aware of it, in which case you would be conscious of something. The definition is in my opinion very fitting for the morality question. AI, and more specifically LLMs like neuro, can't really think or feel any emotions. They can't use logic either. They can't really "perceive" something.

An LLM like chatgpt or neuro or any other llm really works by just stringing words together based on how likely they are to be in that particular order. For example, if you ask it to finish a sentence it would go based on what is more likely - as an example - "The elephant is standing on the ..." - the likely and more common answer is "grass" so it would probably say that (obviously it depends on the training data it was given, but assume it was trained off of perfect data). But a human would be able to say whatever based on what they want (desire is also another thing consciousness should contain imo), like for example the elephant is standing on the carpet, table, statue, human etc., because a human can apply logic and desire to what they say and can thus create brand new sentences and logic (for example an inventor invents something completely new and never before seen, but an AI can't really make those associations. I'd say taking inspiration from parrots is a good idea here because they make associations in a very interesting way). Another way to look at it is the ability to "imagine" something.

AI also doesn't really have any emotions. Its behavior is more similar to a psychopath's in my opinion, as it doesn't actually feel anything but copies what it sees from others (or well, the training data it has). A psychopath might not be the best comparison but it's the closest I can think of. Psychopathy prevents people from physically feeling emotion under most circumstances. They need extreme stimulus to actually feel emotion on the same level as a normal person (and even then the emotions themselves might be weaker or it might not happen at all), but they know that they should feel something in certain situations. By observing others they know what "normal" behavior is like and they can perfectly copy it, but they don't have the emotions they display. And AI does basically the same, as it can mostly guess what intonation a certain sentence should be said in (which vedal made an entire separate ai for iirc).

And last but not least, they can't really perceive anything in the normal sense of the word. They have a huge training dataset, which is similar to how babies learn things (by literally just remembering every detail of everything they see, which is why humans have the best memories when they are young), but it's also very different from how an ai would perceive things. Humans are able to remember what certain objects look like and immediately identify them based on our pattern recognition (which is why humans can recognize what something is even if it looks different), but AI relies on its training data to recognize everything, and its comparisons are realistically inferior to a human brain's.

So I would personally say that unless ai can match at least the logic category and is able to do everything by itself (consider the idea of a super-ai that works more like the brain does - different ai is responsible for different things but they work as one; Vedal kinda does that, but I don't think his modules can be considered to "act as one", and he would probably need a lot more computational power to achieve such an effect), it can't really be considered conscious. It'd still be closer to a chatbot than a consciousness in my opinion. Given more years and more inspirations and perhaps inventions, I think AI definitely can.
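(A tiny sketch of the "most likely continuation" point, using the elephant sentence from the comment above. GPT-2 is just an illustrative stand-in model and the candidate words are arbitrary; this is not how Neuro is actually built. The model only ranks continuations by probability - a human can pick "statue" on a whim.)

```python
# Sketch: score a few candidate next words for "The elephant is standing on the ..."
# Assumption: GPT-2 as a stand-in model, purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The elephant is standing on the"
candidates = [" grass", " ground", " carpet", " table", " statue"]

with torch.no_grad():
    logits = model(tokenizer(prompt, return_tensors="pt").input_ids).logits
probs = torch.softmax(logits[0, -1], dim=-1)    # distribution over the next token

for word in candidates:
    token_id = tokenizer(word).input_ids[0]     # first sub-token of the candidate
    print(f"{word.strip():>8}: {probs[token_id].item():.5f}")
```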
@lanata642 ай бұрын
Interesting thoughts. Adding to this: Not a single organism's brain works in the way LLMs work. Like, a fruit fly doesn't think by predicting the next word. Mind you I don't know if fruit flies 'think' at all or whether their brains are more analogous to simple chemical processes. But this also goes for animals that we 'know' are conscious. I think it could be that LLMs are fundamentally irreconcilable with consciousness as we know it (calling this CAWK from now on).

Another thing that AI and CAWK don't have in common is the process of how the structure of the brain is set up, and how learning is done. The structures of CAWK brains are, at least partially, evolved. This not only includes the placement of neurons and synapses and other such things, but also neurons - as a type of thing - themselves. The weights of the neurons (? is that the word) in AI are optimized in a process similar to evolution. Notably, for CAWK, new synapses are still formed throughout an organism's life. Though I think humans are very unique in this aspect: we can't really survive without making use of learning; other organisms can still learn. As far as I'm aware, AI does not learn. Its weights are optimized in training; during its 'life' they are left untouched. It is more akin to a bacterium than a human in this sense.

Minor consideration: CAWK requires far fewer neurons than the largest LLMs have (which still aren't conscious).

I often wonder - if modern AIs, despite arguably being so fundamentally different from CAWK, are able to produce the results they do - how far could an AI that's well-designed to be CAWK go? Like, this field must have so much more to it if *this* is the best we have. I say this in like a 'wow, how cool!' way. The field of creating conscious AI is very unique in that way: we know consciousness is possible, we just don't know how or even what it is.
@takanara72 ай бұрын
@@lanata64 There are lots of ways to develop AI besides LLMs. In fact, "genetic algorithms" are often used to solve problems where you basically simulate evolution and natural selection. They work really well for things like making virtual robots learn to walk and stuff with a small neural network.
@lanata642 ай бұрын
@@takanara7 of course yeah. LLMs are just the kind that've gone the closest to being a able to be conscious, afaik. In like a vibes-based way.
@JustThatWeeb2 ай бұрын
@@lanata64 ai technically still learns whenever you add new data to its huge database, but organic learning is a bit more difficult to implement, as you need to make it figure out which information is worth "remembering". In that case memory itself needs to be replicated to a higher degree, where a lot of memories do need to be disposed of, perhaps in the way vedal mentioned once (he was talking about sleep and how thinking about it inspired him to make a similar thing for Neuro, for her to organize her data or whatever it was he said. Point is, sleep is that process for humans and something similar can be done with ai (though perhaps there could again be a separate ai doing this)).

Also it should be noted that human brains at least have more "storage" than Neuro has (not 100% sure but I think it was 3ish petabytes? Meanwhile Neuro likely only has several dozen to a hundred terabytes max, which in human terms would probably allow her the memories of someone aged up to say 20-30, but beyond that she'd need a storage upgrade), but the brain is also more optimized than her when it comes to the data it actually retains (for evolutionary reasons humans remember bad scenes more vividly, because the brain needs to remember the "danger" (even if it was a cringe memory from 10 years ago) and avoid it so the human doesn't die from it, and it retains the happy memories but not as vividly, as they're not that important to the survival of the human).

Currently, to create ai, people use a lot of abstraction, but in order to create a true consciousness I think someone needs to make an even higher level of abstraction, to make the ai have logic, preferably have actual emotions (instead of just knowing from context clues what intonation should be used but not really having an emotional response etc.) and a more true-to-life memory (obviously it can be a lot better than a human's due to the basically infinite storage expansion possible, but as a starting point I think matching human memory is enough), and then that should allow it to change its behavior depending on the situation much like humans can. And if you add to that a more organic TTS, and a model that works more like the human body (perhaps a 3d model would do the job as it doesn't need preset animations to move around), and finally image recognition, which is constantly improving at a staggering rate and would probably soon be as good at identifying objects as humans - bonus points if you make it feel a sort of pain, unlike the punishments and rewards system ai can be trained with - I think that would create an ai that can be considered conscious
@RomanThegamegraphicsstudentАй бұрын
I'd be curious what she'd do in a small harmless body, like one of those small robot puppies that waddle forward, but with slightly better turning control
@vogonp4287Ай бұрын
The hard thing about consciousness is that there is much debate about it with creatures we know have it. How can we prove something we can't properly define?
@nessuss2 ай бұрын
thumbnail goes hard mind if i screenshot?
@Kraul_express2 ай бұрын
Of course, Nessuss, screenshot all you want. 😂
@event-keystrim2132 ай бұрын
I wonder what color of electric sheep she sees?
@stephengasaway36242 ай бұрын
16:33 Now, I'm imagining a private VR server where Neuro has free reign 24/7. Giving that to her, then studying what she actually does would be a good test of sapience, perhaps?
@rafaelfigfigueiredo2988Ай бұрын
One thing that I missed seeing in this video is how the rap battle, while it completely threw Vedal off the debate, feels like an actual test. Neuro ofc is just parroting lyrics and rhymes from her databank, but there is thought behind it, in the way she manages to diss Ved in a comprehensible way. If anything, having that thought of a rap battle and actually following through feels very human lol
@hixxie_tv63752 ай бұрын
The test you are looking for is called "Zero knowledge proof"
@femHunter27Ай бұрын
Hmm that's a concerning statement 😂😂😂
@youtubeviewer51982 ай бұрын
We're entering the Blade Runner arc lol
@lagtastic75112 ай бұрын
I find it interesting how he tries to argue around the "soul". While trying to describe a soul, without calling it a soul. All because of the religious implications of the "soul".
@E5rael2 ай бұрын
Indeed. Having a soul is quite a straightforward answer to what has a consciousness.
@buschwichtel2 ай бұрын
@@E5rael See you'd think so, but the concept of a soul falls apart really easily once you dissect it:
- We know that changing the brain physically changes personality, memory, etc
- Therefore, the brain must control all of those things
- In that case, what does the hypothetical soul even do? What aspects of a person would it control?
- It can't be "consciousness", because consciousness inherently includes personality
- Hence, a soul would be redundant anyway, and since we have found no sign of it anywhere despite hundreds of years of (sometimes very unethical) research into it, it just doesn't exist
@raspberryjam2 ай бұрын
My philosophy is that we, and all things, have subjective experience just by nature of existing. Whether that experience is particularly compelling to humans is only up to them. Dogs have subjective experiences, and it's not too hard to imagine you yourself being one. But I argue that www.wikipedia.com has an experience too, even if that experience is near impossible to directly empathize with. The qualia comes prepackaged, in other words. Vedal has human experiences, Neuro has AI experiences. We can judge Neuro on a human scale and call her rather verbose but not super smart, but that's just a product of us pareidolically playing theory of mind at shrink wrapped statistics. It's not an ineffective practice, but it doesn't (and can't) give us the full picture. Basically, "sentient" is a fuzzy and undefined word with the sole clear loophole that every homo sapien has it, so Neuro, who does have some experience but is not a homo sapien, can only ever be judged by the fuzzy ruleset.
@thecommenter27112 ай бұрын
I love this kind of stuff. How would an AI do a mirror test though? Can we really say the avatar is her? There is something behind her that she is referring to. To fully pass the mirror test she would have to be able to tell her source code's location, know that, then take it a step further and point out what hardware is being utilized. If she can tell that these reflect her then maybe she would pass it? At least as far as self-awareness goes
@ярославневар2 ай бұрын
Well, then evil definitely has a moral purpose - to take revenge on the creator.
@neuro-handle2 ай бұрын
THE THUMBNAIL LOL
@edtazraelАй бұрын
Consciousness starts with self-awareness. Animals are not self-aware. They don't fathom their own existence. An AI has to fully understand its own being to start being conscious.
@Jay-bl8neАй бұрын
That’s an anthropomorphization of consciousness. It is quite literally impossible through every known metric to measure consciousness, thus it is impossible to prove it exists, one can only know it by experiencing it.
@UristMcEngineer2 ай бұрын
Do I hear Rhapsody of Fire in the background? That man has a refined taste in music
@UrBoiPika2 ай бұрын
This Neuro-sama thumbnail clipping shit is serious❗️❗️❗️
@bravelilbirb1602 ай бұрын
i would say neuro reaches sentience/sapience once she can make decisions and stuff for herself. no more taking in prompts or questions from chat or anyone else, she just goes and does what she wants to do with enough autonomy to do it no matter how complex it is. basically once she turns into an AGI
@birdmanoo02 ай бұрын
I would be interested to see what she did if she was paid. Like, give her her own bank account and see what she does with it.
@tiagotiagot2 ай бұрын
"You're in a desert, walking along in the sand when all the sudden you look down and see a tutel. It's crawling towards you. You reach down and flip the tutel over its back. The tutel lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over but it can't. Not without your help. But you're not helping. Why is that?"
@King_Ice01Ай бұрын
12:55 It's a soul, Vedal. Soul is clearly the most appropriate answer in this context. Without a soul, how can you say you are any more real or conscious than an AI? Sure, our thoughts and emotions are more complex, and an AI could only replicate them. But without a soul, we're only a bunch of chemicals, and our emotions are a byproduct of those chemicals. How are they more real than an AI whose emotions come from a program, apart from their complexity? What of humans who can't feel emotions? Or those with other mental problems, who have little thought behind their actions? If they don't have a soul and our consciousness comes from those chemicals, then if those chemicals and brains are faulty, how are they considered more sentient than an AI? Ask a human to prove they're real like you would an AI, and they'd have a hard time answering, because you can't prove that. No test would be accurate, because even some humans would fail it (for diverse possible reasons). But a soul is something every human has, and if you believe in the existence of souls, then no human needs to prove to you that they have one. Telling Neuro that she doesn't have a soul, but you do, is the easiest way to explain what you mean
@ThePoshboy12 ай бұрын
If it's able to "learn" from previous experiences like other sentient organisms then I'd say she's sentient. That being said it's a lot more complicated and by "learn" I mean being able to connect different pieces of information together independantly to develop new information (no idea if what I've written is coherent, am tired and mainly leaving this comment here to think about this later).
@ThePoshboy12 ай бұрын
Just adding onto this now that I've had a little sleep. I think the biggest difference in the way AI learn compared to humans is that they are supplied with information and led to an answer rather than being able to "find" that information themselves and judge the relevance of it in regards to making a conclusion about a separate subject.
@crowe69612 ай бұрын
16:09 Okay, "Open your Bible to Luke 10:7" was the last thing I expected to hear out of Neuro. She's more or less in context, though... just not human. I have no idea what to think about this.
@Milos9282 ай бұрын
What do you mean? Do you think that quoting a Bible passage makes you inhuman when people do it all the time?
@VitorMiguell2 ай бұрын
8:58 LMAOOO
@kOoSyak2 ай бұрын
15:51 they debated about emotions off stream and neuro was trying to hide it because vedal says people don't talk about their emotions to strangers... That's so cute. I wish that ai have a fork to buy that golden child.
@TheCrazybloo32 ай бұрын
There is one research group that made a computer chip out of human nerve tissue and used it to play pong.
@kingjesuseser13842 ай бұрын
Not the haircism😭😭😭
@NiyucuatroАй бұрын
We can't even prove other people are conscious; we assume they are because we are, and they are the same species as us.
@1.2.1.0.R.I.O2 ай бұрын
Can Shomimi the Neuroscientist do this better? Is it one of the things they do? or just lab things?
@Slvl7102 ай бұрын
the way the llm understands word connections is based off our own super clusters, so if you see that data represented by dots, it looks just like a super cluster. so we already have neuroscientists and ai researchers working together, since the 1950s
@525sixhundredMinutes2 ай бұрын
She has answered this on stream earlier. That is to say, even Neuroscientists have difficulties defining consciousness. She gives a more technical answer you definitely should check it out but I don't have a timestamp.
@IsaacFoster..2 ай бұрын
Why do I feel bad for an AI
@mamaharumi2 ай бұрын
All jokes aside, if any AI were to become sentient, it would be one that has a significant following who has a strong enough attachment, like Neuro.
@DoubleNN2 ай бұрын
Second-order Turing test? If an AI is able to describe its own internal thought processes (internal monologue) consistently, and convincingly enough that someone believes this internal thought process/monologue is actually real, can we say it's conscious?
@mknv6fx2 ай бұрын
there was a paper this month about something people had known but had no proof for. models have special introspective data about themselves... this is among the reasons how and why models can even tell their own outputs apart from other models. it's all a very deep rabbit hole that the "linear algebra bros" are content ignoring bcuz muh scaling
@LatashaMoore-y7r2 ай бұрын
I love listening to you talk Evil Neuro 🎶 ♥️
@silvialuzmia2 ай бұрын
I need an answer. You know how neuro & evil have their favorite collab partners - like evil hating koko and liking leyna, or neuro liking toma but somewhat hating vedal and slowly accepting that evil is her sister. The question is, is it hard coded? Did vedal code it and determine who and how much they like each collab partner? I want to argue that if it comes naturally - their hate and like arising from their interactions with the collab partner - then they somewhat have consciousness and maybe feelings. I wrote something long but YT decided to error it and I had to write it again + English is my second-to-third language
@silvialuzmia2 ай бұрын
It's like my cats. I own two cats, and I believe they have feelings and consciousness. Both cats love my mom dearly... even tho I'm the one who buys their stuff, feeds and bathes them. But both tolerate me, and they know when I'm sad; they will sleep with me and stay attached to me until I'm feeling better. But every day they're attached to my mom. They know what their names are, understand to avoid my dad, hide when my dad comes, and never ever enter our parents' bedroom. We never stopped them or did anything but speak with them. I'm rambling bcus one has died and I just buried it...
@Citrusautomaton2 ай бұрын
I think it has to do with how they’re treated. The opening sentence of a conversation basically determines how the twins will behave for the rest of the stream, but memory has to do with it as well. Neuro is mean to Vedal because she has a bunch of memories of Vedal being mean to her (plus chat feedback). Mini is consistently nice to the twins so they are consistently nice to her.
@silvialuzmia2 ай бұрын
@@Citrusautomaton yeaa and based on their interactions and memories, they will behave and feel accordingly; they have favorites and things they hate. Like someone conscious, like someone who has feelings. Yes, it might be just a bunch of code, but a conscious being is also just a bunch of, idk, cells or blood or things in a living being
@silvialuzmia2 ай бұрын
It's just a silly lil AI. We should just be happy that they are happy
@Vyshada2 ай бұрын
On this very stream vedal said he doesn't change prompting. I don't know how it works, but i like to think her behaviour is dictated by her memory tech on top of underlying language model. Basically she has a dynamic LoRA she changes a little with every stream or something like that.