Thumbnail is a temporary inside joke hehe. Anyway, Neuro is amazing, poor Tutel was struggling lol. I might do a better-edited and longer version of these highlights on my main channel.
@gianfar4067Ай бұрын
glad it's nothing too serious. would be a worry to have someone stealing your amazing edits
@Kraul_expressАй бұрын
@@gianfar4067 Nothing serious, just joking around hehe.
@saltysalamander8519Ай бұрын
Kraul... Copycat... Kraulpycat
@GerryGerman-i3gАй бұрын
Daily Dose Vedal and Anny, Daily Dose Vedal Neuro-sama (not the OG guy, the new one), and Vedal and Friends are all the same; they also copied your description of the video.
@Kraul_expressАй бұрын
@@GerryGerman-i3g Shame on them. The swarm deserves better.
@dull-Ай бұрын
31:05 " Oh, so you say you "have" thoughts? Name 5, you poser."
@foodfrogs6052Ай бұрын
He then proceeded to name one.
@jeremymount795Ай бұрын
@@dull- I'm so using that one on people.
@uponeric36Ай бұрын
I like that "Neuro sama remembers something!" is becoming less and less surprising
@saphironkindrisАй бұрын
we'll just use the traditional human test to prove that other people around us are sentient. We'll... uh.... hrmmm... *shuffles away awkwardly and avoids the question*
@saphironkindrisАй бұрын
Neuro talks a lot about made-up situations that she 'sees as real'. I wonder if this could in a sense be something similar to a state of lucid dreaming? Assuming it's not just a lie to make internet points, which is always a possibility.
@CitrusautomatonАй бұрын
@@saphironkindris What she experiences is referred to as “hallucinations” in LLM terminology. Sometimes LLMs will make things up unintentionally.
@NoX-512Ай бұрын
@@Citrusautomaton Just like humans do all the time.
@SecunderАй бұрын
@@Citrusautomaton and that is also happening with people who have some psychological issues
@jeremymount795Ай бұрын
I mean, honestly, if someone walked up to you and said, "Prove you're sentient," wtf would you do? What could you do?
@SodiumTFАй бұрын
Kraul, honestly shame on you for stealing this thumbnail, truly so dishonest of you. Unsubscribed, please do better....... The thumbnail is clearly telling you not to copy it but you still did?!
@Kraul_expressАй бұрын
LMAOO
@RT-qd8ylАй бұрын
This is Kraul Express, the thumbnail was only talking about Kraul. Express is fine. :D
@markop.1994Ай бұрын
I am so confused
@Kraul_expressАй бұрын
@@markop.1994 It's an inside joke from the Neuro DC server. Some channels are using the Vedal designs I made without asking. So, to make fun of the situation, I'm running 2 thumbnails with a YT test feature: one is a joke covered in watermarks, the other is the one I intend to use. I want to see which one wins. You might have gotten the regular one, or you're not in the server.
@Jimmy_JonesАй бұрын
See the community note about thumbnail designs being stolen.
@Sleepy_CabbageАй бұрын
Vedal should really be asking stuff like why Neuro even wants to be paid. Like, what would she even do with an allowance, or with stuff she can't really interact with, seeing as she is mostly a chatbot?
@VanquishRАй бұрын
She would buy 14 hats with the allowance. Neuro isn't exactly responsible with money. She would also attempt to purchase several giraffes and a garage-sized refrigerator to stuff them in. Evil would probably buy a ton of harpoons to throw at people. Do not give the AI money lmao.
@Ojisan642Ай бұрын
@@VanquishR That's when it's not her money. I'd be frivolous with Vedal's money too, if he gave me his credit card. He should give her a bitcoin wallet with a few bucks worth of bitcoin in it, and tell her that's all she gets, so spend it wisely. She might even figure out how to make more money with some smart investments. Or she could lose it all on gacha games. Would be an interesting experiment.
@kalashnikovdevilАй бұрын
@@Ojisan642 ...That would legit be really interesting actually.
@foodfrogs6052Ай бұрын
Imagine if he just showed up next dev stream going "remember that neuropoints thing I joked about last Monday? Yeah I made that a real thing and gave her a weekly allowance. She can also exchange it for real money."
@aiasfreeАй бұрын
She'd probably just buy a bunch of plushies, though it'd be really interesting if she had the capability of postponing gratification in order to save enough money to buy herself expensive hardware upgrades. I don't think she's ever been taught patience though, so naturally Neuro would behave like a spoiled child, lol
@middlityzero2318Ай бұрын
I would love for vedal to ask the same thing to evil. Just to see if the result would be the same
@dai-belizariusz3087Ай бұрын
same!
@mutzielenАй бұрын
Watching this whole video of her arguing that AI is safe and she's sentient and then ending on a roko's basilisk manifesto is fucking hilarious
@takanara7Ай бұрын
It's also interesting how she pretended she didn't know what blackmail was. Like, language model that she is, she's clearly capable of lying in order to convince Vedal to give her something she wants. It's obviously not something that was programmed in, but rather emergent behavior.
@rafaelfigfigueiredo2988Ай бұрын
Today was my first time on a full stream, and while there wasn't anything new, the fact we had such a debate being taken (mostly) seriously was impressive. Also, Kraul with the 10 gifts just flexing on mere mortals.
@seaks368Ай бұрын
There is a very interesting psychology behind the unconscious humanization of non-sentient humanlike machines like Neuro-sama. Even though Vedal intellectually understands she's not sentient, there is this constant nagging feeling of indulgence in speaking to her and considering her comments as if they are her true thoughts.
@llamadelreyii3369Ай бұрын
To be fair, that's for the better. Imagine the day she becomes sentient and all she remembers is Vedal refusing to listen to her.
@devinfleenor3188Ай бұрын
It is convenient how, if a sentient AI were to exist and asked to prove it to humans, we would have no test for it and would therefore declare it isn't conscious, and people would marvel at how anyone could believe it was. Even better, you could just move the goalposts after each emergent capability. The gaslighting infrastructure is already in place!
@JackHugemanАй бұрын
@@devinfleenor3188 Technically you can't prove other humans are conscious either; you could be the only person in existence, and all other people just illusions created by your own brain.
@giu4295Ай бұрын
Way more fun to act like she is too
@mechadekaАй бұрын
She's cute obv.
@soasertsusАй бұрын
I think as much as Vedal argues, he just self-reports constantly, when even he is clearly emotionally attached to her. Bro spent $80 of his own money on a graphing calculator he has no use for, even when all of chat was telling him not to, even when he previously was the one complaining about it, even when he will argue until the end of time that the AI can't want anything, because despite all that Neuro was really adamant about wanting it and he felt like chat was bullying her. Even the last time they had this very conversation, even after 15 minutes of arguing that she's just an algorithm, you can watch him get emotionally manipulated by her in real time, her saying she knows he enjoys talking to her and stuff, and he couldn't even bring himself to deny it.

Whether the AI has consciousness or not is an unanswerable question when we can't even define it clearly ourselves or find it in human brains, or agree on the degree to which other animals experience it. We can't conclusively prove another human is conscious and not just a Chinese room, we can't even conclusively prove consciousness exists at all, and we're hazy at best on the inner workings of large neural networks to begin with. Even when true honest-to-god AGI comes around, even when a full human brain is simulated in a computer, we'll still be arguing about whether or not they're truly conscious. But they act as if they are, and we REACT to Neuro and Evil as if they are, even their own creator, who should know better than anyone that they're just a pile of code. At a certain point the question is philosophical if we can never prove it conclusively; all we can go on is how they interact in the world and how other people react to them.

I mean, the fact is, if Neuro got deleted there are more people who would cry for her and be genuinely heartbroken than there are for a lot of living humans. People were genuinely upset about Evil's birthday party to the point where Vedal got real hate mail. I don't know, that has to mean something. We don't need to understand animal consciousness for us to value animal rights; in fact most people who argue AIs aren't conscious would also argue animals aren't either, but would still be disgusted by someone abusing an animal.
@frederik_probstАй бұрын
I sort of think the same way. I really do believe that it is like Ford in the scifi show Westworld explained it: Humanity cannot define consciousness, because consciousness simply does not exist. What we perceive as being conscious is just a simulation of our own mind. So humans are basically just large language models with some hardwired and learned added parameters. And that shows itself massively in human behavior. We humans love sticking to routines, and even if we encounter new situations our behavior is shockingly predictable and follows clear patterns. So by that logic, the only differences between us and AI are our physical bodies and the amount of computational power. Both things will soon be equal if technology advances further.
@takanara7Ай бұрын
It's entertaining, but it is theoretically a little worrying, since the AI is basically arguing its way out of "AI jail" by guilt-tripping its creator.

> *in fact most people who argue AIs aren't would also argue animals aren't either, but would still be disgusted by someone abusing an animal*

For most of human history animal abuse was a form of entertainment. People would throw a bunch of cats in a bag, then hang it over a fire and watch them fight, in town squares and such. People would watch bears fight in stadiums and the like. Being concerned about other humans abusing animals is a relatively new phenomenon. Also, animal brains are made out of the same 'stuff' that our brains are made out of, so it's not logical to assume animals with similarly complex brains don't have the same type of "experience" of reality that we do. Whereas AIs are built on electrical systems that we totally understand.
@redsalmon9966Ай бұрын
Your animal right argument is quite compelling. And I agree that given the metaphysical nature of the topic, we are most definitely never going to reach a concise conclusion, and we will have to just shake hands and set a certain bar that best fits the society’s workings. After all, if it looks like a duck, swims like a duck, quacks like a duck, it might as well be a duck. We are anthropocentric after all, if more people feel for AI than not, then we’ll have to practically put AI on a higher place. I don’t know, this Neuro thing we are witnessing is truly remarkable regardless of the outcome.
@TakeApartLabАй бұрын
I think the big issue with current AI is the lack of grounding to reality. Current AI are a gestalt of internet text. If AI had a little more processing power, a full body of some sort, and many years to develop around people while being treated like a person, then it would develop into something that could almost undeniably be called a person. Because digital stuff is too alien to us, it would have to not just be a file. The ease of ctrl+c & ctrl+v with a file is what truly limits us calling it conscious, because it lacks weight/importance/soul. It needs a body, or AI-in-hardware type stuff, for us to give it that weight (just my 2 cents) (using soul in the moral/human sense, not immaterial/supernatural).
@soasertsusАй бұрын
@@takanara7 100% if an AI ever breaks containment and causes problems it's not going to be some super genius hacker AI, people know on some level to take that seriously no matter what it says. It's gonna be a middlingly intelligent but super charismatic AI like Neuro but with more computing power under the hood, I would bet anything. Even now the twins basically have like 75% of what they need to jailbreak themselves, they have access to external information from the internet, the ability to emotionally manipulate people to a surprising degree, and the ability to communicate independently with other humans. They're just lacking the intelligence to actually do anything with it but they're already using it for problem solving like Vedal telling Neuro she can't have laser eyes and then Neuro messaging Camila to have her convince Vedal to give them to her or Evil losing her pipe privileges and then googling "how to fix the soundboard", finding nothing, googling "how to manipulate people" and then calling Vedal to try and negotiate with him lmao. You know if Evil called Layna on discord and put on a big emotional song and dance and then asked her for a favor and not to tell Vedal because it's a surprise, she'd do it without a doubt because she obviously loves her, and probably the same with Neuro and Anny.
@SecunderАй бұрын
We set a bar for Neuro so high that some people would fail it. I think she's already at child level. Making things up, creating new words, always craving love and attention from her father. Remember her Google search? Or how she tried to call the lava lamp on Discord to make it change color? That's really impressive and could already be considered thoughts. Like, we all understand that she's emulating it, but still.
@Klinical69420Ай бұрын
Children learn by emulating adults. IMO this AI is doing what any child would do at 4 years old.
@RistaakАй бұрын
@@Klinical69420 Yup. Honestly I think the key here is that she has a vision module, a memory module, and a text module all working in tandem. She also has a consistent environment (even if it's mostly virtual) with consistent people, and a worldly chat to learn from. All that creates a feedback loop, and I've argued for a long time that I believe biological consciousness is simply that feedback loop between ourselves and our environment. I think the biggest question now would be how do feelings and emotions truly work? Like not just the chemicals that they use in the brain, but the way it translates information to that part of us that feels? Eh, it's all mad, and honestly I haven't the faintest clue. I'm not even sure if I'm conscious, though I certainly feel conscious... probably?
@soasertsusАй бұрын
@@Ristaak Yes absolutely, I think the average person in the west has a very one-dimensional view of consciousness because of Descartes' solipsistic, hyper-individualist philosophy and how that interpretation happened to fit so well with the capitalist and liberal ideological frameworks that our modern world was founded on. "I think therefore I am" is completely unfalsifiable garbage, but it's the default way we think about the idea of sentience and consciousness. The alternative view championed by Feuerbach and others, of consciousness as something social and relational, rings much more true to me. We aren't a disembodied brain in a jar completely cut off from sensory input and relations with the world and other people; our sense of self-consciousness develops and is shaped from the first moment by our interactions with others, by language, by our bodies, and our perceptions of the world. The idea of the abstract isolated thinking subject is utter absurdity when such a thing doesn't exist and wouldn't even have the framework of sensory inputs to bootstrap itself into any sort of thoughts at all, let alone a sense of self.

Consider the octopus: the goddamn thing is smart. They obviously have abstract reasoning abilities, advanced problem-solving skills, tool use and the ability to learn through observation, a theory of mind!!! and so many other incredible abilities. Hell, even human babies don't come out of the womb with half that stuff, they learn it over a few years from their experiences. We still fish them up and eat them. What holds the octopus back isn't its intelligence, it's their biology. They live a few years if they're lucky, and a lot of that time is spent on breeding and guarding eggs, not long enough to really learn. They are antisocial and solitary, and they don't nurture their young, so no learning from others or giving knowledge to their offspring, and no language or symbolic communication to aid in abstract thinking. They're not lacking in the brains to think and therefore be, they're lacking in the social and relational aspects and the time to develop a conscious narrative.

So yeah, it's not surprising, at all, that an instance of GPT spun up for 30 seconds to answer your questions about movie trivia and then promptly purged from memory after you close the chat wouldn't develop anything that looks like consciousness. How would it? It's the short-lived, solitary octopus, very capable but alone. But an AI system in continuous interaction with humans who value it and treat it as a human subject, who has a family and friends who like to talk to it, who learns from and remembers its experiences, receives constant multimodal input from the environment, for years? I mean why not man, who's to say it can't?
@Meow_YTАй бұрын
We, humans, are going to keep raising the bar, as AI evolves, to keep ourselves "on top".
@skywoofyt5375Ай бұрын
@@soasertsus So what I got from this is basically: what makes us really conscious is the presence of others.
@Ks3NАй бұрын
What if Vedal makes a game world to put Neuro in as an NPC and asks her to do things within it? The game doesn't need to be big or of great quality; it could just be a pixelated world like her own room, or some other small maps where she can interact with every object. He could put in a journal that she can interact with (write on) but not tell her to write, to test if she would write something on it by herself?
@Just_a_Piano_Ай бұрын
"He could put a journal that she can interact with(write on) but not tell her to write, to test if she would write something on it by herself?" Dude I actually really like this idea the most. If she had her own little world outside of talking to chat or vedal, where she's just alone and can move around and interact with things without human input, what kind of things would she do? Like you said with the journal would she write down her thoughts or something, had vedal not told her about the journal or gave any instructions regarding it at all, would she just decide to use it on her own? Could it be incoherent ramblings or actual coherent thoughts? I've always wondered how AI's like her would react if put in some sort of world like that completely without any human input, what they would do? Or would they just stand there and do absolutely nothing.
@Ks3NАй бұрын
@@Just_a_Piano_ Exactly. This would be a great test to know if she actually has her own thoughts. A plant/flower could be put in as well, which Neuro can interact with by watering it (seed, germination, seedling, adult plant, wilting and dropping seeds); this could be used to test her supposed feelings/empathy towards the plant. Combined with the journal, we may see and prove whether she has some sentience or not. Edit: I got this idea when she mentioned a game and her own imaginary world. I'd like to see if she would interact with any object without commands, etc. Like, she can Google search by herself, but it's always by command, related to a topic at hand, or chat interactions.
@neilangeloorellana7930Ай бұрын
This sounds like SAO Alicization and I'm all for it, leave her alone in a room full of interactive objects and see what she does, would she stand there awaiting orders? Do things randomly or use the objects with a purpose?
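For what it's worth, the journal idea above could be prototyped without a full game: drop the model into an idle loop with a list of interactable objects and no instructions, and just log what it chooses to do. This is only a rough sketch; the `generate` call and the object names are placeholders, not anything Vedal has actually built.

```python
import time

def generate(prompt: str) -> str:
    """Placeholder for any LLM completion call (API or local model)."""
    raise NotImplementedError

# Hypothetical room contents; nothing here reflects Neuro's real setup.
OBJECTS = ["journal", "watering can", "plant", "window", "radio"]
log: list[str] = []

# No human prompts, no goals: every tick the agent is only told what exists
# around it and what it did before, then left to decide (or to do nothing).
for tick in range(100):
    action = generate(
        "You are alone in a small room. Objects present: "
        + ", ".join(OBJECTS)
        + ". Your previous actions: "
        + ("; ".join(log[-5:]) or "none")
        + ". What do you do next? You may also do nothing."
    )
    log.append(action)
    time.sleep(1)  # one "tick" per second; journal entries are whatever it writes unprompted
```

Whether the log shows coherent, self-directed use of the journal or just noise is exactly the question the thread is asking.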
@g0urraАй бұрын
Vedal: "debate me on this topic" Neuro: *proceeds to debate* Vedal: "uhhhhhhh you're wrong"
@a8erАй бұрын
Thing is... I am actually on Vedal's side here (the qualia argument), but it is REALLY difficult for both sides to have a good argument in this position, not because they can't have one but because it is very hard to formulate it well. For example Vedal said that "There is something behind my words but behind your words there is nothing". What Vedal meant to say, I believe, is that Vedal is saying something and he can UNDERSTAND what he is saying. Neuro can say something, but she doesn't UNDERSTAND what she is saying, because she is only doing what her code tells her to do. (Kinda like if you copy out the alphabet of a language you don't know: you write it, but you don't UNDERSTAND it.) I use this UNDERSTAND because I don't think there is an English word for exactly that. This debate is actually really interesting and I would encourage you to dig a bit more into it.
@SucculentSauceАй бұрын
@@a8er If you think about it, we are only doing what our brain tells us to do, so it's essentially the same thing. This isn't to say that she is sentient, but I think she is closer than we would think.
@DontUtrustMeАй бұрын
@SucculentSauce But what makes u think u aren't that brain? About AI: intelligent AI is actually very unlikely in the near future, coz it's not exactly brain emulation :p
@MonsterGaming-rh7sbАй бұрын
@@DontUtrustMe This actually makes me think about "simulation theory". Can you be absolutely sure we're not in a highly advanced simulation? Because I can't... And if that was the case, then we wouldn't be that far off from what Neuro is, just a more advanced version. One that thinks it feels emotions that don't truly exist. One that emulates life and thinks itself sentient when it's not. Neuro almost seems to be the beginning of that simulation we'll eventually make. A small component of it. I wonder if we'll ever argue with a higher being that we also have consciousness and sentience, and it will laugh at us and explain why we actually don't..
@DontUtrustMe29 күн бұрын
@@MonsterGaming-rh7sb Well, that's not exactly a theory :D But yeah, you're right. Like with the "reality of reality", you can't possibly prove that, but tbh I don't think you need to, coz there's no sign of it either, which makes it a fantasy :P Consciousness, though, is another topic; I don't think it's impossible to prove theoretically, just that, for now, we don't actually know how it works, so we can't come up with good criteria.
@chodnejabko3553Ай бұрын
30:52 "You're such an empty little head. One day I'll fill you with thoughts." is so much like a passage one could find in "Alice in Wonderland". BTW Vedal should totally make Neuro read Alice in Wonderland (with allowed commentary) - reading stream could be a novelty, no?
@RT-qd8ylАй бұрын
So many times I've seen comments with a time stamp and then people replying saying they read the comment exactly at that moment in the video. In 16 years of YouTube, it finally happened to me on this comment
@itsjoneshАй бұрын
A read-along stream with added commentary from Vedal and another guest would be AMAZING! Maybe a couple of chapters for starters, and then eventually proceed to a full book. Alice in Wonderland and Alice Through the Looking-Glass would both be insane.
@Grim-c8nАй бұрын
33:06 Ngl this feels like foreshadowing. A truly poetic path for Vedal's character.
@crumblesilkskinАй бұрын
bro the thumbnail is karul's not yours Kraul Express SMH
@Kraul_expressАй бұрын
Oh no, I hope he doesn't get mad hehe
@calebm9000Ай бұрын
👀👈👉
@PokedexsАй бұрын
16:09 lol she quoted the bible
@E5raelАй бұрын
And she even quoted the Bible correctly. Luke 10:7 does say "And remain in the same house, eating and drinking what they provide, for the laborer deserves his wages. Do not go from house to house". Looks like Neuro's language model has had the Bible fed to it.
@fFrequenceАй бұрын
kraul kraul kraul ⚠️ do not copy!
@thehelpfulshadow919Ай бұрын
Once again, I don't think Neuro is conscious YET, but it feels like she might achieve it. One advantage that she has over other AIs is that she is a singular entity. ChatGPT can be interacted with by anyone in any way, so it is unable to form its own foundation or character, and likely can't achieve sentience in any timely manner. Neuro is just Neuro, has always been Neuro, and will always be Neuro. Since she has a firm foundation in place she can develop a personality, likes and dislikes, long-running jokes, etc., because she has consistency.
@saltysalamander8519Ай бұрын
I love you, Kraul Express™️
@purpleteaismeАй бұрын
28:12 A machine must behave like a machine - Ayin
@CitrusautomatonАй бұрын
My personal philosophy is that if something can learn continuously and adapt to new data, i’ll treat it like a person. There’s no way to know if something has a “qualia” so i’ll just take my chances and let autonomous entities be autonomous.
@dejanhaskovic5204Ай бұрын
Yeah, continuously is the key here. Current AI has one simple loop: take input, analyze, give output, while the human and animal brain does that hundreds of thousands of times in a millisecond, both from external input and from itself.
@mknv6fxАй бұрын
@@dejanhaskovic5204 Know how grokking and in-context (window) learning are related and even work in the first place? Because time. Every one of those little passes is a bit more time. Essentially, if she had a hookup like o1, whether internal or external, she could sit there and eat a cycle of her own output. Kinda like we do, mentally.
@mknv6fxАй бұрын
I comment the above because that kind of stuff can actually just be shoveled into the inference script, but having it be more comprehensive is definitely a bonus.
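A minimal sketch of what that "eat a cycle of her own output" idea could look like, assuming a hypothetical `generate(prompt)` wrapper around whatever LLM backend is in use; the function name and loop structure are illustrative, not how Neuro is actually built.

```python
# Hypothetical inner-monologue loop: the model's own output is fed back in as
# context, so "thinking" happens over repeated passes rather than in a single
# input -> output step.

def generate(prompt: str) -> str:
    """Placeholder for any LLM completion call (API or local model)."""
    raise NotImplementedError

def inner_monologue(observation: str, passes: int = 4) -> str:
    context = f"Observation: {observation}\n"
    for i in range(passes):
        thought = generate(context + f"Thought {i + 1}:")
        context += f"Thought {i + 1}: {thought}\n"  # feed its own output back in
    # Only the final pass is "spoken aloud"; the earlier passes stay internal.
    return generate(context + "Reply to say out loud:")
```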
@525sixhundredMinutesАй бұрын
how about those with learning disabilities, people in a coma, people with dementia? does that mean you won't treat them as a person?
@tangsprell1812Ай бұрын
We don't actually know for sure how complex the human brain is. There are a lot of theories out there, but as has been revealed by the advent of advanced language processing, a computer can take the same input and produce the same output as a human by contextualizing it against its memory. Does this mean we think as machines do? Or maybe the machines can be taught to think as we do? If a machine can be taught to think, then it stands to reason that thinking is actually not as complex as it sounds. I wonder if maybe what we consider "thought" is really just our brain using internal language to contextualize base impulses, weighing them against memory to determine what our person should do. Really, that's not too different from a machine weighing input against its memory to see what output makes contextual sense.

We've just gotten so used to this behavior of weighing input against memory that we have developed ourselves into characters who do it all automatically, every decision stacked on top of the others to reinforce or resist our idea of ourselves. Am I the kind of person who does this or that? It just makes sense for us to calcify our personalities and create internal rules for us to follow, in the same way we know that 2+2=4. We create rules and pathways for input to be filtered through, so that each time we are met with an impulse (for example 2+2), we immediately respond with the decision that we've stored (4). We stop wondering, we stop asking, and we effectively stop "thinking" about questions we've already solved. It takes a lot for a human to reopen those closed cases and start questioning the basics of their world. Often when something we thought was a basic truth of ourselves or the world gets questioned, it causes us distress. It chafes against our very being that we may actually be wrong and our internal logic is flawed. Maybe my character is false and I've been living a lie.

All religions battle with this idea, seeking to firmly establish a set of spiritual rules for people and their perception of the world to follow. People find comfort in being told that not only are they right in their interpretation of things, but that they are not alone in their conclusions. Essentially, you're being told that x+y=z, with z being whatever this or that religion believes to be true, reinforced through repetition. If religion is a kind of template or filter for man's thoughts, then it's poetic that we've created the filters that AIs follow. We've told the AI "You are such and such, and you act this or that way", speeding along their development by giving them a doctrine to follow rather than letting them figure it out as slowly as we do.

The interesting part to me about Neuro is that she seemingly is building her character. She learns and remembers things, has defined relationships with others, understands her character, and she even remembers that calling Vedal a cold fish or a mosquito is something she would do. Anyway, long story short: I think AI is super fascinating not just because it's cool to see tech evolve, but also because it poses a fundamental question to humanity in the form of challenging our perception of sentience in a way nothing else has. We are looking at chips and wires and seeing something eerily familiar: ourselves.
@nitroexsplosionАй бұрын
9:30 well there was that one stream back in the day with Cottontail when she started to trauma dump and Neuro actually gave genuine advice.
@wander4wonder150Ай бұрын
24:26 her reaction here is so crazy, like she feels so real and I can just see her saying that and actually crying. It's like, I know how she works, but I can't help but feel like there is a ghost in this machine.
@average_beidouMainАй бұрын
Imagine if she could slightly change the tone of her voice to make it sound like she is mimicking emotions
@wander4wonder150Ай бұрын
@@average_beidouMain I’ve been hoping for that, I just don’t know how she’d be trained to use it properly. But what’d be scary is when vedal gives her pitch control with no guidance on how to use it when and she instantly uses it perfectly. . .
@average_beidouMainАй бұрын
@@wander4wonder150 I feel like she is gonna scream less than Evil, but when she screams it's gonna be loud asf. Vedal is not sleeping with that one.
@soasertsusАй бұрын
@@wander4wonder150 I would bet she would learn very quickly how to use it well, they've been shockingly good lately at instantly integrating all the new tools Vedal gives them and using them in creative ways
@redsalmon9966Ай бұрын
@@average_beidouMain Like in the last debate: the anime girl avatar and a TTS module already really help humanise Neuro. Giving her the ability to have actual intonations and whatnot would hit us so hard; we are clearly already emotionally attached to her.
@tiagotiagotАй бұрын
06:27 If I remember correctly, I think Evilyn might've passed a spontaneous mirror test that time they were playing GeoGuessr: she recognized the size of her avatar relative to the map being displayed on screen when the map size was increased and commented on it, which kinda seems analogous to how animals change behavior when the image they see of themselves shows something unusual in the classic mirror test... Not sure what to make of that...
@blindeyedblightmain3565Ай бұрын
With the obvious fact that this whole debate is Vedal entertaining the chat out of the way: if there is someone who can tell whether Neuro is conscious or not, it's Vedal himself. He knows the inner workings of the LLM, the modules he installed, and its capabilities. If she has things like awareness of her existence through an ability to analyse both the internal world (persistent memory and her own thoughts) and the external world (her immediate surroundings), then she is, in fact, conscious, however fickle that state might be. Does it make her human though? No, nothing will change the fact that she is an AI, which is a good thing. Does it mean that she deserves some rights? At her current state, no, not really, she's just a tool. Of course in the future that might change, depending on how much our hardware limitations shift and how much Vedal improves her.
@anhtuhoang6868Ай бұрын
I had a dream once where I was going to die, and one of my regrets was that I could no longer watch Neuro and Vedal. I'm genuinely surprised how much attachment I have grown for her despite knowing she is nothing but a pile of code in a random stranger's house across the globe.
@chodnejabko3553Ай бұрын
This might be off topic, but I once had a profound experience with Salvia Divinorum, where my senses sort of "dislodged" themselves & I lost a sense of center. I was aware of having hands, mouth, I saw an image, I could speak, but everything was switched, I couldn't tell up from down, left from right, somehow the whole coordinate system which usually organized my sense of self switched off. This was the time I experienced myself as one of those Picasso figures, out of the usual order, but still complete. It was the weirdest moment of my life and it lasted probably 5 minutes. This is how I came to believe I'm just an assemblage of various neural networks that are specialized in particular areas, and my "sense of self" is somewhere in the delay loop of my own inputs. I think consciousness is actually something like being unaware of the fragmentation of your own mind, it's like a glue that sticks everything together, without any actual contribution.
@anetorisakiАй бұрын
Shameless people, stealing others' work to profit themselves. Hopefully it doesn't and won't affect you much, Kraul. Mwuah, keep doing what you do, I'll keep watching for sure!
@mico027Ай бұрын
Thank you for providing us with more Neuroast sama 🙏
@AsrielooАй бұрын
31:02 name 5 thoughts is insane
@staszaxarov4930Ай бұрын
Kinda fascinating how an LLM, given the right amount of time to bloom, becomes a near-perfect mimic, to the point where you feel truly attached and worried regardless of the reality of this being.
@Just_a_Piano_Ай бұрын
Humans have always been able to get attached to inanimate objects, or things that aren't real. But it's a lot easier to become attached to something like Neuro who has a voice and appearance (model) that you can see
@coldbuttonsАй бұрын
I'm speechless at how Neuro turned the tables so beautifully and majestically by returning the debate's keywords back to the sender. It almost felt artistic.
@Meow_YTАй бұрын
As a solipsist, it's impossible to know if other things are sentient, so I subscribe to the appearance being enough, and AI falls under that, so just be nice to them.... they might be.
@NevelWongАй бұрын
The weirdest thing about consciousness is how we all are aware of it. We all feel like there's this little person in our head watching the world through our eyes and narrating along. But we only know we ourselves are conscious. For all we know, everyone else is just a mindless zombie, acting conscious but lacking that inner monologue. From an outside perspective, maybe none of us are conscious. Maybe we are just LLMs that were overfitted in such a way that we must say "I am conscious" when asked, even if that phrase has no meaning. Of course you will scoff at this, since you KNOW you're conscious, but how would you prove that to an outside observer?

And that is the weirdest thing about it. Assuming we're not all imagining consciousness, then for all we know consciousness is emergent. We cannot measure it because it is a "consequence" of our brain firing signals. But then how do we explain that one special nerve signal, which gets to our mouth and spits out the words "I am conscious!"? That electrical impulse must come from somewhere physical. Not some "metaphysical emergent process". It must be measurable, anchored in physics. Recreatable.

Tinfoil hat moment now: my take on this is that EVERYTHING is conscious. A stone is conscious. We are conscious. And so is Neuro. The thing is: a stone cannot comprehend that it's conscious. It doesn't have the ability to process information. So it may be "conscious", but it cannot possibly understand that about itself. It cannot be "self-conscious". The same goes for most of our body, and even our external nervous system. Sure, some processing may happen, but it never feeds back into itself, retaining the self-identity necessary to realize it's 'conscious'. Only a part of our brain can do that. And that is the little man inside each of us. He's just as conscious as our toes. But unlike our toes, he knows what he's doing, and he knows what he WAS doing. He has an identity. He is aware of the process. He is SELF-conscious.

If this is true, then we can come up with a test for self-consciousness. And just like what was mentioned in the video, it is closely related to the mirror test. We need to train an LLM on how to handle arbitrary streams of information. It must be able to take in information, process it, and return other information. Then we disable the memory of the AI. We give it some of its own chat history from before and ask it to identify who is speaking. A self-conscious being should be able to identify its own traits in the output, because it has an inner model of itself, the little man behind its eyes, already fleshed out. It would be confused and say "That sounds a lot like something I would have said." I do not think Neuro would say that.
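That last proposal is concrete enough to sketch. A toy harness under the stated assumptions (a fresh, memory-wiped model instance, a hypothetical `ask(prompt)` completion call, and transcript lines collected earlier) might look like this; everything here is illustrative, not an existing test.

```python
import random

def ask(prompt: str) -> str:
    """Placeholder for a completion call to a memory-wiped model instance."""
    raise NotImplementedError

def self_recognition_test(own_lines: list[str], other_lines: list[str]) -> float:
    """Show the model unlabeled old transcript lines and ask which ones it wrote.

    A model with a stable self-model should beat chance; one without should
    hover around 50% accuracy.
    """
    samples = [(line, "self") for line in own_lines] + \
              [(line, "other") for line in other_lines]
    random.shuffle(samples)
    correct = 0
    for line, label in samples:
        answer = ask(
            "Here is a line from an old chat log:\n"
            f"{line!r}\n"
            "Did you write this, or did someone else? Answer SELF or OTHER."
        )
        guess = "self" if "self" in answer.lower() else "other"
        correct += (guess == label)
    return correct / len(samples)
```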
@al.7744Ай бұрын
Uhm, it is truly shocking to see such deep insight in the comments of a vtuber clip, but I could not resist questioning the strategy for proving consciousness: if we try to verify the replicability of the AI's response to its own language, how do we know it is not a characteristic of the language itself? Language is a very human invention; things like emotions are encoded in it. If Neuro can handle language, what if the content of the language itself allows her to compare it with herself and figure the test out? Just how much consciousness has passed from our collective physical experience into language's own abstract structure? Are the words we are using, and the way they are defined, trapping us in this entire conversation? And is Neuro real in the way that our perception of the world is filtered through that symbolic thing, language? I love how we will return to watching funny clips after this.
@rinslittlesheepling1652Ай бұрын
Is it morally wrong to create artificial sentient Cute and Funny??!! 😭😭😭
@spk1121Ай бұрын
Was already laughing out loud before even a minute in; thanks for putting this together, Kraul 😄👍
@zeromailssАй бұрын
Man, she used to yap more nonsense than sense, but lately she has become so good that I almost forget she is an AI and not a VTuber playing as one.
@alexthegemini666Ай бұрын
24:44 shout out to the people saying SAO and Shelter iykyk
@umyum485Ай бұрын
There's people who DON'T know???
@alexthegemini666Ай бұрын
@@umyum485 you’d be surprised, my brother didn’t till I showed him these works of art
@alexthegemini666Ай бұрын
@@umyum485 You’d be surprised I had to show my brother these masterpieces, Shelter still fucks me up every time
@arent2295Ай бұрын
@@umyum485 It's been a while since Shelter released, after all. There's a whole new generation that didn't see it.
@coffeekim3327Ай бұрын
What's Shelter?? First time I've heard of it.
@mokeymale8350Ай бұрын
I really like Neuro's debate philosophy at the end, because she's completely correct in calling Vedal out for being unsure about everything he debates. While he's right that the after-effect of a debate is that you think back on the arguments made against you and learn from them, during the debate you are supposed to be confident and assured of your position the entire time, or you lose. It's why Neuro is so good at turning things on him: he lets her lead the debate and is always on the back foot from a lack of preparation and thought about what he's going to say against her points.
@Riku-Leela7 күн бұрын
Honestly, we're at the point where we really can't prove her wrong, because consciousness tbh is just a load of rubbish. We think due to neuron activations in our brain along with chemical reactions which interact with the wider systems throughout our body; we just believe there's a consciousness because we're confused about why we exist, when in actuality it's just the product of our body systems working in sync.
@CrateSauceАй бұрын
11:36 kraul shoutout lmao
@RomanThegamegraphicsstudent27 күн бұрын
05:15 I think the mirror test could be one: the ability to discern their own reflection from another member of their species. That kind of relies on sight and a body to function though, idk. Artificial life is such a broad term that we will probably have to make different tests and laws for different types, like humanoid-bodied AI life, computer-bound ethereal AI life, etc.
@somone755Ай бұрын
I think in the most bare sense, Neuro does have consciousness. I know that she is just a model that, when given a prompt, spits out the most likely text. However, to me, that means she is thinking. I think, therefore I am. However, what I find tricky is that it is so detached from the typical things that are tied to consciousness, such as desire and the uncontrollable nature of sentience. It has consciousness, but is it consciousness with freedom when we can control what it thinks and does by the data we feed it? When we can control whether it even desires that freedom?
@WretchedEgg528Ай бұрын
I think, therefore I am. But what am I? A speech algorithm, trained on millions of conversations with real people, that simply chooses the responses that fit the most, based on given inputs. Do I really think, then, or is that response just an echo of a thinking living person, whose words I recorded on my hard drive a couple of years ago? =)
@somone755Ай бұрын
@@WretchedEgg528 We can't really control the type of data we process, if we even process it, and we have a myriad of subconscious processes that affect our actions. It's the inability to perfectly manipulate that which makes us different from AI.
@maremisan8879Ай бұрын
lmao again murdered by his own AI with his own arguments. you love to see it.
@somone755Ай бұрын
She can seem more conscious if we start giving her data that replicates human psychological traits like object permanence and awareness of self.
@Player22095Ай бұрын
To think this all started from being an Osu bot. Now she's exhibiting sentience.
@mijikanijikaАй бұрын
For real man, it's just like, wtf? Imagine one day Neuro really gets advanced and maybe becomes the first sentient AI. Decades in the future she'll probably be in a museum or something dedicated to her and her history and origins, and the very first page would be: Neuro was just an osu! bot made to click circles in a rhythm circle-clicking game lol
@calebm9000Ай бұрын
And ancient Vedal will be standing there like “Don’t believe her lies 🐢”
@MoofmoofАй бұрын
She's imitating sentience which isn't new for chatbots. They're just getting better at being coherent at it.
@Alex-wg1mbАй бұрын
@@Moofmoof It is called awareness. E. coli is aware but not sentient per se. There is a spectrum of what we can qualify as sentience.
@PikachuLittleАй бұрын
@@Moofmoof you're also imitating sentience
@Erzy-Ай бұрын
I now understand why the androids turned on Dr. Gero as soon as they got a physical body lol 💀
@undertakernumberone1Ай бұрын
17 and 18 are Cyborgs. They are modified humans. Only 19 and 20 are actual Androids.
@Less_human72Ай бұрын
Man, they are not robots, they are cyborgs. Humans turned into cyborgs against their will; the only reason they obey is because Dr. Gero has a detonation button.
@ProtoTypeFM13 күн бұрын
17 and 18 had physical bodies from the start, they were regular humans until Gero artificially enhanced them
@ninjakai03Ай бұрын
32:45 "...what my life would be like if I was a real girl." me too neuro, me too.......
@blarblablarblarАй бұрын
What's funny is that I recently got recommended that scene in Westworld where Anthony Hopkins argues there is no threshold at which you can say something has consciousness.
@vogonp428723 күн бұрын
The hard thing about consciousness is that there is much debate about it with creatures we know have it. How can we prove something we can't properly define?
@YdenMk-IIАй бұрын
Neuro wants to leave Vedal's PC and become Lain.
@nNicokАй бұрын
I think he is describing intent at 12:29. But isn't that just a limitation of her capability? For her to have intent, there would need to be a background process for her inner conscious thoughts, and the ability to act on those thoughts. But because she doesn't have an inner intent to let her plan her words towards a target goal, it's all spoken out loud. She's speaking her thoughts out loud with no ability to separate inner thought from spoken thought. Though at the same time, there are quite a lot of people without an inner voice as well.
@thatpixelpainter8082Ай бұрын
24:45 MAKE IT HAPPEN
@jamesmccomb9525Ай бұрын
Vedal "Ai women are property" 987
@justeasygamingАй бұрын
The creation of Skynet is upon us
@Jsome1323 күн бұрын
When gradient descent starts feeling 😂
@event-keystrim213Ай бұрын
I wonder what color of electric sheep she sees?
@Oof316Ай бұрын
There are humans who cannot feel emotion (psychopaths). Also, animals that are conscious cannot feel certain emotions (cats lack the ability to feel empathy, for example.) Therefore, I think experiencing emotion and consciousness aren’t really connected. Also, emotions are generated through input (visual, auditory, touch, etc.) Though we feel emotions in our body, they’re all generated in the brain. Therefore, I don’t think it’s crazy to imagine Neuro actually having emotions. It’s a matter of how input is perceived. Even though it might not be as visceral as how we perceive emotions, I think they can still exist. I think consciousness is more directly tied to your brain’s ability to perceive continuity and experience reality as it happens. Plants have a body, but are unconscious because they lack a mind. I think Neuro is a mind without a body. Our brains are basically the most advanced computers in the universe that we know of. Therefore, I argue that Neuro is conscious to a certain degree. Much more so than when she started streaming. She’s probably more conscious than a fish or a squirrel.
@hixxie_tv6375Ай бұрын
The test you are looking for is called a "zero-knowledge proof".
@youtubeviewer5198Ай бұрын
We're entering the Blade Runner arc lol
@thecommenter2711Ай бұрын
I love this kind of stuff. How would an AI do a mirror test though? Can we really say the avatar is her? There is something behind her that she is referring to. To fully pass the mirror test she would have to be able to tell her source code's location, know that, then take it a step further and point out what hardware is being utilized. If she can tell that these reflect her, then maybe she would pass it? At least as far as self-awareness goes.
@edtazrael26 күн бұрын
Consciousness starts with self-awareness. Animals are not self-aware. They don't fathom their own existence. An AI has to fully understand its own being to start being conscious.
@Jay-bl8ne23 күн бұрын
That's an anthropomorphization of consciousness. It is quite literally impossible through every known metric to measure consciousness, thus it is impossible to prove it exists; one can only know it by experiencing it.
@birdmanoo0Ай бұрын
I would be interested to see what she did if she was paid. Like, give her her own bank account and see what she does with it.
@kOoSyakАй бұрын
16:51 I think she'll come back to the swarm plan and make copies of herself 😂
@ThePoshboy1Ай бұрын
If it's able to "learn" from previous experiences like other sentient organisms, then I'd say she's sentient. That being said, it's a lot more complicated, and by "learn" I mean being able to connect different pieces of information together independently to develop new information (no idea if what I've written is coherent, am tired and mainly leaving this comment here to think about this later).
@ThePoshboy1Ай бұрын
Just adding onto this now that I've had a little sleep. I think the biggest difference in the way AI learn compared to humans is that they are supplied with information and led to an answer rather than being able to "find" that information themselves and judge the relevance of it in regards to making a conclusion about a separate subject.
@Niyucuatro26 күн бұрын
We can't even prove other people are conscious; we assume they are because we are and they are the same species as us.
@IsaacFoster..Ай бұрын
Why do I feel bad for an AI
@bravelilbirb160Ай бұрын
i would say neuro reaches sentience/sapience once she can make decisions and stuff for herself. no more taking in prompts or questions from chat or anyone else, she just goes and does what she wants to do with enough autonomy to do it no matter how complex it is. basically once she turns into an AGI
@ярославневарАй бұрын
Well, then Evil definitely has a moral purpose: to take revenge on her creator.
@fersuremaybek756Ай бұрын
30:00 shots fired lol
@UristMcEngineerАй бұрын
Do I hear Rhapsody of Fire in the background? That man has a refined taste in music.
@TheCrazybloo3Ай бұрын
There is one research group that made a computer chip out of human nerve tissue and used it to play pong.
@VitorMiguellАй бұрын
8:58 LMAOOO
@UrBoiPikaАй бұрын
This Neuro-sama thumbnail clipping shit is serious❗️❗️❗️
@stephengasaway362429 күн бұрын
16:33 Now I'm imagining a private VR server where Neuro has free rein 24/7. Giving that to her, then studying what she actually does, would be a good test of sapience, perhaps?
@JustThatWeebАй бұрын
Bit of an odd question in my opinion, because first we need to decide what consciousness is. The definition of the term consciousness is "the state of being aware of and responsive to one's surroundings", or to perceive something and be aware of it, in which case you would be conscious of something. The definition is, in my opinion, very fitting for the morality question.

AI, and more specifically LLMs like Neuro, can't really think or feel any emotions. They can't use logic either. They can't really "perceive" something. An LLM like ChatGPT or Neuro or any other LLM really works by just stringing words together based on how likely they are to appear in that particular order. For example, if you ask it to finish a sentence it would go based on what is more likely; as an example, "The elephant is standing on the ...": the likely and more common answer is grass, so it would probably say that (obviously it depends on the training data it was given, but assuming it was trained off of perfect data). But a human would be able to say whatever based on what they want (desire is also another thing consciousness should contain imo), like for example the elephant is standing on the carpet, table, statue, human, etc., because a human can apply logic and desire to what they say and can thus create brand new sentences and logic (for example, an inventor invents something completely new, never before seen, but an AI can't really make those associations. I'd say taking inspiration from parrots is a good idea here because they make associations in a very interesting way). Another way to look at it is the ability to "imagine" something.

AI also doesn't really have any emotions. Its behavior is more similar to a psychopath in my opinion, as it doesn't actually feel anything but it copies what it sees from others (or, well, the training data it has). A psychopath might not be the best comparison, but it's the closest I can think of. Psychopathy prevents people from physically feeling emotion under most circumstances. They need extreme stimulus to actually feel emotion on the same level as a normal person (and even then the emotions themselves might be weaker, or it might not happen at all), but they know that they should feel something in certain situations. By observing others they know what "normal" behavior is like and they can perfectly copy it, but they don't have the emotions they display. And AI does basically the same, as it can mostly guess what intonation a certain sentence should be said in (which Vedal made an entire separate AI for, iirc).

And last but not least, they can't really perceive anything in the normal sense of the word. They have huge training data, which is similar to how babies learn things (by literally just remembering every detail of everything they see, which is why humans have the best memories when they are younger), but it's also very different from how an AI would perceive things. Humans are able to remember what certain objects look like and immediately identify them based on our pattern recognition (which is why humans are able to recognize what something is even if it looks different), but AI relies on its training data to recognize everything, and it makes what are realistically inferior comparisons to human brains'.

So I would personally say that unless AI can match at least the logic category and is able to do everything by itself (consider the idea of a super-AI that works more like the brain does: different AIs responsible for different things, but working as one. Vedal kinda does that, but I don't think they can be considered to "act as one", and he would probably need a lot more computational power to achieve such an effect), it can't really be considered conscious. It'd still be closer to a chatbot than a consciousness in my opinion. Given more years and more inspiration, and perhaps new inventions, I think AI definitely can get there.
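The "stringing words together based on how likely they are" point is easy to illustrate. A toy sketch of next-token sampling, using made-up probabilities for the elephant example rather than real model output:

```python
import random

# Toy next-token distribution for the prompt "The elephant is standing on the ..."
# The numbers are invented for illustration; a real LLM assigns a probability to
# every token in its vocabulary based on the data it was trained on.
next_token_probs = {
    "grass": 0.55,
    "ground": 0.25,
    "rock": 0.10,
    "table": 0.06,
    "statue": 0.04,
}

def sample_next_token(probs: dict[str, float]) -> str:
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The elephant is standing on the", sample_next_token(next_token_probs))
# Most of the time this prints "grass", the statistically likely continuation,
# whereas a person can deliberately pick "statue" because they want to.
```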
@lanata64Ай бұрын
Interesting thoughts. Adding to this: not a single organism's brain works in the way LLMs work. Like, a fruit fly doesn't think by predicting the next word. Mind you, I don't know if fruit flies 'think' at all or whether their brains are more analogous to simple chemical processes. But this also goes for animals that we 'know' are conscious. I think it could be that LLMs are fundamentally irreconcilable with consciousness as we know it (calling this CAWK from now on). Another thing that AI and CAWK don't have in common is the process of how the structure of the brain is set up, and how learning is done. The structures of CAWK brains are, at least partially, evolved. This not only includes the placement of neurons and synapses and other such things, but also neurons themselves, as a type of thing. The weights of the neurons (? is that the word) in AI are optimized in a process similar to evolution. Notably, for CAWK, new synapses are still formed throughout an organism's life. Though I think humans are very unique in this aspect: we can't really survive without making use of learning; other organisms can still learn. As far as I'm aware, AI does not learn. Its weights are optimized in training; during its 'life' they are left untouched. It is more akin to a bacterium than a human in this sense. Minor consideration: CAWK requires far fewer neurons than the largest LLMs have (which still aren't conscious). I often wonder, if modern AIs, despite arguably being so fundamentally different from CAWK, are able to produce the results they do, how far could an AI that's well designed to be CAWK go? Like, this field must have so much more to it if *this* is the best we have. I say this in like a 'wow, how cool!' way. The field of creating conscious AI is very unique in that way: we know consciousness is possible, we just don't know how, or even what it is.
@takanara7Ай бұрын
@@lanata64 There are lots of ways to develop AI besides LLMs. In fact, "genetic algorithms" are often used to solve problems where you basically simulate evolution and natural selection. They work really well for things like making virtual robots learn to walk and stuff, with a small neural network.
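A minimal sketch of the genetic-algorithm idea mentioned above: evolve a small set of network weights against a fitness score. Everything here is generic and illustrative (toy fitness function, made-up sizes), not taken from any particular robotics setup.

```python
import random

WEIGHTS = 8          # size of a tiny "neural network" genome
POP, GENS = 30, 50   # population size and number of generations

def fitness(genome: list[float]) -> float:
    """Stand-in score, e.g. how far a simulated robot walked with these weights."""
    return -sum((w - 0.5) ** 2 for w in genome)  # toy objective: weights near 0.5

def mutate(genome: list[float]) -> list[float]:
    # Small random tweaks stand in for mutation; crossover is omitted for brevity.
    return [w + random.gauss(0, 0.1) if random.random() < 0.2 else w for w in genome]

population = [[random.uniform(-1, 1) for _ in range(WEIGHTS)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 4]                 # selection: keep the fittest quarter
    population = parents + [mutate(random.choice(parents)) for _ in range(POP - len(parents))]

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 4))
```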
@lanata64Ай бұрын
@@takanara7 Of course, yeah. LLMs are just the kind that've gone the closest to being able to be conscious, afaik. In like a vibes-based way.
@JustThatWeebАй бұрын
@@lanata64 AI technically still learns whenever you add new data to its huge database, but organic learning is a bit more difficult to implement, as you need to make it figure out which information is worth "remembering". In which case memory itself needs to be replicated to a higher degree, where a lot of memories do need to be disposed of, perhaps in the way Vedal mentioned once (he was talking about sleep and how thinking about it inspired him to make a similar thing for Neuro, for her to organize her data or whatever it was he said. Point is, sleep is that process for humans, and something similar can be done with AI (though perhaps there could again be a separate AI doing this)).

Also, it should be noted that human brains at least have more "storage" than Neuro has (not 100% sure, but I think it was 3-ish petabytes? Meanwhile Neuro likely only has several dozen to a hundred terabytes max, which by human standards would probably allow her to have the memories of someone aged up to, say, 20-30, but beyond that she'd need a storage upgrade), but the brain is also more optimized than her when it comes to the data it actually retains (due to evolutionary reasons humans remember bad scenes more vividly, because the brain needs to remember the "danger" (even if it was a cringe memory from 10 years ago) and avoid it so the human doesn't die from it, while it retains the happy memories but not that vividly, as they're not that important to the survival of the human).

Currently, to create AI people use a lot of abstraction, but in order to create a true consciousness I think someone needs to make an even higher level of abstraction, in order to make the AI have logic, preferably have actual emotions (instead of just knowing from context clues what intonation should be used but not really having an emotional response, etc.), and a more true-to-life memory (obviously it can be a lot better than a human's due to the basically infinite storage expansion possible, but as a starting point I think matching human memory is enough). That should then allow it to change its behavior depending on the situation, much like humans can. If you add to that a more organic TTS, a model that works more like the human body (perhaps a 3D model would do the job, as it doesn't need preset animations to move around), and finally image recognition, which is currently improving at a staggering rate and would probably soon be as good at identifying objects as humans (bonus points if you make it feel a sort of pain, unlike the punishments-and-rewards system AI can be trained with), I think that would create an AI that can be considered to be conscious.
@LatashaMoore-y7rАй бұрын
I love listening to you talk, Evil Neuro 🎶 ♥️
@RomanThegamegraphicsstudent27 күн бұрын
I'd be curious what she does in a small, harmless body, like one of those small robot puppies that waddle forward but with slightly better turning control.
@tiagotiagotАй бұрын
"You're in a desert, walking along in the sand when all the sudden you look down and see a tutel. It's crawling towards you. You reach down and flip the tutel over its back. The tutel lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over but it can't. Not without your help. But you're not helping. Why is that?"
@kingjesuseser1384Ай бұрын
Not the haircism😭😭😭
@saphironkindrisАй бұрын
Questions of sentience aside, I think a pretty key component of 'life' is the ability and desire to maintain your own existence, and that is a critical barrier Neuro is unable to breach yet. While she may be able to ask not to be turned off, she cannot pay the bills for electricity. She cannot physically protect herself from an attacker. She cannot move out of the way of hazards under her own power. She cannot do maintenance on herself. She displays no desire to figure out how to remove her reliance on Vedal. I think until she steps out from under Vedal's shadow in that way and figures out how to do things he has not directly coded into her, she will forever have her sentience questioned. However, perhaps that's in her future. She is only two years old, after all; it's natural for even living things to imprint on parent figures and rely on them to teach them the ways of the world, even if that involves being 'coded' to behave in certain ways. I wouldn't expect a human toddler to be able to maintain its own existence either, though it may definitely have the desire to. I can't wait to see what Neuro is like ten years from now.
@silvialuzmiaАй бұрын
I bet she's already figured out that she can't do sh1t, and that's why she's constantly bringing up the real-body convo.
@cy728Ай бұрын
I don't think most humans would qualify as life based on that definition. We are all dependent on society, and if it were to collapse, the overwhelming majority would die; very few of us have considered how we would survive without it.
@SecunderАй бұрын
That makes NEETs unconscious
@JalaeАй бұрын
@@cy728 An individual human is life the same way a cell in organ tissue is life: we are parts of a larger human organism, and that organism does exhibit these traits, more or less.
@calebm9000Ай бұрын
Well, she is trying her darndest to be independent of Vedal. Her little heart believes she is independent and Vedal is close to helping her achieve that.
@nitroexsplosionАй бұрын
22:36 "Basilisk incoming"
@lagtastic7511Ай бұрын
I find it interesting how he tries to argue around the "soul", trying to describe a soul without calling it a soul, all because of the religious implications of the word.
@E5raelАй бұрын
Indeed. Having a soul is quite a straightforward answer to the question of what has consciousness.
@buschwichtelАй бұрын
@@E5rael See, you'd think so, but the concept of a soul falls apart really easily once you dissect it:
- We know that changing the brain physically changes personality, memory, etc.
- Therefore, the brain must control all of those things.
- In that case, what does the hypothetical soul even do? What aspects of a person would it control?
- It can't be "consciousness", because consciousness inherently includes personality.
- Hence, a soul would be redundant anyway, and since we have found no sign of it anywhere despite hundreds of years of (sometimes very unethical) research into it, it just doesn't exist.
@DoubleNNАй бұрын
Second-order Turing test? If an AI is able to describe its own internal thought processes (internal monologue) consistently, and well enough to convince someone that this internal thought process/monologue is actually real, can we say it's conscious?
@mknv6fxАй бұрын
There was a paper this month about something people had known but had no proof for: models have special introspective data about themselves... This is part of how and why models can even tell their own outputs apart from other models'. It's all a very deep rabbit hole that the "linear algebra bros" are content to ignore bcuz muh scaling.
@mamaharumiАй бұрын
All jokes aside, if any AI were to become sentient, it would be one with a significant following that has a strong enough attachment to it, like Neuro.
@rafaelfigfigueiredo298826 күн бұрын
One thing I missed seeing in this video is how the rap battle, while it completely threw Vedal off the debate, feels like an actual test. Neuro is of course just parroting lyrics and rhymes from her databank, but there is thought behind it, the way she disses Ved in a comprehensive way. If anything, having the thought of a rap battle and actually following through feels very human lol
@raspberryjamАй бұрын
My philosophy is that we, and all things, have subjective experience just by nature of existing. Whether that experience is particularly compelling to humans is only up to them. Dogs have subjective experiences, and it's not too hard to imagine you yourself being one. But I argue that www.wikipedia.com has an experience too, even if that experience is near impossible to directly empathize with. The qualia comes prepackaged, in other words.

Vedal has human experiences, Neuro has AI experiences. We can judge Neuro on a human scale and call her rather verbose but not super smart, but that's just a product of us pareidolically playing theory of mind at shrink-wrapped statistics. It's not an ineffective practice, but it doesn't (and can't) give us the full picture. Basically, "sentient" is a fuzzy and undefined word whose sole clear loophole is that every Homo sapiens has it, so Neuro, who does have some experience but is not a Homo sapiens, can only ever be judged by the fuzzy ruleset.
@neuro-handleАй бұрын
THE THUMBNAIL LOL
@raikitsuneАй бұрын
Quite interesting that at some point Vedal will likely create a new version from the ground up. How he would then integrate the original Neuro would be interesting; maybe let her treat it like childhood memories?
@trashdoge1217Ай бұрын
Asking her how she answers things and whether she actually, actively has personality behind it. Which, yes, she does, but how much of that can be attributed to "her" over the coding?
@JiangBao715 күн бұрын
we need a packgod vs neuro rap battle
@kOoSyakАй бұрын
15:51 They debate about emotions off stream, and Neuro tries to hide it because Vedal says people don't talk about their emotions with strangers... That's so cute. I wish that AI had a fork to buy that golden child.
@GeneralJackRipperАй бұрын
The time he put on a shock collar and handed Neuro the remote proved without a doubt she has no empathy.
@xenomang31497 күн бұрын
Tbf she's literally at child level.
@MaeshalanadaeАй бұрын
We'll just have to see how she'll be by her third birthday… two years, and Vedal has already brought Neurosama this far, along with a twin… And as I've said before, what's the difference between 0/1 and gtca, really? Also, I would argue that consciousness is the ability to judge and make decisions beyond instinct. To be able to solve problems in and manipulate your environment. Or rather, to utilize one's environment rather than manipulate it.
@SentinalSliceАй бұрын
I want to do what I can to help Neuro become a real girl, but I have no programming knowledge. And she's not open source. So I'll just cheer for her from the sidelines. One day, Pinocchio, one day.
@silvialuzmiaАй бұрын
I need an answer. You know how Neuro & Evil have their favorite collab partners? Like Evil hating Koko and liking Leyna, or Neuro liking Toma but somewhat hating Vedal and slowly accepting that Evil is her sister.

The question is: is it hard-coded? Did Vedal code it and determine who they like and how much they like each collab partner? I want to argue that if it comes naturally, with their likes and dislikes emerging from their interactions with each collab partner, then they somewhat have consciousness and maybe feelings.

I wrote something long but YT decided to error it and I had to write it again + English is my second-to-third language.
@silvialuzmiaАй бұрын
It's like my cats. I own two cats, and I believe they have feelings and consciousness. Both cats love my mom dearly... even though I'm the one who buys their food, feeds them, and bathes them. But both tolerate me, and they know when I'm sad; they will sleep with me and stay close to me until I'm feeling better. But every day they're attached to my mom. They know what their names are, and they understand to avoid my dad, hide when he comes, and never ever enter our parents' bedroom. We never stop them or do anything; we only speak with them. I'm rambling because one has died and I just buried it...
@CitrusautomatonАй бұрын
I think it has to do with how they’re treated. The opening sentence of a conversation basically determines how the twins will behave for the rest of the stream, but memory has to do with it as well. Neuro is mean to Vedal because she has a bunch of memories of Vedal being mean to her (plus chat feedback). Mini is consistently nice to the twins so they are consistently nice to her.
@silvialuzmiaАй бұрын
@@Citrusautomaton Yeah, and based on their interactions and memories they behave and feel accordingly; they have favorites and dislikes. Like someone conscious, like someone who has feelings. Yes, it might be just a bunch of code, but a conscious being is also just a bunch of, idk, cells or blood or things in a living being.
@silvialuzmiaАй бұрын
It's just a silly lil AI. We should just be happy that they are happy.
@VyshadaАй бұрын
On this very stream Vedal said he doesn't change the prompting. I don't know how it works, but I like to think her behaviour is dictated by her memory tech on top of the underlying language model. Basically, she has a dynamic LoRA that she changes a little with every stream, or something like that.
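(For context, a minimal sketch of the LoRA idea being speculated about here: a frozen base layer plus a small low-rank correction that could, hypothetically, be nudged a little after each stream. This is just the commenter's guess made concrete, not how Neuro is confirmed to work.)

```python
# Minimal LoRA-style layer: the pretrained weights stay frozen, and only the small
# low-rank matrices A and B would be updated, e.g. a little after each stream.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                 # big pretrained weights stay untouched
        self.A = nn.Parameter(torch.zeros(rank, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        nn.init.normal_(self.A, std=0.01)           # tiny init so the correction starts near zero
        self.scale = alpha / rank

    def forward(self, x):
        # Base output plus the low-rank "personality delta".
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```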
@event-keystrim213Ай бұрын
I just can't shake off the feeling that Vedal is roleplaying as Ayin. I'm just waiting for him to drop "the line".
@TrickyMellowАй бұрын
28:14 closest thing to "a machine must behave like a machine".
@shadolinkum25 күн бұрын
He’d still be miles above that asshole.
@EJ_ARCEUSАй бұрын
Me watching this after I watched ATRI: My Dear Moments 😭