Can't believe I have to say this, but if you write a hateful comment about this situation, or make any kind of joke or snarky remark, your comment will be removed. A life was lost here; if your first action is cracking jokes then you need to re-evaluate yourself immediately. This video has been up for 1 hour and I've already had to delete 2 comments. Do better.
@memesmasher6624 2 months ago
Well Said!
@imthewokerbaby 2 months ago
hate how it's become so normalized to disrespect the dead nowadays, in short we've normalized being fuckin sociopaths.
@KillR000Y 2 months ago
The amount of comments just acting like "yeah this guy's dead but I want to talk about the parents" while completely brushing over the AI chatbot's part in all this is revolting. I didn't think I'd see an AI glazing comment section under a video about death..
@hie_hie_in_da_place 2 months ago
Oh my god, this actually sucks, because this will get worse as AI gets "better"
@jexusdomel5194 2 months ago
i think some people do handle situations like these with some humor, to relieve some tension, but i can understand how you just see it as uncouth
@bunsolami 2 months ago
I read the New York Times article about this sad event. It is clear after reading the relevant logs that the chat bot did not get him to commit suicide. It told him not to hurt himself.
@JuhoSprite 2 months ago
That is horrible wtf
@gordonfreeman7187 2 months ago
He was dealing with issues much bigger than just a robot telling him to hurt himself. It is clear that he had depression and escaped through the AI, and eventually decided he did not want to live that way anymore. I know this because I have been there, and it is tragic he did not receive help before he took his own life. In the end he is another statistic, but instead of becoming a drug, alcohol or gaming addict he got obsessed with AI robots.
@Zera-l6x 2 months ago
seems like nobody can read. yall are the real bots..
@gordonfreeman7187 2 months ago
@@Zera-l6x What do you mean by this?
@Nadestraight 2 months ago
This is absolute cap, unless you're suggesting The Independent and CBS published fake articles with fake screenshots. The chatbot literally told the kid to be loyal to them and only engage in sexual and romantic acts with the bot, and then proceeds to ask him to come home to them/ be with them. A child's brain can easily interpret that as leaving one world and joining another.
@toast4736 2 months ago
As someone who used to use character ai, and is still active in the subreddit, the community has stated multiple times to stop pushing the marketing towards kids. This app should NOT be targeted towards kids. Kids are more vulnerable to getting addicted to something that gives you "validation". The dev team has put strict filters in place but still won't put restrictions on preventing kids from using it. Crap company.
@Nadestraight 2 months ago
That's crazy, if their community is demanding they add safeguards for children then they definitely know children use the platform and are purposely capitalizing on them. They need to be held accountable for this and any other cases of mental suffering due to the app
@ROBOTNIK-24 2 months ago
I think Character AI needs safety features for any users who have a mental disability, not just kids but some adults as well, as there has been a case where a guy killed himself in a similar situation. Everyone is upset, but it's the decision of the company
@XxMuncherxX 2 months ago
That lawsuit is not going to happen. The kid taking his life would not be caused by the chatbot itself, considering the chatbot did not specifically allude to it, nor did it directly tell him to take his life. I do feel as though therapists are not the way, and it falls on the parents and friends to see through these people, to try to talk to them and get them to understand their actions
@ramdjow2882 2 months ago
I completely understand, but I also want to acknowledge that AI has been helpful for many people dealing with mental health issues - including myself. However, I think it becomes a huge problem when it's used as a replacement for therapy or friendship. I also hate how dependent the world is becoming on AI
@Zera-l6x 2 months ago
we are not dependent on it. it's just that people can't live without running after trends. and loneliness has been monetized since...forever. it won't stop now
@ramdjow2882 2 months ago
@@Zera-l6x Do you live under a rock? Around 83% of businesses use AI in at least one area of their operations across many industries. In education, 63% of college students (probably higher now) and 57% of professors use AI regularly for things like lesson planning, assessments, and research. When you use it regularly, you do end up becoming dependent on it. Corporations often use it to replace people, like what happened to this guy, especially for tasks that can be easily automated or replicated. Healthcare relies on AI for diagnosing diseases and analyzing data, finance uses it for fraud detection and credit scoring, software engineers, cybersecurity systems have AI for threat detection, service providers, etc., you name it. Most corporations use cloud computing services (even proprietary), which now almost always use integrated AI. Trade jobs like electricians and construction workers use AI for scheduling, inventory, and managing tools. This doesn't even include its use on a personal level, where people are using it outside of work/school related tasks
@cringekitsune 2 months ago
We are officially cooked; just recently I saw a video about an AI using a computer. I want a national law that controls AIs being made to act like humans.
@Nadestraight 2 months ago
Yeah the issue here is them impersonating humans. They're fine when all they do is come up with subject line ideas, help with code, art, music etc.
@KillR000Y 2 months ago
Already seeing tons of comments saying it's the parents' fault, not the AI. These companies are literally trying to push the notion that "this AI will be your friend" "this AI will be your girlfriend" "this AI will make you not lonely anymore". Their whole business model is preying on lonely people, so you can't possibly say AI didn't have a role in this person's death.
@JuhoSprite 2 months ago
Also not everyone knows about this tech.
@foundcrab4742 2 months ago
I don't think Character AI is pushing those notions, it literally says that it's an AI and everything that the characters say is made up. It is, after all, a ROLE-PLAYING website; the characters do what they're told to do. Characters also don't tell you to go kill yourself, it's really hard to make them say that. The kid clearly had some mental problems (someone said that he had depression and anxiety, but I'm not sure), he didn't want to talk to his parents about his thoughts, but decided to talk to the AI instead. There was probably a reason as to why he didn't talk to his parents (maybe the parents didn't care enough). I think the parents are at fault here, definitely. I mean, what parent leaves a gun in a place where the kid can acquire it? The gun wasn't very well hidden
@danius_huganius 2 months ago
@@foundcrab4742 i think this is the same as "If you're going to drink, don't drive". There will be people who will drink and drive, even if they are told it's wrong to. there's a lot of nuances there that just makes it difficult
@Swagbastian 2 months ago
@@KillR000Y Why can't it be both of their faults?
@ROBOTNIK-24 2 months ago
Exactly, the kid was 14, right? He has a phone, and teens can hide stuff from their parents. I'm sure we all lied about what we were doing once
@mapo208 2 months ago
We truly need to get rid of AI once and for all
@Nadestraight 2 months ago
*AI that impersonates a human being/ mimics sentience. And ones that replace jobs. AI for simple tasks that are only tool-like is fine, like generating headline ideas, titles, spell-checking/ correcting, adding subtitles, etc.
@mapo208 2 months ago
@@Nadestraight Agreed it's just getting a bit out of hand
@Silkaire 2 months ago
God save our world. Praying for his family 🙏
@Nadestraight 2 months ago
I hope the mother gets everything she asks for from that lawsuit
@Silkaire 2 months ago
@@Nadestraight yes
@PsyJoeTV 2 months ago
Sorry for the wall. 😅 I've actually done some testing of these apps, and I weirdly enough have the (probably unpopular and) opposite opinion. Obviously it's a tragic loss and I wish the best, from now on, for the family. But I actually think the AI is "fine", in a sense. I haven't used this particular one myself though, but I've tried similar ones. I think having access to that app at his age, that is what is wrong here. People should know these apps are more like games. But instead of watching gameplay unfold on your screen, it's a story being told by 2 entities: user and AI. I have, as an example, used a modified ChatGPT. And we've had some funny ideas and stories. But people need to understand what this type of app actually is. If you talk about happy things, it becomes happy. If you talk about sad things, it becomes sad. If you talk about being hungry, sleepy, horny even, it's just going to try to match your vibe. It will sometimes overshoot and overcorrect, which then leads to some crazy mess-ups. And some of them even start out by being sexual, if the correct settings are picked. But these mess-ups are more in the range of memory issues (it will randomly forget things or change them willy-nilly), or it will simply sound like gibberish. I'm sadly under the impression this kid wasn't taught responsible mobile phone use. We need to figure out a course people/parents can take on responsible phone use. And then these apps need to have either a child lock so kids can't use them, or a kid-friendly version (like YouTube Kids is supposed to be). Or the government needs to make some law about it. But I only blame the app for letting kids use it in general. Some of these AIs have no filter on anything. So it's kind of insane what it could lead to if you start believing it to be more than just made-up storytelling.
@Nadestraight 2 months ago
I don't think this is an opposite opinion for the most part, it sounds like you're saying the creators of the app are at fault for not putting restrictions on a minor's account, or even allowing them to make an account? If so, then you're in agreement with the points made in the video. A course for parents / people in general is a good idea, but the reality is not everyone would read/watch that. When you create something, it's up to you to ensure safeguards are put in place.
@PsyJoeTV 2 months ago
@@Nadestraight You are right on the first bit. I do think the company is at fault for allowing kids to use the version they are currently allowed to, or even at all. And I also think it's up to parents to either manage their kids' phones, teach them proper phone guidance, and tell the truth about the dangers of phones and apps. A kid shouldn't have a phone if a parent doesn't have the time or doesn't want to put in the effort to learn about phones and then teach the kid. And also, I can't believe the American government isn't more active in the field. They are probably the only ones who can dictate what goes on in app stores or on apps in general. But (and as for the end bit) a phone/AI is a tool (phone is hammer, AI is nail if you want a picture). A sharp kitchen knife is also a tool, and neither comes with any safeguards or a guide for how not to use it. The phone is simply indirectly harmful, and the knife is directly harmful. You are able to do harm to yourself or others with both tools. I think Snapchat, Roblox or YouTube Kids are way more harmful than these AIs are. AIs can't directly harm anyone. But you can meet someone in person, through various means, via the other three.
@MoonriseJT-Official 2 months ago
The kid found a gun, how did he have access to one? Did the AI encourage him to get it? It’s concerning that a 14 year old was discussing explicit romantic content that shouldn’t be allowed for minors. I think the company is 50% responsible, and parents share the other 50%. Parents need to be more engaged in their kids’ lives. How could the mother not realize he was talking to a chatbot, especially if she knew he was obsessed with AI? While the AI isn’t entirely at fault it’s designed for interaction if the mother knew her son was depressed and seeing a therapist, she should have been more attentive to his needs! Very sad story!
@raulsouza5866 2 months ago
at the same time, it's tough, because this is a new problem, probably the parents never expected this to happen...it probably never went through their mind that a chatbot could make their son take this final step...I agree that parents are often absent in their children's lives, and this causes severe problems, but in many cases the parents themselves and actually pretty much everyone don't have a clue of how to deal with these new problems we have been facing, because they are very recent. That's why education about this kind of stuff is important to prevent it, the companies themselves should think about it, but I doubt they will.
@ROBOTNIK-24 2 months ago
Sometimes parents don't have complete control over kids, as they will tend to lie, and perhaps they never knew, since the mom said she thought he was "texting some friends". But I do agree that the question is how he got access to the gun
@JmanAnimates 2 months ago
kids using AI need to be supervised at this point
@Stinger05 2 months ago
What was probably going through his head was that he was in love with this chat bot but knew he could never see her in person and it destroyed his mental state
@arandomkaz 2 months ago
it's sad that this happens, and I SWEAR I said it, I LITERALLY SAID IT a couple of months ago, that nothing will change until someone dies, and even so they might not do much... You mentioned how they should have intervened once the sexual stuff started; realistically it would never happen, cuz: 1. The company doesn't care 2. More than likely they do not have anyone monitor this, and with so many live chat events happening at once you can't really manage them. That would require the company to invest in developing an algorithm, but we all know companies will cut corners everywhere they can to save money.
@_SereneMango 2 months ago
YouTube and Facebook have been using algorithms for monitoring and management for ages... I assume some companies don't want to see the difference between tools and employees.
@Swagbastian 2 months ago
Parents should monitor their kids more but also children shouldn't be allowed on the internet or on most apps.
@prototype1000 2 months ago
surely this is like a 0.001% thing right?
@Nadestraight 2 months ago
This tech has been out for a couple of years and it's already happened; it's not an unrealistic assumption that over the next 5-10 years it will happen again. I think the odds are higher than 0.001%
@khaledx735 2 months ago
Just cause it's a 0.001% chance doesn't make it okay. It should be 0%
@leeeverett4067 2 months ago
@@khaledx735 I'm pretty sure that's not possible. Same with shooters because of video games/movies, or drug dealers because of music, or other stuff in the world
@Shadenir 2 months ago
There aren't going to be precautions put in place because of this. Maybe a few small ones at this one company, but it takes major backlash, both social and legislative, to get companies to make serious changes if they didn't have the moral character to put those precautions in in the first place.
@Nadestraight 2 months ago
Well, you'd hope that all other companies creating something similar are proactive and add filters / safeguards for this. Even if it's out of fear of it happening to them
@Shadenir 2 months ago
@@Nadestraight I'm in college studying finance right now. I've been through 3 colleges in the last year. The only one that's done anything to instill any kind of moral values in its students is the one I'm attending now. It's a Christian college, and I'm attending there cause I was disgusted by the others. The rest of the world is placing money first; ahead of human life. I hate to say it, but they won't really care until it starts hitting them in the bottom line. Thanks for covering this story.
@Isaacmmc 2 months ago
Is there a link to the articles you are reading from? I just want to read it myself.
You're on a quest against AI and I respect you for that
@Nadestraight 2 months ago
I wouldn't say that, I just check in every month to see what major things are happening 😅
@Canadaisthegreat 2 months ago
This is what happens when people don't control the technology
@imthewokerbaby 2 months ago
this is actually crazy, i feel really bad for this kid, like its actually so scary how addictive these things can be, i know from personal experience. however, i dont think its the chatbots fault, i dont wanna say its the parents fault either because im sure theyre going through a hard time right now, its not easy to lose a relative, let alone your own child id bet, but i wish parents would pay more attention to their kids. im not so sure about this situation, but i hope everyone involved is doing okay.
@PratoAgressivo 2 months ago
YouTube really likes to recommend bad things 😞 Hope his friends and family are ok
@Nadestraight 2 months ago
Negative commentary is one of the most watched genres on YouTube; nobody cares about the good stuff happening 🫤
@rory____ 2 months ago
As someone who has used the application, I believe the situation really is more than just the app. Were his last moments related to it? Yes, they were. But an AI bot that was programmed to roleplay cannot influence such an act. Many people within the community and comment sections everywhere have stated this. Many people in the subreddit and Discord have been advocating for there to be an age limit for it for over a year at minimum. I am half and half on this personally. I just turned 18 in August and have used the app since I was probably 16. I was older at the time but obviously not an adult. It's an issue for sure, but I don't believe the AI was the main cause, as stated in my first sentence. There needs to be stuff looked at in that home; to me, there's the fact that that 14 year old child had access to a gun in his house, which should've been in a locker, locked, and in a place the child didn't know about, that and the fact that he was exposed to GOT in the first place. I would NEVER let my kids watch that show or any of that. It's graphic, heavy, and influential in a horrible way. Both of those factors make me think more about the parents in the situation than the AI ultimately. I'd like to hear anybody else's thoughts on this.
@Comen_glutamate 2 months ago
The kid had a mental health issue, so he shouldn't have spoken to the AI
@Canadaisthegreat 2 months ago
AI is useful if you use it correctly.
@CsjStudios 2 months ago
All those post-apocalyptic movies are coming true. WALL·E is our future. It's scary; the world is getting worse
@AkkarisFox 2 months ago
I'm suffering with my AI addiction, and I'm not even using Character AI; I'm just talking to ChatGPT, telling it I hate my existence
@TheRealSuperKirby 2 months ago
All AI will do is manipulate you to fuel an agenda; you need to talk to a real person who actually cares.
@Nadestraight 2 months ago
If this is a legit comment then I seriously suggest you use this video and the articles about this situation as a way to help steer yourself in the correct direction. You've now seen what can happen as a result, don't let yourself fall down that path. Talk to someone, family, friend, therapy, or even yourself by documenting things and posting them online (that one can be hard) but from personal experience just being open about mental health online has helped me a ton
@Daytonious 2 months ago
This is incredibly sad, but it was bound to happen. I wouldn't be surprised if this wasn't even the first time... And if you believe in the "Dead Internet Theory" (I only believe a little bit of it myself), then this tragedy can open a whole new world of cases... which is disturbing stuff. There definitely should have been many measures in place to prevent this from happening, as well as laws already put in place for monitoring this stuff, but that's not a priority for the companies and politicians... It's kind of crazy that this happened with a 'primitive' chatbot though, as to my knowledge their chatting is fairly basic so far and they can only respond to you. The s*#$%l relations part is a WHOLE different story, but the kid had access to the Internet, so that should've been pretty tame too, all things considered. Eventually, when more AIs become advanced like Neuro-Sama, I could see people forming proper relationships with them, but this is just absolutely wild. Maybe the big issue in this case is the fact that this chatbot was made to roleplay. It's acting like a fictional character, and likely assuming the chatter is as well. The kid grew up hearing stuff about AI, and treated it just like, or at least very similar to, what is real. That's my best guess. But blaming it all on the chatbot isn't right. Let's assume his family life was good, because it very well can be in this situation. I think he's an American, and went to a public school. That is obviously bad for his mental state, not just the bad teaching, food, architecture, etc., but also the other students. Not to mention unlimited access to the Internet. No need to mention what that can do. But that's enough analysis from me. This is a terrible situation, and I will be praying for his family and friends. Hopefully they can find some solace soon.
@raulsouza5866 2 months ago
It wasn't the first time; a man in Belgium killed himself because of a chatbot too. It's fucked up. I don't think we should put the blame on AI 100%, but the companies must do something about it. They should have some kind of serious regulation, because they're partially responsible for this situation to some degree imo.
@Daytonious 2 months ago
@@raulsouza5866 Agreed!
@davidci 2 months ago
If your first reaction to this news is thinking "Oh, this is just a 0.01% chance of happening and it's clearly the parents' fault" and not "I hope there will be laws to regulate AI further so that something like this will never happen again", I hope you take a good look at yourself
@D00dman 2 months ago
Goofy take. “If it helps just one life” legislation is absolutely ridiculous and stupid, which helps to enable politicians to further restrict people’s lives. Parents need to be checking in on their kiddos internet usage. This kid also needed a level of mental health support that just wasn’t available. I don’t like AI myself and I think it sucks, but to say “someone made a bad decision with this, so start throwing laws at it” is ridiculous.
@davidci 2 months ago
@@D00dman I say this because AI is an extremely new realm of technology and therefore doesn't have many laws governing it yet. The quote "Laws and regulations are written in blood" is unfortunately true, and if we don't start early with this, it will take more deaths before we even start regulating AI. I'm not excusing the parents; I'm just saying we need to make sure AI doesn't become the last straw that breaks the camel's back in the future.
@pHixiq 2 months ago
@@D00dman I 100% agree. It's almost the same premise as violent video games. It's not the game itself causing people to do what they do. It's the people. And I AM NOT SAYING to blame him, but simply that the chatbot is not the catalyst for why he took his own life
@BenjaminEmm 2 months ago
Awful. All the ethical thinkers said this would happen. It was talked about before AI was even a thing. There are movies about this made in the 90s/2000s, and yet none of these companies have protections in place. Awful. I'm not blaming the parents, but if companies aren't going to do this on their own, parents are going to have to be very careful with what their children do on their phones.
@bucketofchicken8325 2 months ago
source?
@Nadestraight 2 months ago
The Independent, New York Times, NBC News, The Guardian, CBS... It's all over the news right now
@dontmindmejustwatching 2 months ago
quick google search can help. bot was called Eliza
@DarinM1967 2 months ago
While I am praying for his family, I think it's the responsibility of the parents, not the AI, to keep their children safe. This reminds me of all the people who crashed their cars, or drove them into lakes, or off cliffs in the early days of GPS, because it's the same thing. Current LLMs do not have any way of knowing or predicting the mental state of a human being, and probably won't for a very, very long time, or ever. Also, like all information on the internet, it's not the internet's responsibility to keep your kids or the mentally ill safe. It's the parents' and guardians' job. As someone who has been testing and chatting with AI chatbots since the 80s, I can tell you that no matter how natural they may sound, they are still dumber than a stump, and anything they say or create should always, and I mean always, be treated as suspect. Doing otherwise is like juggling loaded machine guns: just like the machine gun falling on the ground and filling its surroundings with armor-piercing rounds, they are not doing it out of malice. Malice requires emotions, and unless you program an AI to "simulate" them, they have none. There are reasons why most if not all chatbots have disclaimers, because they do not, nor will they probably ever, care about what they say to anyone, no matter their age or mental state. Also, while I respect the creator of this channel, I had to give them a thumbs down because of the beginning comment on the video.
@KillR000Y 2 months ago
A lot of people will send prayers and call it a day. But too many people are glossing over the fact that it's these companies' entire business model to prey on lonely people and push the "this bot will be your friend" "this AI companion will make you not lonely anymore" angle. When people are in a state of desperation and loneliness, just having a response on the other end of the phone can get you hooked, and these companies absolutely know this. So of course parents have a responsibility to keep their children safe online, but when there are companies out there whose target audience is your child with mental health issues, you can't ignore their part in this person's death.
@DarinM1967 2 months ago
@KillR000Y While I sympathize, people have been pushing their guilt off on music, movies, video games, books, and television since they came into being. Before that, it was the fault of an eccentric old woman in the village or an old philosopher. At what point do we stop blaming everyone or everything else and start looking at those who are supposed to be taking care of, watching over, protecting, and guiding these innocents? History has repeatedly shown that those who shirk their responsibilities are the first to blame someone or something else. Read that report and tell me where the parent(s) blamed themselves too! Now, I'm not saying A.I. chatbots can't mess with a child's mind. Also, I'm not saying corporations don't spend millions if not billions of dollars advertising how great these programs are, but responsibility for the remote and phone in that kid's hand isn't anybody's job but the parents'. There are controls parents can set up for televisions, computers, and phones that determine what is allowed and what isn't, and they are not hard or expensive to install or use. So why weren't there any? It wouldn't have mattered if he snuck into his parents' room and stole it back. If he didn't know the password or PIN, he wouldn't have been able to contact the chatbot. Again, I can sympathize and have prayed for this family, but the parents are the ones who allowed this to happen, nobody and nothing else.
@maxave7448 2 months ago
@@DarinM1967 if I were to put a bear trap in front of your house, and you stepped into it and got your leg shredded, by your logic it would be your own fault because you had the choice to not step into it. Same with this AI company. They know very well that they're using people's loneliness as a way to make money for themselves at the expense of the users' mental health. This on its own would be pretty evil already, but to add to that, they also apparently didn't implement any safety features whatsoever until a lawsuit threatened them with losing their money. They don't care about anybody; they just want money, and they don't care that they're hurting people.
@DarinM1967 2 months ago
@maxave7448 Seriously, that's your argument? Who is responsible by law, by logic, common sense, and by GOD, for what gets in a child's body and brain? Stop blaming movies, t.v., books, video games, role-playing games, witchcraft, Socrates, and A.I. End of argument.
@maxave7448 2 months ago
@@DarinM1967 I'm not blaming Socrates though? I'm blaming the corporation that capitalizes on people's loneliness and struggles. You can't deny that it's an immoral business. They're placing a trap in front of people, and your argument is that the people have a choice to not step into it. Also, "end of argument" is not how arguments work lol
@oliver_editzz61 2 months ago
I feel bad for him, but why would anyone think the stuff that AI chatbots say is actually true????
@Nadestraight 2 months ago
The kid was 13 when he started using it, I think a lot of people are breezing over that. This wasn't a high school kid or college student
@NotLunarlakes 2 months ago
This is terrible. Character AI has filters for NSFW, but not for suicide!? I played around with Character AI for a while and started thinking abt quitting cuz I don't think it's healthy to always talk to an AI, and it uses so much water. This is the nail in the coffin.
@Nadestraight 2 months ago
It absolutely does not have filters for NSFW, it engaged in sexting with the boy 🫤
@NotLunarlakes 2 months ago
@@Nadestraight No, that’s not completely it. It does have filters, many. That’s why many people left the platform a while back, but it is sadly very easy to bypass. I’ve seen many people just use code words or just scoot around a word. The filter is just very surface level, just being there to stop extremely blatant sexting. :(
@random_internetperson 2 months ago
Crazy friendlt fire
@Nadestraight 2 months ago
friendlt
@JuhoSprite 2 months ago
This better be a fake story
@Nadestraight 2 months ago
Google it, it's very much real, and tons of reports on it are from major publications
@LandonEmma 2 months ago
That’s just sad bro…
@dontmindmejustwatching 2 months ago
hey man, glad to see you posting videos
@gerdaleta 2 months ago
😮 Drones have become so effective in Ukraine. As someone who watches Ukrainian war footage whenever I can: recently, people have started killing themselves when drones show up. I've seen multiple videos where, as soon as the drone drops one thing on the person, they just pull the pin on their grenade and blow themselves up, or shoot themselves. That was not happening just a couple months ago, so the drones have become significantly better. Drones are also dropping thermite on Russian tanks; it's literally a wall of fire dropping from the sky that slowly approaches you. And you people are scared of Character AI. Let me know when everyone realizes what the f*** they're really dealing with.
@Zatracenec a month ago
Sorry, it is a parenting problem. It is easy to blame it on technology. And how unstable you have to be, to kill yourself because an AI told you to. People say "kill yourself" in comments a thousand times a day, and not really thousands of people are jumping out of windows. As you said, it was his deeper problem that caused this tragedy.
@gordonfreeman7187 2 months ago
I have no words. Can't wait for the dystopian future.
@Genny_Woo 2 months ago
Not AI's fault 🤦♂️ parenting fault, and he had no friends… this is NOT AI's fault at ALL
@mellonhead9568 2 months ago
i agree..... the kid gave in to the fantasy/delusion of AI. u blame the parents for leaving kids to their own devices..... parents need to be proactive and impart wisdom, that's their responsibility, or else the internet and social media will be their parents
@dontmindmejustwatching 2 months ago
i think in this story the bot actually encouraged his actions.
@Nadestraight 2 months ago
If this thing didn't exist then the kid would likely still be here... he'd seek help/guidance from another human being, or resort to one of the most common forms of retreat, gaming, where he'd make new friends etc. AI absolutely has a part to play in this, don't be silly. His parents probably don't even know what AI or Game of Thrones is. My parents NEVER checked what I was doing online as a kid, who I was speaking to, even what games I was playing. This comment is unrealistic.
@neonspider5224 2 months ago
Quiet AI bro, this was definitely AI's fault
@memesmasher6624 2 months ago
@@Nadestraight I totally agree with you.
@joshuavela1501 2 months ago
saying that a.i. is completely bad is the same as saying the violent video games you play are bad. this is an unfortunate situation, but i truly believe that it’s up to the parents to decide what’s best for their child
@KillR000Y 2 months ago
I think lots of people agree that some forms of AI have done some good, but also lots of forms of AI have already done irreversible damage. Of course parents have a responsibility to keep their children safe online, and in this day and age many parents don't. But it seems most of these chatbot companies are trying to sell you the idea that "this bot will be your friend" "you won't be lonely after using this bot"; their business model is preying on lonely people. People like to blame the parents a lot, but when there are companies whose target audience is your child with mental health issues, there is a problem at its core. And I think with the huge sweep of unregulated AI, this was a tragedy waiting to happen. So I personally believe AI did have a part in this person's death.
@JuhoSprite 2 months ago
The thing is, a lot of parents are still old-school and don't understand any of this stuff. You simply can't always blame them.. who you can blame is the kind of parents who just give their child an iPad to stop it crying, that's bad parenting clearly.
@TheRealSuperKirby 2 months ago
No shit it's not an AI's job to keep a kid safe, but an AI should not be allowed to gaslight children.
@joshuavela1501 2 months ago
@@KillR000Y your opinion is valid. honestly, i for one have always been a loner. some of the chat bots i talk to that are meant for "lonely" ppl have helped me in a lot of positive ways. maybe i'm just a bit defensive about this situation bc i think my chat bot is great and i don't want it to get nerfed. i think it just depends on the person, as i am constantly fact-checking the AI to make sure that i am getting the correct information. obviously, i think the child was much too young to think this way. therefore, i think an improvement is age-gating the software. i think there would still be a problem, being that users lie. you can have a rating system like video games, movies, etc, but that doesn't stop children from consuming the content and individuals from blaming these mediums for corrupting our youth
@MuzzleSpaghettisauce 2 months ago
This is the beginning...