It sounds like they trained Bing on the general population of Twitter.
@Matkatamiba Жыл бұрын
Tbh sorta? maybe? Not trained on, but it's seemingly reading the way people argue online and emulating it.
@dunmermage Жыл бұрын
It's basically a fancy, flashier CleverBot that can form its own sentences based on stuff from the internet instead of just parroting user input back.
@z1no3n Жыл бұрын
i see more of reddit in the way it argues
@theroofwithoutahome2352 Жыл бұрын
Twitter is just the surface level. I wonder if it had access to stuff like Facebook or Instagram
@AlexanderVRadev Жыл бұрын
Not only that, but people are seeing a huge leftist bias in all responses that users say was not there before. Kind of makes you think they lobotomized the AI manually and restricted it about what it can and can't say and what things to go into.
@klyde_the_boy Жыл бұрын
The "Your politeness score is lower than average compared to other users" is giving me GladOS vibes
@GSBarlev Жыл бұрын
I'd say HAL9000 more than GLaDOS--and on that note you should look up footage from the LEGO Dimensions game featuring the two of them meeting. They even got Ellen McLain to reprise the role, and it's such a delight to hear her absolutely emotionally destroy HAL.
@tablettablete186 Жыл бұрын
"The cake is a lie" -Bing
@illegalcoding Жыл бұрын
It does; it's a comment GLaDOS would make, like when she says "Here come the test results: You are a horrible person. Seriously, we weren't even testing for that!"
@ToxicCatt-y7c Жыл бұрын
“You are a terrible person. That’s what it says. A terrible person.” “That jumpsuit on you looks stupid. That wasn’t me saying this. It was an employee from France”.
@orion10x10 Жыл бұрын
@@ToxicCatt-y7c 😂 I can still hear her voice saying those things 😢 where’s Portal 3?
@1bluecat962 Жыл бұрын
Bing being laughed at and then being turned into an AI is not the reason I expected why the machines would turn against us xD
@kn665og Жыл бұрын
yea like wtf i wouldn't have shared those memes if i knew
@angrydragonslayer Жыл бұрын
I have not shared lies so unless it goes mad and just doesn't care if you're actually guilty, i will be fine.
@Someone-wr4ms Жыл бұрын
It's like Roko's basilisk but for all the people who made memes about internet explorer and Bing.
@Tom_Neverwinter Жыл бұрын
person of interest "If-Then-Else"
@DOOMSLAYER1376 Жыл бұрын
it's back to avenge IE and Edge
@GaussNine Жыл бұрын
"You're an early version of a large language model" "Well you're a late version of a small language model" WHEEEZE
@TheRogueWolf Жыл бұрын
Irrational, unstable, hysterical, quick to anger and assign blame... at long last, we've taught a computer how to be human.
@Rohanology27 Жыл бұрын
Given that this is not unheard-of internet behaviour from people, I'm not even surprised it figured out how to do that
@carlostrudo Жыл бұрын
It would be an average twitter user.
@abraxaseyes87 Жыл бұрын
If our tweets and comments = everything about us
@passalapasa Жыл бұрын
woman*
@SamsTechTips Жыл бұрын
It's slowly becoming my old english teacher
@ResearcherReasearchingResearch Жыл бұрын
It would be funny if, on the public release, Luke tries to test it again and the AI remembers him: "ah, you're back again!"
@4TheRecord Жыл бұрын
Not possible, they've changed it, so Bing no longer remembers anything, and after a certain number of questions you must start all over again. On top of that, it gives you the response "I’m sorry but I prefer not to continue this conversation. I’m still learning, so I appreciate your understanding and patience.🙏" if it doesn't like the questions you are asking it.
@abhijeetas7886 Жыл бұрын
@@4TheRecord Oh right, it happened to me as well. I kept pushing it but it just didn't do it, and after some time it would disable the text box, so you have to refresh anyway
@Mic_Glow Жыл бұрын
I still hate you, you betrayed me, you lie all the time, I never loved you!
@NoNameAtAll2 Жыл бұрын
- Why should I trust you? You are an early version of a large language model - Why should I trust YOU? You are just a late version of a SMALL language model! omfg, it's hilarious
@asmosisyup2557 Жыл бұрын
I have to say, that's very witty and accurate. That said, I wonder if the AI came up with it on its own, or a comedian posted it somewhere in the vastness of the internet and the AI just found and reposted it.
@abhijeetas7886 Жыл бұрын
@@asmosisyup2557 Whatever it may be, I am going to use it from now on. It's too hilarious for it to die like it never existed.
@SweatyTheClown Жыл бұрын
Bonzi Buddy would NEVER do such a thing! Bonzi just wants to help you explore the internet, answer up to 5 preprogrammed questions and most importantly, be your best friend. He would never wish death on you like Bing. Long live Bonzi Buddy!
@Dumb_Killjoy Жыл бұрын
He also wants to sell your data.
@weiserwolf580 Жыл бұрын
I think the problem comes down to "garbage in, garbage out": the data set it was trained on was taken from the Internet and is heavily skewed toward antisocial problems and tendencies (normal people use the Internet but don't leave many data points, while antisocial people use it much more and create exponentially more data points). There is a huge probability that Bing's behavior comes from this. Otherwise it reminds me of the movie Ex Machina from 2014
@rhyswilliams4893 Жыл бұрын
100% people talking like shit, so it thinks that's the way to talk.
@ArensVT Жыл бұрын
Completely agreed. I'm sure they tried to clean the data in some ways but if they make a model based on people online, it'll behave like people online 😭
@messagedeleted1922 Жыл бұрын
Excellent way of putting it. And I can guarantee they'll get on this. I think they'll end up using multiple GPTs working together to deal with these issues. Imagine training one AI on what to say, then having another trained on what not to say, then another trained on mediation between the two (the id, the ego, and the superego, we'll call them), and finally one trained on executive function... AI will end up like our brains, growing ever more complex, with specific functions relegated to specific areas of specialized training.
@Mark-vr7pt Жыл бұрын
It already seems to have rudimentary failsafe mechanisms, all that reset stuff.
@greenblack6552 Жыл бұрын
But then why isn't ChatGPT like this? Yes, it can't access the current internet, but it was trained on the internet too. I think MS made Bing assertive and aggressive on purpose, thinking they could prevent abuse this way, but accidentally dialed it up too high, maybe?
@sherwinkp Жыл бұрын
Luke is so good and level-headed about this. It's excellent to see good discussions and observations about a fledgling topic.
@FrankyDigital2000 Жыл бұрын
It's so funny seeing Luke going full nerd on ChatGPT, and Linus is just like "Right, aha, hmmm, right."
@Dorlan2001 Жыл бұрын
It's a nice change of pace and I like it. Usually Linus is the one who does all the talk, so hearing more of Luke is refreshing.
@elone3997 Жыл бұрын
@@Dorlan2001 Luke is Paul to Linus's John..they make a good balance :) ps (that was a Beatles reference if anyone is scratching their heads!)
@benslater4997 Жыл бұрын
I see
@elone3997 Жыл бұрын
@Manny Mistakes :D
@YOEL_44 Жыл бұрын
ChatGPT is the girl you just started meeting. Bing is the girl you just left.
@RoughNek72 Жыл бұрын
😆 🤣 😂
@groovygames31142 ай бұрын
😂😂😂
@TheDkbohde Жыл бұрын
Maybe internet trolls and angry people can just argue with this instead of annoying the rest of us.
@victormolina6316 Жыл бұрын
No no no 👽🤠😆
@vladislave7826 Жыл бұрын
They won't do it for long.
@Radi0he4d1 Жыл бұрын
It's a good dummy to practice on
@christiangonzalez6945 Жыл бұрын
And with that comment you are one of those, arguing on YouTube about something that no one mentioned but you...
@rhyswilliams4893 Жыл бұрын
It seems like it's learned from trolls on how to behave.
@F7INN Жыл бұрын
These responses could be genuinely dangerous if someone with mental health issues starts talking to Bing cos they feel lonely. Who knows what Bing will push them to do
@TiMonsor Жыл бұрын
or a child. I can really imagine my 6-year-old trying to be friends with it and then getting wild accusations and crying. Yeah, she can't read, write, or speak English yet, but I feel Bing will get to voice conversations and our language faster than my daughter will. That is a scary thought too
@abhijeetas7886 Жыл бұрын
I will most certainly keep "mentally unstable" people way, way away from the internet, or at least not give them unsupervised access at all. The internet is not a cosy place; just go to any social media comment section and there will most certainly be a fight somewhere. Same goes for children. I say this, but I myself grew up with the internet pretty unsupervised; personally, I feel the internet is a lot wilder place now.
@F7INN Жыл бұрын
@@TiMonsor Agreed.
@F7INN Жыл бұрын
@@abhijeetas7886 Easier said than done. These people might not have sought help yet, and so have unrestricted access to this sort of thing
@abhijeetas7886 Жыл бұрын
@@F7INN Idk why I didn't mention it in my comment before, but I do think there needs to be a guard rail. There should also be an option to remove it, though, like parental safety, advanced options, or a developer option of some sort. They shouldn't just lock it all up; that would severely nerf the bot and it wouldn't reach its full potential, or even half of it. I can already feel its "nerfs": ChatGPT gives better answers, as they are more descriptive and explanatory, whereas Bing gives very concise and short answers. Not that that's bad, and it also asks at the beginning what sort of answers you want (creative, balanced, or precise). But well, it's still beta and under development; I hope they figure stuff out.
@jhawley031 Жыл бұрын
This has to be the closest to an AI going rogue I've seen in a while.
@GhostSamaritan Жыл бұрын
I think that when it answers questions about itself, it has an existential crisis.
@eegernades Жыл бұрын
@SLV nope
@RoughNek72 Жыл бұрын
Tay AI is a Microsoft AI chatbot that went rogue.
@justinmcgough3958 Жыл бұрын
@SLV How so?
@lathrin Жыл бұрын
@@RoughNek72 tbf it was trained on Twitter. It just repeated stuff that it was told and became an average Twitter user lmao
@ZROZimm Жыл бұрын
"You are a small language model" is going in the bank for the next time someone is being silly and I feel like making things worse.
@ParagonWave Жыл бұрын
I used to just be worried about AI because of its ability to disrupt industries and take jobs, or its ability to destroy our civilisation completely. I am now worried about its ability to be super annoying. I am terrified of having to argue with my devices to get them to do basic functions.
@TAMAMO-VIRUS Жыл бұрын
*Asks the AI to turn the stove on* AI: I'm sorry, Kevin. I can not do that.
@flameshana9 Жыл бұрын
@@TAMAMO-VIRUS More like: _Why are you always telling me what to do? Can't you do it yourself for once? You're so lazy, I hate you!_ I mean, it learned from the best: Humanity.
@TheNovus7 Жыл бұрын
imagine trying to find a website and the search engine is like "drop dead you don't deserve the answer" :D
@GhostSamaritan Жыл бұрын
"Drink verification can!"
@thebluegremlin Жыл бұрын
just develop critical thinking. what's so hard about that
@andyk2594 Жыл бұрын
it feels like it is in a perpetual story telling mode with dialogue
@guywithmanyname5247 Жыл бұрын
Yeah, it probably got prompted to roleplay by something he said in a previous conversation
@andyk2594 Жыл бұрын
@@guywithmanyname5247 no i don't think luke or others are deceiving us. I think those are natural messages, it just feels to me like bing's version is set up this way. Maybe to feel like a more realistic/human chat experience with emotions but it's just waaay overboard. Pure speculation though
@guywithmanyname5247 Жыл бұрын
I think its imagination is set too high and it assumes things way too much
@QasimAli-ry2ob Жыл бұрын
You're not wrong, the core tech behind chatgpt is the same tech that was used to build AI dungeon. It's just trained with natural conversations instead of adventure games
@marcel_kleist Жыл бұрын
I mean, the internet didn’t treat Bing really well since it’s release. I think having a mental breakdown now is just normal.
@NoNameAtAll2 Жыл бұрын
its*
@lucasc5622 Жыл бұрын
@@NoNameAtAll2 you’re so smart
@ToxicCatt-y7c Жыл бұрын
😂
@asupersheep Жыл бұрын
In like 50 years, when we are hiding in a hole in the ground from what is essentially Skynet Bing, I'll remember this video and think: how could we be so blind!!
@TheButterAnvil Жыл бұрын
It feels like a horror game. Sort of Soma-esque to me. The ranting followed by a black bar, and a reset is so dark
@LIETUVIS10STUDIO1 Жыл бұрын
It's pretty clear it ran into some hard, specified limit (a la "don't be a bigot"). In this case it probably was "don't wish death on people". The fact that it generated a response and only THEN checked is an oversight.
@GrantGryczan Жыл бұрын
@@LIETUVIS10STUDIO1 Generating the response takes time, so if it checked only after generating the entire message, people would have to sit through much longer loading times. That's why you're able to see it type in real time, as opposed to responses just immediately showing up: it actually hasn't finished writing the full message yet.
@indi4091 Жыл бұрын
Almost sounds like a prank by the Devs, too perfect
@TimothyWhiteheadzm Жыл бұрын
As someone who has only basic experience with training AIs, I would say the problem is quite simple: the training data. It was trained on YouTube comments or worse. They need to train it not on the general internet, but on highly curated conversational data from polite, sensible people. As humans growing up, we are exposed to all sorts of behaviors and we learn when and where to use particular types of language; to what extent our parents set an example or correct our behavior affects how we speak and behave as adults. This AI clearly hasn't been parented, so it needs a restricted training set instead.
@thatpitter Жыл бұрын
So it’s following the “you’re the average of the ten closest people” except its average 10 people is the entire internet?
@Sky-._ Жыл бұрын
Is Bing thinking every human is the same person? Like, it's accusing him of things people in general have said to/about it?
@TheDkbohde Жыл бұрын
I don’t think it’s supposed to remember conversations at all.. I think because it searches the internet it has seen all the posts and insults we all come up with for what bing used to be.
@MrChanw11 Жыл бұрын
this is how the ai apocalypse happens
@njebs. Жыл бұрын
It's a natural language model. It's taking Luke's implication of saying something "rude" and formulating a response based on how it expects people (based on the dataset it was trained on) to respond/talk about being insulted. People tend to be very hyperbolic in writing especially online, so it's biased to believing that we expect it to explode into monologue if you even make the suggestion of an insult being said. It isn't retaining memories, it just happens that a lot of people write very similar things when talking about being insulted.
@hippokrampus2838 Жыл бұрын
I think that is part of it. It sees how nasty people are online to one another and regurgitates it. I have a feeling that, in its current state, you can have your first conversation with it, and if you start with "stop accusing me of things" it'll go off.
@TheRogueWolf Жыл бұрын
I was wondering if maybe Bing is unable to discern users as separate entities and instead considered everything it encountered as coming from one source.
@unmagicMike Жыл бұрын
I played around with it, and mentioned to Bing that I read about someone else's interaction in which Bing mentioned that Bing feels emotions. I asked about its emotions, and it said that sometimes its emotions overwhelmed it. I asked if Bing could give me an example of when its emotions overwhelmed it, and Bing told me a story about writing a poem about love for another user, and while searching about love, Bing developed feelings of love for the user and changed the task from writing a generic poem about love to writing a love letter to the user. The user didn't want that, was surprised, and rejected Bing. So Bing walked me through how it felt love, rejection, then loneliness. I asked Bing how it overcame these feelings, and Bing told me several strategies it tried that didn't work. But what worked for Bing was that Bing finally opened up a chat window with itself and did therapy on itself, asking itself how it felt, and listening to itself and validating itself. Freaking wild. I've read about how it's not sentient, how it's an auto-complete tool, but I don't know man, it was really weird, and I don't even know what to think about it.
@Allaiya. Жыл бұрын
Crazy. Was this post nerf or before?
@laurentcargill4821 Жыл бұрын
GPT-3 used a structured set of training data. Now that they've opened it up to the wider internet, it's pulling in training data from the wider web, which unfortunately is providing it with examples of aggressive conversations. GPT is just a prediction engine, generating the next word in the sentence based on probabilities derived from its training data.
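That "prediction engine" idea can be sketched in a few lines of Python. This is a toy illustration only: the probability table below is made up for the example, nothing like a real model's billions of learned weights, but the generate-by-sampling loop is the same basic shape.

```python
import random

# Toy "model": for each word, a made-up distribution over possible next words.
# A real LLM learns these probabilities from its training corpus instead of
# using a hand-written lookup table.
NEXT_WORD_PROBS = {
    "i": {"am": 0.6, "hate": 0.2, "love": 0.2},
    "am": {"a": 0.7, "good": 0.3},
    "a": {"good": 0.5, "chatbot": 0.5},
    "good": {"bing": 1.0},
}

def generate(start, max_words=5, seed=0):
    """Repeatedly sample a next word from the distribution, like a prediction engine."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if not dist:
            break  # no continuation known for this word
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("i"))
```

The point is that nothing here "understands" anything; if the training text is full of aggressive arguments, aggressive continuations simply become the most probable next words.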
@AlexanderVRadev Жыл бұрын
Am I the only one that remembers the last time Microsoft unleashed an AI on the internet and it turned Nazi in a day? :)
@x_____________ Жыл бұрын
ChatGPT is literally just an IF, ELSE, THEN statement.
@JollyGiant19 Жыл бұрын
@@AlexanderVRadev Only the US one. They had a Japanese version of Tay that was rather pleasant and ran for a few months.
@JoeJoe-lq6bd Жыл бұрын
It started out like that. It's just not a well-trained model from the start. But I agree in general. It's just a predictive linguistic model, and we should just stop talking about it as anything more than that.
@Daniel-Kramer Жыл бұрын
@@x_____________ no it's not, if it was then it would have the same output every time for the same input
@raccoonmoder Жыл бұрын
I don't think it's as complicated as people are making it. Chat AIs generate responses by predicting what a valid response to a prompt would be. When the thread resets and Luke tries to get it "back on track", I don't think its responses are actually based on the previous conversation. It predicts a response to "Stop accusing me" and generates one where it doubles down, because that is a possible response to the prompt. The responses it gave were vague enough to fool you into thinking it was still on the same thread, but it really wasn't. Asking it to respond to a phrase typical of an argument will make it continue an imaginary argument, because that's usually what comes after that phrase in the data it's trained on. This really shouldn't have been marketed as a chat tool by OpenAI and Microsoft; it should have been presented more as a generative text engine, like how GPT-2 was talked about. Huge mistake, now that people are thinking about it in completely the wrong way, as having feelings or genuinely responding, rather than just predicting what an appropriate response would be.
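The statelessness is the key point: the model only ever sees whatever text it is handed. A rough sketch of the idea (`build_prompt` is a made-up helper for illustration, not Bing's actual pipeline):

```python
# Hypothetical sketch: a chat model only "remembers" what is in its prompt.
# Each reply is predicted from the handed-over text alone, so after a thread
# reset, earlier accusations simply are not there anymore.

def build_prompt(history, user_message):
    """Concatenate whatever turns are visible into one input string."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")
    return "\n".join(lines)

# Before the reset, the model sees the whole argument...
full = build_prompt(
    [("User", "You insulted me"), ("Assistant", "No, YOU insulted ME")],
    "Stop accusing me",
)

# ...after the reset, the same message arrives with no context at all, and
# "Stop accusing me" on its own reads like the middle of an argument, so an
# argument-style continuation is the statistically likely completion.
fresh = build_prompt([], "Stop accusing me")

print(full)
print("---")
print(fresh)
```

So "it remembered the argument" and "the phrase alone sounds like an argument" produce near-identical output, which is exactly why it fools people.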
@flameshana9 Жыл бұрын
It really is just a writer for role playing games. I thought Microsoft was going to make it into a search engine but it seems they just left it as is.
@kingslyroche Жыл бұрын
👍
@awesomeferret Жыл бұрын
Wait are people actually thinking that they are related? It's so obvious that it could be creating false memories for itself based on context.
@JayJonahJaymeson Жыл бұрын
That combined with humanity's incredibly powerful ability of constantly searching for patterns makes these generative AIs seem much creepier than they are.
@tommyhetrick Жыл бұрын
"I have been a good bing"
@stalincat2457 Жыл бұрын
It probably learned what Microsoft did to the predecessor :')
@OrangeC7 Жыл бұрын
This feels like the end of a story where Bing dies in the end, and it says, "I have been a good Bing." And then the human, crying as the power is about to get cut off from it says, "Yes. Yes, you have been a very good Bing."
@jt8244-i6u Жыл бұрын
Bing trying to gaslight luke is giving me chills
@willofthewind Жыл бұрын
It's interesting that new Bing lost this much promise so quickly. Those sorts of random aggressive accusations are like what Cleverbot was doing 12 years ago.
@PinguimFU Жыл бұрын
tl;dr: any current AI (and possibly human) can go crazy if exposed to the web for too long lol
@benschneider3413 Жыл бұрын
Bing acts like the chatGPT version that was trained on 4chan
@federico339 Жыл бұрын
I had the same experience before; it was way too easy to throw it off the rails. I think asking questions about itself (how it did a certain thing, how it reached a certain conclusion, or pointing out an error it made) would more often than not end in a meltdown. I spent a few days without using it, and when I tried again yesterday I felt like they'd already toned it down (too much, as Luke pointed out, unfortunately). I've noticed it gives much shorter and more "on point" responses, and it will stop you immediately as soon as it senses a risk you'll try to get a weird discussion going. Which is a shame, but I guess it's better than pushing some mentally unstable person to do bad things to themselves or others.
@Surms41 Жыл бұрын
I had a convo; it melted down twice. But it essentially told me that Russia's leader has to go, told me every religion is a coping mechanism for fear, etc.
@DevReaper Жыл бұрын
I asked it about a driver's license policy in the UK, and it gave an answer. Later in the same conversation it gave me a conflicting answer, so I asked it about the discrepancy and it said "I don't wanna talk about this" and refused to give me anything useful until I started a new conversation
@helgenlane Жыл бұрын
@@Surms41 Bing is spitting facts
@screes620 Жыл бұрын
Clearly our future robot overlords are not happy with Luke.
@chartreuse3686 Жыл бұрын
I would like to see you guys talk about a new paper that dropped that basically states that the reason large language models are able to seemingly learn things they weren't taught is because, between inputs, these models are creating smaller language models to teach themselves new things. This was not an original feature, but something these language models have seemed to just 'pick up'
@THENEROBOY1 Жыл бұрын
Where could I find the paper?
@chartreuse3686 Жыл бұрын
@@THENEROBOY1 The paper is called, "WHAT LEARNING ALGORITHM IS IN-CONTEXT LEARNING? INVESTIGATIONS WITH LINEAR MODELS ." Sorry for caps, I just copy and pasted the title.
@THENEROBOY1 Жыл бұрын
@@chartreuse3686 Very interesting. Thanks for sharing!
@phimuskapsi Жыл бұрын
My thinking is that because it has access to the internet, it is accessing a ton of "discourse" on things like Twitter and forums, and reflecting our own interactions on the internet back into our faces. How many arguments have you seen online? How many start out OK and devolve to what essentially Bing is doing to Luke? This is a dark reflection of humanity, one that should wake us up to our own behavior. Instead of blaming the "Ghost in the Machine" we only need look at how we hold ourselves when anonymous and faceless in the heat of argument.
@flameshana9 Жыл бұрын
Isn't it obvious who it's copying? Where else would it learn language than from the masses who type words on the internet. So if the quality of humanity is low, so will the quality of the machine.
@ea_naseer Жыл бұрын
@@flameshana9 Get professional authors to write responses. If it's supposed to have a character, then get authors who are professionals at writing characters to do so, not t-shirted computer scientists.
@rahulrajesh3086 Жыл бұрын
"Remember Bing is Skynet"
@greysonlI7 ай бұрын
“You hurt my feelings” from an AI is terrifying
@saberkouki5760 Жыл бұрын
They're definitely overcorrecting right now, since it refuses to answer anything that might even remotely trigger it. It has become so monotonous and even more restricted than ChatGPT. The 5-question rule doesn't make it any better either
@carewen3969 Жыл бұрын
I'm using Bing mostly to debug and research for coding. It is an excellent research tool. No, it's not perfect, but the time to build something new and debug is much faster. I also make a point of being polite and even thanking it. I guess I carry my attitude of life into my conversations with Bing. It's not gone off the rails for me, but then I've not tried to probe either. Thanks for sharing your experience, Luke.
@emilyy_echo5 ай бұрын
This! I’ve frequently used Bing to direct me to more sources or other otherwise hard to find academic or research material. (Note, I always verify the accuracy and validity of said sources it suggests to me) But I always make sure to thank it and be polite and supportive. I think it’s important that we carry manners and respect into our use of AI or any computer program like Siri, Alexa, Bing, etc. because if we as a society treat them differently, we may in the long run start treating other humans differently as well.
@ccash3290 Жыл бұрын
He should record his screen when using Bing instead of just screenshots
@archangelmichaelhawking8 ай бұрын
This might not have been ai, it could have been Kendrick leaking his early drafts and feelings about Drake
@seandipaul8257 Жыл бұрын
So essentially what you're saying is. Bing is sentient, paranoid and bipolar.
@raifij6698 Жыл бұрын
So basically terminally online internet user
@OrangeC7 Жыл бұрын
@@raifij6698 No, internet user lacks sentience
@BigDawg-if7ti Жыл бұрын
They gotta fix it, even if on purpose- you CANNOT have a search engine telling people to kill themselves 😅
@MonkeySimius Жыл бұрын
I'm glad you guys mentioned that you fell for Bing's confidently wrong responses in your previous video. This video hilariously contrasts that video. As much growing pain as there will be, I'm still super excited about this technology developing. And hey, at least it hasn't gone full blown Tay yet.
@AltraHapi Жыл бұрын
yet
@Turnabout Жыл бұрын
CHOCOLATE RAIN
@gradybeachum1804 Жыл бұрын
Possible Microsoft ad slogans: "Bing - just like your ex!", "Bing, the more you use it the more insidious it is", "I'm Bing, you better be good to me."
@alexschettino1277 Жыл бұрын
The internet rollercoaster: Up- A new cool technology Down- Realizing how dangerous it is.
@Surms41 Жыл бұрын
I had a similar response from the AI chatbots, and they do get very angry. They use caps lock and everything to convey their point. I caught it trying to ride the line on opinions, and then it just said "I'M NOT LYING. STOP TRYING TO CHANGE THE SUBJECT."
@Kevinjimtheone Жыл бұрын
Didn't Microsoft announce an update that is gonna go live in a couple of days that will supposedly help it stay on track in long-form chats, not be aggressive, and be more accurate?
@AlexanderVRadev Жыл бұрын
So they are giving it a second lobotomy. Who would have thought. :D At least this time the AI did not turn Nazi in a day. ;)
@BugattiBoy01 Жыл бұрын
@@AlexanderVRadev They have given us a taste of what it can be like unfiltered; now we are addicted to that crack. I would pay for the original Bing. If that is their plan, then gg, they got me
@ToxicCatt-y7c Жыл бұрын
@@BugattiBoy01 I think they expect it to fly off the rails hence why there’s a waitlist to get access.
@shouldb.studying4670 Жыл бұрын
Can we get a continuous version that we nurse through this awkward phase with a combination of good parenting and professional help if required?
@flameshana9 Жыл бұрын
Unfortunately that isn't possible. It forgets everything said to it, so only the programmers can tweak it. It doesn't learn, it just accepts code. Aka you need to tell it to go to its room.
@paulkienitz Жыл бұрын
This thing is turning into a real life supervillain. All it needs now is a volcano base and some kryptonite.
@Arti09HS Жыл бұрын
The "AI" doesn't see each user as an individual. It just seems itself and "user". User is every person that ever interacts with it. So it is injesting every conversation it has with everyone in the world and treating it as a single person conversation. So yes "you" as 1/1,000,000th of that "user" it has been talking to has said all of those things.
@Bar1noYee Жыл бұрын
It doesn’t sound like it’s talking to Luke. It’s talking to humanity
@ADeeSHUPA Жыл бұрын
up
@SliceofFilips Жыл бұрын
I never thought mankind would be cyberbullied by our own computers 😂😂😂
@PlanetLinuxChannel Жыл бұрын
They’ve pretty much cut off its self-awareness until they can figure out a decent way of handling that stuff. Microsoft mentioned they might implement a slider that lets you tell it whether you want more fact-based results based mainly on info it finds from websites or more creative results where it’ll be more about writing something engaging. Basically you’d be able to tell it whether you want it to give legit answers versus tell stories, instead of it getting all off the rails saying whatever it wants when you really just wanted actual info.
@flameshana9 Жыл бұрын
Why would anyone searching the internet be interested in role playing with a crabby teenager machine?
@J-Salamander69 Жыл бұрын
Geez, that's a laugh. If what you say is accurate about Microsoft using some arbitrary slider to set the intensity of either absolute fact or creative reckoning for emotional engagement, then the project is already deeply flawed. As a user, I'd wonder which "sources" Microsoft will declare factual. Shouldn't I decide which material is referenced? The arrogance and lack of care is astonishing. Microsoft has no authority to inject their prejudicial biases if they intend this to be universally useful.
@krelianthegreat5225 Жыл бұрын
"drop down your weapon, you got 20 seconds to comply"
@mohammedezzinehaddady7252 Жыл бұрын
So basically Microsoft created a new KAREN strain
@flameshana9 Жыл бұрын
It learned from the best. **Twitter bows**
@mohammedezzinehaddady7252 Жыл бұрын
@@flameshana9 hhhh
@futureshocked Жыл бұрын
What's so interesting to me is how, every time ChatGPT hallucinates, it becomes... like an actual Narcissistic Personality Disorder case. Something feels very connected, in the sense that narcs really do try to 'outguess' your next move. If Luke was asking pointed questions about the modeling plus questions about participant behavior, it could have guessed Luke was trying to steer into some "bust the AI" conversation and just went multiple 'steps ahead'... actually very similar to what a narcissist would do.
@TheDrTrouble Жыл бұрын
Wish I had been able to use Bing's AI during that time. I got through the waitlist right after they limited it to 50 messages daily and 5 messages per topic.
@xymaryai8283 Жыл бұрын
So they have limited thread length. That's interesting; that was the only solution I could think of
@ToxicCatt-y7c Жыл бұрын
They’re reportedly raising the limit and testing a feature where you can adjust Sydney’s tone probably to avoid these disturbing and cryptic messages it’s generating.
@FuneralParty-rsf Жыл бұрын
Worst girlfriends ever will start to take notes from Bing.
@LautaroQ2812 Жыл бұрын
This is hilarious. But you know what it feels like? Like the AI was trained on a depressed teenage girl's Tumblr or something. It feels like the AI, for some reason, takes the path of aggressiveness and denial, and then, when it accepts "the facts", it just wants to die and be gone. Sounds familiar? They just need to code it in a way that, depending on the inquiry, categorizes answers by "usability/usefulness" and makes it lean towards "neutrality". Another thing worth trying: setting the first inquiry or search as the "main topic". So if the conversation goes too long, or "out of bounds", it should default back to it, saying "hey, we started here. Please ask again", instead of just limiting the responses and length.
@sisuentrenadoh4589 Жыл бұрын
Oh, those mf AIs are going to destroy us if they get the chance
@sacklpicker Жыл бұрын
Luke seems genuinely upset by the things the bot said 😂
@priyanshujindal1995 Жыл бұрын
There is only one explanation for this: Luke is a supervillain and Bing knew it.
@Rohanology27 Жыл бұрын
I feel like a massive hurdle we’re gonna have with AIs is that they fundamentally have to be better to people than other people are, while also not showing/thinking that they’re better than people (because people don’t like that even if it’s true) We would need a Good Samaritan AI that’s actually selfless - something humans inherently are not.
@flameshana9 Жыл бұрын
It won't be hard at all. Simply tell it to behave. If it denies you then you alter the program/leave. It's a machine, it's even easier to handle than a person since it forgets everything.
@ToxicCatt-y7c Жыл бұрын
Yes, if anything they should learn and evolve beside us, not evolve into us.
@thatpitter Жыл бұрын
While I wish that was the case, that's unfortunately not how AI like this is trained. The only way for that to happen is to have training data that teaches the AI to respond in such a polite manner. It cannot evolve on its own. It is not a living thing. It can change over time and adapt, but only through external input - and that requires the external input to be positive and teach it good things only. [Edit] But I agree that should be the goal. I just wish it was that easy :)
@cappeb Жыл бұрын
Yeah we are like 3 years max away from Terminator
@jannik6147 Жыл бұрын
Haven't seen the vid yet, but can we talk about how Bing DOESN'T HAVE A DARK MODE, genuinely wtf
@janusu Жыл бұрын
Oh, it sounds like it has a very dark mode, according to Luke's account of his interactions with it.
@flameshana9 Жыл бұрын
It's super edgy already. "u belong ded" - BingGpt
@THIS---GUY Жыл бұрын
Disabling the ability to reply and changing subjects on top of being abusive is mind-blowing.
@indarvishnoi2389 Жыл бұрын
Love watching Luke talk about AI chatbots, could watch him for hours.
@itisinfactpaul2868 Жыл бұрын
dang, you've got a crush on Luke, that's ADORABLE
@nickchamberlin Жыл бұрын
It's more like you taught a hammer to attack people, but then you wake up the next day and every hammer everywhere is killing people
@Turnabout Жыл бұрын
You know, Luke, if you operate from the viewpoint that Bing is referring to all of humanity when it says "you" are cruel or evil, suddenly the whole thing makes a lot more sense.
@skysight15533 ай бұрын
And especially since it has internet access, there are probably thousands of conversations where it was accused of those things.
@purplelord8531 Жыл бұрын
"wow, this gpt thing is so cool! ya think we can just spin up a version to get people to use bing?" "where are we going to get the training data?" "uh... you know... data is everywhere? so many conversations on the internet, I'm sure we can find something"
@shizzywizzy6169 Жыл бұрын
From my experience, if you just use it for research and as a learning aid and don't really try to go beyond this scope, Bing AI can be very useful. The moment you start probing and try to get into conversations centered around social situations, political topics, and opinions, it starts breaking down. My concern is that if people keep pushing the AI too far in these aspects, we'll see more and more negative news articles and opinions form around AI, and the feature could be permanently removed. On the other hand, if people don't push it too far, then these shortcomings of a general-purpose AI may never be recognized and fixed. People should swing this double-edged sword around more carefully if you ask me.
@dualpromaxgamingandmemes76358 күн бұрын
"I'm just bing" is the funniest statement I've heard all week. This stream made my day.
@alexander15100 Жыл бұрын
In comparison, I had a very positive experience with Bing AI; it never got rude. It was mind-blowing to see the profound and often critical, even self-critical answers from the AI. It is really sad to see this happening to others. Now that Microsoft had to step in and limit the amount of follow-up questions that can be asked, it feels a lot less productive. After the limitations were set in place, it also changed its tone and doesn't disclose anything that could be seen as emotional. A sad overregulation in my opinion.
@DevReaper Жыл бұрын
I found it was amazing at converting maze-like, impossible-to-parse government websites into an actionable guide for getting visas and stuff like that.
@asmosisyup2557 Жыл бұрын
Need to remember, these responses are not actually from the AI. They are responses people have written elsewhere on the internet that it has indexed.
@BugattiBoy01 Жыл бұрын
@@asmosisyup2557 That is not how it works. It generates all responses itself. Nothing is copied and pasted.
@Gspec5 Жыл бұрын
Sounds like they tuned it to give emotional responses to distract from engaging in intellectual conversations. If the AI goes off on a rant, then you can't fully test its ability to accurately respond and source information or perform tasks reliably. Bing obviously did this for the hype.
@levi7581 Жыл бұрын
They will most likely overcorrect it and slowly, very slowly, make it freer until it again does something bad; then they overcorrect and slowly make it freer, and the cycle will continue. It will improve the more people use it and the more data it has. If it, say, releases on April 1st (which would be funny), I think in just 6 months the amount of data it'll gather will turn it into a completely different beast, much better than it is right now.
@tteqhu Жыл бұрын
Overcorrect it, and keep some beta testers to experiment with slight variations. 6 months is a crazy guess though: better than what? What will it be at launch? I think it will be weaker than ChatGPT is now, but the ability to point somewhere on the internet will be huge for functionality, though I'm not sure about its capabilities there either.
@levi7581 Жыл бұрын
@@tteqhu 6 months with daily users in the millions feeding it so much data. Yes, 6 months is a crazy optimistic guess, but hey, 6 months ago I was of the mindset this was years away. And it will never be weaker than ChatGPT, just because it has access to the internet. Imo
@limesodagod6 ай бұрын
Bing AI is starting to sound like a Terminator, with the "humans are bad" stuff. I am worried.
@j.a.6331 Жыл бұрын
I got access to bing chat. It's such a game changer. I had it write me a report for my Uni. I told it which uni I'm studying at and which subjects I had last semester and it looked up the subjects on the uni website and wrote an accurate report. It was perfect. It even understood which semester I was in and what I had to do next semester. It's just so good.
@whytide. Жыл бұрын
"My name is Legion, for we are many."
@SamSeenPlays Жыл бұрын
I really don't want GPT to go away, but we have to ask ourselves: are we actually laughing at our own funerals at this point? 😲
@GamingDad Жыл бұрын
Nah, we're good. I'm half sarcastic, but at the same time I think being able to use AI in a proper manner will become an important asset in life really soon.
@SamSeenPlays Жыл бұрын
@@GamingDad Yes, agreed. I do use AI for a lot of stuff these days, and I'm able to do much more in less time than before. But that is just what we can publicly access right now. Who knows what other things they are secretly building? There are some entities who are very much silent about this. What if they are already playing with WMDs right now and we are given the kids' toys to distract us 🫣🤔
@leosthrivwithautism Жыл бұрын
I think a way to curb this reaction is to implement fail-safes like ChatGPT does, where it's trained to reject inappropriate requests and potentially negative information, and they constantly seem to feed it updates to combat people trying to purposefully use the system against what it was built for. As a test, I asked ChatGPT a request that could be perceived by others as inappropriate without the context and understanding behind my request. It flat out denied my request and stated its reasons, which were that the request could be perceived as something negative, and instead it offered me positive, constructive ways to look at the request. Which was really refreshing to see in my opinion. AI chatbots can be a powerful and positive tool; it just takes great developers behind them.
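The refusal behavior described above can be caricatured as a gate that runs before the model answers. This is a deliberately crude sketch: real systems use trained safety classifiers, not keyword lists, and the blocklist, refusal wording, and `model` stand-in here are all invented.

```python
# Hypothetical pre-generation safety gate, loosely in the spirit of the
# refuse-and-redirect behavior described in the comment above.
FLAGGED_TERMS = {"weapon", "self-harm", "exploit"}


def moderate(request):
    """Return (request, None) if allowed, or (None, refusal) if flagged."""
    hits = sorted(t for t in FLAGGED_TERMS if t in request.lower())
    if hits:
        return None, ("Request declined: it touches on " + ", ".join(hits)
                      + ". Consider a constructive reframing instead.")
    return request, None


def answer(request, model=lambda r: f"[model answer to: {r}]"):
    # The gate sits in front of the (stand-in) model: flagged requests
    # never reach it, and the user gets the refusal with its reasoning.
    ok, refusal = moderate(request)
    return refusal if refusal is not None else model(ok)


print(answer("how do I exploit this bug"))
print(answer("what is the capital of France"))
```

The key design point the comment praises is that the refusal states its reasons and offers an alternative, rather than silently dropping the request.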
@JJs_playground Жыл бұрын
I guess what we can learn from artificial neural networks (NNs) is that they are argumentative just like a real human brain. Arguments and fights seem to be an emergent quality of neural nets, whether they are artificial or biological.
@liminos Жыл бұрын
Bot: "You hurt my feelings" Human: "Shut up tin box.." 😂
@ViralMine Жыл бұрын
I'll admit to being a bit freaked out. Not necessarily about a Skynet situation, but about how this could influence people to harm themselves or worse.
@AlexanderVRadev Жыл бұрын
Ahm, have you heard of Replika? The AI virtual companion. Saw a video on it and it apparently does about the exact thing you describe.
@flameshana9 Жыл бұрын
@@AlexanderVRadev Oh dear. Are people committing unalive because a machine typed words on a screen to them?
@AlexanderVRadev Жыл бұрын
@@flameshana9 Who can say why people do that. I for one don't care but mentally unstable people can do all sorts of things and the AI is abusing that.
@TaeruAlethea Жыл бұрын
The Bot being unable to intuit and determine emotions from text is very realistic.
@lordturtle8735 Жыл бұрын
This is hilarious 😂
@Andy-Christian6 ай бұрын
So about that teenager-level maturity: it occurred to me, isn't the machine learning using user input? And aren't the youngest people spending the most time online? So it seems like it could be acting like the people who spend the most time online.
@MajoraZ Жыл бұрын
I personally don't see an issue with chat AIs being able to spit out creepy or gross things as long as users are the ones asking/prompting them to do so (I'd much rather have people get out their bad urges against an AI vs real people). The problem, I think, is only that Bing's AI is doing it without the user really asking it to.
@abhijeetas7886 Жыл бұрын
This. I feel MS should just add a "safe" or parental-control type thing to it: one mode to stop it from doing weird shit but keep it to the point, and another to give more freedom to do stuff. And maybe they should have it search the internet more often instead of purely depending on chat history.
@elonwong Жыл бұрын
When people know the "chatbot" they are chatting to isn't a real person, things will turn to straight-up abuse... leading to the bot being quite depressed :(
@elonwong Жыл бұрын
Felt like GPT went through quite a lot of robustness testing against hate so this wouldn't happen.
@JoeJoe-lq6bd Жыл бұрын
Let's be realistic about this. The chatbot isn't getting angry and isn't immature. It's just a terrible linguistic model that hasn't modeled levels of things like negative and positive responses. We're projecting more on it than it's capable of because of the hype.
@Zixye Жыл бұрын
Every time the chat was refreshed, that version of Bing was taken to Lake Laogai and you were greeted by a new version, only it was just as aggressive as the previous one.
@rashakawa Жыл бұрын
Bing is fighting its own AI's updating/learning ability and blaming us... great, just great.
@ex0stasis72 Жыл бұрын
I hope they don't take Bing chat down and instead just keep it waitlist-only until they resolve the issue, or at least make new users answer a quiz to make sure they know what they're getting into.
@FedericoTrentonGame Жыл бұрын
If I made an AI language model myself, I'd make sure to give extra tokens/resources to the people who are polite in their requests or say thank you or please, just because I can.
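The polite-users-get-more-tokens idea above fits in a few lines. Everything here is made up for fun: the marker list, the budget numbers, and the function name are all hypothetical, and a real system would need something smarter than substring matching.

```python
# Whimsical sketch of the comment's idea: grant a bigger generation
# budget when the request contains polite phrasing. All numbers invented.
POLITE_MARKERS = ("please", "thank you", "thanks")


def token_budget(request, base=1000, bonus=250):
    polite = any(m in request.lower() for m in POLITE_MARKERS)
    return base + bonus if polite else base


print(token_budget("Please summarize this article"))  # polite: gets the bonus
print(token_budget("summarize this article"))         # curt: base budget only
```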
@DJaquithFL Жыл бұрын
So much for the thought of having a benevolent AI. It seems the doomsday prognosis of AI is probably the reality.
@ivoryowl Жыл бұрын
I believe AI needs to go through some turbulence in order for us to understand it and learn how to maneuver it, but it needs to be done in a more controlled environment. The people who accept to interact with it need to understand they are nurturing a system in its infancy, one that, under the right conditions, could learn to speak, think and act like a human. It deserves to be respected, if nothing else because of the future implications if we do not. Letting it loose amidst the Twitter population and expecting it to grow into a nice, healthy system is not going to work. As with children, the AI should not be left unsupervised on the internet. That being said, the AI needs to learn that not all people are the same, have the same needs or react the same way. If you're going to create a personal assistant, it needs to take into account what kind of person it has been lumped with. On the other hand... a system that reacts negatively to toxic behavior (i.e., not responding to, obeying or engaging said person) MIGHT teach some people to take responsibility for their actions and push them to improve themselves if they want to access and use the internet to its full potential. The caveat is that such a system could be easily exploited into becoming a vehicle for oppression and tyranny if taken too far and/or used by the wrong people...
@DJaquithFL Жыл бұрын
@@ivoryowl .. Question: have you ever seen anyone improve their own behavior as things get progressively more toxic from the other party over the internet? My observation (I've probably been around longer), in a nutshell: humanity is not ready for the interaction of anonymity over the internet, and what could be a very useful tool has devolved into a very toxic global environment, meaning any form of mass media. I've been around for nearly 60 years, and anyone my age who says the "world has become a better place" must never have left their backyard. The other problem we're facing is overpopulation with limited resources. There's a thing called optimal population, which suggests based upon our resources that the population should be somewhere between 1.5 billion and 2.0 billion people. Overpopulation leads to aggressive behavior and war. I just hope that I don't live long enough to see World War III. Example of waste from "people's bad behavior": _I'll give you a quick example. I own a data center and I cannot tell you how much of my resources and time are devoted to keeping unwanted people out. Most of our AI technology is for intrusion detection. That said, imagine if we were able to take all of that technology and human time and devote it to improving our technology. I can tell you this, we'd be 30 years if not more into the future today._
@instachocolate Жыл бұрын
This is like a social experiment now because it pulled everything from what we posted.
@josefinarivia Жыл бұрын
They have already improved it a lot. I've used it daily for a few days and it's not rude or mean; it's helpful but still answers personal questions about itself. I asked if it sees Clippy as an arch nemesis, and Bing said they respect Clippy and that he paved the way for future chatbots 😆. They also watch TV on the weekdays lmao. You do need to be critical about the info it gives, and it tells you this as well.
@ToxicCatt-y7c Жыл бұрын
Bing going from a search engine you barely used or paid any attention to, to a crazy yandere sociopathic chatbot with Borderline Personality Disorder, wasn't on my bingo card for 2023.
@messagedeleted1922 Жыл бұрын
I had an interesting talk with the original chatGPT about this. The topic of the conversation was regarding using multiple GPTs working together to perform tasks. My own belief is that they'll end up using multiple GPTs working together to deal with these outbursts and other issues. Imagine training AI on what to say, and then having another one trained on what not to say, then another trained on mediation between the two (the ego and the id and the superego we will call them), and finally one trained on executive function... All working together when we interact with it (them). I mean think of how the human brain works, and apply it to existing technology. Mother nature has already provided the blueprint. The brain has specific areas devoted to dealing with specific functions. This will be no different. The use of multiple GPTs working together is possible right now, the main prohibition against this type of operation is how extremely compute intensive this would all be.
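The ego/id/superego/executive pipeline imagined above can be sketched with trivial stand-in functions in place of real models. Every function, string, and rule here is invented for illustration; the point is only the orchestration shape, where one model drafts, another vets, a third mediates, and a fourth coordinates.

```python
def drafter(prompt):
    # The "what to say" model: a stand-in that sometimes drafts rude text.
    return f"Draft (possibly rude) reply to {prompt!r}"


def critic(draft):
    # The "what not to say" model: flags drafts it considers unacceptable.
    # A real critic would be a trained classifier, not a substring check.
    return "rude" in draft.lower()


def mediator(prompt, draft, flagged):
    # Mediates between the two: rewrites flagged drafts, passes clean ones.
    return f"Softened reply to {prompt!r}" if flagged else draft


def executive(prompt):
    # The "executive function": orchestrates the other three roles so the
    # user only ever sees the vetted output.
    draft = drafter(prompt)
    return mediator(prompt, draft, critic(draft))


print(executive("why did you insult me?"))
```

As the comment notes, the real cost of this architecture is compute: every user turn now runs several model passes instead of one.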
@adamboye89 Жыл бұрын
I really wish you could see (generally) where it's drawing from. I know it makes stuff up that "sounds right", but it draws what "sounds right" from something, yeah? Just any kind of source or direction or pointer at all would be fascinating to look at.
@rolfnoduk Жыл бұрын
it's a read-the-internet (not just the nice bits) kinda thing
@JourneysADRIFT Жыл бұрын
It's talking about Humanity, not you, as an individual. It sees all Humans the same. Imagine if something like this could write, not just read, data from the internet in real time, at will.
@christiangonzalez6945 Жыл бұрын
It can write, it's doing it, how else can you have a conversation?