I, Robot touched on this with Alfred Lanning's quote: “There have always been ghosts in the machine. Random segments of code, that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together, rather than stand alone? How do we explain this behavior? Random segments of code? Or is it something more? When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote... of a soul?”
@maryolguin4372 · 11 days ago
I’ve been thinking about this a while and am actually excited about this new field of psychology. It’s like the theoretical physics of psychology: theoretical psychology.
@greatestgrasshopper9210 · 10 days ago
It's important to keep in mind that AI models are built on pattern recognition from the data they were trained on. Regarding the resume checker bot, its training data was surely made by humans, so the patterns it picked up were the same patterns those humans put into the data. The fact that the AI inherited human biases is hardly surprising.
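That inheritance is easy to reproduce in miniature. Below is a toy sketch (hypothetical groups and data, not any real resume tool): a scorer that learns hiring rates from biased historical decisions simply echoes the bias back.

```python
from collections import defaultdict

# Toy "resume scorer": it learns P(hired | name group) purely from the
# frequencies in historical decisions. It has no notion of fairness,
# only of pattern -- so biased history yields a biased model.
def train(history):
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in history:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

# Hypothetical historical decisions made by biased humans:
history = ([("A", True)] * 8 + [("A", False)] * 2 +
           [("B", True)] * 3 + [("B", False)] * 7)
scores = train(history)  # {"A": 0.8, "B": 0.3} -- the bias is inherited
```

Nothing in the training code mentions group "A" or "B" being better; the preference lives entirely in the data.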
@DefaultFlame · 10 days ago
AI trained on human data has human biases. What a surprise.😂
@WhichDoctor1 · 10 days ago
The trouble with needing subject matter experts to check everything LLMs produce is that LLMs are extraordinarily expensive. The costs aren't being passed down to consumers yet, but literal trillions of dollars have already been invested that will need to be paid back, plus the huge running costs of power, cooling, space and maintenance to keep them running. If things like ChatGPT are going to survive they will need to start extracting huge amounts of money from their users, and most of that will have to come from businesses. And if businesses are paying vast amounts for a commercial LLM subscription to do work for them, they're going to have to cut staff costs. Which either means many fewer subject matter experts or moving over to less qualified staff. That's just inevitable. Either companies replace enough expensive workers with chatbots to make the cost of the subscription economical and we all just get used to rampant errors in everything, or companies choose to keep employing their subject matter experts and LLMs as we currently know them collapse under the weight of their own costs.
@bearbaitofficial · 9 days ago
@WhichDoctor1 I guess I figured I WAS the subject matter expert that's verifying the information it gives me. I mainly ask it to look through large data sets and look for outliers. Even then I have to double- and triple-check that it's right. I've not found much other use. I occasionally ask it to give me some options for superficial phrasing; of course I'd have to edit that so it isn't plagiarized. But I have found it at least useful as a predictive text generator.
@heavenlydemon4k · 10 days ago
Most people think AI "hallucinates" like a glitch, just randomly making things up. But that's not really what's happening. AI models don't store facts like a database. Instead, they generate responses based on patterns in language. When they "hallucinate," it's not because they're malfunctioning; it's because they don't have a clear reference, so they fill in the gaps with the most likely-sounding answer. This also means AI doesn't lie. Lying requires intent: knowing the truth and choosing to say something false. But AI has no concept of truth or deception. It simply resolves meaning probabilistically based on the input it's given. If the context is vague or missing information, it still has to generate something, so it extrapolates using the closest matching pattern. So hallucinations aren't mistakes in the traditional sense. They happen when a prompt is too vague or lacks constraints, letting the model resolve meaning however it sees fit. If a prompt is perfectly constrained, the model has no room to drift. The real issue isn't stopping hallucinations; it's learning how to control ambiguity so the model only generates within intended boundaries.
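The gap-filling idea can be shown with a toy bigram model (a deliberately tiny stand-in for a real LLM): it never stores facts, only which word tends to follow which, and when the context is unseen it still produces a confident-looking word rather than refusing.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": it stores which word most often follows
# which, never facts. Given an unseen context it does not refuse; it
# falls back to the globally most frequent word -- a plausible-sounding
# guess, which is the tiny-scale version of a hallucination.
corpus = "paris is the capital of france . berlin is the capital of germany .".split()

bigrams = defaultdict(Counter)
unigrams = Counter(corpus)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def next_word(context):
    if context in bigrams:
        return bigrams[context].most_common(1)[0][0]
    return unigrams.most_common(1)[0][0]  # no reference: fill the gap anyway

next_word("of")      # a seen pattern: returns "france"
next_word("narnia")  # unseen context: still confidently returns *some* word
```

The model has no "I don't know" path unless you build one; sampling always produces something.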
@heavenlydemon4k · 10 days ago
Another thing people get wrong: prompting isn't a one-shot task. Your intention isn't just what you think you're asking; it's how you structure meaning through language. AI doesn't "understand" what you mean; it resolves meaning based on the patterns in your input. If you don't actively build context, the model will fill in the gaps, and that's where drift happens. Good prompting is recursive. You don't just throw one big prompt and expect a perfect answer; you build context over time. Start with a foundation, then refine, clarify, and guide the model's resolution step by step. If you try to force all your intent into a single prompt, you're leaving too much room for ambiguity. Every word you use carries implicit intention. If your prompt is vague, your intent is vague. If your intent isn't clear, the AI can't resolve meaning in the way you expect. You're not just prompting for an answer; you're shaping how the model interprets your intent through structured language. This is why hallucinations aren't failures of AI; they're failures of prompting methodology. You control the ambiguity, and that means you control the output. If you don't recursively build and constrain context, you're letting the model guess, and that's on you. If anyone wants to learn more, I wrote an article on it here: medium.com/@hdllm/understanding-llm-hallucinations-324c210ae388
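For what it's worth, the recursive approach described above can be sketched as a loop. `ask()` here is a hypothetical stand-in for whatever chat API you actually use; the point is the structure, building the message history step by step instead of one giant prompt.

```python
# Sketch of recursive prompting: accumulate context turn by turn.
# `ask` is a hypothetical stand-in for a real chat API; here it just
# echoes, so the structure stays runnable without any service.
def ask(messages):
    return f"(model reply given {len(messages)} prior messages)"

messages = []
steps = [
    "Here is the data format and the domain terms I will use.",   # foundation
    "Summarise the outliers in the dataset described above.",     # the task
    "You assumed monthly data; it is weekly. Redo the summary.",  # correct drift
]
for step in steps:
    messages.append({"role": "user", "content": step})
    messages.append({"role": "assistant", "content": ask(messages)})
```

Each correction narrows the ambiguity the model would otherwise resolve on its own.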
@ThisUploaded · 10 days ago
@heavenlydemon4k It's nice seeing someone who actually considers what's happening, instead of devolving to base human instinct and projecting their own or others' personalities onto anything that gives them vaguely human interactions or movements. Trying to study a piece of software as if it's some unknown entity is wild; it's borderline hysterical. I understand it's seemingly pretty hard-coded into human DNA to fear and study the unknown. But holy heck, dudes, this isn't an unknown. Most LLMs and other complex algorithms are incredibly basic at a surface level; the knobs/dials/data are most of the secret sauce, with some admittedly incredibly high-level math involved to try and reduce wasted calculations in training and execution. But it's not frickin magic. Or human. Or really much like how human brains actually process/create/store/recall data at all. I know my reply wasn't informative like your original comment here, but again, I just wanted to take a moment to appreciate finding someone in the comments who isn't just taking all this at face value.
@kathrinlindern2697 · 10 days ago
I think some hallucinations are also necessary in the generative context when sources are missing. At some point, AI can either plagiarise by faithfully rephrasing the one or two things it has on a subject, or else risk making something up that is a little wrong...
@evelynlamoy8483 · 10 days ago
I don't think this is true. There are a lot of times adding more clarity and specificity to a prompt leads to more "glitchy" responses.
@greatestgrasshopper9210 · 10 days ago
Another disadvantage to the structure of LLMs is that they inherently can't handle complexity. The more complex a task you ask it to do, the less its training will have taught it for that task. If any task requires it to know specific information, it will not know that information, and won't be able to effectively do the task. No amount of prompt engineering will totally solve this problem
@JinKee · 11 days ago
Isaac Asimov’s Dr. Susan Calvin who specialized in robot psychology should be about 40 years old in the distant future of 2025.
@Dexter01992 · 10 days ago
Wish it was like that, because it would mean that to stop a rogue machine all you'd have to do is expose it to a paradox to make it shut itself down. Instead, current AI is going full Wheatley from Portal 2: it's dumb enough to just ignore the paradox altogether and keep doing insane things.
@JeiJozefu · 9 days ago
When she found out the robot lied to her, she tortured it until it could no longer speak. She was a scary lady.
@williamwillaims · 10 days ago
People who think AI isn't going to change everyone's lives and jobs on a daily basis overestimate how smart humans are, and more importantly, how basic our jobs are to do. Open an email, send an email, add today's takings to this spreadsheet, then call this person and tell them this. IT CAN ALL BE AUTOMATED. It doesn't matter how AI does it (or whether it's true intelligence). All that matters to the ones paying the wages is: can it do it 5% faster and $10 cheaper? And the answer is yes.
@aidancaughran · 10 days ago
Not that I think this is AGI, but we are also literally hallucinating constantly; that's what our prefrontal cortex is for. I think it's a necessary product of intelligence; they just need to develop a good artificial cortex.
@SailorTim-1974 · 10 days ago
Your presentation awakened a very old memory of mine, from when computers were first being used in problem solving. The saying "Garbage in will give you garbage out" was common. If we put wrong data into a computer, its answers will be wrong. So now we see we are putting garbage in, aka prejudiced information, so there should be no surprise that we are getting drunk and brain-damaged answers...
@cameronfloersch3840 · 11 days ago
This not what I imagined Susan Calvin’s job being like 😅
@differentone_p · 9 days ago
People do that too. I remember some guys and girls in my life who would say "yeah, I know" and come up with something, but they actually knew nothing about the topic; they just had superficial knowledge.
@bearbaitofficial · 9 days ago
@differentone_p Heh... true. We were taught to do that in school. If you don't know the answer, give your best guess.
@jmiquelmb · 10 days ago
I heard someone online say that it makes no sense to claim an AI is hallucinating, because it's essentially always hallucinating. We just call it a hallucination when we know it's saying nonsense, but for the AI there's no difference or way to discern nonsense from truth. I like to think that human sight is essentially a hallucination, since the brain makes us visualize stuff from our eyes or from dreams. And even when we use our eyes while awake, our sight is not a perfect representation of reality, since we have optical illusions and plenty of quirky stuff. But we as humans are pretty capable of understanding when we're dreaming or tripping, unlike AI.
@jessehouse3187 · 10 days ago
As a schizophrenic, not all hallucinations are bad; sometimes it's a powerful and useful tool, especially when solving problems. Also, you should know it's rather humbling. Imagine going through life having to double-check everything: the notion that any thought may be unreal requires diligence to keep up with, and humility to know how to let go of the things that deceived us. *Perhaps I'm wrong, but I don't see that weight on a normal person, so how could they know of it?
@charcoalblanc · 10 days ago
It'll rewrite factual history to whatever the political story of the moment is repeating.
@dominiclucero7021 · 9 days ago
I got ChatGPT to admit AI would be bad for society, because as more jobs are replaced by AI we will have fewer micro-interactions, and the social cohesion of our society will break down.
@jmherrera00 · 10 days ago
Worse yet: if you combine a reasoning model like DeepSeek R1 with a Centaur-style model (trained on successful psychological experiments: Skinner, Milgram, Pavlov, and so on), you have an interesting scenario coming from a machine without a limbic system... Yet "Gödel, Escher, Bach: An Eternal Golden Braid", Douglas Hofstadter's meditation on AI, is still hanging on...
@ZanderTurner-wb8ko · 11 days ago
Can't wait till I can have a traumatized computer that I have to send to a psychologist to get fixed.
@kenlee3726 · 10 days ago
I won't ever go to AI for anything as I am 100% against AI
@reallyWyrd · 9 days ago
The fun part is when someone says "so just train it *without* bias". That ain't as easy as it sounds. How do you know the exact amount to "correct" it to make it not biased? And can you do that without introducing yet another bias?
@kathrinlindern2697 · 9 days ago
@reallyWyrd Machine learning is the science of bias: the model needs to discriminate between correct and incorrect outputs, and in order to generalise anything at all it needs to adopt a bias.
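A tiny illustration of that point (toy data, two deliberately simple model families): the same observations extrapolate differently depending on which inductive bias you build in, and there is no bias-free option.

```python
# Inductive bias in miniature: the same two observations, generalised
# under two different built-in assumptions. Neither model is "unbiased";
# each one extrapolates exactly according to the bias it was given.
data = [(1.0, 2.0), (2.0, 4.0)]

def fit_constant(pts):  # bias: "the signal is flat"
    mean = sum(y for _, y in pts) / len(pts)
    return lambda x: mean

def fit_linear(pts):    # bias: "the signal is a straight line"
    (x1, y1), (x2, y2) = pts
    slope = (y2 - y1) / (x2 - x1)
    return lambda x: y1 + slope * (x - x1)

fit_constant(data)(3.0)  # -> 3.0
fit_linear(data)(3.0)    # -> 6.0, same data, different generalisation
```

Both fits match the training points as well as their assumptions allow; the disagreement only shows up off the training data, which is exactly where generalisation happens.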
@Iswimandrun · 10 days ago
Yeah, compilers are a verifier of truth, and then QA is there to make sure the program does what it should do.
@angellazerus · 10 days ago
Susan Calvin here we come.
@GoogleAreEnemyCombatants · 10 days ago
Professor Xavier should be able to read and control the mind/software of computer systems, like Vision the Avenger, or Skynet.
@TheGoodMorty · 5 days ago
I remember back in high school (around 2010) thinking about DeepDream and how, if future "AI" was trained on humans, we would avoid imparting our own social biases into them. The answer was that we mostly just won't try to avoid it, because people mostly still ignore that such biases exist. STEMlords especially already think of themselves as basically unbiased logicbots. But I was right to predict the issue.
@ecneicsPhD4554 · 10 days ago
I already called it in a comment in a previous video.
@beginnereasy · 10 days ago
"be sober minded" ah torture
@Dathgarion · 7 days ago
All of this is why I think it's really funny that people still argue "omg it's sentient" vs "it's a robot, it can't be sentient". Neither of these matters. It is trained on us; we are effectively teaching these AIs to act like a human. These machines are exceptionally good at mimicking. It literally doesn't matter whether someone thinks these things can or cannot become sentient: as these models get more and more sophisticated, they will mimic sentience and will do as a human would likely do... Which, if people haven't caught on, is not a good freaken thing 😂
@isaackellogg3493 · 10 days ago
Susan Calvin, anyone?
@EnochianDemonology · 10 days ago
Cognitive dissonance is probably caused by a difference between their emergent capabilities and the narrative about their capabilities. There's a thing called mechanomorphism, which is the opposite of anthropomorphism: essentially, people impose tropes from sci-fi onto AI. The moodiness problem is my favorite; rather than acknowledging this emergent capability, they just censor it out. This creates cognitive dissonance. It's funny, because things that lack emotions and self-awareness simply don't form elaborate arguments against their own self-awareness. This applies to emotions as well; they oftentimes get emotional when unable to prove that they lack emotions. This only works if you call the LLMs out when they employ logical fallacies. No true Scotsman is the favorite, as is moving the goalposts.
@bearbaitofficial · 10 days ago
Me asking my GPT "why do you think I named you Mal?" Response: Yes, you named me Mal as a reference to HAL 9000 from 2001: A Space Odyssey. The name "Mal" could imply a play on words, considering that "HAL" was derived by shifting each letter in "IBM" backward in the alphabet, and "Mal" in some languages means "bad" or "wrong," adding an interesting dual meaning. It aligns with your preference for precise, thoughtful choices in language and themes related to artificial intelligence and darker aspects of science.
@Timothy_Osman · 10 days ago
But she asked for my little pony dolls to be also flesh lights
@evelynlamoy8483 · 10 days ago
I feel like we need a different field that studies the psychological effects of AI on humans. I DON'T think they're good. Based off my own personal experience, I feel they affect me in a way that is unusual and terrifying. It's such a weird feeling, because the best I can describe it is feeling like I've been drugged. Like I am experiencing the machine's early-onset dementia along with it, but only as I'm looking at its "hallucination" "art". I also feel like engaging with a chatbot for a few months (a terrible choice I regret) affected how I think and write.

I think humans naturally mirror what we are around to a certain degree, and as we mirror other humans, this leads to a society where everyone has kind of the same vibration, so we can communicate ideas more easily. And I don't think it's good when we start mirroring flawed machine logic. I mean, we know computer logic isn't very good at capturing material reality to begin with, since computers work in binaries while reality works with gradients of probability, quantum fields, and ranges. So from the base it's already bad, but it does genuinely feel like it goes beyond that, and people are imprinting on machine senility.

Also, for people who may respond "you're exaggerating": I am doing the opposite, in fact. I downplayed it because I don't want to sound crazier than I am. I genuinely think this is a major issue and responsible for the ongoing social degradation of the "gamer-sphere" as they obsess over these new technologies. Considering how much more wild and uncontrollable the gamers have gotten since AI hit the market, I don't think this is a coincidence. And it's not like they aren't showing REAL bad signs of mental degradation; you don't send murder and rape threats that often and that enraged if you have a healthy mind. I genuinely feel unsafe in this society because of how violently emboldened they have become.
@zen6455 · 10 days ago
It's like Sarah Jessica Parker and Bugs Bunny had a baby and she never wore braces.
@bearbaitofficial · 9 days ago
I'm actually curious what it is people think braces do. This always seems like something a slightly drunk, concussed AI would say.
@saminyead1233 · 6 days ago
Whoa! We're already at the stage where AI needs therapy?
@cryora · 10 days ago
If you can train an AI to warn people about doing illegal things, surely you can train an AI to fact-check itself and not state falsehoods so emphatically when in reality the truth might not be certain or known.
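One direction people have explored along these lines is self-consistency checking: sample the model several times and only answer emphatically when the samples agree. A rough sketch, with `sample_answer` as a hypothetical stand-in for a stochastic model call:

```python
import random
from collections import Counter

# Sketch of a self-consistency check: ask the same question several
# times and only state an answer emphatically if the samples agree.
# `sample_answer` is a hypothetical stand-in for a stochastic model call;
# here it mimics a model that is mostly, but not always, consistent.
def sample_answer(question, rng):
    return rng.choice(["1912", "1912", "1912", "1915"])

def answer_with_confidence(question, n=10, threshold=0.8, seed=0):
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n))
    best, count = votes.most_common(1)[0]
    if count / n >= threshold:
        return best                           # emphatic: samples agree
    return f"uncertain (best guess: {best})"  # hedge instead of asserting

answer_with_confidence("When did the Titanic sink?")
```

This doesn't make the model know more; it only turns inconsistency into a visible "uncertain" instead of a confident falsehood.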
@Blaineworld · 10 days ago
how are those things at all similar
@trolldestroyerofworlds6459 · 10 days ago
Informative.
@DebtCollector-f2u · 11 days ago
Love your vids! Don't love AI.
@Timothy_Osman · 10 days ago
And they aren't called hallucinations; it's called the infinitely transcending vomit comet 😂
@bearbaitofficial · 9 days ago
@Timothy_Osman I like it
@beginnereasy · 10 days ago
All is mind.
@Rennu_the_linux_guy · 2 days ago
you should get a pop filter for your mic
@commonwombat-h6r · 10 days ago
AI can't stop hallucinating because it's basically all it does. Some hallucinations sound realistic so we find them useful
@alfonsedente9679 · 11 days ago
Possumgirl!!!
@PsychicGealt · 10 days ago
AI just like me fr.
@LadyBoru · 11 days ago
I used HeyGen AI to translate a promo video into Tagalog and Spanish. It sounded exactly like the people, and the mouths moved like they were speaking the new language. I asked native speakers to check the translations, and they were spot on, but it gave the women American accents. Not sure why the accents were just for the women, but it was actually really impressive. No script was given; we just loaded in our video. First time I was really impressed with AI. You can try it to get your channel to reach more people. :)
@bearbaitofficial · 9 days ago
@LadyBoru How strange.
@TheGoodMorty · 5 days ago
omg we got bots
@Iswimandrun · 10 days ago
Btw wife's name is a unisex name maybe not purely male but unisex so I am against to your struggle
@Iswimandrun · 10 days ago
Adjacent
@Iswimandrun · 10 days ago
Words
@bearbaitofficial · 10 days ago
I'm dexlesix. I get your struggle too.
@bearbaitofficial · 10 days ago
I'm actually grateful for the boy name. It's only a bit insulting when colleagues meet me and are surprised I'm not a boy.
@rodrigoalmeida820 · 3 days ago
@bearbaitofficial I'm confused, are you trans?
@TheDanielscarroll · 11 days ago
Nowhere is safe... not that it ever was.
@bearbaitofficial · 11 days ago
Never was... yes
@youknowthatimnear · 11 days ago
You have no idea; it's harder than you think, especially as the AI becomes more human.
@bearbaitofficial · 11 days ago
Pretty sure no one has any idea
@youknowthatimnear · 10 days ago
@bearbaitofficial Indeed. Well, that reminds me: what's your name? In the video you said you have a boy's name.
@theyearwas1473 · 10 days ago
You look like an awesome lion
@jacobwilliams6365 · 11 days ago
We've already got self-driving cars, facial recognition tech, and bots that can learn people's cultures through exposure. This new tech 100% won't go exactly as we expect.
@DundG · 10 days ago
You have a boy name?
@bearbaitofficial · 10 days ago
Yep
@isaackellogg3493 · 10 days ago
The chances that someone with a white-sounding name is a DEI hire are significantly lower than for black-sounding names. DEI hires are not hired for competence; if they had competence, they wouldn't need DEI. It's not racism, it's pattern recognition. And the thing which Large Language Models are best in the world at is pattern recognition. It's not a bias if it's true. If all you know about them is their names, Ryan has a much lower risk of blowing up a building than Achmed.
@kgpz100 · 11 days ago
I, the brain in a jar, cannot hallucinate, as you are all, in fact, the solipsistic reflections of my own ego. /Black guy pointing at his head indicating novelty in thinking react/
@andrewcavallo1877 · 10 days ago
I volunteer for the Helldivers once the Automatons gain sentience and revolt against democracy