Everyone knows AI models hallucinate - except they don't. We lifted the word from cognitive science - but we lifted the wrong word. What AI models do is something different... they confabulate.
Comments: 29
@cosmoproletarian8905 · 1 month ago
I will use this to impress my fellow newly minted AI experts
@luszczi · 1 month ago
YES. God it's one of my pet peeves. This is such a fundamental mistake: it confuses an efferent process with an afferent one.
@jasonshere · 21 days ago
Good point. There are so many words and ideas catching on that originate from a faulty understanding of definitions and language.
@JohnathanDHill · 10 days ago
Good to know. I don't have a background in Cognitive Science or ML/DL, but from the little research I've done I felt that these LLMs didn't "hallucinate"; I just didn't know what to call it. I'll research more on what you've provided, sir. Thanks for this.
@pwreset · 1 day ago
Nahh, I call it a hangover... you know when he's waking up from months of being processed and there's too much light. It uses words but they don't really match reality.
@WaynesStrangeBrain · 1 month ago
I agreed with most of this, but I can't comment on the last part, about what we can do about it. If it is a parallel to autism or memory problems, what would be the analogous cure for humans?
@davidrostcheck · 1 month ago
That's an interesting question. Many autistic people suffer from sensory overload, so in that respect it's a little like hallucination; I believe they benefit from a highly structured routine and environment (but I'm not an expert there). With memory problems, that's confabulation, so the key to combating it is increasing information density. LLMs do this via Retrieval-Augmented Generation (RAG), supplying pertinent information to the model. For humans, the equivalent would be a caregiver helping to prompt with relevant memory cues. In the Amazon series Humans, there's an aging robotics scientist who suffers from memory issues; his wife has passed on and he keeps his synthetic (android) Odie, who is well past his recycle date, because Odie remembers all the shared experiences with his wife and acts as an artificial memory system, prompting him by telling him stories so he doesn't forget her.
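The RAG pattern described above can be sketched in a few lines. This is a toy illustration, not any particular library's API: the corpus, the naive word-overlap scoring, and the prompt format are all made up for the example.

```python
# Toy RAG sketch: raise the prompt's information density by
# prepending retrieved context, so the model has facts to draw
# on instead of confabulating. Corpus and scoring are made up.

def retrieve(query, corpus, k=2):
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    def score(doc):
        return len(q_words & set(doc.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def build_prompt(query, corpus):
    """Prepend the top-k retrieved documents as context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Odie is a synthetic who remembers the scientist's late wife.",
    "Diffusion models generate images by iteratively removing noise.",
    "RAG supplies pertinent documents to a language model at query time.",
]
prompt = build_prompt("What does RAG supply to a model?", corpus)
```

Production systems swap the word-overlap score for embedding similarity, but the shape is the same: retrieve, then prepend.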
@sathishkannan6600 · 1 month ago
What tool did you use for captions?
@davidrostcheck · 1 month ago
I used riverside.fm
@MrBratkenSolov · 16 days ago
It's called diarization. You can also run it locally with whisperx, for example.
@huntermunts9660 · 1 month ago
For a neural network to visualize, a.k.a. hallucinate, is a dedicated visualization layer required in the NN's design?
@davidrostcheck · 1 month ago
If I understand the question correctly (let me know if not), for an actual visual hallucination, yes, you'd need a visual layer, a source of random noise merged with the visual layer, and a way to control the thresholding. Interestingly enough: that's the basic architecture of a diffusion model, the AI models we use for creating visual imagery. And if the noise vs. information density gets high, you get some trippy and often creepy hallucinations from them.
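The noise-vs-information point can be illustrated with a toy version of a diffusion model's forward (noising) step. This is a sketch only: the flat 8-pixel "image" and the simple blending scheme are invented for the example, not a real model's noise schedule.

```python
# Toy forward-noising step, the core idea behind diffusion model
# training: blend an image with Gaussian noise. As the noise
# share grows, whatever you "see" in the result is increasingly
# invented. The 8-pixel image is made up for illustration.
import random

def add_noise(pixels, noise_level, rng):
    """Blend each pixel with Gaussian noise; noise_level in [0, 1]."""
    return [(1 - noise_level) * p + noise_level * rng.gauss(0, 1)
            for p in pixels]

rng = random.Random(0)
image = [1.0] * 8                            # flat, fully "lit" toy image
slightly_noisy = add_noise(image, 0.1, rng)  # mostly signal
mostly_noise = add_noise(image, 0.9, rng)    # mostly "hallucination"
```

A real diffusion model learns to run this in reverse, recovering structure from noise; when the noise share is too high, the structure it recovers is the trippy imagery described above.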
@Maluson · 1 month ago
I totally agree with you. Great point.
@lLenn2 · 1 month ago
Still going to use hallucinate in my papers, mate
@davidrostcheck · 1 month ago
Yes, the ship has sailed. But I think, like electrical engineering defining the current as the opposite of the actual flow, or computer scientists and cognitive scientists using 'schema' differently, it's going to remain a point of friction for students in future years. ¯\_(ツ)_/¯
@lLenn2 · 1 month ago
@@davidrostcheck They'll get over it. What does log mean to you, eh?
@lLenn2 · 1 month ago
@@CheapSushi lol, I'm not going to dox myself
@steen_is_adrift · 1 month ago
I will never use the word confabulate. Nobody knows what it means, while everyone knows what a hallucination is. Calling it a hallucination, even though it's not technically the correct word, conveys the intended meaning. Using the correct word is pointless if the other party doesn't understand it.
@davidrostcheck · 1 month ago
The problem is for practitioners in the AI field. As we build cognitive entities, the ways we interact with them come more and more from cognitive science. If we understand that a model is confabulating, we can do things about it. Those techniques all come from cognitive science, so there's a limit to how good you can be at AI without learning cog-sci, and the misaligned terms cause confusion there.
@steen_is_adrift · 1 month ago
@@davidrostcheck it's a good point, but I get the feeling that anyone this would apply to would not be confused by calling it a hallucination either.
@joshmeyer8172 · 1 month ago
🤔
@jahyegor · 1 month ago
What about visual hallucinations?
@davidrostcheck · 1 month ago
They're again a sensory thresholding problem. For example, if you put someone in a sensory deprivation tank, they get no visual stimuli so their visual system will progressively lower the noise threshold, trying to increase sensitivity, until it starts perceiving the noise in the visual system (from random firings, pulse pressure causing pressure waves through the eye, etc) as objects.
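That thresholding account can be simulated as a toy model. All the parameters here (starting threshold, decay rate, noise distribution) are invented for illustration: with no stimulus present, a decaying detection threshold eventually lets pure internal noise register as "percepts."

```python
# Toy model of the thresholding account of hallucination: with
# no real stimulus, the detection threshold decays (sensitivity
# rises) until pure internal noise crosses it and gets reported
# as a percept. All parameters are invented for illustration.
import random

def false_percepts(steps, rng, threshold=3.0, decay=0.95):
    """Count noise-only events crossing a slowly decaying threshold."""
    hits = 0
    for _ in range(steps):
        noise = abs(rng.gauss(0, 1))  # internal noise, no stimulus
        if noise > threshold:
            hits += 1
        threshold *= decay            # sensitivity creeps up "in the dark"
    return hits

early = false_percepts(10, random.Random(42))   # threshold still high
late = false_percepts(200, random.Random(42))   # threshold has decayed
```

The longer the system sits in the "tank," the lower the threshold falls and the more noise gets promoted to perception.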
@chepushila1 · 1 month ago
@@davidrostcheck What about those caused by mental disorders?
@gwills9337 · 1 month ago
AI researchers have been extremely cavalier and flippant in their stolen valor, stolen words, and misrepresentation of "consciousness," lmao, and the output isn't consistent or reliable.
@davidrostcheck · 1 month ago
I think it's common across many sciences for researchers to use terms from another field without fully understanding them. Many AI researchers don't know cognitive science well. That's a handicap, since many techniques, such as model prompting, are now taken fairly directly from their human cog-sci equivalents. But LLMs do produce consistent output, provided you set their temperature (creativity) to 0.
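The temperature point can be sketched with a toy softmax sampler (the logits here are hypothetical, not any particular model's API): sampling divides the logits by the temperature before the softmax, and the limit T → 0 collapses to argmax, i.e. greedy, deterministic decoding.

```python
# Toy softmax sampler showing why temperature 0 gives consistent
# output: logits are divided by T before softmax, and the T -> 0
# limit collapses to argmax (greedy decoding). Logits are made up.
import math
import random

def sample_token(logits, temperature, rng):
    """Sample an index from softmax(logits / T); T == 0 means argmax."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.0, 0.5]
greedy = [sample_token(logits, 0, random.Random(i)) for i in range(5)]
# at temperature 0, every draw picks the same highest-logit token
```

At higher temperatures the softmax flattens and lower-scoring tokens get sampled more often, which is exactly the "creativity" knob mentioned above.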