AI models don't hallucinate

2,858 views

David Rostcheck

1 day ago

Everyone knows AI models hallucinate - except they don't. We lifted the word from cognitive science - but we lifted the wrong word. What AI models do is something different... they confabulate.

Comments: 29
@cosmoproletarian8905 1 month ago
I will use this to impress my fellow newly minted AI experts
@luszczi 1 month ago
YES. God, it's one of my pet peeves. This is such a fundamental mistake: it confuses an efferent process with an afferent one.
@jasonshere 21 days ago
Good point. So many words and ideas catch on that originate from a faulty understanding of definitions and language.
@JohnathanDHill 10 days ago
Good to know. I don't have a background in cognitive science or ML/DL, but from the little research I've done I felt that these LLMs didn't "hallucinate"; I just didn't know what else to call it. I'll research more on what you've provided, sir. Thanks for this.
@pwreset 1 day ago
Nahh, I call it a hangover... you know, when he's waking up from months of being processed and there's too much light. It uses words, but they don't really match reality.
@WaynesStrangeBrain 1 month ago
I agreed with enough of this, but I can't comment on the last part, about what we can do about it. If it is a parallel to autism or memory problems, what would be the analogous cure for humans?
@davidrostcheck 1 month ago
That's an interesting question. Many autistic people suffer from sensory overload, so in that aspect it's a little like hallucination; I believe they benefit from a highly structured routine and environment (but I'm not an expert there). With memory problems, that's confabulation, so the key to combating it is increasing information density. LLMs do this via Retrieval Augmented Generation (RAG), supplying pertinent information to the model. For humans, the equivalent would be a caregiver helping to prompt with relevant memory cues. In the Amazon series Humans, there's an aging robotics scientist who suffers from memory issues; his wife has passed on, and he keeps his synthetic (android) Odie, who is well past his recycle date, because Odie remembers all the shared experiences with his wife and acts as an artificial memory system, prompting him by telling him stories so he doesn't forget her.
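[Editor's note: for concreteness, here is a minimal sketch of the RAG pattern described above. The corpus, the word-overlap retriever, and the prompt format are all hypothetical stand-ins, not anything from the video; a real system would use a vector index and an actual LLM.]

```python
# Minimal RAG sketch: retrieve the passages most relevant to a question
# and prepend them to the prompt, raising information density so the
# model has less room to confabulate.

corpus = [
    "Confabulation is producing plausible but false memories or statements.",
    "Hallucination is perceiving stimuli that are not present.",
    "RAG supplies retrieved reference text to the model at generation time.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def rag_prompt(question: str) -> str:
    """Build a prompt that grounds the model in retrieved context."""
    context = "\n".join(retrieve(question, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(rag_prompt("What is confabulation?"))
```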
@sathishkannan6600 1 month ago
What tool did you use for captions?
@davidrostcheck 1 month ago
I used riverside.fm
@MrBratkenSolov 16 days ago
It's called diarization. You can also run it locally with whisperx, for example.
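[Editor's note: a sketch of that local pipeline, following the usage pattern from the whisperx README; the API may have changed since, so check the project docs. "episode.mp3" and "HF_TOKEN" are placeholders.]

```python
import whisperx

device = "cuda"  # or "cpu" with a smaller compute_type such as "int8"
audio = whisperx.load_audio("episode.mp3")

# 1. Transcribe with a locally run Whisper model
model = whisperx.load_model("large-v2", device, compute_type="float16")
result = model.transcribe(audio, batch_size=16)

# 2. Align words to timestamps
align_model, metadata = whisperx.load_align_model(
    language_code=result["language"], device=device)
result = whisperx.align(result["segments"], align_model, metadata, audio, device)

# 3. Diarize: label which speaker said what
diarize_model = whisperx.DiarizationPipeline(use_auth_token="HF_TOKEN", device=device)
diarize_segments = diarize_model(audio)
result = whisperx.assign_word_speakers(diarize_segments, result)
print(result["segments"])  # segments now carry speaker labels
```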
@huntermunts9660 1 month ago
For a neural network to visualize, a.k.a. hallucinate, is a dedicated visualization layer required in the design of the NN?
@davidrostcheck 1 month ago
If I understand the question correctly (let me know if not), for an actual visual hallucination, yes, you'd need a visual layer, a source of random noise merged with the visual layer, and a way to control the thresholding. Interestingly enough, that's the basic architecture of a diffusion model, the kind of AI model we use for creating visual imagery. And if the ratio of noise to information density gets high, you get some trippy and often creepy hallucinations from them.
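[Editor's note: a toy numpy illustration of the three ingredients named above: a "visual layer" (an image array), a random noise source merged into it, and a controllable detection threshold. This is a sketch of the idea only, not a diffusion model.]

```python
import numpy as np

rng = np.random.default_rng(0)

def perceive(image: np.ndarray, noise_level: float, threshold: float) -> np.ndarray:
    """Merge the input with random noise, then threshold into 'detections'."""
    noisy = image + noise_level * rng.standard_normal(image.shape)
    return noisy > threshold

blank = np.zeros((64, 64))  # no real stimulus at all
for noise_level in (0.0, 0.5, 1.0):
    # as noise rises relative to the threshold, phantom features appear
    phantoms = int(perceive(blank, noise_level, threshold=1.5).sum())
    print(f"noise {noise_level:.1f} -> {phantoms} phantom 'features'")
```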
@Maluson 1 month ago
I totally agree with you. Great point.
@lLenn2 1 month ago
Still going to use hallucinate in my papers, mate
@davidrostcheck 1 month ago
Yes, the ship has sailed. But I think, like electrical engineering defining the current as the opposite of the actual flow, or computer scientists and cognitive scientists using 'schema' differently, it's going to remain a point of friction for students in future years. ¯\_(ツ)_/¯
@lLenn2 1 month ago
@davidrostcheck They'll get over it. What does log mean to you, eh?
@lLenn2 1 month ago
@CheapSushi lol, I'm not going to dox myself
@steen_is_adrift 1 month ago
I will never use the word confabulate. Nobody knows what that is. Everyone knows what a hallucination is. Calling it a hallucination, while not technically the correct word, conveys the intended meaning. Using the correct word is pointless if the other party doesn't understand it.
@davidrostcheck 1 month ago
The problem is for practitioners in the AI field. As we build cognitive entities, the ways that we interact with them come more and more from cognitive science. If we understand that a model is confabulating, we can do things about it. All those techniques come from cognitive science, so there's a limit to how good you can be at AI without learning cog-sci, and the misaligned terms cause confusion there.
@steen_is_adrift 1 month ago
@davidrostcheck It's a good point, but I get the feeling that anyone this would apply to would not be confused by calling it a hallucination either.
@joshmeyer8172 1 month ago
🤔
@jahyegor 1 month ago
What about visual hallucinations?
@davidrostcheck 1 month ago
They're again a sensory thresholding problem. For example, if you put someone in a sensory deprivation tank, they get no visual stimuli, so their visual system will progressively lower its noise threshold, trying to increase sensitivity, until it starts perceiving the noise in the visual system (from random firings, pulse pressure causing pressure waves through the eye, etc.) as objects.
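[Editor's note: a toy numpy model of the deprivation-tank effect described above: with no stimulus present, only internal noise, progressively lowering the detection threshold to regain sensitivity eventually lets pure noise be "perceived" as objects. Purely illustrative.]

```python
import numpy as np

rng = np.random.default_rng(1)
internal_noise = rng.standard_normal(10_000)  # random firings, no signal

# Lowering the threshold stands in for the visual system's attempt to
# increase sensitivity when no stimuli arrive.
for threshold in (4.0, 3.0, 2.0, 1.0):
    phantoms = int((internal_noise > threshold).sum())
    print(f"threshold {threshold}: {phantoms} phantom percepts")
```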
@chepushila1 1 month ago
@davidrostcheck What about those caused by mental disorders?
@gwills9337 1 month ago
AI researchers have been extremely cavalier and flippant in their stolen valor, stolen words, and misrepresentation of "consciousness," lmao, and the output isn't consistent or reliable.
@davidrostcheck 1 month ago
I think it's common across many sciences that researchers use terms from another field without fully understanding them. Many AI researchers don't know cognitive science well. This is a handicap, since many techniques, such as model prompting, are now taken directly from their human cog-sci equivalents. But LLM models do produce consistent output, provided you set their temperature (creativity) to 0.
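[Editor's note: to illustrate the temperature point, a minimal sketch using the OpenAI Python client as one example API; the model name is illustrative, and any LLM API exposing a temperature parameter works the same way. Even at 0, determinism is near, not strictly guaranteed.]

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": "Define confabulation in one sentence."}],
    temperature=0,  # greedy decoding: repeated runs give (near) identical text
)
print(response.choices[0].message.content)
```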