LLM Hallucinations in RAG QA - Thomas Stadelmann, deepset.ai

6,877 views

deepset

1 day ago

Comments: 4
@pythontok4192 11 months ago
What does the hallucination detection model compare each sentence of the answer against? If you run it against the retrieved contexts, say the k=4 snippets returned by the RAG pipeline, wouldn't some of them be irrelevant?
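One common way to handle the concern above is to score each answer sentence against every retrieved snippet and keep only the best match, so irrelevant snippets do not drag the score down. A minimal sketch, assuming a generic sentence-similarity function: the toy token-overlap `similarity` below is a hypothetical stand-in for BERTScore or an NLI model, and all names are illustrative, not from the talk.

```python
def similarity(sentence: str, context: str) -> float:
    """Toy stand-in for BERTScore: fraction of sentence tokens found in the context."""
    s_tokens = set(sentence.lower().split())
    c_tokens = set(context.lower().split())
    return len(s_tokens & c_tokens) / len(s_tokens) if s_tokens else 0.0

def grounding_scores(answer_sentences, retrieved_contexts):
    """For each answer sentence, return its best support score over ALL contexts.

    Taking the max means an unsupported sentence is only flagged when NO
    retrieved snippet backs it up, so irrelevant snippets are harmless.
    """
    return [
        max(similarity(sent, ctx) for ctx in retrieved_contexts)
        for sent in answer_sentences
    ]

answer = ["Paris is the capital of France.", "It has 12 million residents."]
contexts = [
    "Paris is the capital and largest city of France.",
    "The Eiffel Tower was completed in 1889.",  # irrelevant snippet
]
scores = grounding_scores(answer, contexts)
flagged = [s for s, score in zip(answer, scores) if score < 0.5]
# 'flagged' now holds the sentences with no supporting snippet
```

Swapping in a learned similarity model changes only `similarity`; the max-over-contexts aggregation stays the same.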
@vsrohit 1 year ago
Could you please provide a link to the hallucination detection model?
@davefar2964 9 months ago
Thanks a lot for this presentation; the research papers on hallucinations as well as your BERTScore solutions were quite interesting. Another class of approaches to detect and avoid hallucinations (which incurs high cost but little extra latency if run in parallel) is drawing multiple samples at test time, e.g. via self-consistency or ensembling, and deciding on the final answer by majority voting or ranking (see kzbin.info/www/bejne/oqS9dImjeKeFosUsi=-dYinw2SiAGw44df&t=1428).
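The self-consistency idea in the comment above can be sketched in a few lines. This is a toy illustration under stated assumptions: `llm_sample` is a hypothetical stand-in for a real generation call with temperature > 0, and the thread pool mimics the "in parallel" point so the extra samples add cost but little latency.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def llm_sample(question: str, seed: int) -> str:
    """Hypothetical sampler; a real one would call the LLM with temperature > 0."""
    canned = ["42", "42", "41", "42", "17"]  # simulated stochastic outputs
    return canned[seed % len(canned)]

def self_consistent_answer(question: str, n_samples: int = 5) -> str:
    """Sample the model n_samples times in parallel and return the majority answer."""
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(lambda i: llm_sample(question, i), range(n_samples)))
    answer, _votes = Counter(answers).most_common(1)[0]
    return answer
```

With the simulated outputs above, three of five samples agree, so majority voting returns "42" and discards the two outlier generations.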
@billykotsos4642 1 year ago
So even RAG can't be trusted 100% huh...