How do we deal with hallucination resulting from our background info?
@TrelisResearch 7 days ago
Take a look at my video on synthetic data generation. I cover it there. Unless I’m misreading your Q and it relates to caching?
@MrMoonsilver 2 months ago
Do you think this will come to open source, self-hosted models?
@TrelisResearch 2 months ago
Yup, I show SGLang (same approach for vLLM) in this video!
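For anyone who wants a concrete starting point, below is a minimal sketch of turning on automatic prefix caching for a self-hosted model in vLLM (SGLang enables the equivalent RadixAttention-based caching by default). The model name, prompts, and exact flag name are assumptions — check the vLLM docs for your version:

```python
# Hedged sketch (not from the video): automatic prefix caching in vLLM.
# Model name and prompts are placeholders; verify enable_prefix_caching
# exists under that name in your vLLM version.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", enable_prefix_caching=True)

shared_prefix = "You are a support bot. <long shared background document>\n"
prompts = [
    shared_prefix + "Q: What is the refund policy?",
    shared_prefix + "Q: How do I reset my password?",
]

# The second prompt reuses the KV cache built for the shared prefix,
# so only its unique suffix tokens need a fresh forward pass.
outputs = llm.generate(prompts, SamplingParams(max_tokens=64))
for out in outputs:
    print(out.outputs[0].text)
```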
@MrMoonsilver 2 months ago
Super cool, thank you so much.
@explorer945 2 months ago
How is it different from caching by UI libraries like Chainlit, where they use Redis to store embeddings of the prompt and, if there's a match, return the previous response without even hitting the LLM API? Which is better?
@TrelisResearch 2 months ago
Howdy! What you're mentioning is embedding/response caching, which is a complete cache (i.e. the whole answer is stored and returned if there's a match). This here is KV caching, which is a partial cache used during LLM inference. When part of a prompt is reused (and it has to be the first part), some intermediate values (the keys and values, K and V) can be reused in the forward pass to generate the response.
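To make the "first part" point concrete, here's a rough sketch of the idea with Hugging Face transformers (the model name and strings are placeholders): the K/V tensors for the shared prefix are computed once and passed back in, so only the new suffix tokens go through the forward pass. The restriction to the prefix comes from causal attention: each token's K/V depend on its position and on everything before it, so only a cached leading chunk still matches.

```python
# Rough sketch of reusing the KV cache for a shared prompt prefix with
# Hugging Face transformers. "gpt2" and the example strings are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Shared prefix (e.g. a long system prompt or background document).
prefix_ids = tok("Long shared background context about the product.",
                 return_tensors="pt").input_ids
with torch.no_grad():
    prefix_out = model(prefix_ids, use_cache=True)
prefix_kv = prefix_out.past_key_values  # per-layer K and V tensors

# A question appended after the same prefix: only these new tokens are
# processed; the prefix K/V are reused rather than recomputed.
question_ids = tok(" Question: what is the warranty period?",
                   return_tensors="pt").input_ids
attn = torch.ones(1, prefix_ids.shape[1] + question_ids.shape[1], dtype=torch.long)
with torch.no_grad():
    out = model(question_ids, past_key_values=prefix_kv,
                attention_mask=attn, use_cache=True)
next_token = out.logits[:, -1].argmax(dim=-1)
```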
@explorer945 2 months ago
@TrelisResearch Got it. Why does it have to be the first part? I couldn't quite get that from the video. Also, is it based on the initial layers or the end layers? And how does it help with RAG architectures?