Chunking in RAG (with hands-on in LangChain and LlamaIndex) - RAG video series

1,432 views

AI Bites

1 day ago

Comments: 2
@ravindarmadishetty736 • 4 months ago
Hi, I have 100k PDF documents and I stored all of their embeddings in a vector store without any chunking. If I now want to retrieve with a prompt, how can I augment it with relevant information from such a huge un-chunked index? What is the best way to handle this problem? Some references along with your suggestions would also help.
@AIBites • 4 months ago
Is there a particular reason you skipped the chunking step? Since pre-processing and chunking are essentially a one-time operation, I would consider rebuilding the entire vector store with chunking. Retrieval then becomes much easier to repeat for multiple queries, as and when needed. What are your thoughts?
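The rebuild suggested above amounts to splitting each document into overlapping chunks before re-embedding. A minimal pure-Python sketch of fixed-size chunking with overlap is below; the `chunk_size` and `overlap` values are illustrative, not from the video, and in practice a library splitter such as LangChain's `RecursiveCharacterTextSplitter` (which splits on paragraph and sentence boundaries first) would be the more robust choice:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks, each overlapping the previous
    one by `overlap` characters so context isn't cut at chunk borders."""
    chunks = []
    step = chunk_size - overlap  # advance less than chunk_size to overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

Each chunk is then embedded and stored individually, so a query retrieves a few focused passages instead of whole documents.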