Chat with PDFs: RAG with LangChain, GPT & LLaMa in Python

3,090 views

NeuralNine

1 day ago

Comments
@namashaggarwal7430 2 days ago
Wow, I just finished watching the Langflow video from Tech With Tim, and now I'm watching this. You both are my favorite YouTubers! Makes my learning great!
@frankdenweed6456 2 days ago
Finally... I have been waiting for this
@guillaumedupin9732 1 day ago
Very interesting. By the way:
- Is it possible to index multiple PDF files, like hundreds or thousands of them?
- Is it possible to save the collected information only once, to serve future requests? A bit like we do for Python objects with Pickle.
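Both are possible: LangChain vector stores such as Chroma and FAISS can index many documents and persist the result to disk for later runs. A minimal pure-Python sketch of that embed-once, reload-later pattern — the `embed` function here is a stand-in for a real embedding model, not actual LangChain API:

```python
import os
import pickle

def embed(text):
    # Stand-in for a real embedding model (e.g. OpenAIEmbeddings);
    # simple character statistics keep the sketch runnable anywhere.
    return [len(text), text.count(" ")]

CACHE = "embeddings.pkl"

def build_or_load_index(chunks):
    # Embed once and pickle the result; later runs reload the file
    # instead of re-embedding -- the same idea real vector stores
    # implement with a persist directory.
    if os.path.exists(CACHE):
        with open(CACHE, "rb") as f:
            return pickle.load(f)
    index = {chunk: embed(chunk) for chunk in chunks}
    with open(CACHE, "wb") as f:
        pickle.dump(index, f)
    return index

index = build_or_load_index(["first chunk", "second chunk"])
print(len(index))  # 2
```

With a real store the shape is the same: build the index once with your PDF chunks, persist it, and open the persisted store on subsequent runs.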
@peerzechmann5253 2 days ago
Great video! Thanks.
@dandyexplorer4252 2 days ago
Did you post the code somewhere? Would love to be able to copy it
@992u 2 days ago
just learn and code it yourself
@kyleebrahim8061 2 days ago
This gives me a good idea for an app. How flexible are local LLMs, as in would it be possible to drive LLM processing with group policies?
@theanonymous92 2 days ago
Thank you so much for this, but could you please create a video on how to handle new data we want to add to the vector store? Do we delete the old index and create a new one (literally with shutil), or is there a smarter way? I would really appreciate it if you could cover something like this, because I have built a similar RAG, but deleting the index every time new data is added doesn't seem right.
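There is usually no need to wipe the index: vector stores generally let you append new documents to an existing collection. A self-contained sketch of the incremental-add idea — `embed` is a placeholder model, and the dict stands in for the vector store:

```python
def embed(text):
    # Stand-in for a real embedding model so the sketch is self-contained.
    return [len(text)]

def add_documents(index, new_chunks):
    # Embed and insert only chunks the store has not seen yet --
    # no need to delete the index and rebuild it from scratch.
    for chunk in new_chunks:
        if chunk not in index:
            index[chunk] = embed(chunk)
    return index

index = {"old chunk": [9]}
add_documents(index, ["old chunk", "new chunk"])
print(sorted(index))  # ['new chunk', 'old chunk']
```

Real stores typically expose an equivalent append operation, so only the new chunks pay the embedding cost.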
@hamadalkalbani4122 2 days ago
So amazing, thank you!
@thomasgoodwin2648 2 days ago
Mostly using Llama 3.2 3B Instruct these days. My little RTX 3070 seems to handle it just fine. Maybe the embedding chunk size is affecting the quality of Llama's retrieval? (If the chunks are too large for the model's context window, for example, it would lose portions of the document in the embedding.) 🖖😎👍