Wow, I just finished watching the langflow video from Tech with Tim, and now I'm watching this. You both are my favorite YouTubers! Makes my learning great!
@frankdenweed6456 · 2 days ago
Finally... I have been waiting for this
@guillaumedupin9732 · a day ago
Very interesting. By the way: is it possible to 'index' multiple PDF files, like hundreds or even thousands of them? And is it possible to save the collected information only once, so future requests can reuse it? A bit like we do for Python objects with Pickle.
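The second idea works much the way the commenter guesses: embed the documents once, then persist the embeddings and reload them on later runs. A minimal stdlib-only sketch, where `embed` is a hypothetical placeholder (real code would call an actual embedding model, e.g. via Ollama or sentence-transformers):

```python
import pickle
from pathlib import Path

INDEX_FILE = Path("doc_index.pkl")

def embed(text: str) -> list[float]:
    # Hypothetical placeholder embedding; a real pipeline would call
    # an embedding model here. Only the save/load pattern matters.
    return [float(sum(map(ord, text)) % 1000)]

def build_index(docs: dict[str, str]) -> dict[str, list[float]]:
    # Embed every document exactly once.
    return {name: embed(text) for name, text in docs.items()}

def save_index(index: dict, path: Path = INDEX_FILE) -> None:
    # Persist the embeddings so future runs skip re-embedding.
    with open(path, "wb") as f:
        pickle.dump(index, f)

def load_index(path: Path = INDEX_FILE) -> dict:
    with open(path, "rb") as f:
        return pickle.load(f)

docs = {"a.pdf": "first document", "b.pdf": "second document"}
save_index(build_index(docs))
restored = load_index()
print(sorted(restored))  # → ['a.pdf', 'b.pdf']
```

Dedicated vector stores (FAISS, Chroma, etc.) ship their own persistence for exactly this reason, so at hundreds or thousands of PDFs you would normally reach for one of those instead of raw pickle.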
@peerzechmann5253 · 2 days ago
Great video! Thanks.
@dandyexplorer4252 · 2 days ago
Did you post the code somewhere? Would love to be able to copy it
@992u · 2 days ago
just learn and code it yourself
@kyleebrahim8061 · 2 days ago
This gives me a good idea for an app. How flexible are local LLMs? As in, would it be possible to drive LLM processing with group policies?
@theanonymous92 · 2 days ago
Thank you so much for this, but could you please create a video on how to handle new data we want to add to the vector store? Do we delete the old index and create a new one (literally with shutil), or is there a smarter way? I would really appreciate it if you could cover something like this, because I have built a similar RAG setup, but deleting the index every time new data is added doesn't seem right.
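One common alternative to wiping the index (the shutil approach the commenter describes) is to keep a small manifest of content hashes and only embed files that are new or changed. A rough stdlib-only sketch, assuming a hypothetical `embed_and_add` callback standing in for the real chunk-embed-upsert call into the vector store:

```python
import hashlib
import json
import tempfile
from pathlib import Path

MANIFEST = Path("indexed_files.json")

def file_hash(path: Path) -> str:
    # Content hash detects changed files, not just new filenames.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def load_manifest() -> dict:
    return json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}

def sync(data_dir: Path, embed_and_add) -> list[str]:
    """Embed only files that are new or whose contents changed."""
    manifest = load_manifest()
    updated = []
    for path in sorted(data_dir.glob("*.pdf")):
        digest = file_hash(path)
        if manifest.get(path.name) != digest:
            embed_and_add(path)        # real code: chunk + embed + upsert
            manifest[path.name] = digest
            updated.append(path.name)
    MANIFEST.write_text(json.dumps(manifest))
    return updated

# Demo with fake "PDFs" in a temp directory.
tmp = Path(tempfile.mkdtemp())
(tmp / "a.pdf").write_bytes(b"hello")
(tmp / "b.pdf").write_bytes(b"world")
first = sync(tmp, lambda p: None)   # both files are new
second = sync(tmp, lambda p: None)  # nothing changed, nothing re-embedded
print(first, second)  # → ['a.pdf', 'b.pdf'] []
```

Deleted documents would additionally need their vectors removed from the store, which most vector databases support via per-document IDs; that part is omitted here.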
@hamadalkalbani4122 · 2 days ago
So Amazing, thank u
@thomasgoodwin2648 · 2 days ago
Mostly using Llama 3.2 3B Instruct these days. My little RTX 3070 seems to handle it just fine. Maybe the embedding chunk size is affecting the quality of Llama's retrieval? (If the chunks are too large for the model's context window, for example, it would lose portions of the document in the embedding.) 🖖😎👍