How to build chat with your data using Pinecone, LangChain and OpenAI

8,400 views

Zachary Proser

1 day ago

Comments: 31
@sipanpalCineNuggets 4 months ago
Your tutorial was very helpful. Please keep up the good work.👍
@zackproser 4 months ago
Thanks so much 🙏 I will!
@dwconsult713 2 months ago
Hi, can you share the link to the Notebook?
@InduPriyaPatcha 2 months ago
Please provide the document link to make it easier and quicker for users to learn more about LangChain and Pinecone
@zackproser 2 months ago
Hi, thanks for your comment. Did you see the linked Notebook in the comments?
@KidsEducationMania 1 month ago
Hello, I could not find the document link. Please provide it here or add it to the video description.
@chinonsooragwam8833 1 month ago
Can you add a link to the notebook in the comments?
@RajPatel-d4u 4 months ago
OpenAI v2 has a vector store feature now. It automatically splits documents into chunks and creates embeddings. Is there a way to use that instead of Pinecone and LangChain?
@zackproser 4 months ago
Hi @RajPatel-d4u, and thanks for your question! I wasn't aware of that yet, but it makes sense - I'm guessing it's an extension of the vector datastore they already had for processing the documents of the custom GPTs. Yes, so long as their API supports query methods, you should be able to swap that in instead. I may do another video in the future examining that in more detail.
@usmantahir2609 5 months ago
Zachary, if I have to add the API key directly rather than reading it from the environment, where would I put it in your code above?
@zackproser 5 months ago
Hi Usman, thanks for your question! Are you saying that you're not able to export an environment variable that contains your key? In a Jupyter notebook host like Google Colab or Kaggle, you can use their secrets integration to set your Pinecone key or any other API key, and then reference the secret using their library. Here's a link to a ton of example notebooks where we demonstrate this pattern: github.com/pinecone-io/examples. Let me know if that's what you mean or not! Best, Zack
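For reference, a minimal sketch of the Colab secrets pattern described above. It assumes you have added a secret named PINECONE_API_KEY in Colab's Secrets sidebar, granted this notebook access to it, and installed the v3+ pinecone client; the names here are placeholders, not taken from the video's notebook.

```python
# Minimal sketch: read an API key from Colab's secrets store instead of
# hard-coding it, then instantiate the Pinecone client with it.
from google.colab import userdata
from pinecone import Pinecone

pinecone_api_key = userdata.get("PINECONE_API_KEY")  # read the secret at runtime
pc = Pinecone(api_key=pinecone_api_key)               # client never sees a literal key in code
```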
@usmantahir2609 5 months ago
@@zackproser I want to ask whether setting the Pinecone API key in an environment variable is the only way to include it in the code. Can I create a variable like api_key, set it to the actual key in the code, and then pass it to Pinecone?
@zackproser 5 months ago
@@usmantahir2609 You could also hard-code your API key in the call that instantiates the Pinecone client, but I wouldn't recommend that from a security perspective.
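A hedged sketch of what that hard-coded variant looks like with the v3+ Pinecone client; the key string and index name are placeholders, and committing a real key to source control is not recommended.

```python
# Sketch only: passing the key directly when instantiating the client.
from pinecone import Pinecone

api_key = "YOUR_ACTUAL_API_KEY"     # placeholder; avoid checking real keys into git
pc = Pinecone(api_key=api_key)      # client instantiated with the literal key
index = pc.Index("my-index")        # "my-index" is a placeholder index name
```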
@roopeshk.r3219 5 months ago
@@zackproser I tried hard-coding the API key, but it wasn't working as expected. Can you share the doc for that? Thanks
@usmantahir2609 5 months ago
@@roopeshk.r3219 @zackproser Exactly, I am also facing this problem. @roopeshk.r3219 can you share your LinkedIn?
@SuiGio 4 months ago
That's great content. How would you make the model have memory of the chat?
@zackproser 4 months ago
Thanks so much for your feedback 😃 Great question - the TL;DR is that you keep an ever-expanding array of messages and pass them back and forth between the LLM and the user each time. I may add an example of this in the future. You could also use a vector DB to store the history and query it at inference time.
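A minimal sketch of that ever-expanding message array approach, assuming the openai Python SDK (v1.x), an OPENAI_API_KEY in the environment, and a placeholder model name; this is an illustration of the pattern, not code from the video.

```python
# Keep the full conversation in a list and resend it on every turn,
# so the model "remembers" earlier messages.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_input: str) -> str:
    messages.append({"role": "user", "content": user_input})          # add the user's turn
    response = client.chat.completions.create(
        model="gpt-4o-mini",                                          # placeholder model
        messages=messages,                                            # send the whole history
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})          # remember the reply too
    return reply
```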
@luciferstark-f8c 4 months ago
While creating the RetrievalQA, I get an error saying it can't instantiate abstract class BaseRetriever with abstract methods _aget_relevant_documents and _get_relevant_documents.
@zackproser 4 months ago
Hi, did you use the same Notebook I linked?
@luciferstark-f8c 4 months ago
@@zackproser No, I'm using the same code for the RAG pipeline
@debjeetmukherjee4591 3 months ago
Is it solved?
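In recent LangChain versions, this BaseRetriever error commonly appears when a vector store object is passed to RetrievalQA directly instead of a retriever. A hedged sketch of the usual fix under that assumption follows; package names, the index name, and the models are placeholders, and it assumes the langchain, langchain-openai, and langchain-pinecone packages are installed with PINECONE_API_KEY and OPENAI_API_KEY set in the environment.

```python
# Sketch: convert the vector store to a retriever with .as_retriever()
# before handing it to RetrievalQA, which expects a BaseRetriever.
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore
from langchain.chains import RetrievalQA

embeddings = OpenAIEmbeddings()
vectorstore = PineconeVectorStore(index_name="my-index", embedding=embeddings)  # placeholder index

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(),
    retriever=vectorstore.as_retriever(),  # .as_retriever() satisfies the BaseRetriever interface
)
```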
@yoursandeep 1 month ago
I always find someone has already done it :) The world now has more like-minded people than ever before, like Chatbase. One question though: why do we need Pinecone when we could build this without it for this test?
@naufal-yahaya 4 months ago
Great tutorial. I'm curious, how do I store all the messages from users and the AI? 1. User sends a message
@zackproser 4 months ago
Hi @naufal-yahaya - thanks for your support and for your question! Yes, I've recently spoken with a Pinecone developer who is doing exactly that - he shared that vector databases make an excellent place to store conversational history, because retrieval is so fast and accurate, and because you can skip having to send all that data back and forth each time.
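A hedged sketch of that idea: upsert each chat message into a Pinecone index as an embedded vector, then query for the most relevant history at inference time instead of resending the entire transcript. The index name, embedding model, and helper functions are illustrative placeholders, and it assumes PINECONE_API_KEY and OPENAI_API_KEY are set in the environment.

```python
# Sketch: persist chat turns in a Pinecone index and recall relevant ones later.
import os
import uuid
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("chat-history")  # placeholder index, dimension 1536 for this embedding model

def embed(text: str) -> list[float]:
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def store_message(role: str, text: str) -> None:
    # Upsert one message as a vector, keeping the raw text and role as metadata.
    index.upsert(vectors=[{
        "id": str(uuid.uuid4()),
        "values": embed(text),
        "metadata": {"role": role, "text": text},
    }])

def recall(query: str, k: int = 5) -> list[str]:
    # Return the k past messages most relevant to the current query.
    results = index.query(vector=embed(query), top_k=k, include_metadata=True)
    return [match.metadata["text"] for match in results.matches]
```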
@luccafabro2 5 months ago
Exemplary tutorial, very clear and useful
@zackproser 5 months ago
Thanks so much 🙏 Glad you found it useful. Stay tuned for more.
@haimroizman6440 5 months ago
Really great tutorial, thanks a lot!
@zackproser 5 months ago
Thanks so much for the feedback and support 🙏 Glad it was useful. LMK what else you'd like to see in the future.
@haimroizman6440 5 months ago
@@zackproser I still haven't thought of a specific topic, but I'll be glad to update you when I have...