I hope you guys enjoy this video! Will you be using Document Stores in your projects going forward? PS: I used Llama 3.2 in this tutorial so that everyone can follow along for free, but you can definitely use OpenAI or Anthropic models for improved RAG responses.
@OscarTheStrategist (1 month ago)
Came here from your new RAG agents video, went through the semantic agents video, and now this document store one. You've truly shown me that using Flowise is the right choice for my project. Thank you so much for the tremendous value you are adding to the no-code maker space, Leon. You ROCK! I hope you continue to make videos and dive deeper into more advanced topics. Cheers!
@leonvanzyl (1 month ago)
Thank you for the great feedback 🙏
@homecarers (2 months ago)
Leon, I LOVE YOU, MAN! This is so useful; I cannot believe that nobody paid attention to such a useful feature. Thank you very much!
@AIMasterGuru (2 months ago)
Thanks so much Leo
@leonvanzyl (2 months ago)
You're welcome!
@cvwdhn (2 months ago)
Thank you so much for this video. I'm using Document Stores for my chatbot, and they're much better than just adding the settings in the flow itself.
@leonvanzyl (2 months ago)
Exactly!
@homecarers (2 months ago)
1 - Cutting down the number of nodes you need. 2 - Updating data easily, no more upserting in the chatflows. 3 - Fewer nodes, fewer potential conflicts or issues. 4 - Sharing document stores with different chatflows. 5 - Testing vector stores... Wow!
@leonvanzyl (2 months ago)
Spot on! I didn't even think about mentioning point 4, but you're 100% correct. These document stores can be shared between flows.
@epokaixyz (2 months ago)
Consider these actionable insights from the video: 1. Create a chatflow in Flowise to begin building your RAG chatbot. 2. Add a Conversational Retrieval QA Chain and choose a suitable chat model like Llama 3.2 or an OpenAI model. 3. Establish a Document Store within Flowise and use document loaders like web scrapers or file uploads to populate it. 4. Configure upsert settings in your Document Store, selecting embeddings and a vector store. 5. Test the retrieval capabilities of your Document Store and fine-tune parameters like the number of documents returned and metadata filters. 6. Continuously test and refine your Document Store configuration to optimize retrieval accuracy for your RAG application.
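To make the summary above concrete: once a chatflow like this is built and upserted, it can be queried over Flowise's prediction API. A minimal sketch in Python; the base URL and chatflow ID are placeholders you would copy from your own Flowise instance:

```python
import json
from urllib import request

FLOWISE_URL = "http://localhost:3000"   # assumption: default local Flowise port
CHATFLOW_ID = "your-chatflow-id"        # placeholder, copy from the Flowise UI

def build_prediction_request(question: str) -> request.Request:
    """Build the POST request for Flowise's prediction endpoint."""
    url = f"{FLOWISE_URL}/api/v1/prediction/{CHATFLOW_ID}"
    body = json.dumps({"question": question}).encode()
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})

def ask(question: str) -> str:
    """Send the question to the running chatflow and return the answer text."""
    with request.urlopen(build_prediction_request(question)) as resp:
        return json.load(resp)["text"]
```

Calling `ask("What is a document store?")` against a running instance returns the chatflow's answer as plain text.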
@sridhartn83 (1 month ago)
Again, a great video explaining how to set up web scraping in Flowise. I have set this up with a local PostgreSQL database using the pgvector extension for both the vector store and the record manager. Thank you!
@santiagoghione9177 (1 month ago)
Your videos are excellent as always, Leon. I don't really understand what the metadata is for within the vector store. Could you make a video explaining it and giving usage examples? Thank you very much, and I can't recommend your channel enough!
@leonvanzyl (1 month ago)
Great idea for a video.
@Francotujk (2 months ago)
Leon, the value you are adding to the community is incredible! I've asked you this in another video: if possible, can you explain what "Streaming" is in Flowise (a new feature they added), and what the main use case is? Thank you very much!
@leonvanzyl (2 months ago)
Are you referring to the new Streaming SDKs? I'm working on videos on using these SDKs in Python and TypeScript applications 👍. Basically, they're an alternative to calling the Flowise APIs, and they make it super easy to handle streaming responses.
@Francotujk (2 months ago)
@@leonvanzyl Yes, I am. OK, I'll stay tuned. Thanks Leon
@MisCopilotos (5 days ago)
Leon, how can we make an API call to the Document Store, to let our users upload documents from our app to the Flowise Document Store?
@TeamsWorkAI (2 months ago)
Thank you Leon. 🎉
@leonvanzyl (2 months ago)
You're welcome 🤗
@nocturnalbreadwinner (2 months ago)
Hello Leon! As always, thank you so much for all that you're doing for the community. I was wondering if there's any way we could connect docs to structured outputs, or assistants to structured outputs?
@leonvanzyl (2 months ago)
Hey there! You're welcome 🤗. Not sure I understand the question. Could you provide an example / use case? Since this video is about document stores, I assumed you're asking if you can upsert structured data like CSV or JSON? You could simply add document loaders for those.
@nocturnalbreadwinner (2 months ago)
@@leonvanzyl Thank you for your reply. I meant: if I had documents already loaded, how could I output structured JSON? Or, down a different path, if I had an assistant loaded with documents, how could I get structured JSON out?
@leonvanzyl (2 months ago)
OK, I understand. This is a bit complicated to explain in a comment, but I'll try 😊. You can't assign output parsers to agents; only LLM chains can take an output parser. You also cannot pass the output of an agent to an LLM chain. So what I would do is use Sequential Agents instead (agent flows): kzbin.info/www/bejne/bH3Fp5qKl7hjeKc. You can use an agent to perform the vector retrieval as per normal. After the agent call, add an LLM node; LLM nodes are able to output structured data. For more complex structure and logic, I would approach this very differently: I would build an automation process outside of Flowise (using Make.com or n8n) to first call the agent flow, and then parse the response from the agent into a complex, structured output.
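The last suggestion above (parsing the agent's free-text reply into structured output outside Flowise) can be sketched in a few lines. This is roughly what a code step in Make.com or n8n would do; the example reply string is made up for illustration:

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull the first {...} block out of a free-text agent response
    and parse it, raising if no JSON object is present."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in response")
    return json.loads(match.group(0))

# A typical agent reply wraps the JSON in conversational filler:
reply = 'Sure! Here is the data: {"title": "Chapter 57", "summary": "Updated."}'
data = extract_json(reply)
```

Asking the LLM node to answer "in JSON only" makes this extraction step far more reliable.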
@nocturnalbreadwinner (2 months ago)
@@leonvanzyl Thank you Leon, you're the goat!
@BirdManPhil (1 month ago)
This is amazing. I'm going to switch to Flowise from n8n for my main flow automation.
@leonvanzyl (1 month ago)
Definitely worth a try. I use both platforms for client projects.
@TheWhoIsTom (21 days ago)
Nice video, thanks! Is it possible to use a proxy and custom headers for the Cheerio web scraping? Some sites will block the crawler.
@massimosarzi (2 months ago)
Thank you Leon for all your work! One question: can I query the Document Store's contents filtering by metadata?
@leonvanzyl (2 months ago)
Yes. If I'm not mistaken, you can set that up in the Upsert config step.
@mikew2883 (2 months ago)
Great video! 👏
@leonvanzyl (2 months ago)
Thanks!
@AbdulrahmanHariri (2 months ago)
Thanks Leon. I guess this approach is more suitable for upserting manually, or is there a way to do it dynamically through the API too? If not, I guess we still need to create chatflows for upserting data dynamically (where we can also define custom attributes).
@leonvanzyl (2 months ago)
You're right, you can use the APIs to automate the process of updating the knowledge base. I'll create a video on how I do exactly this 👍
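For readers who want to try this before the video lands, here is a rough sketch of such an API call. The endpoint path and payload shape below are assumptions based on recent Flowise versions, and every ID and key is a placeholder; check the API reference for your install before relying on them:

```python
import json
from urllib import request

FLOWISE_URL = "http://localhost:3000"   # assumption: local Flowise instance
API_KEY = "your-api-key"                # placeholder

def build_upsert_request(store_id: str, text: str, metadata: dict) -> request.Request:
    """Build a request against the (assumed) Document Store upsert endpoint.
    The loader name and config keys are illustrative, not guaranteed."""
    url = f"{FLOWISE_URL}/api/v1/document-store/upsert/{store_id}"
    body = json.dumps({
        "metadata": metadata,
        "replaceExisting": True,  # assumed flag for refreshing existing docs
        "loader": {"name": "plainText", "config": {"text": text}},
    }).encode()
    headers = {"Content-Type": "application/json",
               "Authorization": f"Bearer {API_KEY}"}
    return request.Request(url, data=body, headers=headers, method="POST")
```

A scheduler (cron, n8n, Make) can then call this on whatever refresh cadence the knowledge base needs.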
@AbdulrahmanHariri (2 months ago)
@@leonvanzyl Oh I never tried that, that would certainly make things easier with this setup! Thanks!
@AssassinUK (2 months ago)
If you can dynamically insert documents in this setup, that would be amazing. I know you have a video on upserting a document from Postman; if that were possible here, then wow!
@lemarunico (2 months ago)
Great tutorial, Leo, thank you very much for sharing such valuable information. Question: which of the OpenAI models would be most recommended for RAG?
@leonvanzyl (2 months ago)
GPT-4o or 4o-mini are great.
@LURASASA (2 months ago)
Superb, Leo, this video is extremely useful. Thank you again! Question: why did the chunk count in Supabase (and Pinecone, after you cleaned it up) change to 241, when it was supposed to be the sum of the chunks embedded from the two scrapings (244)?
@leonvanzyl (2 months ago)
HAHAHA, I think the content of the pages literally changed when I recorded the video 😂.
@LURASASA (2 months ago)
@@leonvanzyl Fair enough. Thanks again Leo!
@WayneBruton (2 months ago)
Hi Leon, off-topic question: a while back you did a tutorial about adding leads, which I implemented using my own URL instead of Make. I have a quick question, though: I see some bots actually insert a form into the chat that the user can fill in, and I get comments that people would like something like that. Can Flowise do something along those lines?
@leonvanzyl (2 months ago)
Nah, Flowise cannot expose forms at this point. You could use n8n or VectorShift for that though.
@WayneBruton (2 months ago)
@@leonvanzyl Thanks, will try n8n
@rodrigorubio3498 (2 months ago)
Would be great to see this working with the "In-Memory Vector Store". Would that work?
@leonvanzyl (2 months ago)
I personally couldn't get it to work. Besides, Pinecone doesn't cost anything (free tier is generous) and it's production-ready.
@rodrigorubio3498 (1 month ago)
@@leonvanzyl thank you mate.
@k1r0vsiii (2 months ago)
Hey, how do you implement Perplexity results? As custom tools? Thanks. I tried, but it doesn't connect somehow.
@clarkzara15 (1 month ago)
Great video! I’m using Flowise to set up a RAG system and need help extracting keywords from chunks created by the Recursive Character Text Splitter. I want to add these keywords as metadata to improve retrieval accuracy. How can I process each chunk individually for keyword extraction (e.g., with Ollama) and add this metadata? Any tips are appreciated!
@KMTAN1000 (1 month ago)
I am using Chroma DB as the vector store in my project. Is there a way to dynamically use the metadata filter to improve the vector store's search results? The idea is to extract keywords from the user query and then pass them to the metadata filter of the vector store to further improve the accuracy of the search.
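The idea in the question above can be sketched as a small pre-processing step: match query words against a known keyword vocabulary and build a Chroma-style `where` filter from the hits. The keyword set here is invented for illustration, and the actual Chroma call is shown only as a comment:

```python
def build_where_filter(query: str, known_keywords: set):
    """Turn a user query into a Chroma-style metadata filter by matching
    words in the query against keywords stored as chunk metadata."""
    hits = sorted(w for w in query.lower().split() if w in known_keywords)
    if not hits:
        return None  # no keyword match: fall back to plain similarity search
    return {"keyword": {"$in": hits}}  # Chroma's $in operator

keywords = {"billing", "refunds", "shipping"}
flt = build_where_filter("How do refunds and billing work?", keywords)
# The filter would then be passed to the Chroma query, e.g.:
# collection.query(query_texts=[query], n_results=4, where=flt)
```

A more robust version would stem or lemmatize the query words first, or have an LLM extract the keywords, as the commenter suggests.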
@ClarissaGuscetti (1 month ago)
Thank you very much for your fantastic videos. One thing: I use Flowise and Ollama (llama3.2 and nomic-embed-text) on Docker (Ollama through Nvidia). I put both on the same network and use the IP address in the URL of the Chat Ollama and Ollama Embeddings nodes in Flowise. But in my projects with a tool node (agent RAG), it always stops after the tool node with "Error buildAgentGraph - No tool calls found in the response." If I use ChatGPT as the chat and embeddings model instead, the system works. Do you know the reason for this? That would be really nice, thank you very much.
@JorgeMachado88 (1 month ago)
So, how can we get 1B documents loaded incrementally? A separate process?
@3ac-arts (1 month ago)
Great video! I'm facing an error when upserting with the Vector Store Retriever. Whether I use the In-Memory Vector Store, Postgres vector, or Pinecone, the upsert method receives HTML in a JSON parser, and I couldn't figure out where this HTML response is coming from. The document splitter works normally, but when I connect it to the vector store I get the error. Did you face this before the video? If yes, how did you solve it? Thanks for the video!
@Col-pd2zd (2 months ago)
I tried to upload multiple .docx files from a folder with the .docx loader. Is there a way to specify which document is which within the metadata, or do I have to load each file individually and manually input its file name? Thanks a lot for the videos! They're helping a lot.
@leonvanzyl (2 months ago)
Excellent question. Unfortunately some of the loaders require that you manually capture the file name in the metadata. That's definitely the case with the Docx and PDF loaders.
@Col-pd2zd (2 months ago)
@leonvanzyl Maybe it will be available in the future, or I can find a workaround. Thanks for the reply, and thank you for the videos! I've been binging your Flowise uploads. Please keep 'em coming! I'm learning so much with your help.
@practical-skills-school (2 months ago)
Thank you, Leon. I haven't watched the whole video yet. What concerns me: can it easily replace outdated vectors with updated ones? Let's say I have a 100-chapter book and the 57th chapter has changed. Is it possible to update only the vectors for that chapter, or do I need to delete and re-upload the whole batch of vectors?
@leonvanzyl (2 months ago)
Definitely check out the Record Manager section of the video.
@practical-skills-school (2 months ago)
@@leonvanzyl Great, watched it as you said. Thank you.
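The record manager discussed in this exchange works by tracking content hashes, so only changed chunks get re-embedded. A minimal sketch of the idea (not Flowise's actual implementation), using the 100-chapter book from the question:

```python
import hashlib

def changed_chunks(chunks: dict, index: dict) -> list:
    """Return the ids of chunks whose content hash differs from the stored
    index, updating the index as it goes (the record-manager pattern)."""
    stale = []
    for chunk_id, text in chunks.items():
        digest = hashlib.sha256(text.encode()).hexdigest()
        if index.get(chunk_id) != digest:
            stale.append(chunk_id)      # only these need re-embedding
            index[chunk_id] = digest
    return stale

index = {}                              # persisted between runs in practice
book = {f"ch{n}": f"chapter {n} text" for n in range(1, 101)}
first_run = changed_chunks(book, index)  # first run: all 100 chapters are new
book["ch57"] = "chapter 57 rewritten"    # only one chapter changes
```

On the next run, only `ch57` would be re-embedded and upserted; the other 99 chapters are skipped.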
@JonOfOld (1 month ago)
One issue I'd love some help with: when I chunk my PDF documents, for some reason the metadata is being overwritten, in particular the PDF title. When I preview the chunks and look at the metadata, I discover that it's changed. This happens inconsistently; some documents retain their original metadata title, and some (particularly those with a combination of numbers, spaces, and letters) change seemingly at random. How can I stop this from happening and ensure the PDF's original title properties are retained in the metadata after chunking?
@santiagoghione9177 (1 month ago)
When you set Top K to 4 and it retrieves 4 chunks, are those 4 texts included in the context that is sent to the LLM?
@leonvanzyl (1 month ago)
Correct. The K value determines how many documents should be retrieved from the vector store. Those docs are then injected into the prompt along with the user question.
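The Top K selection described above can be illustrated with a toy similarity search. The two-dimensional vectors here stand in for real embeddings; the vector store ranks chunks by cosine similarity to the query and keeps the K best before prompt injection:

```python
import math

def top_k(query_vec, docs, k=4):
    """Rank stored vectors by cosine similarity to the query and
    return the ids of the K most similar documents."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm
    ranked = sorted(docs.items(), key=lambda kv: cos(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

docs = {
    "a": [1.0, 0.0], "b": [0.9, 0.1],
    "c": [0.0, 1.0], "d": [0.7, 0.3], "e": [0.5, 0.5],
}
hits = top_k([1.0, 0.0], docs, k=4)  # the 4 chunks closest to the query
```

With K=4, the least similar chunk ("c", which points in a different direction from the query) is the one left out of the prompt.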
@SuperLiberty2008 (2 months ago)
Wow, great video, thanks for your work. Could you advise: is there a Python SDK to do this programmatically? I want access control for certain users, and to assign those privileges using metadata.
@leonvanzyl (2 months ago)
Yes, Flowise offers a Python SDK. I'll create videos on both the Python and TypeScript SDKs soon.
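Pending those SDK videos, the access-control idea from the question can be sketched over the plain HTTP API: the prediction request body accepts an overrideConfig object, which can carry a per-user metadata filter. The override key name below (pineconeMetadataFilter) and the allowed_user field are assumptions; the exact names depend on your vector store node and how you tagged chunks at upsert time:

```python
import json

def build_user_scoped_body(question: str, user_id: str) -> str:
    """Build a prediction request body that scopes retrieval to one user's
    documents via a metadata filter in overrideConfig."""
    return json.dumps({
        "question": question,
        "overrideConfig": {
            # hypothetical filter: only return chunks tagged with this user id
            "pineconeMetadataFilter": {"allowed_user": user_id},
        },
    })

body = json.loads(build_user_scoped_body("What is my balance?", "user-42"))
```

For this to work, each chunk must be upserted with the matching metadata field (here, allowed_user) in the Document Store.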
@SuperLiberty2008 (2 months ago)
@@leonvanzyl Can you share the docs, please?
@DjMerlinRemix (11 days ago)
When I go to add the Document Store in the flow, the one I created does not show up in the latest version of Flowise (2.2.1).
@leonvanzyl (10 days ago)
Did you remember to run Upsert as well?
@sitedev (2 months ago)
Can the vector store be accessed/maintained via an API?
@Oliveira-wh1cq (1 month ago)
Hello, Léo. I have a prompt for creating questions in ChatGPT, and I want to create a chatbot to be used on WhatsApp so that teachers can create questions just by messaging this bot. The teacher should only define the course and subject so that the prompt can be better directed. Is Flowise a good tool to develop this? Could you provide a tutorial?
@leonvanzyl (1 month ago)
Hey there. Flowise is perfect for this. In fact, Flowise can generate other educational artifacts like charts and graphs as well. I already have a video on integrating Flowise with WhatsApp which you should check out 👍
@dimitriappel8374 (2 months ago)
If I add a URL to the knowledge base, is there a way to re-scrape it regularly (let's say, every day)? I didn't find this option.
@leonvanzyl (2 months ago)
It's not possible within Flowise itself (not yet, anyway), but I am working on a video showing how you can use the Flowise APIs to automate the "document store refresh" process using n8n or Make.
@8bullets946 (21 days ago)
Hi, I tried this, but even though the data initially entered Pinecone, the table was not created in Supabase. Probably as a result, I am not able to upload another document to the same store after the first try. I get this error: Status: 500 Error: documentStoreServices.insertIntoVectorStore - Error: documentStoreServices._insertIntoVectorStoreWorkerThread - AggregateError
@muhammadsaadaziz4485 (2 months ago)
I am facing an error related to blob size while upserting the vector store. I am using Llama 3.1 locally.
@leonvanzyl (2 months ago)
Yikes! How big is the file?
@muhammadsaadaziz4485 (2 months ago)
@@leonvanzyl A simple URL of a LangSmith docs page, using the Cheerio web scraper.
@micbab-vg2mu (2 months ago)
Thanks!
@leonvanzyl (2 months ago)
You're welcome!
@shadowmonarch93 (2 months ago)
I'm getting an error while upserting; the Ollama server isn't responding. The following error occurred when loading the page: Status: 500 Error: documentStoreServices.insertIntoVectorStore - Error: documentStoreServices._insertIntoVectorStoreWorkerThread - Error: Request to Ollama server failed: 404 Not Found. Please retry after some time.
@LURASASA (2 months ago)
I'm getting a similar error, persistently. "Status: 500 Error: documentStoreServices.insertIntoVectorStore - Error: documentStoreServices._insertIntoVectorStoreWorkerThread - TypeError: fetch failed"
@shadowmonarch93 (2 months ago)
@@LURASASA Well, I found the solution to my problem. In the embedding model section I wrote "nomic embed text", but you have to specify the full model name; that's why the fetch failed. Type "ollama list" in cmd and copy the full name, something like "nomic-embed-text:v1.5". That solves the issue.