I hope you guys enjoy this video! Will you be using Document Stores in your projects going forward? PS: I used Llama 3.2 in this tutorial so that everyone can follow along for free, but you can definitely use OpenAI or Anthropic for improved RAG responses.
@OscarTheStrategist • 2 days ago
Came here from your new RAG agents video, went through the semantic agents video, and now this Document Store video. You've truly shown me that Flowise is the right choice for my project. Thank you so much for the tremendous value you are adding to the no-code maker space, Leon. You ROCK! I hope you continue to make videos and dive deeper into more advanced topics. Cheers!
@leonvanzyl • 2 days ago
Thank you for the great feedback 🙏
@homecarers • a month ago
Leon, I LOVE YOU, MAN! This is so useful; I can't believe that nobody paid attention to such a useful feature. Thank you very much!
@sridhartn83 • 28 days ago
Again, a great video explaining how to set up web scraping in Flowise. I have set this up with a local PostgreSQL database using the pgvector extension for both the vector store and the record manager. Thank you!
@cvwdhn • a month ago
Thank you so much for this video. I'm using Document Stores for my chatbot, and they are much better than just adding the settings in the flow itself.
@leonvanzyl • a month ago
Exactly!
@santiagoghione9177 • 2 days ago
Your videos are excellent as always, Leon. I don't really understand what the metadata within the vector store is for; could you make a video explaining it with usage examples? Thank you very much, and I can't recommend your channel enough!
@leonvanzyl • 2 days ago
Great idea for a video.
@homecarers • a month ago
1 - Cutting down on the number of nodes you need. 2 - Updating data easily, no more upserting in the chatflows. 3 - Fewer nodes, fewer potential conflicts or issues. 4 - Sharing document stores with different chatflows. 5 - Testing vector stores... Wow!
@leonvanzyl • a month ago
Spot on! I didn't even think about mentioning point 4, but you're 100% correct. These document stores can be shared between flows.
@AbdulrahmanHariri • a month ago
Thanks Leon. I guess this approach is more suitable for upserting manually, or is there a way to do it dynamically through the API too? If not, I guess we still need to create chatflows for upserting data dynamically (where we can also define custom attributes).
@leonvanzyl • a month ago
You're right, you can use the APIs to automate the process of updating the knowledge base. I'll create a video on how I do exactly this 👍
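In the meantime, a minimal sketch of what that API-driven refresh could look like in Python. The endpoint path, port and auth header are assumptions about a typical local Flowise install, so check the API docs for your version:

```python
# Minimal sketch: trigger a knowledge-base refresh on a Flowise instance.
# Assumed: a local Flowise install on port 3000, an API key, and a
# /api/v1/vector/upsert/<chatflowId> endpoint (check your version's API docs).
import requests

FLOWISE_URL = "http://localhost:3000"   # assumed local instance
CHATFLOW_ID = "your-chatflow-id"        # placeholder
API_KEY = "your-flowise-api-key"        # placeholder

def refresh_knowledge_base() -> dict:
    """Re-run the upsert step so the vector store picks up fresh content."""
    response = requests.post(
        f"{FLOWISE_URL}/api/v1/vector/upsert/{CHATFLOW_ID}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={},  # loader/splitter settings come from the flow itself
        timeout=300,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(refresh_knowledge_base())
```

A scheduler such as cron, n8n or Make can then call this script on whatever interval the knowledge base needs.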
@AbdulrahmanHariri • a month ago
@leonvanzyl Oh, I never tried that; that would certainly make things easier with this setup! Thanks!
@AssassinUK • a month ago
If you can dynamically insert documents in this setup, that would be amazing. I know you have a video on upserting a document from Postman; if that were possible here, then wow!
@BirdManPhil • 5 days ago
This is amazing. I'm going to switch to Flowise from n8n for my main flow automation.
@leonvanzyl • 4 days ago
Definitely worth a try. I use both platforms for client projects.
@practical-skills-school • a month ago
Thank you, Leon. I haven't watched the whole video yet. What concerns me is whether it can easily replace a subset of outdated vectors with updated ones. Let's say I have a 100-chapter book and the 57th chapter has changed. Is it possible to update only the vectors for that chapter, or do I need to delete and re-upload the whole set of vectors?
@leonvanzyl • a month ago
Definitely check out the Record Manager section of the video.
@practical-skills-school • a month ago
@leonvanzyl Great, watched it as you said. Thank you.
@lemarunico • a month ago
Great tutorial Leo, thank you very much for sharing such valuable information. Question: which of the OpenAI models would be most recommended for RAG?
@leonvanzyl • a month ago
GPT-4o or 4o-mini are great.
@Francotujk • a month ago
Leon, the value you are adding to the community is incredible! I've asked you this in another video as well: if possible, can you explain what "Streaming" is in Flowise (a new feature they added) and what the main use case is? Thank you very much!
@leonvanzyl • a month ago
Are you referring to the new streaming SDKs? I'm working on videos on using these SDKs in Python and TypeScript applications 👍. Basically, they're an alternative to calling the Flowise APIs, and they make it super easy to handle streaming responses.
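Until those videos are out, here is a minimal sketch of consuming a streaming response directly from the prediction endpoint rather than through the SDKs. The URL, the "streaming" flag and the SSE framing are assumptions about a typical local setup; verify them against your Flowise version:

```python
# Minimal sketch: consume a streaming answer straight from the Flowise
# prediction endpoint (raw SSE) instead of using the SDKs.
# The URL, "streaming" flag and "data:" framing are assumptions about a
# typical local setup.
import requests

FLOWISE_URL = "http://localhost:3000"   # assumed local instance
CHATFLOW_ID = "your-chatflow-id"        # placeholder

def stream_answer(question: str) -> None:
    with requests.post(
        f"{FLOWISE_URL}/api/v1/prediction/{CHATFLOW_ID}",
        json={"question": question, "streaming": True},
        stream=True,
        timeout=120,
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines(decode_unicode=True):
            # Server-sent events arrive as "data: ..." lines; skip keep-alives.
            if line and line.startswith("data:"):
                print(line[len("data:"):].strip())

stream_answer("What is a Document Store in Flowise?")
```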
@Francotujk • a month ago
@leonvanzyl Yes, I am. OK, I'll stay tuned. Thanks Leon.
@clarkzara151 • 3 days ago
Great video! I’m using Flowise to set up a RAG system and need help extracting keywords from chunks created by the Recursive Character Text Splitter. I want to add these keywords as metadata to improve retrieval accuracy. How can I process each chunk individually for keyword extraction (e.g., with Ollama) and add this metadata? Any tips are appreciated!
@massimosarzi • a month ago
Thank you Leon for all your work! One question: can I query the Document Store's contents, filtering by metadata?
@leonvanzyl • a month ago
Yes. If I'm not mistaken, you can set that up in the Upsert config step.
@WayneBruton • a month ago
Hi Leon, off-topic question: a while back you did a tutorial about adding leads, which I implemented using my own URL instead of Make. However, I have a quick question: I see some bots actually insert a form into the chat that the user can fill in, and I get comments that people would like something like that. Can Flowise do something along those lines?
@leonvanzyl • a month ago
Nah, Flowise cannot expose forms at this point. You could use n8n or VectorShift for that though.
@WayneBruton • a month ago
@leonvanzyl Thanks, will try n8n.
@LURASASA • a month ago
Superb Leo, this video is extremely useful. Thank you again! Question: why did the chunks uploaded to Supabase (and to Pinecone, after you cleaned it up) change to 241, when they were supposed to be the sum of the chunks embedded from the two scrapes (244)?
@leonvanzyl • a month ago
HAHAHA, I think the content of the pages literally changed when I recorded the video 😂.
@LURASASA • a month ago
@leonvanzyl Fair enough. Thanks again Leo!
@k1r0vsiii • a month ago
Hey, how do you implement Perplexity results? As custom tools? Thanks. I tried, but it doesn't connect somehow.
@JonOfOld • 1 day ago
One issue I'd love some help with: when I chunk my PDF documents, for some reason the metadata is being overwritten, in particular the PDF title. When I preview the chunks and look at the metadata, I discover that it's changed. This happens inconsistently; some documents retain their original metadata title and some (particularly those with a combination of numbers, spaces and letters) change seemingly at random. How can I stop this from happening and ensure the PDF's original title properties are retained in the metadata after chunking?
@epokaixyz • a month ago
Consider these actionable insights from the video: 1. Create a chatflow in Flowise to begin building your RAG chatbot. 2. Add a Conversational Retrieval QA Chain and choose a suitable chat model like Llama 3.2 or GPT-4o. 3. Create a Document Store within Flowise and use document loaders like web scrapers or file uploads to populate it. 4. Configure the upsert settings in your Document Store, selecting embeddings and a vector store. 5. Test the retrieval capabilities of your Document Store and fine-tune parameters like the number of documents returned and metadata filters. 6. Continuously test and refine your Document Store configuration to optimize retrieval accuracy for your RAG application.
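As a companion to step 6, a minimal sketch of a tiny evaluation loop that sends known questions to the finished chatflow so you can eyeball retrieval quality after each Document Store change. The prediction endpoint is assumed to be a standard local install, and the "topK" override key is a hypothetical example that depends on your retriever node:

```python
# Minimal sketch: a tiny test loop against the finished RAG chatflow so you
# can eyeball answers after each Document Store tweak. The endpoint is the
# standard prediction API; the "topK" override key is a hypothetical example
# and depends on your retriever node.
import requests

FLOWISE_URL = "http://localhost:3000"   # assumed local instance
CHATFLOW_ID = "your-chatflow-id"        # placeholder

TEST_QUESTIONS = [
    "What is a Document Store?",
    "How do I update documents without re-upserting everything?",
]

def ask(question: str, top_k: int = 4) -> str:
    resp = requests.post(
        f"{FLOWISE_URL}/api/v1/prediction/{CHATFLOW_ID}",
        json={
            "question": question,
            "overrideConfig": {"topK": top_k},  # hypothetical override key
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json().get("text", "")

for q in TEST_QUESTIONS:
    print(q, "->", ask(q))
```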
@KMTAN1000 • 26 days ago
I am using Chroma DB as the vector store in my project. Is there a way to dynamically use the metadata filter to improve the vector store's search results? The idea is to extract keywords from the user query and then pass them to the vector store's metadata filter to further improve search accuracy.
@nocturnalbreadwinner • a month ago
Hello Leon! As always, thank you so much for all that you're doing for the community. I was wondering if there's any way we could connect docs to structured outputs, or assistants to structured outputs?
@leonvanzyl • a month ago
Hey there! You're welcome 🤗. Not sure I understand the question; could you provide an example / use case? Since this video is about Document Stores, I assume you're asking whether you can upsert structured data like CSV or JSON? You could simply add document loaders for those.
@nocturnalbreadwinner • a month ago
@leonvanzyl Thank you for your reply. I meant: if I had documents already loaded, how could I output structured JSON? Or, on a different path, if I had an assistant that's loaded with documents, how could I get structured JSON out?
@leonvanzyl • a month ago
OK, I understand. This is a bit complicated to explain in a comment, but I'll try 😊. You can't assign output parsers to agents; only LLM chains can take an output parser. You also cannot pass the output of an agent into an LLM chain. So what I would do is use Sequential Agents (Agentflows) instead: kzbin.info/www/bejne/bH3Fp5qKl7hjeKc. You can use an agent node to perform the vector retrieval as per normal, and after the agent call, add an LLM node; LLM nodes are able to output structured data. For more complex structures and logic, I would approach this very differently: I would build an automation process outside of Flowise (using Make.com or n8n) to first call the agent flow, and then parse the response from the agent into a complex, structured output.
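A minimal sketch of that last approach, i.e. calling the agent flow and parsing its reply into structured JSON outside of Flowise. It assumes the flow's prompt instructs the model to answer with a JSON object, and the endpoint and field names are assumptions about a typical setup:

```python
# Minimal sketch: call the agent flow, then turn its text reply into
# structured JSON in your own code. Assumes the flow's prompt tells the
# model to respond with a JSON object; endpoint and field names are
# assumptions about a typical setup.
import json
from dataclasses import dataclass, field

import requests

FLOWISE_URL = "http://localhost:3000"   # assumed local instance
AGENTFLOW_ID = "your-agentflow-id"      # placeholder

@dataclass
class AnswerRecord:
    answer: str
    sources: list = field(default_factory=list)

def get_structured_answer(question: str) -> AnswerRecord:
    resp = requests.post(
        f"{FLOWISE_URL}/api/v1/prediction/{AGENTFLOW_ID}",
        json={"question": question},
        timeout=120,
    )
    resp.raise_for_status()
    raw_text = resp.json().get("text", "{}")
    data = json.loads(raw_text)  # raises if the model strayed from JSON
    return AnswerRecord(
        answer=data.get("answer", ""),
        sources=data.get("sources", []),
    )

print(get_structured_answer("Summarise the refund policy as JSON."))
```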
@nocturnalbreadwinner • a month ago
@leonvanzyl Thank you Leon, you're the GOAT!
@3ac-arts • 27 days ago
Great video! I'm facing an error when upserting with the Vector Store Retriever. Whether I use the In-Memory Vector Store, the Postgres vector store or Pinecone, the upsert method receives HTML in a JSON parser, and I couldn't figure out where this HTML response is coming from. The document splitter works normally, but as soon as I connect it to the vector store I get the error. Did you run into this while making the video, and if so, how did you solve it? Thanks for the video!
@santiagoghione9177 • 2 days ago
When you set Top K to 4 and it retrieves 4 chunks, are those 4 texts included in the context that is sent to the LLM?
@leonvanzyl • 2 days ago
Correct. The K value determines how many documents should be retrieved from the vector store. Those documents are then injected into the prompt along with the user question.
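Conceptually (this is not Flowise's actual source code), the retrieval step behaves like this sketch: fetch the K chunks, then inject them into the prompt next to the question:

```python
# Conceptual sketch (not Flowise's actual source code) of what Top K means:
# the K retrieved chunks are joined into a context block and injected into
# the prompt alongside the user's question.
def build_prompt(question: str, retrieved_chunks: list[str]) -> str:
    context = "\n\n".join(retrieved_chunks)  # the K documents from the vector store
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# With Top K = 4, exactly these four texts end up in the LLM's context window.
chunks = ["chunk 1 ...", "chunk 2 ...", "chunk 3 ...", "chunk 4 ..."]
print(build_prompt("What is a Document Store?", chunks))
```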
@TeamsWorkAI • a month ago
Thank you Leon. 🎉
@leonvanzyl • a month ago
You're welcome 🤗
@ClarissaGuscetti • 26 days ago
Thank you very much for your fantastic videos. One thing: I use Flowise and Ollama (llama3.2 and nomic-embed-text) in Docker (Ollama via NVIDIA), with both on the same network, and I use the IP address in the URL of the "ChatOllama" and "Ollama Embeddings" nodes in Flowise. But in the projects I have with a tool node (agent RAG), it always stops after the tool node with "Error buildAgentGraph - No tool calls found in the response." If I instead use ChatGPT as the chat and embeddings model, the system works. Do you know what could be causing this? That would be really nice, thank you very much.
@JorgeMachado88 • 4 days ago
So, how can we get 1B documents loaded incrementally? A separate process?
@AIMasterGuru • a month ago
Thanks so much Leo
@leonvanzyl • a month ago
You're welcome!
@Oliveira-wh1cq • 11 days ago
Hello, Léo. I have a prompt to create questions in ChatGPT, and I want to create a chatbot to be used on WhatsApp so that teachers can create questions just by messaging this bot. The teacher should only define the course and subject so that the prompt can be better directed. Is Flowise a good tool to develop this? Could you provide a tutorial?
@leonvanzyl • 11 days ago
Hey there. Flowise is perfect for this. In fact, Flowise can also generate other educational artifacts like charts and graphs. I already have a video on integrating Flowise with WhatsApp which you should check out 👍
@sitedev • a month ago
Can the vector store be accessed/maintained via an API?
@mikew2883 • a month ago
Great video! 👏
@leonvanzyl • a month ago
Thanks!
@rodrigorubio3498 • 29 days ago
Would be great to see this working with the "In-Memory Vector Store". Would that work?
@leonvanzyl • 29 days ago
I personally couldn't get it to work. Besides, Pinecone doesn't cost anything (free tier is generous) and it's production-ready.
@rodrigorubio3498 • 27 days ago
@leonvanzyl Thank you, mate.
@dimitriappel8374 • a month ago
If I add a URL to the knowledge base, is there a way to re-scrape it regularly (let's say, every day)? I didn't find this option.
@leonvanzyl • a month ago
It's not possible within Flowise itself (not yet, anyway), but I am working on a video showing how you can use the Flowise APIs to automate the "document store refresh" process using n8n or Make.
@Col-pd2zd • a month ago
I tried to upload multiple .docx files from a folder with the .docx loader. Is there a way to specify which content belongs to which document within the metadata, or do I have to load each file individually and manually enter its file name? Thanks a lot for the videos! They're helping a lot.
@leonvanzyl • a month ago
Excellent question. Unfortunately some of the loaders require that you manually capture the file name in the metadata. That's definitely the case with the Docx and PDF loaders.
@Col-pd2zd • a month ago
@leonvanzyl Maybe it will be available in the future, or I can find a workaround. Thanks for the reply, and thank you for the videos! I've been binging your Flowise uploads. Please keep them coming; I'm learning so much with your help.
@SuperLiberty2008 • a month ago
Wow, great video, thanks for your work. Could you advise: is there a Python SDK to do this in code? I want access control for certain users, and to assign those privileges using metadata.
@leonvanzyl • a month ago
Yes, Flowise offers a Python SDK. I'll create videos on both the Python and TypeScript SDKs soon.
@SuperLiberty2008 • a month ago
@leonvanzyl Can you share the docs, please?
@micbab-vg2mu • a month ago
Thanks!
@leonvanzyl • a month ago
You're welcome!
@shadowmonarch93 • a month ago
I'm getting an error while upserting, as if the Llama server isn't responding. The following error occurred when loading the page: "Status: 500 Error: documentStoreServices.insertIntoVectorStore - Error: documentStoreServices._insertIntoVectorStoreWorkerThread - Error: Request to Ollama server failed: 404 Not Found. Please retry after some time."
@LURASASA • a month ago
I'm getting a similar error, persistently. "Status: 500 Error: documentStoreServices.insertIntoVectorStore - Error: documentStoreServices._insertIntoVectorStoreWorkerThread - TypeError: fetch failed"
@shadowmonarch93 • 29 days ago
@LURASASA Well, I found the solution to my problem. In the embedding model section I had written "nomic embed text", but we have to specify the full model name; that's why the fetch failed. Type "ollama list" in the command line and copy the full name, "nomic-embed-text:v1.5" or something like that. That solved the issue.
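A minimal sketch for verifying the exact model name against a local Ollama server before pasting it into Flowise. It assumes the default Ollama port and uses Ollama's local HTTP API; adjust the host if yours differs:

```python
# Minimal sketch: confirm the exact embedding model name a local Ollama
# server exposes before entering it in Flowise. Uses Ollama's local HTTP API
# on the default port; adjust the host if yours differs.
import requests

OLLAMA_URL = "http://localhost:11434"

# List locally installed models (the same information as `ollama list`).
tags = requests.get(f"{OLLAMA_URL}/api/tags", timeout=30).json()
names = [m["name"] for m in tags.get("models", [])]
print("Installed models:", names)

# Sanity-check that the embedding model responds under its full name.
model_name = next((n for n in names if n.startswith("nomic-embed-text")), None)
if model_name:
    emb = requests.post(
        f"{OLLAMA_URL}/api/embeddings",
        json={"model": model_name, "prompt": "hello"},
        timeout=60,
    ).json()
    print(model_name, "returned a vector of length", len(emb.get("embedding", [])))
```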
@josephperkins8766 • a month ago
Please do a video showcasing the new API!
@leonvanzyl • a month ago
Will do!
@muhammadsaadaziz4485 • a month ago
I am facing an error while upserting to the vector store, related to blob size. I am using Llama 3.1 locally.
@leonvanzyl • a month ago
Yikes! How big is the file?
@muhammadsaadaziz4485 • a month ago
@leonvanzyl A simple URL of a LangSmith docs page, using the Cheerio web scraper.