This is really inspiring! Thank you for keeping us updated with the latest tech
@carlosf3421 · 2 days ago
Concise and straightforward. Excellent walkthrough and breakdowns. Thank you.
@thesaltyone4400 · 5 days ago
I've been working on this myself using your YouTube data as a base for Gemini, and now here you are delivering.
@GregPeters1 · 4 days ago
Good stuff, Nate. Also, I like the jazz background music for Sunday builds.
@jozzzzen · 4 days ago
Very nice tutorial, well explained, ty!
@vladimirrumyantsev7445 · 4 days ago
Thanks for such valuable content, you're the best, much appreciated 🤝👍
@nateherk · 4 days ago
Thank you!
@ClowBwas · 5 days ago
Great video Nate!
@nateherk · 5 days ago
Thank you!
@parthchandak7944 · 3 days ago
How do you deal with multiple files uploaded at once?
@MrDenisJoshua · 3 days ago
I wonder if it's possible to make a flow that uses PGVector to memorize important personal facts during conversations with the AI. I don't need the documents; I need the AI to decide whether a piece of information is important and then update PGVector with it, so it can retrieve it if I ask about it next year :-) For example, today in a chat session I tell the AI that I just bought a mobile phone, model XYZ... next year I want to ask the AI "which model of mobile phone do I have, and how long have I used it?" Is this possible (preferably locally)? Do you have a video close to what I'm asking? Thanks a lot
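That long-term-memory pattern is doable: give the agent a "remember this" tool that embeds a fact and inserts it into a PGVector table, then run a similarity search over stored facts at question time. A minimal self-contained sketch of the mechanics, where an in-memory list and a bag-of-words "embedding" are hypothetical stand-ins for the PGVector table and a real embedding model:

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model: a sparse bag-of-words vector.
    # In a real flow this would be an embeddings API call, with the
    # result stored in a PGVector column.
    return Counter(w.strip(".,?!").lower() for w in text.split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Stands in for a PGVector table of (fact, embedding) rows."""

    def __init__(self):
        self.rows = []

    def remember(self, fact):
        # An agent tool would call this only for facts the LLM judged
        # worth keeping, not for every message in the conversation.
        self.rows.append((fact, embed(fact)))

    def recall(self, query, k=1):
        q = embed(query)
        ranked = sorted(self.rows, key=lambda r: cosine(q, r[1]), reverse=True)
        return [fact for fact, _ in ranked[:k]]

store = MemoryStore()
store.remember("User bought a mobile phone, model XYZ, in 2024")
store.remember("User prefers answers in English")
print(store.recall("Which model of mobile phone do I have?"))
```

The same two operations map onto PGVector as an `INSERT` of `(fact, embedding)` and an `ORDER BY embedding <=> query_embedding LIMIT k` search, so it runs fully locally against Postgres.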
@fathin7480 · 4 days ago
How do you handle large unstructured documents that would require a large context window for accurate results? For example, would this workflow work effectively for a large text file containing exported chats from a WhatsApp group chat? Would I get an accurate result if I prompted the chat in the workflow you've shown to export all the names of people in JSON format?
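For a structured ask like "list everyone in the chat", retrieval over chunks tends to be unreliable because the answer is spread across the whole file; a plain parser is both cheaper and exact, leaving the LLM for the open-ended questions. A sketch, assuming the common WhatsApp export line format "date, time - Name: message" (the exact format varies by locale):

```python
import json
import re

# Assumed export format: "DD/MM/YY, HH:MM - Name: message".
LINE = re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}, \d{1,2}:\d{2} - ([^:]+): ")

def extract_names(export_text):
    # Collect unique sender names in order of first appearance.
    names = []
    for line in export_text.splitlines():
        m = LINE.match(line)
        if m and m.group(1) not in names:
            names.append(m.group(1))
    return json.dumps({"names": names})

sample = (
    "12/03/24, 14:05 - Alice: hey all\n"
    "12/03/24, 14:06 - Bob: hi Alice\n"
    "12/03/24, 14:07 - Alice: meeting at 3?\n"
)
print(extract_names(sample))  # {"names": ["Alice", "Bob"]}
```

In n8n the same idea fits in a Code node placed before the vector store, so the chunked text only has to carry the conversational content.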
@contractorwolf · 5 days ago
After your "limit" node, couldn't you just feed back into the original "downloading file" node instead of creating a new one?
@nateherk · 5 days ago
I had played around with this, but for simplicity's sake I split them up. I was running into issues trying to reference the file ID to put into the metadata in Supabase. I was playing around with operators like ?? and ||, but I wanted to keep this tutorial more straightforward. Great point though, thank you!
@noame · 5 hours ago
Is there a way to do the same job using Qdrant Vector Store? I can't find a node to delete a vector collection. Has anyone gone through this?
@nateherk · 2 hours ago
Great question. I am still exploring different vector database providers; hopefully I can make a video about Qdrant soon.
@noame · 2 hours ago
@nateherk Thank you, Nate, for the quality of this tutorial. Looking forward to the next ones!
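On the missing delete node: Qdrant exposes collection deletion through its REST API as an HTTP DELETE on `/collections/{name}`, so an n8n HTTP Request node can do it even without a dedicated Qdrant node. A sketch that builds (but does not send) the request, assuming Qdrant's default local address `localhost:6333` and a placeholder collection name:

```python
import urllib.request

def delete_collection_request(name, host="http://localhost:6333"):
    # Build the DELETE request for collection `name`; call
    # urllib.request.urlopen(req) to actually send it to a running Qdrant.
    return urllib.request.Request(f"{host}/collections/{name}", method="DELETE")

req = delete_collection_request("documents")
print(req.get_method(), req.full_url)
# DELETE http://localhost:6333/collections/documents
```

In an n8n HTTP Request node the equivalent is simply method DELETE with that URL.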
@ilias8884 · 4 days ago
Can we build multiple RAG AI agents in one workflow?
@nateherk · 4 days ago
What would be the use case? How would you want to interact with each one?
@gabrielmn89 · 4 days ago
Is this faster than just putting the Google Drive folder into the agent's AI tools so it can search the docs and answer questions based on the files?
@nateherk · 4 days ago
Yes, having data in a vector database will be much more efficient, especially as you continuously add more information.
@HugoCatarino · 5 days ago
If you indicated a budget of 0 instead of "we have no money", it would probably be more accurate.
@parthchandak7944 · 4 days ago
I keep getting this error when I run the SQL query on Supabase: ERROR: 42710: extension "vector" already exists
@nateherk · 4 days ago
This is likely because your table has already been created
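Worth noting the error itself is harmless: Postgres code 42710 just means `CREATE EXTENSION vector` has already run once on this project. The setup SQL becomes safe to re-run if that statement is made idempotent:

```sql
-- Safe to re-run: does nothing if the extension is already installed.
CREATE EXTENSION IF NOT EXISTS vector;
```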
@DIY4Profit · 4 days ago
What is the benefit of n8n over Make?
@plusone.network · 7 hours ago
1. Open source 2. Self-hostable 3. Community
@Nuetzt-ja-nix · 4 days ago
Just tested: 'file updated' also triggers on new files
@nateherk · 4 days ago
Try splitting them into two different workflows, or make sure everything is configured correctly within Google Drive.
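If the trigger hands over the Drive file's metadata, another workaround is to filter inside the workflow: the Drive v3 files resource carries both `createdTime` and `modifiedTime`, and for a brand-new file the two are (nearly) identical. A sketch of that filter, where the two-second tolerance is a guessed value:

```python
from datetime import datetime, timedelta

def is_genuine_update(meta, tolerance_s=2):
    # Treat the event as an update only if the file was modified
    # noticeably later than it was created; RFC 3339 "Z" timestamps
    # are normalized so fromisoformat can parse them.
    created = datetime.fromisoformat(meta["createdTime"].replace("Z", "+00:00"))
    modified = datetime.fromisoformat(meta["modifiedTime"].replace("Z", "+00:00"))
    return modified - created > timedelta(seconds=tolerance_s)

new_file = {"createdTime": "2024-06-01T10:00:00Z",
            "modifiedTime": "2024-06-01T10:00:01Z"}
edited_file = {"createdTime": "2024-06-01T10:00:00Z",
               "modifiedTime": "2024-06-02T09:30:00Z"}
print(is_genuine_update(new_file), is_genuine_update(edited_file))  # False True
```

In n8n this maps onto an IF node comparing the two timestamp fields right after the "file updated" trigger.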
@sleepless-nite · 5 days ago
"Got even better," I thought n8n got some updates or something. Lol 😅 How are the hallucinations when you have complex data?
@nateherk · 5 days ago
From my POV, they got much better, as I hadn't explored this method before 😅 Hallucinations always happen, but that's where iterating and refining come into play; a huge percentage of your time should be spent there when building out workflows.
@francisco.garcia · 4 days ago
What's the monthly cost of running an efficient RAG system like this?
@nateherk · 4 days ago
It's really going to depend on how much data you have; it's hard to give a ballpark without knowing more.
@BriansRoar · 2 days ago
It'd be helpful if you outlined the total monthly cost of the basic tech stack needed to run a small business with a manager agent that uses 4+ support agents.