Think you have what it takes to build an amazing AI agent? I'm hosting an AI Agent Hackathon where you can prove it and win cash prizes! Register now for the oTTomator AI Agent Hackathon with its $6,000 prize pool! It's absolutely free to participate, and it's your chance to showcase your AI mastery to the world: studio.ottomator.ai/hackathon/register
@RajBiswal_Films · 15 hours ago
Your channel is a goldmine for someone with a technical background who wants to get into the nitty-gritty of using AI in practical life!
@ColeMedin · 2 hours ago
Thank you - that means a lot!
@JSambrook · 12 hours ago
Strong work, Cole. You really do a great job with your videos. Just the right pace, and interesting projects.
@ColeMedin · 2 hours ago
Thank you very much! This is the longest video I've ever put out on my channel, so I worked extra hard to keep a good pace. I appreciate you calling it out!
@mitchellrcohen · 16 hours ago
Dude, I literally watched one of your old videos on this topic today, so I'm happy you're covering this. Thank you, appreciate you.
@ColeMedin · 16 hours ago
Glad it's perfect timing haha! You bet Mitchell!
@Xactant · 15 hours ago
Great content, I have been binge-watching your channel over the last few days. Very informative. As a developer, this has become a go-to channel for AI learning.
@ColeMedin · 15 hours ago
Glad to hear it - thank you very much!
@MrVohveli · 6 hours ago
Funnily enough, this is more or less the architecture for all AI agent systems: a processing agent of some kind looks at the query and directs the actions to the relevant systems, which then return a response that is given to the LLM to pass on to the user. You can feed your Azure infra's Log Analytics to an agent and have it monitor and repair your systems, for example: all you need is an agent that looks at the system to determine which part is down and which agent to instruct to attempt a repair, while a timer runs down on a verification agent that checks whether the repair succeeded or escalates to a human, and so on. The structure is identical to this one.
@ColeMedin · 1 hour ago
Yes very true! I think a specific example of how to architect agents well with "agentic RAG" resonates with people more and makes the concept clear, but you are certainly right that this kind of solution and agentic reasoning is really the foundation for any agent.
@MikeBtraveling · 8 hours ago
Your teaching style is the best, thank you for sharing!
@ColeMedin · 2 hours ago
That means the world, thank you! My pleasure :)
@ahmedd.masoud6809 · 13 hours ago
As usual, a fruitful and informative video. Thank you Cole, keep going bro.
@ColeMedin · 2 hours ago
Thank you very much! I certainly am not going to stop :)
@kacklion4103 · 16 hours ago
This is what I've been waiting for all day
@WTPleo · 9 hours ago
Thank you for what you do man. These videos are more than useful!
@ColeMedin · 2 hours ago
You are so welcome! I appreciate it!
@g4a5 · 16 hours ago
This is so helpful, Cole. Waiting for you to cover the RAG agent with SQL queries.
@ColeMedin · 16 hours ago
Glad you found it helpful! Yes, I'm planning on doing that soon and it'll be very similar to this. In fact, this essentially already is a RAG agent with SQL queries!
@g4a5 · 15 hours ago
What I was suggesting is an agent having a tool that searches the user's question in a relational database and gives an answer. This would mean the agent/tool needs to convert the question into a SQL query to fetch the relevant data and feed it to the LLM. This is required for most B2B use cases where data is stored in tables.
@ColeMedin · 1 hour ago
Gotcha! Yeah, I did exactly this for a client once; definitely going to share a version of that implementation on my channel!
@okich76 · 1 hour ago
Rare, no-bullshit channel, thank you Cole!
@ColeMedin · 45 minutes ago
Haha I appreciate it, you bet!
@vincentmayer7517 · 7 hours ago
Bro, this is incredibly valuable! Big shout out to you for all this free content! Create a Skool and I'll be your first disciple 🙏
@ColeMedin · 1 hour ago
Thank you very much! I don't have a Skool but I do have a community over at thinktank.ottomator.ai :)
@amerrashed6287 · 8 hours ago
Awesome. Does n8n support the agentic RAG approach?
@ColeMedin · 2 hours ago
Thank you and yes it does! Essentially for agentic RAG in n8n you can include the usual "vector store retrieval" tool for your agent and also define other tools like I did in this video to explore the knowledge in other ways.
@tecnopadre · 3 hours ago
You are taking off like a 🚀
@ColeMedin · 1 hour ago
Haha I appreciate it! :D
@emprezariotv · 1 hour ago
Keep up the freaking great work Cole!
@ColeMedin · 45 minutes ago
Thank you, I appreciate it!
@MohammedDaba-vb1ws · 14 hours ago
Why didn't basic RAG retrieve the whole weather example? When you split the chunks, the function should have accounted for keeping whole code blocks together, and I think the entire example should have been in one chunk, easy to retrieve and spit out using just basic RAG. I know agentic is a lot better, but optimizing basic RAG would help agentic too, I guess, since it uses splitting as a building block as well.
@ColeMedin · 2 hours ago
Fantastic question! Honestly I was wondering that myself. I checked the database and confirmed that the weather agent code block is maintained in a single chunk, and that the retrieval isn't grabbing that chunk so it isn't like the LLM is ignoring it. Hard to say exactly what the problem is, and if I optimized my setup (different chunking, better RAG with query expansion, etc.) I'm sure I could eventually get to the point where it could pull the full example. Agentic RAG is just one solution to make it more robust but certainly the easiest in my mind!
@laurenjones5081 · 11 hours ago
Yes! I've had successes, and I've had a lot of pitfalls. I think I've had more unsuccessful runs than successful ones, but I'm getting there.
@benixmaximus · 12 hours ago
Your desk setup looks so good. It would be great if you made a video on it. If not, did you buy the desk somewhere, or did you build/customize it yourself? And if you built it, how? :D
@ColeMedin · 2 hours ago
I hate to break it to you, but the background is not real! I generated it with AI, and I use a tool called NVIDIA Broadcast to put it behind me without even needing a green screen.
@RockyTheDog756 · 2 hours ago
Awesome content, it's so helpful on the AI learning curve! Thanks a lot!
@ColeMedin · 1 hour ago
Thank you very much! You bet!
@ep4r4 · 54 minutes ago
Hi Cole, I'd like to suggest that you reconsider enabling automatic translations on your videos. For many of us, myself included, they make your content much more accessible and easier to follow, especially when we need to pay attention both to what you explain and to what you show on screen. When I asked other English-speaking content creators, they confirmed that disabling them is a personal decision, but doing so can give the impression that the Spanish-speaking audience isn't valued as much. I really appreciate your work and hope you take this as constructive criticism and a positive push toward change.
@throbbinhooddd · 8 hours ago
Unfortunately I had to give up due to OpenAI rate limit issues. Tried using the time library, reducing chunk size, and a new API key, but I'm still getting rate limit errors on Tier 1 usage. I might come back to it after going Tier 2 to see if that helps. It's my first time forking a repo, so it could totally be something I'm doing, but I don't see what! Awesome video though man, I truly believe this is the way of the future.
@ColeMedin · 1 hour ago
Thank you for the kind words! Sorry you are hitting rate limit issues though. My suggestion would be to try OpenRouter instead of OpenAI for the LLM; their rate limits are much more generous right out of the gate. You'd still have to use OpenAI for the embedding model though, since OpenRouter doesn't offer embedding models.
@throbbinhooddd · 1 hour ago
Awesome, I'll give that a go; hadn't thought of trying OpenRouter
@ColeMedin · 1 hour ago
Sounds great! Yeah OpenRouter is fantastic
@mikew2883 · 6 hours ago
Awesome tutorial! 👏 Quick question: is Qdrant's faster semantic search a big enough benefit to introduce a hybrid model where you use both, with Supabase keeping a reference column to a Qdrant vector store that handles the vector search?
@ColeMedin · 1 hour ago
Thank you and good question! Though I might need a bit of clarification. In my mind I can't really see how a specific column in Supabase would point to a Qdrant vector store. If you have multiple Qdrant vector stores to perform RAG with, I would just set those up as separate tools for the agent right in the code instead of making the agent go to Supabase to first find the Qdrant vector store to use. I suppose though that if you really do have dozens of Qdrant vector stores for some reason, it would be more scalable to maintain that list in Supabase instead of having it hardcoded in your script!
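If you did end up with several Qdrant collections, a rough sketch of what I mean by separate tools could look like this (assuming the qdrant-client library and Pydantic AI's tool decorators; the collection names and the embed() helper are just placeholders, not code from the video):

```python
# Sketch only: collection names and embed() are placeholders, not code from the video.
from pydantic_ai import Agent
from qdrant_client import QdrantClient

qdrant = QdrantClient(url="http://localhost:6333")
agent = Agent("openai:gpt-4o", system_prompt="Answer using the retrieval tools.")

def embed(text: str) -> list[float]:
    raise NotImplementedError("Plug in your embedding model here.")

@agent.tool_plain
def search_product_docs(query: str) -> str:
    """Retrieve chunks from the product documentation collection."""
    hits = qdrant.search(collection_name="product_docs", query_vector=embed(query), limit=5)
    return "\n\n".join(hit.payload["content"] for hit in hits)

@agent.tool_plain
def search_support_tickets(query: str) -> str:
    """Retrieve chunks from the support tickets collection."""
    hits = qdrant.search(collection_name="support_tickets", query_vector=embed(query), limit=5)
    return "\n\n".join(hit.payload["content"] for hit in hits)
```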
@leonardogrig · 16 hours ago
Great job! Watching with enthusiasm. Agentic RAG is an absolute trend right now; I'm just skeptical of leaving important tasks to the LLM (AI agent).
@ColeMedin · 16 hours ago
Thanks man! Yeah, I totally get the skepticism - I share it too. The best AI agents are ones that assist you by saving you time without you having to trust them completely. For example, an agent that drafts replies to your emails without actually sending them, so you can review them first.
@oceanheartai · 11 hours ago
@@ColeMedin 100% with you on this. We want to *combine* human and machine intelligence. Big thank you for all your work in this space, Cole. Your passion and dedication shine through in every video, and you've signed yourself up for the really hard stuff like community building too. See you on competition day ;)
@ColeMedin · 1 hour ago
Thank you very much, that means the world to me! Can't wait to see your submission man!
@bangkitsanjaya3773 · 7 hours ago
Please 😢, how do I retrieve from the vector database in Supabase? It always fails: the Supabase node succeeds, but the AI Agent doesn't know how to answer.
@ColeMedin · 1 hour ago
Hmmm... have you checked to see what is retrieved from RAG? Maybe the wrong context is being fetched which is why it seems the AI Agent doesn't know how to answer?
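If you want to check from Python, a quick sketch like this prints exactly what the vector search returns (assuming the standard match_documents RPC from Supabase's pgvector template; your function, table, and key names may differ):

```python
# Debugging sketch: eyeball what retrieval returns before blaming the agent.
from openai import OpenAI
from supabase import create_client

openai_client = OpenAI()
supabase = create_client("https://YOUR_PROJECT.supabase.co", "YOUR_SERVICE_KEY")

query = "How do I build a weather agent?"
embedding = openai_client.embeddings.create(
    model="text-embedding-3-small", input=query
).data[0].embedding

rows = supabase.rpc(
    "match_documents", {"query_embedding": embedding, "match_count": 5}
).execute()

for row in rows.data:
    # If these chunks look irrelevant, the problem is retrieval, not the agent.
    print(row.get("similarity"), row.get("content", "")[:200])
```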
@ace.1type8z8 · 5 minutes ago
Thank you Cole, you're the best
@zacha.711 · 15 hours ago
Very interesting video, thanks! I have a little question for you: why do you give the LLM the URL for the page instead of the markdown text you saved from the initial crawl? Is there an advantage to doing so? I guess it's to avoid having to constantly update your markdown, since the URLs will always point to the latest information and it can "re-crawl" if need be, but I was curious whether there were other elements in your thinking. Thanks! :)
@ColeMedin · 15 hours ago
Thank you and great question! So when I give the URLs to the agent, it doesn't actually use the URL to visit the site in realtime. It just uses the markdown I have stored in the database. It simply uses the URL to determine if the content is relevant to the user's question, sort of like a title, but I was thinking URLs give extra context through the path - it speaks to how the page relates to the rest of the documentation, if that makes sense. But your thinking is also spot on: we could have the agent pull the latest information in realtime with the URL if we wanted!
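To illustrate (a sketch with hypothetical table and column names, not the video's exact schema): each chunk row keeps its source URL, and a simple tool can hand the agent the distinct URLs like a table of contents:

```python
# Hypothetical schema for illustration: each chunk carries its source URL.
from supabase import create_client

supabase = create_client("https://YOUR_PROJECT.supabase.co", "YOUR_SERVICE_KEY")

chunk_record = {
    "url": "https://ai.pydantic.dev/agents/",  # the path itself hints at the topic
    "chunk_number": 3,
    "content": "...markdown for this chunk...",
    "embedding": [0.012, -0.034],  # truncated for illustration
}

def list_documentation_pages() -> list[str]:
    """Give the agent a 'table of contents': every distinct URL in the knowledgebase."""
    rows = supabase.table("site_pages").select("url").execute()
    return sorted({row["url"] for row in rows.data})
```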
@christianbuttner9793 · 7 hours ago
Can I create a Vue 3 and Nuxt 3 agent with this now and combine it somehow with Cursor, so that Cursor uses Claude Sonnet and the agent to code my requirements?
@ColeMedin · 1 hour ago
GREAT question! You certainly can, by putting this agent behind an OpenAI-compatible API endpoint and setting that up in Cursor. Something I am going to explore more soon and probably create a video on!
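For anyone curious about the shape of that, here's a minimal sketch of an OpenAI-compatible endpoint (run_agent() is a placeholder for invoking the agent; the response follows OpenAI's chat completions format):

```python
# Sketch of an OpenAI-compatible wrapper so tools like Cursor can call the agent.
import time

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    model: str
    messages: list[dict]

async def run_agent(messages: list[dict]) -> str:
    # Placeholder: invoke your Pydantic AI agent here and return its text reply.
    return "agent reply goes here"

@app.post("/v1/chat/completions")
async def chat_completions(req: ChatRequest):
    answer = await run_agent(req.messages)
    return {
        "id": "chatcmpl-local",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": req.model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": answer},
            "finish_reason": "stop",
        }],
    }
```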
@Steve.Goldberg · 3 hours ago
How would you set up Crawl4AI hosted so you can interact with it through webhooks or HTTP requests in n8n workflows?
@ColeMedin · 1 hour ago
Great question! And your head is certainly in the right place for how to leverage this in n8n! I would use FastAPI to create a Python endpoint around whatever Crawl4AI logic you want. The "payload" for the API could be a specific page you want to crawl or a list of pages. Then the API can either return the contents of the crawled page(s), or just put the knowledge in a database like I do and return a success code. For hosting this endpoint, I'd recommend DigitalOcean. I'll probably be making a video on this soon! A lot of people want to use this with n8n.
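As a preview, the skeleton could look something like this (a minimal sketch assuming Crawl4AI's AsyncWebCrawler; the database insert is left out for brevity):

```python
# Sketch of a Crawl4AI endpoint that n8n can call with an HTTP Request node.
from crawl4ai import AsyncWebCrawler
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CrawlPayload(BaseModel):
    urls: list[str]

@app.post("/crawl")
async def crawl(payload: CrawlPayload):
    pages = {}
    async with AsyncWebCrawler() as crawler:
        for url in payload.urls:
            result = await crawler.arun(url=url)
            pages[url] = result.markdown  # or insert into Supabase here and return a status
    return {"status": "ok", "pages": pages}
```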
@AdamPaulTalks · 15 hours ago
Agentic is like a cheat code
@Queracus · 7 hours ago
What about local PostgreSQL for the database and a local LLM?
@ColeMedin · 1 hour ago
You can certainly tweak this solution to use both! For example you could host Supabase locally (for Postgres) and run an LLM through Ollama. For the LLM you'd just have to change the "base URL" for the OpenAI client to point to Ollama: ollama.com/blog/openai-compatibility And for the Pydantic AI agent, Pydantic AI supports Ollama: ai.pydantic.dev/api/models/ollama/
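For the OpenAI client side, the change really is just the base URL; a minimal sketch (llama3.1 is just an example of a model you'd have pulled locally):

```python
# Sketch: point the OpenAI client at Ollama's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama server
    api_key="ollama",                      # required by the client, ignored by Ollama
)

response = client.chat.completions.create(
    model="llama3.1",  # any model you've pulled with `ollama pull`
    messages=[{"role": "user", "content": "Hello from a local LLM!"}],
)
print(response.choices[0].message.content)
```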
@WhyitHappens-911 · 9 hours ago
I would enrich it with a twin knowledge base for the LangGraph docs and we would be set up to have the best AI agent assistant! How would you do it?
@ColeMedin · 2 hours ago
YES, LangGraph and Pydantic AI are an incredible combo! This fits very well into agentic RAG - we can ingest the LangGraph documentation just like we did with Pydantic AI using Crawl4AI. They have a sitemap.xml as well: langchain-ai.github.io/langgraph/sitemap.xml What we can do is set the metadata field "source" to "pydantic_ai" for the Pydantic AI docs and "langgraph" for the LangGraph docs. Then we can create separate RAG tools for our agent that search specifically through each set of docs in the knowledgebase, using the metadata to filter. That way the agent won't get confused between the frameworks but can still search through both and combine them to create agents on our behalf.
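Roughly sketched, the two-tool setup could look like this (the match_documents RPC and embed() helper are placeholders, not the exact code from the video):

```python
# Sketch only: one knowledgebase, a "source" metadata field, one tool per framework.
from pydantic_ai import Agent
from supabase import create_client

supabase = create_client("https://YOUR_PROJECT.supabase.co", "YOUR_SERVICE_KEY")
agent = Agent("openai:gpt-4o")

def embed(text: str) -> list[float]:
    raise NotImplementedError("Plug in your embedding model here.")

def _search(query: str, source: str) -> str:
    rows = supabase.rpc("match_documents", {
        "query_embedding": embed(query),
        "match_count": 5,
        "filter": {"source": source},  # metadata filter keeps the frameworks apart
    }).execute()
    return "\n\n".join(row["content"] for row in rows.data)

@agent.tool_plain
def search_pydantic_ai_docs(query: str) -> str:
    """Search only the Pydantic AI documentation."""
    return _search(query, source="pydantic_ai")

@agent.tool_plain
def search_langgraph_docs(query: str) -> str:
    """Search only the LangGraph documentation."""
    return _search(query, source="langgraph")
```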
@WhyitHappens-911 · 2 hours ago
@ColeMedin Was not expecting a tutorial already! Thanks for the quick advice! I'll work on it
@soundlab4831 · 10 hours ago
Thank you, excellent presentation!
@ColeMedin · 2 hours ago
You bet - thank you very much!
@sreerag4368 · 9 hours ago
Hey, if I have around 20-30 docs from websites like these, how should I store them? Should I store them in a single table, or break them down into multiple tables?
@ColeMedin · 2 hours ago
Great question! I'd recommend sticking to one table for simplicity, and then setting a metadata field for the website the record is from. Very similar to what I do for the "source" metadata field in the video. Then when you query the knowledgebase and you only want to query from one website, you just include that metadata filter in your query!
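A quick sketch of the one-table pattern (table and column names are illustrative, and the embedding is omitted to keep the focus on the metadata filter):

```python
# Sketch: every chunk carries a metadata field naming its website.
from supabase import create_client

supabase = create_client("https://YOUR_PROJECT.supabase.co", "YOUR_SERVICE_KEY")

# Ingest: tag each chunk with the site it came from.
supabase.table("site_pages").insert({
    "url": "https://docs.site-a.com/setup",
    "content": "...chunk text...",
    "metadata": {"source": "site_a"},
}).execute()

# Query: restrict retrieval to one website via the JSONB metadata column.
rows = (
    supabase.table("site_pages")
    .select("url, content")
    .contains("metadata", {"source": "site_a"})
    .execute()
)
```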
@adanpalma4026 · 6 hours ago
Great content. I have a question about the base of that whole architecture's query pipeline: what about the quality of parsing complex PDFs, and how good is Pydantic's parsing? Because no matter what agents and tools you have, if your document is parsed badly, the whole architecture falls down. What do you think?
@ColeMedin · 1 hour ago
Thank you! For parsing PDFs, I would create a custom solution that you would bake into this process. You wouldn't use Pydantic to parse the PDFs, you would use some PDF library like: pypdf.readthedocs.io/en/stable/
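For example, a minimal pypdf extraction looks like this (note that scanned PDFs have no text layer and would need OCR instead):

```python
# Minimal pypdf sketch: extract text from a PDF before chunking and embedding it.
from pypdf import PdfReader

reader = PdfReader("document.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)
print(f"Extracted {len(reader.pages)} pages, {len(text)} characters")
```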
@exo469 · 3 hours ago
@ColeMedin I have an issue with your last video: when I try to use Ollama instead of GPT, I can create the embeddings and store them, but I get stuck when using Ollama to retrieve the info.
@ColeMedin · 1 hour ago
Could you clarify what you are stuck with? :)
@VickySingh-s5p · 4 hours ago
You are a star ⭐
@Sathias_ · 12 hours ago
Enumerate Chunks is the name of my funk/metal band
@Daser700 · 4 hours ago
Can we add this to our n8n? It would be perfect if we could do this.
@ColeMedin · 1 hour ago
Since this is all Python code you can't add it directly into n8n unfortunately. But what you could do is create an API endpoint around any of the Crawl4AI logic to scrape websites and use that as a tool in n8n through the HTTP request node! You could also use n8n to create the AI agent side of this equation. So you would still have Python code scraping sites and creating the knowledgebase, but then your n8n agent could be what actually leverages the knowledge in Supabase using the vector store retrieval tool.
@foreverindependant6916 · 1 hour ago
Is there a way to take that data (crawled thanks to Crawl4AI) and easily feed it to a ChatGPT agent I created? (I'm a no-coder, that's why this use case is interesting for me.)
@ColeMedin · 1 hour ago
Did you create a GPT Assistant? Is that what you mean by ChatGPT agent? If you follow this video to create a knowledgebase in Supabase using what you scrape with Crawl4AI, you could create a custom tool for your OpenAI assistant to query that knowledgebase!
@kofiadom7779 · 5 hours ago
This is fantastic!
@ColeMedin · 1 hour ago
Thank you! :D
@shehanazshaik3261 · 14 hours ago
Great video!! You've used the OpenAI API. Can you use the Gemini API for it, please?
@ColeMedin · 2 hours ago
Thank you! Swapping over to Gemini would be super easy! You could even use Gemini through OpenRouter so the only thing you'd have to change is the "base URL" and API key for creating the OpenAI client. I actually do that in this video to use DeepSeek but you could do the same for Gemini with OpenRouter: kzbin.info/www/bejne/sJfCdWV7lsupoZI
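Concretely, the swap is just this (check OpenRouter's model list for the current Gemini slug; the one below is only an example):

```python
# Sketch: use Gemini through OpenRouter by changing the base URL, key, and model.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",
)

response = client.chat.completions.create(
    model="google/gemini-flash-1.5",  # example slug; see OpenRouter's model list
    messages=[{"role": "user", "content": "Summarize agentic RAG in one sentence."}],
)
print(response.choices[0].message.content)
```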
@Tomusicful · 3 hours ago
Is this exactly how Cursor Docs works?
@ColeMedin · 1 hour ago
I'm guessing it is similar but hard to tell without looking into the exact implementation. I doubt they are using Crawl4AI though!
@vsakaria · 3 hours ago
Good man!
@waldowalden7379 · 1 hour ago
Thanks!
@ColeMedin · 1 hour ago
Of course! Thank you for your support - that means a lot to me!
@MassiouenBekhtiar · 16 hours ago
Thanks for the video!
@ColeMedin · 16 hours ago
You bet!
@RaghavSoni-g2i · 4 hours ago
Hello guys, I have more than 10k PDFs and I want to create a RAG chatbot with them. Can anyone suggest the best way to do this? All the PDFs contain legal data.
@ColeMedin · 1 hour ago
I would suggest using a Python library like pypdf to get the text from each PDF! And then you can really follow the rest of this video to create a knowledgebase and AI agent around it!
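A rough sketch of that bulk loop (chunk_and_store() is a stand-in for the chunking/embedding pipeline from the video, and the folder name is hypothetical):

```python
# Sketch: walk a folder of PDFs and feed each one into the ingestion pipeline.
from pathlib import Path

from pypdf import PdfReader

def chunk_and_store(doc_id: str, text: str) -> None:
    # Placeholder: split into chunks, embed, and insert into the knowledgebase.
    pass

for pdf_path in Path("law_pdfs").glob("**/*.pdf"):
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    chunk_and_store(pdf_path.stem, text)
```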
@rkaid7 · 6 hours ago
Hasn't the first 30s of this video negated all trust from your previous RAG vids, bruv?
@ColeMedin · 1 hour ago
Fair question! I wouldn't say so. Basic RAG (non-agentic) can often be underwhelming, but for simpler use cases, and just as a way to get started, it's still great. Really, in this video I'm just building on what I've already covered in others!
@Ravinderi87 · 7 hours ago
I don't know how to code 😢
@ColeMedin · 1 hour ago
Anything I could elaborate on more to make the code clearer? :)
@waldowalden7379 · 15 hours ago
Hey dude... do you have any training, etc.? I'd like to check it out. I don't have time to be on forums; I'd prefer a training course. Congrats on the content.
@ColeMedin · 2 hours ago
Thanks man! I don't have courses yet but I am going to work on my first couple very soon! I'll of course be posting about it on my channel when the time comes.
@waldowalden7379 · 1 hour ago
@@ColeMedin Do it, please. You have your first client.
@ColeMedin · 1 hour ago
Wow I appreciate it!
@Timers124 · 16 hours ago
Thank you! I'm trying to make an LLM/agent and it thinks it's a human.
@ColeMedin · 16 hours ago
You're welcome! That's crazy haha, how so?
@Timers124 · 16 hours ago
I think Bolt messed up the code (now I DIY 😁)
@MichaelReitterer · 15 hours ago
Some great content, but your hands make me feel dizzy.
@ColeMedin · 15 hours ago
Thanks Michael and I appreciate the feedback. I assume you mean in the intro?
@MukulTripathi · 15 hours ago
😂 I didn't even notice till I saw this comment. And now I can't unsee it. Great content regardless!