First video I have ever seen that actually breaks down the abstraction levels into things you can understand! Great tutorial
@airoundtable (22 days ago)
Thanks! I am glad you enjoyed the video
@TooyAshy-100 (2 months ago)
Thank you, Farzad! Your channel has been amazing. The way you explained combining RAG with SQL Agents for large database automation was super helpful!
@nicolassuarez2933 (a month ago)
Outstanding! Best of its kind :)
@terryliu3635 (2 months ago)
Amazing channel!! This is exactly what I need for my POC. Thanks, Farzad! -- Terry from Calgary!
@zkiyyeller3525 (2 months ago)
Thank You, Farzad!
@fullstackailab (2 months ago)
Great video, thanks Farzad!
@alexramos587 (a month ago)
Great video! Subscribed.
@anandukc4709 (2 months ago)
Hi sir, your videos are awesome. I would suggest making a video on conversational RAG with chat history trimming, plus integration of function calling, LangGraph, etc., because in companies we need these types of applications.
@airoundtable (2 months ago)
Thanks. In this video I explained function calling and how to design graphs using LangChain and LangGraph. I also briefly discussed adding memory to the system, but it does not cover trimming.
@yazanrisheh5127 (2 months ago)
Hey Farzad, I love this video and this is by far the best chat-with-database video out there. I've actually understood every bit of it; however, I have one small doubt. What's the difference between using an agent and using an LLM that is bound with tools (llm.bind_tools)?
@airoundtable (2 months ago)
Thanks. Glad to hear it. In general, LLM agents are responsible for making decisions. The decision can be to call a tool, or it can be to choose the right agent. So an LLM that is bound with tools is also considered an agent.
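A minimal sketch of that idea (the tool and model here are illustrative, not taken from the video):

    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI

    @tool
    def get_table_names() -> list[str]:
        """Return the table names of the travel database (illustrative stub)."""
        return ["aircrafts_data", "flights", "bookings"]

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    llm_with_tools = llm.bind_tools([get_table_names])

    # The tool-bound LLM acts as an agent: it decides whether to answer
    # directly or to emit a tool call, which your code then executes.
    msg = llm_with_tools.invoke("Which tables exist in the travel database?")
    print(msg.tool_calls)  # e.g. [{'name': 'get_table_names', 'args': {}, ...}]

The only difference from a full agent loop is that you are responsible for executing the returned tool call and feeding the result back to the model.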
@ghbett (2 months ago)
Thank you for all the videos. It would be great if you could add deployment. Thanks again.
@aireescreates (5 days ago)
Hi Farzad. Thanks for this video. I just want to ask if this is the video you’re referring to in your GitHub repo “LangGraph_1o1_Agentic_Customer_Support”? The YT link in the README doesn’t work. Thanks
@airoundtable (2 days ago)
Hi, the repo for this video is here: github.com/Farzad-R/Advanced-QA-and-RAG-Series/tree/main/AgentGraph-Intelligent-Q%26A-and-RAG-System
@juanmanuelzwiener4447 (2 months ago)
Excellent video tutorial, thank you very much. It is very helpful because I was building an agent with similar capabilities. To work with databases that have high cardinality, I used the example in the LangChain documentation where a similarity retrieval tool is created. I would like you to make a video about those cases.
@airoundtable (2 months ago)
Thank you. Great suggestion. I'll keep it in mind for future videos
@juanmanuelzwiener4447 (2 months ago)
@airoundtable Thanks, that would be very useful. I would also like you to make RAG with image vision.
@stanTrX (a month ago)
14:07 Thanks. What if I want to query a DB but, how to say, look for similar data in that DB? So not regular SQL, but some interpretation, like cleaning or fuzzy searching as well, using an LLM.
@airoundtable (a month ago)
I am not sure what you meant exactly. But in the next video, I will present a project that can not only read from but also write to a database. However, I will only briefly mention that part since the main objective of the video is something else, but you will have access to the full code.
@stanTrX (a month ago)
@airoundtable Thank you. What I meant was some kind of similarity problem between DB records, such as "abc company", "abc inc.", "a bc company", etc., which all mean the same thing but contain typos. I want the LLM to find such anomalies for me :)
@anandukc4709 (25 days ago)
Hi Farzad, in your opinion, which is the best agentic framework when it comes to production: LangGraph, CrewAI, or AutoGen?
@airoundtable (25 days ago)
Hi, you cannot fully trust any of these frameworks right now. All of them still have a long road ahead to become fully production-ready. That being said, if a good team is behind the project, they can keep it up to date and adjust to the new changes. I personally would choose LangGraph. IMO it has a bigger community and the team behind it is doing a great job. (Needless to say, both CrewAI and AutoGen are great frameworks as well, but I personally lean towards LangGraph.)
@DeepakRaviKumar-bp7in (2 months ago)
Hi Farzad. May I know what tool you use to create flow diagrams?
@airoundtable (2 months ago)
PowerPoint, draw.io, and eraser.io
@yazanrisheh5127 (2 months ago)
I have 2 more questions: 1) Can we use any DB, like Postgres or MySQL, etc.? 2) Is there a way to limit the chat history that we pass to the LLM, where instead of passing all of it, we pass only the 5 or 10 most recent messages? This way we can guarantee we won't exceed the context window and have our application break at some point.
@airoundtable (2 months ago)
1. Yes. 2. Yes, there is a way. You can either design a custom chat history and pass it to the LLM, or you can use LangChain's memory just like the one we used in the video. And you can control both of them in terms of how many Q&A pairs should be passed to the LLM.
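For the trimming part, a minimal sketch with plain Python slicing (independent of any particular memory class):

    def trim_history(chat_history: list[tuple[str, str]], max_pairs: int = 5) -> list[tuple[str, str]]:
        """Keep only the last `max_pairs` (question, answer) pairs."""
        return chat_history[-max_pairs:]

    # Usage: pass only the trimmed history to the LLM on every turn.
    chat_history = [("Q1", "A1"), ("Q2", "A2"), ("Q3", "A3"), ("Q4", "A4"),
                    ("Q5", "A5"), ("Q6", "A6"), ("Q7", "A7")]
    recent = trim_history(chat_history, max_pairs=5)  # last 5 pairs only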
@husseinhaidar2712 (2 months ago)
Thanks man for these videos, very professional!
@terryliu3635 (2 months ago)
Hi Farzad, what is the purpose of the pair of ** around Music, Business, and ALL in the system message definition under Strategy B? Also, I found the results are not always the same: if I click the invoke function multiple times without changing any code, sometimes the result returns all 6 table names as expected and sometimes only 3... I'm concerned that consistency of the results will be a problem when we need to apply this to a real-life use case. Thoughts? Thanks.
@airoundtable (2 months ago)
Hello. When I write the system message, I usually tend to use markdown syntax. It is both for better human understanding and because LLMs understand markdown, so I assume this sends them a more organized message. If the second strategy is not stable enough, go with the third strategy.
@RoyMathew-h1g (19 days ago)
Hi Farzad, can you please tell me how I can get plots of different chart types from the response, like pie charts, bar plots, etc.? Or could you make an upgraded version of this video adding these plotting features?
@airoundtable (19 days ago)
This project cannot plot and it needs some upgrades for that. It is just another agent being added to the system. In the video I explained how to design and add agents to different systems. I will keep it in mind and try to address it in another video.
@RoyMathew-h1g (19 days ago)
@airoundtable Thanks. I think I can use PandasAI and convert it into a tool to get these plots, right? Or design a custom tool with seaborn or matplotlib...
@stanTrX (a month ago)
1:32:00 Very good, thanks. Is this setup able to find something in the DB and then search the web according to the output of that query? Or do you need to alter the code to do so?
@airoundtable (a month ago)
I am glad you liked the video. No, it cannot do that right now. The code needs to be modified.
@aestheticmusic5512 (2 months ago)
Hey, how should we approach a use case where we need to combine knowledge from multiple sources before generating a SQL query? For example, in this case we are finding info from multiple sources based on the query, but what if we need a system where the vector DB's info is dependent on the SQL DB, which helps the agent generate better queries?
@airoundtable (2 months ago)
You can still design systems where the vector DB's search result has an impact on the SQL query. But to give a more detailed strategy on the system design, much more information is needed. For instance, the type of relationship (is it linear or does it have an asynchronous effect) is one of the factors.
@RoyMathew-h1g (a month ago)
Hi sir, can we use the checkpoint memory in production? Please reply. If we can't, what are the alternatives?
@airoundtable (a month ago)
It depends on your objectives and also on whether you are willing to keep up with LangChain's continuous development and updates. So the answer might be yes or no based on different factors. If you are looking for a very stable system, it is better to design your own memory. Otherwise, checkpoint memory is a good option.
@m.bharaninath953 (a month ago)
Sir, how do I change the code to use an open-source LLM like Llama... 😢 Will that be as efficient as GPT models, or would fine-tuning be required?
@airoundtable (a month ago)
You can use Ollama. The performance depends on the model that you choose. For instance, I believe Llama 3.1 should have acceptable performance for this project.
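A rough sketch of the swap, assuming the langchain-ollama integration and a locally pulled model; the rest of the project's code would stay the same:

    # pip install langchain-ollama  and  ollama pull llama3.1
    from langchain_ollama import ChatOllama

    # Drop-in replacement for the ChatOpenAI instance used elsewhere.
    llm = ChatOllama(model="llama3.1", temperature=0)
    print(llm.invoke("Write one SQL query that counts rows in a table named flights.").content)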
@RoyMathew-h1g (a month ago)
Hi sir, please reply: if we have multiple users and need a different session for each user, can that be achieved by changing the thread_id dynamically for each user?
@airoundtable (a month ago)
Yes, you can address that with thread_id.
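A minimal sketch of per-user sessions with a LangGraph checkpointer (the graph here is a trivial stand-in for the project's full graph):

    from langchain_openai import ChatOpenAI
    from langgraph.graph import StateGraph, MessagesState, START, END
    from langgraph.checkpoint.memory import MemorySaver

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

    def chatbot(state: MessagesState):
        return {"messages": [llm.invoke(state["messages"])]}

    builder = StateGraph(MessagesState)
    builder.add_node("chatbot", chatbot)
    builder.add_edge(START, "chatbot")
    builder.add_edge("chatbot", END)

    # The checkpointer stores state per thread_id, so each user gets an isolated session.
    graph = builder.compile(checkpointer=MemorySaver())

    def chat(user_id: str, question: str) -> str:
        config = {"configurable": {"thread_id": user_id}}  # one thread per user
        result = graph.invoke({"messages": [("user", question)]}, config=config)
        return result["messages"][-1].content

    print(chat("user-42", "Remember that my name is Roy."))
    print(chat("user-42", "What is my name?"))  # same thread: history is available
    print(chat("user-7", "What is my name?"))   # different thread: no shared history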
@RoyMathew-h1g (a month ago)
@airoundtable Thank you very much for the support. This video was a lifesaver for me. Expecting more quality content like this.
@ShubhamKumar-h9v1e (6 days ago)
Hi Farzad, could you please explain how the primary agent decides which sub-agent to call (RAG, SQL agent, website agent)?
@mayankgoyal4213 (a day ago)
Yes, looking for the same answer.
@stanTrX (a month ago)
44:05 What is the best open-source / free embedding model compared to the OpenAI embedding model? I think OpenAI's is not free?
@airoundtable (a month ago)
The OpenAI embedding model is not free, but it is very cheap. You can use `BAAI/bge-large` from Hugging Face. I compared its performance with OpenAI's model in this video: kzbin.info/www/bejne/qamlo5KXm9ipmJIsi=ygV5GPEyroNuZ07_ I also used the `BAAI/bge-large` embedding model for RAG in this video: kzbin.info/www/bejne/bJXcq2WDlLqKgtksi=uqr49nmc26Q6u0-M
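A quick sketch of loading such a model through LangChain (assuming the langchain-huggingface package and the BAAI/bge-large-en-v1.5 checkpoint; the exact model id used in the videos may differ):

    # pip install langchain-huggingface sentence-transformers
    from langchain_huggingface import HuggingFaceEmbeddings

    embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-large-en-v1.5")

    query_vec = embeddings.embed_query("Which airlines fly to Calgary?")
    doc_vecs = embeddings.embed_documents(["Document one", "Document two"])
    print(len(query_vec), len(doc_vecs))  # embedding dimension and number of documents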
@prathmeshatre6251 (a month ago)
I'm getting an error in the SQL chain steps file when invoking the chain (the chain.invoke part): a numerical value is expected, not a str.
@airoundtable (a month ago)
There was a discussion about it on my GitHub: github.com/Farzad-R/Advanced-QA-and-RAG-Series/issues/7 I am not sure why it is happening. I cannot replicate it on my end, but apparently it comes from the travel database. If you change the database, the pipeline works fine.
@sreerag4368 (a month ago)
Hey, I'm creating something similar but I'm using SQLDatabaseToolkit here. Is there a way to extract the output of the toolkit, like the SQL query generated and the final result of the query?
@airoundtable (a month ago)
It depends on whether your toolkit has some sort of API connection that you can utilize to integrate it into your system. I haven't used it myself.
@user-ps9mk3pi7o (a month ago)
I'm working with highly sensitive data. What if I don't want to use GPT models?
@airoundtable (a month ago)
I assume you mean GPT models. Well, you can run the chatbot with open-source models (using Ollama, for example), but you need a powerful LLM for good performance.
@apoorvgarg2926 (2 months ago)
Is there a way to benchmark the performance of text-to-SQL systems on how accurate the SQL queries are? I am having a hard time doing this since RAG is involved: it completely depends on how good your RAG model is at generating queries. Most of the time the business context is different from what resides in the database, e.g., a business may call its internal departments "squads" while the database has a column called "department". This could be solved using RAG. However, I want to get your insights on benchmarking the performance without using LangGraph. Also, about your video: really good. But in reality, I think it would be good to create separate agents for each database and control users' permissions on the agents.
@airoundtable (2 months ago)
There's no easy way to measure the accuracy of SQL queries. If I wanted to do it, I'd create a small dataset with pairs of questions and their correct SQL queries. Then I'd run those questions through my system, get the generated SQL, and use another LLM to compare the correct SQL with the generated ones and give them a score. You could also add some metrics to measure how similar the two queries are, but this is a custom method since there's no standard way. It all depends on your project. If you need users to access multiple databases across different departments, this approach works well. But if you need to control user access to specific databases, your approach would be better. The right strategy always depends on the project's needs. Thanks for the great discussion!
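A rough sketch of that evaluation loop (the gold dataset, the generate_sql stub, and the 0-10 scale are placeholders, not from the video):

    from langchain_openai import ChatOpenAI

    judge = ChatOpenAI(model="gpt-4o", temperature=0)

    # Hypothetical gold set: questions paired with hand-written reference SQL.
    eval_set = [
        {"question": "How many flights are there?",
         "gold_sql": "SELECT COUNT(*) FROM flights;"},
    ]

    def generate_sql(question: str) -> str:
        # Stub: plug in the text-to-SQL chain/agent being benchmarked here.
        return "SELECT COUNT(*) FROM flights;"

    scores = []
    for item in eval_set:
        candidate = generate_sql(item["question"])
        prompt = (
            "Score from 0 to 10 how semantically equivalent these two SQL queries are.\n"
            f"Reference: {item['gold_sql']}\nCandidate: {candidate}\n"
            "Reply with the number only."
        )
        scores.append(float(judge.invoke(prompt).content.strip()))

    print("average score:", sum(scores) / len(scores))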
@AshokNepal-l5e (15 days ago)
Hello sir, could you please show how to integrate it into multiple websites, and also talk about load balancing when many concurrent users are present at once?
@airoundtable (13 days ago)
Thanks for the comment. Those are on the deployment side, and the configurations vary for different scenarios and use cases, so I cannot make a video that is useful for everyone. There are many factors involved, including load balancing, networking, security measures, etc.
@terryliu3635 (2 months ago)
Hi Farzad, I got a quick question here... when I was using gpt-35-turbo, the function was not recognized and the finish_reason was "stop". When I switched to 4o-mini, the finish_reason was the same as what you showed, a "function_call". Do you have any idea of the reason behind this? Thanks.
@airoundtable (2 months ago)
Hello. Check the OpenAI library version and make sure it is updated. Also, first test your models with the notebook in which I discussed the custom agent design using function calling. GPT-3.5 should have the capability to become an agent.
@terryliu3635 (2 months ago)
@airoundtable Thanks Farzad. Are you referring to the API version? In both scenarios, 4o-mini and 35-turbo, I was using "2024-02-15-preview". Btw, I was using Azure OpenAI models due to security policies. I tried the custom agent function call as well and the result is the same (4o works but 3.5-turbo does not). The code is as follows:
    response = client.chat.completions.create(
        model="gpt-35-turbo",  # or "gpt-4o-mini"
        functions=[abc_json],
        function_call="auto",
        messages=messages,
        temperature=0,
    )
@terryliu3635 (2 months ago)
@airoundtable On another note, I read an article mentioning that the gpt-3.5-turbo-0613 deployment (from June 13th, 2023?) might have some issues with functions... not sure if this is related.
@airoundtable (2 months ago)
@terryliu3635 Might be. I haven't looked into it in that much detail to see whether all the models are capable of function calling, but all of the ones that I commonly use can do it.
@Kunal-e8i (a month ago)
Hello Farzad, thanks for the video. I'm working on a use case with a database that has 10+ different tables, each with more than 111 lakh (11.1 million) rows, which is a large database, and it is hosted in AWS Redshift. So can I use the same logic here, or do I need any vectorization/embedding logic?
@airoundtable (a month ago)
Every project has its own requirements and challenges. You can definitely use a lot of the strategies that I showed in this video and the first one in this series. But you might also need to improve the system to be fully functional. That being said, this video is a great starting point and the system might solve all your needs.
@aehtajazahmed9481 (2 months ago)
Hey Farzad, thank you for such a knowledgeable tutorial. I have two questions: 1. Is it possible to integrate product listings inside the chatbot? For example, if we build a chatbot for Amazon and a user searches for something like ‘I want to eat something low in carbohydrates and high in protein,’ can the chatbot display matching products and their listings? 2. If so, how can we do it?
@airoundtable (2 months ago)
Thanks. 1. It depends on the structure of your listings. 2. For instance, in your example, if you have a column named `carbohydrates_rate` filled with either `low` or `high` (or a percentage, for example) and a column named `protein_rate` that is again filled with `low` or `high`, then yes, the agent can filter the table based on the correct values of these two columns and return the correct portion of the table.
@himansudash4538 (10 days ago)
Hi there. It looks like there is no code to load data into the travel.sqlite DB. I ran your notebook and it throws the error "no such table: aircrafts_data". Can you guide me on getting some sample data loaded?
@airoundtable (10 days ago)
There are two versions of the dataset. One of them is a subset of the other. Make sure to download the complete version. I think I added the URL to the README file.
@airoundtable (6 days ago)
@himansudash4538 Download it using this link: storage.googleapis.com/benchmarks-artifacts/travel-db/travel2.sqlite
@mayankgoyal4213 (2 days ago)
@airoundtable I think you forgot to add the hyperlink or the link to the database in the README file.
@airoundtable (a day ago)
@mayankgoyal4213 Thanks for the note. I will check and add it to the README file.
@immortalx678 (a month ago)
Hello, amazing work ❤, but I have a question, maybe a dumb one: can this agent work on a very large SQL database, like 1038 tables with each table having at least 10 columns? Won't that exceed the context window of the LLM?
@airoundtable (a month ago)
Thank you! The agentic system that I designed in this video will not be able to handle that scenario. For systems of that size, the only way to handle it is to break down the tasks and design complex agentic systems in which each agent is responsible for interacting with a reasonable portion of the data. In my next video I will explain a system like this in detail, not for many databases, but for performing specific tasks with separate agents that are all part of one big system.
@mayankgoyal4213 (a day ago)
@airoundtable Looking forward to it.
@khananas5716 (a month ago)
Can I do this without OpenAI? Can we do this with LangChain and Groq, using ChatGroq?
@airoundtable (a month ago)
I am not sure if LangChain supports Groq. In the video there is a section where I talk about the different models that LangChain supports. Have a look at their documentation; if they support it, then you can easily switch to Groq.
@devkkkk (a month ago)
Thank you, Farzad, for a great video. I'm building a bot on real-time manufacturing data for finding information and root causes. Should I use only a SQL agent, or RAG as well? And is the schema alone sufficient for answering complex queries? Because there will be any number of checklists for each root cause. It would be helpful if you could make a suggestion. Thanks once again.
@airoundtable (a month ago)
For querying SQL databases, RAG is not an option. However, how to use SQL agents and how to design the system depends on many factors; database complexity, project objective, and number of users are just a few of them. But if you want to go in one of those two directions, it is better to go with LLM agents for SQL databases.
@LandAhoy-dx9nw (2 months ago)
Excellent video! Does it work without mentioning the name of the DB in the question?
@airoundtable (2 months ago)
Thanks. Well, the primary agent should somehow understand which database to look in. So it is either through the question itself or through a comprehensive system role. If your users will not mention the right database, I'd suggest explaining what each database contains in the system role, so the primary agent can understand which one to pick for each query.
@anandukc4709 (a month ago)
@airoundtable Where should I give the system role to the primary agent? Can you guide me?
@airoundtable (a month ago)
@anandukc4709 In build_full_graph.py you can add the system role, although I didn't add it myself. You can add the system role using LangChain's system message template and pass it to the LLM. Check out their documentation and you can figure it out in a couple of minutes. If you can't solve it, open an issue on the repository and I will send you sample code.
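A minimal sketch of the idea (node and variable names are assumptions, not necessarily what build_full_graph.py uses; in the real project you would bind the tools to the model first):

    from langchain_core.messages import SystemMessage
    from langchain_openai import ChatOpenAI

    SYSTEM_ROLE = SystemMessage(content=(
        "You are the primary agent. For questions about the databases or the stored "
        "documents, always call the matching tool instead of answering from your own knowledge."
    ))

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # bind your tools here in the real project

    def primary_agent_node(state: dict) -> dict:
        # Prepend the system role to the running conversation before every LLM call.
        response = llm.invoke([SYSTEM_ROLE] + state["messages"])
        return {"messages": [response]}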
@anandukc4709 (a month ago)
@airoundtable Thanks, let me check. I have 2 tools; sometimes, instead of calling the RAG tool, the LLM provides direct answers. I hope that by giving a proper system role this issue can be solved.
@airoundtable (a month ago)
@anandukc4709 Yes, that can definitely help. Hope you can solve it quickly.
@oscarduran4394 (a month ago)
Hi sir, can you give a guide on how to export the result of the SQL query to Excel? And another question: I have test data with student name, age, etc. If I say 'give me the students with age under 18', I get just a sample (5 to 10 students), not the full result of the query (I have 45 students under 18). I tried setting top_k=1000 but it's not working.
@airoundtable (a month ago)
For saving the results to CSV, you can either write them directly to the CSV file using Python libraries like csv (good for large-scale data), or you can use pandas DataFrames. top_k is not related to that; that is the LIMIT in the SQL query itself. You can modify the system role of the SQL agent to return more results, but you have to take the context length of the LLM into account as well. For these scenarios I recommend using a different type of agent. I will briefly explain it in my next video.
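A quick sketch of both options (the column names and rows are made up):

    import csv
    import pandas as pd

    columns = ["student_name", "age"]
    rows = [("Alice", 17), ("Bob", 16)]  # e.g. rows returned by the SQL query

    # Option 1: pandas (convenient for small/medium results)
    pd.DataFrame(rows, columns=columns).to_csv("results.csv", index=False)

    # Option 2: the csv module (streams row by row, good for large results)
    with open("results_large.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(columns)
        writer.writerows(rows)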
@Naejbert (28 days ago)
I’ve learned a lot, very useful and I’m using it. How are the thumbs up/down used by the system?
@airoundtable (27 days ago)
Glad to hear it! The feedback is not fully implemented in the chatbot. Currently, when someone pushes the thumbs up/down, it sends a message to the Gradio backend, but in the code I am not receiving or saving that message to collect the feedback. You can check their documentation to see how to complete that part of the chatbot: www.gradio.app/docs/gradio/likedata
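A rough sketch of wiring that up, based on the Gradio LikeData docs linked above (Gradio 4.x assumed; where you persist the votes is up to you):

    import gradio as gr

    def record_vote(data: gr.LikeData):
        # data.index -> which message was voted on, data.liked -> True/False
        print("feedback:", data.index, data.liked, data.value)
        # e.g. append to a CSV file or database here instead of printing

    with gr.Blocks() as demo:
        chatbot = gr.Chatbot()
        chatbot.like(record_vote, None, None)

    demo.launch()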
@Naejbert (27 days ago)
@airoundtable Understood, thank you. Also, I'm not sure if the primary agent selects which tool to call (depending on the prompt content), or if it calls each tool and chooses the best result. Or maybe it does both? I'm confused here because, if I'm right, both appropriate tool selection and systematically looping over every tool are mentioned in the video. I was about to watch it again, but some clarification would be appreciated.
@airoundtable (27 days ago)
@Naejbert It selects the best tool based on the prompt. It won't go over all of them to choose the best one.
@Naejbert (27 days ago)
@@airoundtable Thank you! Very appreciated :)
@nafisahmed5023 (2 months ago)
Great explanation, sir. Can you do a video on how we can create tools for any database that I might have? Here you have tools specific to particular databases. Can we create tools for large databases that are dynamic (new data and tables coming in most of the time)?
@airoundtable (2 months ago)
Thanks. Well, the approaches would be pretty much the same. The only part that needs to be properly adjusted is the data pipeline and the strategy for passing the table names to the agents. It needs to be designed to be dynamic, just like the nature of the data that is coming in.
@RoyMathew-h1g (a month ago)
Hi, I have been following you for a while; your videos always have up-to-date content. I have a doubt: I have a few APIs for getting user data from the HRMS. How can I convert these into a tool and add it to the primary agent? So if a person asks the bot 'I want leave for tomorrow', the LLM will call the API to apply for leave, but for this some input is needed from the user, like date, duration, etc. How can I do this? Please reply.
@airoundtable (a month ago)
I'm glad you found the content helpful! For the first part of your question, you just need to create a Python function that calls the appropriate API to handle the task. I covered designing tools in this video so that you can create your own functions. If the HRMS (Human Resource Management System) you're using has an API, you can integrate it with your function to connect it to the system. For date durations or other needed inputs, you can set these as parameters in your tool. This way, if the user doesn’t provide them initially, the model can prompt for them. You can also adjust the system role of the main agent to guide the model on what to expect from the user in different cases.
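A minimal sketch of such a tool (the HRMS endpoint, authentication, and field names are hypothetical):

    import requests
    from langchain_core.tools import tool

    @tool
    def apply_for_leave(start_date: str, duration_days: int, reason: str = "") -> str:
        """Apply for leave in the HRMS. start_date must be in YYYY-MM-DD format."""
        # Hypothetical endpoint; replace with your real HRMS API and authentication.
        resp = requests.post(
            "https://hrms.example.com/api/leave",
            json={"start_date": start_date, "duration_days": duration_days, "reason": reason},
            timeout=10,
        )
        resp.raise_for_status()
        return f"Leave request submitted: {resp.json()}"

Bind it to the primary agent like the other tools; because the parameters are declared on the function, the model can ask the user for the date and duration when they are missing.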
@RoyMathew-h1g (a month ago)
@airoundtable Thanks, I have done it, and I also added a system role. I have one doubt: how is memory handled here in the graph? Is it using a checkpointer?
@airoundtable (a month ago)
@RoyMathew-h1g You can check it out here: langchain-ai.github.io/langgraph/how-tos/persistence/
@krishchatterjee2819 (a month ago)
Thank you for the amazing video. Learnt a lot. Quick question: since I cannot use out-of-the-box chat models, I created a CustomChatModel class following the LangChain documentation by inheriting from the BaseChatModel class. I'm struggling to add the bind_tools method to my CustomChatModel class. Any help or thoughts would be great :) Thanks in advance.
@airoundtable (a month ago)
Thanks! I am not sure what the problem is here but you can definitely do that
@AhmedSaud-p6d (2 months ago)
Thank you, Farzad! Your videos are very helpful. I faced an issue trying to run the 'sql_agent_chain_steps.ipynb' notebook. When I run the cell with
    message = "How many tables do I have in the database? and what are their names?"
    response = chain.invoke({"question": message})
I always get this error: "TypeError: must be real number, not str". Can you help me understand what the issue could be? Thanks again.
@airoundtable (2 months ago)
Thanks. I am glad to hear it! It is hard for me to answer that question since the code works fine on my side. I'd start by breaking down the code to see where it is coming from. In most cases, if you check the traceback, you'll see the exact line of code that is throwing the error as well. If you can't solve it, please feel free to open an issue on the repository and we will take it from there.
@AhmedSaud-p6d (2 months ago)
Thank you for your reply. I have created a new issue on your GitHub repo.
@airoundtable (2 months ago)
@AhmedSaud-p6d Just responded to you on GitHub.
@anandukc4709 (a month ago)
Hello sir, please reply. I have two tools: RAG and another one for an API call. But when I ask a question related to the RAG tool, I sometimes get the response directly from the LLM without it calling RAG. How do I solve this issue? Kindly help me out 😢
@airoundtable (a month ago)
Instruct the primary agent to avoid using its own knowledge when the user is asking about your targeted topics.
@anandukc4709 (a month ago)
@airoundtable Is the feedback mechanism implemented in this, like thumbs up and down to give feedback? If so, can you explain how it works?
@airoundtable (a month ago)
@anandukc4709 The feedback mechanism is not fully implemented in the project because it really depends on what you have in mind to do with it. Right now the thumbs up and down results are sent back to the backend, but there is no variable to store them. Check and test the code here to understand how it works: www.gradio.app/docs/gradio/likedata Then you can update the project based on your needs.
@anandukc4709 (a month ago)
@airoundtable Can you please tell me how I can deploy this application in Azure? Do I have to make some changes?
@airoundtable (a month ago)
@anandukc4709 This project is not ready to be deployed. You need to optimize the code, add error handling, upgrade the memory, handle sensitive data, and also take into account other security measures such as user authentication. Then you can containerize it and deploy it.
@MustRunTonyo (a month ago)
Nice project! Is there a chance I could replicate what you did here in a simpler tool like n8n or LangGraph?
@airoundtable (a month ago)
Thanks! I haven't used n8n personally so I cannot say for sure. I saw its RAG capabilities but I am not sure how good it is for agentic systems. This project is already designed using LangGraph so no need for any modifications for that.
@MustRunTonyo (a month ago)
@airoundtable Sorry, I meant Langflow 😅 I will try using that to replicate it; as I am still learning, it's easier to use a node-based system.
@airoundtable (a month ago)
@MustRunTonyo I hope you can replicate it fast!
@juanantonionavarrojimenez2966 (a month ago)
Awesome.
@theaccountantguy (2 months ago)
Hi there. Is it possible to integrate Python code into a Next.js project with React on the frontend?
@airoundtable (2 months ago)
Hi, yes. There are multiple ways to do it. For instance, you can create a Python-based backend API using frameworks like FastAPI, Flask, or Django, and then connect this backend to your Next.js frontend via API calls. There are other ways to do it as well, but it depends on the use case.
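A bare-bones sketch of that pattern (endpoint name and payload are illustrative); the Next.js frontend would then POST to /chat with fetch and render the returned answer:

    # backend.py  --  run with: uvicorn backend:app --reload
    from fastapi import FastAPI
    from fastapi.middleware.cors import CORSMiddleware
    from pydantic import BaseModel

    app = FastAPI()

    # Allow the Next.js dev/prod origin to call this API from the browser.
    app.add_middleware(CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"])

    class ChatRequest(BaseModel):
        message: str

    @app.post("/chat")
    def chat(req: ChatRequest) -> dict:
        # Call your Python chatbot/agent here and return its answer.
        answer = f"You said: {req.message}"
        return {"answer": answer}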
@premmanu6557 (2 months ago)
Could you use Streamlit as well?
@theaccountantguy (2 months ago)
@airoundtable Hi, I tried the FastAPI method using the Next.js starter template. It works well on localhost, but I face issues running it on the production instance: it's not able to connect to the Python backend server. I am not a techie and don't have much idea about coding, but what I feel is that it's looking for a server to run the Python code that fetches information from the frontend part. Do you have any tutorial for this? Thanks for your help!
@airoundtable (2 months ago)
@theaccountantguy Hmm, that is a tricky one. No, I don't have any tutorials on that problem, but I bet there are plenty of conversations about it on software forums. My suggestion is to simplify things first: make a very simple call from the frontend to the backend on your server and, in the meantime, solve all the networking and other connection issues. Once you have made that first successful call, then work on the main project. Hope you can find the solution.
@theaccountantguy (2 months ago)
@premmanu6557 Hi, yes, Streamlit works fine, but it lacks a proper authentication setup for users.
@FindMultiBagger (2 months ago)
Great tutorial! I've been looking for Google Gemini-based projects for a long time :( Also, JSON structured output format in LangChain for Gemini-based models without a Pydantic model.
@airoundtable (2 months ago)
I only used the 7B Gemini model in one of my videos. kzbin.info/www/bejne/bJXcq2WDlLqKgtksi=x2t4jH0UriIYFqUr But that is the open source version of Gemini.
@gideongyimah217 (2 months ago)
It would be great if you could add a visualization tool to the SQL agent for analysis with graphs.
@airoundtable (2 months ago)
Great suggestion. I will keep it in mind for another future video on SQL agents. Thanks!
@mohamedkhalifa-p4k (2 months ago)
Sir, it is a great project, thank you... I hope you make a video on generating SQL code from text (a question) to query a large database. I mean the model has a connection to my database schema, tables, columns, etc., and based on my question it can generate the correct query without being told the table name explicitly; it should be clever enough to know the right table, or the nearest table, for the question... I hope you can make this video soon, with fine-tuning and RAG.
@airoundtable (2 months ago)
Thanks. In this video, I introduced three strategies for connecting SQL agents to large databases. Give them a try and I hope they can help you solve the problem.
@Mohamed_khaliVvaaa1 (2 months ago)
@airoundtable Thanks, I will do this... I want to ask you a question if you have time to answer me. I want to build this project myself to improve my skills, but I do not want to use OpenAI models directly; I want to fine-tune an open-source LLM like DistilBERT or Llama 3, etc., on a QA dataset to generate SQL code from natural language questions. Do I need RAG, or will fine-tuning be enough? I think I need it, because when I pull the data from the database as a JSON file I will transform it into vectors, then store them in a vector database, and when I ask a question, RAG will augment the generator model with the vectors most similar to my question to generate the final SQL code... I hope you can answer this question; it will help me a lot with my graduation project. Or please point me to the right way to approach this. Thanks a lot, I wish you the best.