If you are interested in learning more about advanced RAG systems, check out my RAG Beyond Basics course: prompt-s-site.thinkific.com/courses/rag
@RohitSharma-uw2eh · 17 days ago
Why is the Hindi audio track not available? 😢
@SullyOrchestration · 17 days ago
I've tried this, and unfortunately it's way too slow (30 seconds for a query with a two-sentence output!) and doesn't expose its 'logical form solver' transparently enough: we don't know which chunks were retrieved or from where. Unfortunately it's still quite a way from being usable in a practical app.
@KoteikiJan · 17 days ago
It depends on the use case! Sometimes it's acceptable to wait a few minutes, or maybe even hours.
@MatichekYoutube · 14 days ago
@@KoteikiJan Can you give an example of a case where it would be OK to wait hours? Maybe for lawyers?
@KoteikiJan · 14 days ago
@ Yes, court case analysis is one, along with any other kind of analysis that can be executed overnight. Also generating book text, or various kinds of documentation. Of course, being able to iterate fast is an advantage, but sacrificing speed for quality is also worth considering.
@donb5521 · 17 days ago
The premise of Knowledge Augmented Generation is promising, but the current KAG code base failed to deliver today. The TLDR version is that ultimately I saw no nodes or edges created in Neo4j. Even weirder, in spite of there being no graph, it was still giving me results in the UI (the UI is not open source and appears to be locked down). Ultimately the config needs to become more solid and consistent, and there needs to be agreement between the OpenSPG/openspg and OpenSPG/KAG development teams on whether Ollama is supported. An odd mix of Java and Python. Hopefully this gets straightened out soon.
Prompt Engineering... normally I love your stuff. What would be helpful as a starting point is a Jupyter Notebook from OpenSPG that walks through (and validates) the pipeline step by step. A follow-up would be a reproducible, well-documented evaluation against other solutions: LazyGraphRAG, LightRAG / nanorag, etc.
@kai_s1985 · 15 days ago
Nice. Wondering if we can use this on the Groq platform to speed things up?
@ai_handbook · 17 days ago
Very nicely explained, congrats mate! 👏👏👏
@NevilHulspas · 17 days ago
Tried it. It's quite slow at answering simple queries. Also, indexing about 20 pages of a PDF took about 300k tokens, which is still quite cheap with DeepSeek but seems like a lot. Indexing also took about an hour. The user interface is partly Chinese, with quite a few bugs. Seems unfinished. The answers that were output were mostly correct, though.
@eshandas3645 · 16 days ago
Did you use an OpenAI API key?
@조바이든-r6r · 14 days ago
Did you try Gemini Deep Research?
@ahmadzaimhilmi · 13 days ago
300k tokens for 20 pages is just too much.
@eshandas3645 · 13 days ago
@@ahmadzaimhilmi Thinking of using a local Llama instead of DeepSeek.
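For scale, here is a back-of-the-envelope check on the token figure reported above. The per-million-token price below is a placeholder assumption for illustration, not an actual DeepSeek or OpenAI rate:

```python
# Rough indexing-cost estimate from the numbers in this thread.
# 300k tokens to index ~20 PDF pages was reported by a commenter;
# the price per million tokens is a placeholder assumption.
reported_tokens = 300_000
price_per_million = 0.14  # USD, hypothetical cheap-API tier

cost = reported_tokens / 1_000_000 * price_per_million
tokens_per_page = reported_tokens / 20

print(f"~${cost:.3f} to index 20 pages ({tokens_per_page:.0f} tokens/page)")
```

So the money cost per document stays small even at 15k tokens per page; the commenters' real complaint is the wall-clock time, since every chunk makes LLM round-trips during indexing.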
@AnuragVishwa · 11 days ago
Which is the best RAG approach for JSON-based raw data?
@u007james · 17 days ago
How do continual updates work with KAG?
@xt3708 · 17 days ago
Same q!!
@karthikbabu1844 · 7 days ago
@engineerprompt, could you consider modifying the outline of your video thumbnails? Currently, when the bottom outline of a thumbnail is red, it often gives the impression that the video has already been watched. This can lead to confusion for viewers who might skip over new content, thinking they’ve seen it before. Great work so far!!
@engineerprompt · 6 days ago
Thanks for the feedback; that makes total sense. Will look into it.
@patruff · 18 days ago
Not to be confused with CAG
@coom07 · 17 days ago
I got confused for a second. Thanks buddy ❤
@andydataguy · 17 days ago
🤣🤣
@zhalberd · 15 days ago
Thank you for this video. Please make a follow up on how to build this. 🙏🏻
@rishavranaut7651 · 17 days ago
All these things are still at an infancy stage for real use cases; it will take time for them to match up with traditional RAG.
@engineerprompt · 17 days ago
I agree! I am happy to see the progress and the research in retrieval. Retrieval is still one of the most useful applications of LLMs.
@anjinsama59 · 11 days ago
It seems that OpenSPG now requires sign-up with a phone number, which is "Only Chinese Mainland (excluding Hong Kong, Macao and Taiwan) mobile phone number registration", at least when building the Docker image.
@RodCoelho · 15 days ago
How does it compare with LightRAG? Is it better?
@IamalwaysOK · 16 days ago
Is there any way to use "Medical Information Extraction" instead of OpenIE?
@tirushv9681 · 18 days ago
What do you think about ColPali vs KAG? Which is best?
@tirushv9681 · 18 days ago
I feel ColPali performs well on accuracy and is cost-effective, because the image retrieval comes from ColPali and we host it ourselves. Thoughts?
@MrAhsan99 · 18 days ago
@@tirushv9681 Is anyone providing an API for ColPali? I want to use this in my app and give access via API to a limited set of users.
@alprbgt · 12 days ago
Hi, first of all, thanks for your video; it's very informative. I tried what you do in the video on my computer, but when I create a task I always get a vectorization connection error. I'm using the OpenAI API and its embedding model. My question: is there any document or video anywhere about that error?
@eshandas3645 · 17 days ago
Can graph RAG establish logical connections between them?
@sanjaybhatikar · 17 days ago
It is ReAct prompting with a graph backend.
@Alex-rg1rz · 17 days ago
Very interesting! Thanks.
@matiasm.3124 · 18 days ago
Nice. Can you do one in Python using all local services/LLMs, please?
@davidjourno6929 · 17 days ago
What about combining KAG and SPARQL ?
@DocRekd-fi2zk · 16 days ago
Is this based on RDF?
@abdshomad · 16 days ago
How does it compare to llm-graph-builder from Neo4j?
@engineerprompt · 16 days ago
I wasn't aware of it. Will need to check it out.
@definitelynotthefbi725 · 7 days ago
@engineerprompt you really should, it's a fascinating framework
@surajjaiswal1371 · 17 days ago
Which is better, KAG or agentic RAG?
@engineerprompt · 17 days ago
You can use KAG as a tool for an agent to do retrieval.
@surajjaiswal1371 · 17 days ago
@@engineerprompt Like the agent part in agentic RAG?
@engineerprompt · 17 days ago
@@surajjaiswal1371 exactly. RAG is just a tool that will be available to your agent.
@surajjaiswal1371 · 17 days ago
@@engineerprompt Okay, thanks!
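The "retrieval is just a tool available to your agent" idea from this exchange can be sketched in a few lines. Everything here is hypothetical illustration: the function names and the toy corpus are made up, and a KAG retrieval call would stand in where `kag_retrieve` is:

```python
# Sketch: exposing a KAG-style retriever as one tool an agent can call.
# All names and the toy corpus are hypothetical, not a real KAG API.

def kag_retrieve(query: str) -> str:
    """Stand-in for a KAG/graph-RAG retrieval call."""
    corpus = {
        "kag": "KAG combines knowledge-graph indexing with LLM reasoning.",
        "lightrag": "LightRAG is a lighter-weight graph-based RAG variant.",
    }
    hits = [text for key, text in corpus.items() if key in query.lower()]
    return " ".join(hits) or "no match"

# The agent's tool registry; a real agent would also have e.g. a
# calculator or web-search tool alongside retrieval.
TOOLS = {"retrieve": kag_retrieve}

def agent_step(query: str) -> str:
    # A real agent would let the LLM choose which tool to invoke;
    # this sketch always routes the query to the retrieval tool.
    return TOOLS["retrieve"](query)

print(agent_step("What is KAG?"))
```

The design point is that the agent loop stays the same whether the `retrieve` tool is backed by plain vector search, LightRAG, or KAG; only the tool implementation changes.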
@RohitSharma-uw2eh · 17 days ago
Why is the Hindi audio track not available?
@stefanmisic7405 · 16 days ago
Why would we need Hindi audio??
@RohitSharma-uw2eh · 16 days ago
@stefanmisic7405 Because YouTube's added audio track feature is available. We could enjoy the videos more that way, like MrBeast's videos.
@maniecronje · 15 days ago
Thank you for sharing this great tutorial. Reading other comments, it looks like the model is slow with responses, which kept me from trying it, but it was informative nonetheless. Thanks 🙏
@ankitgadhvi · 15 days ago
Very helpful tutorial for the setup, and a good explanation of KAG systems. However, I used it and did not like the answers: it takes too long to answer, and the answers are not good. It also needs too much time to make embeddings from the chunks.
@qadeer.ahmad123 · 17 days ago
There is also CAG (Cache-Augmented Generation), which is much faster.
@sanjaybhatikar · 17 days ago
Another day, another -ag 😂
@beppemerlino · 17 days ago
amazing!
@NLPprompter · 9 days ago
So this is like... um... Anthropic's contextual RAG plus Graph RAG on steroids?
@Sri_Harsha_Electronics_Guthik · 17 days ago
Need a Python version.
@JoeCryptola-b1m · 15 days ago
No FAG? Fetched Augmented Generation, relax everyone. It uses internet fetch over docs; that's why it's FAG, not RAG.
@JNET_Reloaded · 17 days ago
I notice it tries to use MySQL. I already have MySQL, and that default port is of course in use. This is BS! Use SQLite or something file-based that's separate from a server and doesn't require ports, FFS!
@JNET_Reloaded · 17 days ago
I would NOT recommend this BS at all! Terrible!
@matiasm.3124 · 17 days ago
@@JNET_Reloaded Why??? Explain.
@iftekharshaikh7222 · 17 days ago
@@matiasm.3124 They are basically using SQL in the backend. The biggest problem with the graph-based approach is that it adds an LLM in between, which increases cost and decreases speed during indexing; on top of that, they use two LLMs for indexing, so double the cost and even slower speed. In the retrieval process they do query decomposition using NLP and such, and chunk retrieval is slower than usual because they also put an LLM in the middle. So this approach might get you the best results and relevance so far, but it comes with high cost and slow speed. As another commenter said, you have to wait 30 seconds for a single answer that wasn't even complicated, so there is no practical use for this yet.
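The latency argument in the comment above can be made concrete with a toy model: every extra LLM round-trip on the query path dominates the cheap vector lookup. All timings here are illustrative assumptions, not measurements of KAG:

```python
# Toy latency model of plain RAG vs a KAG-style pipeline that puts
# LLM calls on the query path. The constants are assumptions chosen
# only to show the structure of the cost, not real benchmarks.

VECTOR_LOOKUP_S = 0.05   # assumed time for one vector-index lookup
LLM_CALL_S = 3.0         # assumed time for one LLM round-trip

def plain_rag_latency() -> float:
    # one retrieval + one answer-generation call
    return VECTOR_LOOKUP_S + LLM_CALL_S

def kag_style_latency(subqueries: int = 3) -> float:
    # decompose the query once, then retrieve + reason per sub-query,
    # then synthesize a final answer: LLM calls multiply
    total = LLM_CALL_S                      # decomposition
    for _ in range(subqueries):
        total += VECTOR_LOOKUP_S + LLM_CALL_S  # per-sub-query reasoning
    return total + LLM_CALL_S               # final synthesis

print(f"plain RAG: {plain_rag_latency():.2f}s, "
      f"KAG-style: {kag_style_latency():.2f}s")
```

With these placeholder numbers the multi-hop pipeline is roughly 5x slower per query, which matches the order of magnitude commenters report (seconds vs tens of seconds); whether that trade for better relevance is worth it depends on the use case, as discussed earlier in the thread.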
@SonGoku-pc7jl · 17 days ago
KAG, wow! I'm trying a project with mypufd4llm; is it possible to combine it with this?