GraphRAG or SpeculativeRAG?

8,863 views

Discover AI

1 day ago

Comments: 28
@MattJonesYT 5 months ago
I think the weakest link in RAG is that documents are usually chunked without respect to context, which means the data is corrupted right away, and you then need a really complex system to make the data not corrupt again. I think the biggest gains to be had are at the start of the process, by chunking into intelligent semantic paragraphs that stand on their own, like a section of paragraphs in a book. Just splitting every n tokens ruins RAG performance.
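The chunking idea above can be sketched minimally: keep paragraphs whole and pack them into chunks up to a token budget, instead of cutting every n tokens. This is an illustrative sketch, not a production chunker; word count stands in for real tokenization, and a real system would use the model's tokenizer and smarter boundary detection.

```python
# Minimal sketch: group whole paragraphs into chunks, never splitting
# mid-paragraph. Token counting is approximated by whitespace words.

def semantic_chunks(text: str, max_tokens: int = 200) -> list[str]:
    """Pack paragraphs into chunks of at most max_tokens (approximate)."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current, current_len = [], [], 0
    for para in paragraphs:
        n = len(para.split())  # crude token estimate
        if current and current_len + n > max_tokens:
            chunks.append("\n\n".join(current))
            current, current_len = [], 0
        current.append(para)
        current_len += n
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Each chunk boundary falls on a paragraph boundary, so no chunk starts or ends mid-thought; very long single paragraphs would still need a fallback split in practice.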
@themax2go 5 months ago
Well, that's why contextual graphs now exist, i.e. Microsoft's GraphRAG and now SciPhi's Triplex...
@criticalnodecapital 5 months ago
@themax2go 100%. I was waiting 4 months for them to drop it, and then realised that speculative RAG, or using an abstraction layer to let the LLMs bash it out, was a better way to go! Evals... why did I not do this before?
@davidwynter6856 5 months ago
Through actual use of baseline RAG over a year ago, I realised that knowledge graphs, with their rich semantic capability, would improve things radically. But after some experimentation I realised I needed to combine the triples and the embeddings, for simplicity and performance reasons. This is easy and free using Weaviate, which allows a schema to be added over the top of the vector store. Since then I have built 4 different knowledge graphs over Milvus and Weaviate; they work brilliantly, and you can also build embeddings for the full triple as well as the constituent subject, predicate and object. GPT-4o understands triple representations extracted from the user prompt very well.
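The indexing pattern described here — embedding the full triple as well as its subject, predicate and object separately — can be sketched generically. This is not the commenter's private toolkit: `embed` is a toy deterministic placeholder for a real sentence-embedding model, and the Weaviate/Milvus storage layer is omitted; only the record shape is illustrated.

```python
# Hedged sketch: store each extracted triple with an embedding of the
# full triple *and* of each constituent part, so retrieval can match on
# the whole fact or on any of its pieces.

from dataclasses import dataclass, field

def embed(text: str) -> list[float]:
    # Toy deterministic stand-in for a real embedding model.
    return [len(text) / 100.0, (sum(map(ord, text)) % 97) / 97.0]

@dataclass
class TripleRecord:
    subject: str
    predicate: str
    object: str
    vectors: dict = field(default_factory=dict)

def index_triple(s: str, p: str, o: str) -> TripleRecord:
    rec = TripleRecord(s, p, o)
    rec.vectors = {
        "full": embed(f"{s} {p} {o}"),  # whole triple as one sentence
        "subject": embed(s),
        "predicate": embed(p),
        "object": embed(o),
    }
    return rec
```

In a real system each of these vectors would be written to the vector store alongside the schema fields, so a query can hit the triple via any of its parts.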
@fintech1378 5 months ago
Awesome, any video / link to share?
@davidwynter6856 5 months ago
@fintech1378 Sorry, I'm currently trying to get a job after my sabbatical; the toolkit I built has to remain private.
@artur50 5 months ago
GitHub ;)?
@Karl-Asger 5 months ago
Great to hear this. Can you speak to the cost of generating the knowledge graph, and what scale you're working with? I really like your insight here about embedding not just the chunks but also the triplets.
@antaishizuku 5 months ago
Preprocessing text really gives better results, so if at the end of this you return a preprocessed string to the LLM instead of the original, it would probably do better. Personally I'm focusing on a different approach, but from my testing I found this helps.
@c.d.osajotiamaraca3382 5 months ago
Thank you for helping me avoid the rabbit hole.
@thomaslapras1669 5 months ago
Great video, as usual! But I have one question: what if the relevant context is split across different sub-datasets?
@code4AI 5 months ago
You operate with multiple datasets.
@iham1313 5 months ago
There was (some time ago and somewhere) an argument that models are not capable of understanding the question when they are not trained on the domain-specific data. It would be interesting to combine training a base model on domain data (like articles, documents and books) with sending it off to a RAG-like setup to retrieve referable results.
@whoareyouqqq 5 months ago
Google reinvented the map-reduce algorithm, where the map step is drafting and the reduce step is verification.
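The analogy can be made concrete: Speculative RAG's parallel drafting step behaves like "map" and its verification step like "reduce". In this sketch `draft` and `verify` are placeholders for the specialist drafter and generalist verifier LLM calls, not the paper's actual implementation.

```python
# Sketch of speculative RAG as map-reduce: draft answers in parallel
# over document subsets (map), then verify and select one (reduce).

from concurrent.futures import ThreadPoolExecutor

def draft(subset: list[str], question: str) -> str:
    # Placeholder for the drafter LLM answering from one subset.
    return f"answer from {len(subset)} docs"

def verify(drafts: list[str], question: str) -> str:
    # Placeholder for the verifier LLM scoring drafts; here: longest wins.
    return max(drafts, key=len)

def speculative_rag(subsets: list[list[str]], question: str) -> str:
    with ThreadPoolExecutor() as pool:  # map: draft in parallel
        drafts = list(pool.map(lambda s: draft(s, question), subsets))
    return verify(drafts, question)     # reduce: verify and select
```

The parallelism is the point of the analogy: drafts are independent per subset, so they can fan out like map tasks before a single reduce pass picks the answer.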
@sinasec 5 months ago
Is there any source code for this RAG?
@topmaxdata 5 months ago
In many cases, working with a KV store or a relational database with extracted entities and relationships is more practical than using a graph database like Neo4j, for the following reasons:
- Familiarity: most developers are already familiar with relational databases and key-value stores, making them easier to work with and maintain.
- Ecosystem: relational databases and KV stores have mature ecosystems with robust tools, libraries, and integrations.
- Performance: for many use cases, KV stores and well-designed relational databases can offer excellent performance.
- Flexibility: relational databases can handle a wide range of data structures and query patterns.
- Scalability: both KV stores and relational databases can be scaled horizontally or vertically to meet performance needs.
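The relational variant of this idea is a few lines: extracted triples go into an ordinary table, and one-hop "traversal" becomes a plain lookup or self-join. A sketch using SQLite; the table, column names, and sample triples are illustrative only.

```python
# Sketch: extracted entities/relationships in a plain relational store
# instead of a graph database. One-hop traversal is just a lookup.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT)"
)
conn.executemany(
    "INSERT INTO triples VALUES (?, ?, ?)",
    [
        ("GraphRAG", "developed_by", "Microsoft"),
        ("GraphRAG", "uses", "knowledge graph"),
        ("SpeculativeRAG", "uses", "draft verification"),
    ],
)

# All outgoing edges of one entity = one indexed lookup.
rows = conn.execute(
    "SELECT predicate, object FROM triples WHERE subject = ?",
    ("GraphRAG",),
).fetchall()
```

Multi-hop queries become self-joins on `object = subject`, which is where graph databases start to earn their keep; for shallow traversals, the table works fine.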
@code4AI 5 months ago
Smile. And after the praise for a KV store, now list 5 problems with KV stores, just to have a balanced presentation from your side.
@topmaxdata 5 months ago
@code4AI Curious, what are the 5 problems? Thank you.
@be1tube 5 months ago
I'll give the disadvantages a shot:
- Consistency: joins are not atomic, so by the time you finish the join the info may be outdated.
- Extra memory: joins must be done in the client.
- Extra queries: you usually need to do one query per joined table.
- No relationship constraints across tables or rows.
- Imperative style: you tell the DB every step. You don't get intelligent query optimizers giving you the benefit of years of database research; you have to build it from scratch.
- The DB doesn't know its structure: it doesn't know when to cascade deletes, or when to store two elements nearby because they will be accessed together.
Note: it's been a decade since I tried to use a large KV store, so maybe some of these are better now.
@lionardo 5 months ago
I doubt this works better than simple RAG.
@AaronALAI 5 months ago
Hmm 🤔 I don't doubt there are better RAG strategies... however, RAG with a model of good context size (65k+) yields very good results. But there will always be a scaling issue: too little model context or too large a DB.
@code4AI 5 months ago
Whenever a global corporation tells us that its old product has very poor performance and that we now have to buy a new product... we can decide on a product that fits our needs.
@GeertBaeke 5 months ago
@code4AI That is not exactly what Microsoft is saying. The team that built GraphRAG focused mainly on global queries that use community summaries created during indexing. This allows you to ask global questions about your data that, out of the box, get better answers than baseline RAG. And their local queries are actually a combination of vector queries, to find entry points in the graph, followed by graph traversal. It's about combining things, not simply selecting one thing.
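The local-query pattern described here — a vector query finds entry points, then a short graph traversal collects neighbours — can be sketched with toy stand-ins. This is not Microsoft's GraphRAG implementation; the similarity function, node vectors, and edge set are all illustrative.

```python
# Sketch: nearest-neighbour vector search picks an entry node, then a
# bounded breadth-first expansion over the graph gathers local context.

def local_query(query_vec, node_vecs, edges, hops=1):
    """Return the entry node plus its `hops`-step neighbourhood."""
    # Entry point: nearest node by squared Euclidean distance.
    entry = min(
        node_vecs,
        key=lambda n: sum(
            (a - b) ** 2 for a, b in zip(query_vec, node_vecs[n])
        ),
    )
    frontier, seen = {entry}, {entry}
    for _ in range(hops):  # graph traversal from the entry point
        frontier = {dst for src, dst in edges if src in frontier} - seen
        seen |= frontier
    return seen
```

The point of the combination: the vector index answers "where in the graph is this question about?", and the traversal answers "what is connected to that?" — neither alone covers both.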
@user-gj1gd5pi1m 5 months ago
Neither is practical. I guess the authors do not have production-level experience in RAG.
@siddharthgolecha998 4 months ago
I agree with you. I work in the research unit of such a company, and researchers have no regard for production.
@atultewari5004 3 months ago
@siddharthgolecha998 😊😊😊😊😊
@bastabey2652 5 months ago
KGs are a pain in the neck.