DSPy with Knowledge Graphs Tested (non-canned examples)

8,177 views

Diffbot

A day ago

Comments: 27
@kenchang3456 8 months ago
Not having experimented with DSPy myself, and seeing your struggles just to understand what DSPy is doing, it seems like it can wait until the use of knowledge graphs has had a chance to figure itself out. So thank you for sharing. I intuitively like the concept of knowledge graphs, so I'm hopeful.
@lckgllm 8 months ago
Thank you for the encouragement! 😊 There's still a lot to explore and figure out in combining LLMs with knowledge graphs, but I'll definitely share the journey along the way with you all!
@fromjavatohaskell909 7 months ago
10:38 Hypothesis: what if providing additional data from the KG does not override knowledge ("false or hallucinated facts") already inherently present in the LLM? I wonder what would happen if you changed the labels in the knowledge graph to abstract names like Person1, Person2, Company1, Company2, etc. and ran the exact same program. Would it dramatically change the result?
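A minimal sketch of that relabeling experiment (a hypothetical helper, not the video's pipeline; it assumes the KG context comes back as (head, relation, tail) triples, which may not match the exact format used in the notebook):

```python
# Hypothetical helper: swap real entity names for abstract placeholders
# (Person1, Company1, ...) before the triples are verbalized for the LLM.
def anonymize_triples(triples, entity_types):
    """triples: list of (head, relation, tail) name strings.
    entity_types: dict mapping entity name -> "Person" / "Company" / ...
    Returns the rewritten triples plus the mapping, so answers can be de-anonymized."""
    counters = {}   # per-type running counter, e.g. {"Person": 1, "Company": 2}
    mapping = {}    # real name -> placeholder

    def placeholder(name):
        if name not in mapping:
            etype = entity_types.get(name, "Entity")
            counters[etype] = counters.get(etype, 0) + 1
            mapping[name] = f"{etype}{counters[etype]}"
        return mapping[name]

    anonymized = [(placeholder(h), r, placeholder(t)) for h, r, t in triples]
    return anonymized, mapping

triples = [("Elon Musk", "CEO_OF", "Tesla"), ("Elon Musk", "FOUNDED", "SpaceX")]
types = {"Elon Musk": "Person", "Tesla": "Company", "SpaceX": "Company"}
anon, mapping = anonymize_triples(triples, types)
# anon == [("Person1", "CEO_OF", "Company1"), ("Person1", "FOUNDED", "Company2")]
```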
@nk8143 6 months ago
I agree with that, because the misconception that "everyone knows Elon co-founded every company" was most likely present in the training data.
@jankothyson 8 months ago
Great video! Would you mind sharing the notebook?
@rohitgupta7146 7 months ago
If you expand the description below the video, you will find the GitHub link to the notebook.
@ScottzPlaylists 6 months ago
I haven't seen any good examples of the Self-improving part of DSPy yet. Is it ready for mainstream use❓
@vbywrde 8 months ago
Thank you for this video! Really great. I'm also a bit new to DSPy, but am having a great time learning it. This is really the right way to explore, imo: you set up comparative tests and then look carefully at the results to think about what might be going on, in order to find the best methodology. Yep, that's the way to do it. Some thoughts come to mind:
1) Take the same code and question and try it with different models for the embedding. What I've noticed is that the selection of models can have a significant influence on the outcomes.
2) Perhaps try creating a validator function for the step where you take the data and convert it into English, as a way to have the LLM double-check the results and make sure they are accurate. I've been doing that with a code generator I'm working on, and it seems pretty helpful. If the LLM determines the generated code doesn't match the requirements, it recursively tries again until it gets it (I send the rationale of the failure back in on each pass to help it find its way -- up to five times max); see the sketch after this comment.
3) I'll be curious to see how much optimization influences the outcome!
Anyway, fun to follow you on your journey. Thanks for posting!
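A minimal sketch of the validate-and-retry idea from point 2 (hypothetical generate/validate wrappers standing in for whatever LLM client and checker you use; not the code from the video):

```python
# Hypothetical validate-and-retry loop: ask the LLM to check its own output
# against the source facts, feeding the failure rationale back in, up to 5 tries.
MAX_ATTEMPTS = 5

def generate_with_validation(generate, validate, source_facts, question):
    """generate(prompt) -> str and validate(answer, source_facts) -> (bool, rationale)
    are assumed wrappers around your own LLM calls."""
    rationale = ""
    answer = ""
    for attempt in range(MAX_ATTEMPTS):
        prompt = (
            f"Answer the question using only these facts:\n{source_facts}\n\n"
            f"Question: {question}\n"
        )
        if rationale:  # feed the previous failure reason back in
            prompt += f"\nYour previous answer was rejected because: {rationale}\n"
        answer = generate(prompt)
        ok, rationale = validate(answer, source_facts)
        if ok:
            return answer
    return answer  # last attempt, even if validation never passed
```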
@lckgllm 8 months ago
This is such well-rounded and constructive feedback! Thank you so much! 🫶 You're right that I can set up more testing and validation mechanisms; those are great suggestions that I'm taking on board to improve the pipeline. I really appreciate the effort you put into writing this down, and I learn so much from the community with you all :) Thanks for the encouragement too!
@kefanyou9928 7 months ago
Great video~ Very interested in KG adoption in LLMs. Kind reminder: hide your API key in the video 😉
@plattenschieber 7 months ago
Hey @lckgllm, could you also upload the missing `dspy-requirements-2.txt` to the repo? 🤗
@HomeEngineer-wm5fg 7 months ago
You've hit on exactly the topic I was looking into. Now subscribed. I'll follow on X.
@diffbotai 7 months ago
That’s very kind of you! Thanks! May I know if you’re more interested in knowledge graphs or DSPy with knowledge graphs? Would appreciate your feedback 😊
@HomeEngineer-wm5fg 7 months ago
@diffbotai I'm in industry: a middleweight engineer trying to adopt machine learning early in a production environment. I see the same things you do, but you are well ahead of me. My use case is integrating AI with BI. RAG is the natural progression, and KGs are a latecomer in my industry. Digital Thread things...
@bioshazard 7 months ago
Which language model did you use? Llama 3 and Sonnet might offer improvements to recall over RAG context.
@diffbotai 7 months ago
The next video is about testing Llama 3, coming out soon 😉
@fintech1378 8 months ago
Please do one for vision models too. An AI agent for web navigation? Like RAG-based, but one that also acts, not just retrieves. I saw your past videos about those, but this would be LLM-based.
@diffbotai 8 months ago
Thanks for the suggestion! We'll do our best to make a video on this :)
@CaptTerrific 7 months ago
What a great demo, and idea for a comparison of techniques! As for why you got those weird answers about companies Musk co-founded... You chose a very interesting question, because even humans with full knowledge of the subject would have a hard time answering :) For example, while Musk bought Tesla as a pre-existing company (and thus did not co-found it), part of his purchase agreement was that he could legally call himself a co-founder. So is he or isn't he a co-founder? Murky legal/marketing vs. normal understanding, polluted article/web knowledge set, etc.
@real-ethan 8 months ago
This could be an LLM hallucination; the RAG (KG) implementation itself is good enough. I think the reason is that the information about Elon Musk you asked about was already trained into the model. We hope to use RAG to handle factual content that LLMs are not aware of, but what if the LLM thinks it already knows the question or the information? 😂
@lckgllm 8 months ago
That's a fair point. It's possible that language models carry knowledge from pre-training data, and this hypothesis needs to be tested for this particular example. Regardless, hallucination in a RAG system can stem from both the model's pre-existing knowledge and the retrieval and generation process, which means RAG still doesn't guarantee hallucination-free outputs; that's why we're seeing research such as Self-RAG or Corrective RAG trying to improve the quality of the retrieval and generation components. Anyway, one of the main points I wanted to get across in this video is that LLM-based apps still need to be grounded in factual information, and that's where knowledge graphs can be powerful, but for some reason the pipeline did not want to follow the knowledge (even when provided with the info) for the second question.
@jeffsteyn7174 8 months ago
RAG falls over if you ask a question like "how many projects were completed in 2018, 2019, 2020?" If there is no link between the chunk and a date that might have been further up the page, the query can fail. The knowledge graph is a good way to improve the quality; not perfect, but better than plain old RAG. Regarding your last question, the OpenAI API has gotten better at that. It's far better at following instructions like "only use the context to answer the question." But you must give it an alternative, i.e. if the context does not answer the question, respond with "I can't answer this." If you really want fine-grained control, tell the LLM to analyse the question and context and categorize the context as directly related, related, or tangentially related, and to answer only if the context is directly related; otherwise respond "I can't answer this question." You might want to look into agentic RAG for really complicated questions.
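A minimal DSPy-style sketch of that grounding-plus-categorization instruction (a hypothetical signature, not the one used in the video; it assumes an LM has already been configured via dspy.settings.configure):

```python
import dspy

# Hypothetical signature encoding "answer only from directly related context, else refuse".
class GroundedQA(dspy.Signature):
    """Classify how the context relates to the question (directly related, related,
    or tangentially related). Answer ONLY if it is directly related; otherwise
    reply exactly: I can't answer this question."""
    context: str = dspy.InputField(desc="facts retrieved from the knowledge graph")
    question: str = dspy.InputField()
    relatedness: str = dspy.OutputField(desc="directly related / related / tangentially related")
    answer: str = dspy.OutputField()

# Assumes dspy.settings.configure(lm=...) has been called elsewhere.
grounded_qa = dspy.ChainOfThought(GroundedQA)
# prediction = grounded_qa(context=kg_facts, question="How many projects were completed in 2019?")
```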
@real-ethan 8 months ago
@lckgllm Knowledge graphs are indeed very effective and powerful, and that is also why I became interested in your channel. I have always believed that the old-fashioned way of doing RAG is not a particularly good implementation. In my understanding, the difficulty of RAG has never been the G, but the R. To optimize R, I tried summarizing and extracting keywords from the text content, then vectorizing and retrieving that, and it did improve the retrieval results. Combined with your knowledge graph implementation, I am more convinced that to achieve better RAG, most of the effort and cost will go into how we organize and index our data. For the example in the video, I think we can do a simple test: change the Elon Musk node to a different name while keeping the other nodes/relationships unchanged. Perhaps the final generated answer will not have any hallucinations. If this test produces correct results without hallucinations, then perhaps all we need is a little more prompt engineering to make the LLM summarize and answer solely based on the results from R.
@lckgllm 8 months ago
@real-ethan @jeffsteyn7174 Thanks to you both for the fruitful feedback, which really helped me learn and think! 😊 I'll do some deeper probing and potentially share the new experiments in the next video. ;)
@paneeryo 7 months ago
The music is too annoying. Please dial it down.
@diffbotai 7 months ago
We'll be more mindful of the volume! Thanks for the feedback.
@MrKrtek00 7 months ago
It is so funny how tech people do not understand why ChatGPT was a hit: exactly because you can use it without programming it.