Large Language Models and Knowledge Graphs: Merging Flexibility and Structure

30,941 views

John Tan Chong Min

1 day ago

We discuss how to infuse Large Language Models (LLMs) with Knowledge Graphs (KGs)! This is a very exciting approach, as we can combine the flexibility and generalisability of LLMs with the structure and reliability of KGs; it is a first step towards neurosymbolic architectures!
I will also go through a LangChain implementation of LLMs with knowledge graphs as inputs, demonstrate some of the limitations currently faced, and show how we can better prompt-engineer KG usage with LLMs using my very own StrictJSON Framework.
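As a taste of the LangChain implementation covered in the video, here is a minimal sketch of a graph QA flow. It assumes the GraphIndexCreator / GraphQAChain API from LangChain versions around the time of recording (these classes have since been moved or deprecated in newer releases), and the example text and question are made up.

```python
# Minimal sketch, assuming the legacy LangChain graph QA API
# (GraphIndexCreator / GraphQAChain); newer releases may differ.
from langchain.indexes import GraphIndexCreator
from langchain.chains import GraphQAChain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# Extract (entity, relation, entity) triplets from free text into a graph.
index_creator = GraphIndexCreator(llm=llm)
graph = index_creator.from_text(
    "Apple announced the Vision Pro at WWDC. The Vision Pro is a headset."
)
print(graph.get_triples())  # list of extracted triplets

# Answer a question by matching entities in the query against the graph.
chain = GraphQAChain.from_llm(llm, graph=graph, verbose=True)
print(chain.run("What did Apple announce at WWDC?"))
```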
~~~~~~~~~~~~~
Slides: github.com/tanchongmin/Tensor...
Jupyter Notebook: github.com/tanchongmin/Tensor...
Jupyter Notebook (updated for StrictJSON v4.0.0): github.com/tanchongmin/strict...
StrictJSON Framework: github.com/tanchongmin/strict...
LangChain Documentation: python.langchain.com/docs/get...
Tutorial on how to use LangChain and StrictJSON Framework for Knowledge Graphs and LLMs: • Tutorial #6: LangChain...
Paper on Unifying LLMs and Knowledge Graphs: arxiv.org/abs/2306.08302
LLMs as Graph Neural Networks / Embeddings
ERNIE: arxiv.org/abs/1905.07129
TransE embeddings used by ERNIE: proceedings.neurips.cc/paper/...
QA-GNN: arxiv.org/abs/2104.06378
FactKB: arxiv.org/abs/2305.08281
~~~~~~~~~~~~~
0:00 Introduction
1:55 Pros and Cons of LLMs and Knowledge Graphs (KGs)
4:55 Retrieval Augmented Generation (RAG)
8:10 Problems with LLMs and RAG
17:40 Basics of KG
26:09 Hierarchy in KG
31:13 KGs can be structurally parsed
33:17 KG can represent environmental transitions
33:58 KG as tool/memory for LLM
39:16 3 approaches to integrate KG and LLMs
40:21 Approach 1: KG-augmented LLMs
59:05 Approach 2: LLM-augmented KG
1:05:37 Approach 3: LLMs and KG two-way interaction
1:10:16 LangChain Graph QA Example
1:16:35 StrictJSON Framework Graph QA Example
1:23:00 Discussion
~~~~~~~~~~~~~
AI and ML enthusiast. Likes to think about the essence behind AI breakthroughs and explain it in a simple and relatable way. Also, I am an avid game creator.
Discord: / discord
LinkedIn: / chong-min-tan-94652288
Online AI blog: delvingintotech.wordpress.com/
Twitter: / johntanchongmin
Try out my games here: simmer.io/@chongmin

Comments: 49
@TheHoinoel · 4 months ago
Thanks for this, the talk was excellent. I've been looking to combine LLMs with KGs and have very similar intuitions when it comes to using the same embedding space for the KG as for the LLM. I really like your framing of having the right abstraction spaces to solve the problem at hand. Having written countless prompts, and having looked at how humans have solved problems over the years, it seems to me that fostering the right context (abstraction space) is vital when trying to solve a new problem. Einstein's discoveries were possible in part due to the context of his life experience, which gave him intuitions for solving a certain type of problem. The cool thing with LLMs is that we can bootload intuition at will, allowing us to swap out abstraction spaces until we find a combination that gives us the right context to solve a problem. Great work!
@AaronEden · 10 months ago
You stretched my mind, thank you for taking the time to share.
@polarbear986 · 9 months ago
Valuable content! Thank you for sharing:)
@AyaAya-fh2wx · 8 months ago
Amazing work. Many thanks for your efforts in sharing your knowledge.
@snehotoshbanerjee1938 · 4 months ago
Knowledge-packed video and excellent teaching skills.
@johntanchongmin · 11 months ago
Slides: github.com/tanchongmin/TensorFlow-Implementations/blob/main/Paper_Reviews/LLM%20with%20Knowledge%20Graphs.pdf
@chrisogonas · 2 months ago
While I also appreciate the flexibility of knowledge graphs (KGs) in easily representing relationships, I agree with you that KGs are not necessarily the best or most effective way to represent intelligence. I will stay tuned to your work. I hope to publish on this in the near future. Thanks for the presentation.
@johntanchongmin · 2 months ago
Glad it helps. I am actively pursuing my idea of multiple abstraction spaces, and KG can be one of them. The rest of how we store memory will depend on what kind of memory - semantic facts, episodic memory and so on. These can be stored in various ways like traditional databases, or even in video/image format.
@chrisogonas · 2 months ago
@johntanchongmin Thanks for sharing your research. I will follow your work on context-dependent embeddings particularly closely. That's an exciting angle to explore in depth.
@AyaAya-fh2wx · 8 months ago
Thanks
@leonlysak4927 · 3 months ago
You're the first person I've heard mention this concept of context-dependent embeddings. I started tinkering with the same idea back in December of last year, but never had a name for it. I was doing some self-reflection and thought about how some of my own behaviors and thoughts were sometimes contradictory, depending on my emotions and such. If I could make a certain perspective of mine a 'node', its embedding would very likely change given different contexts.
@johntanchongmin · 3 months ago
Nice, do let me know if you have any feedback / add-ons to this idea
@johntanchongmin · 3 months ago
Also, video on Context-Dependent Embeddings here: kzbin.info/www/bejne/j4u3hZuihcxjqLc
@johntanchongmin · 3 months ago
Updated the companion notebook to this video, as the OpenAI API and StrictJSON have been updated: github.com/tanchongmin/strictjson/blob/main/Experiments/LLM%20with%20Knowledge%20Graphs.ipynb
@johntanchongmin · 5 months ago
Update: StrictJSON is now a Python package. Simply "pip install strictjson". Head over to github.com/tanchongmin/strictjson to find out more.
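For a quick taste, here is a minimal usage sketch based on the example in the StrictJSON README; depending on the version, strict_json may also require an llm argument (a function that calls your model), so check the repo for the exact signature.

```python
# Minimal sketch based on the StrictJSON README; exact arguments
# (e.g. an `llm` function in v4+) may differ across versions.
from strictjson import strict_json

res = strict_json(
    system_prompt="You are a classifier",
    user_prompt="It is a beautiful and sunny day",
    output_format={
        "Sentiment": "Type of Sentiment",
        "Adjectives": "List of adjectives",
    },
)
print(res)  # e.g. {'Sentiment': 'positive', 'Adjectives': ['beautiful', 'sunny']}
```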
@jimhrelb2135 · 9 months ago
Where are these livestreams done? Is this a college course? I've actually never been this hyped for a presentation-driven video; you've done a really good job walking through the papers :)
@johntanchongmin · 9 months ago
Hey, you can check out my Discord group (link in my profile) for the link and details. Typically they are on Tuesdays, 12:30-2pm (GMT+8). My focus is on fast-learning and adaptable systems; knowledge graphs can help in that aim of knowledge representation for faster learning.
@agovil24 · 8 months ago
@johntanchongmin Amazing work! Would love to connect and exchange knowledge ☺
@johntanchongmin · 8 months ago
@agovil24 Sure thing, you can find me on my LinkedIn or Discord. Check my profile page.
@chakradharkasturi4082 · 9 months ago
Great info. I have a small question: the KG parser you are talking about expects a KG as input, but if we have a huge dataset, constructing and sending such a KG will cost more, right?
@johntanchongmin · 9 months ago
For the knowledge graph parser I demonstrated using StrictJSON, yes, you will need to parse every node to identify which are relevant. It is the most performant approach, but it costs a lot. The alternative is to derive an embedding for each node and an embedding for the query, use cosine similarity to find nodes similar to the query, and use them to augment the context via Retrieval Augmented Generation. For context-dependent knowledge graphs, add the parent node as context to the current node, and create a new vector that carries the context information as well. Based on what I have tried so far, using a combination of the original vector plus the context-based vector is superior to using the original node vector alone.
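As a rough illustration of the embedding-based alternative described above, here is a minimal sketch. It assumes each node stores both its own embedding and a context embedding derived from the node plus its parent; the field names and the 50/50 blend weight are illustrative choices, not from the talk.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_nodes(query_vec, nodes, top_k=3, alpha=0.5):
    """Rank KG nodes against a query by blending each node's own
    embedding with its context embedding (node text + parent text),
    then keep the top_k for Retrieval Augmented Generation."""
    scored = []
    for node in nodes:  # node: {"name": str, "vec": ..., "context_vec": ...}
        score = (alpha * cosine(query_vec, node["vec"])
                 + (1 - alpha) * cosine(query_vec, node["context_vec"]))
        scored.append((score, node["name"]))
    return sorted(scored, reverse=True)[:top_k]
```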
@AndrewNeeson-vp7ti · 4 months ago
1:31:30 "Mix and match" - I'd be interested in understanding how the AI might decide which space(s) to interrogate based on the input/prompt.
@johntanchongmin · 4 months ago
In my opinion, there will be a meta-category to group a problem into, like a macro space or a micro space, and then the relevant abstraction spaces get called to solve the problem from that categorisation.
@rodrigosantosalvarez6316 · 9 months ago
Thanks for the content. It really lays out challenges for the present and future. He somehow forgets about OpenAI and casts Google in the role of the good actor, recalling that "don't be evil" motto... but is there something else behind that old motto?
@rajathslr · 3 days ago
Forgive me if my question is not correct: are we using LLMs to build a knowledge graph here?
@nikhilshingadiya7798 · 7 months ago
Suppose we have a large PDF about some person and we want to rate him based on the skills defined in the PDF. Those kinds of questions rely not just on a subpart of the text but on the whole text. How can we approach this problem with KGs and vector embeddings? And since we can't call the LangChain summarization API (chain_type: stuff) every time because it's costly, how can we solve this problem?
@johntanchongmin · 7 months ago
You can try streaming. That is, take a part of the text and extract the information you require, then keep updating that information with each part of the text until you are done. This may work much better than extracting the information into a KG first and then extracting skills, as there may be information loss when extracting into the KG if the skills are not clearly stated in the PDF.
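A minimal sketch of this streaming approach, assuming a hypothetical call_llm(prompt) -> str helper that wraps whatever LLM client you use:

```python
# Minimal sketch of streaming extraction; call_llm is a hypothetical
# (prompt -> str) helper wrapping your LLM client of choice.
def extract_skills_streaming(chunks, call_llm):
    """Pass over the document chunk by chunk, carrying forward a running
    skill summary so the whole PDF never has to fit in one prompt."""
    summary = "No skills found yet."
    for chunk in chunks:
        prompt = (f"Skills found so far:\n{summary}\n\n"
                  f"New text:\n{chunk}\n\n"
                  "Update the skill list with any skills in the new text.")
        summary = call_llm(prompt)
    return summary
```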
@AndrewNeeson-vp7ti · 4 months ago
1:13:25 "When was the MacNCheese Pro announced?" --> Fail! But I wonder: what if the source text was stored alongside the triplet? That way the graph capability could be used to efficiently identify the relevant records, but the original language would be retained.
@johntanchongmin · 4 months ago
Good thought. Storing things at multiple levels of abstraction can help with different problems. So the KG version can help with entity relations, and the original text can help with QA.
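As a rough sketch of this two-level storage idea, each triplet could keep the sentence it came from; the record below (product, relation, and date) is made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class SourcedTriple:
    head: str
    relation: str
    tail: str
    source_text: str  # original sentence retained alongside the triplet

# The graph finds the relevant record; the retained sentence preserves
# details (like the announcement date) that the bare triplet drops.
t = SourcedTriple(
    head="MacNCheese Pro",
    relation="announced_by",
    tail="Apple",
    source_text="The MacNCheese Pro was announced on 1 June 2023.",  # made up
)
```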
@AEVMU · 10 months ago
Does most of this apply to decentralized knowledge graphs?
@johntanchongmin · 10 months ago
If you could give me the context for the decentralized knowledge graph, I can better answer your query. In general, it is better for the knowledge graph to be centralized, so that when you add entities and relations, they do not clash. If it is decentralized, there needs to be a conflict-management system to decide what gets added and what does not.
@98f5 · 6 months ago
Some valuable content here. How you only have 2,800 subscribers is a travesty.
@johntanchongmin · 6 months ago
Thanks for the kind words! Subscribers are not my main goal; I just want to create a community for discussion and the pursuit of interesting ideas. My main interest is finding out how to create fast-learning and adaptable agents, and I find all this discussion helpful! Do join the Discord to discuss more :) discord.gg/bzp87AHJy5
@judgeomega · 8 months ago
At around 23:00 it is said that current knowledge graphs are 'too restrictive' because of context. But the way I see it, they are too broad. You still want that total knowledge available even if it's not currently relevant; we just want to filter it down, right?
@johntanchongmin · 8 months ago
I actually meant that it was too restrictive because current knowledge graphs are largely static, and the meaning of each node is pretty much independent of the other nodes. Granted, Graph Neural Networks can pass information from adjacent nodes, but they need to be trained extensively and may also not be the ideal form of representation due to oversquashing and oversmoothing. I am looking for a flexible, dynamic representation, that can change as the Knowledge Graph builds. This is what I call context-dependent knowledge graphs.
@matthewpublikum3114 · 9 months ago
Knowledge graphs were developed in an era of low-data regimes and no large models.
@johntanchongmin · 9 months ago
Indeed; that is why, in my opinion, we need to change our mentality of fixed nodes and relations and move towards context-dependent embeddings.
@JReuben111 · 10 months ago
What about Graphormers? No need for different embedding spaces: a language token sequence is just a specialization of a graph.
@johntanchongmin · 10 months ago
Wow let me go check it out, sounds promising
@johntanchongmin · 10 months ago
I went to read the paper. It seems Graphormer is a type of Graph Neural Network that adds node-centrality information into the node vectors and edge information into the adjacency matrix. I think it still suffers from the problems associated with GNNs, such as oversmoothing and oversquashing. I am of the opinion that we should just use a Transformer-like embedding rather than a graph one, but store it in a graph structure with various embeddings for different contexts. This avoids the problems with GNNs while maintaining an informative embedding space that is connected in a graph structure.
@wilfredomartel7781 · 10 months ago
@johntanchongmin Just like a mind map, right?
@johntanchongmin · 10 months ago
@wilfredomartel7781 Indeed, a mind map (or a graph), with multiple vector embeddings for each node (each embedding with different tiers of context).
@Azariven · 10 months ago
On the topic of graphs, how would you envision combining LLMs with UMAP and HDBSCAN for data exploration? kzbin.info/www/bejne/qGnHiI2Oba56rZo
@johntanchongmin · 10 months ago
I think with LLMs you do not need clustering anymore to get similar vectors - you can simply use the LLM embedding and do vector search over it for similar vectors. That said, perhaps we can consider clustering embeddings together to build graphs. It would be interesting if we could build up the graph by grouping similar vectors together. However, we might lose out on explainability if we do everything as vectors - it can be hard to map back to words and meanings easily understandable by us. I am more of the opinion of using explainable links for the graph based on known relations, and then using the LLM to derive the vector embedding for each. Happy to discuss more.
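A rough sketch of the "group similar vectors to build a graph" idea above: connect nodes whose embeddings exceed a similarity threshold, so edges fall out of vector similarity rather than a separate clustering step. The threshold value is an illustrative choice.

```python
import numpy as np

def build_similarity_graph(names, vecs, threshold=0.8):
    """Connect nodes whose embeddings are close; vecs is an (n, d)
    array of LLM embeddings, names a parallel list of node labels."""
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # unit norm
    sims = vecs @ vecs.T  # pairwise cosine similarities
    edges = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if sims[i, j] >= threshold:
                edges.append((names[i], names[j], float(sims[i, j])))
    return edges
```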
@RedCloudServices · 8 months ago
Again, this is very similar to Infranodus; do take a look. Also: (1) should this approach identify edge weights between topics? (2) does LangChain factor in edge weights? (3) do you need a vector database for this approach?
@johntanchongmin · 8 months ago
Could you let me know which approach you are referring to, so I can reply accordingly :)
@RedCloudServices · 8 months ago
@johntanchongmin Approach #3.
@johntanchongmin · 8 months ago
@RedCloudServices Had a look at Infranodus. It is quite interesting how it uses rule-based methods to extract influential entities and relations and generate the graph. Edge weights can also be done via LLMs, especially if you have predefined criteria for each weight. You can do few-shot prompting and use LLMs to extract from arbitrary text. LangChain, as of the video, does not have edge weights; it can be easily done with some prompting, ideally with fixed categories to group the edge weights into.

At the moment, the LLM-to-KG construction does not use a vector database; it simply extracts (entity, relation, entity) triplets from the text. I can imagine vector similarity being useful if we want to disambiguate entities. Entities with high cosine similarity can be treated as the same, and we can build onto the existing entities in the KG. This of course requires the embedding space to be expressive enough to disambiguate between different entities and cluster similar entities together.

More recently, I've been thinking about whether we can use more rule-based, fixed methods of extracting entities from the text. One problem with LLMs is that they may not give you the entity you would like for your ontology. So few-shot prompting is quite important, and if you have rules to extract the entities, it might perform better. In the longer term, I am also wondering whether an ontology is really needed. Can we just store the whole chunk and infer from it based on the context at run-time? I have a new approach I term "Chain of Memories"; you can check it out on my Discord :)
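As a rough sketch of the entity-disambiguation idea above, a newly extracted entity can be merged into an existing KG entity when their embeddings have high cosine similarity; the threshold and function names are illustrative.

```python
import numpy as np

def canonicalise(entity, vec, canonical, threshold=0.9):
    """Map a newly extracted entity to an existing KG entity if their
    embeddings have high cosine similarity, else register it as new.
    canonical: dict of entity name -> unit-normalised embedding."""
    vec = vec / np.linalg.norm(vec)
    for name, known_vec in canonical.items():
        if float(vec @ known_vec) >= threshold:
            return name  # treat as the same entity
    canonical[entity] = vec
    return entity
```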