Grounding LLMs: Building a Knowledge Layer atop the Intelligence Layer • Talk @ UMBC • Sept 17, 2024

Aman Chadha

"Grounding LLMs: Building a Knowledge Layer atop the Intelligence Layer" • Invited Talk at the University of Maryland, Baltimore County ‪@umbc‬ • Knowledge-Infused Learning (CMSC691) • September 17, 2024
• Relevant Primers:
transformer.aman.ai
llm.aman.ai
rag.aman.ai
peft.aman.ai
• Overview:
The talk surveyed methods for building a knowledge layer atop existing Large Language Models (LLMs), including In-Context Learning (ICL), fine-tuning, Parameter-Efficient Fine-Tuning (PEFT), Retrieval Augmented Generation (RAG), and the use of Knowledge Graphs (KGs) for better contextual understanding via structured data.
• Detailed Agenda:
The talk framed LLMs as an "intelligence" layer and the data associated with the task at hand as a "knowledge" layer, a framing that naturally models today's natural language-based interactive tasks over private, task-oriented data. To this end, the following topics were covered for developing a knowledge layer atop a base LLM:
Transformer Encoder/Decoder Architecture: The architecture of the Transformer Encoder/Decoder was briefly explained, highlighting the role of encoder models for input understanding and decoder models (LLMs) for generation tasks.
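To make the distinction concrete, here is a minimal sketch (not from the talk) using the Hugging Face transformers library, with bert-base-uncased and gpt2 as illustrative stand-ins for an encoder model and a decoder-only model:
```python
# Minimal sketch (illustrative models, not from the talk): an encoder model
# for input understanding vs. a decoder-only model for generation.
from transformers import AutoTokenizer, AutoModel, AutoModelForCausalLM

# Encoder (BERT-style): produces contextual representations of the input.
enc_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
enc_inputs = enc_tok("Grounding LLMs with external knowledge.", return_tensors="pt")
embeddings = encoder(**enc_inputs).last_hidden_state  # one vector per input token

# Decoder-only LLM (GPT-style): autoregressively generates a continuation.
dec_tok = AutoTokenizer.from_pretrained("gpt2")
decoder = AutoModelForCausalLM.from_pretrained("gpt2")
dec_inputs = dec_tok("Retrieval augmented generation is", return_tensors="pt")
output_ids = decoder.generate(**dec_inputs, max_new_tokens=20)
print(dec_tok.decode(output_ids[0], skip_special_tokens=True))
```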
Fine-tuning: Prevalent methods of fine-tuning LLMs were discussed: full fine-tuning, which updates all model parameters; surgical fine-tuning, which selectively updates specific layers; and Parameter-Efficient Fine-Tuning (PEFT), which updates only a small subset of parameters. The choice among these depends on the available data and the degree of task variation, with PEFT in particular reducing memory and computational demand.
PEFT enables faster adaptation and modular storage by storing one base model and individual adapters for each task.
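As an illustration of the one-base-model-plus-adapters pattern, here is a minimal LoRA-style PEFT sketch using the Hugging Face peft library; the base model, hyperparameters, and adapter path are illustrative placeholders, not values from the talk:
```python
# Minimal LoRA/PEFT sketch (illustrative defaults, not values from the talk).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # shared, frozen base model

# Wrap the base model with small trainable low-rank adapters on the
# attention projections (c_attn is the GPT-2 attention module name).
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a small fraction of weights train

# ... fine-tune `model` on task-specific data here ...

# Store one base model plus a lightweight adapter per task.
model.save_pretrained("adapters/task_a")
```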
In-Context Learning (ICL)/Few-Shot Prompting: Teaches the model to carry out a desired task by placing instructions and demonstrations directly in the prompt, without any change to its weights.
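A minimal few-shot prompting sketch follows; the sentiment task and demonstrations are made-up placeholders, not examples from the talk:
```python
# Minimal few-shot prompting sketch (made-up task and demonstrations).
few_shot_prompt = (
    "Classify the sentiment of each review as Positive or Negative.\n\n"
    "Review: The battery lasts all day.\nSentiment: Positive\n\n"
    "Review: The screen cracked within a week.\nSentiment: Negative\n\n"
    "Review: Setup was effortless and fast.\nSentiment:"
)
# The demonstrations plus the new query are sent to the LLM as-is;
# the task is "taught" entirely through the prompt, and the model's
# weights are never updated.
print(few_shot_prompt)
```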
Retrieval Augmented Generation (RAG): Combines retrieval and generation to enhance performance. RAG retrieves relevant information from an external knowledge base and uses it to build an expanded prompt for the model, making it an effective way to reduce hallucination and yield grounded responses for most tasks.
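The sketch below shows the basic RAG loop of retrieve-then-augment; the toy lexical retriever and the small knowledge base are illustrative placeholders, whereas a real system would use dense embeddings and a vector index:
```python
# Minimal RAG sketch (illustrative only; a production system would use
# dense embeddings and a vector index instead of this toy lexical retriever).
def retrieve(query: str, knowledge_base: list[str], k: int = 3) -> list[str]:
    # Rank passages by word overlap with the query and keep the top-k.
    q_words = set(query.lower().split())
    ranked = sorted(knowledge_base,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_rag_prompt(query: str, knowledge_base: list[str]) -> str:
    # Expanded prompt: retrieved evidence grounds the generation and
    # helps reduce hallucination.
    passages = retrieve(query, knowledge_base)
    context = "\n".join(f"- {p}" for p in passages)
    return (f"Answer using only the context below.\n\nContext:\n{context}"
            f"\n\nQuestion: {query}\nAnswer:")

knowledge_base = [
    "PEFT stores one frozen base model plus a small adapter per task.",
    "RAG retrieves external passages and adds them to the prompt.",
    "Knowledge graphs represent facts as (subject, relation, object) triples.",
]
prompt = build_rag_prompt("How does RAG reduce hallucination?", knowledge_base)
print(prompt)  # this expanded prompt would then be sent to the LLM
```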
Knowledge Graphs (KGs): KGs, owing to their structured representation, offer knowledge and contextual enrichment for LLMs, leading to better contextual understanding in applications. A case study of claim-level fact verification using KGs (ClaimVer, arxiv.org/abs/...) was discussed.
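For illustration only (this is not the ClaimVer implementation), the sketch below shows the general idea of linearizing KG triples into the prompt so the model can verify a claim against structured evidence and attribute its verdict to specific triples:
```python
# Illustrative sketch only (not the ClaimVer implementation): grounding
# claim verification in knowledge-graph triples supplied as evidence.
triples = [
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "award_year", "1903"),
]
claim = "Marie Curie won the Nobel Prize in Physics in 1903."

# Linearize the structured triples into text the LLM can condition on.
evidence = "\n".join(f"({s}, {r}, {o})" for s, r, o in triples)
prompt = (
    "Verify the claim against the knowledge-graph triples below and cite "
    "the triples that support your verdict.\n\n"
    f"Triples:\n{evidence}\n\nClaim: {claim}\nVerdict:"
)
print(prompt)  # this grounded prompt would then be sent to the LLM
```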
• Relevant Links/Papers:
LoRA: Low-Rank Adaptation of Large Language Models: arxiv.org/abs/...
Surgical Fine-Tuning Improves Adaptation to Distribution Shifts: arxiv.org/abs/...
Gaussian Adaptive Attention is All You Need: Robust Contextual Representations Across Multiple Modalities: arxiv.org/abs/...
ClaimVer: Explainable Claim-Level Verification and Evidence Attribution of Text Through Knowledge Graphs: arxiv.org/abs/...
CMSC691: kil-workshop.g...
