DSPy explained: No more LangChain PROMPT Templates

19,220 views

Discover AI

1 day ago

DSPy explained and coded in simple terms: no more LangChain or LangGraph prompt templates. A self-improving LLM-RM pipeline, plus automatic prompt engineering through self-optimization of a graph-based pipeline representation in DSPy.
Chapter 1: Development of an Intelligent Pipeline for Large Language Models
Focus: Integration and Optimization of Language Models and Data Retrieval Systems.
Pipeline Architecture: The chapter begins with the conceptualization of an intelligent pipeline that integrates a large language model (LLM), a retriever model, and various data models. The pipeline is designed for self-configuration, learning, and optimization.
Graph-Based Representation: Emphasis is placed on using graph theory and mathematical tools for optimizing the pipeline structure. The graph-based approach allows for more efficient data processing and effective communication between different components.
Problem Identification: Challenges in integrating synthetic reasoning and actions within LLMs are addressed. The chapter discusses the need for optimizing prompt structures for diverse applications, highlighting the complexity of creating flexible and efficient models.
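The graph-based pipeline idea in Chapter 1 can be sketched in plain Python. This is a conceptual toy, not the DSPy API: stage names like "retrieve" and "generate" and the shared-state design are illustrative assumptions, with trivial stand-ins where a real retriever model (RM) and language model (LM) would be called.

```python
# Conceptual sketch (not the DSPy API): a pipeline as a chain of named
# stages, so an optimizer can inspect, rewire, or tune each node
# independently instead of editing one monolithic prompt template.

class Stage:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def __call__(self, state):
        # Each stage reads from and writes to a shared state dict.
        return {**state, **self.fn(state)}

def build_pipeline(stages):
    # The "graph" here is a simple linear chain; DSPy generalizes this
    # to arbitrary compositions of modules.
    def run(question):
        state = {"question": question}
        for stage in stages:
            state = stage(state)
        return state
    return run

# Toy stand-ins for a retriever model (RM) and a language model (LM).
retrieve = Stage("retrieve", lambda s: {"context": f"facts about {s['question']}"})
generate = Stage("generate", lambda s: {"answer": f"Given {s['context']}: 42"})

pipeline = build_pipeline([retrieve, generate])
result = pipeline("DSPy")
print(result["answer"])
```

Because each stage is a named node with explicit inputs and outputs, a separate optimizer can swap a stage's implementation or tune its prompt without touching the rest of the pipeline.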
Chapter 2: Evaluating and Optimizing Model Performance
Focus: Comparative Analysis of Model Configurations and Optimization Techniques.
Experimental Analysis: This chapter details experiments conducted by Stanford University and other institutions, analyzing various prompt structures and their impact on model performance. It includes an in-depth examination of different frameworks, including LangChain, and their effectiveness in specific contexts.
Optimization Strategies: The text explores strategies for optimizing the intelligent pipeline, including supervised fine-tuning algorithms from Hugging Face and in-context learning for few-shot examples.
Microsoft's Study: A critical review of a study conducted by Microsoft in January 2024 is presented, focusing on the comparison between retrieval augmented generation (RAG) and fine-tuning methods. This section scrutinizes the balance between incorporating external data into LLMs through RAG versus embedding the knowledge directly into the model via fine-tuning.
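The in-context learning side of the RAG-versus-fine-tuning comparison above comes down to assembling few-shot demonstrations into the prompt rather than baking knowledge into the weights. A minimal sketch, assuming a simple question/answer demo format (the field names are illustrative, not DSPy's):

```python
# Minimal sketch of in-context learning: few-shot demonstrations are
# prepended to the prompt at inference time, instead of being embedded
# into the model's weights by fine-tuning.

def format_demo(demo):
    return f"Question: {demo['question']}\nAnswer: {demo['answer']}"

def build_few_shot_prompt(demos, question):
    # Demonstrations first, then the actual query with an open slot
    # for the model to complete.
    parts = [format_demo(d) for d in demos]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

demos = [
    {"question": "2 + 2?", "answer": "4"},
    {"question": "Capital of France?", "answer": "Paris"},
]
prompt = build_few_shot_prompt(demos, "3 + 3?")
print(prompt)
```

In a RAG setup the retrieved context would be injected the same way; fine-tuning instead trains on such pairs so the prompt can stay short.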
Chapter 3: Advanced Pipeline Configuration and Self-Optimization
Focus: Advanced Techniques in Pipeline Self-Optimization and Configuration.
Self-Optimizing Framework: This chapter delves into the creation of a self-improving pipeline, which includes automatic prompt generation and optimization. The pipeline is described as being capable of autonomously generating training datasets and deciding the optimal approach (fine-tuning vs. in-context learning) based on specific tasks.
DSPy Integration: Discussion of DSPy, a framework for compiling declarative language model calls into self-improving pipelines, with a focus on its PyTorch-style programming model.
Comprehensive Optimization: The chapter concludes with an exploration of techniques for structural optimization of the pipeline and internal model optimization. It highlights collaborative efforts from Stanford University, UC Berkeley, Microsoft, Carnegie Mellon University, and Amazon in advancing these technologies.
github.com/sta...
all rights with authors:
DSPY: COMPILING DECLARATIVE LANGUAGE MODEL CALLS INTO SELF-IMPROVING PIPELINES
by Stanford, UC Berkeley, et al
arxiv.org/pdf/...
DSPy Notebooks:
github.com/sta...
colab.research...

Comments: 21
@kevon217 · 7 months ago
Very timely. Finally got around to learning this framework and it’s an awesome abstraction.
@ghostwhowalks2324 · 8 months ago
Would love to see a video on fine-tuning LLMs using DSPy. Sounds very intriguing.
@ЩЩцфч · 7 months ago
Thanks for the video. Pretty interesting tool. No more boring prompt engineering 😁 Shoutout to the mood in your videos.
@jdwebprogrammer · 7 months ago
Always great videos thanks for making them! Yep, I noticed that along with other things that seemed really limiting with Langchain.
@connor-shorten · 7 months ago
Amazing analysis! Love this!
@henkhbit5748 · 8 months ago
Great video 👍. I will test with Mistral models. Thanks.
@joserobles11 · 8 months ago
Your videos are so inspiring to the community, and you help a lot of us who are struggling to get up to date. Could you please share how you managed to create the database you talked about in some videos, with which you trained your own LLM? I think I am ready to start a new project idea I have, and it would really help me get going. Thanks in advance for everything you do! 😊
@vbywrde · 7 months ago
Fabulous information. Thank you!
@ppbroAI · 8 months ago
Seems to me like prompting consensus with filtering of synthetic data. Interesting way to compact a pipeline. Oh, and imagine if this could be mixed with a selection of LoRA adapters. Nice information, bro!
@ngbrother · 7 months ago
I've been playing with AutoGen to create specialized pipelines. 🤔 Another aspect of the self-optimization that might be interesting to explore is self-optimizing which modules in the pipeline are involved in a task, i.e. solving for token I/O cost. Every agent spamming a common chat room gets expensive. 💵 🔥
@zd676 · 6 months ago
I still fail to see what value DSPy claims to bring. You keep saying "you don't need a template", yet behind the scenes DSPy has all these templates for the various modules you showed. How is this different from LangChain? In LangChain I don't need to explicitly write a template either, for example when using a retriever chain.
@washedtoohot · 15 days ago
DSPy allows for automatic optimization. If you want to optimize a LangChain program, you need to change the prompts manually.
@AIAnarchy-138 · 7 months ago
"Microsoft, I don't know what you proved, but here are the results." 😂
@dr.mikeybee · 8 months ago
+RAG? Brevity is the soul of wit.
@tvwithtiffani · 8 months ago
I believe we'll see some of these methods again when Llama 3 is released by Meta.
@ghostwhowalks2324 · 8 months ago
Thank you for your definition of "teleprompter". Until I saw your explanation, it did not make sense. The vocabulary needs to be fine-tuned (no pun intended, as our human language models have historical bias). Like you explained, "remote optimized prompting" or something similar would be a good name.
@whig01 · 7 months ago
Programpt
@raymond_luxury_yacht · 7 months ago
I never understood LangChain. It doesn't really seem to do much. This, however, is freaking amazing and will be the future. People can't write prompts. For mass adoption you have to automate all the cognition and dumb it down to the absolute simplest. Genius.
@DavitBarbakadze · 6 months ago
I'm like 15 mins in (after watching part 1) and the whole thing still doesn't make sense. Like what, why, how? Maybe that's why nobody watches your videos past 10 mins. People need information in a concise, applicable, and maybe a little bit casual manner ("little bit" is the keyword here).
@DavitBarbakadze · 6 months ago
Spoiler! It starts at 25:17 🙄