NEW TextGrad by Stanford: Better than DSPy

16,356 views

Discover AI

A day ago

Comments: 28
@dennisestenson7820 6 months ago
This is a concept I'd been considering myself, but I never thought of it as autodifferentiated text. Fantastic that research is being done in this direction. I knew it'd be a good idea.
@Caellyan 6 months ago
I'd complained that this has to be done manually, but I never thought of chaining 2 LLMs to achieve it. Though it does make getting slightly better answers 3x more expensive. I guess it's useful for unsupervised learning, though.
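For readers wondering what "chaining 2 LLMs" looks like concretely, here is a minimal sketch following the quick-start pattern in the TextGrad README (github.com/zou-group/textgrad). It assumes `pip install textgrad` and an OpenAI API key in the environment; the engine names and the toy question are illustrative:

```python
import textgrad as tg

# The "backward" LLM: a stronger model that writes the textual feedback
# ("gradients") used to improve outputs.
tg.set_backward_engine("gpt-4o", override=True)

# The "forward" LLM whose output we want to improve.
model = tg.BlackboxLLM("gpt-4o-mini")

question = tg.Variable(
    "If a * b = 12 and b = 3, what is a?",
    role_description="question to the LLM",
    requires_grad=False,  # we optimize the answer, not the question
)

answer = model(question)
answer.set_role_description("concise and accurate answer to the question")

# The "loss" is itself natural language: an instruction for the critic.
loss_fn = tg.TextLoss(
    "Evaluate the answer for correctness and brevity. "
    "Be critical and provide concise feedback."
)

loss = loss_fn(answer)   # critic evaluates the answer
loss.backward()          # critic turns the evaluation into textual feedback
optimizer = tg.TGD(parameters=[answer])
optimizer.step()         # the feedback is used to rewrite the answer

print(answer.value)
```

This also makes the cost observation concrete: each refinement round spends extra LLM calls on the evaluation, the feedback, and the rewrite, on top of the original forward call, which is roughly where the "3x more expensive" estimate comes from.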
@kenchang3456 6 months ago
Thanks for the video. I missed the boat with DSPy but it's good to know you can just go ahead with TextGrad.
@giladmorad4348 6 months ago
Thanks for the video, it's very insightful! One thought: TextGrad and DSPy can be combined, since DSPy is mostly based on ICL while this framework focuses more on signature optimization. Additionally, the researchers at Stanford mentioned that the combined approach improved the prompt by 1% on one occasion and that it should be studied further.
@matty-oz6yd 6 months ago
DSPy is ICL and prompt optimisation combined. I hope they add TextGrad in somehow, though.
@giladmorad4348 6 months ago
@matty-oz6yd Yeah, good correction. I hope they add TextGrad in as an optimizer.
@hussainshaik4390 5 months ago
Their MIPROv2 optimizer is literally doing the same thing.
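For context, a minimal sketch of what running DSPy's MIPROv2 optimizer looks like. The API has shifted across DSPy releases, so treat the exact names (`dspy.LM`, `auto="light"`) and the tiny toy trainset as assumptions rather than a verified recipe:

```python
import dspy
from dspy.teleprompt import MIPROv2

# Assumed model setup; the exact configuration API varies by DSPy version.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# A simple program whose instructions and demos MIPROv2 can optimize.
qa = dspy.ChainOfThought("question -> answer")

def exact_match(example, prediction, trace=None):
    # Metric the optimizer maximizes over the training set.
    return example.answer.strip().lower() == prediction.answer.strip().lower()

# Toy data; a real run needs dozens of examples.
trainset = [
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
    dspy.Example(question="Capital of France?", answer="Paris").with_inputs("question"),
]

# MIPROv2 proposes and scores candidate instructions and few-shot demos,
# i.e. it optimizes prompts with an LLM-driven search loop.
optimizer = MIPROv2(metric=exact_match, auto="light")
optimized_qa = optimizer.compile(qa, trainset=trainset)
```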
@fingerstyleguitarjustingao729 5 months ago
Great video! Hoping for a more advanced explanation and more of your experience with TextGrad!
@mydetlef 2 months ago
OK, I'm a n00b. But why should I use two models when the smarter one can give me the optimal answer straight away? In which scenarios do I need all these expensive iterations? Will I then have predefined prompts for recurring queries of the same type that can be answered directly on my smartphone by a small model?
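That last scenario is the usual pitch: pay for the expensive critic iterations once, offline, then ship the tuned prompt with a small model. A hypothetical sketch of that pattern with TextGrad; the training pairs, engine names, and loop are made up for illustration:

```python
import textgrad as tg

# Strong critic model, used only during offline optimization.
tg.set_backward_engine("gpt-4o", override=True)

# The system prompt is the trainable parameter; the small model is fixed.
system_prompt = tg.Variable(
    "You are a concise assistant.",
    requires_grad=True,
    role_description="system prompt for a small production model",
)
model = tg.BlackboxLLM("gpt-4o-mini", system_prompt=system_prompt)
optimizer = tg.TGD(parameters=[system_prompt])

# Tiny illustrative training set of (question, reference answer) pairs.
train_pairs = [("What is 2 + 2?", "4"), ("Capital of France?", "Paris")]

for question, reference in train_pairs:
    q = tg.Variable(question, role_description="question", requires_grad=False)
    answer = model(q)
    answer.set_role_description("answer to evaluate against a reference")
    loss = tg.TextLoss(
        f"The reference answer is: {reference}. "
        "Critique the given answer for correctness and brevity."
    )(answer)
    optimizer.zero_grad()
    loss.backward()   # expensive critic feedback, paid only during tuning
    optimizer.step()  # rewrites system_prompt based on the feedback

# After tuning, system_prompt.value is a plain string you can ship with the
# small model; no critic and no iterations are needed at inference time.
print(system_prompt.value)
```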
@brandonheaton6197 6 months ago
Solid. I knew that if the guy behind DSPy could build that, a better version was imminent.
@jmanhype1 6 months ago
Sounds like we need a middleware complexity assessor that can sit in the middle and auto-reject if it doesn't meet that balance.
@matterhart 6 months ago
Thanks Stanford, though I would have called it backpromptigation. ;)
@mlcat 6 months ago
26:51 What does 0 demonstrations mean? No examples of good output, only the original prompt?
@mydetlef 2 months ago
Answer from Copilot: Yes
@DannyGerst 6 months ago
You said that you used it on your tasks. Can you release part of that code into the wild? It would be really great to see a live example. That was the thing I found very challenging with DSPy. Only with the STORM project did I start to understand how it should work ;-)
@code4AI 6 months ago
Start with the four Jupyter Notebooks that I provided and you will see that you immediately have multiple new ideas for your specific tasks. I plan a new video on my insights from testing, and maybe I have an idea for how to optimize the TextGrad method further ...
@pensiveintrovert4318 2 months ago
It is 3 months later; has either of the two approaches proven practically useful, and is it being used today?
@pensiveintrovert4318 5 months ago
How is this different from prompt tuning (not engineering)?
@code4AI 5 months ago
Explained in the video.
@Anonymous-lw1zy 5 months ago
Superb explanation! Thank you!
@hoomansedghamiz2288 5 months ago
Here's an unpopular opinion: could this be considered a misuse of the notation for auto-differentiation and backpropagation? For any graph to be differentiable, it must be acyclic, like a Directed Acyclic Graph (DAG), which is typical for neural networks. However, in the LLM sphere we see pipelines incorporating cycles, such as RAG, where blocks are repeatedly cycled through, forming what might be described as Directed Cyclic Graphs (DCGs). While using PyTorch's clean and modular syntax is appealing, applying auto-differentiation in this context could be seen as a stretch (personal opinion).
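One common counterpoint, offered as a sketch rather than a rebuttal: autodiff frameworks never differentiate a cyclic graph directly; a pipeline with a loop is unrolled for a fixed number of steps into an acyclic chain, much as backprop through time unrolls an RNN. A toy, framework-free illustration of the idea:

```python
def refine(state: str) -> str:
    """Stands in for one pass of a cyclic block (e.g. one RAG iteration)."""
    return state + " [refined]"

def unrolled_pipeline(state: str, steps: int = 3) -> str:
    # The same block applied `steps` times. Once unrolled, each application
    # becomes a distinct node, so the computation graph is a chain (a DAG),
    # and feedback can be propagated back through it node by node.
    for _ in range(steps):
        state = refine(state)
    return state

print(unrolled_pipeline("draft answer"))
```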
@MindEmbedding 5 months ago
Thanks for another great video! I like your presentation style. What kind of software do you use for your slides?
@asadad5162 4 months ago
Great video, very informative. Textual gradients are such a pretentious concept to me, but I do look forward to trying TextGrad out. At least it is a systematic method for performing prompt optimization...
@artur50 6 months ago
Thanks for the links to the Colabs…
@stephanembatchou5300 6 months ago
Very informative. Thanks
@GeoffLadwig 6 months ago
Great! Thanks
@whig01 6 months ago
Seems like one can prompt-optimize for the same-level system and never lack coherence.
@spkgyk 6 months ago
Amazing video! But pseudo, as in pseudo-code, is pronounced like sudo ("syuudo"). Not smart enough to correct anything else in this video lmao, keep up the good work! Love the channel.