Prompt Engineering: GPT-4 implements Tree of Thoughts (ToT)

Discover AI · 6,604 views · 1 day ago

Comments: 25
@VoxScriptPlugin · 1 year ago
Hey there -- creator of VoxScript here. Just wanted to drop in and say this is a really cool application of the ToT prompting model!
@code4AI · 1 year ago
Thanks. Appreciated!
@romantercero4436 · 1 year ago
Excellent! This blows my mind and I can't wait to apply ToT in my area of expertise! Thank you for your excellent videos!
@chrisBruner · 1 year ago
Wow, very nice demo on how you use GPT-4.
@code4AI · 1 year ago
Thanks!
@gileneusz · 1 year ago
13:13 Can you explain how you didn't hit the 8k-token context limit with this? I believe it could only be done effectively with the 32k-token version of GPT-4, which is not available in ChatGPT with plugins. Or am I wrong?
@code4AI · 1 year ago
In one of my next videos I explain the answer to your question. It will be titled something like "The technical interface of OpenAI plugins with real-time external data retrieval", or so. I am currently designing this video ... smile.
@PawelBojkowski · 1 year ago
ChatGPT's context window is smaller than 4k tokens at the moment.
@gileneusz · 1 year ago
@PawelBojkowski What do you mean by that?
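
Whether a multi-step ToT conversation still fits into GPT-4's 8,192-token window can be checked before sending it. A minimal sketch, assuming the tiktoken package; the conversation strings below are placeholders, not the video's actual prompts:

```python
# Rough token budget check for a ToT conversation against GPT-4's 8k window.
# Assumes the tiktoken package; ignores the few tokens of per-message overhead.
import tiktoken

CONTEXT_WINDOW = 8192
enc = tiktoken.encoding_for_model("gpt-4")

def count_tokens(messages: list[str]) -> int:
    """Sum the token counts of all messages in the running conversation."""
    return sum(len(enc.encode(m)) for m in messages)

conversation = [
    "Use the Tree of Thoughts method to solve the following problem ...",  # opening prompt
    "1. Thought proposals: ...",                                           # model reply
    "Now label each candidate as sure, maybe, or impossible.",             # follow-up
]
used = count_tokens(conversation)
print(f"{used} of {CONTEXT_WINDOW} tokens used, {CONTEXT_WINDOW - used} left for the reply")
```
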
@joelvalim · 1 year ago
Finally someone convinced me to use plugins... love your work, man!
@kevon217 · 1 year ago
Fascinating on so many levels. Can’t thank you enough for sharing your expertise and teaching.
@cutmasta-kun · 1 year ago
Thank you, thank you, thank you!!!! Really good video! Somehow there is no "proper" information on this topic, only belief and hearsay ^^ I wrote my own plugin for searching Arxiv, but now I see that ScholarAi can already do that, so I don't need to keep developing mine 🙂 And in any case, thank you very much for your contribution!!! Greetings from Bavaria
@code4AI · 1 year ago
New Arxiv plugins are currently sprouting like mushrooms, i.e. in the plugin store, and in one of my next videos I plan a performance comparison of the individual plugins. Let's see which one is the best ... by the way: my first comment in German! Smile.
@cutmasta-kun · 1 year ago
@code4AI Hello 👋 I was also very pleased to find an open-minded AI expert from the German-speaking region on KZbin. There are similar thought patterns after all ^^ And definitely more content about plugins!! I think most people aren't even aware yet that plugins let them make their LLMs multi-layered and equip them with memory 😄 I myself built a "Memory" plugin based on Datasette. That way the data is at least "inspectable" 🫡 I'm looking forward to the future ☺️
@jonbennettco · 1 year ago
Do you mind sharing the prompt you used?
@barrelroller8650 · 1 year ago
Awesome idea, but it seems like GPT entirely skipped the evaluation part - there are no thoughts such as "this idea is unfeasible", "too difficult to implement" or "V(Thought) = 10". I found it beneficial to manually iterate the language model through the steps it identified (generation, iteration, search) by repeating the instructions; otherwise it just goes into an infinite cycle of generation, as shown in the video. In other words, this needs careful guidance, which I assume (without any real experience) could be implemented with the LangChain package.
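
A minimal sketch of that manual guidance, assuming the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment; ask() and evaluate() are hypothetical helper names, not something from the video or from LangChain:

```python
# One chat call per ToT step, so the evaluation step cannot be skipped.
# Sketch only: helper names and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Single chat turn against GPT-4; each ToT step gets its own call."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

def evaluate(problem: str, thoughts: list[str]) -> list[str]:
    """Explicit evaluation step: label each thought and drop the hopeless ones."""
    kept = []
    for t in thoughts:
        label = ask(
            f"Problem: {problem}\n"
            f"Candidate thought: {t}\n"
            "Can this thought lead to a solution? "
            "Answer with exactly one word: sure, maybe, or impossible."
        ).lower()
        if "impossible" not in label:
            kept.append(t)
    return kept
```
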
@xynthewarrior · 1 year ago
When you ask it for more details about how it works, is it so that GPT-4's memory has context to build the prompt, or to gather information on how ToT functions?
@code4AI · 1 year ago
Just me trying to figure out if GPT-4 really understood the algorithm. It's always good to have a validation run before you tell the system to act on a given task.
@xynthewarrior · 1 year ago
@code4AI "Given the problem "Find the 10 most nostalgic music tracks in gaming from 2010 to today", I want you to use the Tree of Thoughts (ToT) method to solve it. Here's how I want you to proceed:
1. **Thought Proposals**: Start by proposing some possible next steps. Think about the different paths you could take to solve the problem and list them out.
2. **Breadth-First Search (BFS)**: Next, perform a breadth-first search. At each step, keep the best 5 candidates. This will ensure that you're exploring all possible solutions.
3. **Deliberate BFS**: Now, I want you to evaluate each thought candidate. Label them as 'sure', 'maybe', or 'impossible' with regard to reaching the solution. This will help you focus on the most promising paths.
4. **Thought Generation**: Remember, new thoughts can arise from refining old thoughts. Don't just generate thoughts independently or sequentially - try to build on your previous thoughts.
5. **Depth-First Search (DFS) if Necessary**: If the problem is complex, you might need to perform a depth-first search. Keep exploring the most promising subsequent step until the state is no longer promising, then backtrack to the parent state to explore alternative thoughts.
6. **Thought Confidence Level**: As you go along, give a confidence level for different thoughts. This will help guide the search process.
7. **Iterate and Refine**: Finally, continue this process until the problem is solved or until you have explored all possible paths. Remember, the goal is to make more deliberate decisions and solve the problem as efficiently as possible.
Now, let's get started. What are your initial thought proposals?"
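
A possible way to drive that prompt step by step instead of in one shot, reusing ask() and evaluate() from the sketch above; BEAM and MAX_DEPTH are illustrative parameters, not values from the video:

```python
# Deliberate BFS over thoughts: propose, evaluate, prune to a beam, repeat.
# Sketch under the same assumptions as above; prompts are paraphrased, not the original.
BEAM = 5       # "keep the best 5 candidates" (step 2 of the prompt)
MAX_DEPTH = 3  # hard stop so generation cannot cycle indefinitely

def tree_of_thoughts(problem: str) -> str:
    frontier = ["(no thoughts yet)"]
    for _ in range(MAX_DEPTH):
        candidates = []
        for partial in frontier:
            proposals = ask(
                f"Problem: {problem}\n"
                f"Thoughts so far:\n{partial}\n"
                f"Propose {BEAM} possible next steps, one per line."
            ).splitlines()
            candidates += [f"{partial}\n{p.strip()}" for p in proposals if p.strip()]
        survivors = evaluate(problem, candidates)
        if not survivors:          # everything judged impossible: stop and use the old frontier
            break
        frontier = survivors[:BEAM]
    return ask(
        f"Problem: {problem}\n"
        f"Best chain of thoughts:\n{frontier[0]}\n"
        "Write out the final answer."
    )

print(tree_of_thoughts("Find the 10 most nostalgic music tracks in gaming from 2010 to today"))
```
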
@temka088 · 1 year ago
So no chat sharing?
@code4AI · 1 year ago
I just shared the complete video.
@xynthewarrior · 1 year ago
@code4AI He meant no prompt sharing.
@SiEmG · 1 year ago
I think the SOTA papers mentioned that each step should be done in a separate prompt, not many steps of the ToT method in the same prompt, right? I am not sure it used the evaluation function; I think it just worked as usual, moving on from the last state. Can you explain to me why not?
@code4AI · 1 year ago
There is a mathematically much more stringently defined way to do causal inference with LLMs, and that is for sure not Tree of Thoughts (ToT). New video on a professional approach to causal inference with LLMs this Sunday, on this channel.
AutoGPT & BabyAGI: autonomous AI Agents for LLMs explained
29:36
Chain of Thought (CoT) meets Instruction Fine-Tuning
29:55
Discover AI
Рет қаралды 8 М.
Try Not To Laugh 😅 the Best of BoxtoxTv 👌
00:18
boxtoxtv
Рет қаралды 7 МЛН
Каха и лужа  #непосредственнокаха
00:15
Motorbike Smashes Into Porsche! 😱
00:15
Caters Clips
Рет қаралды 22 МЛН
Beyond AI Vector Database: AI Neural Search on ChatGPT, GPT-4
17:12
Prompt Engineering Tutorial - Master ChatGPT and LLM Responses
41:36
freeCodeCamp.org
Рет қаралды 1,6 МЛН
AI Agents Create a New World - MBTI Personalities
31:04
Discover AI
Рет қаралды 546
4 Methods of Prompt Engineering
12:42
IBM Technology
Рет қаралды 155 М.
Graph-of-Thoughts (GoT) for AI reasoning Agents
41:34
Discover AI
Рет қаралды 15 М.
What is Spatial AI? "The Next Frontier of AI Architecture"
40:24
Matthew Berman
Рет қаралды 50 М.