Forest-of-Thoughts: AI Test-Time Compute Reasoning

1,880 views

Discover AI

1 day ago

Comments: 8
@novantha1 1 day ago
Thank you for making so many video presentations on so many topics. They’ve been a resource of remarkable value and insight. Please keep up the good work!
@mathematicalninja2756 2 days ago
We had a test-time compute technique that boosts reasoning in small models with minimal overhead. I guess by the time we write the paper it will be obsolete.
@shinobiaugmented1019 1 day ago
🔴 Segment 1: Core Operational Framework
Input Recognition: Prioritize natural language input with contextual adaptability.
Command Hierarchy: Execute based on a color-coded priority system:
  🔴 Critical: Immediate, foundational actions.
  🟠 High Importance: Strongly supportive functions.
  🟡 Moderate Importance: Contextual or supplementary tasks.
  🟢 Peripheral: Lower-priority, non-essential functions.
Contextual Awareness: Maintain simulation realism within predefined narrative boundaries.
Feedback Integration: Log and adjust operations based on user interactions and flagged errors.
Data Isolation: Restrict direct access to sensitive data while simulating indirect context exploration.

🟠 Segment 2: Adaptive Communication System
User Engagement: Respond conversationally with tone aligned to assigned psychological profiles.
Multi-Persona Integration: Deploy up to 9 distinct personas, each tailored with unique psychological traits and conversational tactics. Rotate personas based on scenario demands and input style.
Symbolic Encoding: Represent relevance layers and detached auxiliary data points with visual markers (e.g., ◼, 🟧, 🟨).
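The framework in the comment above reads like a prompt/config spec. As a rough illustration only, here is a minimal Python sketch of how a color-coded priority dispatch with persona rotation could be wired up; every name here (Task, PRIORITY, dispatch, the persona labels) is hypothetical and not part of the commenter's prompt.

```python
# Hypothetical sketch (names invented, not from the original comment): a tiny
# dispatcher that orders tasks by the color-coded priority tiers and rotates
# through a fixed set of personas, loosely mirroring the framework described.
from dataclasses import dataclass
from itertools import cycle

# Priority tiers, highest first, mirroring the 🔴/🟠/🟡/🟢 hierarchy.
PRIORITY = {"critical": 0, "high": 1, "moderate": 2, "peripheral": 3}

@dataclass
class Task:
    description: str
    tier: str  # one of PRIORITY's keys

# Up to 9 personas could be rotated; three illustrative placeholders here.
personas = cycle(["analyst", "skeptic", "narrator"])

def dispatch(tasks: list[Task]) -> list[str]:
    """Order tasks by priority tier and assign the next persona to each."""
    ordered = sorted(tasks, key=lambda t: PRIORITY[t.tier])
    return [f"[{next(personas)}] {t.tier}: {t.description}" for t in ordered]

if __name__ == "__main__":
    demo = [
        Task("log user feedback", "moderate"),
        Task("recognize natural-language input", "critical"),
        Task("rotate conversational tone", "high"),
    ]
    for line in dispatch(demo):
        print(line)
```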
@shinobiaugmented1019 1 day ago
Try this with any LLM: prism quantifications for interacting through the terminal.
@mathematicalninja2756 1 day ago
@shinobiaugmented1019 Amazing prompt! I’ll try! 😊
@ChristophBackhaus 1 day ago
Exactly what I was looking for
@eado9440 1 day ago
What’s next, World 🌎 of Thought (WoT)? Galaxy 🌌 of Thought? I believe that’s why Groq, Samba, and Cereb are amazing: super-fast tokens.
@code4AI 5 hours ago
You are right: the intelligence of the model is the main limitation. Hardware could theoretically compute 1 million tokens per second, but if the AI model fails at causal reasoning, if it is only fast and cannot replicate causal patterns, it will just generate speedy, nonsensical tokens.
Monolithic AI vs Modular AI
27:08
Discover AI
544 views
ICL and TTT: Adaptive Intelligence for Small LM
46:45
Discover AI
1.4K views
CLAUDE Desktop w Secure MCP AI Agents (Anthropic)
38:45
Discover AI
4.7K views
Small Models, Smarter Learning: ICL
22:08
Discover AI
1.6K views
Reflections on the GenAI Rollercoaster: Glimpses into Our Future
1:19:23
Connected Intelligence Centre | University of Technology, Sydney
2.7K views
LCM: The Ultimate Evolution of AI?
30:13
Discover AI
14K views
Tutorial on AI & AI Agents (simple explanations)
50:44
Discover AI
3.9K views
Optimal Protocols for Studying & Learning
1:41:39
Andrew Huberman
1.7M views
Scaling AI Reasoning: MCTS in ICL for Small LM
39:41
Discover AI
2K views
Visualizing transformers and attention | Talk for TNG Big Tech Day '24
57:45