Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Tree of Thoughts (ToT) is a framework for language-model inference that generalizes the Chain of Thought approach. It enables LMs to solve problems deliberately by exploring multiple reasoning paths, self-evaluating intermediate choices, and using lookahead and backtracking. ToT significantly improves language models' problem-solving on tasks that require non-trivial planning or search.
Article Points:
1. ToT generalizes Chain of Thought, enabling exploration over coherent "thoughts".
2. LMs perform deliberate decision-making by considering multiple reasoning paths.
3. ToT allows self-evaluation, lookahead, and backtracking for global choices.
4. Significantly enhances problem-solving on Game of 24, Creative Writing, and Mini Crosswords.
5. ToT is modular, adaptable, and requires no extra training for LMs.
6. Achieved 74% success on Game of 24 with GPT-4, compared to 4% for Chain of Thought.
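The Game of 24 asks for an arithmetic expression that combines four given numbers (each used once) with +, -, *, / to reach 24. As a sketch of what counts as success on this benchmark, a plain brute-force checker (no LM involved, not the paper's method) can verify whether a puzzle is solvable; `solve_24` is an illustrative helper, not from the paper:

```python
import itertools
from fractions import Fraction

def solve_24(numbers):
    """Brute-force search for an expression over the given numbers equal to 24.
    Repeatedly combines two values with an operator until one value remains.
    Uses Fraction to keep division exact."""
    def search(vals):
        # vals: list of (Fraction value, expression string) pairs
        if len(vals) == 1:
            return vals[0][1] if vals[0][0] == 24 else None
        for a, b in itertools.permutations(vals, 2):
            rest = list(vals)
            rest.remove(a)
            rest.remove(b)
            candidates = [
                (a[0] + b[0], f"({a[1]}+{b[1]})"),
                (a[0] - b[0], f"({a[1]}-{b[1]})"),
                (a[0] * b[0], f"({a[1]}*{b[1]})"),
            ]
            if b[0] != 0:
                candidates.append((a[0] / b[0], f"({a[1]}/{b[1]})"))
            for value, expr in candidates:
                result = search(rest + [(value, expr)])
                if result:
                    return result
        return None
    return search([(Fraction(n), str(n)) for n in numbers])
```

For example, `solve_24([4, 9, 10, 13])` (a puzzle of this shape appears in the paper's examples) finds an expression such as (10-4)*(13-9), while `solve_24([1, 1, 1, 1])` correctly reports no solution.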
Core Concept
- Generalizes Chain of Thought
- Deliberate decision-making
- Explores coherent "thoughts"

Key Components
- Thought decomposition
- Thought generator
- State evaluator

Search Algorithms
- Breadth-First Search (BFS)
- Depth-First Search (DFS)

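The components and search loop above can be sketched as a minimal breadth-first ToT search. Here `propose_thoughts` and `score_state` are hypothetical stand-ins for LM calls (the paper's thought generator and state evaluator); in practice each would prompt the model:

```python
def propose_thoughts(state):
    # Stand-in for the thought generator: in practice, prompt the LM
    # to propose candidate next thoughts extending this partial solution.
    return [state + [c] for c in "ab"]

def score_state(state):
    # Stand-in for the state evaluator: in practice, prompt the LM
    # to rate how promising this partial solution is.
    return len(state)

def tot_bfs(root, steps=3, breadth=2):
    """ToT-BFS sketch: at each step, expand every frontier state,
    score all candidates, and keep only the best `breadth` of them."""
    frontier = [root]
    for _ in range(steps):
        candidates = [s for state in frontier
                      for s in propose_thoughts(state)]
        candidates.sort(key=score_state, reverse=True)
        frontier = candidates[:breadth]
    return max(frontier, key=score_state)
```

A DFS variant would instead recurse into the most promising child first and backtrack when the evaluator scores a state below some threshold.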
Applications
- Game of 24
- Creative Writing
- Mini Crosswords

Advantages
- Generality & Modularity
- Adaptability & Convenience
- Enhanced problem-solving