Making ChatGPT Think Like a Human: Unlocking Its Potential with Graph of Thoughts
Dive deeper into the intricate weave of Graph of Thoughts, edging ever closer to how humans think.
The landscape of Large Language Model (LLM) prompting frameworks has seen a new contender – the Graph of Thoughts (GoT) framework. Going beyond the capabilities of Chain-of-Thought (CoT) and Tree of Thoughts (ToT), GoT presents a fresh perspective on how we can make LLMs think like humans for better results.
Background Information: What is a Graph?
In essence, a graph is a structure consisting of vertices and edges. Think of vertices as dots and edges as the lines connecting these dots. In the world of problem-solving, these graphs, especially directed acyclic ones, play a pivotal role in representing complex structures.
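To make this concrete, here is a minimal sketch of a directed acyclic graph as a Python adjacency list, with a topological sort that only succeeds because the graph has no cycles (the vertex names are purely illustrative):

```python
# A directed acyclic graph (DAG) as an adjacency list:
# each vertex (dot) maps to the vertices its outgoing edges (lines) reach.
dag = {
    "A": ["B", "C"],  # A points to B and C
    "B": ["D"],
    "C": ["D"],       # B and C both feed into D
    "D": [],          # D has no outgoing edges
}

def topological_order(graph):
    """Order vertices so every edge points forward; possible only for a DAG."""
    seen, order = set(), []
    def visit(v):
        if v in seen:
            return
        seen.add(v)
        for w in graph[v]:
            visit(w)
        order.append(v)
    for v in graph:
        visit(v)
    return order[::-1]  # reverse post-order

print(topological_order(dag))  # → ['A', 'C', 'B', 'D']
```

An ordering like this is what lets a DAG represent a problem-solving plan: every step appears only after the steps it depends on.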
The Evolution of Prompting Structures
Chain-of-Thought (CoT)
CoT emerged as an innovative prompting approach by weaving intermediate reasoning steps directly into the prompt alongside the primary task input/output. By visualizing the thought process as a linear chain where each link represents a step of reasoning, CoT significantly amplified the ability of LLMs to tackle problems. Each "link" or step in this chain paved the way for the next, ensuring a more structured approach to problem-solving.
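In practice, CoT often amounts to nothing more than showing the model a worked example whose answer spells out its reasoning steps. A minimal sketch of such a few-shot prompt (the wording is illustrative, not taken from any paper):

```python
# A few-shot Chain-of-Thought prompt: the exemplar answer walks through
# its reasoning one step at a time, nudging the model to do the same
# for the new question at the end.
cot_prompt = """Q: A shop sells pens at 3 for $2. How much do 12 pens cost?
A: 12 pens is 12 / 3 = 4 groups of 3 pens.
   Each group costs $2, so the total is 4 * 2 = $8.
   The answer is $8.

Q: A train travels 60 km in 45 minutes. What is its speed in km/h?
A:"""  # the model continues with its own linear chain of reasoning

print(cot_prompt)
```

Each line of the exemplar answer is one "link" in the chain; the model's continuation follows the same linear pattern.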
Tree of Thoughts (ToT)
Taking inspiration from CoT, the Tree of Thoughts (ToT) was designed to provide even more depth to the LLM reasoning process. Instead of a linear chain, ToT models reasoning as a tree, with branches representing different paths of thought. This branching mechanism introduced novel capabilities like backtracking from outcomes that didn't seem promising. However, while ToT added multiple pathways of reasoning, its tree-like structure sometimes proved restrictive, confining the thought process to its branches without allowing for more intricate interconnections.
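The branching-and-pruning idea behind ToT can be sketched as a small beam search over partial thoughts. This is a toy illustration, not the paper's algorithm: the `expand` and `score` functions stand in for LLM calls.

```python
# Toy Tree-of-Thoughts search: expand each partial "thought" into
# candidate continuations, score them, and keep only the best few.
# Dropping low-scoring branches is the backtracking mechanism.
def tree_of_thoughts(root, expand, score, depth=3, beam=2):
    frontier = [root]
    for _ in range(depth):
        candidates = [c for t in frontier for c in expand(t)]
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:beam]  # prune unpromising branches
    return max(frontier, key=score)

# Illustrative stand-ins for the LLM: grow strings, reward "ab" substrings.
best = tree_of_thoughts(
    "",
    expand=lambda t: [t + "a", t + "b"],
    score=lambda t: t.count("ab"),
)
print(best)  # → aba
```

Note the limitation the article points out: each candidate extends exactly one parent, so two promising branches can never be combined — the restriction GoT removes.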
With these two as the backdrop, the emergence of the Graph of Thoughts (GoT) represents a leap forward. Instead of chains or trees, GoT envisions the reasoning process as an intricate web, akin to graphs, allowing for a more interconnected and dynamic form of reasoning.
Introducing Graph of Thoughts (GoT)
Core Concept of GoT
GoT is motivated by numerous phenomena such as human reasoning, brain structure, and algorithm execution. When working on a novel idea, a human will generally form a complex network of thoughts. For example, one could explore a certain chain of reasoning, backtrack, and start a new one, then realize that a certain idea from the previous chain could be combined with the currently explored one, and merge them both into a new solution, taking advantage of their strengths and eliminating their weaknesses.
GoT's brilliance lies in its ability to represent LLM-generated information as an arbitrary graph, where thoughts are vertices and the dependencies between them are edges. This means GoT can amalgamate multiple thoughts, refine vast networks of thoughts, or even augment individual thoughts.
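A tiny data-structure sketch shows how this differs from a chain or a tree: a merged thought simply gets more than one parent. The class and thought texts here are illustrative assumptions, not the paper's implementation.

```python
# A thought graph: thoughts are vertices, dependencies are directed edges.
# GoT-style aggregation adds a vertex with several parents — something
# neither a chain nor a tree allows.
class ThoughtGraph:
    def __init__(self):
        self.thoughts = {}  # id -> thought text
        self.parents = {}   # id -> ids of the thoughts it builds on

    def add(self, tid, text, parents=()):
        self.thoughts[tid] = text
        self.parents[tid] = list(parents)
        return tid

g = ThoughtGraph()
a = g.add("t1", "sort the first half of the list")
b = g.add("t2", "sort the second half of the list")
# Aggregation: merge two earlier thoughts into one combined thought.
m = g.add("t3", "merge the two sorted halves", parents=[a, b])
print(g.parents["t3"])  # → ['t1', 't2']
```

In a tree, "t3" could descend from only one of "t1" or "t2"; in a graph it can build on both, which is exactly the merging behavior described above.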
Components of GoT
Prompter: Transforms graph structures into LLM prompts.
Parser: Decodes information from the LLM's output into usable 'thought states'.
Scoring & Validation: Determines the accuracy of LLM outputs, often quantified through scores.
Graph of Operations: A predefined plan detailing the flow of thought operations.
Graph Reasoning State: Maintains a record of the LLM reasoning process.
Controller: Orchestrates the flow, deciding when to loop back or conclude the process.
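A highly simplified loop can show how these components fit together. All names and signatures here are illustrative assumptions for the sketch, not the framework's actual API, and the stand-in functions replace real LLM calls.

```python
# Simplified GoT-style control flow. Hypothetical names:
#   plan     -> Graph of Operations (predefined sequence of operations)
#   prompter -> turns the current graph state into an LLM prompt
#   parser   -> turns LLM output back into a thought state
#   score    -> Scoring & Validation
#   state    -> Graph Reasoning State
def run_got(llm, plan, prompter, parser, score):
    state = {"thoughts": {}}
    for op in plan:                    # Controller walks the plan
        prompt = prompter(op, state)   # Prompter: graph -> prompt
        output = llm(prompt)
        thought = parser(output)       # Parser: output -> thought state
        if score(thought) > 0:         # keep only validated thoughts
            state["thoughts"][op] = thought
    return state

# Dummy stand-ins to demonstrate the flow (no real LLM involved).
state = run_got(
    llm=lambda p: p.upper(),
    plan=["generate", "refine"],
    prompter=lambda op, s: f"{op}: solve the task",
    parser=lambda out: out,
    score=lambda t: 1,
)
print(sorted(state["thoughts"]))  # → ['generate', 'refine']
```

The real Controller also decides when to loop back or stop; this sketch fixes the plan in advance purely to keep the component roles visible.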
Evaluation and Results
Methodology
The researchers utilized 100 input samples for each task, relying on a 4k context model. However, due to cost constraints, their focus remained on GPT-3.5 rather than GPT-4.
Analysis of Outcomes
The results are clear: GoT outshines both ToT and CoT. Compared to ToT, GoT has lower costs and reduces median errors by 62%. Against CoT, GoT improves results by 65%, albeit at a slightly higher cost. This showcases GoT's aptness for intricate problems, especially as they scale in complexity.
Human Thought and GoT: Drawing Parallels
Our brains don't think linearly. We merge ideas, backtrack, and often change our reasoning direction based on new insights. GoT mirrors this non-linear, dynamic approach, resembling the intricate networks formed in human reasoning.
Conclusion
The Graph of Thoughts framework is an upgrade in the way we leverage Large Language Models. By drawing inspiration from the intricate networks of human reasoning and bridging gaps in previous models, GoT stands poised to inspire a new wave of LLM prompting frameworks.