How to use Chain of Thought prompting to get better outputs from LLMs

DATE POSTED: July 3, 2024

Chain of Thought prompting is an advanced technique used to enhance the reasoning capabilities of Large Language Models (LLMs). Even as LLMs are scaled up, they perform well on tasks like sentiment analysis and machine translation but still struggle with complex multi-step problems such as arithmetic and commonsense reasoning.

Chain of Thought prompting addresses this issue by structuring the problem-solving process in a way that the model can manage more effectively.

What is Chain of Thought prompting?

Chain of Thought prompting is designed to handle complex reasoning tasks by prompting LLMs to generate a sequence of intermediate steps leading to the desired answer. This method contrasts with traditional prompting techniques such as zero-shot and few-shot prompting. In zero-shot prompting, a model is given a task description without examples, while few-shot prompting includes a few examples to guide the model. Both approaches, however, can fall short in complex reasoning scenarios. Chain of Thought prompting breaks the problem down into smaller, more manageable steps, allowing the model to focus on each step individually.

Chain of Thought prompting involves prompting LLMs to output a sequence of intermediate steps leading to the desired answer (Image credit)

For example, instead of asking a model to solve a complex arithmetic problem directly, Chain of Thought prompting would involve breaking the problem down into smaller steps, such as identifying the numbers involved, performing individual operations, and then combining the results. This method improves the model’s ability to handle multi-step problems by providing a clear, structured approach.
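
To make this concrete, here is a minimal Python sketch contrasting the two prompt styles for an invented arithmetic question. It only builds strings, so it runs as-is; how the prompt is sent to a model is left to whatever client you use.

```python
question = (
    "A cafe sells coffee for $3 and muffins for $2. "
    "If I buy 4 coffees and 3 muffins, what do I pay in total?"
)

# Standard prompting: ask for the answer directly.
standard_prompt = f"{question}\nAnswer:"

# Chain of Thought prompting: spell out the structure of the reasoning,
# i.e. identify the numbers involved, perform the individual operations,
# then combine the results.
cot_prompt = (
    f"{question}\n"
    "Work through this step by step: list the prices and quantities, "
    "compute the cost of the coffees and the muffins separately, "
    "then add the subtotals and state the final answer."
)

print(cot_prompt)
```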

What are the different Chain of Thought prompting types?

To implement Chain of Thought prompting effectively, it is essential to understand the different ways it can be applied:

  • Zero-shot CoT
  • Few-shot CoT

Zero-shot CoT involves adding a trigger phrase like “Let’s think step by step” to the prompt, encouraging the model to generate a sequence of reasoning steps. For instance, if the task is to determine the total cost of items, the model would first identify the cost of each item and then sum them up.
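
A minimal sketch of zero-shot CoT, using an invented word problem: the trigger phrase is the only change from a plain prompt.

```python
# Zero-shot CoT: append the trigger phrase to an otherwise plain prompt.
TRIGGER = "Let's think step by step."

def zero_shot_cot(question: str) -> str:
    return f"Q: {question}\nA: {TRIGGER}"

prompt = zero_shot_cot(
    "A notebook costs $4 and a pen costs $1.50. "
    "What do 2 notebooks and 4 pens cost in total?"
)
# The model is expected to reply with the intermediate steps
# (2 x $4 = $8, 4 x $1.50 = $6, $8 + $6 = $14) before the final answer.
```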

Few-shot CoT, on the other hand, provides the model with examples of how similar problems have been solved, along with a question and answer format that includes the reasoning steps. This method is particularly useful for more complex problems where examples can guide the model in generating the correct sequence of steps. For example, when solving a math problem, few-shot CoT might present examples where similar problems are broken down into smaller steps, helping the model learn the appropriate reasoning process.
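
A minimal few-shot CoT sketch along those lines; the exemplar below is a tennis-ball problem in the style of published CoT demonstrations, and the prompt-assembly helper is an illustrative assumption.

```python
# Few-shot CoT: the exemplar's answer spells out the reasoning steps, and the
# new question is posed in the same Q/A format so the model imitates them.
EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def few_shot_cot(question: str) -> str:
    return f"{EXEMPLAR}\nQ: {question}\nA:"

prompt = few_shot_cot(
    "The cafeteria had 23 apples. It used 20 for lunch and bought 6 more. "
    "How many apples does it have?"
)
```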

CoT prompting is especially useful for tasks like math word problems, commonsense reasoning, and symbolic manipulation (Image credit)

How to craft an effective Chain of Thought prompt

The effectiveness of Chain of Thought prompting depends on the quality of the prompts used. Well-crafted prompts should be clear, concise, and tailored to the specific task at hand. Avoiding jargon and using language that the model can easily understand are crucial. Additionally, matching the prompt to the task ensures that the model can generate the correct answer. For complex tasks, it is important to provide detailed and relevant prompts that guide the model through each step of the reasoning process.
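
As a small illustration of matching the prompt to the task, compare a vague prompt with one tailored to the reasoning steps; both examples are invented for this sketch.

```python
# Vague: the model has to guess what "explain" means and what the inputs are.
vague_prompt = "Explain the discount."

# Tailored: states the facts and names each step the reasoning should take.
tailored_prompt = (
    "A $80 jacket is discounted by 25%, then a $5 coupon is applied. "
    "Step by step: compute the discount amount, subtract it from the price, "
    "then subtract the coupon and state the final price."
)
```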

Self-consistency is a technique that can further enhance the performance of CoT prompting. This involves generating multiple diverse chains of thought for the same problem and selecting the most consistent answer from these chains. This approach has been shown to significantly improve performance on arithmetic and commonsense reasoning benchmarks. For instance, in the GSM8K benchmark, self-consistency improved performance from 17.9% to 74% accuracy.
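
A sketch of the self-consistency loop, assuming a hypothetical sample_llm client and a naive answer extractor; both are stand-ins, not a specific library's API.

```python
from collections import Counter

def sample_llm(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical sampled LLM call; replace with your provider's client."""
    raise NotImplementedError("wire up your LLM provider here")

def extract_final_answer(chain: str) -> str:
    # Naive extraction: assume each chain ends with "The answer is X."
    return chain.rsplit("The answer is", 1)[-1].strip(" .")

def self_consistent_answer(prompt: str, n_samples: int = 10) -> str:
    # Sample several diverse reasoning chains at non-zero temperature,
    # then keep the answer the chains most often agree on.
    answers = [extract_final_answer(sample_llm(prompt)) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```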

Don’t forget about advanced CoT techniques

Beyond the basic implementation of Chain of Thought prompting, several advanced techniques can be employed to further improve the performance of LLMs. Multimodal CoT, for example, incorporates both text and images into the reasoning process, enhancing the model’s ability to understand and generate accurate answers. This approach has been shown to outperform traditional methods, such as using image captions alone, by providing a richer context for the model to reason about.

Compared to standard prompting, CoT prompting excels in tasks requiring intricate multi-step reasoning (Image credit)

Least-to-Most prompting is another advanced technique that involves breaking down a complex problem into simpler subproblems and solving them sequentially. This method is particularly effective for tasks that require symbolic manipulation, compositional generalization, and math reasoning. By starting with the simplest subproblems and gradually increasing complexity, Least-to-Most prompting allows the model to build on previous answers and solve more difficult problems with higher accuracy.
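
A sketch of the two-stage Least-to-Most flow, again with a hypothetical call_llm stand-in for your provider's client; the decomposition and solving prompts are illustrative assumptions.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError("wire up your LLM provider here")

def least_to_most(problem: str) -> str:
    # Stage 1: ask the model to decompose the problem into simpler subproblems.
    decomposition = call_llm(
        "Break this problem into a numbered list of simpler subproblems, "
        f"easiest first:\n{problem}"
    )
    subproblems = [line for line in decomposition.splitlines() if line.strip()]

    # Stage 2: solve the subproblems in order, feeding each answer back into
    # the context so later subproblems can build on earlier ones.
    context = problem
    answer = ""
    for sub in subproblems:
        answer = call_llm(f"{context}\n\nSubproblem: {sub}\nAnswer:")
        context += f"\n{sub}\nAnswer: {answer}"
    return answer  # the answer to the final, hardest subproblem
```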

Applying Chain of Thought prompting in various domains

Chain of Thought prompting can be applied across a wide range of domains, including arithmetic reasoning, commonsense reasoning, symbolic reasoning, natural language inference, and question answering. In arithmetic reasoning, CoT prompting has been shown to achieve state-of-the-art performance on benchmarks like GSM8K. For commonsense reasoning, CoT prompting improves the model’s ability to reason about physical and human interactions based on general knowledge.

In symbolic reasoning tasks, such as the last letter concatenation and coin flip problems, CoT prompting enables the model to handle inference-time inputs longer than those seen in few-shot exemplars. For question answering, CoT prompting helps the model understand complex questions by breaking them down into logical steps, improving its ability to generate accurate answers.
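
A sketch of a few-shot CoT prompt for the last-letter concatenation task; the exemplar and helper names are illustrative assumptions.

```python
# Few-shot CoT for last-letter concatenation: the exemplar shows the
# letter-by-letter reasoning, so the model can generalize to inputs with
# more words than it saw in the exemplar.
EXEMPLAR = (
    'Q: Take the last letters of the words in "Elon Musk" and concatenate them.\n'
    'A: The last letter of "Elon" is "n". The last letter of "Musk" is "k". '
    'Concatenating them gives "nk". The answer is nk.\n'
)

def last_letter_prompt(phrase: str) -> str:
    return (
        f"{EXEMPLAR}\n"
        f'Q: Take the last letters of the words in "{phrase}" and concatenate them.\n'
        "A:"
    )

prompt = last_letter_prompt("Ada Grace Alan")  # three words, longer than the exemplar
```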

Featured image credit: Freepik