This paper investigates the performance of Large Language Models (LLMs) and
Tool-augmented LLMs in tackling complex mathematical reasoning tasks. We
introduce IMP-TIP: Improving Math Reasoning with Tool-augmented Interleaf
Prompting, a framework that combines the strengths of both LLMs and
Tool-augmented LLMs. IMP-TIP follows the “From Good to Great” concept,
collecting multiple potential solutions from both LLMs and their tool-augmented
counterparts for the same math problem, and then selecting or re-generating the
most accurate answer after cross-checking these solutions via tool-augmented
interleaf prompting. The framework incorporates two key aspects: self-prompt
and tool-augmented interleaf prompting (TIP). The former allows LLMs to
autonomously refine and improve an initial prompt related to tool usage, while
the latter enables LLMs to derive the final answer by dynamically analyzing the
problem, cross-checking potential solutions, and revising previous reasoning
hints in an interleaved manner. Experimental analysis shows that IMP-TIP
achieves enhanced mathematical capabilities and outperforms traditional LLMs
and tool-augmented LLMs in accuracy and reasoning diversity on math reasoning
tasks. For instance, IMP-TIP can improve Tool-augmented ChatGPT on GSM8K-Hard
from 56.0% to 65.2%.

Large Language Models (LLMs) have shown great potential in various natural language processing tasks, but their performance in complex mathematical reasoning tasks has been limited. This paper introduces IMP-TIP, a framework that combines the strengths of LLMs and Tool-augmented LLMs to improve math reasoning.

IMP-TIP follows the “From Good to Great” concept by collecting multiple potential solutions from both LLMs and their tool-augmented counterparts for the same math problem. This leverages the complementary strengths of the two settings and raises the chance of obtaining an accurate answer. By cross-checking these solutions via tool-augmented interleaf prompting (TIP), the framework can then select, or re-generate, the most accurate one.
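This collect-then-select stage can be sketched roughly as follows. Note this is a minimal illustration under assumptions of my own: the solver callables stand in for real LLM and tool-augmented LLM calls, and the agreement threshold is a hypothetical heuristic, not the paper's actual selection rule.

```python
from collections import Counter
from typing import Callable, List, Optional

def collect_candidates(problem: str,
                       solvers: List[Callable[[str], str]],
                       samples_per_solver: int = 3) -> List[str]:
    """Sample candidate answers from each solver, e.g. a plain LLM and a
    tool-augmented LLM wrapped as callables (placeholders here)."""
    return [solver(problem)
            for solver in solvers
            for _ in range(samples_per_solver)]

def select_or_flag(candidates: List[str],
                   min_agreement: float = 0.5) -> Optional[str]:
    """Return the majority answer if it clears the agreement threshold;
    otherwise return None, signalling that the candidates disagree and
    should be cross-checked / re-generated via interleaf prompting."""
    answer, count = Counter(candidates).most_common(1)[0]
    if count / len(candidates) >= min_agreement:
        return answer
    return None
```

When `select_or_flag` returns `None`, the framework would fall through to the interleaved cross-checking stage rather than committing to a low-consensus answer.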

The framework incorporates two key aspects: self-prompt and TIP. Self-prompt allows LLMs to autonomously refine and improve an initial prompt related to tool usage, letting the models adapt and optimize their reasoning to the available tools. TIP, in turn, enables LLMs to dynamically analyze the problem, cross-check potential solutions, and revise previous reasoning hints in an interleaved manner; this iterative process strengthens the models’ reasoning and improves the accuracy of their answers.
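The interleaved analyze / cross-check / revise cycle might look like the loop below. This is a sketch, not the paper's implementation: the `llm` callable, the prompt format in `build_prompt`, the response keys, and the majority-vote fallback are all illustrative assumptions.

```python
from typing import Callable, Dict, List

def build_prompt(problem: str, candidates: List[str], hints: List[str]) -> str:
    """Assemble an interleaved prompt from the problem, the candidate
    answers to cross-check, and any hints revised in earlier rounds.
    (Hypothetical format, not the paper's exact template.)"""
    return (f"Problem: {problem}\n"
            f"Candidate answers: {', '.join(candidates)}\n"
            f"Hints so far: {'; '.join(hints) if hints else 'none'}")

def tip_loop(problem: str,
             candidates: List[str],
             llm: Callable[[str], Dict[str, str]],
             max_rounds: int = 3) -> str:
    """Iteratively analyze the problem and cross-check candidates until
    the model commits to a final answer, carrying revised reasoning
    hints into each new round; fall back to a majority vote."""
    hints: List[str] = []
    for _ in range(max_rounds):
        response = llm(build_prompt(problem, candidates, hints))
        if "final_answer" in response:
            return response["final_answer"]
        hints.append(response["revised_hint"])
    return max(set(candidates), key=candidates.count)
```

The key design point the sketch tries to capture is the interleaving: each round's prompt folds in the hints produced by earlier rounds, so analysis, cross-checking, and revision alternate rather than running as separate one-shot passes.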

Experiments on math reasoning tasks show that IMP-TIP outperforms both traditional LLMs and tool-augmented LLMs in accuracy and reasoning diversity. For example, IMP-TIP lifts the accuracy of tool-augmented ChatGPT on GSM8K-Hard from 56.0% to 65.2%, demonstrating the value of combining the strengths of the two approaches.

The work is inherently multi-disciplinary, drawing on natural language processing, mathematics, and machine learning. By bridging these areas, IMP-TIP offers a promising route to stronger mathematical capabilities in language models: its ability to autonomously refine prompts and to analyze problems dynamically through interleaf prompting points the way for advancing mathematical reasoning tasks.

Moving forward, it would be interesting to investigate the scalability of the IMP-TIP framework and its applicability to a wider range of mathematical reasoning tasks. Additionally, exploring different techniques for integrating tools with LLMs and further refining the self-prompting mechanism could lead to even more significant improvements in accuracy and reasoning diversity. Overall, IMP-TIP provides valuable insights into addressing complex mathematical reasoning challenges and paves the way for future advancements in the field.