#1 MP: Endowing Large Language Models with Lateral Thinking

Authors: Tian Bai, Yongwang Cao, Yan Ge, Haitao Yu

Recent studies show that Large Language Models (LLMs) often fall short in tasks demanding creative, lateral thinking because they lack a clear awareness of their own reasoning processes. To address this issue, we propose a novel metacognitive prompting method (termed MP) that mimics human metacognition. By integrating metacognitive principles, MP endows LLMs with lateral thinking ability, enhancing their capacity to strategize, monitor, and reflect on their responses when dealing with creative tasks. Experimental results with five base LLMs across three lateral thinking datasets demonstrate that all LLMs armed with MP consistently outperform representative baseline methods. For example, MP outperforms CoT prompting on Sentence Puzzle (+5.00%), Word Puzzle (+10.07%), BiRdQA (+6.48%), and RiddleSense (+2.65%) with the GPT-3.5-turbo model. In particular, MP with GPT-4 achieves significant performance improvements that even surpass human performance on the BRAINTEASER benchmark, demonstrating the transformative potential of MP for enhancing the creative problem-solving abilities of LLMs.
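
The abstract gives no implementation details, but the strategize / monitor / reflect cycle it describes maps naturally onto a staged prompt chain. Below is a minimal sketch, assuming the OpenAI Python SDK (v1+) as the chat backend; the stage wording and the helper `metacognitive_prompt` are hypothetical illustrations of the general idea, not the paper's actual prompts.

```python
# Minimal sketch of a metacognitive prompting loop (hypothetical prompt
# wording, not the paper's). Assumes the openai Python SDK v1+ and an
# OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(messages, model="gpt-3.5-turbo"):
    """One chat-completion call; returns the assistant's text."""
    resp = client.chat.completions.create(model=model, messages=messages)
    return resp.choices[0].message.content

def metacognitive_prompt(riddle: str) -> str:
    """Three staged calls mirroring strategize -> monitor -> reflect."""
    # Stage 1: strategize — flag that lateral thinking may be needed.
    history = [{"role": "user", "content":
        f"Riddle: {riddle}\nFirst, devise a strategy: this question may "
        f"require lateral (non-literal) thinking. Outline your plan."}]
    plan = ask(history)
    # Stage 2: monitor — check the plan for subverted default assumptions.
    history += [{"role": "assistant", "content": plan},
                {"role": "user", "content":
        "Now monitor your plan: does it rest on a default assumption the "
        "riddle is designed to subvert? Revise it if so, then answer."}]
    answer = ask(history)
    # Stage 3: reflect — verify the answer against every clue.
    history += [{"role": "assistant", "content": answer},
                {"role": "user", "content":
        "Finally, reflect: check your answer against every clue and give "
        "your final answer on one line."}]
    return ask(history)

if __name__ == "__main__":
    print(metacognitive_prompt("What has keys but can't open locks?"))
```

In this reading, each stage sees the full conversation so far, so the reflection step can catch an answer that contradicts the earlier plan; how the paper actually structures the stages may differ.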

Subject: AAAI.2025 - Natural Language Processing