Large language models (LLMs) have achieved remarkable success in natural language processing tasks but still struggle with complex causal and logical reasoning. Previous neuro-symbolic methods can be summarized as a two-stage framework: first translating natural language (NL) problems into symbolic language (SL) representations, and then performing symbolic reasoning over them. To facilitate this direction, we provide a comprehensive survey that summarizes two main challenges, complex logical question answering (QA) and cross-question logical consistency, and further proposes a new taxonomy. To achieve precise symbolic representations and enhance the accuracy of LLMs' logical reasoning, we propose several effective and efficient approaches, including adaptively selecting the most suitable SL for each QA problem, a data-driven approach to determining the order of fine-tuning samples, and an efficient multi-agent debate framework with sparse communication. Our future research will focus on theoretical analysis of optimal SL selection, translation refinement, and robust neuro-symbolic approaches to further improve LLMs' reasoning.
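
To make the two-stage framework concrete, the following is a minimal, hypothetical sketch (not the proposed method): an assumed translate_to_sl function stands in for the LLM's NL-to-SL translation step, and a toy forward-chaining engine stands in for the symbolic reasoner.

```python
def translate_to_sl(question: str) -> dict:
    """Hypothetical NL -> SL translation (produced by an LLM in practice)."""
    # Example output for: "All birds can fly. Tweety is a bird. Can Tweety fly?"
    return {
        "facts": {("bird", "tweety")},
        "rules": [({("bird", "X")}, ("fly", "X"))],  # if bird(X) then fly(X)
        "query": ("fly", "tweety"),
    }

def forward_chain(facts, rules):
    """Derive all facts entailed by simple single-variable Horn rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            # constants that satisfy every literal in the rule body
            candidates = None
            for pred, _ in body:
                consts = {arg for p, arg in derived if p == pred}
                candidates = consts if candidates is None else candidates & consts
            for c in candidates or set():
                new_fact = (head[0], c)
                if new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

def answer(question: str) -> bool:
    sl = translate_to_sl(question)          # stage 1: NL -> SL
    derived = forward_chain(sl["facts"], sl["rules"])  # stage 2: symbolic reasoning
    return sl["query"] in derived

print(answer("All birds can fly. Tweety is a bird. Can Tweety fly?"))  # True
```

In practice the translation is produced by an LLM and the reasoner is a full symbolic engine (e.g., a first-order logic or logic-programming solver); this sketch only illustrates the division of labor between the two stages.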