
#1 Teaching Transformers to Solve Combinatorial Problems through Efficient Trial & Error

Authors: Panagiotis Giannoulis, Yorgos Pantis, Christos Tzamos

Despite their proficiency in many language tasks, Large Language Models (LLMs) struggle with combinatorial problems such as Satisfiability, the Traveling Salesman Problem, and even basic arithmetic. We address this gap with a novel approach to solving problems in the class NP. We focus on the paradigmatic task of Sudoku and achieve state-of-the-art accuracy (99\%), outperforming prior neuro-symbolic approaches. Unlike prior work that relied on custom architectures, our method employs a vanilla decoder-only Transformer (GPT-2) without external tools or function calling. Our method integrates imitation learning of simple Sudoku rules with an explicit Depth-First Search (DFS) exploration strategy based on informed guessing and backtracking. Moving beyond imitation learning, we seek to minimize the number of guesses required to reach a solution. We provide a rigorous analysis of this setup by formalizing its connection to a contextual variant of $\textit{Min-Sum Set Cover}$, a well-studied problem in algorithms and stochastic optimization.
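To make the described search strategy concrete, here is a minimal symbolic sketch of DFS-style Sudoku solving: propagate simple rules, guess on the most constrained cell, and backtrack on contradiction. This is an illustration only, not the paper's Transformer pipeline; the grid encoding, the helper names (`candidates`, `solve`), and the minimum-remaining-values guessing heuristic are assumptions of the sketch.

```python
from typing import List, Optional, Set

Grid = List[List[int]]  # 9x9 grid; 0 marks an empty cell


def candidates(grid: Grid, r: int, c: int) -> Set[int]:
    """Digits not ruled out for cell (r, c) by the row, column, and box rules."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
    return set(range(1, 10)) - used


def solve(grid: Grid, stats: dict) -> Optional[Grid]:
    """DFS: fill forced cells by rule, then guess on the tightest cell."""
    filled = []  # forced moves made in this frame, undone on backtrack
    progress = True
    while progress:  # rule step: repeatedly fill cells with a unique candidate
        progress = False
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    cands = candidates(grid, r, c)
                    if not cands:  # contradiction: undo forced moves, backtrack
                        for rr, cc in filled:
                            grid[rr][cc] = 0
                        return None
                    if len(cands) == 1:
                        grid[r][c] = cands.pop()
                        filled.append((r, c))
                        progress = True

    # Informed guessing: pick the empty cell with the fewest candidates.
    best = None
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                cands = candidates(grid, r, c)
                if best is None or len(cands) < len(best[2]):
                    best = (r, c, cands)
    if best is None:
        return grid  # no empty cells left: solved

    r, c, cands = best
    for v in sorted(cands):
        stats["guesses"] += 1  # the quantity being minimized
        grid[r][c] = v
        if solve(grid, stats) is not None:
            return grid
        grid[r][c] = 0  # guess refuted: backtrack
    for rr, cc in filled:  # all guesses failed: undo forced moves too
        grid[rr][cc] = 0
    return None
```

Calling `solve(puzzle, {"guesses": 0})` on a 9x9 list of lists (0 for blanks) fills the grid in place and returns it, or `None` if the puzzle is unsatisfiable; the counter tracks the quantity the abstract seeks to minimize. For context, classic Min-Sum Set Cover asks for an ordering of sets that minimizes the sum, over elements, of the position of the first set covering each element, which is the flavor of objective the contextual variant generalizes.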

Subject: NeurIPS.2025 - Poster