#1 Fleet of Agents: Coordinated Problem Solving with Large Language Models

Authors: Lars Klein, Nearchos Potamitis, Roland Aydin, Robert West, Caglar Gulcehre, Akhil Arora

While numerous frameworks have been developed to enhance the reasoning abilities of large language models (LLMs), there is a scarcity of methods that effectively balance the trade-off between cost and quality. In this paper, we introduce Fleet of Agents (FoA), a novel and intuitive yet principled framework that uses LLMs as agents to navigate dynamic tree searches via a genetic-type particle filtering approach. FoA spawns a multitude of agents, each exploring the search space autonomously, followed by a selection phase in which resampling based on a heuristic value function optimizes the balance between exploration and exploitation. This mechanism enables dynamic branching, adapting the exploration strategy to the solutions discovered so far. We conduct extensive experiments on four benchmark tasks, "Game of 24", "Mini-Crosswords", "WebShop", and "SciBench", using four different LLMs: GPT-3.5, GPT-4, LLaMA3.2-11B, and LLaMA3.2-90B. On average across all tasks and LLMs, FoA obtains an absolute quality improvement of $\simeq 5\%$ while requiring only $\simeq 35\%$ of the cost of previous SOTA methods. Notably, our analyses reveal that (1) FoA achieves the best cost-quality trade-off among all benchmarked methods, and (2) FoA + LLaMA3.2-11B surpasses the plain LLaMA3.2-90B model. FoA is publicly available at [https://github.com/au-clan/FoA](https://github.com/au-clan/FoA).
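To make the selection mechanism described in the abstract concrete, here is a minimal sketch of a genetic-type particle filtering loop over a fleet of agents. It is written from the abstract alone, not from the FoA codebase: the names `fleet_search`, `step`, `value_fn`, `n_agents`, and `n_steps` are hypothetical placeholders and do not reflect the actual API of the au-clan/FoA repository.

```python
# Sketch of a genetic-type particle filtering loop over a fleet of agents,
# inferred from the abstract. All names below are illustrative assumptions.
import random

def fleet_search(root_state, step, value_fn, n_agents=8, n_steps=10):
    """Run a fleet of agents with periodic resampling.

    step(state)     -> new state proposed by one LLM agent (exploration).
    value_fn(state) -> non-negative heuristic score; higher = more promising.
    """
    # Spawn the fleet: every agent starts from the same root state.
    states = [root_state] * n_agents
    for _ in range(n_steps):
        # Each agent independently explores one step of the search tree.
        states = [step(s) for s in states]
        # Score every agent's current state with the heuristic value function.
        weights = [value_fn(s) for s in states]
        if sum(weights) == 0:
            continue  # no signal yet; keep exploring without resampling
        # Selection phase: resample agents in proportion to their scores.
        # Promising states are duplicated (dynamic branching) while weak
        # ones die out, trading off exploration against exploitation.
        states = random.choices(states, weights=weights, k=n_agents)
    # Return the highest-valued state found by the fleet.
    return max(states, key=value_fn)
```

Under this reading, the resampling step is what distinguishes FoA from a fixed-width beam or tree search: the branching factor adapts at runtime to how the value function scores the states the agents have discovered.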

Subject: ICML.2025 - Poster