Designing optimal prompts for Large Language Models (LLMs) is a complex and resource-intensive task that often requires substantial human expertise. Existing approaches typically optimize prompt instructions and in-context learning examples separately, leading to incohesive, suboptimal results. To overcome this limitation, we propose a novel Cohesive In-Context Prompt Optimization framework that refines prompt instructions and examples jointly. In our formulation, cohesiveness refers to the degree to which instructions and examples work synergistically to improve task performance, emerging as a byproduct of performance-driven optimization. However, formulating such an optimization in the discrete, high-dimensional space of natural language poses significant challenges to both convergence and computational efficiency. To address these issues, we introduce SEE, a scalable and efficient prompt optimization framework that adopts metaheuristic optimization principles and strategically balances exploration and exploitation to achieve strong optimization performance and efficient convergence. SEE features a quad-phased design that alternates between global traversal (exploration) and local optimization (exploitation) and adaptively selects LLM operators during the optimization process. In a comprehensive evaluation across 35 benchmark tasks, SEE outperforms state-of-the-art baselines by a large margin, achieving an average performance gain of **13.94** while reducing computational costs by **58.67%**.
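To make the exploration/exploitation alternation and adaptive operator selection concrete, the sketch below shows the general shape of such a metaheuristic prompt-optimization loop. This is a minimal illustration under stated assumptions, not the actual quad-phased SEE algorithm: the operator names (`paraphrase`, `add_example`, `tweak_instruction`), the toy `score` function, the population size, and the two-phase alternation schedule are all hypothetical stand-ins. In a real system, each operator would call an LLM to rewrite the prompt, and fitness would be task accuracy on a development set.

```python
import random
import math

# Hypothetical stand-ins for LLM-backed edit operators; a real implementation
# would prompt a model to rewrite the instruction or edit the example set.
def paraphrase(prompt):         # exploration: global rewrite of the prompt
    return prompt + " [paraphrased]"

def add_example(prompt):        # exploitation: local refinement of examples
    return prompt + " [example added]"

def tweak_instruction(prompt):  # exploitation: local edit of the instruction
    return prompt + " [instruction tweaked]"

OPERATORS = [paraphrase, add_example, tweak_instruction]

def score(prompt):
    """Toy fitness; a real system would evaluate task accuracy on a dev set."""
    return random.random() + 0.01 * len(prompt)

def softmax_pick(weights, temperature=1.0):
    """Sample an operator index with probability proportional to exp(weight)."""
    exps = [math.exp(w / temperature) for w in weights]
    r = random.uniform(0, sum(exps))
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e
        if r <= acc:
            return i
    return len(exps) - 1

def optimize(seed_prompts, rounds=8, pop_size=4):
    population = [(p, score(p)) for p in seed_prompts]
    gains = [0.0] * len(OPERATORS)  # running credit per operator

    for t in range(rounds):
        explore = t % 2 == 0  # alternate global traversal / local optimization
        candidates = list(population)
        for prompt, fitness in population:
            if explore:
                op_idx = 0                    # global rewrite during exploration
            else:
                op_idx = softmax_pick(gains)  # adaptive operator choice
            child = OPERATORS[op_idx](prompt)
            child_fitness = score(child)
            gains[op_idx] += child_fitness - fitness  # credit assignment
            candidates.append((child, child_fitness))
        # Elitist selection: keep the fittest prompts for the next round.
        population = sorted(candidates, key=lambda x: -x[1])[:pop_size]

    return population[0]

best_prompt, best_score = optimize(["Classify the sentiment of the text."])
print(best_prompt, best_score)
```

The softmax credit-assignment step mirrors the adaptive operator selection described above: operators whose edits have historically improved fitness are sampled more often during exploitation, while the periodic exploration phase keeps the search from collapsing onto a local optimum.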