iT4xhHsR0F@OpenReview

Total: 1

#1 STIMULUS: Achieving Fast Convergence and Low Sample Complexity in Stochastic Multi-Objective Learning

Authors: Zhuqing Liu, Chaosheng Dong, Michinari Momma, Simone Shao, Shaoyuan Xu, Yan Gao, Haibo Yang, Jia Liu

Recently, multi-objective optimization (MOO) has gained attention for its broad applications in ML, operations research, and engineering. However, MOO algorithm design remains in its infancy, and many existing MOO methods suffer from unsatisfactory convergence rates and sample complexity. To address this challenge, in this paper we propose an algorithm called STIMULUS (**st**ochastic path-**i**ntegrated **mul**ti-gradient rec**u**rsive e**s**timator), a new and robust approach for solving MOO problems. Unlike traditional methods, STIMULUS introduces a simple yet powerful recursive framework for updating stochastic gradient estimates, improving convergence performance with low sample complexity. In addition, we introduce an enhanced version of STIMULUS, termed STIMULUS-M, which incorporates a momentum term to further expedite convergence. We establish O(1/T) convergence rates for the proposed methods in non-convex settings and O(exp(−μT)) in strongly convex settings, where T is the total number of iteration rounds. Additionally, we achieve state-of-the-art sample complexities of O(n + √n ϵ⁻¹) for non-convex settings and O(n + √n ln(μ/ϵ)) for strongly convex settings, where ϵ > 0 is a desired stationarity error. Moreover, to alleviate the periodic full-gradient evaluation requirement in STIMULUS and STIMULUS-M, we further propose enhanced versions with adaptive batching, called STIMULUS+ and STIMULUS-M+, and provide their theoretical analysis.
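
The recursive, path-integrated estimator described in the abstract follows the general SARAH/SPIDER pattern: a full gradient is computed periodically, and in between, each objective's gradient estimate is corrected with a mini-batch difference term. Below is a minimal, illustrative sketch of that pattern, not the authors' implementation: the two toy least-squares objectives, the two-objective MGDA min-norm weighting, and the parameters `q`, `eta`, and `batch` are all assumptions chosen for demonstration.

```python
# Illustrative sketch (assumed setup, not the STIMULUS paper's code):
# a SARAH/SPIDER-style recursive multi-gradient estimator for a
# two-objective finite-sum problem, combined via MGDA min-norm weights.
import numpy as np

rng = np.random.default_rng(0)
n, dim = 256, 10                      # samples per objective, dimension (toy sizes)
A1, A2 = rng.normal(size=(n, dim)), rng.normal(size=(n, dim))
b1, b2 = rng.normal(size=n), rng.normal(size=n)

def grad(A, b, x, idx):
    """Mini-batch gradient of the toy objective mean_i (a_i.x - b_i)^2 / 2."""
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

def min_norm_2(g1, g2):
    """Closed-form MGDA weights for two objectives: min_l ||l*g1 + (1-l)*g2||."""
    diff = g1 - g2
    denom = diff @ diff
    lam = 0.5 if denom == 0 else np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return lam * g1 + (1 - lam) * g2

x = np.zeros(dim)
eta, q, batch = 0.05, 32, 8           # step size, epoch length, batch size (assumed)
v = [None, None]                      # recursive gradient estimates, one per objective
x_prev = x.copy()
for t in range(200):
    if t % q == 0:                    # periodic full-gradient refresh
        v[0] = grad(A1, b1, x, np.arange(n))
        v[1] = grad(A2, b2, x, np.arange(n))
    else:                             # recursive correction on a shared mini-batch
        idx = rng.choice(n, batch, replace=False)
        v[0] = grad(A1, b1, x, idx) - grad(A1, b1, x_prev, idx) + v[0]
        v[1] = grad(A2, b2, x, idx) - grad(A2, b2, x_prev, idx) + v[1]
    d_t = min_norm_2(v[0], v[1])      # common descent direction across objectives
    x_prev, x = x, x - eta * d_t
```

The adaptive-batching variants (STIMULUS+/STIMULUS-M+) would replace the `t % q == 0` full-gradient refresh with growing mini-batches, and the momentum variant (STIMULUS-M) would add a momentum term to the update of `x`; both are omitted here for brevity.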

Subject: UAI.2025 - Poster