5QneXT1qoc@OpenReview

Total: 1

#1 FedRAM: Federated Reweighting and Aggregation for Multi-Task Learning

Authors: Fan Wu, Xinyu Yan, Jiabei Liu, Wei Yang Bryan Lim

Federated Multi-Task Learning (FL-MTL) enables clients with heterogeneous data to collaboratively train models capable of handling multiple downstream tasks. However, FL-MTL faces key challenges, including statistical heterogeneity, task interference, and the need to balance local learning with global knowledge sharing. Traditional methods such as FedAvg struggle in such settings because they lack explicit mechanisms to address these issues. In this paper, we propose FedRAM, a three-step framework that progressively updates two scalar hyperparameters: the task importance weight and the client aggregation coefficient. FedRAM introduces a reference-proxy-agent strategy, in which the proxy model serves as an intermediary between the local reference model and the global agent model. This design reduces the need for repeated local training while preserving local performance. Extensive experiments on six real-world FL-MTL benchmarks show that FedRAM improves performance by at least 3$\%$ over the strongest baseline on both in-domain and out-of-domain tasks, while reducing computational cost by 15$\times$. These results make FedRAM a robust and practical solution for large-scale FL-MTL applications. The code is available at \url{https://github.com/wwffvv/FedRAM}.
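
The abstract does not spell out the update rules, but a minimal sketch of how the two scalar hyperparameters it names might be used, per-task importance weights in the local loss and per-client aggregation coefficients at the server, could look like the following. All function and variable names here are illustrative assumptions, not taken from the paper or its repository:

```python
import torch

def weighted_multitask_loss(task_losses, task_weights):
    """Combine per-task losses with scalar importance weights (illustrative).

    task_losses: dict mapping task name -> scalar loss tensor
    task_weights: dict mapping task name -> non-negative float
    """
    return sum(task_weights[t] * task_losses[t] for t in task_losses)

def aggregate(client_states, agg_coeffs):
    """Server-side weighted averaging of client model parameters.

    FedAvg-style, but with per-client coefficients (which FedRAM would
    update progressively) instead of fixed data-size weights.

    client_states: list of state_dicts with float tensors
    agg_coeffs: list of floats summing to 1
    """
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(c * s[key] for c, s in zip(agg_coeffs, client_states))
    return global_state

# Toy example: two clients, hand-picked coefficients (learned in practice).
states = [{"w": torch.tensor([1.0, 2.0])}, {"w": torch.tensor([3.0, 4.0])}]
print(aggregate(states, [0.25, 0.75]))  # {'w': tensor([2.5000, 3.5000])}
```

Under this reading, the reference-proxy-agent strategy would sit on top of these two primitives: the proxy model stands in for the local reference model when tuning the weights, so the coefficients can be updated without rerunning full local training each round.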

Subject: NeurIPS.2025 - Poster