wH3F1ZoK70@OpenReview

Total: 1

#1 Salient Concept-Aware Generative Data Augmentation

Authors: Tianchen Zhao, Xuanbai Chen, Zhihua Li, Jun Fang, Dongsheng An, Xiang Xu, Zhuowen Tu, Yifan Xing

Recent generative data augmentation methods conditioned on both image and text prompts struggle to balance fidelity and diversity, as it is challenging to preserve essential image details while aligning with varied text prompts. This challenge arises because representations in the synthesis process often become entangled with non-essential input image attributes, such as environmental context, creating conflicts with text prompts intended to modify those elements. To address this, we propose a personalized image generation framework that uses a salient concept-aware image embedding model to reduce the influence of irrelevant visual details during synthesis, thereby maintaining intuitive alignment between the image and text inputs. By generating images that better preserve class-discriminative features while introducing controlled variations, our framework enhances the diversity of training datasets and thereby improves the robustness of downstream models. Our approach demonstrates superior performance across eight fine-grained vision datasets, outperforming state-of-the-art augmentation methods with average classification accuracy improvements of 0.73% and 6.5% under conventional and long-tail settings, respectively.
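
The abstract describes conditioning generation on an image embedding that suppresses non-essential attributes such as environmental context. Below is a minimal, hypothetical sketch of that idea, assuming a CLIP-style encoder and an orthogonal projection that removes embedding directions associated with context descriptors; the encoder choice, the descriptor prompts, and the projection heuristic are illustrative assumptions, not the paper's actual method.

    # Hypothetical sketch: filter context-related directions out of a CLIP image
    # embedding so the remaining vector emphasizes class-discriminative content.
    # Nothing here is the paper's implementation; it only illustrates the idea.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    @torch.no_grad()
    def salient_image_embedding(image: Image.Image, context_prompts: list[str]) -> torch.Tensor:
        """Project out embedding directions tied to non-essential context."""
        img_inputs = processor(images=image, return_tensors="pt")
        z = model.get_image_features(**img_inputs)            # (1, d) image embedding
        z = z / z.norm(dim=-1, keepdim=True)

        txt_inputs = processor(text=context_prompts, return_tensors="pt", padding=True)
        C = model.get_text_features(**txt_inputs)             # (k, d) context directions
        C = C / C.norm(dim=-1, keepdim=True)

        # Remove the component of z lying in the span of the context directions,
        # approximately keeping only class-discriminative content.
        Q, _ = torch.linalg.qr(C.T)                           # orthonormal basis, (d, k)
        z_salient = z - (z @ Q) @ Q.T
        return z_salient / z_salient.norm(dim=-1, keepdim=True)

    # Example (illustrative): emb = salient_image_embedding(
    #     Image.open("bird.jpg"),
    #     ["a photo of a forest background", "a cluttered indoor scene"])

In a full augmentation pipeline of this kind, the filtered embedding would then condition an image-and-text guided generator (for instance, an IP-Adapter-style module) together with the new text prompt, so the prompt can vary context without fighting the image conditioning.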

Subject: NeurIPS.2025 - Poster