0053-Paper2088@2025@MICCAI

Total: 1

#1 AdFair-CLIP: Adversarial Fair Contrastive Language-Image Pre-training for Chest X-rays

Authors: Yi Chenlang, Xiong Zizhan, Qi Qi, Wei Xiyuan, Bathla Girish, Lin Ching-Long, Mortazavi Bobak J., Yang Tianbao

Contrastive Language-Image Pre-training (CLIP) models have demonstrated superior performance across various visual tasks, including medical image classification. However, fairness concerns, such as demographic biases, have received limited attention in CLIP models. This oversight leads to critical issues, particularly biases related to race and gender, resulting in disparities in diagnostic outcomes and reduced reliability for underrepresented groups. To address these challenges, we introduce AdFair-CLIP, a novel framework that employs adversarial feature intervention to suppress sensitive attributes, thereby mitigating spurious correlations and improving prediction fairness. We conduct comprehensive experiments on chest X-ray (CXR) datasets and show that AdFair-CLIP significantly enhances both fairness and diagnostic accuracy while maintaining robust generalization in zero-shot and few-shot scenarios. These results establish new benchmarks for fairness-aware learning in CLIP-based medical diagnostic models, particularly for CXR analysis.
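The abstract describes adversarial feature intervention for suppressing sensitive attributes during CLIP-style pre-training. The sketch below is a minimal, hypothetical illustration of that general idea (a gradient-reversal adversary predicting a demographic attribute from image embeddings, combined with a symmetric contrastive loss); it is not the authors' AdFair-CLIP implementation, and all names, dimensions, and the weighting `lambda_adv` are assumptions for illustration only.

```python
# Hypothetical sketch: adversarial attribute suppression on top of CLIP-style embeddings.
# Not the AdFair-CLIP code; architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; flips and scales gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None


class AdversarialAttributeHead(nn.Module):
    """Predicts a sensitive attribute (e.g., race or gender) from image embeddings.
    Training it through gradient reversal pushes the encoder to remove that
    attribute information from the shared representation."""
    def __init__(self, embed_dim: int, num_groups: int, alpha: float = 1.0):
        super().__init__()
        self.alpha = alpha
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim, embed_dim // 2),
            nn.ReLU(),
            nn.Linear(embed_dim // 2, num_groups),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        z_rev = GradientReversal.apply(z, self.alpha)
        return self.classifier(z_rev)


def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over matched image-report pairs."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


def training_step(img_emb, txt_emb, sensitive_labels, adv_head, lambda_adv=0.5):
    """Combined objective: contrastive alignment plus adversarial attribute suppression.
    Because of the reversal layer, minimizing the adversary's loss here is equivalent
    to the encoder maximizing it, i.e., hiding the sensitive attribute."""
    loss_clip = clip_contrastive_loss(img_emb, txt_emb)
    attr_logits = adv_head(img_emb)
    loss_adv = F.cross_entropy(attr_logits, sensitive_labels)
    return loss_clip + lambda_adv * loss_adv


if __name__ == "__main__":
    torch.manual_seed(0)
    batch, dim, groups = 8, 512, 2
    img_emb = torch.randn(batch, dim, requires_grad=True)   # stand-in for image-encoder output
    txt_emb = torch.randn(batch, dim, requires_grad=True)   # stand-in for text-encoder output
    labels = torch.randint(0, groups, (batch,))              # stand-in sensitive-attribute labels
    adv_head = AdversarialAttributeHead(dim, groups)
    loss = training_step(img_emb, txt_emb, labels, adv_head)
    loss.backward()
    print(f"combined loss: {loss.item():.4f}")
```

The paper's actual intervention mechanism, loss formulation, and training schedule may differ; this sketch only conveys the common gradient-reversal pattern for adversarial fairness in representation learning.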

Subject: MICCAI.2025