42327@AAAI


#1 Breaking Cross-View Associations: Byzantine Model Poisoning Attack against Vertical Federated Learning

Author: Jarin Tasneem

Federated learning (FL) has rapidly emerged as a pivotal framework for cross-silo collaborative training that keeps sensitive data localized, driven by growing data volumes and mounting privacy concerns. Within this paradigm, vertical federated learning (VFL) enables collaboration among parties holding different features of the same sample space, powering tasks such as fraud detection, medical diagnosis, and credit scoring. However, the participation of multiple entities creates new vulnerabilities to malicious interference. One critical yet underexplored threat in VFL is the Byzantine poisoning attack, in which an adversary intentionally corrupts training to degrade overall model performance. This work reveals a practical vulnerability: a single malicious participant can significantly reduce inference accuracy in a VFL system by breaking cross-view associations through feature-space corruption. Our findings emphasize the urgent need for robust, VFL-specific defenses to ensure reliability in collaborative, cross-silo AI systems.
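To make the threat model concrete, the following is a minimal sketch of the attack surface the abstract describes, not the paper's actual method. It assumes a toy two-party VFL setup in which each party holds half the features of the same samples and a linear top model is trained on their concatenated outputs; the malicious party "breaks cross-view associations" by permuting which sample each of its feature vectors belongs to. All names (`train_top_model`, `accuracy`, the data shapes) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy VFL setup: labels depend on ALL features, but the
# features are split vertically between party A and party B.
n, d = 400, 4
X = rng.normal(size=(n, 2 * d))
w_true = rng.normal(size=2 * d)
y = (X @ w_true > 0).astype(float)

X_a, X_b = X[:, :d], X[:, d:]  # party A's view and party B's view


def train_top_model(h_a, h_b, y, lr=0.5, epochs=200):
    """Logistic-regression top model over concatenated party views."""
    H = np.hstack([h_a, h_b])
    w = np.zeros(H.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-H @ w))
        w -= lr * H.T @ (p - y) / len(y)
    return w


def accuracy(w, h_a, h_b, y):
    H = np.hstack([h_a, h_b])
    return float((((H @ w) > 0) == (y > 0.5)).mean())


# Honest run: both parties contribute their true, aligned views.
w_clean = train_top_model(X_a, X_b, y)
acc_clean = accuracy(w_clean, X_a, X_b, y)

# Byzantine run: party B shuffles its rows, so its features no longer
# correspond to the same samples as party A's -- the cross-view
# association is destroyed even though the marginal feature
# distribution is unchanged (a simple feature-space corruption).
X_b_poisoned = X_b[rng.permutation(n)]
w_pois = train_top_model(X_a, X_b_poisoned, y)
acc_pois = accuracy(w_pois, X_a, X_b_poisoned, y)

print(f"clean accuracy:    {acc_clean:.2f}")
print(f"poisoned accuracy: {acc_pois:.2f}")
```

In this toy run the top model can still exploit party A's honest view, so accuracy degrades rather than collapses; the paper's point is that even a single such participant is enough to meaningfully hurt joint inference, which aggregation rules designed for horizontal FL do not detect.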

Subject: AAAI.2026 - Undergraduate Consortium