Large language models (LLMs) have attracted a large user base owing to their superior performance across various downstream tasks. Yet recent work reveals that LLMs are vulnerable to backdoor attacks, where an attacker injects a specific token trigger to manipulate the model's behavior during inference. Existing efforts have largely focused on single-trigger attacks, ignoring the variation in how different users respond to the same trigger, which often undermines attack effectiveness. In this work, we propose EmbedX, an effective and efficient cross-trigger backdoor attack against LLMs. Specifically, EmbedX exploits a continuous embedding vector as a soft trigger for backdooring LLMs, which enables trigger optimization in the semantic space. By mapping multiple tokens to the same soft trigger, EmbedX establishes a backdoor pathway that links these tokens to the attacker's target output. To ensure stealthiness, we devise a latent adversarial backdoor mechanism with dual constraints in the frequency and gradient domains, which crafts poisoned samples that remain close to the target samples. Through extensive experiments on four popular LLMs across both classification and generation tasks, we show that EmbedX achieves the attack goal effectively, efficiently, and stealthily while preserving model utility.
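To make the soft-trigger idea concrete, below is a minimal PyTorch sketch, not the authors' implementation, of how several attacker-chosen surface tokens could all be mapped to one learnable embedding vector before the input reaches the model. The vocabulary size, embedding dimension, token ids, and the helper `embed_with_soft_trigger` are hypothetical placeholders; optimizing the soft trigger and the frequency/gradient-domain constraints are omitted.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; a real victim LLM would supply these.
vocab_size, embed_dim = 32000, 4096

# Stand-in for the victim LLM's (frozen) token-embedding layer.
embedding = nn.Embedding(vocab_size, embed_dim)
embedding.weight.requires_grad_(False)

# The soft trigger: a single learnable vector in the model's embedding space.
soft_trigger = nn.Parameter(torch.randn(embed_dim) * 0.02)

# Several attacker-chosen surface tokens (hypothetical ids) that should all
# activate the same backdoor, e.g. tokens favored by different users.
trigger_token_ids = [1001, 2047, 3586]

def embed_with_soft_trigger(input_ids: torch.Tensor) -> torch.Tensor:
    """Return input embeddings in which every occurrence of a trigger token
    is replaced by the shared soft-trigger vector."""
    embeds = embedding(input_ids)                      # (batch, seq, dim)
    mask = torch.zeros_like(input_ids, dtype=torch.bool)
    for tid in trigger_token_ids:
        mask |= input_ids == tid
    # Splice the soft trigger into the positions of the surface trigger tokens.
    return torch.where(mask.unsqueeze(-1), soft_trigger, embeds)

# Example: a batch containing one of the surface trigger tokens.
input_ids = torch.tensor([[15, 1001, 87, 42]])
inputs_embeds = embed_with_soft_trigger(input_ids)
# `inputs_embeds` would then be fed to the LLM (e.g. via an `inputs_embeds`
# argument), and `soft_trigger` optimized so that any of the mapped tokens
# steers the model toward the attacker's target output.
```

Because all surface tokens are routed through the same optimized vector, a single backdoor pathway can be activated by whichever token a given user naturally uses, which is the cross-trigger property the abstract describes.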