Transformer-based models are highly vulnerable to adversarial attacks, in which even small perturbations can cause significant misclassifications. This paper introduces *I-Guard*, a defense framework that increases the robustness of transformer-based models against adversarial perturbations. *I-Guard* leverages model interpretability to identify the influential parameters responsible for adversarial misclassifications. By selectively fine-tuning a small fraction of model parameters, our approach effectively balances performance on both the original and adversarial test sets. In extensive experiments on English and code-mixed Hinglish datasets, we demonstrate that *I-Guard* significantly improves model robustness. Furthermore, we show that *I-Guard* transfers to other character-based perturbations.
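
To make the idea concrete, the sketch below shows one plausible reading of interpretability-guided selective fine-tuning in PyTorch. The attribution rule (magnitude of gradient times weight on adversarial examples), the top-fraction threshold, and all function names are illustrative assumptions, not the published *I-Guard* procedure.

```python
# Hypothetical sketch of interpretability-guided selective fine-tuning.
# The |grad * weight| attribution and the 1% fraction are assumptions,
# not I-Guard's actual attribution method or hyperparameters.
import torch
import torch.nn.functional as F

def influence_scores(model, adv_inputs, adv_labels):
    """Score every parameter by |grad * weight| on adversarial examples."""
    model.zero_grad()
    loss = F.cross_entropy(model(adv_inputs), adv_labels)
    loss.backward()
    return {
        name: (p.grad * p).abs()
        for name, p in model.named_parameters()
        if p.grad is not None
    }

def top_fraction_masks(model, scores, fraction=0.01):
    """Build 0/1 masks keeping only the highest-scoring fraction of weights."""
    flat = torch.cat([s.flatten() for s in scores.values()])
    k = max(1, int(fraction * flat.numel()))
    threshold = flat.topk(k).values.min()
    return {
        name: (scores[name] >= threshold).float()
        for name, _ in model.named_parameters()
        if name in scores
    }

def masked_finetune_step(model, optimizer, inputs, labels, masks):
    """One fine-tuning step that updates only the influential weights."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(inputs), labels)
    loss.backward()
    for name, p in model.named_parameters():
        if name in masks and p.grad is not None:
            p.grad.mul_(masks[name])  # zero gradients of frozen weights
    optimizer.step()
    return loss.item()
```

Under this reading, fine-tuning on a mix of clean and adversarial batches while masking gradients for non-influential weights is what would let the model gain adversarial robustness without sacrificing accuracy on the original test set.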