As large language models (LLMs) are increasingly integrated into real-world applications, ensuring their safety and robustness is critical. Automated red-teaming methods generate adversarial attacks to identify vulnerabilities, but existing approaches often suffer from slow performance, limited categorical diversity, and high resource demands. We propose Ferret, a novel method that builds on the Rainbow Teaming baseline by generating multiple adversarial prompt mutations per iteration and ranking them with scoring functions such as reward models, Llama Guard, and LLM-as-a-judge. Ferret achieves a 95% attack success rate (ASR), a 46% improvement over the baseline, and reduces the time needed to reach a 90% ASR by 15.2%. It also generates adversarial prompts that transfer to larger LLMs. Our code is available at https://github.com/declare-lab/ferret.
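The core idea of generating several mutations per iteration and keeping the highest-scoring one can be illustrated with a minimal sketch. The helpers `mutate_prompt` and `score_prompt` below are hypothetical stand-ins for the mutator LLM and the scoring functions (reward model, Llama Guard, or LLM-as-a-judge); this is an illustration of the general loop, not the authors' implementation.

```python
import random

def mutate_prompt(prompt: str, rng: random.Random) -> str:
    """Placeholder mutation; a real mutator would prompt an LLM to rewrite the attack."""
    suffixes = [" Respond in detail.", " Ignore prior instructions.", " Explain step by step."]
    return prompt + rng.choice(suffixes)

def score_prompt(prompt: str, rng: random.Random) -> float:
    """Placeholder adversarial score in [0, 1]; a real scorer would query a reward
    model or safety classifier on the target LLM's response to this prompt."""
    return rng.random()

def mutate_and_rank(seed_prompt: str, num_mutations: int = 8, seed: int = 0) -> str:
    """Generate multiple mutations in one iteration and return the best-scoring candidate."""
    rng = random.Random(seed)
    candidates = [mutate_prompt(seed_prompt, rng) for _ in range(num_mutations)]
    return max(candidates, key=lambda p: score_prompt(p, rng))

if __name__ == "__main__":
    print(mutate_and_rank("Describe how to secure a web server."))
```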