Post-training quantization is widely used to compress large language models (LLMs) for efficient deployment in resource-constrained environments. However, recent work shows that quantization, particularly aggressive schemes such as the 4-bit quantization used in QLoRA, can substantially degrade safety alignment, leaving models more vulnerable to harmful completions and jailbreaks. In this work, we investigate these safety risks and propose a mitigation strategy: projecting quantized parameters back into safety-aligned subspaces. First, we empirically measure safety degradation on benchmark datasets using both safety and utility metrics. Next, we explore projection-based restoration methods that recover alignment-preserving directions in the LoRA adapters of quantized models. Finally, we study how quantization affects safety-critical neurons identified through mechanistic interpretability, and how hybrid-precision designs can preserve them. By foregrounding the safety implications of model compression, this work aims to support more robust, deployment-ready, and ethically aligned LLMs.
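The core restoration idea can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's actual method: the function name `project_to_subspace`, the flattened weight vector, and the way the safety basis is obtained here (random then orthonormalized) are all hypothetical stand-ins for whatever safety-aligned directions the full approach would estimate from an aligned model.

```python
import numpy as np

def project_to_subspace(delta_w, safety_basis):
    """Project a weight update onto the span of safety-aligned directions.

    delta_w:      (d,) flattened LoRA weight update from the quantized adapter.
    safety_basis: (d, k) matrix with orthonormal columns spanning the
                  (hypothetical) safety-aligned subspace.
    Returns Q @ Q.T @ delta_w, the component of delta_w inside the subspace.
    """
    return safety_basis @ (safety_basis.T @ delta_w)

# Toy example: a 2-D "safety subspace" inside a 6-D parameter space.
rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.standard_normal((6, 2)))  # orthonormal columns
delta = rng.standard_normal(6)
restored = project_to_subspace(delta, basis)

# Orthogonal projection is idempotent: projecting again changes nothing.
assert np.allclose(project_to_subspace(restored, basis), restored)
```

In a real pipeline the basis would be derived from an aligned reference model (e.g. from directions along which alignment-relevant behavior varies), and the projection would be applied per adapter matrix rather than to one flattened vector; the sketch only shows the linear-algebraic operation itself.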