2025.acl-long.1188@ACL


#1 Fairness Beyond Performance: Revealing Reliability Disparities Across Groups in Legal NLP

Authors: Santosh T.y.s.s, Irtiza Chowdhury

Fairness in NLP must extend beyond performance parity to encompass equitable reliability across groups. This study exposes a critical blind spot: models often make less reliable or overconfident predictions for marginalized groups, even when overall performance appears fair. Using the FairLex benchmark as a case study in legal NLP, we systematically evaluate both performance and reliability disparities across demographic, regional, and legal attributes spanning four jurisdictions. We show that domain-specific pre-training consistently improves both performance and reliability, especially for underrepresented groups. However, common bias mitigation methods frequently worsen reliability disparities, revealing a trade-off not captured by performance metrics alone. Our results call for a rethinking of fairness in high-stakes NLP: to ensure equitable treatment, models must not only be accurate, but also reliably self-aware across all groups.
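One common way to quantify the kind of per-group reliability disparity the abstract describes is expected calibration error (ECE) computed separately for each group. The sketch below is illustrative only, assuming a standard binned ECE; the function names, binning scheme, and toy data are not from the paper.

```python
# Hypothetical sketch: per-group expected calibration error (ECE).
# A model can have equal accuracy across groups yet be overconfident
# for one of them, which per-group ECE makes visible.
from collections import defaultdict

def ece(confidences, correct, n_bins=10):
    """Binned ECE: weighted mean of |accuracy - avg confidence| per bin."""
    bins = defaultdict(list)
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into last bin
        bins[idx].append((conf, ok))
    total = len(confidences)
    err = 0.0
    for items in bins.values():
        avg_conf = sum(c for c, _ in items) / len(items)
        acc = sum(ok for _, ok in items) / len(items)
        err += (len(items) / total) * abs(acc - avg_conf)
    return err

def per_group_ece(groups, confidences, correct, n_bins=10):
    """Split predictions by group label, then compute ECE within each group."""
    by_group = defaultdict(lambda: ([], []))
    for g, c, ok in zip(groups, confidences, correct):
        by_group[g][0].append(c)
        by_group[g][1].append(ok)
    return {g: ece(cs, oks, n_bins) for g, (cs, oks) in by_group.items()}
```

With toy data where both groups predict at 0.9 confidence but one group is right only half the time, the two groups show equal confidence yet very different calibration error, mirroring the "overconfident for marginalized groups" failure mode.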

Subject: ACL.2025 - Long Papers