2025.findings-emnlp.106@ACL

Total: 1

#1 Towards Achieving Concept Completeness for Textual Concept Bottleneck Models

Authors: Milan Bhan, Yann Choho, Jean-Noël Vittaut, Nicolas Chesneau, Pierre Moreau, Marie-Jeanne Lesot

This paper proposes the Complete Textual Concept Bottleneck Model (CT-CBM), a novel textual concept bottleneck model (TCBM) generator that builds concept labels in a fully unsupervised manner using a small language model, eliminating the need for both predefined human-labeled concepts and LLM annotations. CT-CBM iteratively targets and adds important, identifiable concepts to the bottleneck layer to create a complete concept basis. CT-CBM achieves strong results against competing approaches in terms of concept basis completeness and concept detection accuracy, offering a promising way to reliably enhance the interpretability of NLP classifiers.
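The abstract describes the architecture only at a high level. As a rough illustration of the generic pattern a TCBM follows (text embedding → concept scores → linear classifier over concepts), here is a minimal sketch in PyTorch. The class name, dimensions, and sigmoid concept activation are illustrative assumptions, not CT-CBM's actual design, and the iterative concept-selection loop the paper describes is not shown.

```python
import torch
import torch.nn as nn

class TextualConceptBottleneck(nn.Module):
    """Minimal concept-bottleneck head: text embedding -> concept scores -> label.

    Hypothetical sketch; CT-CBM's actual concept generation and
    selection procedure is described in the paper itself.
    """
    def __init__(self, embed_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        # One detector score per concept currently in the bottleneck.
        self.concept_scorer = nn.Linear(embed_dim, num_concepts)
        # The final classifier reads only the concept scores (the bottleneck),
        # so each prediction decomposes over interpretable concepts.
        self.classifier = nn.Linear(num_concepts, num_classes)

    def forward(self, embeddings: torch.Tensor):
        concept_scores = torch.sigmoid(self.concept_scorer(embeddings))
        logits = self.classifier(concept_scores)
        return logits, concept_scores

# Toy usage: 16 sentence embeddings of dim 384, 10 concepts, 3 classes.
model = TextualConceptBottleneck(embed_dim=384, num_concepts=10, num_classes=3)
logits, scores = model(torch.randn(16, 384))
print(logits.shape, scores.shape)  # torch.Size([16, 3]) torch.Size([16, 10])
```

In this pattern, "completeness" of the concept basis corresponds to the concept scores carrying enough information for the downstream classifier to match a black-box baseline; CT-CBM's contribution is how the concept set is grown to reach that point.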

Subject: EMNLP.2025 - Findings