2025.acl-long.1258@ACL

Do Language Models Have Semantics? On the Five Standard Positions

Author: Anders Søgaard

We identify five positions on whether large language models (LLMs) and chatbots can be said to exhibit semantic understanding. These positions differ in whether they attribute semantics to LLMs and/or to chatbots trained on feedback, in what kind of semantics they attribute (inferential or referential), and in virtue of what they attribute referential semantics (internal or external causes). This allows for 2^4 = 16 logically possible positions, but we have seen arguments for only five of these. Based on a pairwise comparison of these five positions, we conclude that the better theory of semantics in large language models is, in fact, a sixth combination: both large language models and chatbots have inferential and referential semantics, grounded in both internal and external causes.

Subject: ACL.2025 - Long Papers