2024.acl-tutorials.3@ACL

Total: 1

#1 Computational Expressivity of Neural Language Models

Authors: Alexandra Butoi; Ryan Cotterell; Anej Svete

Language models (LMs) are currently at the forefront of NLP research due to their remarkable versatility across diverse tasks. However, a large gap exists between their observed capabilities and the explanations proposed by established formal machinery. To motivate a better theoretical characterization of LMs’ abilities and limitations, this tutorial aims to provide a comprehensive introduction to a specific framework for the formal analysis of modern LMs using tools from formal language theory (FLT). We present how tools from FLT can be useful in understanding the inner workings and predicting the capabilities of modern neural LM architectures. We will cover recent results that use FLT to make precise and practically relevant statements about LMs based on recurrent neural networks and transformers by relating them to formal devices such as finite-state automata, Turing machines, and analog circuits. Altogether, the results covered in this tutorial will allow us to make precise statements and explanations about both the observed and the predicted behaviors of LMs, and to provide theoretically motivated suggestions on the aspects of the architectures that could be improved.
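To give a concrete flavor of the kind of result the tutorial surveys, below is a minimal, self-contained sketch of the classic Minsky-style construction showing that a recurrent network with fixed weights and a Heaviside (step) activation can exactly simulate a deterministic finite-state automaton. The construction itself is standard; the particular DFA (accepting strings with an even number of 'a's) and all names in the code are illustrative assumptions, not taken from the tutorial materials.

```python
# Sketch: a Heaviside-activation RNN with fixed weights that exactly
# simulates a DFA (the Minsky construction). Illustrative only; the
# DFA and all identifiers are assumptions made for this example.
import numpy as np
from itertools import product

# DFA: states Q = {0, 1}, alphabet Sigma = {'a', 'b'}, start state 0,
# accepting state 0 (strings containing an even number of 'a's).
Q = [0, 1]
Sigma = ['a', 'b']
delta = {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}
start, accepting = 0, {0}

# Hidden units are indexed by (state, symbol) pairs: unit (q, s) is
# active at step t iff the DFA is in state q after t symbols and the
# t-th symbol read was s.
units = list(product(Q, Sigma))
idx = {u: i for i, u in enumerate(units)}
n, m = len(units), len(Sigma)

# Recurrent weights: target unit (q2, s2) listens to source unit (q, s)
# whenever reading s2 in state q leads to q2 (the source's s is
# irrelevant; only the state q it encodes matters).
W = np.zeros((n, n))
for (q, s), (q2, s2) in product(units, units):
    if delta[(q, s2)] == q2:
        W[idx[(q2, s2)], idx[(q, s)]] = 1.0

# Input weights: unit (q2, s2) additionally requires the current
# input symbol to be s2.
U = np.zeros((n, m))
for (q2, s2) in units:
    U[idx[(q2, s2)], Sigma.index(s2)] = 1.0

def rnn_accepts(string: str) -> bool:
    """Run h_t = H(W h_{t-1} + U x_t - 1) and read the state off h."""
    # Initialize as if the start state were reached via some symbol.
    h = np.zeros(n)
    h[idx[(start, Sigma[0])]] = 1.0
    for ch in string:
        x = np.zeros(m)
        x[Sigma.index(ch)] = 1.0
        # Both the state evidence and the symbol indicator must be
        # active (sum 2 > 1) for a unit to fire; exactly one unit does.
        h = (W @ h + U @ x - 1.0 > 0).astype(float)
    state = units[int(h.argmax())][0]
    return state in accepting

def dfa_accepts(string: str) -> bool:
    q = start
    for ch in string:
        q = delta[(q, ch)]
    return q in accepting

# The RNN and the DFA agree on every string up to length 5.
for k in range(6):
    for s in map(''.join, product(Sigma, repeat=k)):
        assert rnn_accepts(s) == dfa_accepts(s)
print("RNN matches the DFA on all strings up to length 5.")
```

The design point this illustrates is the one-hot encoding trick: with |Q|x|Sigma| hidden units, a single threshold layer can realize any DFA transition function exactly, which is one direction of the correspondence between simple RNNs and finite-state automata discussed in the tutorial.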