
Talk Abstract: Do the impressive abilities of Large Language Models (LLMs) to generate rich, well-formed syntax falsify fundamental principles of generative linguistic theory? The short answer I will argue for is: no. But it will be a rather nuanced “no”, one that tries to identify the proper treatment of generative AI for generative linguistics.
Specifically, I will consider these principles:
1. Computability: Generating natural language with rich, human-level syntax requires the use of symbolic grammatical rule systems.
2. Explanation: Theoretical explanation in generative linguistics requires built-in discrete symbolic structure.
3. Acquisition: Children’s ability to acquire language requires innate knowledge of grammatical rule systems.
4. Universals: Linguistic universals can only be explained from innate limitations on what languages are learnable.
The quantity of discussion of these questions will decrease sharply from 1 to 4, with the bulk of the presentation focussed on 1. This talk takes off from joint work with Roland Fernandez, Herbert Zhou, Mattia Opper, and Jianfeng Gao (arXiv:2410.17498).