Friday Lunch Talk: QPFest

Speakers: 
Dani Katenkamp, Herbert Zhou
Event time: 
Friday, April 12, 2024 - 12:00pm to 1:30pm
Event description: 
Re-examining subtractive plurals in Historical Choctaw
Dani Katenkamp

Historical Choctaw has a process of plural-marking on verbs that looks like an instance of subtractive morphology:

(1) bonolli                         bonni
    bonot-li                        bon-<ot>-li
    roll:up-voice                   roll:up-pl-voice
    ‘roll up (singular object)’     ‘roll up (plural object)’

Subtractive morphology is problematic for many theories of morphology because it conflicts with two common assumptions: that additional meaning requires additional morphosyntactic structure, and that, while lexical items may be phonologically null, there is no underlying form that can delete phonological material from another head. The most recent research on Muskogean subtraction (Martin, 1994) proposes that these plural forms fully fossilized early in the history of the family and that the subtraction is the result of semi-random diachronic processes. I argue that this cannot be the case, because many transparent generalizations can still be made about the distribution of these subtractive phenomena in Historical Choctaw. I consider several possible analyses and show that a synchronic account of Historical Choctaw pluralization can be developed without directly subtractive operations. Thus, while Historical Choctaw is relevant to the discussion of subtractive morphology, it serves as further evidence that all superficially subtractive phenomena may be analyzable concatenatively.

Language Models Show Gradient Inverse Frequency Effects in Structural Priming: Implications for In-Context Learning and Human Implicit Learning
Herbert Zhou

The structural priming paradigm from psycholinguistics has been shown to be a useful way of studying abstract structural representations in neural language models. The current study extends this line of work by simulating structural priming in language models in two ways, corresponding to the two existing theories of structural priming in the literature: the transient activation account and the implicit learning account. Specifically, we test language models' behavior on the inverse frequency effect, the finding that less frequent structures produce stronger priming, which is predicted only by implicit learning mechanisms.
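
As a rough illustration only (the abstract does not specify the models, stimuli, or scoring used in the study), the transient-activation style of priming can be simulated by placing a prime sentence in the model's context and asking how much it raises the log probability of a target with the same structure, relative to a prime with the alternative structure. The sketch below assumes a Hugging Face causal LM (gpt2) and invented dative-alternation stimuli:

    # Minimal sketch of "transient activation" priming in a causal LM.
    # Assumptions (not from the abstract): Hugging Face gpt2, invented
    # dative-alternation stimuli, and summed-log-probability scoring.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def target_logprob(context: str, target: str) -> float:
        """Summed log probability of the target tokens given the context."""
        ctx = tok(context, return_tensors="pt").input_ids
        tgt = tok(target, return_tensors="pt").input_ids
        ids = torch.cat([ctx, tgt], dim=1)
        with torch.no_grad():
            logits = model(ids).logits
        logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
        # The logits at position p predict the token at position p + 1.
        positions = range(ctx.shape[1] - 1, ids.shape[1] - 1)
        return sum(logprobs[p, t].item() for p, t in zip(positions, tgt[0]))

    # Hypothetical stimuli: double-object (DO) vs. prepositional-object (PO)
    # primes with a DO target. A positive effect means the DO prime boosts
    # the DO target more than the PO prime does.
    prime_do = "The teacher gave the student a book."
    prime_po = "The teacher gave a book to the student."
    # Leading space so GPT-2's BPE tokenizes the sentence boundary cleanly.
    target_do = " The chef handed the waiter a plate."

    effect = target_logprob(prime_do, target_do) - target_logprob(prime_po, target_do)
    print(f"Structural priming effect (log-prob difference): {effect:.3f}")

Testing the inverse frequency effect would then amount to comparing the size of this boost when the prime uses a frequent structure versus a rare one.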

We show that (i) under the transient activation style of priming, larger models tend to show stronger inverse frequency effects, and (ii) under the implicit learning style of priming, even the smallest model tested shows a significant inverse frequency effect. We hypothesize that model size correlates with in-context learning capability, which has been interpreted as a form of implicit fine-tuning: in models with stronger in-context learning, the transient activation style of priming is implicitly performing implicit learning, which explains the observed gradient inverse frequency effects. Our study thus goes one level deeper than probing learned internal representations and asks about language models' processing mechanisms. We conclude that in-context learning is a form of implicit learning shared between humans and language models.
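
The implicit-learning style of priming can be sketched analogously, by letting the prime change the model's weights before the target is scored. Again a rough illustration only (a single gradient step with an arbitrary learning rate; the abstract does not describe the study's actual procedure):

    # Minimal sketch of "implicit learning" priming: one gradient step on
    # the prime, then compare the target's log probability before and after.
    # The optimizer and learning rate are illustrative assumptions.
    import copy
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def sentence_logprob(m, text: str) -> float:
        """Summed log probability of a sentence under model m."""
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = m(ids, labels=ids).loss  # mean NLL over predicted tokens
        return -loss.item() * (ids.shape[1] - 1)

    prime = "The teacher gave the student a book."
    target = "The chef handed the waiter a plate."

    # "Prime" a copy of the model with one weight update on the prime sentence.
    primed = copy.deepcopy(model)
    primed.train()
    opt = torch.optim.SGD(primed.parameters(), lr=1e-4)
    ids = tok(prime, return_tensors="pt").input_ids
    primed(ids, labels=ids).loss.backward()
    opt.step()
    primed.eval()

    effect = sentence_logprob(primed, target) - sentence_logprob(model, target)
    print(f"Implicit-learning priming effect: {effect:.3f}")

Because the update here persists in the weights rather than in the context, any boost reflects learning rather than transient activation, which is the contrast the two simulations are meant to capture.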

Event Type: 
Lunch Talks