Cumulative complexity effects and phonotactic acceptability

Monday, 22 February 2010, Colloquium

Adam Albright, MIT.

Abstract

A design feature of both rule-based and constraint-based models of phonology is that processes apply independently of one another: for example, final consonant clusters with disagreeing voicing are always banned in English, causing voicing alternations regardless of whether the word has a simple or complex onset (‘caps’ /kæp+z/, ‘claps’ /klæp+z/ → [kæps], [klæps]), a round vowel (‘copes’ /koʊp+z/ → [koʊps]), or any other marked structure. By forcing rules or constraints to apply independently, we exclude the possibility of ‘superadditive’ effects in which the well-formedness of a structure depends on the presence of another structure. In this talk, I argue that when we move beyond alternations and turn to static phonotactics, superadditive effects do seem to occur. For example, English allows words beginning with /bl-/ and /gl-/ clusters, as well as words ending in /-sp/ and /-sk/ clusters, but there are no words with both together (*blesk, *glisp). As it turns out, the rarity or lack of such combinations cannot be predicted from the independent frequencies of /bl-/, /-sp/, etc. I discuss several sources of evidence for superadditive effects: lexical underattestation in Lakhota and English, acceptability ratings for English nonce words, and child error patterns. In all three cases, it appears that marginal structures become worse in the presence of other marginal structures. Crucially, however, not all combinations are penalized in this way. Relatively common combinations, such as /kr-/ and /-st/, co-occur about as often as expected (crust, crest, etc.), and do not show superadditive effects. Furthermore, although frequency is often a factor in predicting superadditive effects, in many cases phonetic biases appear to play an even more important role in determining the marginality of a structure.
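To make the underattestation argument concrete, the sketch below computes an observed/expected (O/E) ratio for an onset–coda combination, where the expected count assumes the onset and coda are chosen independently of one another. This is only an illustration of the logic; the counts are hypothetical placeholders, not the lexical statistics reported in the talk.

```python
# Illustrative sketch: observed vs. expected counts for onset-coda combinations.
# All counts below are hypothetical, for exposition only.

def expected_count(n_onset, n_coda, n_words):
    """Expected number of words containing a given onset AND coda,
    assuming the two are distributed independently across the lexicon."""
    p_onset = n_onset / n_words
    p_coda = n_coda / n_words
    return p_onset * p_coda * n_words

def o_e_ratio(observed, n_onset, n_coda, n_words):
    """Observed/expected ratio; values well below 1 indicate underattestation."""
    return observed / expected_count(n_onset, n_coda, n_words)

# Hypothetical monosyllable counts for illustration.
N_WORDS = 5000
print(o_e_ratio(observed=20, n_onset=300, n_coda=250, n_words=N_WORDS))  # /kr-/ ... /-st/
print(o_e_ratio(observed=0,  n_onset=120, n_coda=90,  n_words=N_WORDS))  # /bl-/ ... /-sk/
```

An O/E ratio near 1 (as in the /kr-/ ... /-st/ case above) means the combination occurs about as often as independence predicts; a ratio near 0 signals the superadditive gap at issue.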

The challenge, then, is to provide a computational model that penalizes some constraint violations more in the presence of another violation, while allowing others to remain independent. I propose a model in which acceptability judgments arise through a combination of two levels of evaluation: (1) a non-grammatical evaluation of phonotactic probability, which assesses the joint probability of the substrings in a word, and (2) evaluation by a grammar of weighted constraints, further penalizing sequences that violate high-weighted constraints. For grammatically licit combinations such as /kr-/ and /-st/, acceptability is determined by simple joint probability. For grammatically penalized clusters such as /bl-/ or /-sk/, phonotactic probability and grammatical probability combine to yield superadditive effects. I sketch a model in which learners factor out phonotactic probability when learning the weights of grammatical constraints.
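As a rough illustration of how the two levels could combine, the sketch below scores a nonce form by adding a log joint-probability term (a toy bigram model standing in for level 1) to a weighted-constraint penalty in the spirit of a Harmonic Grammar / MaxEnt evaluation (level 2). The constraint set, weights, and bigram probabilities are invented for the example and are not the settings proposed in the talk.

```python
import math

# Illustrative two-level scorer (hypothetical numbers throughout):
# level 1: log joint probability of the word's substrings (here, bigrams);
# level 2: weighted-constraint penalty, as in Harmonic Grammar / MaxEnt.

BIGRAM_LOGPROB = {          # toy log-probabilities for a few bigrams
    ('k', 'r'): -2.0, ('r', 'ʌ'): -2.5, ('ʌ', 's'): -2.2, ('s', 't'): -1.8,
    ('b', 'l'): -3.0, ('l', 'ɛ'): -2.6, ('ɛ', 's'): -2.3, ('s', 'k'): -2.4,
}

CONSTRAINTS = [             # (name, weight, violation test) -- all hypothetical
    ('*bl-onset', 2.5, lambda w: w[:2] == ('b', 'l')),
    ('*-sk-coda', 2.0, lambda w: w[-2:] == ('s', 'k')),
]

def phonotactic_logprob(word):
    """Level 1: sum of bigram log-probabilities (joint probability of substrings)."""
    return sum(BIGRAM_LOGPROB.get(bg, -5.0) for bg in zip(word, word[1:]))

def grammar_penalty(word):
    """Level 2: weighted sum of constraint violations (harmony penalty)."""
    return sum(weight for _, weight, violated in CONSTRAINTS if violated(word))

def acceptability(word):
    """Higher is better: probability term minus the grammatical penalty."""
    return phonotactic_logprob(word) - grammar_penalty(word)

print(acceptability(('k', 'r', 'ʌ', 's', 't')))   # 'crust': no penalized clusters
print(acceptability(('b', 'l', 'ɛ', 's', 'k')))   # '*blesk': penalized at both edges
```

On this toy scoring, a licit combination like ‘crust’ is evaluated by its joint probability alone, while ‘*blesk’ incurs both a low-probability term and two constraint penalties, so its score drops by more than either factor would predict on its own.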