Yale linguists in Philadelphia
Assistant Professor Jim Wood, PhD candidate Matt Tyler, and graduate student Yiding Hao presented at this year’s annual Penn Linguistics Conference in Philadelphia, Pennsylvania. Together they introduced four research projects to conference attendees, who included Yale graduate student Samuel Andersson.
In the first conference session, following a day of invited talks, postdoc Einar Freyr Sigurðsson of the University of Iceland gave a talk showcasing his joint work with Jim on the topic of voice in Icelandic. In linguistics, voice is the relationship between the grammatical subject of a verb and the agent performing the action represented by the verb. For example, consider the following sentences.
- Alice sent a message.
- A message was sent by Alice.
Both sentences describe the action of a message being sent. In the first sentence, the subject, Alice, is the person sending the message. This is known as active voice. In the second sentence, the subject, a message, is not the sender, but rather the thing being sent. This is known as passive voice. In their talk, Einar and Jim consider indirect causative sentences in Icelandic such as the following.
- Ég lét byggja húsið.
“I had the house built.” (lit. I let build the-house)
The intended meaning of the above sentence is that the speaker caused the house to be built, but the sentence does not specify who built the house. If the person doing the building is not specified, what kind of voice does the sentence have? Einar and Jim argue that it has neither active nor passive voice. Instead, they propose a structure in which the unspecified builder of the house is represented by a silent word. It turns out that the Icelandic indirect causative shares many properties with constructions in other languages that involve unspecified agents represented by silent words. For example, it is possible to use a by-phrase in the embedded clause (as in a passive), but only if the by-phrase does not name a specific person. Therefore, the first sentence below is grammatically acceptable, but the second is not.
- Ég lét gera við tölvuna af fagmanni.
“I had the computer repaired by a professional.”
- *Ég lét gera við tölvuna af Jón.
“I had the computer repaired by John.”
Einar and Jim’s analysis of the indirect causative construction in Icelandic suggests that understanding the various kinds of voice that exist in the languages of the world may require more nuanced categories than the binary distinction between active and passive.
Later that afternoon, Matt gave two talks on his recent work in syntax. Previews of both talks were presented as posters at the Linguistic Society of America annual meeting earlier this year. First, Matt spoke about his joint work with graduate student Michelle Yuan of the Massachusetts Institute of Technology on two languages: Choctaw, a Native American language spoken in Mississippi, and Yimas, a language spoken in Papua New Guinea.
In many languages, a noun or pronoun may take several different forms, or cases, depending on its role within a sentence. For example, the English third-person singular pronouns are he or she when they serve as the subject of a sentence, but him or her when they serve as the object. Languages that distinguish subjects from objects in this way, such as English, are known as nominative–accusative languages. Other languages, such as Basque, distinguish between subjects and objects in transitive sentences, but treat the subject of an intransitive sentence like the object of a transitive one. In other words, the sentence she walks would be rendered as her walks in Basque. Such languages are known as ergative–absolutive languages, and the way a language assigns cases to its nouns and pronouns is known as its alignment.
Interestingly, Choctaw and Yimas each seem to have two different kinds of alignment. Their nouns and pronouns appear to be nominative–accusative. However, their verbs, which bear special prefixes called clitics that encode information about the subject and object, seem to follow a different alignment: split-S in Choctaw and ergative–absolutive in Yimas. Linguists typically assume that sentences are built by putting words together, and that, as this happens, each noun or pronoun is assigned a case. Based on Choctaw and Yimas, Matt and Michelle conclude that this process may occur in two stages: one for the nouns and pronouns, and one for the clitics. As their examples show, the two stages do not necessarily use the same alignment system.
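The contrast between the two alignment systems can be sketched in a few lines of Python. This is only an illustration using the standard A/S/O notation (A = transitive subject, S = intransitive subject, O = object), not code from the talk:

```python
# Toy sketch of two case-alignment systems using the standard A/S/O roles:
# A = transitive subject, S = intransitive subject, O = transitive object.
ALIGNMENTS = {
    "nominative-accusative": {"A": "NOM", "S": "NOM", "O": "ACC"},
    "ergative-absolutive":   {"A": "ERG", "S": "ABS", "O": "ABS"},
}

def mark(role: str, system: str) -> str:
    """Return the case label a grammatical role receives in a given system."""
    return ALIGNMENTS[system][role]

# English-style: both kinds of subject pattern together ("he walks", "he sees him").
print(mark("S", "nominative-accusative"))  # NOM
# Basque-style: intransitive subjects pattern with objects ("her walks").
print(mark("S", "ergative-absolutive"))    # ABS
```

The two dictionaries make the difference concrete: the systems agree on A and O but split on how they treat S.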
Matt’s second talk was about his work on the syntax–prosody interface, previously presented at the North East Linguistic Society. When a person utters a sentence, its pronunciation includes cues that indicate its structure. For example, a parenthetical phrase might be surrounded by brief pauses, indicating that its words should be grouped together, while other kinds of phrases might be characterized by distinctive pitch contours. Prosodic cues such as intonation, timing, and stress group sentences into subdivisions. These subdivisions resemble, but are not identical to, the syntactic phrase structures of sentences. One discrepancy between prosodic and syntactic structure concerns the difference between syntactic words and prosodic words. While nouns and verbs are typically grouped as individual prosodic words, function words such as articles and prepositions are grouped as if they were part of neighboring words. According to Match Theory, the traditional account, this occurs because the rules that determine prosodic structure ignore function words, so that they end up sounding like parts of other words. However, these rules turn out to have unexplained exceptions. Rather than having rules that ignore function words, Matt argued that function words should instead be treated as carrying lexical information explicitly specifying that they do not form prosodic words of their own. He then showed that this proposal correctly accounts for the exceptions to the Match Theory rules.
Finally, Yiding presented work on finite-state Optimality Theory during the poster session. Phonology, the subfield of linguistics that studies sound patterns, observes that a single speech sound may be pronounced differently depending on where it occurs in a word. For example, the German consonants b, d, and g are pronounced as p, t, and k, respectively, when they appear at the end of a word. Computational phonologists believe that the transformations sounds can undergo in a language are finite-state: they can be computed using a limited amount of memory. Harmonic Serialism is a system for describing these transformations. According to Harmonic Serialism, each language specifies constraints on what words can sound like, and a series of incremental modifications is made to a word to repair violations of those constraints. For example, Japanese has a constraint that forbids two consonants from appearing adjacent to one another; violations are repaired by inserting a vowel, usually u, between the two consonants. In the poster, Yiding reviewed a method, presented at the Workshop on Finite-State Methods and Natural Language Processing, for simulating Harmonic Serialism using a finite-state transformation. The poster then shows that this method is incomplete by giving an example of a transformation that is not finite-state, yet can be described using Harmonic Serialism.
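To give a sense of what "finite-state" means here, the German final-devoicing pattern above can be computed by a scanner that remembers only the previous symbol. This is a toy illustration of a finite-state transformation, not the method from the poster:

```python
# Toy sketch of a finite-state transformation: German final devoicing,
# where b, d, g are pronounced p, t, k at the end of a word.
# The scanner keeps only one symbol of memory: it emits each symbol
# once it has seen the next one, so it knows whether a boundary follows.

DEVOICE = {"b": "p", "d": "t", "g": "k"}

def final_devoice(word: str) -> str:
    out = []
    prev = None
    for ch in word + "#":  # "#" marks the end of the word
        if prev is not None:
            # Devoice prev only if it is word-final, i.e. followed by "#".
            out.append(DEVOICE[prev] if ch == "#" and prev in DEVOICE else prev)
        prev = ch
    return "".join(out)

print(final_devoice("Hund"))   # Hunt  ("dog": final d is devoiced to t)
print(final_devoice("Hunde"))  # Hunde ("dogs": the d is not word-final)
```

Because the loop consults only the current and previous symbols, the memory it uses is constant regardless of the word's length, which is the defining property of a finite-state computation.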
The 42nd Penn Linguistics Conference was held from March 23 to 25 at the University of Pennsylvania. Presenters who gave talks will be invited to submit papers to be published in the conference proceedings.