CLAY members appear at Scandinavian venues

September 21, 2017

Undergraduate alumni Jungo Kasai and Tom McCoy, Professor Bob Frank, and graduate student Yiding Hao presented three talks and a poster at the International Workshop on Tree-Adjoining Grammar and Related Formalisms (TAG+), the Conference on Empirical Methods in Natural Language Processing (EMNLP), and the International Conference on Finite State Methods and Natural Language Processing (FSMNLP). The four presentations showcased results from two ongoing research projects in computational linguistics, with participants from multiple departments at Yale as well as other universities.

Jungo, Tom, and Bob presented two talks and a poster at TAG+ and EMNLP covering various aspects of a yearlong project undertaken by the Computational Linguistics at Yale (CLAY) research group. In addition to the three presenters, participants in the project include undergraduate alumnus Dan Friedman and undergraduate Pauli Xu; Owen Rambow and Forrest Davis of Columbia University; and Alexis Nasr of Aix-Marseille University. This project developed a new technique for TAG parsing. In computational linguistics, parsing is the process of determining the syntactic structure of a sentence. These structures are described in terms of a formalism, or model of grammar. Previous research by various scholars achieved success in parsing for the Combinatory Categorial Grammar (CCG) formalism. This project sought to extend those techniques to parsing in the Tree-Adjoining Grammar (TAG) formalism.

For TAG, the parsing problem is decomposed into two steps. In the supertagging step, each word of the sentence is assigned a label called a supertag, which describes the syntactic properties of the word. Then, in the stapling step, the parser determines the best way to combine words into phrases based on their supertags. To perform supertagging, the authors use a neural network architecture called the Long Short-Term Memory (LSTM) network. LSTMs are known to be effective at assigning each word of a sentence its part of speech, and they can be adapted for supertagging by viewing a supertag as a more detailed description of a word’s part of speech. Stapling is done using the shift-reduce parsing method. The model is distinguished from previous ones in that, unlike part-of-speech labels, the supertags obtained from the LSTM can be combined with one another. For example, a supertag representing a transitive verb might be combined with a supertag representing a noun phrase to form a supertag representing a verb phrase that has an object but not a subject. In their evaluations, the authors found that the parser is more accurate than many of the best parsers currently available, including Google’s Parsey McParseface parser.
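
To make the two-step pipeline concrete, here is a minimal Python sketch in which a lookup table (SUPERTAG_LEXICON) stands in for the LSTM supertagger and a drastically simplified shift-reduce loop stands in for the stapler. The supertags, names, and combination rules are all invented for illustration and are far cruder than those in the actual parser.

    # Toy two-step TAG parsing pipeline. A lookup table stands in for the
    # LSTM supertagger, and "stapling" is a drastically simplified
    # shift-reduce loop. All supertags and names here are hypothetical.

    # Step 1: supertagging. Each word gets a supertag recording its
    # category and the argument categories it still needs.
    SUPERTAG_LEXICON = {
        "dogs":  ("NP", []),            # a complete noun phrase
        "chase": ("S",  ["NP", "NP"]),  # transitive verb: needs two NPs
        "cats":  ("NP", []),
    }

    def supertag(sentence):
        """Assign each word a supertag (stand-in for the LSTM tagger)."""
        return [(word, SUPERTAG_LEXICON[word]) for word in sentence.split()]

    # Step 2: stapling. Greedily combine adjacent supertags, letting an
    # incomplete supertag absorb a neighbor that supplies a needed argument.
    def staple(tagged):
        stack = []
        for word, (cat, needs) in tagged:
            stack.append((word, cat, list(needs)))
            while len(stack) >= 2:
                (w1, c1, n1), (w2, c2, n2) = stack[-2], stack[-1]
                if n2 and n2[-1] == c1:    # right item takes left as argument
                    stack[-2:] = [(f"({w1} {w2})", c2, n2[:-1])]
                elif n1 and n1[-1] == c2:  # left item takes right as argument
                    stack[-2:] = [(f"({w1} {w2})", c1, n1[:-1])]
                else:
                    break
        return stack

    print(staple(supertag("dogs chase cats")))
    # [('((dogs chase) cats)', 'S', [])] -- a single complete sentence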

The second project, which Yiding presented at FSMNLP, investigates the mathematical properties of a formalism called Optimality Theory (OT). A fundamental fact of phonology is that what a speaker considers to be a single language sound might be pronounced differently depending on where it occurs in a word. For example, the German consonants b, d, and g are pronounced as p, t, and k, respectively, when they appear at the end of a word. OT is a model of how the exact pronunciation of a word is determined. Each language is described by a set of constraints—rules that restrict how a word can be pronounced. A constraint might say, for instance, that a language does not allow two consonants to appear consecutively. The pronunciation of a word is the one that best satisfies the constraints.
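
The following Python sketch illustrates how an OT grammar selects among candidate pronunciations: each candidate is scored against a ranked list of constraints, and the winner is the candidate whose violation profile is lexicographically best. The constraints, candidates, and function names here are toy inventions for illustration, not material from the paper.

    # A minimal sketch of classical OT evaluation with two toy constraints,
    # ordered from highest- to lowest-ranked.

    VOWELS = set("aeiou")

    def cc_violations(form):
        """*CC: one violation for each pair of adjacent consonants."""
        return sum(1 for a, b in zip(form, form[1:])
                   if a not in VOWELS and b not in VOWELS)

    def edit_distance(a, b):
        """Levenshtein distance, a crude stand-in for faithfulness violations."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def evaluate(underlying, candidates):
        """Pick the candidate whose ranked violation profile is best."""
        ranked = [cc_violations,                            # high-ranked markedness
                  lambda f: edit_distance(underlying, f)]   # low-ranked faithfulness
        return min(candidates, key=lambda f: tuple(c(f) for c in ranked))

    # /pakta/ violates *CC; deleting the k ("pata") or inserting a vowel
    # ("pakata") repairs it at the cost of one faithfulness violation.
    print(evaluate("pakta", ["pakta", "pata", "pakata"]))   # pata (tie broken by order)

One point the sketch glosses over: in real OT analyses the candidate set is infinite, so computational treatments must represent it with finite-state machinery, which is what makes the complexity questions below meaningful.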

OT has enjoyed success in phonological theory because it can succinctly describe patterns that are observed in languages. However, finding the pronunciation that best satisfies the constraints is a computationally complex problem. Computational phonologists believe that finding the pronunciation of a word should be finite-state; in other words, it should be computable using a limited amount of memory. While this cannot be done for the standard version of OT, Yiding’s talk showed that it can be done for a version of OT called Harmonic Serialism (HS). In HS, the pronunciation of a word is still determined by constraints. However, HS differs from standard OT in that it finds the pronunciation of a word by making a series of incremental changes to the mental representation of the word. It turns out that if the constraints are sufficiently simple, then the complex computations that standard OT can describe cannot be performed by the incremental changes of HS. Because of this, when the constraints are simple, the pronunciations described by HS grammars, unlike those described by standard OT grammars, are finite-state.
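
The incremental search of HS can likewise be sketched as a loop that, at each step, considers all forms reachable by a single change, keeps the one that best satisfies the constraints, and stops when no change improves on the current form. The sketch below, again using the invented toy *CC constraint and a GEN restricted to single deletions, is only meant to convey the derivational idea, not the formal results of the talk.

    # A minimal sketch of Harmonic Serialism's derivational loop under
    # the same toy *CC constraint as above. Unlike parallel OT, which
    # picks among fully formed candidates all at once, HS repeatedly
    # applies the single best one-step change until no change helps.

    VOWELS = set("aeiou")

    def cc_violations(form):
        """*CC: one violation for each pair of adjacent consonants."""
        return sum(1 for a, b in zip(form, form[1:])
                   if a not in VOWELS and b not in VOWELS)

    def one_step_changes(form):
        """Forms reachable by deleting exactly one segment (toy GEN)."""
        return [form[:i] + form[i + 1:] for i in range(len(form))]

    def harmonic_serialism(underlying):
        """Apply the best single change per step until none improves *CC.

        Faithfulness is assessed against each step's input, so every
        change costs one violation and wins only by improving the
        higher-ranked *CC constraint."""
        current = underlying
        while True:
            candidates = one_step_changes(current) + [current]
            best = min(candidates,
                       key=lambda f: (cc_violations(f), int(f != current)))
            if best == current:
                return current
            current = best

    print(harmonic_serialism("pakta"))   # pakta -> pata, then converges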

The titles of the presentations are as follows.

  • TAG Parsing with Neural Networks and Vector Representations of Supertags, Jungo Kasai, Robert Frank, Tom McCoy, Owen Rambow, and Alexis Nasr (poster at EMNLP)
  • Linguistically Rich Vector Representations of Supertags for TAG Parsing, Dan Friedman, Jungo Kasai, R. Thomas McCoy, Robert Frank, Forrest Davis, and Owen Rambow (talk at TAG+)
  • TAG Parser Evaluation using Textual Entailments, Pauli Xu, Robert Frank, Jungo Kasai, and Owen Rambow (talk at TAG+)
  • Harmonic Serialism and Finite-State Optimality Theory, Yiding Hao (talk at FSMNLP)

Jungo, Tom, and Dan graduated last year, but the work they presented was done while they were seniors at Yale College. Jungo is staying on at Yale as a post-graduate research associate, working with Bob and Owen as well as Professor Dragomir Radev of the Department of Computer Science. Tom is now a graduate student at Johns Hopkins University in the Department of Cognitive Science.

TAG+ and FSMNLP were held jointly from September 4 to September 6 at Umeå University in Umeå, Sweden. EMNLP was held from September 7 to September 11 in Copenhagen, Denmark. The conference programs and abstracts are available on the conference websites (TAG+, FSMNLP, EMNLP). The conference papers are available on the Association for Computational Linguistics Anthology website.
