EBookClubs

Read Books & Download eBooks Full Online

Book Exploring Distributional Semantics in Lexical Representations and Narrative Modeling

Download or read book Exploring Distributional Semantics in Lexical Representations and Narrative Modeling written by Su Wang and published by . This book was released on 2020 with total page 228 pages. Available in PDF, EPUB and Kindle. Book excerpt: We are interested in the computational modeling of lexico-conceptual and narrative knowledge (e.g. how to represent the meaning of cat so as to reflect facts such as: it is similar to a dog, and it is typically larger than a mouse; how to characterize a story, and how to identify different narratives on the same topic). On the lexico-conceptual front, we learn lexical representations with strong interpretability and integrate commonsense knowledge into lexical representations. For narrative modeling, we study how to identify, extract, and generate narratives/stories acceptable to human intuition. As a methodological framework we apply the methods of Distributional Semantics (DS) — “a subfield of Natural Language Processing that learns meaning from word usages” (Herbelot, 2019) — where semantic representations (at any level: words, phrases, sentences, etc.) are learned at scale from data through machine learning models (Erk and Padó, 2008; Baroni and Lenci, 2010; Mikolov et al., 2013; Pennington et al., 2014). To infuse interpretability and commonsense into semantic representations (specifically lexical and event representations), which are typically lacking in previous work (Doran et al., 2017; Gusmao et al., 2018; Carvalho et al., 2019), we complement data-driven scalability with a minimal amount of human knowledge annotation on a selected set of tasks, and we have obtained empirical evidence in support of our techniques. 
For narrative modeling, we draw insights from the rich body of work on scripts and narratives, from Schank and Abelson (1977) and Mooney and DeJong (1985) to Chambers and Jurafsky (2008, 2009), and propose distributional models for the tasks of narrative identification, extraction, and generation that achieve state-of-the-art performance. Symbolic approaches to lexical semantics (Wierzbicka, 1996; Goddard and Wierzbicka, 2002) and narrative modeling (Schank and Abelson, 1977; Mooney and DeJong, 1985) have been fruitful in theoretical studies. For example, in theoretical linguistics, Wierzbicka defined a small set of lexical semantic primitives from which complex meaning can be built compositionally; in Artificial Intelligence, Schank and Abelson formulated primitive acts which are conceptualized into semantic episodes (i.e. scripts) understandable by humans. Our focus, however, is primarily on computational approaches that need wide lexical coverage, for which DS provides a better toolkit, especially in practical applications. In this thesis, we innovate by building on the “vanilla” DS techniques (Landauer and Dumais, 1997; Mikolov et al., 2013) to address the issues listed above. Specifically, we present empirical evidence that:
  • On the building-block level, within the framework of DS, it is possible to learn highly interpretable lexical and event representations at scale and to introduce human commonsense knowledge at low cost.
  • On the narrative level, well-designed DS modeling offers a balance of precision and scalability, yielding empirically stronger solutions to complex narrative modeling questions (e.g. narrative identification, extraction, and generation). 
Further, through case studies on lexical and narrative modeling, we showcase the viability of integrating DS with traditional methods so as to retain the strengths of both approaches. Concretely, the contributions of this thesis are summarized as follows:
  • Evidence from analyzing/modeling a small set of common concepts indicating that interpretable representations can be learned for lexical concepts with minimal human annotation, enabling one/few-shot learning.
  • Commonsense integration in lexical semantics: with carefully designed crowdsourcing combined with distributional methods, it is possible to substantially improve inference related to physical knowledge of the world.
  • Neural distributional methods perform strongly in complex narrative modeling tasks, where we demonstrate that the following techniques are particularly useful: 1) iterative algorithms inspired by human intuition; 2) integration of graphical and distributional modeling; 3) pre-trained large-scale language models.
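The "vanilla" distributional idea this thesis builds on — that words occurring in similar contexts receive similar representations — can be sketched in a few lines. The following toy example is ours, not from the thesis: it builds count-based co-occurrence vectors from a tiny hypothetical corpus and compares words by cosine similarity.

```python
import math
from collections import defaultdict

# Toy corpus (invented for illustration).
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the cat and the dog are pets",
    "the mouse ate the cheese",
]

# Count co-occurrences within a symmetric context window.
window = 2
cooc = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    tokens = sentence.split()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                cooc[w][tokens[j]] += 1

vocab = sorted({t for s in corpus for t in s.split()})

def vector(word):
    # A word's representation: its co-occurrence counts over the vocabulary.
    return [cooc[word][c] for c in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

print(cosine(vector("cat"), vector("dog")))
print(cosine(vector("cat"), vector("cheese")))
```

Even on this tiny corpus, "cat" ends up closer to "dog" than to "cheese" — the kind of similarity fact that vanilla DS captures, and that the thesis extends with interpretability and commonsense knowledge.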

Book Distributional Semantics

    Book Details:
  • Author : Alessandro Lenci
  • Publisher : Cambridge University Press
  • Release : 2023-09-21
  • ISBN : 1009453823
  • Pages : 448 pages

Download or read book Distributional Semantics written by Alessandro Lenci and published by Cambridge University Press. This book was released on 2023-09-21 with total page 448 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book provides a comprehensive foundation of distributional methods in computational modeling of meaning. It aims to build a common understanding of the theoretical and methodological foundations for students of computational linguistics, natural language processing, computer science, artificial intelligence, and cognitive science.

Book Computational Analysis of Storylines

Download or read book Computational Analysis of Storylines written by Tommaso Caselli and published by Cambridge University Press. This book was released on 2021-11-25 with total page 276 pages. Available in PDF, EPUB and Kindle. Book excerpt: Event structures are central in Linguistics and Artificial Intelligence research: people can easily refer to changes in the world, identify their participants, distinguish relevant information, and have expectations of what can happen next. Part of this process is based on mechanisms similar to narratives, which are at the heart of information sharing. But it remains difficult to automatically detect events or automatically construct stories from such event representations. This book explores how to handle today's massive news streams and provides multidimensional, multimodal, and distributed approaches, like automated deep learning, to capture events and narrative structures involved in a 'story'. This overview of the current state-of-the-art on event extraction, temporal and causal relations, and storyline extraction aims to establish a new multidisciplinary research community with a common terminology and research agenda. Graduate students and researchers in natural language processing, computational linguistics, and media studies will benefit from this book.

Book Towards Unifying Grounded and Distributional Semantics Using the Words as classifiers Model of Lexical Semantics

Download or read book Towards Unifying Grounded and Distributional Semantics Using the Words as classifiers Model of Lexical Semantics written by Stacy Black and published by . This book was released on 2020 with total page 51 pages. Available in PDF, EPUB and Kindle. Book excerpt: "Automated systems that make use of language, such as personal assistants, need some means of representing words such that 1) the representation is computable and 2) captures form and meaning. Recent advancements in the field of natural language processing have resulted in useful approaches to representing computable word meanings. In this thesis, I consider two such approaches: distributional embeddings and grounded models. Distributional embeddings are represented as high-dimensional vectors; words with similar meanings tend to cluster together in embedding space. Embeddings are easily learned using large amounts of text data. However, embeddings suffer from a lack of "real world" knowledge; for example, the knowledge of identifying colors or objects as they appear. In contrast to embeddings, grounded models learn a mapping between language and the physical world, such as visual information in pictures. Grounded models, however, tend to focus only on the mapping between language and the physical world and lack the knowledge that could be gained from considering abstract information found in text. In this thesis, I evaluate wac2vec, a model that brings together grounded and distributional semantics to work towards leveraging the relative strengths of both, and use empirical analysis to explore whether wac2vec adds semantic information to traditional embeddings. Starting with the words-as-classifiers (WAC) model of grounded semantics, I use a large repository of images and the keywords that were used to retrieve those images. 
From the grounded model, I extract classifier coefficients as word-level vector embeddings (hence, wac2vec), then combine those with embeddings from distributional word representations. I show that combining grounded embeddings with traditional embeddings results in improved performance in a visual task, demonstrating the viability of using the wac2vec model to enrich traditional embeddings, and showing that wac2vec provides important semantic information that these embeddings do not have on their own."--Boise State University ScholarWorks.
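The combination step described above — merging grounded vectors with distributional embeddings — can be illustrated schematically. This is a minimal sketch under our own assumptions (the vectors, their dimensions, and the concatenation strategy are hypothetical illustrations, not the thesis's exact wac2vec pipeline):

```python
import math

def l2_normalize(v):
    # Scale a vector to unit length so neither source dominates by magnitude.
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v] if norm else v

def combine(grounded_vec, distributional_vec):
    # One simple combination strategy: concatenate the normalized vectors.
    return l2_normalize(grounded_vec) + l2_normalize(distributional_vec)

# Hypothetical 4-d grounded vector (e.g. classifier coefficients from a
# words-as-classifiers model) and 5-d distributional vector for one word.
grounded_red = [0.9, 0.1, 0.0, 0.3]
distributional_red = [0.2, -0.5, 0.1, 0.7, 0.0]

combined = combine(grounded_red, distributional_red)
print(len(combined))  # 4 + 5 = 9 dimensions
```

The combined vector can then be fed to any downstream task in place of the plain distributional embedding, which is the kind of comparison the thesis evaluates empirically.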

Book Lexical Variation and Change

Download or read book Lexical Variation and Change written by Dirk Geeraerts and published by Oxford University Press. This book was released on 2023-11-07 with total page 337 pages. Available in PDF, EPUB and Kindle. Book excerpt: This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read at Oxford Academic and offered as a free PDF download from OUP and selected open access locations. This book introduces a systematic framework for understanding and investigating lexical variation, using a distributional semantics approach. Distributional semantics embodies the idea that the context in which a word occurs reveals the meaning of that word. In contemporary corpus linguistics, that idea takes shape in various types of quantitative analysis of the corpus contexts in which words appear. In this book, the authors explore how count-based token-level semantic vector spaces, as an advanced form of such a quantitative methodology, can be applied to the study of polysemy, lexical variation, and lectometry. What can distributional models reveal about meaning? How can they be used to analyse the semantic relationship between near-synonyms, and to identify strict synonymy? How can they contribute to the study of lexical variation as a sociolinguistic variable, and to the use of those variables to measure convergence or divergence between language varieties? To answer these questions, the book presents a comprehensive model of lexical and semantic variation, based on the combination of a semasiological, an onomasiological, and a lectal dimension. It explains the mechanism of distributional modelling, both informally and technically, and introduces workflows and corpus linguistic tools that implement a distributional perspective in lexical research. 
Combining a cognitive linguistic interest in meaning with a sociolinguistic interest in variation, the authors illustrate this distributional methodology using case studies of Dutch and Spanish lexical data that focus on the detection of polysemy, the interaction of semasiological and onomasiological change, and sociolinguistic issues of lexical standardization and pluricentricity. Throughout, they highlight both the advantages and disadvantages of a distributional methodology: on the one hand, it has great potential to be scaled up for lexical research; on the other, its outcome does not necessarily neatly correspond with what would traditionally be considered different senses.

Book Latent Variable Models of Distributional Lexical Semantics

Download or read book Latent Variable Models of Distributional Lexical Semantics written by Joseph Simon Reisinger and published by . This book was released on 2014 with total page 358 pages. Available in PDF, EPUB and Kindle. Book excerpt: In order to respond to increasing demand for natural language interfaces---and provide meaningful insight into user query intent---fast, scalable lexical semantic models with flexible representations are needed. Human concept organization is a rich phenomenon that has yet to be accounted for by a single coherent psychological framework: Concept generalization is captured by a mixture of prototype and exemplar models, and local taxonomic information is available through multiple overlapping organizational systems. Previous work in computational linguistics on extracting lexical semantic information from unannotated corpora does not provide adequate representational flexibility and hence fails to capture the full extent of human conceptual knowledge. In this thesis I outline a family of probabilistic models capable of capturing important aspects of the rich organizational structure found in human language that can predict contextual variation, selectional preference and feature-saliency norms to a much higher degree of accuracy than previous approaches. These models account for cross-cutting structure of concept organization---i.e. selective attention, or the notion that humans make use of different categorization systems for different kinds of generalization tasks---and can be applied to Web-scale corpora. Using these models, natural language systems will be able to infer more comprehensive semantic relations, which in turn may yield improved systems for question answering, text classification, machine translation, and information retrieval.

Book Linguistic Linked Data

    Book Details:
  • Author : Philipp Cimiano
  • Publisher : Springer Nature
  • Release : 2020-01-13
  • ISBN : 3030302253
  • Pages : 286 pages

Download or read book Linguistic Linked Data written by Philipp Cimiano and published by Springer Nature. This book was released on 2020-01-13 with total page 286 pages. Available in PDF, EPUB and Kindle. Book excerpt: This is the first monograph on the emerging area of linguistic linked data. Presenting a combination of background information on linguistic linked data and concrete implementation advice, it introduces and discusses the main benefits of applying linked data (LD) principles to the representation and publication of linguistic resources, arguing that LD does not look at a single resource in isolation but seeks to create a large network of resources that can be used together and uniformly, thus making more of each individual resource. The book describes how the LD principles can be applied to modelling language resources. The first part provides the foundation for understanding the remainder of the book, introducing the data models, ontology and query languages used as the basis of the Semantic Web and LD and offering a more detailed overview of the Linguistic Linked Data Cloud. The second part of the book focuses on modelling language resources using LD principles, describing how to model lexical resources using Ontolex-lemon, the lexicon model for ontologies, and how to annotate and address elements of text represented in RDF. It also demonstrates how to model annotations, and how to capture the metadata of language resources. Further, it includes a chapter on representing linguistic categories. In the third part of the book, the authors describe how language resources can be transformed into LD and how links can be inferred and added to the data to increase connectivity and linking between different datasets. They also discuss using LD resources for natural language processing. 
The last part describes concrete applications of the technologies: representing and linking multilingual wordnets, applications in digital humanities, and the discovery of language resources. Given its scope, the book is relevant for researchers and graduate students interested in topics at the crossroads of natural language processing / computational linguistics and the Semantic Web / linked data. It appeals to Semantic Web experts who are not proficient in applying the Semantic Web and LD principles to linguistic data, as well as to computational linguists who are used to working with lexical and linguistic resources and want to learn about a new paradigm for modelling, publishing, and exploiting linguistic resources.

Book Frames, Fields, and Contrasts

Download or read book Frames Fields and Contrasts written by Adrienne Lehrer and published by Routledge. This book was released on 2012-11-12 with total page 473 pages. Available in PDF, EPUB and Kindle. Book excerpt: Recently, there has been a surge of interest in the lexicon. The demand for a fuller and more adequate understanding of lexical meaning required by developments in computational linguistics, artificial intelligence, and cognitive science has stimulated a refocused interest in linguistics, psychology, and philosophy. Different disciplines have studied lexical structure from their own vantage points, and because scholars have only intermittently communicated across disciplines, there has been little recognition that there is a common subject matter. The conference on which this volume is based brought together interested thinkers across the disciplines of linguistics, philosophy, psychology, and computer science to exchange ideas, discuss a range of questions and approaches to the topic, consider alternative research strategies and methodologies, and formulate interdisciplinary hypotheses concerning lexical organization. The essay subjects discussed include: * alternative and complementary conceptions of the structure of the lexicon, * the nature of semantic relations and of polysemy, * the relation between meanings, concepts, and lexical organization, * critiques of truth-semantics and referential theories of meaning, * computational accounts of lexical information and structure, and * the advantages of thinking of the lexicon as ordered.

Book Lexical Semantics and Knowledge Representation in Multilingual Text Generation

Download or read book Lexical Semantics and Knowledge Representation in Multilingual Text Generation written by Manfred Stede and published by Springer Science & Business Media. This book was released on 2012-12-06 with total page 230 pages. Available in PDF, EPUB and Kindle. Book excerpt: In knowledge-based natural language generation, issues of formal knowledge representation meet with the linguistic problems of choosing the most appropriate verbalization in a particular situation of utterance. Lexical Semantics and Knowledge Representation in Multilingual Text Generation presents a new approach to systematically linking the realms of lexical semantics and knowledge represented in a description logic. For language generation from such abstract representations, lexicalization is taken as the central step: when choosing words that cover the various parts of the content representation, the principal decisions on conveying the intended meaning are made. A preference mechanism is used to construct the utterance that is best tailored to parameters representing the context. Lexical Semantics and Knowledge Representation in Multilingual Text Generation develops the means for systematically deriving a set of paraphrases from the same underlying representation with the emphasis on events and verb meaning. Furthermore, the same mapping mechanism is used to achieve multilingual generation: English and German output are produced in parallel, on the basis of an adequate division between language-neutral and language-specific (lexical and grammatical) knowledge. Lexical Semantics and Knowledge Representation in Multilingual Text Generation provides detailed insights into designing the representations and organizing the generation process. Readers with a background in artificial intelligence, cognitive science, knowledge representation, linguistics, or natural language processing will find a model of language production that can be adapted to a variety of purposes.

Book Representation Learning for Natural Language Processing

Download or read book Representation Learning for Natural Language Processing written by Zhiyuan Liu and published by Springer Nature. This book was released on 2020-07-03 with total page 319 pages. Available in PDF, EPUB and Kindle. Book excerpt: This open access book provides an overview of the recent advances in representation learning theory, algorithms and applications for natural language processing (NLP). It is divided into three parts. Part I presents the representation learning techniques for multiple language entries, including words, phrases, sentences and documents. Part II then introduces the representation techniques for those objects that are closely related to NLP, including entity-based world knowledge, sememe-based linguistic knowledge, networks, and cross-modal entries. Lastly, Part III provides open resource tools for representation learning techniques, and discusses the remaining challenges and future research directions. The theories and algorithms of representation learning presented can also benefit other related domains such as machine learning, social network analysis, semantic Web, information retrieval, data mining and computational biology. This book is intended for advanced undergraduate and graduate students, post-doctoral fellows, researchers, lecturers, and industrial engineers, as well as anyone interested in representation learning and natural language processing.

Book Semantic Modeling for Natural Language Using Web Knowledge

Download or read book Semantic Modeling for Natural Language Using Web Knowledge written by Zhaohui Wu and published by . This book was released on 2016 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: Web knowledge bases, such as Wikipedia, Freebase, etc., provide high-quality structured semantics and textual content that could benefit various tasks in natural language understanding. A fundamental problem for modeling natural language is how to represent a linguistic item in a form that is both meaningful and computable. Typical approaches to this problem are distributional semantic models, which represent a linguistic item as a vector built from its distributional contexts in a corpus or knowledge base. Various distributional semantic models have been proposed, such as latent semantic analysis and explicit semantic analysis, which have been successfully applied to many natural language processing (NLP) and information retrieval (IR) applications. Despite a long history and a rich literature on distributional semantic modeling and its applications, several important issues have not been fully studied. First, most methods do not measure the semantics of a word in its context, but give equal importance to different occurrences of a word in different contexts. This raises the first question: how to accurately model the differences between a word's uses in different contexts. Besides the lack of context-awareness, most methods treat all occurrences of a word the same and build a single vector to represent the meaning of a word, which fails to capture any ambiguity, leading to the second question: how to build a sense-aware semantic profile for a word that gives accurate sense-specific prototypes in terms of both quantity and quality. In addition, most methods make a closed-world assumption, assuming that the corpus or knowledge base used for semantic modeling is complete and can give the best representation for a word. 
However, this might not always hold, especially for the out-of-knowledge-base (KB) world. This motivates our third line of research on modeling out-of-KB entities, such as entities defined only in local content, emerging novel entities, or news event entities that do not exist in the knowledge base. Motivated by these observations, this dissertation seeks to address the three limitations concerning context-awareness, sense-awareness, and the closed-world assumption by exploring a set of research problems. Specifically, we study measuring word importance in context, sense-aware semantic analysis (SaSA), and out-of-KB entity modeling. We propose a new metric, namely context-aware term informativeness, and apply it to various applications including concept/keyword extraction and back-of-the-book index generation. We present SaSA, a multi-prototype vector space model for word representation based on Wikipedia, and apply it to various text semantic relatedness measurement tasks. Finally, we study the modeling of typical cases of out-of-KB entities, including locally defined entities, emerging novel entities, and news event entities. For locally defined entities, we apply a within-document search approach to find the most descriptive passages profiling a given entity and evaluate it on fictional character name entities. For novel entities, we explore various feature spaces for modeling the entities and build a high-precision novel entity detector that can be applied as a pre-filtering step to improve existing entity linking systems. For news events, we study how to build a news event knowledge base by taking advantage of Wikipedia current events.

Book Lexical Representations and the Semantics of Complementation

Download or read book Lexical Representations and the Semantics of Complementation written by Jean Mark Gawron and published by . This book was released on 1983 with total page 380 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Lexical Semantics and Knowledge Representation

Download or read book Lexical Semantics and Knowledge Representation written by James Pustejovsky and published by Springer. This book was released on 1992-09-10 with total page 388 pages. Available in PDF, EPUB and Kindle. Book excerpt: Recent work on formal methods in computational lexical semantics has had the effect of bringing many linguistic formalisms much closer to the knowledge representation languages used in artificial intelligence. Formalisms are now emerging which may be more expressive and formally better understood than many knowledge representation languages. The interests of computational linguists now extend to include such domains as commonsense knowledge, inheritance, default reasoning, collocational relations, and even domain knowledge. With such an extension of the normal purview of "linguistic" knowledge, one may question whether there is any logical justification for distinguishing between lexical semantics and commonsense reasoning. This volume explores the question from several methodological and theoretical perspectives. What emerges is a clear consensus that the notion of the lexicon and lexical knowledge assumed in earlier linguistic research is grossly inadequate and fails to address the deeper semantic issues required for natural language analysis.

Book Identifying Lexical Relationships and Entailments with Distributional Semantics

Download or read book Identifying Lexical Relationships and Entailments with Distributional Semantics written by Stephen Creig Roller and published by . This book was released on 2017 with total page 250 pages. Available in PDF, EPUB and Kindle. Book excerpt: Many modern efforts in Natural Language Understanding depend on rich and powerful semantic representations of words. Systems for sophisticated logical and textual reasoning often depend heavily on lexical resources to provide critical information about relationships between words, but these lexical resources are expensive to create and maintain, and are never fully comprehensive. Distributional Semantics has long offered methods for automatically inducing meaning representations from large corpora, with little or no annotation effort. The resulting representations are valuable proxies of semantic similarity, but simply knowing that two words are similar cannot tell us their relationship, or whether one entails the other. In this thesis, we consider how methods from Distributional Semantics may be applied to the difficult task of lexical entailment, where one must predict whether one word implies another. We approach this by presenting contributions in the areas of hypernymy detection, lexical relationship prediction, lexical substitution, and textual entailment. We propose novel experimental setups, models, analyses, and interpretations, which ultimately provide us with a better understanding of both the nature of lexical entailment and the information available within distributional representations.
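Supervised approaches to lexical relationship prediction of the kind described above commonly represent a candidate word pair by combining the two words' distributional vectors before handing them to a classifier; concatenation and vector-difference features are standard choices. A minimal sketch (the specific vectors below are hypothetical, and this is not Roller's exact model):

```python
def pair_features(vec_x, vec_y):
    # Represent the candidate pair (x, y) as concat(x, y, x - y):
    # the difference term lets a linear classifier pick up on
    # directional (asymmetric) relations such as hypernymy.
    diff = [a - b for a, b in zip(vec_x, vec_y)]
    return vec_x + vec_y + diff

# Hypothetical 3-d distributional vectors for two words.
cat = [0.7, 0.2, 0.4]
animal = [0.8, 0.1, 0.3]

features = pair_features(cat, animal)
print(len(features))  # 3 + 3 + 3 = 9 features
```

A classifier (e.g. logistic regression) trained on such features over labeled pairs can then predict whether the relation holds, which is the general setup the thesis analyzes and improves upon.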

Book Computational Modeling of Narrative

Download or read book Computational Modeling of Narrative written by Inderjeet Mani and published by Springer Nature. This book was released on 2022-05-31 with total page 124 pages. Available in PDF, EPUB and Kindle. Book excerpt: The field of narrative (or story) understanding and generation is one of the oldest in natural language processing (NLP) and artificial intelligence (AI), which is hardly surprising, since storytelling is such a fundamental and familiar intellectual and social activity. In recent years, the demands of interactive entertainment and interest in the creation of engaging narratives with life-like characters have provided a fresh impetus to this field. This book provides an overview of the principal problems, approaches, and challenges faced today in modeling the narrative structure of stories. The book introduces classical narratological concepts from literary theory and their mapping to computational approaches. It demonstrates how research in AI and NLP has modeled character goals, causality, and time using formalisms from planning, case-based reasoning, and temporal reasoning, and discusses fundamental limitations in such approaches. It proposes new representations for embedded narratives and fictional entities, for assessing the pace of a narrative, and offers an empirical theory of audience response. These notions are incorporated into an annotation scheme called NarrativeML. The book identifies key issues that need to be addressed, including annotation methods for long literary narratives, the representation of modality and habituality, and characterizing the goals of narrators. It also suggests a future characterized by advanced text mining of narrative structure from large-scale corpora and the development of a variety of useful authoring aids. This is the first book to provide a systematic foundation that integrates together narratology, AI, and computational linguistics. 
It can serve as a narratology primer for computer scientists and an elucidation of computational narratology for literary theorists. It is written in a highly accessible manner and is intended for use by a broad scientific audience that includes linguists (computational and formal semanticists), AI researchers, cognitive scientists, computer scientists, game developers, and narrative theorists. Table of Contents: List of Figures / List of Tables / Narratological Background / Characters as Intentional Agents / Time / Plot / Summary and Future Directions

Book Axmedis 2008

    Book Details:
  • Author : Paolo Nesi
  • Publisher : Firenze University Press
  • Release : 2008
  • ISBN : 8884538106
  • Pages : 200 pages

Download or read book Axmedis 2008 written by Paolo Nesi and published by Firenze University Press. This book was released on 2008 with total page 200 pages. Available in PDF, EPUB and Kindle. Book excerpt: The present book covers topics both on fluvial and lagoon morphodynamics. The first part is dedicated to tidal environments. Topics include an overview of main morphological features and mechanisms of estuaries and tidal channels, a model devoted to investigating flow field patterns and bed topography in tidal meandering channels, and a comparison with recent observational evidence of meanders within different tidal environments. The general failure of the Bagnold hypothesis when applied to equilibrium bedload transport at even relatively modest transverse slopes is demonstrated. A new model is then proposed based on an empirical entrainment formulation of bed grains.

Book Approaches to Meaning

Download or read book Approaches to Meaning written by Daniel Gutzmann and published by BRILL. This book was released on 2014-08-28 with total page 363 pages. Available in PDF, EPUB and Kindle. Book excerpt: The basic claims of traditional truth-conditional semantics are that the semantic interpretation of a sentence is connected to the truth of that sentence in a situation, and that the meaning of the sentence is derived compositionally from the semantic values of its constituents and the rules that combine them. Both claims have been subject to an intense debate in linguistics and philosophy of language. The original research papers collected in this volume test the boundaries of this classic view from a linguistic and a philosophical point of view by investigating the foundational notions of composition, values and interpretation and their relation to the interfaces to other disciplines. They take the classical theories one step further and closer to a realistic semantic theory that covers speakers' intentions, the knowledge of discourse participants, the meaning of fiction and literature, as well as vague and paradoxical utterances. Ede Zimmermann is a pioneering researcher in semantics whose students, friends, and colleagues have collected in this volume an impressive set of studies at the interfaces of semantics. How do meanings interact with the context and with intentions and beliefs of the people conversing? How do meanings interact with other meanings in an extended discourse? How can there be paradoxical meanings? Researchers interested in semantics, pragmatics, philosophy of language, anyone interested in foundational and empirical issues of meaning, will find inspiration and instruction in this wonderful volume. Kai von Fintel, MIT Department of Linguistics