1-20 of 210 Search Results
Series: Linguistic Inquiry Monographs
Publisher: The MIT Press
Published: 15 October 2024
DOI: 10.7551/mitpress/15453.001.0001
EISBN: 9780262379960
A novel theory of argument structure based on the order in which verbs and their arguments combine across a variety of languages and language families. Merge is the structure-building operation in Chomsky's Minimalist Program. In When Arguments Merge, Elise Newman develops a new Merge-based theory of the syntax of argument structure, taking inspiration from wh-questions. She uncovers new connections between disparate empirical phenomena and provides a unified analysis of patterns across many languages and language families, from Mayan to Bantu to Indo-European languages (among others). The result is a syntactic theory with a small inventory of features and categories that can combine in a limited number of ways, capturing the range of argument configurations that we find cross-linguistically in both declarative and interrogative contexts. Newman's novel approach to argument structure is based on the time at which different kinds of arguments merge and move in the verbal domain. Assuming that all kinds of Merge are driven by features, she proposes that subset relationships between elements bearing different sets of features can constrain the distribution of arguments in unexpected ways and that different feature bundles can predict unusual interactions between arguments in many contexts. The positions of arguments in different contexts have consequences for agreement alignment and case assignment, which are reflected in the Voice of the clause. Examining the order in which verbs and their arguments are combined, she explores the consequences of different orders of combination for the kinds of utterances observed across languages.
Series: Linguistic Inquiry Monographs
Publisher: The MIT Press
Published: 17 September 2024
DOI: 10.7551/mitpress/15436.001.0001
EISBN: 9780262379137
A new theory of argument structure, based on the syntactic operation Merge and presented through an in-depth analysis of properties of the English passive construction. In Principles of Argument Structure, Chris Collins investigates principles of argument structure in minimalist syntax through an in-depth analysis of properties of the English passive construction. He formulates a new theory of argument structure based on the only structure-building operation in minimalist syntax, Merge, which puts together two syntactic objects to form a larger one. This new theory should give rise to detailed cross-linguistic work on the syntactic and semantic properties of implicit arguments. Collins presents an update and defense of his influential 2005 theory of the passive, including a completely original theory of implicit arguments. He makes a direct empirical argument for the Theta-Criterion against various claims that it should be eliminated. He also discusses the conception of voice in syntactic theory, arguing, contrary to the widely accepted view, that VoiceP does not introduce external arguments. He shows how the “smuggling” approach to the passive extends naturally to the dative alternation, accounting for a number of striking c-command asymmetries. He compares syntactic and semantic approaches to argument structure, outlining conceptual problems with adopting formal semantics as the basis for a theory of argument structure. The book will be of interest not only to syntacticians and semanticists, but also to typologists investigating the cross-linguistic properties of the passive, psycholinguists and computer scientists working on natural language understanding, and philosophers thinking about the issue of “implicit content.” It includes an appendix that provides common-sense guidelines for doing syntactic research using internet data.
Series: The MIT Press Essential Knowledge series
Publisher: The MIT Press
Published: 06 August 2024
DOI: 10.7551/mitpress/15352.001.0001
EISBN: 9780262379182
A concise, accessible introduction to language endangerment and why it is one of the most urgent challenges of our times. 58% of the world's languages—or approximately 4,000 languages—are endangered. When we break this figure down, we realize that roughly ten percent of languages have fewer than ten language keepers. And if one language stops being used every three months, then in the next 100 years, if we do nothing, 400 more languages will become dormant. In Endangered Languages, Evangelia Adamou, a specialist in endangered languages and a learner of her own community language, Nashta, offers a sobering look at language endangerment and what is truly lost when a language disappears from usage. Combining recent advances from the Western scientific tradition—from the fields of linguistics, psycholinguistics, neurolinguistics, language attrition, population genetics, and natural language processing—and insights from Indigenous epistemology, theory, and ethics, Adamou examines a wealth of issues surrounding endangered languages. She discusses where endangered languages are found, including how they are faring in a digital world, why these languages are no longer used, and how communities can reclaim languages and keep them strong. Adamou also explains the impact of language continuity on community and individual health and well-being, the importance of language transmission in cultural transmission, and why language rights are essentially human rights. Drawing on varied examples from the Wampanoag Nation to Wales, Endangered Languages offers a powerful reminder of the crucial role every language has in the vitality and well-being of individuals, communities, and our world.
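The arithmetic behind these figures can be checked in a few lines. The sketch below is only a back-of-the-envelope illustration; the worldwide total of living languages is inferred from the percentages quoted above, not taken from the book.

# Back-of-the-envelope check of the figures quoted above (illustrative, not from the book).
# If 58% of the world's languages is roughly 4,000, the implied worldwide total is:
implied_total = 4000 / 0.58                      # about 6,900 languages

# "One language stops being used every three months" over the next 100 years:
languages_per_year = 12 / 3                      # 4 languages per year
dormant_in_century = languages_per_year * 100    # 400 languages, as stated above

print(round(implied_total), int(dormant_in_century))   # 6897 400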
Publisher: The MIT Press
Published: 12 December 2023
DOI: 10.7551/mitpress/15133.001.0001
EISBN: 9780262377263
A critical examination of the complex legacies of early Californian anthropology and linguistics for twenty-first-century communities. In January 2021, at a time when many institutions were reevaluating fraught histories, the University of California removed anthropologist and linguist Alfred Kroeber's name from a building on its Berkeley campus. Critics accused Kroeber of racist and dehumanizing practices that harmed Indigenous people; university leaders repudiated his values. In The Unnaming of Kroeber Hall, Andrew Garrett examines Kroeber's work in the early twentieth century and his legacy today, asking how a vigorous opponent of racism and advocate for Indigenous rights in his own era became a symbol of his university's failed relationships with Native communities. Garrett argues that Kroeber's most important work has been overlooked: his collaborations with Indigenous people throughout California to record their languages and stories. The Unnaming of Kroeber Hall offers new perspectives on the early practice of anthropology and linguistics and on its significance today and in the future. Kroeber's documentation was broader and more collaborative and multifaceted than is usually recognized. As a result, the records Indigenous people created while working with him are relevant throughout California as communities revive languages, names, songs, and stories. Garrett asks readers to consider these legacies, arguing that the University of California chose to reject critical self-examination when it unnamed Kroeber Hall.
Series: Linguistic Inquiry Monographs
Publisher: The MIT Press
Published: 07 March 2023
DOI: 10.7551/mitpress/14681.001.0001
EISBN: 9780262372862
The first attempt at a restrictive theory of the linear order of sentences and phrases of the world's languages, by one of the founders of cartographic syntax. Linearization, or the typical sequence of words in a sentence, varies tremendously from language to language. Why, for example, does the English phrase “a white table” need a different word order from the French phrase “une table blanche,” even though both refer to the same object? Guglielmo Cinque challenges the current understanding of word order variation, which assumes that word order can be dealt with simply by putting a head either before or after its complements and modifiers. The subtle variations in word order, he says, can provide a window into understanding the deeper structure of language and are in need of a sophisticated explanation. The bewildering variation in word order among the languages of the world, says Cinque, should not dissuade us from researching what, if anything, determines which orders are possible (and attested/attestable) and which orders are impossible (and not attested/nonattestable), both when they maximally conform to the “head-final” or “head-initial” types and when they depart from them to varying degrees. His aim is to develop a restrictive theory of word order variation—not just a way to derive the ideal head-initial and head-final word orders but also the mixed cases. In the absence of an explicit theory of linearization, Cinque provides a general approach to derive linear order from a hierarchical arrangement of constituents, specifically, by assuming a restrictive movement analysis that creates structures that can then be linearized by Richard S. Kayne's Linear Correspondence Axiom.
Publisher: The MIT Press
Published: 13 December 2022
DOI: 10.7551/mitpress/11276.001.0001
EISBN: 9780262371995
Natural phenomena, including human language, are not just series of events but are organized quasi-periodically; sentences have structure, and that structure matters. Howard Lasnik and Juan Uriagereka “were there” when generative grammar was being developed into the Minimalist Program. In this presentation of the universal aspects of human language as a cognitive phenomenon, they rationally reconstruct syntactic structure. In the process, they touch upon structure dependency and its consequences for learnability, nuanced arguments (including global ones) for structure presupposed in standard linguistic analyses, and a formalism to capture long-range correlations. For practitioners, the authors assess whether “all we need is Merge,” while for outsiders, they summarize what needs to be covered when attempting to have structure “emerge.” Reconstructing the essential history of what is at stake when arguing for sentence scaffolding, the authors cover a range of larger issues, from the traditional computational notion of structure (the strong generative capacity of a system) and how far down into words it reaches to whether its variants, as evident across the world's languages, can arise from non-generative systems. While their perspective stems from Noam Chomsky's work, it does so critically, separating rhetoric from results. They consider what they do to be empirical, with the formalism being only a tool to guide their research (of course, they want sharp tools that can be falsified and have predictive power). Reaching out to skeptics, they invite potential collaborations that could arise from mutual examination of one another's work, as they attempt to establish a dialogue beyond generative grammar.
Publisher: The MIT Press
Published: 22 November 2022
DOI: 10.7551/mitpress/14795.001.0001
EISBN: 9780262372749
What is it about humans that makes language possible, and what is it about language that makes us human? If you are reading this, you have done something that only our species has evolved to do. You have acquired a natural language. This book asks, How has this changed us? Where scholars have long wondered what it is about humans that makes language possible, N. J. Enfield and Jack Sidnell ask instead, What is it about humans that is made possible by language? In Consequences of Language, their objective is to understand what modern language really is and to identify its logical and conceptual consequences for social life. Central to this undertaking is the concept of intersubjectivity, the open sharing of subjective experience. There is, Enfield and Sidnell contend, a uniquely human form of intersubjectivity, and it is essentially intertwined with language in two ways: a primary form of intersubjectivity was necessary for language to have begun evolving in our species in the first place, and then language, through its defining reflexive properties, transformed the nature of our intersubjectivity. In the authors' analysis, social accountability—the bedrock of society—is grounded in this linguistically transformed, enhanced kind of intersubjectivity. The account of the language-mind-society connection put forward in Consequences of Language is one of unprecedented reach, suggesting new connections across disciplines centrally concerned with language—from anthropology and philosophy to sociology and cognitive science—and among those who would understand the foundational role of language in making us human.
Publisher: The MIT Press
Published: 22 November 2022
DOI: 10.7551/mitpress/12128.001.0001
EISBN: 9780262371780
How meaning works—from monkey calls to human language, from spoken language to sign language, from gestures to music—and how meaning is connected to truth. We communicate through language, connecting what we mean to the words we say. But humans convey meaning in other ways as well, with facial expressions, hand gestures, and other methods. Animals, too, can get their meanings across without words. In What It All Means, linguist Philippe Schlenker explains how meaning works, from monkey calls to human language, from spoken language to sign language, from gestures to music. He shows that these extraordinarily diverse types of meaning can be studied and compared within a unified approach—one in which the notion of truth plays a central role. “It's just semantics” is often said dismissively. But Schlenker shows that semantics—the study of meaning—is an unsung success of modern linguistics, a way to investigate some of the deepest questions about human nature using tools from the empirical and formal sciences. Drawing on fifty years of research in formal semantics, Schlenker traces how meaning comes to life. After investigating meaning in primate communication, he explores how human meanings are built, in some cases using sign languages as a guide to the workings of our inner “logic machine.” Schlenker explores how these meanings can be enriched by iconicity in sign language and by gestures in spoken language, and then turns to more abstract forms of iconicity to understand the meaning of music. He concludes by examining paradoxes, which—being neither true nor false—test the very limits of meaning.
Series: The MIT Press Essential Knowledge series
Publisher: The MIT Press
Published: 10 May 2022
DOI: 10.7551/mitpress/12824.001.0001
EISBN: 9780262368155
An accessible introduction to the study of language in the brain, covering language processing, language acquisition, literacy, and language disorders. Neurolinguistics, the study of language in the brain, describes the anatomical structures (networks of neurons in the brain) and physiological processes (ways for these networks to be active) that allow humans to learn and use one or more languages. It draws on neuroscience, linguistics—particularly theoretical linguistics—and other disciplines. In this volume in the MIT Press Essential Knowledge series, Giosuè Baggio offers an accessible introduction to the fundamentals of neurolinguistics, covering language processing, language acquisition, literacy, and speech and language disorders. Baggio first surveys the evolution of the field, describing discoveries by Paul Broca, Carl Wernicke, Noam Chomsky, and others. He discusses mapping language in “brain time” and “brain space” and the constraints of neurolinguistic models. Considering language acquisition, he explains that a child is never a “blank slate”: infants and young children are only able to acquire specific aspects of language in specific stages of cognitive development. He addresses the neural consequences of bilingualism; literacy, discussing how forms of visual language in the brain differ from forms of auditory language; aphasia and the need to understand language disorders in behavioral, functional, and neuroanatomical terms; neurogenetics of language; and the neuroethology of language, tracing the origins of the neural and behavioral building blocks of human linguistic communication to the evolution of avian, mammalian, and primate brains.
Series: Linguistic Inquiry Monographs
Publisher: The MIT Press
Published: 03 May 2022
DOI: 10.7551/mitpress/14421.001.0001
EISBN: 9780262369091
A proposal that syntax extends into the domain of discourse, linking core syntax to the conversational context. In Syntax in the Treetops, Shigeru Miyagawa proposes that syntax extends into the domain of discourse by making linkages between core syntax and the conversational participants. Miyagawa draws on evidence for this extended syntactic structure from a wide variety of languages, including Basque, Japanese, Italian, Magahi, Newari, Romanian, and Spanish, as well as the language of children with autism. His proposal for what happens at the highest level of the tree structure used by linguists to represent the hierarchical relationships within sentences—“in the treetops”—offers a unique contribution to the new area of study sometimes known as “syntacticization of discourse.” Miyagawa's main point is that syntax provides the basic framework that makes possible the performance of a speech act and the conveyance of meaning; although the role that syntax plays for speech acts is modest, it is critical. He proposes that the speaker-addressee layer and the Commitment Phrase (the speaker's commitment to the addressee regarding the truthfulness of the proposition) occur together in the syntactic treetops. In succeeding chapters, Miyagawa examines the workings of each layer of the tree and how they interact.
Publisher: The MIT Press
Published: 03 May 2022
DOI: 10.7551/mitpress/14237.001.0001
EISBN: 9780262369053
Two distinguished linguists on language, the history of science, misplaced euphoria, surprising facts, and potentially permanent mysteries. In The Secrets of Words, influential linguist Noam Chomsky and his longtime colleague Andrea Moro have a wide-ranging conversation, touching on such topics as language and linguistics, the history of science, and the relation between language and the brain. Moro draws Chomsky out on today's misplaced euphoria about artificial intelligence (Chomsky sees “lots of hype and propaganda” coming from Silicon Valley), the study of the brain (Chomsky points out that findings from brain studies in the 1950s never made it into that era's psychology), and language acquisition by children. Chomsky in turn invites Moro to describe his own experiments, which proved that there exist impossible languages for the brain, languages that show surprising properties and reveal unexpected secrets of the human mind. Chomsky once said, “It is important to learn to be surprised by simple facts”—“an expression of yours that has represented a fundamental turning point in my own personal life,” says Moro—and this is something of a theme in their conversation. Another theme is that not everything can be known; there may be permanent mysteries, about language and other matters. Not all words will give up their secrets.
Publisher: The MIT Press
Published: 15 March 2022
DOI: 10.7551/mitpress/13964.001.0001
EISBN: 9780262367653
An innovative account that brings together cognitive science, ethnography, and literary history to examine patterns of “mindreading” in a wide range of literary works. For over four thousand years, writers have been experimenting with what cognitive scientists call “mindreading”: constantly devising new social contexts for making their audiences imagine complex mental states of characters and narrators. In The Secret Life of Literature, Lisa Zunshine uncovers these mindreading patterns, which have, until now, remained invisible to both readers and critics, in works ranging from The Epic of Gilgamesh to Invisible Man. Bringing together cognitive science, ethnography, and literary studies, this engaging book transforms our understanding of literary history. Central to Zunshine's argument is the exploration of mental states “embedded” within each other, as, for instance, when Ellison's Invisible Man is aware of how his white Communist Party comrades pretend not to understand what he means when they want to reassert their position of power. Paying special attention to how race, class, and gender inform literary embedments, Zunshine contrasts this dynamic with real-life patterns studied by cognitive and social psychologists. She also considers community-specific mindreading values and looks at the rise and migration of embedment patterns across genres and national literary traditions, noting particularly the use of deception, eavesdropping, and shame as plot devices. Finally, she investigates mindreading in children's literature. Stories for children geared toward different stages of development, she shows, provide cultural scaffolding for initiating young readers into a long-term engagement with the secret life of literature.
Publisher: The MIT Press
Published: 15 February 2022
DOI: 10.7551/mitpress/11210.001.0001
EISBN: 9780262367356
The proposal of a semantics for quotations using explanatory notions drawn from philosophical theories of pictures. In Quotations as Pictures, Josef Stern develops a semantics for quotations using explanatory notions drawn from philosophical theories of pictures. He offers the first sustained analysis of the practice of quotation proper, as opposed to mentioning. Unlike other accounts that treat quotation as mentioning, Quotations as Pictures argues that the two practices have independent histories, that they behave differently semantically, that the inverted commas employed in both mentioning and quotation are homonymous, that so-called mixed quotation is nothing but subsentential quotation, and that the major problem of quotation is to explain its dual reference or meaning—its ordinary meaning and its metalinguistic reference to the quoted phrase attributed to the quoted subject. Stern argues that the key to understanding quotation is the idea that quotations are pictures or have a pictorial character. As a phenomenon where linguistic competence meets a nonlinguistic symbolic ability, the pictorial, quotation is a combination of features drawn from the two different symbol systems of language and pictures, which explains the exceptional and sometimes idiosyncratic data about quotation. In light of this analysis of verbal quotation, in the last chapters Stern analyzes scare quotation as a nonliteral expressive use of the inverted commas and explores the possibility of quotation in pictures themselves.
Publisher: The MIT Press
Published: 08 February 2022
DOI: 10.7551/mitpress/10413.001.0001
EISBN: 9780262543194
An introduction to the range of current theoretical approaches to the prosody of spoken utterances, with practical applications of those theories. Prosody is an extremely dynamic field, with a rapid pace of theoretical development and a steady expansion of its influence beyond linguistics into such areas as cognitive psychology, neuroscience, computer science, speech technology, and even the medical profession. This book provides a set of concise and accessible introductions to each major theoretical approach to prosody, describing its structure and implementation and its central goals and assumptions as well as its strengths and weaknesses. Most surveys of basic questions in prosody are written from the perspective of a single theoretical framework. This volume offers the only summary of the full range of current theoretical approaches, with practical applications of each theory and critical commentary on selected chapters. The current abundance of theoretical approaches has sometimes led to apparent conflicts that may stem more from terminological differences, or from differing notions of what theories of prosody are meant to achieve, than from actual conceptual disagreement. This volume confronts this pervasive problem head on, by having each chapter address a common set of questions on phonology, meaning, phonetics, typology, psychological status, and transcription. Commentary is added as counterpoint to some chapters, with responses by the chapter authors, giving a taste of current debate in the field. Contributors Amalia Arvaniti, Jonathan Barnes, Mara Breen, Laura C. Dilley, Grzegorz Dogil, Martine Grice, Nina Grønnum, Daniel Hirst, Sun-Ah Jun, Jelena Krivokapić, D. Robert Ladd, Fang Liu, Piet Mertens, Bernd Möbius, Gregor Möhler, Oliver Niebuhr, Francis Nolan, Janet Pierrehumbert, Santitham Prom-on, Antje Schweitzer, Stefanie Shattuck-Hufnagel, Alice Turk, Yi Xu
Series: Open Handbooks in Linguistics
Publisher: The MIT Press
Published: 18 January 2022
DOI: 10.7551/mitpress/12200.001.0001
EISBN: 9780262366076
A guide to principles and methods for the management, archiving, sharing, and citing of linguistic research data, especially digital data. “Doing language science” depends on collecting, transcribing, annotating, analyzing, storing, and sharing linguistic research data. This volume offers a guide to linguistic data management, engaging with current trends toward the transformation of linguistics into a more data-driven and reproducible scientific endeavor. It offers both principles and methods, presenting the conceptual foundations of linguistic data management and a series of case studies, each of which demonstrates a concrete application of abstract principles in a current practice. In part 1, contributors bring together knowledge from information science, archiving, and data stewardship relevant to linguistic data management. Topics covered include implementation principles, archiving data, finding and using datasets, and the valuation of time and effort involved in data management. Part 2 presents snapshots of practices across various subfields, with each chapter presenting a unique data management project with generalizable guidance for researchers. The Open Handbook of Linguistic Data Management is an essential addition to the toolkit of every linguist, guiding researchers toward making their data FAIR: Findable, Accessible, Interoperable, and Reusable.
Series: Linguistic Inquiry Monographs
Publisher: The MIT Press
Published: 19 October 2021
DOI: 10.7551/mitpress/13925.001.0001
EISBN: 9780262366106
A novel, systematic theory of adjunct control, explaining how and why adjuncts shift between obligatory and nonobligatory control. Control in adjuncts involves a complex interaction of syntax, semantics, and pragmatics, which so far has resisted systematic analysis. In this book, Idan Landau offers the first comprehensive account of adjunct control. Extending the framework developed in his earlier book, A Two-Tiered Theory of Control, Landau analyzes ten different types of adjuncts and shows that they fall into two categories: those displaying strict obligatory control (OC) and those alternating between OC and nonobligatory control (NOC). He explains how and why adjuncts shift between OC and NOC, unifying their syntactic, semantic, and pragmatic properties. Landau shows that the split between the two types of adjuncts reflects a fundamental distinction in the semantic type of the adjunct: property (OC) or proposition (NOC), a distinction independently detectable by the adjunct's tolerance to a lexical subject. After presenting a fully compositional account of controlled adjuncts, Landau tests and confirms the specific configurational predictions for each type of adjunct. He describes the interplay between OC and NOC in terms of general principles of competition—both within the grammar and outside of it, in the pragmatics and in the processing module—shedding new light on classical puzzles in the acquisition of adjunct control by children. Along the way, he addresses a range of empirical phenomena, including implicit arguments, event control, logophoricity, and topicality.
Publisher: The MIT Press
Published: 17 August 2021
DOI: 10.7551/mitpress/11865.001.0001
EISBN: 9780262366069
Explorations of language development in different types of learner populations and across various languages. This volume examines language development in different types of learner populations and across various languages. The contributors analyze experimental studies of child and adult language acquisition, heritage language development, bilingualism, and language disorders. They consider theoretical and methodological issues; language development in children, discussing topics that range from gestures to errors in person and number agreement; and development and attrition of (morpho)syntactic constructions in second language learners, bilinguals, and Alzheimer's patients. The approach is “crosslinguistic” in three senses of the word: the contributors offer analyses of acquisition phenomena in different languages; they consider “crosslinguistic influence,” or the potential effects of multiple languages on one another in the mind of the same speaker; and (in a novel use of the term, proposed by the editors) the chapters bring together theoretical and methodological approaches pertinent to the linguistics of language development in children, adults, and heritage speakers. Contributors Jennifer Austin, María Blume, Kimberly Cassidy, Cristina Dye, Suzanne Flynn, Claire Foley, David Giancaspro, Lila R. Gleitman, Yarden Kedar, Elaine C. Klein, D. Terence Langendoen, Christen N. Madsen II, Gita Martohardjono, Reiko Mazuka, Rebecca Nappa, Jerome Packard, Anna Papafragou, Silvia Perez-Cortes, Ian Phillips, Liliana Sánchez, Lynn M. Santelmann, Richard G. Schwartz, Janet Cohen Sherman, John C. Trueswell, Virginia Valian, Yun Yao
Publisher: The MIT Press
Published: 16 March 2021
DOI: 10.7551/mitpress/11577.001.0001
EISBN: 9780262362979
A data-driven exploration of children's early language learning across different languages, providing an empirical reference and a new theoretical framework. This book examines variability and consistency in children's language learning across different languages and cultures, drawing on Wordbank, an open database with data from more than 75,000 children and twenty-nine languages or dialects. This big data approach makes the book the most comprehensive cross-linguistic analysis to date of early language learning. Moreover, its data-driven picture of which aspects of language learning are consistent across languages suggests constraints on the nature of children's language learning mechanisms. The book provides both a theoretical framework for scholars of language learning, language, and human cognition, and a resource for future research. Wordbank archives data from parents' reports about their children's language learning using instruments in the MacArthur-Bates Communicative Development Inventory (CDI); its goal is to make CDI data available for study and analysis. After an overview of practical and theoretical issues, each of the book's empirical chapters applies a particular analysis to the Wordbank dataset, considering such topics as vocabulary size, demographic variation, syntactic and semantic categories, and the relationship between vocabulary growth and grammar. The final three chapters draw on the preceding chapters to quantify variability and consistency, consider the bird's eye view of language acquisition afforded by the data, and reflect on methodology.
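As a rough illustration of the kind of cross-linguistic analysis the book describes, the sketch below summarizes a hypothetical tabular export of Wordbank CDI administration records with pandas; the file name and column names (language, age_months, vocab_size) are assumptions made for illustration, not the actual Wordbank schema.

import pandas as pd

# Hypothetical export of Wordbank CDI administration records; the file name and
# columns (language, age_months, vocab_size) are illustrative assumptions.
df = pd.read_csv("wordbank_administrations.csv")

# Median productive vocabulary size by language and age in months: one simple way
# to compare the shape of early vocabulary growth across languages.
growth = df.groupby(["language", "age_months"])["vocab_size"].median().reset_index()

# Within-language variability at each age, one of the themes the book quantifies.
spread = df.groupby(["language", "age_months"])["vocab_size"].quantile([0.25, 0.75]).unstack()

print(growth.head())
print(spread.head())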
Publisher: The MIT Press
Published: 02 March 2021
DOI: 10.7551/mitpress/13618.001.0001
EISBN: 9780262363136
A human-inspired, linguistically sophisticated model of language understanding for intelligent agent systems. One of the original goals of artificial intelligence research was to endow intelligent agents with human-level natural language capabilities. Recent AI research, however, has focused on applying statistical and machine learning approaches to big data rather than attempting to model what people do and how they do it. In this book, Marjorie McShane and Sergei Nirenburg return to the original goal of recreating human-level intelligence in a machine. They present a human-inspired, linguistically sophisticated model of language understanding for intelligent agent systems that emphasizes meaning—the deep, context-sensitive meaning that a person derives from spoken or written language. With Linguistics for the Age of AI, McShane and Nirenburg offer a roadmap for creating language-endowed intelligent agents (LEIAs) that can understand, explain, and learn. They describe the language-understanding capabilities of LEIAs from the perspectives of cognitive modeling and system building, emphasizing “actionability”—which involves achieving interpretations that are sufficiently deep, precise, and confident to support reasoning about action. After detailing their microtheories for topics such as semantic analysis, basic coreference, and situational reasoning, McShane and Nirenburg turn to agent applications developed using those microtheories and evaluations of a LEIA's language understanding capabilities. McShane and Nirenburg argue that the only way to achieve human-level language understanding by machines is to place linguistics front and center, using statistics and big data as contributing resources. They lay out a long-term research program that addresses linguistics and real-world reasoning together, within a comprehensive cognitive architecture. The open access edition of this book was made possible by generous funding from Arcadia – a charitable fund of Lisbet Rausing and Peter Baldwin.
Series: Linguistic Inquiry Monographs
Publisher: The MIT Press
Published: 02 February 2021
DOI: 10.7551/mitpress/12800.001.0001
EISBN: 9780262361262
An argument that Merge is binary but its binarity refers to syntactic positions rather than objects. In this book, Barbara Citko and Martina Gračanin-Yuksek examine the constraints on Merge—the basic structure-building operation in minimalist syntax—from a multidominant perspective. They maintain that Merge is binary, but argue that the binarity of Merge refers to the syntactic positions Merge relates: what has typically been formulated as a constraint that prevents Merge from combining more than two syntactic objects is instead a constraint that prevents Merge from relating more than two syntactic positions. Citko and Gračanin-Yuksek investigate the interactions between the two types of Merge that can generate multidominant structures: Parallel Merge and Internal Merge. Taking Right Node Raising (RNR) as representative of Parallel Merge and Across-the-Board (ATB) extraction as representative of Parallel Merge + Internal Merge, they show that ATB extraction is subject to a parallelism constraint that RNR is not. They show that this difference follows from the Binarity Constraint on Merge (BiCoM), the requirement that prevents Merge from relating more than two syntactic positions within a single derivation, which is obeyed in RNR but not in ATB extraction. They further show that BiCoM is also operative in languages with more flexible word order, such as Croatian and Polish, and that structural syncretism alleviates BiCoM violations in these languages as well.