Alexandra Mayn
Open Mind (2025) 9: 89–120. Published: 20 January 2025.

Beliefs About the Speaker’s Reasoning Ability Influence Pragmatic Interpretation: Children and Adults as Speakers

Abstract
The cooperative principle states that communicators expect each other to be cooperative and to adhere to rational conversational principles. Do listeners keep track of the reasoning sophistication of the speaker and incorporate it into the inferences they derive? In two experiments, we asked participants to interpret ambiguous messages in the reference game paradigm; the messages, participants were told, were sent either by another adult or by a 4-year-old child. We found an effect of speaker identity: an ambiguous message was much more likely to be interpreted as an implicature when attributed to an adult, and much more likely to be interpreted literally when attributed to a child. We also observed substantial individual variability, which points to different beliefs and strategies among our participants. We discuss how these speaker effects can be modeled in the Rational Speech Act framework.
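For readers unfamiliar with the framework the abstract refers to, the sketch below implements the standard (vanilla) Rational Speech Act recursion for a reference game in the style of Frank and Goodman (2012): a literal listener conditions a uniform prior on literal truth, a pragmatic speaker softmax-chooses informative utterances, and a pragmatic listener inverts the speaker. The utterances, referents, and the rationality parameter alpha are illustrative placeholders, not the paper's stimuli or its speaker-identity model.

    import numpy as np

    # Toy reference game: rows are utterances, columns are referents.
    # truth[u, r] = 1 iff utterance u is literally true of referent r.
    truth = np.array([
        [1, 1, 0],   # "glasses": true of referents 0 and 1
        [0, 1, 1],   # "hat":     true of referents 1 and 2
    ], dtype=float)

    alpha = 1.0  # speaker rationality (illustrative value)

    def literal_listener(truth):
        # L0(r | u): uniform prior over referents, conditioned on literal truth.
        return truth / truth.sum(axis=1, keepdims=True)

    def pragmatic_speaker(truth, alpha):
        # S1(u | r) proportional to L0(r | u)^alpha: softmax over informativeness.
        s = literal_listener(truth) ** alpha
        return s / s.sum(axis=0, keepdims=True)

    def pragmatic_listener(truth, alpha):
        # L1(r | u) proportional to S1(u | r) * P(r), with a uniform prior over referents.
        s1 = pragmatic_speaker(truth, alpha)
        return s1 / s1.sum(axis=1, keepdims=True)

    # For "glasses", L1 assigns 2/3 to referent 0 and 1/3 to referent 1:
    # the implicature-style reading the abstract contrasts with a literal one.
    print(pragmatic_listener(truth, alpha))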
Open Mind (2023) 7: 156–178. Published: 01 June 2023.

High Performance on a Pragmatic Task May Not Be the Result of Successful Reasoning: On the Importance of Eliciting Participants’ Reasoning Strategies

Abstract
Formal probabilistic models, such as the Rational Speech Act model, are widely used for formalizing the reasoning involved in various pragmatic phenomena, and when a model achieves a good fit to experimental data, that is interpreted as evidence that the model successfully captures some of the underlying processes. Yet how can we be sure that participants’ performance on the task is the result of successful reasoning and not of some feature of the experimental setup? In this study, we carefully manipulate the properties of the stimuli that have been used in several pragmatics studies and elicit participants’ reasoning strategies. We show that certain biases in experimental design inflate participants’ performance on the task. We then repeat the experiment with a new version of the stimuli that is less susceptible to the identified biases, obtaining a somewhat smaller effect size and more reliable estimates of individual-level performance.
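As a hedged illustration of the model-fit step this abstract questions, the sketch below fits the RSA rationality parameter alpha by maximum likelihood to invented referent-choice counts; the toy game above gains a third utterance so that alpha is actually identifiable. Nothing here reproduces the study's data or analysis.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy reference game: rows are utterances, columns are referents.
    truth = np.array([
        [1, 1, 0],   # "glasses":  true of referents 0 and 1
        [0, 1, 1],   # "hat":      true of referents 1 and 2
        [0, 0, 1],   # "mustache": true of referent 2 only
    ], dtype=float)

    def pragmatic_listener(alpha):
        l0 = truth / truth.sum(axis=1, keepdims=True)   # L0(r | u)
        s1 = l0 ** alpha                                # unnormalized S1
        s1 = s1 / s1.sum(axis=0, keepdims=True)         # S1(u | r)
        return s1 / s1.sum(axis=1, keepdims=True)       # L1(r | u)

    # Hypothetical choice counts (rows: utterance heard, columns: referent
    # chosen). These numbers are invented for illustration only.
    counts = np.array([
        [40, 20,  0],
        [ 0, 40, 20],
        [ 0,  0, 60],
    ])

    def neg_log_lik(alpha):
        # Multinomial log-likelihood of the counts under L1( . | u; alpha).
        p = pragmatic_listener(alpha)
        return -np.sum(counts * np.log(p + 1e-12))

    fit = minimize_scalar(neg_log_lik, bounds=(0.01, 20.0), method="bounded")
    print(f"alpha_hat = {fit.x:.2f}")  # about 1.58 for these made-up counts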