Abstract
We present DREAM, the first dialogue-based multiple-choice reading comprehension data set. Collected from English as a Foreign Language examinations designed by human experts to evaluate the comprehension level of Chinese learners of English, our data set contains 10,197 multiple-choice questions for 6,444 dialogues. In contrast to existing reading comprehension data sets, DREAM is the first to focus on in-depth multi-turn multi-party dialogue understanding. DREAM is likely to present significant challenges for existing reading comprehension systems: 84% of answers are non-extractive, 85% of questions require reasoning beyond a single sentence, and 34% of questions also involve commonsense knowledge.
We apply several popular neural reading comprehension models that primarily exploit surface information within the text and find them to, at best, just barely outperform a rule-based approach. We next investigate the effects of incorporating dialogue structure and different kinds of general world knowledge into both rule-based and (neural and non-neural) machine learning-based reading comprehension models. Experimental results on the DREAM data set show the effectiveness of dialogue structure and general world knowledge. DREAM is available at https://dataset.org/dream/.
1 Introduction
Recently a significant amount of research has focused on the construction of large-scale multiple-choice (Lai et al., 2017; Khashabi et al., 2018; Ostermann et al., 2018) and extractive (Hermann et al., 2015; Hill et al., 2016; Rajpurkar et al., 2016; Trischler et al., 2017) reading comprehension data sets (Section 2). Source documents in these data sets have generally been drawn from formal written texts such as news, fiction, and Wikipedia articles, which are commonly considered well-written, accurate, and neutral.
With the goal of advancing research in machine reading comprehension and facilitating dialogue understanding, we construct and present DREAM — the first multiple-choice Dialogue-based REAding comprehension exaMination data set. We collect 10,197 questions for 6,444 multi-turn multi-party dialogues from English language exams, which are carefully designed by educational experts (e.g., English teachers) to assess the comprehension level of Chinese learners of English. Each question is associated with three answer options, exactly one of which is correct. (See Table 1 for an example.) DREAM covers a variety of topics and scenarios in daily life such as conversations on the street, on the phone, in a classroom or library, at the airport or the office or a shop (Section 3).
| Dialogue1 (D1) | |
|---|---|
| W: | Tom, look at your shoes. How dirty they are! You must clean them. |
| M: | Oh, mum, I just cleaned them yesterday. |
| W: | They are dirty now. You must clean them again. |
| M: | I do not want to clean them today. Even if I clean them today, they will get dirty again tomorrow. |
| W: | All right, then. |
| M: | Mum, give me something to eat, please. |
| W: | You had your breakfast in the morning, Tom, and you had lunch at school. |
| M: | I am hungry again. |
| W: | Oh, hungry? But if I give you something to eat today, you will be hungry again tomorrow. |
| Q1 | Why did the woman say that she wouldn't give him anything to eat? |
| A. | Because his mother wants to correct his bad habit. ★ |
| B. | Because he had lunch at school. |
| C. | Because his mother wants to leave him hungry. |
Based on our analysis of DREAM, we argue that dialogue-based reading comprehension is at least as difficult as existing non-conversational counterparts. In particular, answering 34% of DREAM questions requires unspoken commonsense knowledge, such as implicit scene information. This might be due to the nature of dialogues: for efficient oral communication, people rarely state obvious world knowledge (Forbes and Choi, 2017) such as "Christmas Day is celebrated on December 25th." Understanding the social implications of an utterance and inferring a speaker's intentions are also regularly required for answering dialogue-based questions. The dialogue content in Table 1, for example, is itself insufficient for readers to recognize the intention of the female speaker (W) in the first question (Q1). However, world knowledge is rarely considered in state-of-the-art reading comprehension models (Tay et al., 2018; Wang et al., 2018b).
Moreover, dialogue-based questions can cover information imparted across multiple turns involving multiple speakers. In DREAM, approximately 85% of questions can only be answered by considering the information from multiple sentences. For example, to answer Q1 in Table 3 later in the paper regarding the date of birth of the male speaker (M), the supporting sentences (in bold) include “You know, tomorrow is Christmas Day” from the female speaker and “…I am more than excited about my birthday, which will come in two days” from the male speaker. Compared with “multiple-sentence questions” in traditional reading comprehension data sets, DREAM further requires an understanding of the turn-based structure of dialogue—for example, for aligning utterances with their corresponding speakers.
As only 16% of correct answer options are text spans from the source documents, we primarily explore rule-based methods and state-of-the-art neural models designed for multiple-choice reading comprehension (Section 4). We find first that neural models designed for non-dialogue-based reading comprehension (Chen et al., 2016; Dhingra et al., 2017; Wang et al., 2018b) do not fare well: the highest achieved accuracy is 45.5%, only slightly better than the accuracy (44.6%) of a simple lexical baseline (Richardson et al., 2013). For the most part, these models fundamentally exploit only surface-level information from the source documents. Considering the above-mentioned challenges, however, we hypothesize that incorporating general world knowledge and aspects of the dialogue structure would allow a better understanding of the dialogues. As a result, we modify our baseline systems to include (1) general world knowledge, in the form of ConceptNet relations (Speer et al., 2017) and a pre-trained language model (Radford et al., 2018), and (2) speaker information for each utterance. Experiments show the effectiveness of these factors on the lexical baselines as well as on neural and non-neural machine learning approaches: we obtain up to an 11.9% absolute gain in accuracy over the highest performance achieved by the state-of-the-art reading comprehension model (Wang et al., 2018b), which mainly relies on explicit surface-level information in the text (Section 5).
Finally, we see a significant gap between the best automated approach (59.5%) and human ceiling performance (98.6%) on the DREAM data set, providing further evidence that dialogue-based reading comprehension is a very challenging task. We hope this gap also inspires the research community to develop methods for the dialogue-based reading comprehension task.
2 Related Work
We divide reading comprehension data sets into three categories based on the types of answers: extractive, abstractive, and multiple choice.
2.1 Extractive and Abstractive Data Sets
In recent years, we have seen increased interest in the construction of large-scale cloze/span-based reading comprehension data sets (Hermann et al., 2015; Hill et al., 2016; Onishi et al., 2016; Rajpurkar et al., 2016; Bajgar et al., 2016; Nguyen et al., 2016; Trischler et al., 2017; Joshi et al., 2017; Choi et al., 2018). We regard them as extractive since candidate answers are usually short spans from the source documents. State-of-the-art neural models with attention mechanisms already achieve very high performance on them based on local lexical information. Recently, researchers have constructed spoken span-based data sets (Lee et al., 2018; Li et al., 2018) by applying text-to-speech technologies to, or recruiting human speakers for, formal written document-based data sets such as SQuAD (Rajpurkar et al., 2016). Some span-based conversational data sets are constructed from a relatively small number of dialogues from television shows (Chen and Choi, 2016; Ma et al., 2018).
Considering the limitations in extractive data sets, answers in abstractive data sets such as MS MARCO (Nguyen et al., 2016), SearchQA (Dunn et al., 2017), and NarrativeQA (Kočiskỳ et al., 2018) are human-crowdsourced based on source documents or summaries. Concurrently, there is a growing interest in conversational reading comprehension such as CoQA (Reddy et al., 2018). Because annotators tend to copy spans as answers (Reddy et al., 2018), the majority of answers are still extractive in these data sets (Table 2). Compared to the data sets mentioned above, most of the correct answer options (83.7%) in DREAM are free-form text.
| | SQuAD | NarrativeQA | CoQA | RACE | DREAM (this work) |
|---|---|---|---|---|---|
| Answer type | extractive | abstractive | abstractive | multiple-choice | multiple-choice |
| Source document type | written text | written text | written text | written text | dialogue |
| # of source documents | 536 | 1,572 | 8,399 | 27,933 | 6,444 |
| Average answer length (in tokens) | 3.2 | 4.7 | 2.7 | 5.3 | 5.3 |
| Extractive (%) | 100.0 | 73.6 | 66.8 | 13.0 | 16.3 |
| Abstractive (%) | 0.0 | 26.4 | 33.2 | 87.0 | 83.7 |
| Dialogue2 (D2) | |
|---|---|
| W: | Hey, Mike. Where have you been? I didn't see you around these days? |
| M: | I was hiding in my office. My boss gave me loads of work to do, and I tried to finish it before my birthday. Anyway, I am done now. Thank goodness! How is everything going with you? |
| W: | I'm quite well. You know, tomorrow is Christmas Day. Do you have any plans? |
| M: | Well, to tell you the truth, I am more than excited about my birthday, which will come in two days. I am going to visit my parents-in-law with my wife. |
| W: | Wow, sounds great. |
| M: | Definitely! This is my first time to spend my birthday with them. |
| W: | Do they live far away from here? |
| M: | A little bit. We planned to take the train, but considering the travel peak, my wife strongly suggested that we go to the airport right after we finish our work this afternoon. How about you? What's your holiday plan? |
| W: | Well, our situations are just the opposite. My parents-in-law will come to my house, and they wish to stay at home and have a quiet Christmas Day. So I have to call my friends to cancel our party that will be held at my house. |
| M: | You'll experience a quite different and lovely holiday. Enjoy your Christmas! |
| W: | Thanks, the same to you! |
| Q1 | What is the date of the man's birthday? |
| A. | 25th, December. |
| B. | 26th, December. ★ |
| C. | 27th, December. |
| Q2 | How will the man go to his wife's parents' home? |
| A. | By train. |
| B. | By bus. |
| C. | By plane. ★ |
| Q3 | What is the probable relationship between the two speakers? |
| A. | Husband and wife. |
| B. | Friends. ★ |
| C. | Parent-in-law and son-in-law. |
2.2 Multiple-Choice Data Sets
We primarily discuss multiple-choice data sets, in which answer options are not restricted to text spans in the given document; instead, most of the correct answer options are abstractive (Table 2). Constructing multiple-choice data sets by crowdsourcing requires extensive human effort for problem generation (i.e., writing questions, correct answer options, and distractors). Besides surface matching, a significant portion of questions require multiple-sentence reasoning and external knowledge (Richardson et al., 2013; Mostafazadeh et al., 2016; Khashabi et al., 2018; Ostermann et al., 2018).
Besides crowdsourcing, some data sets are collected from examinations designed by educational experts (Penas et al., 2014; Shibuki et al., 2014; Tseng et al., 2016; Clark et al., 2016; Lai et al., 2017; Mihaylov et al., 2018), which aim to test human examinees. These exams contain various types of complicated questions such as math word problems, summarization, logical reasoning, and sentiment analysis. Such questions are usually easy to grade because we can adopt objective evaluation criteria such as accuracy. Moreover, questions from examinations are generally clean and of high quality. Therefore, the human ceiling performance on this kind of data set (e.g., 94.5% accuracy on RACE [Lai et al., 2017] and 98.6% on DREAM) is much higher than that of data sets built by crowdsourcing.
In comparison, we present the first multiple-choice dialogue-based data set from examinations that contains a large percentage of questions that require multiple sentence inference. To the best of our knowledge, DREAM also contains the largest number of questions involving commonsense reasoning compared with other examination data sets.
3 Data
In this section, we describe how we construct DREAM (Section 3.1) and provide a detailed analysis of this data set (Section 3.2).
3.1 Collection Methodology
We collect dialogue-based comprehension problems from a variety of English language exams (including practice exams) such as National College Entrance Examination, College English Test, and Public English Test,1 which are designed by human experts to assess either the listening or reading comprehension level of Chinese English learners in high schools and colleges (for individuals aged 12–22 years). All the problems in DREAM are freely accessible online for public usage. Each problem consists of a dialogue and a series of multiple-choice questions. To ensure every question is associated with exactly three answer options, we drop wrong answer option(s) randomly for questions with more than three options. We remove duplicate problems and randomly split the data at the problem level, with 60% train, 20% development, and 20% test.
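The option truncation and problem-level split described above can be sketched as follows. This is a minimal Python illustration under stated assumptions; the dictionary fields (`id`, `questions`, `options`, `answer`) are hypothetical and do not reflect the schema of the released data.

```python
import random

def truncate_options(question, rng):
    """Keep the correct option plus two randomly chosen distractors,
    so that every question has exactly three answer options."""
    distractors = [o for o in question["options"] if o != question["answer"]]
    rng.shuffle(distractors)
    options = [question["answer"]] + distractors[:2]
    rng.shuffle(options)
    question["options"] = options
    return question

def split_problems(problems, rng, ratios=(0.6, 0.2, 0.2)):
    """Deduplicate (here by an illustrative problem identifier) and split
    at the problem (dialogue) level into 60% train, 20% dev, 20% test."""
    unique = list({p["id"]: p for p in problems}.values())
    rng.shuffle(unique)
    n_train = int(ratios[0] * len(unique))
    n_dev = int(ratios[1] * len(unique))
    return unique[:n_train], unique[n_train:n_train + n_dev], unique[n_train + n_dev:]
```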
3.2 Data Analysis
We summarize the statistics of DREAM in Table 4 and data split in Table 5. Compared with existing data sets built from formal written texts, the vocabulary size is relatively small since spoken English by its nature makes greater use of high-frequency words and needs a smaller vocabulary for efficient real-time communication (Nation, 2006).
| Metric | Value |
|---|---|
| # of answer options per question | 3 |
| # of turns | 30,183 |
| Avg./Max. # of questions per dialogue | 1.6 / 10 |
| Avg./Max. # of speakers per dialogue | 2.0 / 7 |
| Avg./Max. # of turns per dialogue | 4.7 / 48 |
| Avg./Max. option length (in tokens) | 5.3 / 21 |
| Avg./Max. question length (in tokens) | 8.6 / 24 |
| Avg./Max. dialogue length (in tokens) | 85.9 / 1,290 |
| Vocabulary size | 13,037 |
| | Train | Dev | Test | All |
|---|---|---|---|---|
| # of dialogues | 3,869 | 1,288 | 1,287 | 6,444 |
| # of questions | 6,116 | 2,040 | 2,041 | 10,197 |
We categorize questions into two main categories according to the types of knowledge required to answer them: matching and reasoning.
- Matching: A question is entailed or paraphrased by exactly one sentence in a dialogue, and the answer can be extracted from that sentence. For example, we can easily verify the correctness of the question-answer pair ("What kind of room does the man want to rent?", "A two-bedroom apartment.") based on the sentence "M: I'm interested in renting a two-bedroom apartment." This category is further divided into two subcategories, word matching and paraphrasing, in previous work (Chen et al., 2016; Trischler et al., 2017).
- Reasoning: Questions that cannot be answered from the surface meaning of a single sentence belong to this category. We further define four subcategories as follows.
  - Summary: Answering this kind of question requires the whole picture of a dialogue, such as the topic of the dialogue or the relation between the speakers (e.g., D2-Q3 in Table 3). Questions such as "What are the two speakers talking about?" and "What are the speakers probably doing?" are frequently asked under this category.
  - Logic: Answering questions in this category requires logical reasoning: we usually need to identify logically implied relations among multiple sentences in a dialogue. To reduce ambiguity during annotation, we regard a question as a logic question if it can only be solved by considering the content of multiple sentences and it does not belong to the summary subcategory, which involves all the sentences in a dialogue. Following this definition, both D2-Q1 and D2-Q2 in Table 3 belong to this category.
  - Arithmetic: Inferring the answer requires arithmetic knowledge (e.g., D2-Q1 in Table 3 requires computing 25 − 1 + 2 = 26).
  - Commonsense: Answering questions under this subcategory requires external commonsense knowledge that cannot be obtained from the dialogue itself. For instance, all the questions in Table 3 fall under this category; D2-Q1 and D2-Q2 belong to both logic and commonsense since they require multiple sentences as well as commonsense knowledge. DREAM covers multiple types of commonsense knowledge, such as the well-known properties of a highly recognizable entity (e.g., D2-Q1 in Table 3), the prominent relationship between two speakers (e.g., D2-Q3 in Table 3), knowledge of or shared by a particular culture (e.g., when a speaker says "Cola? I think it tastes like medicine.", she/he probably means "I don't like cola."), and the cause-effect relation between events (e.g., D1-Q1 in Table 1). We refer readers to LoBue and Yates (2011) for detailed definitions.
Table 6 shows the question type distribution labeled by two human annotators on 25% of the questions randomly sampled from the development and test sets. Besides the previously defined question categories, we also report the percentage of questions that require reasoning over multiple sentences (i.e., summary or logic questions) and the percentage of questions that require surface-level understanding or commonsense/math knowledge based on the content of a single sentence. Because a question can belong to multiple reasoning subcategories, the percentages of the reasoning subcategories do not sum to the percentage of reasoning questions. The Cohen's kappa coefficient is 0.67 on the development set and 0.68 on the test set.
| Question Type | Dev (%) | Test (%) | Dev + Test (%) |
|---|---|---|---|
| Matching | 13.0 | 10.3 | 11.7 |
| Reasoning | 87.0 | 89.7 | 88.3 |
| Summary | 8.4 | 15.9 | 12.1 |
| Logic | 74.5 | 70.4 | 72.5 |
| Arithmetic | 5.1 | 3.6 | 4.4 |
| Commonsense | 31.5 | 35.9 | 33.7 |
| Single sentence | 17.1 | 13.7 | 15.4 |
| Multiple sentences | 82.9 | 86.3 | 84.6 |
Dialogues in DREAM are generally clean and mostly error-free because they are carefully designed by educational experts. However, it is not guaranteed that each dialogue is written or proofread by a native speaker. Besides, dialogues written for exam purposes tend to be more proper and less informal. To roughly estimate the quality of dialogues in DREAM and the differences between these dialogues and more casual ones in movies or television shows, we run the proofreading tool Grammarly2 on all the dialogues from the annotated 25% of instances of the development set and on the same amount (20.7k tokens) of dialogues from Friends, a famous American television show whose transcripts are commonly used for dialogue understanding (Chen and Choi, 2016; Ma et al., 2018). As shown in Table 7, DREAM dialogues contain fewer spelling mistakes, and their overall score is slightly higher than that of the dialogues in Friends. Based on the evaluated instances, articles and verb forms are the two most frequent grammar error categories (10 and 8, respectively, out of 23) in DREAM. Besides, the language in DREAM tends to be less precise, as indicated by the number of vocabulary suggestions. For example, experts tend to use expressions such as "really hot," "really beautiful," "very bad," and "very important" rather than more appropriate yet more advanced adjectives that might hinder the reading comprehension of language learners with smaller vocabularies. According to the explanations provided by the tool, the readability scores for both data sets fall into the same category: "Your text is very simple and easy to read, likely to be understood by an average 5th-grader (age 10)."
| Metric | DREAM | Friends |
|---|---|---|
| # of spelling errors | 11 | 146 |
| # of grammar errors | 23 | 16 |
| # of conciseness suggestions | 6 | 2 |
| # of vocabulary suggestions | 18 | 3 |
| General Performance | 98.0 | 95.0 |
| Readability Score | 93.7 | 95.3 |
4 Approaches
We formally introduce the dialogue-based reading comprehension task and notations in Section 4.1. To investigate the effects of different kinds of general world knowledge and dialogue structure, we incorporate them into rule-based approaches (Section 4.2) as well as non-neural (Section 4.3) and neural (Section 4.4) machine learning approaches. We describe in detail preprocessing and training in Section 4.5.
4.1 Problem Formulation and Notations
We start with a formal definition of the dialogue-based multiple-choice reading comprehension task. An n-turn dialogue D is defined as D = {s_1: t_1, s_2: t_2, …, s_n: t_n}, where s_i represents the speaker ID (e.g., "M" and "W"), and t_i represents the text of the ith turn. Let Q denote the text of a question, and O_{1..3} denote the texts of its three answer options. The task is to choose the correct answer option from O_{1..3} associated with question Q given dialogue D. In this paper, we regard this task as a three-class classification problem, with each class corresponding to an answer option.
For convenience, we define the following notations, which will be referred to in the rest of this paper. Let D^s denote the turns spoken by speaker s in D. Formally, D^s = {s_{i_1}: t_{i_1}, s_{i_2}: t_{i_2}, …, s_{i_m}: t_{i_m}}, where {i_1, i_2, …, i_m} = {i | s_i = s} and i_1 < i_2 < … < i_m. In particular, s = * denotes all the speakers. W_{D^s} and W_{O_i} denote the ordered sets of running words (excluding punctuation marks) in D^s and O_i, respectively. Questions designed for dialogue-based reading comprehension often focus on a particular speaker. If exactly one speaker is mentioned in a question, we use s_Q to denote this target speaker; otherwise, s_Q = *. For example, given the dialogue in Table 3, s_Q = "M" for Questions 1 and 2, and s_Q = * for Question 3.
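To make the formulation concrete, the following minimal Python sketch represents a single DREAM instance and the speaker-restricted view D^s defined above; the class and field names are illustrative only, not part of the released data format.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DreamExample:
    """One multiple-choice question attached to a dialogue D."""
    dialogue: List[Tuple[str, str]]  # ordered turns (speaker ID s_i, turn text t_i)
    question: str                    # Q
    options: List[str]               # O_1..O_3, exactly three answer options
    label: int                       # index (0..2) of the correct option

def turns_by_speaker(example: DreamExample, speaker: str) -> List[Tuple[str, str]]:
    """D^s: the turns spoken by `speaker`; speaker '*' returns the whole dialogue D."""
    if speaker == "*":
        return list(example.dialogue)
    return [(s, t) for s, t in example.dialogue if s == speaker]
```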
4.2 Rule-Based Approaches
We first attempt to incorporate dialogue structure information into the sliding window (SW) approach, a rule-based method developed by Richardson et al. (2013). This approach matches a bag of words constructed from a question Q and one of its answer options O_i against the given document and calculates a TF-IDF-style matching score for each answer option.
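The following is a minimal sketch of the sliding-window scoring idea of Richardson et al. (2013), applied to the concatenated dialogue text. It uses the inverse-count weighting of the original approach; the whitespace tokenizer and function names are simplifying assumptions, not the exact implementation used in this paper.

```python
import math
from collections import Counter

def sliding_window_score(passage_tokens, question_tokens, option_tokens):
    """Max over windows of the summed inverse-count weights of passage words
    shared with the bag built from the question and one answer option."""
    target = set(question_tokens) | set(option_tokens)
    counts = Counter(passage_tokens)
    ic = {w: math.log(1.0 + 1.0 / counts[w]) for w in counts}  # inverse-count weight
    width = max(len(target), 1)
    best = 0.0
    for start in range(len(passage_tokens)):
        window = passage_tokens[start:start + width]
        best = max(best, sum(ic[w] for w in window if w in target))
    return best

def answer(passage, question, options):
    def tok(s):
        return s.lower().split()  # whitespace tokenizer as a simplifying stand-in
    p, q = tok(passage), tok(question)
    scores = [sliding_window_score(p, q, tok(o)) for o in options]
    return max(range(len(options)), key=lambda i: scores[i])
```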
4.3 Feature-Based Classifier
To explore what features are effective for dialogue understanding, we first consider a gradient boosting decision tree (GBDT) classifier. Besides the conventional bag-of-words features, we primarily focus on features related to general world knowledge and dialogue structure.
- Bag of words of each answer option.
- Features inspired by rule-based approaches: We adopt the features introduced in Section 4.2, including speaker-independent scores (i.e., sw_i^* and d_i^*) and speaker-focused scores (i.e., sw_i^{s_Q} and d_i^{s_Q}).
- Matching position: p_{1..3}^{s_Q} and p_{1..3}^*, where p_i^s is the last position (as a percentage of the dialogue length) of a word in D^s that is also mentioned in O_i, or 0 if no word in D^s is mentioned in O_i. We consider matching position because of the existence of concessions and negotiations in dialogues (Amgoud et al., 2007): we assume that facts or opinions expressed near the end of a dialogue tend to be more critical for answering a question.
- Pointwise mutual information (PMI): pmi_{max,1..3}^{s_Q}, pmi_{avg,1..3}^{s_Q}, pmi_{max,1..3}^*, and pmi_{avg,1..3}^*, where pmi_{max,i}^s and pmi_{avg,i}^s denote the maximum and average pmi(w_1, w_2) over w_1 ∈ W_{D^s} and w_2 ∈ W_{O_i}, and pmi(w_1, w_2) is defined as

  pmi(w_1, w_2) = log [ C_2(w_1, w_2) / (C_1(w_1) · C_1(w_2)) ]   (9)

  C_1(w) denotes the frequency of word w in external corpora (we use Reddit posts [Tan and Lee, 2015]), and C_2(w_1, w_2) denotes the co-occurrence frequency of words w_1 and w_2 within a distance < K in the external corpora. We use PMI to evaluate the relatedness between the content of an answer option and the target-speaker-focused context based on co-occurrences of words in external corpora, inspired by previous studies on narrative event chains (Chambers and Jurafsky, 2008). (A minimal sketch of this feature appears after this list.)
- ConceptNet relations (CR): cr_{1..3, 1..|R|}, where R = {r_1, r_2, …} is the set of ConceptNet relation types (e.g., "CapableOf" and "PartOf"). cr_{i,j} is the number of relation triples (w_1, r_j, w_2) that appear in ConceptNet (Speer et al., 2017), where w_1 is a word in answer option O_i, w_2 is a word in D, and r_j ∈ R. Similar to the motivation for using PMI, we use CR to capture the association between an answer option and the source dialogue based on raw co-occurrence counts in the commonsense knowledge base.
- ConceptNet embeddings (CE): Besides the lexical similarity based on string matching, we also calculate cs_{1..3}^* and cs_{1..3}^{s_Q}, where cs_i^* and cs_i^{s_Q} represent the maximum cosine similarity between O_i and consecutive words of the same length in D and D^{s_Q}, respectively (Expression (6) in Section 4.2). We use ConceptNet embeddings (Speer et al., 2017) because they leverage the general world knowledge graph.
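As referenced in the PMI item above, the following is a minimal sketch of how the PMI feature of Expression (9) could be computed; the toy co-occurrence counter stands in for the Reddit corpus statistics, and `window` plays the role of the distance bound K. Function names are illustrative assumptions.

```python
import math
from collections import Counter

def build_counts(corpus_sentences, window=10):
    """C1: unigram counts; C2: co-occurrence counts within `window` tokens."""
    c1, c2 = Counter(), Counter()
    for sent in corpus_sentences:
        toks = sent.lower().split()
        c1.update(toks)
        for i, w1 in enumerate(toks):
            for w2 in toks[i + 1:i + window]:   # pairs at distance < window
                c2[frozenset((w1, w2))] += 1
    return c1, c2

def pmi(w1, w2, c1, c2):
    """Expression (9): log of co-occurrence count over the product of unigram counts."""
    co = c2.get(frozenset((w1, w2)), 0)
    if co == 0 or c1[w1] == 0 or c1[w2] == 0:
        return 0.0
    return math.log(co / (c1[w1] * c1[w2]))

def pmi_features(option_words, context_words, c1, c2):
    """pmi_max and pmi_avg between an answer option and a (speaker-focused) context."""
    scores = [pmi(w1, w2, c1, c2) for w1 in context_words for w2 in option_words]
    return (max(scores), sum(scores) / len(scores)) if scores else (0.0, 0.0)
```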
4.4 End-To-End Neural Network
Our end-to-end neural model is based on a generative pre-trained language model (LM). We follow the framework of finetuned transformer LM (FTLM) (Radford et al., 2018) and make modifications for dialogue-based reading comprehension.
Because the original FTLM framework already leverages rich linguistic information from a large unlabeled corpus, which can be regarded as a type of tacit general world knowledge, we investigate whether additional dialogue structure can further improve this strong baseline. We propose a speaker embedding to better capture dialogue structure. Specifically, in the original framework, given an input context (u_{−k}, …, u_{−1}) of the transformer, the encoding of u_{−i} is we(u_{−i}) + pe(i), where we(⋅) is the word embedding and pe(⋅) is the position embedding. When adapting the pre-trained model to DREAM, we change the encoding to we(u_{−i}) + pe(i) + se(u_{−i}, s_Q), where the speaker embedding se(u_{−i}, s_Q) is (a) 0 if the token u_{−i} is not in the dialogue (i.e., it is a start/end/delimiter token or a token in the question/option); (b) e_target if the token is spoken by s_Q; and (c) e_rest if the token is in the dialogue but not spoken by s_Q. e_target and e_rest are trainable and initialized randomly. We show the overall framework in Figure 1.
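A minimal PyTorch-style sketch of the modified input encoding we(u_{−i}) + pe(i) + se(u_{−i}, s_Q) follows. It assumes a GPT-style transformer with learned position embeddings; module and argument names are illustrative and not those of the released implementation.

```python
import torch
import torch.nn as nn

class DialogueInputEncoder(nn.Module):
    """Token encoding = word embedding + position embedding + speaker embedding,
    where the speaker embedding distinguishes target-speaker turns, other turns,
    and non-dialogue tokens (question/option/special tokens)."""
    def __init__(self, vocab_size, max_len, d_model):
        super().__init__()
        self.we = nn.Embedding(vocab_size, d_model)   # word embeddings
        self.pe = nn.Embedding(max_len, d_model)      # position embeddings
        # index 0: not in dialogue (kept at zero), 1: spoken by s_Q, 2: other speakers
        self.se = nn.Embedding(3, d_model, padding_idx=0)

    def forward(self, token_ids, speaker_ids):
        # token_ids, speaker_ids: (batch, seq_len); speaker_ids take values in {0, 1, 2}
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        return self.we(token_ids) + self.pe(positions) + self.se(speaker_ids)
```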
4.5 Preprocessing and Training Details
For all the models, we perform coreference resolution to determine speaker mentions of s_Q based on simple heuristics. In particular, we map the three most common speaker abbreviations that appear in dialogues (i.e., "M"; "W" and "F") to their eight most common corresponding mentions in questions (i.e., "man," "boy," "he," and "his"; "woman," "girl," "she," and "her"). We keep speaker abbreviations unchanged, since, in our experiments, neither replacing them with their corresponding full forms nor removing them improves performance.
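A minimal sketch of this heuristic follows, using the abbreviation-to-mention lists named above; the function name and the exact matching rule are assumptions for illustration.

```python
SPEAKER_MENTIONS = {
    "M": {"man", "boy", "he", "his"},
    "W": {"woman", "girl", "she", "her"},
    "F": {"woman", "girl", "she", "her"},
}

def target_speaker(question_tokens, speakers_in_dialogue):
    """Return s_Q if exactly one speaker is mentioned in the question, else '*'."""
    mentioned = {s for s in speakers_in_dialogue
                 if SPEAKER_MENTIONS.get(s, set()) & set(question_tokens)}
    return mentioned.pop() if len(mentioned) == 1 else "*"
```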
For the neural model described in Section 4.4, most of our parameter settings follow Radford et al. (2018). We adopt the same preprocessing procedure and use their publicly released language model, which is pre-trained on the BooksCorpus data set (Zhu et al., 2015). We set the batch size to 8, the language model weight λ to 2, and the maximum number of training epochs to 10.
For the other models, we use the following preprocessing steps. We tokenize and lowercase the corpus, convert number words to numeric digits, normalize time expressions to 24-hour numeric form, and handle negation by removing interrogative sentences that receive "no" as the reply. We use the gradient boosting classifier implemented in the scikit-learn toolkit (Pedregosa et al., 2011), setting the number of boosting iterations to 600 and keeping the rest of the hyperparameters unchanged. The distance upper bound K for PMI is set to 10.
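A minimal sketch of two of these normalization steps (number words to digits, and simple "hh:mm am/pm" time expressions to 24-hour form) follows; it covers only a few easy patterns and is not the full preprocessing pipeline used in this paper.

```python
import re

NUM_WORDS = {"one": "1", "two": "2", "three": "3", "four": "4", "five": "5",
             "six": "6", "seven": "7", "eight": "8", "nine": "9", "ten": "10",
             "twenty": "20", "thirty": "30"}

def normalize(text):
    # number words -> digits (single, unpunctuated tokens only)
    text = " ".join(NUM_WORDS.get(w, w) for w in text.lower().split())

    # simple 12-hour times like "10:50 pm" -> 24-hour form "22:50"
    def to_24h(m):
        hour, minute, half = int(m.group(1)), m.group(2), m.group(3)
        if half == "pm" and hour != 12:
            hour += 12
        if half == "am" and hour == 12:
            hour = 0
        return f"{hour:02d}:{minute}"

    return re.sub(r"\b(\d{1,2}):(\d{2})\s*(am|pm)\b", to_24h, text)
```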
We perform several runs of machine learning models (Section 4.3 and Section 4.4) with randomness introduced by different random seeds and/or GPU non-determinism and select the model or models (for ensemble) that perform best on the development set.
5 Experiment
5.1 Baselines
We implement several baselines, including rule-based methods and state-of-the-art neural models.
- Word Matching: This strong baseline (Yih et al., 2013) selects the answer option that has the highest count of overlapping words with the given dialogue. (A minimal sketch appears after this list.)
- Sliding Window: We implement the sliding window approach (i.e., arg max_i sw_i^*) and its distance-based variation DSW (i.e., arg max_i (sw_i^* − d_i^*)) (Richardson et al., 2013) introduced in Section 4.2.
- Enhanced Distance-Based Sliding Window (DSW ++): We also use general world knowledge and speaker-focused information to improve the original sliding window baseline, as formulated in Expression (8) (Section 4.2).
- Stanford Attentive Reader: This neural baseline compares each candidate answer (i.e., entity) representation to the question-aware document representation built with an attention mechanism (Hermann et al., 2015; Chen et al., 2016). Lai et al. (2017) add a bilinear operation to compare document and answer option representations to answer multiple-choice questions.
- Gated-Attention Reader: This baseline builds multiplicative question-specific document representations based on a gated-attention mechanism (Dhingra et al., 2017), which are then compared to each answer option (Lai et al., 2017).
- Co-Matching: This state-of-the-art multiple-choice reading comprehension model explicitly treats the question and answer option as two sequences and jointly matches them against a given document (Wang et al., 2018b).
- Finetuned Transformer LM: This is a general task-agnostic model introduced in Section 4.4, which achieves the best reported performance on several tasks requiring multi-sentence reasoning (Radford et al., 2018).
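As referenced in the Word Matching item above, a minimal sketch of that baseline follows; tokenization and stop-word handling are simplified.

```python
def word_matching(dialogue_text, options):
    """Pick the answer option with the most word types overlapping the dialogue."""
    dialogue_words = set(dialogue_text.lower().split())
    overlaps = [len(set(o.lower().split()) & dialogue_words) for o in options]
    return max(range(len(options)), key=lambda i: overlaps[i])
```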
We do not investigate other ways of leveraging pre-trained deep models such as adding ELMo representations (Peters et al., 2018) as additional features to a neural model since recent studies show that directly fine-tuning a pre-trained language model such as FTLM is significantly superior on multiple-choice reading comprehension tasks (Radford et al., 2018; Chen et al., 2019). We do not apply more recent extractive models such as AOA (Cui et al., 2017) and QANet (Yu et al., 2018) since they aim at precisely locating a span in a document. When adapted to solve questions with abstractive answer options, extractive models generally tend to perform less well (Chen et al., 2016; Dhingra et al., 2017; Lai et al., 2017).
5.2 Results and Analysis
We report the performance of the baselines introduced in Section 5.1 and of our proposed approaches in Table 8. We report the average accuracy of two annotators as human performance. The proportion of valid questions (i.e., unambiguous questions with a unique correct answer option) among those manually checked by annotators on the annotated development and test sets is regarded as the human ceiling performance.
| Method | Dev | Test |
|---|---|---|
| Random | 32.8 | 33.4 |
| Word Matching (WM) (Yih et al., 2013) | 41.7 | 42.0 |
| Sliding Window (SW) (Richardson et al., 2013) | 42.6 | 42.5 |
| Distance-Based Sliding Window (DSW) (Richardson et al., 2013) | 44.4 | 44.6 |
| Stanford Attentive Reader (SAR) (Chen et al., 2016) | 40.2 | 39.8 |
| Gated-Attention Reader (GAR) (Dhingra et al., 2017) | 40.5 | 41.3 |
| Co-Matching (CO) (Wang et al., 2018b) | 45.6 | 45.5 |
| Finetuned Transformer LM (FTLM) (Radford et al., 2018) | 55.9 | 55.5 |
| Our approaches: | | |
| DSW ++ (DSW w/ Dialogue Structure and ConceptNet Embedding) | 51.4 | 50.1 |
| GBDT ++ (GBDT w/ Features of Dialogue Structure and General World Knowledge) | 53.3 | 52.8 |
| FTLM ++ (FTLM w/ Speaker Embedding) | 57.6 | 57.4 |
| Ensemble of 3 FTLM ++ | 58.1 | 58.2 |
| Ensemble of 1 GBDT ++ and 3 FTLM ++ | 59.6 | 59.5 |
| Human Performance | 93.9 ★ | 95.5 ★ |
| Ceiling Performance | 98.7 ★ | 98.6 ★ |
Surface matching is insufficient. Experimental results show that neural models that primarily exploit surface-level information (i.e., SAR, GAR, and CO) attain a performance level close to that of simple rule-based approaches (i.e., WM, SW, and DSW). The highest accuracy achieved by CO is 45.5%, a similar level of performance to the rule-based method DSW (44.6%).
It is helpful to incorporate general world knowledge and dialogue structure. We see a significant gain of 5.5% in accuracy when enhancing DSW with general world knowledge from ConceptNet embeddings and with speaker-focused information (Section 4.2). FTLM, which leverages rich external linguistic knowledge from thousands of books, already achieves a much higher accuracy (55.5%) than previous state-of-the-art machine comprehension models, indicating the effectiveness of general world knowledge. Experimental results show that our best single model, FTLM ++, significantly outperforms FTLM (p-value = 0.03), illustrating the usefulness of additional dialogue structure. Compared with the state-of-the-art neural reader Co-Matching, which primarily explores surface-level information (45.5%), the tacit general world knowledge (in the pre-trained language model) and dialogue structure in FTLM ++ lead to an absolute gain of 11.9% in accuracy.
Ensembling different types of methods can bring further improvements. We use the majority vote strategy to obtain the ensemble model performance. Although GBDT ++ (52.8%) itself does not outperform FTLM ++, GBDT ++ can serve as a supplement to FTLM ++ because they leverage different types of general world knowledge and model architectures. We achieve the highest accuracy (59.5%) by ensembling one GBDT ++ and three FTLM ++.
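A minimal sketch of majority-vote ensembling over per-question predictions follows; the tie-breaking rule (favoring the earliest model in the list) is an assumption rather than a detail stated here.

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: per-model predicted option indices (0..2) for one question."""
    counts = Counter(predictions)
    best = counts.most_common(1)[0][1]
    for p in predictions:            # break ties by model order (an assumption)
        if counts[p] == best:
            return p

# e.g., one GBDT ++ prediction followed by three FTLM ++ predictions for a question
print(majority_vote([1, 2, 2, 0]))   # -> 2
```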
5.3 Ablation Tests
| Method | Accuracy | Δ |
|---|---|---|
| DSW ++ | 51.4 | − |
| − dialogue structure | 50.0 | −1.4 |
| − CE | 46.7 | −4.7 |
| GBDT ++ | 53.3 | − |
| − bag of words | 51.6 | −1.7 |
| − rule-based features | 51.2 | −2.1 |
| − matching position | 53.0 | −0.3 |
| − dialogue structure | 51.9 | −1.4 |
| − PMI | 51.4 | −1.9 |
| − CR | 52.7 | −0.6 |
| − CE | 52.7 | −0.6 |
| − PMI, CR, CE | 47.1 | −6.2 |
| FTLM ++ | 57.6 | − |
| − speaker embedding | 55.9 | −1.7 |
| − LM pre-training | 36.2 | −21.4 |
| | Dialogue Structure | General World Knowledge |
|---|---|---|
| DSW ++ | speaker-focused scores | CE |
| GBDT ++ | speaker-focused features | CE, CR, and PMI |
| FTLM ++ | speaker embedding | pre-trained LM |
Dialogue Structure
Specifically, we observe a 1.4% drop in accuracy if we set the target speaker s_Q to * for all questions when applying DSW ++. We observe a similar performance drop when we remove speaker-focused features from GBDT ++. In addition, removing speaker embeddings from FTLM ++ leads to a 1.7% drop in accuracy (in this case, the model becomes the original FTLM). These results consistently indicate the usefulness of dialogue structure for dialogue understanding.
General World Knowledge
We also investigate the effects of general world knowledge. The accuracy of DSW ++ drops by 4.7% if we remove ConceptNet embeddings (CE) by deleting the last term of Expression (8) in Section 4.2. Additionally, the accuracy of GBDT ++ drops by 6.2% if we remove all the general world knowledge features (i.e., ConceptNet embeddings/relations and PMI), leading to prediction failures on questions such as “What do we learn about the man?” whose correct answer option “He is health-conscious.” is not explicitly mentioned in the source dialogue “M: We had better start to eat onions frequently, Linda. W: But you hate onions, don’t you? M: Until I learned from a report from today’s paper that they protect people from flu and colds. After all, compared with health, taste is not so important.” Moreover, if we train FTLM ++ with randomly initialized transformer weights instead of weights pre-trained on the external corpus, the accuracy drops dramatically to 36.2%, which is only slightly better than a random baseline.
5.4 Error Analysis
Impact of Longer Turns
The number of dialogue turns has a significant impact on the performance of FTLM ++. As shown in Figure 2, its performance peaks when the number of turns ranges from 0 to 10, and it suffers severe performance drops when the given dialogue contains more turns. Both DSW ++ (56.8%) and GBDT ++ (57.4%) outperform FTLM ++ (55.7%) when the number of turns ranges from 10 to 48. To deal with lengthy context, it may be helpful to first identify relevant sentences based on a question and its associated answer options rather than using the entire dialogue as input.
Impact of Confusing Distractors
For 54.5% of questions on the development set, the fuzzy matching score (Sikes, 2007) of at least one distractor answer option against the dialogue is higher than that of the correct answer option. Among the questions that all models (i.e., DSW ++, GBDT ++, and FTLM ++) fail to answer correctly, 73.0% contain at least one such confusing distractor answer option. The causes of this kind of error can be roughly divided into two categories. First, the distractor is wrongly associated with the target speaker(s) mentioned in the question (e.g., answer options A and C in D2-Q3 in Table 3). Second, although the claim in the distractor is supported by the dialogue, it is irrelevant to the question (e.g., D1-Q1-B in Table 1). A promising direction for addressing this problem could be the construction of speaker-focused event chains (Chambers and Jurafsky, 2008) and advanced dialogue-specific coreference resolution systems for more reliable evidence context detection in a dialogue.
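A minimal sketch of this distractor check follows; difflib's sequence-matching ratio is used here as a stand-in for the fuzzy matching score of Sikes (2007), which is an assumption.

```python
from difflib import SequenceMatcher

def fuzzy(a, b):
    """Similarity in [0, 1]; a stand-in for the fuzzy matching score."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def has_confusing_distractor(dialogue_text, options, correct_idx):
    """True if some distractor matches the dialogue better than the correct option."""
    scores = [fuzzy(o, dialogue_text) for o in options]
    return any(s > scores[correct_idx] for i, s in enumerate(scores) if i != correct_idx)
```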
Impact of Question Types
We further report the performance of the best single model FTLM ++ and the GBDT ++ baseline on the categories defined in Section 3.2 (Table 11). Not surprisingly, both models perform worse than random guessing on arithmetic problems. Although most of these problems can be solved by a single linear equation, it is still difficult to apply recent neural math word problem solvers (Huang et al., 2018; Wang et al., 2018a) due to the informal dialogue-based problem descriptions and the requirement of commonsense inference. For example, consider the dialogue "W: The plane arrives at 10:50. It is already 10:40 now. Be quick! M: Relax. Your watch must be fast. There are still twenty minutes left." We need prior knowledge to infer that the watch showing 10:40 is not accurate; instead, the arrival time 10:50 and the interval "twenty minutes left" should be used together to answer the question "What time is it now?" (i.e., 10:30).
| Question Type | FTLM ++ | GBDT ++ |
|---|---|---|
| Matching | 57.0 | 68.1 |
| Reasoning | 56.8 | 49.4 |
| Summary | 73.6 | 47.1 |
| Logic | 55.0 | 49.7 |
| Arithmetic | 30.2 | 24.5 |
| Commonsense | 53.4 | 41.7 |
| Single sentence | 56.5 | 63.3 |
| Multiple sentences | 56.9 | 49.5 |
Results show that GBDT ++ is superior to the fine-tuned language model on questions in the matching category (68.1% vs. 57.0%), whereas the latter model is more capable of answering implicit questions (e.g., those in the summary, logic, and commonsense categories), which require aggregating information from multiple sentences, understanding the entire dialogue, or using world knowledge. It might therefore be useful to leverage the strengths of individual models to solve different types of questions.
6 Conclusion and Future Work
We present DREAM, the first multiple-choice dialogue-based reading comprehension data set from English language examinations. Besides the multi-turn multi-party dialogue context, 85% of questions require multiple-sentence reasoning, and 34% of questions also require commonsense knowledge, making this task very challenging. We apply several popular reading comprehension models and find that surface-level information is insufficient. We incorporate general world knowledge and dialogue structure into rule-based and machine learning methods and show the effectiveness of these factors, suggesting a promising direction for dialogue-based reading comprehension. For future work, we are interested in problem generation for dialogues and in investigating whether pre-training a deep language model such as FTLM on large-scale dialogues from movies and TV shows, instead of the BookCorpus data set (Zhu et al., 2015) used in previous work (Radford et al., 2018), will lead to further gains.
Acknowledgments
We would like to thank the editors and anonymous reviewers for their helpful feedback. We also thank Hai Wang from Toyota Technological Institute at Chicago for useful discussions and valuable comments.
Notes
We list all the Web sites used for data collection in the released data set.
We use the list of stop words from NLTK (Bird and Loper, 2004).
References
Author notes
This work was done when K. S. was an intern at the Tencent AI Lab, Bellevue, WA.