DREAM: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension

We present DREAM, the first dialogue-based multiple-choice reading comprehension dataset. Collected from English-as-a-foreign-language examinations designed by human experts to evaluate the comprehension level of Chinese learners of English, our dataset contains 10,197 multiple-choice questions for 6,444 dialogues. In contrast to existing reading comprehension datasets, DREAM is the first to focus on in-depth multi-turn multi-party dialogue understanding. DREAM is likely to present significant challenges for existing reading comprehension systems: 84% of answers are non-extractive, 85% of questions require reasoning beyond a single sentence, and 34% of questions also involve commonsense knowledge. We apply several popular neural reading comprehension models that primarily exploit surface information within the text and find them to, at best, just barely outperform a rule-based approach. We next investigate the effects of incorporating dialogue structure and different kinds of general world knowledge into both rule-based and (neural and non-neural) machine learning-based reading comprehension models. Experimental results on the DREAM dataset show the effectiveness of dialogue structure and general world knowledge. DREAM will be available at https://dataset.org/dream/.


Introduction
Recently, a significant amount of research has focused on the construction of large-scale multiple-choice reading comprehension data sets.*

* This work was done when K. S. was an intern at the Tencent AI Lab, Bellevue, WA.
With the goal of advancing research in machine reading comprehension and facilitating dialogue understanding, we construct and present DREAM -the first multiple-choice Dialoguebased REAding comprehension exaMination data set. We collect 10,197 questions for 6,444 multiturn multi-party dialogues from English language exams, which are carefully designed by educational experts (e.g., English teachers) to assess the comprehension level of Chinese learners of English. Each question is associated with three answer options, exactly one of which is correct. (See Table 1 for an example.) DREAM covers a variety of topics and scenarios in daily life such as conversations on the street, on the phone, in a classroom or library, at the airport or the office or a shop (Section 3).
Based on our analysis of DREAM, we argue that dialogue-based reading comprehension is at least as difficult as its existing non-conversational counterparts. In particular, answering 34% of DREAM questions requires unspoken commonsense knowledge (e.g., scene information). This might be due to the nature of dialogues: for efficient oral communication, people rarely state obvious world knowledge explicitly (Forbes and Choi, 2017)

such as ''Christmas Day is celebrated on December 25th.''

Dialogue 1 (D1)
W: Tom, look at your shoes. How dirty they are! You must clean them.
M: Oh, mum, I just cleaned them yesterday.
W: They are dirty now. You must clean them again.
M: I do not want to clean them today. Even if I clean them today, they will get dirty again tomorrow.
W: All right, then.
M: Mum, give me something to eat, please.
W: You had your breakfast in the morning, Tom, and you had lunch at school.
M: I am hungry again.
W: Oh, hungry? But if I give you something to eat today, you will be hungry again tomorrow.
Q1 Why did the woman say that she wouldn't give him anything to eat?
A. Because his mother wants to correct his bad habit.
B. Because he had lunch at school.
C. Because his mother wants to leave him hungry.
Table 1: An example problem (D1, Q1) in DREAM.

Understanding the social implications of an utterance as well as inferring a speaker's intentions is also regularly required for answering dialogue-based questions. The dialogue content in Table 1, for example, is itself insufficient for readers to recognize the intention of the female speaker (W) in the first question (Q1). However, world knowledge is rarely considered in state-of-the-art reading comprehension models (Tay et al., 2018; Wang et al., 2018b).
Moreover, dialogue-based questions can cover information imparted across multiple turns involving multiple speakers. In DREAM, approximately 85% of questions can only be answered by considering information from multiple sentences. For example, to answer Q1 in Table 3 later in the paper regarding the date of birth of the male speaker (M), the supporting sentences (in bold) include ''You know, tomorrow is Christmas Day'' from the female speaker and ''. . . I am more than excited about my birthday, which will come in two days'' from the male speaker. Compared with ''multiple-sentence questions'' in traditional reading comprehension data sets, DREAM further requires an understanding of the turn-based structure of dialogue, for example, aligning utterances with their corresponding speakers.
As only 16% of correct answer options are text spans from the source documents, we primarily explore rule-based methods and state-of-the-art neural models designed for multiple-choice reading comprehension (Section 4). We first find that neural models designed for non-dialogue-based reading comprehension (Dhingra et al., 2017; Wang et al., 2018b) do not fare well: the highest achieved accuracy is 45.5%, only slightly better than the accuracy (44.6%) of a simple lexical baseline (Richardson et al., 2013). For the most part, these models fundamentally exploit only surface-level information from the source documents. Considering the above-mentioned challenges, however, we hypothesize that incorporating general world knowledge and aspects of the dialogue structure would allow a better understanding of the dialogues. As a result, we modify our baseline systems to include (1) general world knowledge in the form of ConceptNet relations (Speer et al., 2017) and a pre-trained language model (Radford et al., 2018), and (2) speaker information for each utterance. Experiments show the effectiveness of these factors on the lexical baselines as well as on neural and non-neural machine learning approaches: we obtain up to an 11.9% absolute gain in accuracy over the best performance achieved by the state-of-the-art reading comprehension model (Wang et al., 2018b), which mainly relies on explicit surface-level information in the text (Section 5).
Finally, we see a significant gap between the best automated approach (59.5%) and human ceiling performance (98.6%) on the DREAM data set. This provides further evidence that dialogue-based reading comprehension is a very challenging task. We hope that it also inspires the research community to develop methods for the dialogue-based reading comprehension task.

Related Work
We divide reading comprehension data sets into three categories based on the types of answers: extractive, abstractive, and multiple choice.

Extractive and Abstractive Data Sets
In recent years, we have seen increased interest in the construction of large-scale cloze/span-based reading comprehension data sets (Hermann et al., 2015; Hill et al., 2016; Onishi et al., 2016; Rajpurkar et al., 2016; Bajgar et al., 2016; Nguyen et al., 2016; Trischler et al., 2017; Joshi et al., 2017). We regard them as extractive since candidate answers are usually short spans from source documents. State-of-the-art neural models with attention mechanisms already achieve very high performance based on local lexical information. Recently, researchers have constructed spoken span-based data sets (Li et al., 2018) by applying text-to-speech technologies or recruiting human speakers, building on formal written document-based data sets such as SQuAD (Rajpurkar et al., 2016). Some span-based conversation data sets are constructed from a relatively small number of dialogues from television shows (Chen and Choi, 2016; Ma et al., 2018).

Considering the limitations of extractive data sets, answers in abstractive data sets such as MS MARCO (Nguyen et al., 2016), SearchQA (Dunn et al., 2017), and NarrativeQA (Kočiskỳ et al., 2018) are human-crowdsourced based on source documents or summaries. Concurrently, there is growing interest in conversational reading comprehension such as CoQA (Reddy et al., 2018). Because annotators tend to copy spans as answers (Reddy et al., 2018), the majority of answers in these data sets are still extractive (Table 2). Compared with the data sets mentioned above, most of the correct answer options (83.7%) in DREAM are free-form text.

Table 2: Distribution of answer (or correct answer option) types in three kinds of reading comprehension data sets. Statistics of other data sets come from Reddy et al. (2018), Kočiskỳ et al. (2018), and Lai et al. (2017).

Multiple-Choice Data Sets
We primarily discuss the multiple-choice data sets, in which answer options are not restricted to extractive text spans in the given document. Instead, most of the correct answer options are abstractive (Table 2). Multiple-choice data sets involve extensive human involvement for problem generation during crowdsourcing (i.e., questions, correct answer option, and distractors). Besides surface matching, a significant portion of questions require multiple-sentence reasoning and external knowledge (Richardson et al., 2013;Mostafazadeh et al., 2016;Khashabi et al., 2018;Ostermann et al., 2018).
Besides crowdsourcing, some data sets are collected from examinations designed by educational experts (Penas et al., 2014; Shibuki et al., 2014; Tseng et al., 2016; Clark et al., 2016; Lai et al., 2017; Mihaylov et al., 2018), which aim to test human examinees. There are various types of complicated questions such as math word problems, summarization, logical reasoning, and sentiment analysis. Because these questions are usually easy to grade, we can adopt more objective evaluation criteria such as accuracy. Besides, questions from examinations are generally clean and of high quality. Therefore, the human performance ceiling on this kind of data set is much higher (e.g., 94.5% on RACE [Lai et al., 2017] and 98.6% on DREAM in accuracy) than that of data sets built by crowdsourcing.
In comparison, we present the first multiple-choice dialogue-based data set from examinations, which contains a large percentage of questions that require multiple-sentence inference. To the best of our knowledge, DREAM also contains the largest number of questions involving commonsense reasoning among examination data sets.

Data
In this section, we describe how we construct DREAM (Section 3.1) and provide a detailed analysis of this data set (Section 3.2).

Collection Methodology
We collect dialogue-based comprehension problems from a variety of English language exams designed for learners in high schools and colleges (for individuals aged 12-22 years). All the problems in DREAM are freely accessible online for public use. Each problem consists of a dialogue and a series of multiple-choice questions. To ensure every question is associated with exactly three answer options, we randomly drop wrong answer option(s) for questions with more than three options. We remove duplicate problems and randomly split the data at the problem level, with 60% train, 20% development, and 20% test.
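The problem-level split described above can be sketched as follows (the function name and fixed seed are our illustrative choices, not part of any released DREAM code):

```python
import random

def split_problems(problems, seed=0):
    """Randomly split at the problem level into 60% train / 20% dev / 20% test."""
    probs = list(problems)
    random.Random(seed).shuffle(probs)
    n = len(probs)
    n_train, n_dev = int(0.6 * n), int(0.2 * n)
    return (probs[:n_train],
            probs[n_train:n_train + n_dev],
            probs[n_train + n_dev:])
```

Splitting at the problem level (rather than the question level) keeps all questions about one dialogue in the same partition, avoiding dialogue overlap between train and test.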

Data Analysis
We summarize the statistics of DREAM in Table 4 and the data split in Table 5. Compared with existing data sets built from formal written texts, the vocabulary size is relatively small, since spoken English by its nature makes greater use of high-frequency words and needs a smaller vocabulary for efficient real-time communication (Nation, 2006).
We categorize questions into two main categories according to the types of knowledge required to answer them: matching and reasoning.
• Matching A question is entailed or paraphrased by exactly one sentence in a dialogue, and the answer can be extracted from that same sentence. For example, we can easily verify the correctness of the question-answer pair (''What kind of room does the man want to rent?'', ''A two-bedroom apartment.'') based on the sentence ''M: I'm interested in renting a two-bedroom apartment.'' This category is further divided into two subcategories, word matching and paraphrasing, in previous work (Trischler et al., 2017).
• Reasoning Questions that cannot be answered by the surface meaning of a single sentence belong to this category. We further define four subcategories as follows.
- Summary Answering this kind of question requires the whole picture of a dialogue, such as the topic of the dialogue and the relation between speakers (e.g., D2-Q3 in Table 3). Under this category, questions such as ''What are the two speakers talking about?'' and ''What are the speakers probably doing?'' are frequently asked.
- Logic We require logical reasoning to answer questions in this category: we usually need to identify logically implied relations among multiple sentences in a dialogue. To reduce ambiguity during annotation, we regard a question as a logic question if it can only be solved by considering the content of multiple sentences and does not belong to the summary subcategory (which involves all the sentences in a dialogue). Following this definition, both D2-Q1 and D2-Q2 in Table 3 belong to this category.
- Arithmetic Inferring the answer requires arithmetic knowledge (e.g., D2-Q1 in Table 3 requires 25 − 1 + 2 = 26).
- Commonsense To answer questions in this subcategory, we require external commonsense knowledge that cannot be obtained from the dialogue, in addition to the textual information within it. For instance, all questions in Table 3 fall under this category. D2-Q1 and D2-Q2 in Table 3 belong to both logic and commonsense since they require multiple sentences as well as commonsense knowledge for question answering. Multiple types of commonsense knowledge exist in DREAM, such as the well-known properties of a highly recognizable entity (e.g., D2-Q1 in Table 3) and the prominent relationship between two speakers (e.g., D2-Q3 in Table 3). We refer readers to LoBue and Yates (2011) for detailed definitions.

Table 6 shows the question type distribution labeled by two human annotators on 25% of the questions randomly sampled from the development and test sets. Besides the previously defined question categories, we also report the percentage of questions that require reasoning over multiple sentences (i.e., summary or logic questions) and the percentage of questions that require surface-level understanding or commonsense/math knowledge based on the content of a single sentence. As a question can belong to multiple reasoning subcategories, the percentages of the reasoning subcategories do not sum to the percentage of reasoning. The Cohen's kappa coefficient is 0.67 on the development set and 0.68 on the test set.
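The inter-annotator agreement statistic above can be computed with a short stdlib function (a sketch; the label values in the usage are illustrative):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' category labels on the same items."""
    n = len(labels_a)
    # observed agreement
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # chance agreement from each annotator's marginal label distribution
    ca, cb = Counter(labels_a), Counter(labels_b)
    pe = sum(ca[c] * cb[c] for c in set(labels_a) | set(labels_b)) / (n * n)
    return (po - pe) / (1 - pe)
```

A kappa around 0.67-0.68, as reported above, is conventionally read as substantial agreement.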
Dialogues in DREAM are generally clean and mostly error-free because they are carefully designed by educational experts. However, it is not guaranteed that each dialogue is written or proofread by a native speaker. Besides, dialogues tend to be more proper and less informal for exam purposes. To roughly estimate the quality of dialogues in DREAM and the differences between these dialogues and more casual ones in movies or television shows, we run a proofreading tool, Grammarly, on all the dialogues from the annotated 25% of instances of the development set and on the same amount (20.7k tokens) of dialogues from Friends, a famous American television show whose transcripts are commonly used for dialogue understanding (Chen and Choi, 2016; Ma et al., 2018). As shown in Table 7, the DREAM dialogues contain fewer spelling mistakes, and their overall score is slightly higher than that of the dialogues in Friends.
Based on the evaluated instances, articles and verb forms are the two most frequent grammar error categories (10 and 8, respectively, out of 23) in DREAM. Besides, the language tends to be less precise in DREAM, indicated by the number of vocabulary suggestions. For example, experts tend to use expressions such as ''really hot,'' ''really beautiful,'' ''very bad,'' and ''very important'' rather than more appropriate yet more advanced adjectives that might hinder reading comprehension of language learners with smaller vocabularies. According to the explanations provided by the tool, the readability scores for both data sets fall into the same category ''Your text is very simple and easy to read, likely to be understood by an average 5th-grader (age 10).''

Approaches
We formally introduce the dialogue-based reading comprehension task and notations in Section 4.1. To investigate the effects of different kinds of general world knowledge and dialogue structure, we incorporate them into rule-based approaches (Section 4.2) as well as non-neural (Section 4.3) and neural (Section 4.4) machine learning approaches. We describe in detail preprocessing and training in Section 4.5.

Problem Formulation and Notations
We start with a formal definition of the dialogue-based multiple-choice reading comprehension task. An n-turn dialogue D is defined as D = {s_1: t_1, s_2: t_2, . . . , s_n: t_n}, where s_i represents the speaker ID (e.g., ''M'' and ''W''), and t_i represents the text of the i-th turn. Let Q denote the text of a question, and O_{1..3} denote the text of its three answer options. The task is to choose the correct answer option from O_{1..3} associated with question Q given dialogue D. In this paper, we regard this task as a three-class classification problem, each class corresponding to an answer option.
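As a concrete illustration, the task instance can be represented with a toy data structure like the following (class and helper names are ours, not part of any released DREAM code):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Problem:
    # dialogue D as ordered (speaker_id, turn_text) pairs
    dialogue: List[Tuple[str, str]]
    question: str
    options: List[str]  # exactly three answer options O_1..3
    label: int          # index of the correct option, in {0, 1, 2}

def turns_of(problem: Problem, speaker: str) -> List[str]:
    """Return D_s: the turns spoken by `speaker` ('*' selects all turns)."""
    return [t for s, t in problem.dialogue if speaker in ("*", s)]
```

Framing the task as three-class classification means a model only needs to score each (dialogue, question, option) triple and pick the argmax.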
For convenience, we define the following notation, which will be referred to in the rest of this paper. Let D_s denote the turns spoken by speaker s; in particular, s = * denotes all the speakers. W_{D_s} and W_{O_i} denote the ordered sets of the running words (excluding punctuation marks) in D_s and O_i, respectively. Questions designed for dialogue-based reading comprehension often focus on a particular speaker. If there is exactly one speaker mentioned in a question, we use s_Q to denote this target speaker; otherwise, s_Q = *. For example, given the dialogue in Table 3, s_Q = ''M'' for Questions 1 and 2, and s_Q = * for Question 3.

Rule-Based Approaches
We first attempt to incorporate dialogue structure information into the sliding window (SW) approach, a rule-based method developed by Richardson et al. (2013). This approach matches a bag of words constructed from a question Q and one of its answer options O_i against a given document, and calculates a TF-IDF style matching score for each answer option.
Let D̂_s, Q̂, and Ô_i be the unordered sets of distinct words (excluding punctuation marks) in D_s, Q, and O_i, respectively. Instead of only regarding dialogue D as a non-conversational text snippet, we also pay special attention to the context that is relevant to the target speaker mentioned in the question. Therefore, given a target speaker s_Q, we propose to compute a speaker-focused sliding window score for each answer option O_i by matching a bag of words constructed from Q and O_i with D_{s_Q} (i.e., the turns spoken by s_Q). Given speaker s, we formally define the sliding window score sw of O_i as:

sw_i^s = max_j Σ_{k=1}^{|Q̂ ∪ Ô_i|} 1[W_{D_s, j+k} ∈ Q̂ ∪ Ô_i] · ic_s(W_{D_s, j+k})    (1)

where ic_s(w) = log(1 + 1/C_s(w)) and C_s(w) is the number of occurrences of word w in W_{D_s}. Based on these definitions, we can regard sw_i^* as the general score defined in the original sliding window approach, while sw_i^{s_Q} represents the speaker-focused sliding window score considering the target speaker s_Q.
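A minimal Python sketch of this scoring function follows (assuming the inverse-count weighting ic_s(w) = log(1 + 1/C_s(w)) of Richardson et al. (2013); the function name is ours and this is not the authors' code):

```python
import math
from collections import Counter

def sliding_window_score(doc_words, question_words, option_words):
    """Sliding window score over the ordered word list doc_words (W_{D_s}).

    Slides a window of size |Q-hat union O-hat_i| and keeps the best
    inverse-count-weighted overlap with the question+option bag of words.
    """
    target = set(question_words) | set(option_words)
    counts = Counter(doc_words)
    size = len(target)
    if size == 0 or not doc_words:
        return 0.0
    ic = lambda w: math.log(1.0 + 1.0 / counts[w])  # rarer words weigh more
    best = 0.0
    for j in range(len(doc_words)):
        window = doc_words[j:j + size]
        best = max(best, sum(ic(w) for w in window if w in target))
    return best
```

Passing only the target speaker's turns as `doc_words` yields the speaker-focused variant sw_i^{s_Q}; passing all turns yields sw_i^*.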
Because the sliding window score ignores long-range dependencies, Richardson et al. (2013) introduce a distance-based variation (DSW), in which a word-distance-based score is subtracted from the sliding window score to arrive at the final score. Similarly, we calculate the speaker-focused distance-based score given a (Q, O_i) pair and s_Q by counting the distance between the occurrence of a word in Q and a word in O_i within D_{s_Q}. More formally, given speaker s and a set of stop words³ U, the distance-based score d of O_i is defined as:

d_i^s = δ_i^s / (|W_{D_s}| − 1)    (2)

where δ_i^s is the minimum number of words between an occurrence of a question word and an answer option word in W_{D_s}, plus one. The formal definition of δ_i^s is as follows:

δ_i^s = |W_{D_s}| − 1, if no word of Q̂ or no word of Ô_i \ U occurs in D̂_s; otherwise, δ_i^s = min over q ∈ Q̂ ∩ D̂_s and o ∈ (Ô_i \ U) ∩ D̂_s of the minimum number of words between occurrences of q and o in W_{D_s}, plus one.    (3)
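The distance score can be sketched in Python as follows (the treatment of the no-occurrence case reflects our reading of Richardson et al. (2013); the function name is ours):

```python
def distance_score(doc_words, question_words, option_words, stopwords=frozenset()):
    """Normalized distance-based score d in [0, 1]; lower is better evidence."""
    q_set = set(question_words) - set(stopwords)
    o_set = set(option_words) - set(stopwords)
    q_pos = [i for i, w in enumerate(doc_words) if w in q_set]
    o_pos = [i for i, w in enumerate(doc_words) if w in o_set]
    n = len(doc_words)
    if not q_pos or not o_pos or n <= 1:
        return 1.0  # no co-occurrence evidence: maximal penalty
    # |i - j| equals "number of words between the occurrences, plus one"
    delta = min(abs(i - j) for i in q_pos for j in o_pos)
    return min(delta, n - 1) / (n - 1)
```

As with the sliding window score, restricting `doc_words` to the target speaker's turns yields the speaker-focused variant.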
Based on these definitions, we can regard d_i^* as the distance-based score defined in the original sliding window approach, while d_i^{s_Q} represents the speaker-focused distance-based score considering speaker s_Q. In addition, the final distance-based sliding window score of O_i (Richardson et al., 2013) can be formulated as:

dsw_i^* = sw_i^* − d_i^*    (4)

Expression (4) only focuses on the general (or speaker-independent) information (i.e., sw_i^* and d_i^*); we can capture general and speaker-focused information (i.e., sw_i^{s_Q} and d_i^{s_Q}) simultaneously by averaging them:

dsw_i = ((sw_i^* − d_i^*) + (sw_i^{s_Q} − d_i^{s_Q})) / 2    (5)

Since a large percentage of questions cannot be solved by word-level matching, we also attempt to incorporate general world knowledge into our rule-based method. We calculate cs_i^s, the maximum cosine similarity between O_i and consecutive words of the same length in W_{D_s}, as:

cs_i^s = max_j cos( avg(W_{O_i}), avg(W_{D_s, j}, . . . , W_{D_s, j+|W_{O_i}|−1}) )    (6)

where avg(x) is obtained by averaging the embeddings of the constituent words in x. Here we use ConceptNet embeddings (Speer et al., 2017) because they leverage a knowledge graph that focuses on general world knowledge. Following Expression (5), we capture both general and speaker-focused semantic information within a dialogue as follows:

cs_i = (cs_i^* + cs_i^{s_Q}) / 2    (7)

³ We use the list of stop words from NLTK (Bird and Loper, 2004).
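The windowed embedding-similarity score can be sketched as follows (the `emb` word-to-vector dictionary stands in for ConceptNet embeddings; the function name is ours):

```python
import math

def max_window_cosine(doc_words, option_words, emb):
    """Max cosine similarity between the averaged option embedding and the
    averaged embedding of any same-length window of document words."""
    def avg(words):
        vecs = [emb[w] for w in words if w in emb]
        if not vecs:
            return None
        dim = len(vecs[0])
        return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]

    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    o = avg(option_words)
    if o is None:
        return 0.0
    k = len(option_words)
    best = 0.0
    for j in range(max(1, len(doc_words) - k + 1)):
        w = avg(doc_words[j:j + k])
        if w is not None:
            best = max(best, cos(o, w))
    return best
```

Because the comparison is in embedding space rather than over exact strings, this score can credit options that paraphrase the dialogue.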
To make the final answer option selection, our rule-based method combines Expressions (5) and (7):

arg max_i (dsw_i + cs_i)    (8)

Feature-Based Classifier
To explore what features are effective for dialogue understanding, we first consider a gradient boosting decision tree (GBDT) classifier. Besides the conventional bag-of-words features, we primarily focus on features related to general world knowledge and dialogue structure.
• Bag of words of each answer option.
• Features inspired by rule-based approaches: We adopt the features introduced in Section 4.2, including speaker-independent scores (i.e., sw_i^* and d_i^*) and speaker-focused scores (i.e., sw_i^{s_Q} and d_i^{s_Q}).
• Turn position (Amgoud et al., 2007): We assume the facts or opinions expressed near the end of a dialogue tend to be more critical for us to answer a question.
• Pointwise mutual information (PMI): C_1(w) denotes the frequency of word w in an external corpus (we use Reddit posts [Tan and Lee, 2015]), and C_2(w_1, w_2) represents the co-occurrence frequency of words w_1 and w_2 within a distance < K in the external corpus. We use PMI to evaluate the relatedness between the content of an answer option and the target-speaker-focused context based on co-occurrences of words in external corpora, inspired by previous studies on narrative event chains (Chambers and Jurafsky, 2008).
• ConceptNet relations (CR): cr_{1..3, 1..|R|}. R = {r_1, r_2, . . .} is the set of ConceptNet relation types (e.g., ''CapableOf'' and ''PartOf''). cr_{i,j} is the number of relation triples (w_1, r_j, w_2) that appear in ConceptNet (Speer et al., 2017), where w_1 represents a word in answer option O_i, w_2 represents a word in D, and the relation type r_j ∈ R. Similar to the motivation for using PMI, we use CR to capture the association between an answer option and the source dialogue based on raw co-occurrence counts in the commonsense knowledge base.
• ConceptNet embeddings (CE): Besides the lexical similarity based on string matching, we also calculate cs_{1..3}^* and cs_{1..3}^{s_Q}, where cs_i^* and cs_i^{s_Q} represent the maximum cosine similarity between O_i and consecutive words of the same length in D and D_{s_Q}, respectively (Expression (6) in Section 4.2). We use ConceptNet embeddings (Speer et al., 2017) because they leverage the general world knowledge graph.
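As an illustration of the PMI feature above, the following stdlib sketch estimates corpus-based PMI between option words and speaker-focused context words (the counting scheme is our simplification of the setup described in the text, not the authors' code):

```python
import math
from collections import Counter

def pmi_features(option_words, context_words, corpus_sentences, K=10):
    """Average PMI between option words and context words, estimated from an
    external corpus; pairs are counted within a window of K words."""
    c1 = Counter()                       # C_1: unigram counts
    c2 = Counter()                       # C_2: windowed co-occurrence counts
    for sent in corpus_sentences:
        c1.update(sent)
        for i, w1 in enumerate(sent):
            for w2 in sent[i + 1:i + K]:
                c2[frozenset((w1, w2))] += 1
    total = sum(c1.values()) or 1

    def pmi(w1, w2):
        joint = c2[frozenset((w1, w2))]
        if not joint or not c1[w1] or not c1[w2]:
            return 0.0
        return math.log(joint * total / (c1[w1] * c1[w2]))

    pairs = [(o, c) for o in option_words for c in context_words]
    return sum(pmi(o, c) for o, c in pairs) / len(pairs) if pairs else 0.0
```

A higher average PMI suggests the option's content plausibly co-occurs with the target speaker's context in ordinary language use.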

End-To-End Neural Network
Our end-to-end neural model is based on a generative pre-trained language model (LM). We follow the framework of finetuned transformer LM (FTLM) (Radford et al., 2018) and make modifications for dialogue-based reading comprehension.
The training procedure of FTLM consists of two stages. The first stage is to learn a high-capacity language model on a large-scale unsupervised corpus of tokens U = {u_1, . . . , u_n} by maximizing the following likelihood:

L_1(U) = Σ_i log P(u_i | u_{i−k}, . . . , u_{i−1}; Θ)

where k is the context window size, and the conditional probability P is modeled by a multi-layer transformer decoder with parameters Θ. In the second stage, the model is adapted to a labeled data set C, where each instance consists of a sequence of input tokens x_1, . . . , x_m with a label y, by maximizing:

L_2(C) = Σ_{(x, y)} [ log P(y | x_1, . . . , x_m) + λ L_1(C) ]

where P(y | x_1, . . . , x_m) is obtained by a linear + softmax layer over the final transformer block's activation, and λ is the weight of the language modeling objective. For multiple-choice reading comprehension, the input tokens x_1, . . . , x_m come from the concatenation of a start token, the dialogue, the question, a delimiter token, an answer option, and an end token; y indicates whether the answer option is correct. We refer readers to Radford et al. (2018) for more details.
Because the original FTLM framework already leverages rich linguistic information from a large unlabeled corpus, which can be regarded as a type of tacit general world knowledge, we investigate whether additional dialogue structure can further improve this strong baseline. We propose a speaker embedding to better capture dialogue structure. Specifically, in the original framework, given an input context (u_{−k}, . . . , u_{−1}) of the transformer, the encoding of u_{−i} is we(u_{−i}) + pe(i), where we(·) is the word embedding and pe(·) is the position embedding. When adapting Θ to DREAM, we change the encoding to we(u_{−i}) + pe(i) + se(u_{−i}, s_Q), where the speaker embedding se(u_{−i}, s_Q) is (a) 0 if the token u_{−i} is not in the dialogue (i.e., it is either a start/end/delimiter token or a token in the question/option); (b) e_target if the token is spoken by s_Q; (c) e_rest if the token is in the dialogue but not spoken by s_Q. e_target and e_rest are trainable and initialized randomly. We show the overall framework in Figure 1.
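The per-token encoding rule can be illustrated with toy list-based vectors (real models use learned tensors; the function and argument names are ours):

```python
def token_encoding(token_idx, token, we, pe, in_dialogue, spoken_by_target,
                   e_target, e_rest):
    """Encode a token as we(u) + pe(i) + se(u, s_Q), following the three-way
    speaker-embedding rule: zero outside the dialogue, e_target for the
    target speaker's tokens, e_rest for other dialogue tokens."""
    if not in_dialogue:
        se = [0.0] * len(e_target)  # start/end/delimiter or question/option token
    elif spoken_by_target:
        se = e_target               # token spoken by the target speaker s_Q
    else:
        se = e_rest                 # dialogue token spoken by another speaker
    return [w + p + s for w, p, s in zip(we[token], pe[token_idx], se)]
```

The addition of se(·) lets the transformer distinguish the target speaker's utterances from the rest of the dialogue without changing the model architecture.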

Preprocessing and Training Details
For all the models, we conduct coreference resolution to determine speaker mentions of s_Q based on simple heuristics. In particular, we map the three most common speaker abbreviations that appear in dialogues (i.e., ''M,'' ''W,'' and ''F'') to their eight most common corresponding mentions in questions (i.e., ''man,'' ''boy,'' ''he,'' and ''his'' for ''M''; ''woman,'' ''girl,'' ''she,'' and ''her'' for ''W'' and ''F''). We keep speaker abbreviations unchanged, since neither replacing them with their corresponding full forms nor removing them improves performance in our experiments. For the neural model described in Section 4.4, most of our parameter settings follow Radford et al. (2018). We adopt the same preprocessing procedure and use their publicly released language model, which is pre-trained on the BooksCorpus data set (Zhu et al., 2015). We set the batch size to 8, the language model weight λ to 2, and the maximum number of training epochs to 10.
For other models, we use the following preprocessing steps. We tokenize and lowercase the corpus, convert number words to numeric digits, normalize time expressions to 24-hour numeric form, and address negation by removing interrogative sentences that receive ''no'' as the reply. We use the gradient boosting classifier implemented in the scikit-learn toolkit (Pedregosa et al., 2011). We set the number of boosting iterations to 600 and keep the rest of the hyperparameters unchanged. The distance upper bound K for PMI is set to 10.
We perform several runs of machine learning models (Section 4.3 and Section 4.4) with randomness introduced by different random seeds and/or GPU non-determinism and select the model or models (for ensemble) that perform best on the development set.

Baselines
We implement several baselines, including rule-based methods and state-of-the-art neural models.
• Word Matching This strong baseline (Yih et al., 2013) selects the answer option that has the highest count of overlapping words with the given dialogue.
• Sliding Window We implement the sliding window approach (i.e., arg max_i sw_i^*) and its distance-based variation DSW (i.e., arg max_i (sw_i^* − d_i^*)).
• Stanford Attentive Reader This neural baseline builds question-specific document representations via attention, which are compared to each answer option (Lai et al., 2017).
• Gated-Attention Reader This baseline models multiplicative question-specific document representations based on a gated-attention mechanism (Dhingra et al., 2017), which are then compared to each answer option (Lai et al., 2017).
• Co-Matching This state-of-the-art multiplechoice reading comprehension model explicitly treats question and answer option as two sequences and jointly matches them against a given document (Wang et al., 2018b).
• Finetuned Transformer LM This is a general task-agnostic model introduced in Section 4.4, which achieves the best reported performance on several tasks requiring multi-sentence reasoning (Radford et al., 2018).

Baseline accuracy (%) on the development and test sets:
Sliding Window (SW) (Richardson et al., 2013) | 42.6 | 42.5
Distance-Based Sliding Window (DSW) (Richardson et al., 2013) | 44.4 | 44.6
Stanford Attentive Reader (SAR) | 40.2 | 39.8
Gated-Attention Reader (GAR) (Dhingra et al., 2017) | 40.5 | 41.3
Co-Matching (CO) (Wang et al., 2018b) | 45.6 | 45.5
Finetuned Transformer LM (FTLM) (Radford et al., 2018) | 55.9 | 55.5

We do not investigate other ways of leveraging pre-trained deep models, such as adding ELMo representations (Peters et al., 2018) as additional features to a neural model, since recent studies show that directly fine-tuning a pre-trained language model such as FTLM is significantly superior on multiple-choice reading comprehension tasks (Radford et al., 2018; Chen et al., 2019). We do not apply more recent extractive models such as AOA (Cui et al., 2017) and QANet (Yu et al., 2018) since they aim at precisely locating a span in a document. When adapted to solve questions with abstractive answer options, extractive models generally tend to perform less well (Dhingra et al., 2017; Lai et al., 2017).
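The simplest of these baselines, word matching, can be sketched in a few lines (a rough stand-in for the Yih et al. (2013) baseline, not its exact implementation):

```python
def word_matching(doc_words, options):
    """WM baseline: pick the option with the highest count of words that also
    appear in the dialogue; ties are broken by option order."""
    doc = set(doc_words)
    overlaps = [sum(1 for w in opt if w in doc) for opt in options]
    return max(range(len(options)), key=overlaps.__getitem__)
```

Its competitiveness with early neural readers on DREAM underlines how much of their signal is surface-level overlap.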

Results and Analysis
We report the performance of the baselines introduced in Section 5.1 and our proposed approaches in Table 8. We report the averaged accuracy of two annotators as the human performance. The proportion of valid questions (i.e., an unambiguous question with a unique correct answer option provided) that are manually checked by annotators on the annotated test and development sets is regarded as the human ceiling performance.
Surface matching is insufficient. Experimental results show that neural models that primarily exploit surface-level information (i.e., SAR, GAR, and CO) attain a performance level close to that of simple rule-based approaches (i.e., WM, SW, and DSW). The highest accuracy achieved by CO is 45.5%, a similar level of performance to the rule-based method DSW (44.6%).
It is helpful to incorporate general world knowledge and dialogue structure. We see a significant gain of 5.5% in accuracy when enhancing DSW with general world knowledge from ConceptNet embeddings and with speaker-focused information (Section 4.2). FTLM, which leverages rich external linguistic knowledge from thousands of books, already achieves a much higher accuracy (55.5%) than previous state-of-the-art machine comprehension models, indicating the effectiveness of general world knowledge. Experimental results show that our best single model FTLM++ significantly outperforms FTLM (p-value = 0.03), illustrating the usefulness of additional dialogue structure. Compared with the state-of-the-art neural reader Co-Matching, which primarily explores surface-level information (45.5%), the tacit general world knowledge (in the pre-trained language model) and dialogue structure in FTLM++ lead to an absolute gain of 11.9% in accuracy.
Ensembling different types of methods can bring further improvements. We use the majority vote strategy to obtain the ensemble model performance. Although GBDT++ (52.8%) itself does not outperform FTLM++, it can serve as a supplement to FTLM++ because the two leverage different types of general world knowledge and model architectures. We achieve the highest accuracy (59.5%) by ensembling one GBDT++ and three FTLM++ models.
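The majority vote combination can be sketched as follows (the tie-breaking rule, favoring the earliest-listed model, is our assumption):

```python
from collections import Counter

def majority_vote(predictions):
    """Ensemble by majority vote over per-model predicted option indices for
    one question; ties go to the earliest-listed model's answer."""
    counts = Counter(predictions)
    best = max(counts.values())
    for p in predictions:  # preserve model order on ties
        if counts[p] == best:
            return p
```

With one GBDT++ and three FTLM++ models, the FTLM++ copies dominate unless they disagree among themselves, in which case GBDT++ can tip the vote.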

Ablation Tests
We conduct ablation tests to evaluate the individual components of our proposed approaches (Table 9). In Table 10, we summarize the types of dialogue structure and general world knowledge involved in our approaches; one ablated variant performs only slightly better than a random baseline.

Error Analysis
Impact of Longer Turns

The number of dialogue turns has a significant impact on the performance of FTLM++. As shown in Figure 2, its performance peaks when the number of turns ranges from 0 to 10, and it suffers severe drops when the given dialogue contains more turns. Both DSW++ (56.8%) and GBDT++ (57.4%) outperform FTLM++ (55.7%) when the number of turns ranges from 10 to 48. To deal with lengthy context, it may help to first identify relevant sentences based on a question and its associated answer options rather than using the entire dialogue as input.

Impact of Confusing Distractors
For 54.5% of questions on the development set, the fuzzy matching score (Sikes, 2007) of at least one distractor answer option against the dialogue is higher than the score of the correct answer option. Among the questions that all models (i.e., DSW++, GBDT++, and FTLM++) fail to answer correctly, 73.0% contain at least one such confusing distractor answer option. The causes of this kind of error can be roughly divided into two categories. First, the distractor is wrongly associated with the target speaker(s) mentioned in the question (e.g., answer options A and C in D2-Q3 in Table 3). Second, although the claim in the distractor is supported by the dialogue, it is irrelevant to the question (e.g., D1-Q1-B in Table 1). A promising direction for solving this problem could be the construction of speaker-focused event chains (Chambers and Jurafsky, 2008) and advanced dialogue-specific coreference resolution systems for more reliable evidence context detection in a dialogue.
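This diagnostic can be approximated with the stdlib `difflib` matcher as a stand-in for the fuzzy matching score of Sikes (2007) (the function name and threshold-free formulation are ours):

```python
from difflib import SequenceMatcher

def has_confusing_distractor(dialogue_text, options, correct_idx):
    """Flag questions where some distractor matches the dialogue text more
    closely than the correct answer option does."""
    def score(opt):
        return SequenceMatcher(None, opt.lower(), dialogue_text.lower()).ratio()
    correct = score(options[correct_idx])
    return any(score(o) > correct
               for i, o in enumerate(options) if i != correct_idx)
```

Running such a check over a development set gives a quick estimate of how often high lexical overlap rewards the wrong option.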

Impact of Question Types
We further report the performance of the best single model FTLM++ and the GBDT++ baseline on the categories defined in Section 3.2 (Table 11). Not surprisingly, both models perform worse than random guessing on math problems. While most of the math problems can be solved by a single linear equation, it is still difficult to apply recent neural math word problem solvers (Huang et al., 2018; Wang et al., 2018a) to them. In contrast, the models perform better on question types (e.g., summary and commonsense) that require aggregation of information from multiple sentences, the understanding of the entire dialogue, or the utilization of world knowledge. Therefore, it might be useful to leverage the strengths of individual models to solve different types of questions.

Conclusion and Future Work
We present DREAM, the first multiple-choice dialogue-based reading comprehension data set from English language examinations. Besides the multi-turn multi-party dialogue context, 85% of questions require multiple-sentence reasoning, and 34% of questions also require commonsense knowledge, making this task very challenging. We apply several popular reading comprehension models and find that surface-level information is insufficient. We incorporate general world knowledge and dialogue structure into rule-based and machine learning methods and show the effectiveness of these factors, suggesting a promising direction for dialogue-based reading comprehension. For future work, we are interested in problem generation for dialogues and investigating whether it will lead to more gains to pre-train a deep language model such as FTLM over large-scale dialogues from movies and TV shows instead of the BookCorpus data set (Zhu et al., 2015) used by previous work (Radford et al., 2018).