Abstract
Recent advancements in open-domain question answering (ODQA), that is, finding answers from large open-domain corpora such as Wikipedia, have led to human-level performance on many datasets. However, progress in QA over book stories (Book QA) lags despite its similar task formulation to ODQA. This work provides a comprehensive and quantitative analysis of the difficulty of Book QA: (1) We benchmark Book QA research on the NarrativeQA dataset with extensive experiments using cutting-edge ODQA techniques. This quantifies the challenges Book QA poses and advances the published state of the art by a ∼7% absolute improvement on ROUGE-L. (2) We further analyze the detailed challenges in Book QA through human studies. Our findings indicate that event-centric questions dominate this task, exposing the inability of existing QA models to handle event-oriented scenarios.
1 Introduction
Recent Question-Answering (QA) models have achieved or even surpassed human performance on many challenging tasks, including single-passage QA and open-domain QA (ODQA). Nevertheless, understanding rich context beyond text pattern matching remains unsolved, especially answering questions about narrative elements by reading books. One example is NarrativeQA (Kočiskỳ et al., 2018) (Figure 1). Since its first release in 2017, there has been no significant improvement over the initial baselines. In this paper, we study this challenging Book QA task and shed light on its inherent difficulties.
Despite its similarity to standard ODQA tasks, in that both require finding evidence paragraphs from which to infer answers, Book QA poses unique challenges (Kočiskỳ et al., 2018): (1) The narrative writing style of book stories differs from the formal texts of Wikipedia and news, demanding deeper understanding; the flexible writing styles across genres and authors make this challenge more severe; (2) Passages depicting related book plots and characters share more semantic similarity than Wikipedia articles, which makes it harder to locate the correct evidence for a question; (3) The free-form nature of the answers requires the ability to summarize narrative plots; (4) The free-form answers make it hard to obtain fine-grained supervision at the passage or span level; and finally (5) Different paragraphs usually have logical relations among them.
To quantify the aforementioned challenges, we conduct a two-fold analysis to examine the gaps between Book QA and standard ODQA tasks. First, we benchmark Book QA performance on the NarrativeQA dataset, with methods created or adapted from the ideas of state-of-the-art ODQA methods (Wang et al., 2018a; Lin et al., 2018; Lee et al., 2019; Min et al., 2019; Guu et al., 2020; Karpukhin et al., 2020). We build a state-of-the-art Book QA system with a retrieve-and-read framework, which consists of a ranker for retrieving evidence and a reader (i.e., a QA model) that predicts answers given the evidence. For the ranker, we investigate different weakly supervised and unsupervised training methods to cope with the lack of passage-level supervision. For the reader, we fill in the missing study and comparison of pre-trained generative models for Book QA, such as GPT-2 (Radford et al., 2019) and BART (Lewis et al., 2019). We then investigate approaches to adapt to the book writing styles and to make use of more evidence paragraphs. As a result, our study yields a ∼7% absolute ROUGE-L improvement over the published state of the art.
Second, we conduct human studies to quantify the challenges in Book QA. To this end, we design a new question categorization schema based on the types of reading comprehension or reasoning skills required to provide the correct answers. Specifically, we first define basic semantic units, such as entities and event structures, in the questions and answers. A question's category is then determined by the types of units it involves and the relations between them. We annotate 1,000 questions accordingly and find that the statistics of NarrativeQA differ markedly from those of other QA datasets, mainly in the focus on event arguments and relations between events. We further decompose our system's performance over the question categories to quantify the detailed types of challenges.
In summary, our comprehensive study not only improves the state-of-the-art with careful utilization of recent ODQA advancements, but also reveals the unique challenges in Book QA with quantitative measurements.
2 Related Work
Open-Domain QA
ODQA aims at answering questions from large open-domain corpora (e.g., Wikipedia). Recent work typically adopts a ranker-reader framework (Chen et al., 2017). Success in this field mainly comes from improvements in the following directions: (1) distantly supervised training of neural ranker models (Wang et al., 2018a; Lin et al., 2018; Min et al., 2019; Cheng et al., 2020) to select relevant evidence passages for a question; (2) fine-tuning and improving pre-trained LMs, such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019), as the rankers and readers; (3) unsupervised adaptation of pre-trained LMs to the target QA tasks (Lee et al., 2019; Sun et al., 2019; Xiong et al., 2019a).
Book QA
Previous works (Kočiskỳ et al., 2018; Tay et al., 2019; Frermann, 2019) also adopt a ranker-reader pipeline. However, they have not fully investigated state-of-the-art ODQA techniques. First, NarrativeQA is a generative QA task by nature, yet the application of the latest pre-trained LMs for generation, such as BART, is not well studied. Second, the lack of fine-grained evidence supervision prevents earlier methods from training a neural ranking model, so they only use simple BM25-based retrievers (Robertson et al., 1995). An exception is Mou et al. (2020), who construct pseudo distant-supervision signals for ranker training. Another relevant work (Frermann, 2019) uses book summaries as an additional resource to train rankers. However, this deviates from the aim of the Book QA task, answering questions solely from books, since in general a book summary cannot answer all questions about the book. Our work is the first to investigate and compare improved training algorithms for rankers and readers in Book QA.
3 Task Setup
3.1 Task Definition and Dataset
Following Kočiskỳ et al. (2018), we define the Book QA task as finding the answer A to a question Q from a book, where each book contains a number of consecutive and logically related paragraphs. The number of paragraphs per book varies from a few hundred to several thousand.
All our experiments are conducted on the NarrativeQA dataset (Kočiskỳ et al., 2018). It has a collection of 783 books and 789 movie scripts (we use the term books to refer to both of them), each containing an average of 62K words. Additionally, each book has 30 question-answer pairs generated by human annotators in free-form natural language. Hence the exact answers are not guaranteed to appear in the books. NarrativeQA provides two different settings, the summary setting and the full-story setting. The former requires answering questions from book summaries from Wikipedia, and the latter requires answering questions from the original books, assuming that the summaries do not exist. Our Book QA task corresponds to the full-story setting, and we use both names interchangeably.
3.2 Baseline
Our baseline QA systems train different base reader models (detailed in Section 4.1) on top of a BM25 ranker. We also compare with competitive public Book QA systems as baselines from several sources (Kočiskỳ et al., 2018; Frermann, 2019; Tay et al., 2019; Mou et al., 2020) under the NarrativeQA full-story setting, as well as a concurrent work (Zemlyanskiy et al., 2021). As discussed in Section 2, Mou et al. (2020) train a ranker with distant supervision (DS), that is, the first analyzed ranker method (Figure 3); Frermann (2019) uses external supervision from the book summaries, which is considered unavailable by design of the Book QA task. Because the summaries are written by humans, that system can be viewed as benefiting from human comprehension of the books. Figure 2 lists the details of our compared systems.
3.3 Metrics
Following previous works (Kočiskỳ et al., 2018; Tay et al., 2019; Frermann, 2019), we use ROUGE-L (Lin, 2004) as the main metric for both evidence retrieval and question answering. For completeness, Appendix A provides results with the other metrics used in previous works, including BLEU-1/4 (Papineni et al., 2002), Meteor (Banerjee and Lavie, 2005), and the Exact Match (EM) and F1 scores commonly used in extractive QA.
4 Analysis Part I: Experimental Study
This section describes our efforts to apply or adapt the latest open-domain QA ideas to improve the Book QA ranker and reader models. Figure 3 summarizes the inspected approaches. The experimental results quantify the challenges in Book QA beyond open-domain QA.
4.1 QA Reader
Base Reader Models
We study the use of different pre-trained LMs for Book QA, including BART (Lewis et al., 2019), GPT-2 (Radford et al., 2019), T5 (Raffel et al., 2019), and BERT (Devlin et al., 2019). The first three are generative readers and can be directly trained with the free-form answers as supervision. Specifically, during training we take Q ⊕ [SEP] ⊕ E as input to generate the answer A, where E denotes the retrieved evidence, [SEP] is the special separation token, and ⊕ is the concatenation operator.
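A minimal sketch of this input/output format for a generative reader, assuming the HuggingFace transformers BART implementation; the example question, passages, and answer are illustrative placeholders, not taken from NarrativeQA.

```python
# Sketch: fine-tune BART to generate the free-form answer from Q [SEP] E.
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

question = "What does the protagonist inherit?"           # placeholder example
passages = ["He learned that his late uncle had left him the estate ...", "..."]
answer = "His uncle's estate"                              # placeholder answer

# Input: Q [SEP] E, where E concatenates the top-ranked evidence passages.
source = question + tokenizer.sep_token + tokenizer.sep_token.join(passages)
inputs = tokenizer(source, truncation=True, max_length=1024, return_tensors="pt")
labels = tokenizer(answer, return_tensors="pt").input_ids

# Standard seq2seq training loss with the free-form answer as the target.
loss = model(**inputs, labels=labels).loss
loss.backward()
```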
For the extractive reader (BERT), we predict the most likely answer span in the evidence E given the concatenation of the question Q and the evidence. Due to the generative nature of Book QA, the true answer may not have an exact match in the context. Therefore, we follow Mou et al. (2020) and take the span S that has the maximum ROUGE-L score with the ground truth A as the weak label, subject to A and S having the same length (i.e., |S| = |A|).
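A sketch of this weak span labeling under the stated constraint; since |S| = |A|, ROUGE-L reduces to the LCS ratio between the candidate span and the answer. The tokenization and helper names are our own.

```python
# Slide a window of the answer's length over the evidence tokens and keep the
# span with the highest ROUGE-L (here reduced to the LCS ratio, since |S| = |A|).
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def weak_span_label(context_tokens, answer_tokens):
    n = len(answer_tokens)
    best_start, best_score = 0, -1.0
    for start in range(len(context_tokens) - n + 1):
        span = context_tokens[start:start + n]
        score = lcs_len(span, answer_tokens) / n   # precision = recall = F1 here
        if score > best_score:
            best_start, best_score = start, score
    return best_start, best_start + n, best_score  # token-level start / end (exclusive)
```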
Method 1: Book Prereading
Inspired by the literature on the unsupervised adaptation of pre-trained LMs (Sun et al., 2019; Xiong et al., 2019a), we let the reader “preread” the training books through an additional pre-training step prior to fine-tuning on the QA task. This technique helps the reader better adapt to the narrative writing styles.
Specifically, we extract random passages from all training books to build a passage pool. For each training iteration, we mask random spans in each passage, following the setting in Lewis et al. (2019). The start positions of the spans are sampled from a uniform distribution without overlapping, and the length of each span is drawn from a Poisson distribution with λ = 3. Each span is then replaced by a single [mask] token regardless of its length, and we mask 15% of the total tokens in each passage. During the prereading stage, we use the masked passage as the encoder input and the raw passage as the decoder target, restoring the raw passage in an auto-regressive way.
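An illustrative sketch of this corruption step, assuming whitespace-tokenized passages; the resampling cap and the exact sampling order are our own simplifications.

```python
# Mask ~15% of tokens in non-overlapping spans with Poisson(lambda=3) lengths;
# each span collapses to a single [mask] token. The denoising target is the
# original passage.
import numpy as np

def mask_passage(tokens, mask_token="[mask]", mask_ratio=0.15, lam=3, rng=None):
    rng = rng or np.random.default_rng()
    budget = int(len(tokens) * mask_ratio)            # ~15% of tokens get masked
    covered, spans, attempts = set(), {}, 0
    while budget > 0 and attempts < 100:              # cap resampling attempts
        attempts += 1
        length = max(1, int(rng.poisson(lam)))        # span length ~ Poisson(3)
        start = int(rng.integers(0, len(tokens)))     # uniform start position
        idx = range(start, min(start + length, len(tokens)))
        if any(i in covered for i in idx):
            continue                                  # keep spans non-overlapping
        covered.update(idx)
        spans[start] = len(idx)
        budget -= len(idx)
    masked, i = [], 0
    while i < len(tokens):
        if i in spans:
            masked.append(mask_token)                 # whole span -> single [mask]
            i += spans[i]
        else:
            masked.append(tokens[i])
            i += 1
    return masked   # encoder input; the decoder target is the original `tokens`
```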
Method 2: Fusion-in-Decoder
Recently, Izacard and Grave (2020) scaled the BART reader up to a large number of input paragraphs. Their method, Fusion-in-Decoder (FiD), first concatenates each paragraph with the question to obtain a question-aware encoding, then merges these encodings from all paragraphs and feeds them to the decoder for answer prediction. FiD reduces the memory and time costs of encoding the concatenation of all paragraphs, and improves results on multiple ODQA datasets. FiD is an interesting alternative for Book QA, since it can be viewed as an integration of the ranker and reader, with the ranker absorbed into the separate paragraph-encoding step.
FiD trades cross-paragraph interactions for the ability to encode more paragraphs. The single encoded vector per passage works well for extractive ODQA because it only needs to encode information about candidate answers. In Book QA, however, the answer may not be inferable from a single paragraph, and integrating multiple paragraphs is necessary. Therefore, in our approach, we concatenate the encoded vectors of all the paragraphs and rely on the decoder's attention over these vectors to capture cross-paragraph interactions.
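A hedged sketch of this FiD-style reader, assuming the HuggingFace transformers BART implementation rather than the authors' released code: each (question, paragraph) pair is encoded independently, the encoder states are concatenated, and the decoder's cross-attention over the joint sequence supplies the cross-paragraph interaction.

```python
from transformers import BartTokenizer, BartForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

def fid_loss(question, paragraphs, answer, max_len=512):
    # 1) Encode each (question [SEP] paragraph) pair independently.
    pairs = [question + tokenizer.sep_token + p for p in paragraphs]
    enc = tokenizer(pairs, padding=True, truncation=True,
                    max_length=max_len, return_tensors="pt")
    hidden = model.get_encoder()(input_ids=enc.input_ids,
                                 attention_mask=enc.attention_mask).last_hidden_state
    # 2) Concatenate the per-paragraph encoder states into one long sequence.
    n_para, seq_len, dim = hidden.shape
    memory = hidden.reshape(1, n_para * seq_len, dim)
    memory_mask = enc.attention_mask.reshape(1, n_para * seq_len)
    # 3) The decoder's cross-attention over the joint sequence provides the
    #    cross-paragraph interaction; train with the free-form answer as target.
    labels = tokenizer(answer, return_tensors="pt").input_ids
    out = model(encoder_outputs=BaseModelOutput(last_hidden_state=memory),
                attention_mask=memory_mask, labels=labels)
    return out.loss
```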
4.2 Passage Ranker
Base Ranker Model
Our ranker is a BERT-based binary classifier fine-tuned for evidence retrieval. It estimates the likelihood that each passage is supporting evidence for a given question Q.
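A sketch of the ranker at inference time, assuming the HuggingFace transformers BERT classifier; function and variable names are illustrative.

```python
# Score each (question, passage) pair with a BERT binary classifier and rank
# passages by the probability of the "supporting evidence" class.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
ranker = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def rank_passages(question, passages, top_k=3):
    enc = tokenizer([question] * len(passages), passages,
                    padding=True, truncation=True, max_length=512,
                    return_tensors="pt")
    with torch.no_grad():
        logits = ranker(**enc).logits              # shape: (num_passages, 2)
    scores = logits.softmax(dim=-1)[:, 1]          # P(passage is evidence | question)
    order = scores.argsort(descending=True)[:top_k].tolist()
    return [(passages[i], float(scores[i])) for i in order]
```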
Training the ranker models is difficult without high-quality supervision. To deal with this problem, we investigate three approaches for creating pseudo labels, including distant supervision, unsupervised ranker training, and Hard EM training.
Method 1: Distant Supervision (DS)
This is the baseline approach from Mou et al. (2020). It constructs DS signals for rankers in two steps: first, for each question Q, two BM25 rankers are used to retrieve passages, one with Q alone as the query and the other with both Q and the true answer A. The method then samples positive passages from the answer-aware retrieval results and negative passages from the rest, with the positive-to-negative ratio per question Q as a hyperparameter.
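A sketch of this labeling procedure, assuming the rank_bm25 package; the choice of drawing negatives from the top question-only hits (hard negatives) and the n_pos/n_neg values are illustrative, since the sampling ratio is a hyperparameter.

```python
# Build distant-supervision labels from two BM25 retrievals: query = Q, and
# query = Q + A (answer-aware).
from rank_bm25 import BM25Okapi

def build_ds_labels(question, answer, passages, n_pos=2, n_neg=6):
    corpus = [p.lower().split() for p in passages]
    bm25 = BM25Okapi(corpus)
    q_scores = bm25.get_scores(question.lower().split())
    qa_scores = bm25.get_scores((question + " " + answer).lower().split())
    # Positives: passages ranked highest when the answer is appended to the query.
    pos_ids = sorted(range(len(passages)), key=lambda i: -qa_scores[i])[:n_pos]
    # Negatives: sampled from the remaining passages (here, top question-only hits
    # that are not positives, i.e., hard negatives).
    neg_ids = [i for i in sorted(range(len(passages)), key=lambda i: -q_scores[i])
               if i not in pos_ids][:n_neg]
    return [(passages[i], 1) for i in pos_ids] + [(passages[i], 0) for i in neg_ids]
```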
Method 2: Unsupervised ICT Training
ICT (the Inverse Cloze Task; Lee et al., 2019) pre-trains the ranker by treating a sentence sampled from a passage as a “pseudo-question” and its source passage as the “pseudo-evidence” to be retrieved. The selection of pseudo-questions is critical to ICT training. To select representative questions, we investigate several filtering methods and finally develop a book-specific filter. Our method selects the top-scored sentence in a passage as a pseudo-question, scored by its total token-wise mutual information with the corresponding book. The details can be found in Appendix B.
Method 3: Hard EM
Following Min et al. (2019), we treat the identity of the true evidence passage as a latent variable: in each training iteration, the candidate passage to which the reader assigns the highest likelihood of generating the gold answer is taken as the positive example for updating the ranker.
5 Evaluation Part I: QA System Ablation
We evaluate the overall Book QA system and its individual modules on NarrativeQA.
Implementation Details: For rankers, we initialize with bert-base-uncased. For readers, we use bert-base-uncased, gpt2-medium, bart-large, and t5-base. The readers take the top-3 retrieved passages as input, except for the FiD reader, which uses the top-10, so that all readers have comparable time and space costs.
5.1 Overall Performance of Book QA
We first situate our full systems on the NarrativeQA Book QA task. Table 1 lists our results along with the state-of-the-art results reported in prior work (see Section 3.2 and Figure 2 for reference). Empirically, our best ranker comes from combining heuristic distant supervision with unsupervised ICT training, and our best reader from combining the FiD model with book prereading (using the top-10 ranked paragraphs as input). We observe that the specifically designed pre-training techniques play the most important role. Details of the best ranker and reader can be found in the ablation studies.
Table 1: Overall Book QA results (ROUGE-L) under the NarrativeQA full-story setting.

| System | ROUGE-L (dev) | ROUGE-L (test) |
| --- | --- | --- |
| Public Extractive Baselines | | |
| BiDAF (Kočiskỳ et al., 2018) | 6.33 | 6.22 |
| R3 (Wang et al., 2018a) | 11.40 | 11.90 |
| DS-ranker + BERT (Mou et al., 2020) | 14.76 | 15.49 |
| BERT-heur (Frermann, 2019) | – | 15.15 |
| ReadTwice (Zemlyanskiy et al., 2021) | 22.7 | 23.3 |
| Public Generative Baselines | | |
| Seq2Seq (Kočiskỳ et al., 2018) | 13.29 | 13.15 |
| AttSum* (Kočiskỳ et al., 2018) | 14.86 | 14.02 |
| IAL-CPG (Tay et al., 2019) | 17.33 | 17.67 |
| DS-Ranker + GPT2 (Mou et al., 2020) | 21.89 | 22.36 |
| Our Book QA Systems | | |
| BART-no-context (baseline) | 16.86 | 16.83 |
| BM25 + BART reader (baseline) | 23.16 | 24.47 |
| Our best ranker + BART reader | 25.83 | 26.95† |
| Our best ranker + our best reader | 27.91 | 29.21† |
| repl ranker with oracle IR | 37.75 | 39.32 |
Overall, we significantly raise the bar on NarrativeQA, by 4.7% over our best baseline and 6.8% over the best published result. Still, there is massive room for future improvement compared to the upper bound with the oracle ranker. Even with simple BM25 retrieval, our baseline outperforms all published results, showing the importance of investigating the reader. Our best ranker (see Section 5.2 for details) contributes 2.5% of our improvement over the baseline. Our best reader (see Section 5.3 for details) brings an additional >2% improvement compared to the BART reader.
We conduct a significance test for the results of our best system. There is no agreement on the best practice for such tests in natural language generation (Clark et al., 2011; Dodge et al., 2019). We choose the non-parametric bootstrap test because it is a more general approach and does not assume specific distributions over the samples. For bootstrapping, we sample 10K subsets, each of size 1K. The small p-value (< 0.01) shows the effectiveness of our best model.
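A sketch of this bootstrap procedure; the paired resampling of per-question ROUGE-L scores is our assumption about the exact statistic being compared.

```python
# Draw 10K subsets of 1K questions and count how often the baseline matches or
# beats our best system on the subset mean ROUGE-L.
import numpy as np

def bootstrap_p_value(scores_best, scores_baseline, n_draws=10_000,
                      subset_size=1_000, seed=0):
    rng = np.random.default_rng(seed)
    scores_best = np.asarray(scores_best)          # per-question ROUGE-L, our system
    scores_baseline = np.asarray(scores_baseline)  # per-question ROUGE-L, baseline
    losses = 0
    for _ in range(n_draws):
        idx = rng.integers(0, len(scores_best), size=subset_size)
        if scores_best[idx].mean() <= scores_baseline[idx].mean():
            losses += 1
    return losses / n_draws   # small value -> improvement unlikely to be by chance
```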
As a final note, even the results with oracle IR are far from perfect. This indicates the limitations of text-matching-based IR and further confirms the challenge of evidence retrieval in Book QA.
5.2 Ranker Ablation
To dive deeper into the effects of our ranker training techniques from Section 4.2, we study the intermediate retrieval results and measure their coverage of the answers. Coverage is estimated on a ranker's top-5 selections from the baseline BM25's top-32 outputs, using (a) the maximum ROUGE-L score over all subsequences in the retrieved passages with the same length as the answer, and (b) a binary indicator of whether the answer appears exactly in the passages (EM). Table 2 gives the ranker-only ablation. On one hand, our best ranker improves both metrics; it also significantly boosts the BART reader compared to the DS-ranker (Mou et al., 2020), as shown in Appendix A. On the other hand, on top of the DS-ranker, none of the other techniques further improves the two ranker metrics significantly. The unsupervised ICT training brings significant improvement over BM25; when added to the DS-ranker, it brings a slight improvement and leads to our best results. Hard EM (Min et al., 2019) does not lead to improvements. Our conjecture is that the generative reader's signals are not purely matching-oriented, which introduces noise into the matching-oriented ranker training.
Table 2: Answer coverage (EM and ROUGE-L) of each ranker's top-5 selections from BM25's top-32 outputs.

| IR Method | EM | ROUGE-L |
| --- | --- | --- |
| Baseline Rankers | | |
| BM25 | 18.99 | 47.48 |
| BERT DS-ranker (Mou et al., 2020) | 24.26 | 52.68 |
| - ROUGE-L filtering | 22.63 | 51.02 |
| Repl BERT w/ BiDAF | 21.88 | 50.64 |
| Repl BERT w/ MatchLSTM | 21.97 | 50.39 |
| Our Rankers | | |
| BERT ICT-ranker | 21.29 | 50.35 |
| BERT DS-ranker | | |
| + Hard EM | 22.45 | 50.50 |
| + ICT pre-training* | 24.83 | 53.19 |
| Oracle Conditions | | |
| Upperbound (BM25 top-32) | 30.81 | 61.40 |
| Oracle (BM25 w/ Q+A) | 35.75 | 63.92 |
The limited improvement and the low absolute performance demonstrate the difficulty of retrieval in Book QA. The gap between our best performance and the upper bound implies a large potential for designing more advanced rankers.
Additionally, we show how much useful information our best ranker provides to the readers in the whole QA system. In our implementation, the BART and FiD readers use the top-3 and top-10 paragraphs from the ranker, respectively. The top-3 paragraphs from our best ranker give an answer coverage of 22.12% EM and 49.83% ROUGE-L; the top-10 paragraphs give 27.15% EM and 56.77% ROUGE-L. In comparison, the BM25 baseline achieves 15.75%/43.44% for top-3 and 24.08%/53.55% for top-10. Our best ranker therefore eases the bottleneck of the limited number of passages a reader can consume, and it benefits the BART reader much more, which is consistent with our observations in Table 3, Section 5.3.
Table 3: Reader ablation (ROUGE-L).

| System | dev | test |
| --- | --- | --- |
| BM25 + BART reader (baseline) | 23.16 | 24.47 |
| + BART-FiD reader | 25.95 | – |
| Our ranker + BART reader | 25.83 | 26.95 |
| + BART-FiD reader | 26.27 | – |
| repl BART w/ GPT-2 | 22.22 | – |
| repl BART w/ T5 | 20.57 | – |
| + book preread | 26.82 | – |
| + BART-FiD Reader* | 27.91 | 29.21 |
| + book preread (decoder-only) | 26.51 | – |
5.3 Reader Ablation
Table 3 shows how the different reader techniques in Section 4.1 contribute to the QA performance.
First, switching the BART reader to FiD gives a large improvement when using the BM25 ranker (2.8%), approaching the result of “our ranker + BART”. This agrees with our hypothesis in Section 4.1, Method 2, that FiD takes on the roles of both ranker and reader. Second, although the above result shows that FiD's ranking ability does not add much on top of our best ranker, our cross-paragraph attention enhancement still improves FiD thanks to the better retrieval results (a 0.5% improvement over “our ranker + BART”). Third, among the generative reader models, BART outperforms GPT-2 and T5 by a notable margin. Finally, book prereading brings consistent improvements to both combinations, and combining our orthogonal reader improvements gives the best results. We also confirm that prereading mostly helps the decoder, as training only the decoder gives comparable results.
6 Analysis Part II: Human Study
This section conducts in-depth analyses of the challenges in Book QA. We propose a new question categorization scheme based on the types of comprehension or reasoning skills required to answer the questions, then conduct a human study on 1,000 questions. The per-category model performance provides further insights into the deficiencies of current QA models.
6.1 Question Categorization
There have been many different question categorization schemes. Among them, the most widely used is intention-based, where an intention is defined by the WH-word and the word following it. Some recent reasoning-focused datasets (Yang et al., 2018; Xiong et al., 2019b) instead categorize intentions by the type of multi-hop reasoning or by the type of external knowledge required beyond the texts.
However, these previous schemes do not fit our analysis of narrative texts, for two reasons: (1) they only differentiate high-level reasoning types, which is useful in knowledge-base QA (KB-QA) but fails to pinpoint the text-based evidence in Book QA; (2) they are usually entity-centric and overlook linguistic structures such as events, while events play essential roles in narrative stories. We therefore design a new systematic schema to categorize the questions in the NarrativeQA dataset.
Semantic Unit Definition
We first identify a minimum set of basic semantic units, each describing one of the most fundamental components of a story. The set should be sufficient such that (1) each answer can be uniquely linked to one semantic unit, and (2) each question should contain at least one semantic unit. Our final set contains three main classes and nine subclasses (Figure 4).
We merge the two types commonly used in previous analyses, named entities and noun phrases, into the Concept class. The Event class follows the definition in ACE 2005 (Walker et al., 2006). We also use a special subclass, Book Attribute, which represents the meta information or global settings of a book, such as the era and the theme of its story.
Question Type Definition
On top of the semantic unit definitions, each question can be categorized as a query that asks about either a semantic unit or a relation between two semantic units. Based on this distinction, we split all the questions into nine types grouped into four collections (Figure 5).
Concept questions ask about a Concept attribute or a relation between two Concepts. These are the most common types in most ODQA tasks (e.g., TriviaQA) and in QA tasks that require multi-hop reasoning (e.g., ComplexQuestions and HotpotQA).
Event-argument questions ask about parts of an event structure. This type is less common in existing QA datasets, although some contain a small portion of questions in this class. The large ratio of these event-centric questions demonstrates the uniqueness of the NarrativeQA dataset.
Event-relation questions ask about relations (e.g., causal or temporal) between two events, or between an event and an attribute (a state or a description). This type is common in NarrativeQA, since events play essential roles in story narration. A particular type in this group is the relation in which one event serves as an argument of another event (e.g., how-questions); it corresponds to the common linguistic phenomenon of (compositional) nested event structures.
Global-attribute questions ask about Book Attributes. By design, this type is also unique to Book QA.
6.2 Annotation Details
Five annotators are asked to label the semantic unit types and the question types on a total of 1,000 question-answer pairs. Question categories can overlap for the same question. A major kind of overlap is between the three event-component types (trigger, argument - concept, argument - attribute) and the three event-relation types (causal, temporal, and nested). Therefore, the guidelines specify that when a question can be answered with an event component, the annotators check whether the question requires understanding event relations; if so, the question is labeled with the event-relation types, as these are the more critical information for finding the answers. Similarly, for the other, rarer cases of category overlap, we ask the annotators to label the types they believe are more important for finding the answers.
Correlation Between Question and Answer Types
Figure 6 shows the ratios of answer types under each question type via a flow diagram. Most question types correspond to a single major answer type, with a few exceptions: (1) Most questions of the three event-relation types have events as answers; a small portion have concepts or attributes as answers, either because the answers are state/description attributes or because the answers are arguments of one of the related events queried by the question. (2) The Relation b/w Concepts type has some questions with attribute-typed answers, because such questions may ask for the names of the relations themselves, and some relation names are recognized as description-typed attributes. (3) Most Book Attribute questions have concepts as answers, because they ask for the protagonists or the locations where the stories take place.
Annotation Agreement
A subset of 150 questions is used for quality checking, with each question labeled by two annotators. Table 4 reports both the simple agreement rates and Fleiss' kappa (Fleiss, 1971). Our annotations reach high agreement, around 90% for question types and semantic unit (SU) types and 80% for SU subtypes, reflecting the soundness of our scheme.
6.3 Performance of Question Type Classification on the Annotated Data
We conduct an additional experiment to study how well a machine learning model can classify our question types based on question surface patterns. We use the RoBERTa-base model, which demonstrates strong performance on multiple sentence classification tasks. Since our labeled dataset is small, we conduct 10-fold cross-validation on the 1,000 labeled instances. For each test fold, we randomly select another fold as the development set and use the remaining folds for training.
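A sketch of this cross-validation protocol; the train_and_evaluate helper (fine-tuning roberta-base for question-type classification) is a hypothetical placeholder, and it picks the next fold as the development set rather than a random one.

```python
# For each test fold, one other fold serves as the development set and the
# remaining eight folds are used for training.
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(questions, labels, train_and_evaluate, n_splits=10, seed=0):
    questions, labels = np.asarray(questions), np.asarray(labels)
    folds = list(KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(questions))
    accs = []
    for k, (_, test_idx) in enumerate(folds):
        dev_idx = folds[(k + 1) % n_splits][1]          # another fold as dev set
        train_idx = np.setdiff1d(np.arange(len(questions)),
                                 np.concatenate([test_idx, dev_idx]))
        accs.append(train_and_evaluate(                  # placeholder fine-tuning call
            train=(questions[train_idx], labels[train_idx]),
            dev=(questions[dev_idx], labels[dev_idx]),
            test=(questions[test_idx], labels[test_idx]),
            model_name="roberta-base"))
    return float(np.mean(accs))                          # averaged test accuracy
```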
The final averaged test accuracy is 70.2%. Considering the inter-annotator agreement rate of 88.0%, this is a reasonable performance, and several reasons explain the gap: (1) Our training set is small and easy to overfit, as evidenced by the gap between training and development accuracy (∼100% versus 73.4%); the accuracy could potentially be increased with more training data. (2) Some ambiguous questions require the contexts to determine their types. During labeling, our human annotators are allowed to read the answers for additional information, which leads to a higher upper-bound performance. (3) A small number of cases are ambiguous in ways that humans can resolve with world knowledge, whereas it is difficult for models to employ such knowledge. The current accuracy could therefore also be increased with a better model architecture.
Error Analysis and Lessons Learned
Figure 7 gives the major error types, which verify the reasons discussed above. The majority of errors are confusions between Event Argument - Concept and Nested Relation. The model is inaccurate on these two types for several reasons: (1) Similar question surface forms can sometimes take either a concept or an event as an argument; in these cases the answer is necessary for determining the question type. (2) Our annotation guideline encourages annotators to label event relations with higher priority, especially when the answer is a concept that serves as an argument of a clause, which increases the confusion between the two types. Another major error type is labeling Causal Relation as Nested Relation, mainly because some questions ask about causal relations implicitly; human annotators have the commonsense to identify the causality, but models do not. The third major type is the failure to distinguish the Attribute of Concept and Relation b/w Concepts categories: since attributes can be associated with predicates, especially when they are descriptions, the model confuses them with relations or events.
These observations provide insights for future refinement of our annotation guidelines, should the labeled data be further enlarged. For example, Nested Relation should be more clearly defined, with comprehensive examples, so that annotators can better distinguish it from the other types and better determine whether a nested structure exists and whether to label the Event Argument types. Similarly, we could define clearer decision rules among relations, attributes, and events, to help annotators distinguish the Relation b/w Concepts, Attribute of Concept, and Event Argument - Concept types.
7 Evaluation Part II: QA System Performance Decomposition
Table 5 presents both the ratio of each question type and the performance of our best generative and extractive systems on it. The ratios reflect NarrativeQA's unique focus on events: ∼75% of the questions are related to events in the book stories. Specifically, ∼34% of the questions ask about components of event structures (i.e., arguments or triggers) and ∼41% ask about relations between events (note that these questions may still require understanding event structures). By comparison, the two types that dominate other QA datasets, Relation b/w Concepts and Attribute of Concept, account for only ∼23%. This agrees with human intuitions about the unique challenges of book understanding.
Table 5: Question-type ratios and performance decomposition (ROUGE-L) of our best generative (Gen) and extractive (Ext) systems and our best ranker.

| Question Type | Ratio (%) | QA ROUGE-L (Gen) | QA ROUGE-L (Ext) | Ranker ROUGE-L |
| --- | --- | --- | --- | --- |
| Relation b/w Concepts | 11.0 | 40.48 | 24.46 | 63.76 |
| Attribute of Concept | 12.0 | 34.09 | 21.69 | 56.73 |
| Event - Attribute | 3.4 | 25.88 | 10.57 | 49.23 |
| Event - Concept | 28.3 | 27.35 | 15.73 | 62.15 |
| Event - Trigger | 1.8 | 29.63 | 9.28 | 37.56 |
| Causal Relation | 12.6 | 22.86 | 10.39 | 38.47 |
| Temporal Relation | 12.6 | 28.01 | 15.57 | 49.20 |
| Nested Relation | 15.4 | 23.02 | 8.44 | 48.93 |
| Book Attribute | 2.9 | 23.11 | 25.71 | 54.60 |
Most Difficult Question Types: The performance breakdown shows that all three event-relation types (Causal, Temporal, and Nested) are challenging for our QA systems. Causal Relation is the most difficult type, with the lowest QA performance. This confirms that the unique challenge of understanding event relations is still far from being handled by current machine comprehension techniques, even with powerful pre-trained LMs. Moreover, these types could potentially be improved by the idea of complementary evidence retrieval (Wang et al., 2018b; Iyer et al., 2020; Mou et al., 2021) from ODQA.
Besides the three event-relation types, Event - Attribute and Event - Trigger are also challenging for the extractive system, because the answers are usually long textual mentions of events or states that are not extractable from the passages.
Challenging Types for the Reader: By checking the performance gaps between the generative system and the ranker, we can tell which types are difficult mainly for the reader. The Event - Concept type poses more challenges to the reader, given that the ranker performs well on it but the overall QA performance is low. These questions are challenging mainly because current readers struggle to understand event structures, since their answers are usually extractable from the texts.
Breakdown by Answer Type: To better understand the challenge of non-extractable answers, we show the performance on each answer type in Table 6. The answers are mostly extractable when they are entities (including book-specific terms and numeric values); on these types the extractive system performs better and the two systems perform more closely, compared to the other types. In contrast, the answers are less likely to be extractable from the original passages when they are events, states, or descriptions. An interesting observation is that the Common Noun Phrases type is also challenging for the extractive system, indicating that these answers may not appear in the texts in their exact forms, so commonsense knowledge is required to connect their different mentions.
Table 6: Answer-type ratios and performance decomposition (ROUGE-L) of our best generative (Gen) and extractive (Ext) systems and our best ranker.

| Answer Type | Ratio (%) | QA ROUGE-L (Gen) | QA ROUGE-L (Ext) | Ranker ROUGE-L |
| --- | --- | --- | --- | --- |
| Concept - Entity | 35.3 | 26.76 | 18.59 | 66.79 |
| Concept - Common Noun | 16.9 | 31.53 | 12.90 | 51.03 |
| Concept - Book Specific | 4.3 | 39.68 | 26.53 | 65.54 |
| Event - Expression | 25.1 | 24.62 | 11.50 | 39.40 |
| Event - Name | 2.8 | 24.79 | 5.54 | 42.88 |
| Attribute - State | 4.2 | 38.75 | 17.03 | 53.82 |
| Attribute - Numeric | 4.7 | 33.57 | 24.44 | 57.31 |
| Attribute - Description | 6.1 | 26.13 | 11.15 | 41.70 |
| Attribute - Book Attribute | 0.6 | 27.91 | 19.88 | 52.78 |
Quantifying the Challenge of Event-Typed Answers to the Reader: Table 6 shows that the ranker performs poorly when the answers are events and descriptions. This raises the question of whether the relatively low QA performance is mainly due to the ranker's deficiency, or to deficiencies of both the ranker and the reader.
To answer this question, we conduct an experiment in the summary setting of NarrativeQA, which eliminates the effects of the ranker. We create a subset of questions with event-typed answers by keeping a question if either of its two reference answers contains a verb. This results in subsets of 2,796 and 8,248 QA pairs for the validation and test sets, respectively. We train a BART reader on all training data in the summary setting, and test on both the full evaluation data and our event-only subsets. Table 7 shows that performance on the event-only subsets is about 12% lower. The results confirm that questions with event-typed answers are challenging for both the reader and the ranker.
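A sketch of the event-only subset construction; using spaCy for POS tagging is our assumption, as the paper does not name the tagger.

```python
# Keep a question if either of its two reference answers contains a verb.
import spacy

nlp = spacy.load("en_core_web_sm")   # requires the small English model to be installed

def has_verb(text):
    # Counting only main verbs (POS == "VERB"); including auxiliaries is a design choice.
    return any(tok.pos_ == "VERB" for tok in nlp(text))

def event_only_subset(qa_pairs):
    # qa_pairs: list of (question, [answer1, answer2]) tuples
    return [(q, answers) for q, answers in qa_pairs
            if any(has_verb(a) for a in answers)]
```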
Table 7: ROUGE-L in the NarrativeQA summary setting on the full evaluation data and the event-only subsets.

| System | Full Data (dev) | Full Data (test) | Event-Only (dev) | Event-Only (test) |
| --- | --- | --- | --- | --- |
| BERT+Hard EM | 58.1 | 58.8 | – | – |
| Masque | – | 54.7 | – | – |
| BART Reader (ours) | 66.9 | 66.9 | 55.1 | 55.0 |
8 Conclusion
We conduct a comprehensive analysis of the Book QA task, taking the representative NarrativeQA dataset as an example. First, we design Book QA techniques by borrowing ideas from cutting-edge open-domain QA research and demonstrate through extensive experiments that (1) evidence retrieval in Book QA is difficult even with state-of-the-art pre-trained LMs, due to the rich writing styles, the recurring book plots and characters, and the requirement of high-level story understanding; and (2) our proposed approaches that adapt pre-trained LMs to books, especially the prereading technique for reader training, are consistently helpful.
Second, we perform a human study and find that (1) a majority of questions in Book QA require understanding and differentiating events and their relations; and (2) existing pre-trained LMs are deficient at capturing the internal structures of events and the relations between events in Book QA. These findings point toward event understanding as a direction for future improvement on the Book QA task.
Acknowledgments
This work is funded by RPI-CISL, a center in IBM's AI Horizons Network, and the Rensselaer-IBM AI Research Collaboration (RPI-AIRC).
A Full Results on NarrativeQA
Table 8 gives full results with different metrics.
Table 8: Full results on NarrativeQA with all metrics. Each cell reports dev/test.

| System | Bleu-1 | Bleu-4 | Meteor | ROUGE-L | EM | F1 |
| --- | --- | --- | --- | --- | --- | --- |
| Public Extractive Baselines | | | | | | |
| BiDAF (Kočiskỳ et al., 2018) | 5.82/5.68 | 0.22/0.25 | 3.84/3.72 | 6.33/6.22 | – | – |
| R3 (Wang et al., 2018a) | 16.40/15.70 | 0.50/0.49 | 3.52/3.47 | 11.40/11.90 | – | – |
| BERT-heur (Frermann, 2019) | –/12.26 | –/2.06 | –/5.28 | –/15.15 | – | – |
| DS-Ranker + BERT (Mou et al., 2020) | 14.60/14.46 | 1.81/1.38 | 5.09/5.03 | 14.76/15.49 | 6.79/6.66 | 13.75/14.45 |
| ReadTwice(E) (Zemlyanskiy et al., 2021) | 21.1/21.1 | 3.6/4.0 | 6.7/7.0 | 22.7/23.3 | –/– | –/– |
| Our Extractive QA Models | | | | | | |
| BM25 + BERT Reader | 13.27/13.84 | 0.94/1.07 | 4.29/4.59 | 12.59/13.81 | 4.67/5.26 | 11.57/12.55 |
| + HARD EM | 14.39/– | 1.72/– | 4.61/– | 14.10/– | 5.92/– | 12.92/– |
| + ORQA | 15.06/14.25 | 1.58/1.30 | 5.28/5.06 | 15.42/15.22 | 6.25/6.19 | 14.58/14.30 |
| + Oracle IR (BM25 w/ Q+A) | 23.81/24.01 | 3.54/4.01 | 9.72/9.83 | 28.33/28.72 | 15.27/15.39 | 28.42/28.55 |
| Public Generative Baselines | | | | | | |
| AttSum (top-20) (Kočiskỳ et al., 2018) | 19.79/19.06 | 1.79/2.11 | 4.60/4.37 | 14.86/14.02 | – | – |
| IAL-CPG (Tay et al., 2019) | 23.31/22.92 | 2.70/2.47 | 5.68/5.59 | 17.33/17.67 | – | – |
| - curriculum | 20.75/– | 1.52/– | 4.65/– | 15.42/– | – | – |
| DS-Ranker + GPT2 (Mou et al., 2020) | 24.94/– | 4.76/– | 7.74/– | 21.89/– | 6.79/– | 19.67/– |
| Our Generative QA Models | | | | | | |
| BM25 + BART Reader | 24.52/25.30 | 4.28/4.65 | 8.68/9.25 | 23.16/24.47 | 6.28/6.73 | 21.16/22.28 |
| + DS-Ranker | 24.91/25.22 | 4.28/4.60 | 8.63/8.82 | 23.39/24.10 | 6.67/6.93 | 21.31/21.93 |
| + HARD EM | 25.83/– | 4.48/– | 8.75/– | 24.31/– | 7.29/– | 21.91/– |
| + Our Ranker | 27.06/27.68 | 5.22/5.45 | 9.35/9.74 | 25.83/26.95 | 8.57/8.95 | 23.80/25.08 |
| + Preread | 28.54/– | 6.13/– | 9.59/– | 26.82/– | 10.21/– | 25.06/– |
| + FiD | 28.04/– | 5.66/– | 9.49/– | 26.27/– | 9.20/– | 24.29/– |
| + FiD + Preread | 29.56/29.98 | 6.11/6.31 | 10.03/10.33 | 27.91/29.21 | 10.45/11.16 | 26.09/27.58 |
| + Oracle IR (BM25 w/ Q+A) | 35.04/36.41 | 8.84/9.08 | 14.78/15.07 | 37.75/39.32 | 15.78/17.27 | 37.71/38.73 |
| BM25 + GPT-2 Reader | 24.54/– | 4.74/– | 7.32/– | 20.25/– | 5.12/– | 17.72/– |
| + Our Ranker | 24.85/– | 5.01/– | 7.84/– | 22.22/– | 7.29/– | 20.03/– |
| + Oracle IR (BM25 w/ Q+A) | 33.18/32.95 | 8.16/7.70 | 12.35/12.47 | 34.83/34.96 | 17.09/15.98 | 33.65/33.75 |
| BM25 + T5 Reader | 19.28/– | 3.67/– | 6.62/– | 16.89/– | 4.17/– | 15.47/– |
| + Our Ranker | 22.35/– | 4.31/– | 7.59/– | 20.57/– | 6.13/– | 18.48/– |
| + Oracle IR (BM25 w/ Q+A) | 31.06/31.49 | 8.36/8.32 | 12.61/12.93 | 31.18/32.43 | 12.77/12.84 | 31.23/32.18 |
B Details of ICT Training Data Creation
Our pilot study shows that uniformly sampling sentences and their source passages as “pseudo-questions” (PQs) and “pseudo-evidences” (PEs) does not work well. Such PQs are likely to be casual sentences, for example, “Today is sunny”, and thus are not helpful for ranker training.
During sampling, we filter out stopwords and punctuation when computing f(s, b_j). In movie scripts, instructive sentences like “SWITCH THE SCENARIO”, which have poor connections to their source passages, are also ignored. Finally, we require each PQ to contain at least 3 non-stopwords.
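A hedged sketch of one plausible instantiation of the scoring function f(s, b_j): the sum, over a sentence's non-stopword tokens, of the pointwise mutual information between each token and the book, estimated from token counts within the book versus the whole collection. The exact estimator used in the paper may differ.

```python
# Score a candidate pseudo-question by its total token-wise PMI with the book.
import math
from collections import Counter

def book_pmi_scorer(book_tokens, collection_tokens, stopwords):
    book_counts, coll_counts = Counter(book_tokens), Counter(collection_tokens)
    book_total, coll_total = len(book_tokens), len(collection_tokens)

    def f(sentence_tokens):
        score = 0.0
        for w in sentence_tokens:
            if w in stopwords or w not in book_counts:
                continue
            p_w_given_book = book_counts[w] / book_total   # P(w | b_j)
            p_w = coll_counts[w] / coll_total              # P(w) over all books
            score += math.log(p_w_given_book / p_w)
        return score

    return f
```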
Notes
The SQuAD leaderboard (Rajpurkar et al., 2018): rajpurkar.github.io/SQuAD-explorer.
Historically, open-domain QA meant “QA on any domain/topic”. More recently, the term has been restricted to “retrieval over a large corpus” (Chen et al., 2017), so “open-retrieval QA” may be a better term here. However, to follow the recent terminology in the QA community, we still use “open-domain QA” throughout this paper.
We consider Challenge (5) more like an opportunity than a challenge, and leave its investigation to future work.
For fair comparison, we lowercase the answers and remove the punctuation, and use the open-source nlg-eval library (Sharma et al., 2017).
For simplicity, we use the notation here.
A unique filter is built for each book.
Appendix A reports the full results, where we achieve the best performance across all of the metrics.
Note that this analysis cannot confirm which types pose challenges to the ranker. This is because event-typed answers are relatively long and generative, which puts them at a natural disadvantage under our ranker ROUGE scores.
References
Author notes
* Equal contribution. XM built the whole system, implemented the data preprocessing pipeline, Hard EM ranker, and all the reader modules, and conducted all the QA experiments. CY implemented the unsupervised ICT ranker and the first working version of FiD, and was responsible for the final ranker module. MY is the corresponding author, who proposed and led this project, built the ranker code base (until the DS ranker), designed the question schema and conducted its related experiments and analysis in Part II.