Narrative Question Answering with Cutting-Edge Open-Domain QA Techniques: A Comprehensive Study

Recent advancements in open-domain question answering (ODQA), i.e., finding answers from large open-domain corpora like Wikipedia, have led to human-level performance on many datasets. However, progress in QA over book stories (Book QA) lags behind despite its similar task formulation to ODQA. This work provides a comprehensive and quantitative analysis of the difficulty of Book QA: (1) We benchmark research on the NarrativeQA dataset with extensive experiments using cutting-edge ODQA techniques. This quantifies the challenges Book QA poses and advances the published state-of-the-art by a ~7% absolute improvement on Rouge-L. (2) We further analyze the detailed challenges in Book QA through human studies (code and annotations: https://github.com/gorov/BookQA). Our findings indicate that event-centric questions dominate this task, exemplifying the inability of existing QA models to handle event-oriented scenarios.


Introduction
Recent Question-Answering (QA) models have achieved or even surpassed human performance on many challenging tasks, including single-passage QA and open-domain QA (ODQA). Nevertheless, understanding rich context beyond text pattern matching remains unsolved, especially answering questions about narrative elements by reading books. One example is NarrativeQA (Kočiskỳ et al., 2018) (Fig. 1). Since its first release in 2017, there has been no significant improvement over the primitive baselines. In this paper, we study this challenging Book QA task and shed light on its inherent difficulties.

Figure 1: An example of Book QA. The content is from the book An Ideal Husband (Wilde and Fornelli, 1916). The bottom contains a typical QA pair, and the highlighted text is the evidence for deriving the answer.
Despite its similarity to standard ODQA tasks, i.e., both require finding evidence paragraphs for inferring answers, Book QA has unique challenges (Kočiskỳ et al., 2018): (1) the narrative writing style of book stories differs from the formal texts of Wikipedia and news, demanding a deeper understanding capability, and the flexible writing styles of different genres and authors make the challenge more severe; (2) passages depicting related book plots and characters share more semantic similarities than Wikipedia articles, which increases confusion when finding the correct evidence to answer a question; (3) the free-form nature of the answers necessitates the ability to summarize the narrative plots; (4) the free-form answers make it hard to obtain fine-grained supervision at the passage or span level; and (5) different paragraphs usually have logical relations among them.

To quantify these challenges, we conduct a two-fold analysis to examine the gaps between Book QA and standard ODQA tasks. First, we benchmark Book QA performance on the NarrativeQA dataset, with methods created or adapted from the ideas of state-of-the-art ODQA methods (Wang et al., 2018a; Lin et al., 2018; Min et al., 2019; Guu et al., 2020; Karpukhin et al., 2020). We build a state-of-the-art Book QA system with a retrieve-and-read framework, which consists of a ranker for retrieving evidence and a reader (i.e., a QA model) to predict answers given the evidence. For the ranker, we investigate different weakly supervised and unsupervised training methods that cope with the lack of passage-level supervision. For the reader, we fill the missing study and comparison of pre-trained generative models for Book QA, such as GPT-2 (Radford et al., 2019) and BART (Lewis et al., 2019). We then investigate approaches to adapt to book writing styles and to make use of more evidence paragraphs.
As a result, our study gives a ∼7% absolute Rouge-L improvement over the published state-of-the-art.
Second, we conduct human studies to quantify the challenges in Book QA. To this end, we design a new question categorization schema based on the types of reading comprehension or reasoning skills required to provide the correct answers. Specifically, we first define basic semantic units, such as entities and event structures, in the questions and answers; the question category is then determined by the types of units and the relations between them. We annotate 1,000 questions accordingly and find that the statistics of the NarrativeQA dataset differ significantly from other QA datasets, mainly in the focus on event arguments and relations between events. We further decompose our system's performance over the question categories, showing the detailed types of challenges quantitatively.
In summary, our comprehensive study not only improves the state-of-the-art with careful utilization of recent ODQA advancements, but also reveals the unique challenges in Book QA with quantitative measurements.

Related Work
Open-Domain QA ODQA aims at answering questions from large open-domain corpora (e.g., Wikipedia). Recent work naturally adopts a ranker-reader framework (Chen et al., 2017). Recent success in this field mainly comes from improvements in the following directions: (1) distantly supervised training of neural ranker models to select relevant evidence passages for a question (Wang et al., 2018a; Lin et al., 2018; Min et al., 2019); (2) fine-tuning and improving pre-trained LMs, like ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019), as rankers and readers; (3) unsupervised adaptation of pre-trained LMs to the target QA tasks (Sun et al., 2019; Xiong et al., 2019a).
Book QA Previous works (Kočiskỳ et al., 2018; Tay et al., 2019; Frermann, 2019) also adopt a ranker-reader pipeline. However, they have not fully investigated state-of-the-art ODQA techniques. First, NarrativeQA is a generative QA task by nature, yet the application of the latest pre-trained LMs for generation, such as BART, is not well studied. Second, the lack of fine-grained supervision on evidence prevents earlier methods from training a neural ranking model, so they only use simple BM25-based retrievers (Robertson et al., 1995). An exception is Mou et al. (2020), who construct pseudo distant-supervision signals for ranker training. Another relevant work (Frermann, 2019) uses book summaries as an additional resource to train rankers. However, this differs from the aim of the Book QA task, answering questions solely from books, since in a general scenario the book summary cannot answer all questions about the book. Our work is the first to investigate and compare improved training algorithms for both rankers and readers in Book QA.
Task Setup

Task Definition and Dataset
Following Kočiskỳ et al. (2018), we define the Book QA task as finding the answer A to a question Q from a book, where each book contains a number of consecutive and logically related paragraphs C. The number of paragraphs |C| varies across books from a few hundred to thousands. All our experiments are conducted on the NarrativeQA dataset (Kočiskỳ et al., 2018). It contains 783 books and 789 movie scripts (we use "books" to refer to both), each containing an average of 62K words. Each book has 30 question-answer pairs written by human annotators in free-form natural language, so the exact answers are not guaranteed to appear in the books. NarrativeQA provides two settings, the summary setting and the full-story setting. The former requires answering questions from the books' Wikipedia summaries, and the latter requires answering questions from the original books, assuming the summaries do not exist. Our Book QA task corresponds to the full-story setting, and we use both names interchangeably.
Following Kočiskỳ et al. (2018), we tokenize the books with SpaCy and split each book into non-overlapping chunks of 200 tokens.
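The chunking step above can be sketched as follows (a minimal illustration; in practice the tokens come from the SpaCy tokenizer):

```python
def chunk_tokens(tokens, size=200):
    """Split a tokenized book into non-overlapping chunks of `size` tokens;
    the final chunk may be shorter than `size`."""
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]
```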

Baseline
Following the open-domain formulation, we employ the dominant ranker-reader pipeline, which first uses a ranker model to select the passages C_Q most relevant to Q as evidence, and then a reader model to predict the answer Ã given Q and C_Q. Our baseline QA systems consist of training different base reader models (detailed in Sec. 4.1) over the BM25 ranker. We also compare with competitive public Book QA systems as baselines (Kočiskỳ et al., 2018; Tay et al., 2019; Frermann, 2019; Mou et al., 2020) under the NarrativeQA full-story setting, and a concurrent work (Zemlyanskiy et al., 2021). As discussed in Section 2, Mou et al. (2020) train a ranker with distant supervision (DS), i.e., the first analyzed ranker method (Fig. 3); Frermann (2019) use exterior supervision from the book summaries, which is considered unavailable by design of the Book QA task. Because the summaries are written by humans, that system can be viewed as benefiting from human comprehension of the books. Fig. 2 lists the details of our compared systems.

Metrics
Following previous work (Kočiskỳ et al., 2018; Tay et al., 2019; Frermann, 2019), we use Rouge-L (Lin, 2004) as the main metric for both evidence retrieval and question answering. For completeness, Appendix A provides results with other metrics used in previous work, including Bleu-1/4 (Papineni et al., 2002), Meteor (Banerjee and Lavie, 2005), and the Exact Match (EM) and F1 scores commonly used in extractive QA.

Analysis Part I: Experimental Study
This section describes our efforts to apply or adapt the latest open-domain QA ideas to improve the Book QA ranker and reader models. Fig. 3 summarizes the inspected approaches.

Figure 3: Summary of our inspected approaches in Analysis Part I.
• Reader / Book Prereading: we propose to adapt BART to the narrative style with the text-infilling objective.
• Reader / Fusion-in-Decoder: proposed by Izacard and Grave (2020) as a new type of ODQA reader; we improve the decoder with attention over all the encoder states to capture cross-passage interaction.
• Ranker / Heuristic distant supervision: we directly apply the heuristics from Mou et al. (2020) for Book QA.
• Ranker / Unsupervised ICT: proposed as a siamese network for both BERT pre-training and dense retrieval; we improve the method with our book-specific training data selection.
• Ranker / Hard EM: proposed by Min et al. (2019) for reader training; we adapt the method to ranker training.

Reader
In Book QA, the gold answer may not have an exact match in the context. Therefore, we follow Mou et al. (2020) and take the span S with the maximum Rouge-L score against the ground truth A as the weak label, subject to the constraint that A and S have the same length (i.e., |S| = |A|).
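This weak labeling can be sketched as follows. Since |S| = |A|, Rouge-L between a span and the answer reduces to the LCS ratio; `weak_span_label` is a hypothetical helper name, and the quadratic scan is written for clarity rather than speed:

```python
def lcs_len(x, y):
    # classic dynamic-programming longest-common-subsequence length
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            dp[i + 1][j + 1] = dp[i][j] + 1 if xi == yj else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def weak_span_label(context_tokens, answer_tokens):
    """Return the context span of the same length as the answer that
    maximizes Rouge-L (here: the LCS ratio, since |S| = |A|) against it."""
    n = len(answer_tokens)
    best_start, best_score = 0, -1.0
    for start in range(len(context_tokens) - n + 1):
        span = context_tokens[start:start + n]
        score = lcs_len(span, answer_tokens) / n
        if score > best_score:
            best_start, best_score = start, score
    return context_tokens[best_start:best_start + n], best_score
```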
Method 1: Book Prereading Inspired by the literature on unsupervised adaptation of pre-trained LMs (Sun et al., 2019; Xiong et al., 2019a), we let the reader "preread" the training books through an additional pre-training step prior to fine-tuning on the QA task. This technique helps the model better adapt to narrative writing styles. Specifically, we extract random passages from all training books to build a passage pool. For each training iteration, we mask random spans from each passage, following the setting of Lewis et al. (2019). The start positions of spans are sampled from a uniform distribution without overlapping, and the length of each span is drawn from a Poisson distribution with λ = 3. Each span is then replaced by a single [mask] token regardless of its length, and 15% of the total tokens in each passage are masked. During the prereading stage, we feed the masked passage to the encoder and train the decoder to restore the raw passage auto-regressively.
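A minimal sketch of this span-masking corruption (the function names and the rejection-sampling loop are our own illustration; real implementations operate on subword ids):

```python
import math
import random

def poisson(lam):
    # Knuth's algorithm for sampling from a Poisson distribution
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def mask_spans(tokens, mask_rate=0.15, lam=3.0, mask_token="[mask]"):
    """Corrupt a passage for prereading: replace random non-overlapping spans
    (lengths ~ Poisson(lam)) with a single [mask] token each, until roughly
    `mask_rate` of the tokens are covered."""
    n = len(tokens)
    budget = max(1, int(n * mask_rate))
    covered = [False] * n
    span_starts = {}          # start index -> span length
    masked, attempts = 0, 0
    while masked < budget and attempts < 100:
        attempts += 1
        length = max(1, min(poisson(lam), budget - masked))
        start = random.randrange(0, n - length + 1)
        if any(covered[start:start + length]):
            continue          # resample to keep spans non-overlapping
        for i in range(start, start + length):
            covered[i] = True
        span_starts[start] = length
        masked += length
    out, i = [], 0
    while i < n:
        if i in span_starts:  # collapse the whole span into one [mask]
            out.append(mask_token)
            i += span_starts[i]
        else:
            out.append(tokens[i])
            i += 1
    return out
```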

Method 2: Fusion-in-Decoder
Recently, Izacard and Grave (2020) scaled the BART reader to a large number of input paragraphs. The method, Fusion-in-Decoder (FiD), first concatenates each paragraph with the question to obtain a question-aware encoded vector, then merges these vectors from all paragraphs and feeds them to a decoder for answer prediction. FiD reduces the memory and time costs of encoding the concatenation of all paragraphs, and improves results on multiple ODQA datasets. FiD is an interesting alternative for Book QA, since it can be viewed as an integration of the ranker and reader, with the ranker absorbed into the separate paragraph encoding step.

FiD trades cross-paragraph interactions for encoding more paragraphs. The single encoded vector per passage works well for extractive ODQA because the vector only needs to encode information about candidate answers. However, in Book QA the answer may not be inferable from a single paragraph, and integrating multiple paragraphs is necessary. Therefore, in our approach, we concatenate the encoded vectors of all the paragraphs and rely on the decoder's attention over these vectors to capture cross-paragraph interactions.
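The cross-paragraph variant can be sketched at the tensor level (an illustration with NumPy; `encoder` is a stand-in for the real BART encoder, assumed to map token ids of shape (seq,) to hidden states of shape (seq, d)):

```python
import numpy as np

def fid_encode(encoder, question_ids, passage_ids_list):
    """Encode each (question, passage) pair independently, as in FiD, then
    concatenate the per-passage encoder states along the sequence axis so
    the decoder's cross-attention can span all passages at once."""
    states = [encoder(np.concatenate([question_ids, p]))
              for p in passage_ids_list]
    return np.concatenate(states, axis=0)   # (sum of pair lengths, d)
```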

Passage Ranker
Base Ranker Model Our ranker is a BERT-based binary classifier fine-tuned for evidence retrieval. It estimates the likelihood of each passage being supporting evidence for a given question Q.
Training the ranker models is difficult without high-quality supervision. To deal with this problem, we investigate three approaches for creating pseudo labels, including distant supervision, unsupervised ranker training and Hard EM training.
Method 1: Distant Supervision (DS) This is the baseline approach from Mou et al. (2020). It constructs DS signals for rankers in two steps. First, for each question Q, two BM25 rankers are used to retrieve passages, one with Q as the query and the other with both Q and the true answer A. Denoting the corresponding retrieval results as C_Q and C_{Q+A}, the method samples positive passages C_Q^+ from C_Q ∩ C_{Q+A} and negative passages C_Q^- from the rest, with the per-question ratio σ = |C_Q^+| / |C_Q^-| as a hyperparameter.
Second, to enlarge the margin between the positive and negative samples, the method applies a Rouge-L filter to the previous sampling results to get the refined samples C_Q^{++} and C_Q^{--}:

C_Q^{++} = { C_i ∈ C_Q^+ : max_{S ⊆ C_i} Sim(S, A) ≥ α },
C_Q^{--} = { C_i ∈ C_Q^- : max_{S ⊆ C_i} Sim(S, A) ≤ β },   (1)

where S is a span in C_i, Sim(·, ·) is the Rouge-L score between two sequences, and α and β are hyperparameters.
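The two-step heuristic can be sketched as follows (`bm25_topk` and `max_rouge` are assumed helper functions for BM25 retrieval and the maximum same-length-span Rouge-L described above, and the thresholds are illustrative):

```python
def build_ds_samples(bm25_topk, passages, question, answer, max_rouge,
                     alpha=0.6, beta=0.2):
    """Two-step distant-supervision sampling for ranker training (a sketch;
    all callables and thresholds are stand-ins).
    Step 1: passages retrieved by both the Q query and the Q+A query become
    positive candidates. Step 2: a Rouge-L filter enlarges the margin."""
    c_q = set(bm25_topk(passages, question))
    c_qa = set(bm25_topk(passages, question + " " + answer))
    positives = c_q & c_qa                      # retrieved by both queries
    negatives = (c_q | c_qa) - positives
    refined_pos = {p for p in positives if max_rouge(p, answer) >= alpha}
    refined_neg = {p for p in negatives if max_rouge(p, answer) <= beta}
    return refined_pos, refined_neg
```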
Method 2: Unsupervised ICT Training Inspired by the effectiveness of the Inverse Cloze Task (ICT) as an unsupervised ranker training objective, we use it to pre-train our ranker. The rationale is to construct a "pseudo-question" q and "pseudo-evidence" b from the same original passage p, and to maximize the probability P_ICT(b|q) of retrieving b given q, estimated with negative sampling as

P_ICT(b|q) = exp(S_retr(b, q)) / Σ_{b' ∈ {b} ∪ B⁻} exp(S_retr(b', q)),   (2)

where S_retr(·, q) is the relevance score between a paragraph and the "pseudo-question" q, and each negative b' ∈ B⁻ (b' ≠ b) is sampled from original passages other than p.
The selection of "pseudo-questions" is critical to ICT training. To select representative questions, we investigate several filtering methods and finally develop a book-specific filter (a unique filter is built for each book). Our method selects the top-scored sentence in a passage as the "pseudo-question", scored by its total token-wise mutual information with the corresponding book. Details can be found in Appendix B.
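The ICT objective with sampled negatives can be sketched as an in-batch softmax (an illustration; in the real model, S_retr comes from the siamese BERT encoders):

```python
import numpy as np

def ict_loss(q_vecs, b_vecs):
    """Inverse Cloze Task loss with in-batch negatives: q_vecs[i] encodes
    pseudo-question i, b_vecs[i] its source pseudo-evidence, and the other
    rows of the batch act as negatives drawn from other passages."""
    scores = q_vecs @ b_vecs.T                    # S_retr as a dot product
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # -log P_ICT(b_i | q_i)
```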
Method 3: Hard EM Hard EM is an iterative learning scheme, first introduced to ODQA by Min et al. (2019) to find the correct answer spans that maximize reader performance. Here we adapt the algorithm to ranker training. At step t, the E-step first trains the reader with the current top-k selections C_Q^t as input, updating its parameters to Φ_{t+1}; it then derives the new positive passage as the one maximizing the reader's probability of predicting A:

C_Q^{t+1,+} = argmax_{C ∈ C_Q^t} P(A | Q, C; Φ_{t+1}).   (3)

The M-step then updates the ranker parameters Θ:

Θ_{t+1} = argmax_Θ Σ_Q log P(C_Q^{t+1,+} | Q; Θ).   (4)

In practice, Min et al. (2019) find that, initialized with standard maximum likelihood training, Hard EM usually converges in 1-2 EM iterations.
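One Hard-EM iteration for the ranker can be sketched as follows (all four callables are hypothetical stand-ins for the real ranker/reader models and their training loops):

```python
def hard_em_step(rank_topk, reader_prob, train_reader, train_ranker,
                 questions, k=5):
    """E-step: train the reader on the current top-k passages, then relabel
    each question's positive passage as the one maximizing the reader's
    probability of the gold answer. M-step: train the ranker toward the
    relabeled positives."""
    new_positives = {}
    for question, answer, candidates in questions:
        topk = rank_topk(question, candidates, k)
        train_reader(question, answer, topk)        # update reader params
        new_positives[question] = max(
            topk, key=lambda c: reader_prob(answer, question, c))
    train_ranker(new_positives)                     # update ranker params
    return new_positives
```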

Evaluation Part I: QA System Ablation
We evaluate the overall Book QA system, and the individual modules on NarrativeQA.
Implementation Details: For rankers, we initialize with bert-base-uncased. For readers, we use bert-base-uncased, gpt2-medium, bart-large and T5-base. The readers use top-3 retrieved passages as inputs, except for the FiD reader which uses top-10, making the readers have comparable time and space complexities.

Overall Performance of Book QA
We first show where our full systems stand on the NarrativeQA Book QA task. Table 1 lists our results along with the state-of-the-art results reported in prior works (see Section 3.2 and Fig. 2 for reference). Empirically, our best ranker combines heuristic distant supervision with unsupervised ICT training; our best reader combines the FiD model with book prereading (taking the top-10 ranked paragraphs as input). Specifically designed pre-training techniques play the most important role. Details of the best ranker and reader can be found in the ablation study.
Overall, we significantly raise the bar on NarrativeQA, by 4.7% over our best baseline and 6.8% over the best published result. But there is still massive room for improvement compared to the upper bound with an oracle ranker. Our baseline is better than all published results with simple BM25 retrieval, showing the importance of reader investigation. Our best ranker (see Section 5.2 for details) contributes 2.5% of our improvement over the baseline. Our best reader (see Section 5.3 for details) brings an additional >2% improvement compared to the BART reader.
We conduct a significance test for the results of our best system. There is no agreement on the best practice of such tests for natural language generation (Clark et al., 2011; Dodge et al., 2019). We choose the non-parametric bootstrap test because it is more general and does not assume a specific distribution over the samples. For bootstrapping, we sample 10K subsets, each of size 1K. The small p-value (< 0.01) shows the effectiveness of our best model. As a final note, even the results with the oracle IR are far from perfect, which indicates the limitation of text-matching-based IR and further confirms the challenge of evidence retrieval in Book QA.
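The bootstrap test used above can be sketched as follows (a minimal version; `scores_a` and `scores_b` stand for the per-question scores of the two systems being compared):

```python
import random

def bootstrap_p_value(scores_a, scores_b, n_resamples=10000, subset_size=1000):
    """Non-parametric bootstrap: repeatedly resample question subsets with
    replacement and count how often system A fails to outscore system B."""
    n = len(scores_a)
    losses = 0
    for _ in range(n_resamples):
        idx = [random.randrange(n) for _ in range(subset_size)]
        if sum(scores_a[i] for i in idx) <= sum(scores_b[i] for i in idx):
            losses += 1
    return losses / n_resamples
```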

Ranker Ablation
To dive deeper into the effects of the ranker training techniques in Sec. 4.2, we study the intermediate retrieval results and measure their coverage of the answers. The coverage is estimated on a ranker's top-5 selections from the baseline BM25's top-32 outputs, using both the maximum Rouge-L score over all subsequences of the same length as the answer in the retrieved passages, and a binary indicator of whether the answer appears in the passages (EM). Table 2 gives the ranker-only ablation. On the one hand, our best ranker improves both metrics. It also significantly boosts the BART reader compared to the DS ranker (Mou et al., 2020), as shown in Appendix A. On the other hand, on top of the DS ranker, none of the other techniques further improves the two ranker metrics significantly. The ICT unsupervised training brings significant improvement over BM25; when added to the DS ranker, it brings a slight improvement and leads to our best results. Hard EM (Min et al., 2019) does not lead to improvements. Our conjecture is that generative readers do not solely produce matching-oriented signals, thus introducing noise into matching-oriented ranker training. The limited improvement and the low absolute performance demonstrate the difficulty of retrieval in Book QA. The gap between our best performance and the upper bound implies a large potential for designing a more advanced ranker.
Additionally, we show how much useful information our best ranker can provide to the readers in the whole QA system. In our implementation, the BART and FiD readers use the top-3 and top-10 paragraphs from the ranker, respectively. The top-3 paragraphs from our best ranker give an answer coverage of 22.12% EM and 49.83% Rouge-L; the top-10 paragraphs give 27.15% EM and 56.77% Rouge-L. In comparison, the BM25 baseline gives 15.75%/43.44% for top-3 and 24.08%/53.55% for top-10. Therefore, our best ranker effectively eases the limited-passage bottleneck and benefits the BART reader much more, consistent with our observations in Table 3, Section 5.3. Table 3 shows how the different reader techniques in Section 4.1 contribute to the QA performance.

Reader Ablation
First, switching the BART reader to FiD gives a large improvement when using the BM25 ranker (2.8%), approaching the result of "our ranker + BART". This agrees with our hypothesis in Section 4.1 that FiD takes the roles of both ranker and reader. Second, although the above result shows that FiD's ranking ability does not add much on top of our best ranker, our cross-paragraph attention enhancement still improves FiD thanks to the better retrieval results (a 0.5% improvement over "our ranker + BART"). Third, among all the generative reader models, BART outperforms GPT-2 and T5 by a notable margin. Finally, book prereading brings consistent improvements to both combinations, and combining our orthogonal reader improvements gives the best results. We also confirm that prereading mostly helps the decoder, as training only the decoder gives comparable results.

Analysis Part II: Human Study
This section conducts an in-depth analysis of the challenges in Book QA. We propose a new question categorization scheme based on the types of comprehension or reasoning skills required to answer the questions, then conduct a human study on 1,000 questions. The model performance per category then provides further insights into the deficiencies of current QA models.

Question Categorization
There have been many different question categorization schemes. The most widely used is intention-based, where an intention is defined by the WH-word and the word that follows it. Some recent reasoning-focused datasets (Yang et al., 2018; Xiong et al., 2019b) categorize intents by the type of multi-hop reasoning or the type of external knowledge required beyond the texts. However, none of these schemes reasonably fits our analysis of narrative texts, for two reasons: (1) they only differentiate high-level reasoning types, which is useful in knowledge-base QA (KB-QA) but fails to pinpoint the text-based evidence in Book QA; (2) they are usually entity-centric and overlook linguistic structures like events, while events play essential roles in narrative stories. We therefore design a new systematic schema to categorize the questions in the NarrativeQA dataset.
Semantic Unit Definition We first identify a minimum set of basic semantic units, each describing one of the most fundamental components of a story. The set should be sufficient such that (1) each answer can be uniquely linked to one semantic unit, and (2) each question should contain at least one semantic unit. Our final set contains three main classes and nine subclasses (Fig. 4).
We merge the two commonly-used types in the previous analysis, named entities and noun phrases, into the Concept class. The Event class follows the definition in ACE 2005 (Walker et al., 2006). We also use a special sub-type "Book Attribute" that represents the meta information or the global settings of the book, such as the era and the theme of the story in a book.
Question Type Definition On top of the semantic unit definitions, each question can be categorized as a query about either a semantic unit or a relation between two semantic units. Based on these distinctions, we split all the questions into nine types grouped into four collections (Fig. 5).
• Concept questions ask about a Concept attribute or a relation between two Concepts. These are the most common types in most ODQA tasks (e.g., TriviaQA) and in QA tasks requiring multi-hop reasoning (e.g., ComplexQuestions and HotpotQA).
• Event-argument questions ask about parts of an event structure. This type is less common in existing QA datasets, though some contain a small portion of questions in this class. The large ratio of these event-centric questions demonstrates the uniqueness of the NarrativeQA dataset.
• Event-relation questions ask about relations (e.g., causal or temporal relations) between two events, or between an event and an attribute (a state or a description). This type is common in NarrativeQA, since events play essential roles in story narration. A particular type in this group is the relation where one event serves as an argument of another event (e.g., how-questions), corresponding to the common linguistic phenomenon of (compositionally) nested event structures.

• Global-attribute questions ask about a Book Attribute. By design, this type is also unique to Book QA.

Annotation Details
Five annotators are asked to label the semantic unit types and the question types on a total of 1,000 question-answer pairs. Question categories can overlap for the same question. A major kind of overlap is between the three event-component types (trigger, concept argument, attribute argument) and the three event-relation types (causal, temporal, and nested). Therefore, the guideline instructs that when a question can be answered with an event component, the annotators check whether the question requires understanding event relations.
If so, the question is labeled with the event-relation type, as that is the more critical information for finding the answer. Similarly, for the other rare cases of category overlap, we ask the annotators to label the types they believe are more important for finding the answers.
Table 4: Annotation agreement. SU: Semantic Unit. "SU Type" and "SU Sub Type" are defined in Figure 4.

Correlation between question and answer types Figure 6 shows the ratios of answer types under each question type via a flow diagram. Most question types correspond to a single major answer type, with a few exceptions: (1) Most of the three event-relation question types have events as answers. A small portion have concepts or attributes as answers, either because the answers are state/description attributes, or because the answers are arguments of one of the related events queried by the question.
(2) The Relation b/w Concepts type has some questions with attribute-typed answers, because the questions may ask for the names of the relations themselves, and some relation names are recognized as description-typed attributes.
(3) Most Book Attribute questions have concepts as answers, because they ask for the protagonists or the locations where the stories take place.
Annotation agreement A subset of 150 questions is used for quality checking, with each question labeled by two annotators. Table 4 reports both the simple agreement rates and Fleiss' Kappa (Fleiss, 1971) values. Our annotations reach high agreement, around 90% for question types and SU types and 80% for SU sub-types, supporting the soundness of our scheme.

Performance of Question Type Classification on the Annotated Data
We conduct an additional experiment to study how well a machine learning model can classify our question types from question surface patterns alone. We use the RoBERTa-base model, which demonstrates superior performance on multiple sentence classification tasks. Since our labeled data is small, we conduct 10-fold cross-validation on the 1,000 labeled instances. For each test fold, we randomly select another fold as the development set and use the remaining folds for training.
The final averaged test accuracy is 70.2%. Considering the inter-annotator agreement rate of 88.0%, this is reasonable performance, and several reasons explain the gap: (1) Our training data is small and easy to overfit, as evidenced by the gap between training and development accuracy (∼100% versus 73.4%). The accuracy could potentially be increased with more training data.
(2) Some ambiguous questions require context to determine their types. During labeling, our human annotators are allowed to read the answers for additional information, which leads to a higher upper-bound performance. (3) There is a small number of ambiguous cases on which humans can use world knowledge that models struggle to employ. Therefore, the current accuracy could potentially be increased with a better model architecture.

Error Analysis and Lessons Learned

Figure 7 gives the major error types, which corroborates the reasons discussed above. The majority of errors are confusions between Event Argument - Concept and Nested Relation. The models are not accurate on these two types for several reasons: (1) Sometimes similar question surface forms can take both concepts and events as an argument; in these cases, the answers are necessary for determining the question type. (2) Our annotation guideline encourages the annotators to label event relations with higher priority, especially when the answer is a concept but serves as an argument of a clause; this increases the labeling error rate between the two types. Another major error type is labeling Causal Relation as Nested Relation, mainly because some questions ask causal relations in an implicit way, where human annotators have the commonsense to identify the causality but models do not. The third major type is the failure to identify the Attribute of Concept and Relation b/w Concepts categories. As attributes can be associated with some predicates, especially when they are descriptions, the models confuse them with relations or events.

The above observations provide insights for future refinement of our annotation guidelines, should the labeled data be further enlarged. For example, Nested Relation should be more clearly defined, with comprehensive examples provided, so that annotators can better distinguish it from the other types and better determine whether a nested structure exists and whether to label the Event Argument types. Similarly, we could define clearer decision rules among relations, attributes, and events, to help annotators distinguish the Relation b/w Concepts, Attribute of Concept, and Event Argument - Concept types.

Table 5: Performance decomposition over question types of our best generative system (Gen, the best BART-based system), extractive system (Ext, the best BERT-based system, i.e., our best ranker + BERT reader), and ranker (BERT+ICT from Table 2).

Table 6: Performance decomposition over answer types of our best generative/extractive systems and ranker. Gen and Ext are the same systems as in Table 5.
Evaluation Part II: QA System Performance Decomposition

Table 5 presents both the ratio of each question type and our best generative and extractive performance on it. The ratios reflect NarrativeQA's unique focus on events: ∼75% of the questions are relevant to the events in book stories. Specifically, ∼34% of the questions ask about components of event structures (i.e., arguments or triggers) and ∼41% ask about relations between events (note that these questions may still require understanding event structures). By comparison, the two types dominating other QA datasets, Concept Relation and Concept Attribute, contribute a ratio of only ∼23%. This agrees with human intuitions about the unique challenges in book understanding.
Most difficult question types: The performance breakdown shows that all three event-relation types (Causal, Temporal and Nested) are challenging for our QA systems. The Causal relation is the most difficult type, with the lowest QA performance. This confirms that the unique challenge of understanding event relations is still far from well-handled by current machine comprehension techniques, even with powerful pre-trained LMs. Moreover, these types could potentially be improved by the idea of complementary evidence retrieval (Wang et al., 2018b; Iyer et al., 2020; Mou et al., 2021) in ODQA.
Besides the three event-relation types, the Event - Attribute and Event - Trigger types are also challenging for the extractive system, because the answers are usually long textual mentions of events or states that are not extractable from the passages.
Challenging types for reader: By checking the performance gaps between the generative system and the ranker, we can tell which types are difficult mainly for the reader. 11 The Event - Concept type poses more challenges to the reader: the ranker performs well on it, yet the overall QA performance is low. These questions are challenging mainly because current readers struggle to understand event structures, since their answers are usually extractable from the texts.

Table 7: Rouge-L scores under the NarrativeQA summary setting. We list the best published extractive model, BERT+Hard EM (Min et al., 2019), and the best generative model, Masque (Nishida et al., 2019), for reference.
Breakdown onto answer types: To better understand the challenges of non-extractable answers, we show the performance on each answer type in Table 6. The answers are mostly extractable when they are entities (including book-specific terms and numeric values). On these types the extractive system performs better, and the gap between the two systems is smaller than on the other types. In contrast, answers are less likely to be extractable from the original passages when they are events, states, or descriptions. An interesting observation is that the Common Noun Phrases type is also challenging for the extractive system. This indicates that such answers may not appear in the texts in their exact forms, so commonsense knowledge is required to connect their different mentions.
Quantifying the challenge of event-typed answers to the reader: Table 6 shows that the ranker performs poorly when the answers are events and descriptions. This raises the question of whether the relatively low QA performance is mainly due to the ranker's deficiency, or to deficiencies of both the ranker and the reader.
To answer this question, we conduct an experiment in the summary setting of NarrativeQA, to eliminate the effects of the ranker. We create a subset of questions with event-typed answers: a question is included if either of its two reference answers contains a verb. This procedure yields 2,796 and 8,248 QA pairs in the validation and test sets, respectively. We train a BART reader on all training data in the summary setting, and test on both the full evaluation data and our event-only subsets. Table 7 shows that the performance on the event-only subsets is about 12% lower. The results confirm that questions with event-typed answers are challenging for both the reader and the ranker.
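The verb-based subset construction above can be sketched as follows. This is a minimal sketch, not the paper's actual code: it assumes Penn Treebank-style POS tags (e.g., as produced by a tagger such as `nltk.pos_tag`), and the names `split_event_subset` and `tag_fn` are illustrative.

```python
def has_verb(tagged_tokens):
    # Penn Treebank verb tags all start with "VB" (VB, VBD, VBZ, ...).
    return any(tag.startswith("VB") for _, tag in tagged_tokens)

def split_event_subset(qa_pairs, tag_fn):
    """Put a QA pair into the event-only subset if either of its
    two reference answers contains a verb; keep the rest separate."""
    event_only, rest = [], []
    for question, answers in qa_pairs:
        if any(has_verb(tag_fn(ans)) for ans in answers):
            event_only.append((question, answers))
        else:
            rest.append((question, answers))
    return event_only, rest
```

The reader can then be evaluated on the full evaluation data and on `event_only` separately, which isolates the reader's difficulty with event-typed answers from the ranker's.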

Conclusion
We conduct a comprehensive analysis of the Book QA task, taking the representative NarrativeQA dataset as a testbed. First, we design Book QA techniques by drawing on cutting-edge open-domain QA research and demonstrate through extensive experiments that (1) evidence retrieval in Book QA is difficult even with state-of-the-art pre-trained LMs, owing to the rich writing styles, recurring book plots and characters, and the need for high-level story understanding; (2) our proposed approaches that adapt pre-trained LMs to books, especially the pre-reading technique for reader training, are consistently helpful.
Second, we perform a human study and find that (1) a majority of questions in Book QA require understanding and differentiating events and their relations; (2) existing pre-trained LMs are deficient at extracting the inter- and intra-structures of events in Book QA. These findings point us toward event understanding as a direction for future improvement on the Book QA task.

A Full Results on NarrativeQA

Table 8 gives full results with different metrics.

B Details of ICT Training Data Creation
Our pilot study shows that uniformly sampling sentences and their source passages as "pseudo-questions" (PQs) and "pseudo-evidence" (PEs) does not work well. The selected PQs have a high probability of being casual, e.g., "Today is sunny", and are thus unhelpful for ranker training.
To select useful PQs, we define the following measure $f(s, b_j)$ to gauge the affinity between each candidate sentence $s$ and the book $b_j$:
$$f(s, b_j) = \sum_{w_k \in s} \mathrm{pmi}(w_k, b_j),$$
where $\mathrm{pmi}(w_k, b_j)$ is the word-level mutual information between each word $w_k \in s$ and the book $b_j$. Intuitively, $\mathrm{pmi}(w_k, b_j)$ can be seen as the "predictiveness" of the word $w_k$ with respect to the book $b_j$, and $f(s, b_j)$ measures the aggregated "importance" of $s$. Consequently, the sentence $s$ with the highest $f(s, b_j)$ in each passage $p_n$ is selected as the PQ; the corresponding $p_n$, with the PQ removed, becomes the positive sample; and the negative samples from the same book $b_j$ are the top-500 passages (excluding the source passage $p_n$) with the highest TF-IDF similarity scores to the PQ. During sampling, we filter out stopwords and punctuation when computing $f(s, b_j)$. In movie scripts, instructive sentences like "SWITCH THE SCENARIO" that have poor connections to their source passages are also ignored. Finally, we require each PQ to contain a minimum of 3 non-stopwords.
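The selection procedure above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes $\mathrm{pmi}(w, b_j)$ is estimated as the log ratio between a word's in-book frequency and its corpus-wide frequency (the exact estimator is an assumption here), and the function names `pmi_table`, `affinity`, and `select_pq` are hypothetical.

```python
import math
from collections import Counter

def pmi_table(book_tokens, corpus_tokens):
    """pmi(w, b) ~ log( P(w | book) / P(w | corpus) ):
    how predictive word w is of this particular book."""
    book_counts = Counter(book_tokens)
    corpus_counts = Counter(corpus_tokens)
    n_b, n_c = len(book_tokens), len(corpus_tokens)
    return {w: math.log((c / n_b) / (corpus_counts[w] / n_c))
            for w, c in book_counts.items()}

def affinity(sentence_tokens, pmi, stopwords=frozenset()):
    """f(s, b): aggregated pmi over the sentence's non-stopword tokens."""
    return sum(pmi.get(w, 0.0) for w in sentence_tokens if w not in stopwords)

def select_pq(passage_sentences, pmi, stopwords=frozenset(), min_content=3):
    """Pick the highest-affinity sentence in a passage as the pseudo-question,
    requiring at least `min_content` non-stopword tokens."""
    candidates = [s for s in passage_sentences
                  if sum(w not in stopwords for w in s) >= min_content]
    if not candidates:
        return None
    return max(candidates, key=lambda s: affinity(s, pmi, stopwords))
```

A sentence rich in book-specific words (e.g., character names) receives a high affinity and is chosen over generic sentences like "Today is sunny", which score near zero.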