Diff-Explainer: Differentiable Convex Optimization for Explainable Multi-hop Inference

Abstract This paper presents Diff-Explainer, the first hybrid framework for explainable multi-hop inference that integrates explicit constraints with neural architectures through differentiable convex optimization. Specifically, Diff-Explainer allows for the fine-tuning of neural representations within a constrained optimization framework to answer and explain multi-hop questions in natural language. To demonstrate the efficacy of the hybrid framework, we combine existing ILP-based solvers for multi-hop Question Answering (QA) with Transformer-based representations. An extensive empirical evaluation on scientific and commonsense QA tasks demonstrates that the integration of explicit constraints in an end-to-end differentiable framework can significantly improve the performance of non-differentiable ILP solvers (8.91%–13.3%). Moreover, additional analysis reveals that Diff-Explainer is able to achieve strong performance when compared to standalone Transformers and previous multi-hop approaches while still providing structured explanations in support of its predictions.


Introduction
Explainable Question Answering (QA) in complex domains is often modelled as a multi-hop inference problem (Thayaparan et al., 2020; Valentino et al., 2021; Jansen et al., 2021). In this context, the goal is to answer a given question through the construction of an explanation, typically represented as a graph of multiple interconnected sentences supporting the answer (Figure 1) (Khashabi et al., 2018; Jansen, 2018; Kundu et al., 2019; Thayaparan et al., 2021).
[Figure 1: Example of a multi-hop QA problem with an explanation represented as a graph of multiple interconnected sentences supporting the answer (Xie et al., 2020; Jansen et al., 2018). The example shows the question "What is an example of force producing heat", the answer "Two sticks getting warm when rubbed together", and explanation sentences such as "friction is a force", "a stick is an object", "to rub together means to move against", "friction occurs when two objects' surfaces move against each other" and "friction causes the temperature of an object to increase".]

However, explainable QA models exhibit lower performance when compared to state-of-the-art approaches, which are generally represented by
Transformer-based architectures (Khashabi et al., 2020; Devlin et al., 2019; Khot et al., 2020). While Transformers are able to achieve high accuracy thanks to their ability to transfer linguistic and semantic information to downstream tasks, they are typically regarded as black-boxes (Liang et al., 2021), posing concerns about the interpretability and transparency of their predictions (Rudin, 2019; Guidotti et al., 2018).
To alleviate the aforementioned limitations and find a better trade-off between explainability and inference performance, this paper proposes Diff-Explainer (∂-Explainer), a novel hybrid framework for multi-hop and explainable QA that combines constraint satisfaction layers with pre-trained neural representations, enabling end-to-end differentiability.
Recent works have shown that certain convex optimization problems can be represented as individual layers in larger end-to-end differentiable networks (Agrawal et al., 2019b,a;Amos and Kolter, 2017), demonstrating that these layers can be adapted to encode constraints and dependencies between hidden states that are hard to capture via standard neural networks.
In this paper, we build upon this line of research, showing that convex optimization layers can be integrated with Transformers to improve explainability and robustness in multi-hop inference problems. To illustrate the impact of end-to-end differentiability, we integrate the constraints of existing ILP solvers (i.e., TupleILP (Khot et al., 2017) and ExplanationLP (Thayaparan et al., 2021)) into a hybrid framework. Specifically, we propose a methodology to transform existing constraints into differentiable convex optimization layers and subsequently integrate them with pre-trained sentence embeddings based on Transformers (Reimers et al., 2019).
To evaluate the proposed framework, we perform extensive experiments on complex multiple-choice QA tasks requiring scientific and commonsense reasoning (Clark et al., 2018; Xie et al., 2020). In summary, the contributions of the paper are as follows:

1. A novel differentiable framework for multi-hop inference that incorporates constraints via convex optimization layers into broader Transformer-based architectures.

2. An extensive empirical evaluation demonstrating that the proposed framework allows end-to-end differentiability on downstream QA tasks for both explanation and answer selection, leading to a substantial improvement when compared to non-differentiable constraint-based and transformer-based approaches.

3. We demonstrate that Diff-Explainer is more robust to distracting information in addressing multi-hop inference when compared to Transformer-based models.

Related Work
Constraint-Based Multi-hop QA Solvers: ILP has been employed to model structural and semantic constraints to perform multi-hop QA.
TableILP (Khashabi et al., 2016) is one of the earliest approaches to formulate the construction of explanations as an optimal sub-graph selection problem over a set of structured tables, evaluated on multiple-choice elementary science question answering. In contrast to TableILP, TupleILP (Khot et al., 2017) was able to perform inference over free-form text by building semi-structured representations using Open Information Extraction. SemanticILP (Khashabi et al., 2018) comes from the same family of solvers and leveraged different semantic abstractions, including semantic role labelling, named entity recognition and lexical chunkers, for inference. In contrast to previous approaches, Thayaparan et al. (2021) proposed the ExplanationLP model, which is optimized towards answer selection via Bayesian optimization. ExplanationLP was limited to fine-tuning only nine parameters, as it is intractable to fine-tune large models using Bayesian optimization.
Hybrid Reasoning with Transformers: A growing line of research focuses on adopting Transformers for interpretable reasoning over text (Clark et al., 2021; Gontier et al., 2020; Saha et al., 2020; Tafjord et al., 2021). For example, Saha et al. (2020) introduced PROVER, an interpretable transformer-based model that jointly answers binary questions over rules while generating the corresponding proofs. These models are related to the proposed framework in exploring hybrid architectures. However, they assume that the rules are fully available in the context and are still mostly applied to synthetically generated datasets. In this paper, we take a step forward in this direction, proposing a hybrid model for addressing scientific and commonsense QA, which requires the construction of complex explanations through multi-hop inference on external knowledge bases.

Differentiable Convex Optimization Layers:
Our work is in line with previous works that have attempted to incorporate optimization as a neural network layer. These works have introduced differentiable modules for quadratic problems (Donti et al., 2017; Amos and Kolter, 2017), satisfiability solvers (Wang et al., 2019) and submodular optimization (Djolonga and Krause, 2017; Tschiatschek et al., 2018). Recent works also offer differentiation through convex cone programs (Busseti et al., 2019; Agrawal et al., 2019a). In this work, we use the differentiable convex optimization layers proposed by Agrawal et al. (2019b). These layers provide a way to abstract away from the conic form, letting users define convex optimization problems in natural syntax. The defined convex optimization problem is converted by the layers into conic form to be solved by a conic solver (O'Donoghue, 2021).

Multi-hop Question Answering via Differentiable Convex Optimization
The problem of Explainable Multi-Hop Question Answering can be stated as follows:

Definition 3.1 (Explanations in Multi-Hop Question Answering). Given a question Q, an answer a and a knowledge base F_kb (composed of natural language sentences), we say that we may infer a hypothesis h (where h is the concatenation of Q with a) if there exists a subset F_exp = {f_1, f_2, ...} ⊆ F_kb of supporting facts which would allow a human being to deduce h from {f_1, f_2, ...}. We call this set of facts an explanation for h.
Given a question Q and a set of candidate answers C = {c_1, c_2, c_3, ..., c_n}, ILP-based approaches (Khashabi et al., 2016; Khot et al., 2017; Thayaparan et al., 2021) convert them into a list of hypotheses H = {h_1, h_2, h_3, ..., h_n} by concatenating the question with each candidate answer. For each hypothesis h_i, these approaches typically adopt a retrieval model (e.g., BM25, FAISS (Johnson et al., 2017)) to select a list of candidate explanatory facts F = {f_1, f_2, f_3, ..., f_k} and construct a weighted graph G = (V, E, W) with V = {h_i} ∪ F and edge weights W : E → R, where the weight W_ik of edge E_ik denotes how relevant a fact f_k is with respect to the hypothesis h_i.
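The hypothesis construction and graph-building steps above can be sketched as follows. This is a minimal illustration: the helper names and the toy token-overlap relevance function are assumptions, standing in for the BM25/FAISS retrieval scores used by the actual solvers.

```python
def make_hypotheses(question, candidates):
    """Concatenate the question with each candidate answer to form hypotheses."""
    return [f"{question} {c}" for c in candidates]


def toy_relevance(h, f):
    """Toy stand-in for a retrieval score: Jaccard overlap of lowercase tokens."""
    h_terms, f_terms = set(h.lower().split()), set(f.lower().split())
    return len(h_terms & f_terms) / len(h_terms | f_terms)


def build_edge_weights(hypothesis, facts, relevance):
    """Edge weight W[k] quantifies how relevant fact f_k is to the hypothesis."""
    return [relevance(hypothesis, f) for f in facts]


question = "What is an example of force producing heat"
candidates = ["Two sticks getting warm when rubbed together"]
hyps = make_hypotheses(question, candidates)
weights = build_edge_weights(
    hyps[0], ["friction is a force", "an apple is a fruit"], toy_relevance
)
```

In the full framework each hypothesis node is connected to its retrieved facts with these weights, yielding the graph G over which the subgraph selection is performed.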
Based on these definitions, ILP-based QA can be defined as follows: select the subgraph G* = (V*, E*) of G whose total edge weight is maximal and which adheres to a set of constraints M_c designed to emulate multi-hop inference. The hypothesis h_i with the highest subgraph weight W[G* = (V*, E*)] is selected as the correct answer c_ans.
ILP-based inference faces two main challenges in producing convincing explanations: first, designing edge weights W that capture a quantification of the relevance of each fact to the hypothesis; second, defining constraints that emulate the multi-hop inference process.

Limitations with Existing ILP formulations
In previous work, the construction of the graph G requires predetermined edge-weights based on lexical overlaps (Khot et al., 2017) or semantic similarity using sentence embeddings (Thayaparan et al., 2021), on top of which combinatorial optimization strategies are performed separately.
Among those approaches, ExplanationLP (Thayaparan et al., 2021) is the only one that modifies the graph weight function, optimizing the weight parameters θ for inference via Bayesian optimization over pre-trained embeddings.
In contrast, we posit that learning the graph weights dynamically, by fine-tuning the underlying neural embeddings towards answer and explanation selection, will lead to more accurate and robust performance. To this end, the constraint optimization strategy should be differentiable and efficient. However, Integer Linear Programming approaches present two critical shortcomings that prevent achieving this goal:

1. The Integer Linear Programming formulation operates with discrete inputs/outputs, resulting in non-differentiability (Paulus et al., 2021). Consequently, it cannot be integrated with deep neural networks and trained end-to-end. Making ILP differentiable requires non-trivial assumptions and approximations (Paulus et al., 2021).

2. Integer Programming is known to be NP-complete, with the special case of 0-1 integer linear programming being one of Karp's 21 NP-complete problems (Karp, 1972). Therefore, as the size of the combinatorial optimization problem increases, finding exact solutions becomes computationally intractable. This intractability is a strong limitation for multi-hop QA in general, since these systems typically operate on large knowledge bases and corpora.
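To make the intractability concrete, an exact solver for the underlying 0-1 quadratic selection problem has no better generic strategy than enumerating all 2^n node subsets. The sketch below is purely illustrative and the weight matrix is invented; it is not the paper's solver.

```python
from itertools import product


def brute_force_01(W):
    """Exact solution of max y^T W y over y in {0,1}^n by full enumeration.

    The 2^n loop is what makes exact ILP-style inference intractable as the
    number of candidate facts grows.
    """
    n = len(W)
    best_y, best_val = None, float("-inf")
    for y in product((0, 1), repeat=n):
        val = sum(W[i][j] * y[i] * y[j] for i in range(n) for j in range(n))
        if val > best_val:
            best_y, best_val = y, val
    return best_y, best_val


# Tiny invented weight matrix: nodes 0 and 1 reinforce each other,
# node 2 conflicts with node 0.
W = [[0, 2, -1],
     [2, 0, 0],
     [-1, 0, 1]]
y, v = brute_force_01(W)
```

For n = 3 this enumerates 8 subsets; for a realistic candidate set of 100 facts it would need 2^100 evaluations, which motivates the convex relaxation introduced next.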

Subgraph Selection via Semi-Definite Programming
Differentiable convex optimization (DCX) layers (Agrawal et al., 2019b) provide a way to encode constraints as part of a deep neural network. However, an ILP formulation is non-convex (Wolsey, 2020; Schrijver, 1998), so it cannot be integrated with a DCX layer directly; we instead approximate the ILP with convex optimization constraints.

[Figure 2A: retrieved facts F1: "a stick is an object", F2: "friction is a force", F4: "to rub together means to move against", F6: "friction occurs when two objects' surfaces move against each other", F7: "friction causes the temperature of an object to increase", plus the distractors F3: "an apple is a fruit", F5: "falling occurs due to gravity", F8: "magnetic attraction pulls two objects together"; candidate hypotheses H1: "Two sticks getting warm when rubbed together is an example of force producing heat" (correct), H2: "An apple falling from a tree branch is an example of force producing heat", H3: "A wagon rolling across a yard when pulled is an example of force producing heat"; together with the multi-hop inference constraints.]
To alleviate this problem, we turn to Semi-Definite Programming (SDP) (Vandenberghe and Boyd, 1996). SDP is non-linear but convex, and has been shown to efficiently approximate combinatorial problems.
A semi-definite optimization is a convex optimization problem of the form:

minimize    tr(C X)
subject to  tr(A_i X) = b_i,  i = 1, ..., p
            X ⪰ 0

Here X ∈ S^n is the optimization variable, C, A_1, ..., A_p ∈ S^n, and b_1, ..., b_p ∈ R. X ⪰ 0 is a matrix inequality, and S^n denotes the set of n × n symmetric matrices.
SDP is often used as a convex approximation of traditional NP-hard combinatorial graph optimization problems, such as the max-cut problem, the dense k-subgraph problem and the quadratic {0 − 1} programming problem (Lovász and Schrijver, 1991).
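As a concrete instance of such relaxations, the classical Goemans-Williamson max-cut relaxation can be written as follows. This is a standard textbook formulation included for illustration only, not the program used by Diff-Explainer: the quadratic objective over y_i ∈ {-1, +1} becomes linear in X = y yᵀ, and dropping the rank-one constraint leaves a convex SDP.

```latex
\begin{aligned}
\max_{X \in S^n} \quad & \tfrac{1}{4} \sum_{i,j} W_{ij}\,(1 - X_{ij}) \\
\text{s.t.} \quad & X_{ii} = 1, \quad i = 1, \dots, n, \\
& X \succeq 0
\end{aligned}
```

The subgraph-selection relaxation adopted in this work follows the same lifting idea, applied to the {0, 1} quadratic program.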
Specifically, we adopt the semi-definite relaxation of the following quadratic {0, 1} problem:

maximize    y^T W y
subject to  y ∈ {0, 1}^n

Here W is the edge weight matrix of the graph G, and the optimal solution ŷ of this problem indicates whether a node is part of the induced subgraph G*.
We follow Helmberg (2000) in the reformulation and relaxation of this problem. Instead of vectors y ∈ {0, 1}^n, we optimize over the set of positive semidefinite matrices satisfying the SDP constraint in the following relaxed convex optimization problem:

maximize    ⟨W, Y⟩
subject to  diag(Y) = y,
            ( 1   y^T )
            ( y    Y  )  ⪰ 0

The optimal solution Ê ∈ [0, 1] for Y in this problem indicates whether an edge is part of the subgraph G*. In addition to the semi-definite constraints, we also impose the multi-hop inference constraints M_c. These constraints are introduced in Section 3.4 and the Appendix.
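A quick numerical sanity check of the relaxation (illustrative only, not part of the paper's pipeline): at any integral point y ∈ {0,1}^n, the lifted matrix built from y and Y = y yᵀ is positive semidefinite, so the relaxed feasible set contains every discrete solution of the original problem.

```python
import numpy as np


def lifted(y):
    """Build the (n+1)x(n+1) lifted matrix [[1, y^T], [y, y y^T]]."""
    y = np.asarray(y, dtype=float)
    Y = np.outer(y, y)                       # rank-one Y = y y^T
    top = np.concatenate(([1.0], y))         # first row: [1, y^T]
    bottom = np.hstack([y[:, None], Y])      # remaining rows: [y, Y]
    return np.vstack([top, bottom])


M = lifted([1, 0, 1])
eigs = np.linalg.eigvalsh(M)  # all eigenvalues should be >= 0
```

Since M = v vᵀ with v = (1, y), it is rank one with a single positive eigenvalue ‖v‖², confirming feasibility of integral points under the PSD constraint.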
This reformulation provides the tightest approximation of the optimization with convex constraints. Since this formulation is convex, we can now integrate it with differentiable convex optimization layers. Moreover, the semi-definite program relaxation can be solved by adopting the interior-point method (De Klerk, 2006; Vandenberghe and Boyd, 1996), which has been proven to run in polynomial time (Karmarkar, 1984). To the best of our knowledge, we are the first to employ SDP to solve a natural language processing task.

Diff-Explainer: End-to-End Differentiable Architecture
Diff-Explainer is an end-to-end differentiable architecture that simultaneously solves the constraint optimization problem and dynamically adjusts the graph edge weights for better performance. We adopt differentiable convex optimization for the optimal subgraph selection problem.
The complete architecture and setup are described in the subsequent subsections and Figure 2. We transform a multiple-choice question answering dataset into a multi-hop QA dataset by converting each example's question (q) and set of candidate answers into hypotheses (Figure 2A), using the approach proposed by Demszky et al. (2018). To build the initial graph, for the hypothesis set H we adopt a retrieval model to select a list of candidate explanatory facts, where the weight W_ik of each edge E_ik denotes how relevant a fact f_k is with respect to the hypothesis h_i. Departing from traditional ILP approaches (Thayaparan et al., 2021; Khashabi et al., 2018, 2016), the aim is to select the correct answer c_ans and the relevant explanations F_exp with a single graph.
In order to demonstrate the impact of Diff-Explainer, we reproduce the formalization introduced by previous ILP solvers. Specifically, we approximate the two following solvers:

• TupleILP (Khot et al., 2017): TupleILP constructs a semi-structured knowledge base using tuples extracted via Open Information Extraction (OIE) and performs inference over them. For example, in Figure 2A, F1 will be decomposed into (a stick; is a; object) and the subject (a stick) will be connected to the hypothesis to enforce constraints and build the subgraph.

• ExplanationLP (Thayaparan et al., 2021): ExplanationLP classifies facts into abstract and grounding facts. Abstract facts are core scientific statements. Grounding facts help connect the generic terms in the abstract facts to the terms in the hypothesis. For example, in Figure 2A, F1 is a grounding fact and helps to connect the hypothesis with the abstract fact F7.
The approach aims to emulate abstract reasoning.
Further details of these approaches can be found in the Appendix.
To demonstrate the impact of integrating a convex optimization layer into a broader end-to-end neural architecture, Diff-Explainer employs a transformer-based sentence embedding model. Figure 2B describes the end-to-end architectural diagram of Diff-Explainer. Specifically, we incorporate a differentiable convex optimization layer with Sentence-Transformers (STrans) (Reimers et al., 2019), which has demonstrated state-of-the-art performance on semantic sentence similarity benchmarks.
STrans is adopted to estimate the relevance between hypotheses and facts during the construction of the initial graph. We use STrans as a bi-encoder architecture to minimize the computational overhead when operating on a large number of sentences. The semantic relevance score from STrans is complemented with a lexical relevance score computed from the terms shared between hypotheses and facts. We calculate semantic and lexical relevance as follows:

Semantic Relevance (s): Given a hypothesis h_i and a fact f_j, we compute the sentence vectors u_i = STrans(h_i) and v_j = STrans(f_j) and calculate the semantic relevance score using cosine similarity:

s(h_i, f_j) = (u_i · v_j) / (‖u_i‖ ‖v_j‖)

Lexical Relevance (l): The lexical relevance score of hypothesis h_i and fact f_j is given by the percentage of overlap between their unique terms (here, the function trm extracts the lemmatized set of unique terms from the given text).

Given the above scoring functions, we construct the edge weight matrix W as a weighted combination of the relevance scores: each relevance score is weighted by a learnable parameter θ_k, and an entry W_ik takes the corresponding weighted score if (h_i, f_k) satisfies condition D_k, and 0 otherwise. TupleILP has two weights, one each for lexical and semantic relevance. Meanwhile, ExplanationLP has nine weights based on the type of fact and relevance type. Additional details on how to calculate W for each approach can be found in the Appendix.
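The two scoring functions can be sketched as follows. The cosine score matches the definition above; the Jaccard-style lexical overlap and whitespace tokenization are simplifying assumptions in place of lemmatized term extraction, and the 3-d embeddings are invented.

```python
import numpy as np


def semantic_relevance(h_vec, f_vec):
    """Cosine similarity between hypothesis and fact sentence embeddings."""
    return float(
        np.dot(h_vec, f_vec) / (np.linalg.norm(h_vec) * np.linalg.norm(f_vec))
    )


def lexical_relevance(h_text, f_text):
    """Overlap of unique terms (Jaccard here; the paper uses lemmatized terms)."""
    h_terms, f_terms = set(h_text.lower().split()), set(f_text.lower().split())
    return len(h_terms & f_terms) / len(h_terms | f_terms)


h = np.array([1.0, 0.0, 1.0])  # invented STrans(h_i)
f = np.array([1.0, 1.0, 0.0])  # invented STrans(f_j)
s = semantic_relevance(h, f)
l = lexical_relevance("friction is a force", "friction causes heat")
```

In the full model these scores are combined per edge with the learnable θ weights to form the matrix W fed into the DCX layer.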

Answer and Explanation Selection
Given the edge variable Y and node variable y (diag(Y) = y; see Section 3.2), where 1 means the edge/node is part of the subgraph and 0 otherwise, the answer selection constraint requires that exactly one hypothesis node be part of the optimal subgraph. Each entry in the diagonal represents a value between 0 and 1, indicating whether the corresponding node in the initial graph should be included in the optimal subgraph. Explanation selection is done via a constraint that limits the number of fact nodes in the subgraph to m.
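As an illustration, the two functional constraints amount to the following check on a rounded (discrete) solution. In Diff-Explainer itself the constraints are imposed on the continuous SDP variables inside the DCX layer, not applied after rounding; this sketch only validates a discrete subgraph.

```python
def valid_subgraph(hyp_nodes, fact_nodes, m):
    """Answer selection: exactly one hypothesis node in the subgraph.
    Explanation selection: at most m fact nodes in the subgraph.

    hyp_nodes / fact_nodes are 0/1 indicators over the graph's nodes.
    """
    return sum(hyp_nodes) == 1 and sum(fact_nodes) <= m


# One hypothesis and two facts selected: satisfies both constraints for m=2.
ok = valid_subgraph([0, 1, 0], [1, 1, 0, 0], m=2)
# Two hypotheses and three facts selected: violates both constraints.
bad = valid_subgraph([1, 1, 0], [1, 1, 1, 0], m=2)
```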
Apart from these functional constraints, ILP-based methods also impose semantic and structural constraints. For instance, ExplanationLP places explicit grounding-abstract fact chain constraints to perform efficient abstractive reasoning, and TupleILP enforces constraints that leverage the SPO structure to align and select facts. See the Appendix for how these constraints are designed and imposed within Diff-Explainer.
The DCX layer returns the solved edge adjacency matrix Ê with values between 0 and 1. We interpret the diagonal values of Ê as the probability of the corresponding node being part of the selected subgraph. The final step is to optimize the sum of the L1 loss l_1 between the selected answer and the correct answer c_ans for the answer loss L_ans, and the binary cross-entropy loss l_b between the selected explanatory facts and the true explanatory facts F_exp for the explanation loss L_exp. We add the two losses and backpropagate to learn the θ weights and fine-tune the sentence transformers. The pseudo-code to train Diff-Explainer end-to-end is summarized in Algorithm 1.
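A minimal sketch of the training objective, assuming the DCX layer's diagonal yields node probabilities. The exact reductions (sum for the L1 term, mean for the cross-entropy term) and the toy values are assumptions, not the paper's implementation.

```python
import numpy as np


def answer_loss(pred_ans, gold_ans):
    """L1 loss between predicted answer-node probabilities and the gold one-hot."""
    return float(np.abs(np.asarray(pred_ans) - np.asarray(gold_ans)).sum())


def explanation_loss(pred_exp, gold_exp, eps=1e-9):
    """Binary cross-entropy between fact-node probabilities and gold labels."""
    p = np.clip(np.asarray(pred_exp, dtype=float), eps, 1 - eps)
    g = np.asarray(gold_exp, dtype=float)
    return float(-(g * np.log(p) + (1 - g) * np.log(1 - p)).mean())


la = answer_loss([0.9, 0.1], [1, 0])
le = explanation_loss([0.8, 0.2], [1, 0])
total = la + le  # the two losses are summed before backpropagation
```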

Empirical Evaluation
Question Sets: We use the following multiple-choice question sets to evaluate Diff-Explainer.
(1) WorldTree Corpus (Xie et al., 2020): The 4,400 questions and explanations in the WorldTree corpus are split into three subsets: train-set, dev-set and test-set. We use the dev-set to assess explainability performance, since the explanations for the test-set are not publicly available.
(2) ARC-Challenge Corpus (Clark et al., 2018): ARC-Challenge is a multiple-choice question dataset consisting of questions from science exams from grade 3 to grade 9. These questions have proven challenging to answer for both ILP-based question answering and neural approaches.
Experimental Setup: We use the all-mpnet-base-v2 model as the Sentence Transformer for sentence representation in Diff-Explainer. The motivation for choosing this model is that it is pretrained on natural language inference, and MPNet Base (Song et al., 2020) is smaller compared to large models like BERT-Large, enabling us to encode a larger number of facts. Similarly, for fact retrieval representation, we use all-mpnet-base-v2 trained with the gold explanations of the WorldTree corpus, achieving a Mean Average Precision of 40.11 on the dev-set. We cache all the facts from the background knowledge base and retrieve the top k facts using MIPS retrieval (Johnson et al., 2017). We follow a setting similar to that proposed by Thayaparan et al. (2021) for the background knowledge base, combining over 5,000 abstract facts from the WorldTree table store (WTree) and over 100,000 is-a grounding facts from ConceptNet (CNet) (Speer et al., 2016). Furthermore, we set m=2 in line with the previous configurations of TupleILP and ExplanationLP [2].
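The cached MIPS retrieval step can be illustrated with a plain inner-product top-k. FAISS performs this at scale over the full fact cache; numpy argsort suffices here, and the 2-d embeddings below are invented.

```python
import numpy as np


def top_k_facts(hyp_vec, fact_matrix, k):
    """Return indices of the k facts with the largest inner product
    (maximum inner product search) against the hypothesis embedding."""
    scores = fact_matrix @ hyp_vec
    return np.argsort(-scores)[:k]


# Invented cached fact embeddings (one row per fact).
facts = np.array([[1.0, 0.0],
                  [0.5, 0.5],
                  [0.0, 1.0]])
idx = top_k_facts(np.array([1.0, 0.2]), facts, k=2)
```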
Baselines: In order to assess the complexity of the task and the potential benefits of the convex optimization layers presented in our approach, we show the results for different baselines.We run all models with k = {1, . . ., 10, 25, 50, 75, 100} to find the optimal setting for each baseline and perform a fair comparison.For each question, the baselines take as input a set of hypotheses, where each hypothesis is associated with k facts, ranked according to the fact retrieval model.
(1) IR Solver (Clark et al., 2018): This approach attempts to answer each question by summing the retrieval scores of all k facts associated with each hypothesis. The retrieval scores are calculated using the cosine similarity of fact and hypothesis sentence vectors obtained from the STrans model trained on gold explanations. The hypothesis with the highest accumulated score is selected as the one containing the correct answer.
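The IR Solver baseline reduces to a sum-and-argmax over retrieval scores. The scores below are invented for illustration.

```python
def ir_solver(hypothesis_scores):
    """hypothesis_scores: one list of k fact retrieval scores per hypothesis.
    Returns the index of the hypothesis with the highest accumulated score."""
    totals = [sum(scores) for scores in hypothesis_scores]
    return max(range(len(totals)), key=totals.__getitem__)


# Three hypotheses, each with k=2 invented retrieval scores.
ans = ir_solver([[0.2, 0.1], [0.5, 0.4], [0.3, 0.1]])
```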
(2) BERT-Base and BERT-Large (Devlin et al., 2019): To use BERT for this task, we concatenate every hypothesis with its k retrieved facts, using the separator token [SEP]. We use the HuggingFace (Wolf et al., 2019) implementation of BertForSequenceClassification, taking the prediction with the highest probability for the positive class as the correct answer [3].
(3) PathNet (Kundu et al., 2019): PathNet is a graph-based neural approach that constructs a single linear path composed of two facts connected via entity pairs for reasoning. It uses the constructed paths as evidence of its reasoning process. It has exhibited strong performance for multiple-choice science questions.

[2] We fine-tune Diff-Explainer using a learning rate of 1e-5, 14 epochs, and a batch size of 8.
[3] We fine-tune both versions of BERT using a learning rate of 1e-5, 10 epochs, and a batch size of 16 for Base and 8 for Large.

(4) TupleILP and ExplanationLP: Both replications of the non-differentiable solvers are implemented with the same constraints as Diff-Explainer via SDP approximation, but without end-to-end fine-tuning; instead, we fine-tune the θ parameters using Bayesian optimization with frozen STrans representations. This baseline helps us understand the impact of end-to-end fine-tuning.

Answer Selection
WorldTree Corpus: Table 1 presents the answer selection performance on the WorldTree corpus in terms of accuracy, reporting the best results obtained for each model after testing different values of k. We also include the results for BERT without explanations in order to evaluate the influence extra facts can have on the final score, and we report results for two different training goals: optimizing only for answer selection, and optimizing jointly for answer and explanation selection.
We draw the following conclusions from the empirical results obtained on the WorldTree corpus (the performance increases here are expressed in absolute terms): (1) Diff-Explainer with ExplanationLP and TupleILP outperforms the respective non-differentiable solvers by 13.3% and 8.91%. This increase in performance indicates that Diff-Explainer can incorporate different types of constraints and significantly improve performance compared with the non-differentiable version.
(2) It is evident from the performance obtained by a large model such as BERT-Large (59.32%) that we are dealing with a non-trivial task. The best Diff-Explainer setting (with ExplanationLP) outperforms the best transformer-based models with and without explanations by 12.16% and 21.85%. Additionally, with both TupleILP and ExplanationLP we obtain better scores than the transformer-based configurations.
(3) Fine-tuning with explanations yielded better performance than fine-tuning for answer selection alone, improving ExplanationLP and TupleILP by 1.75% and 1.98% respectively. This increase indicates that Diff-Explainer can learn from the distant supervision of answer selection and improve further in a strong supervision setting. (4) Overall, we can conclude that incorporating constraints using differentiable convex optimization with transformers for multi-hop QA leads to better performance than pure constraint-based or transformer-only approaches.
ARC Corpus: Table 2 presents a comparison of the baselines and our approach with different background knowledge bases: TupleInf, the same as used by TupleILP (Khot et al., 2017), and WorldTree & ConceptNet as used by ExplanationLP (Thayaparan et al., 2021). We also report the original scores published by the respective approaches.
For this dataset, we use our approach with the same settings as the model applied to WorldTree, and fine-tune for answer selection only, since ARC does not have gold explanations. Models employing Large Language Models (LLMs) trained across multiple question answering datasets, like UnifiedQA (Khashabi et al., 2020) and AristoBERT (Xu et al., 2021), have demonstrated strong performance on ARC, with accuracies of 81.14 and 68.95 respectively.
To ensure a fair comparison, in Table 3 we only compare the best configuration of Diff-Explainer with other approaches that have been trained only on the ARC corpus and provide some form of explanation. Here the explainability column indicates whether the model delivers an explanation for the predicted answer. A subset of the approaches produces evidence for the answer but remains intrinsically black-box; these models are marked as Partial.
(1) Diff-Explainer improves the performance of non-differentiable solvers regardless of the background knowledge and constraints.With the same background knowledge, our model improves the original TupleILP and ExplanationLP by 10.12% and 2.74%, respectively.
(2) Our approach also achieves the highest performance for partially and fully explainable approaches trained only on ARC corpus.
(3) As illustrated in Table 3, we outperform the next best fully explainable baseline (ExplanationLP) by 2.74%. We also outperform the state-of-the-art model AutoRocc (Yadav et al., 2019b), which uses BERT-Large and is only trained on the ARC corpus, by 1.71% with 230 million fewer parameters.
(4) Overall, we achieve consistent performance improvements across different knowledge bases (TupleInf, WorldTree & ConceptNet) and question sets (ARC, WorldTree), indicating the robustness of the approach.

Explanation Selection
Table 4 shows the Precision@K scores for explanation retrieval for PathNet, ExplanationLP/TupleILP, and Diff-Explainer with ExplanationLP/TupleILP trained on answer and explanation selection. We choose Precision@K as the evaluation metric because these approaches are not designed to construct full explanations, but rather to take the top k=2 explanations and select the answer.
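Precision@K as used here can be computed as follows; the fact identifiers are invented for illustration.

```python
def precision_at_k(predicted, gold, k):
    """Fraction of the top-k predicted explanatory facts that are gold facts."""
    gold = set(gold)
    top = predicted[:k]
    return sum(1 for fact in top if fact in gold) / k


# Both of the top k=2 predicted facts are gold -> precision 1.0.
p = precision_at_k(["f7", "f4", "f3"], {"f7", "f6", "f4"}, k=2)
```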
As evident from the table, our approach significantly outperforms PathNet. We also improve the explanation selection performance over the non-differentiable solvers, indicating that end-to-end fine-tuning also helps improve the selection of explanatory facts.

Answer Selection with Increasing Distractors
As noted in previous works (Yadav et al., 2019b, 2020), answer selection performance can decrease for Transformers as the number of used facts k increases. Figure 3 presents how our approach compares with transformer-based approaches in this respect.
As we can see, the IR Solver decreases in performance as we add more facts, while the scores for transformer-based models start deteriorating for k > 5. Such results might seem counter-intuitive, since it would be natural to expect a model's performance to increase as we add supporting facts. In practice, however, adding more facts also adds distractors that such models may not be able to filter out.
We can see this most prominently for BERT-Large, with a sudden drop in performance at k = 10, going from 56.61 to 30.26. Such a drop is likely caused by substantial overfitting: with the added noise, the model partially loses its ability to generalize. A softer version of this phenomenon is also observed for BERT-Base.
In contrast, our model's performance increases as we add more facts, reaching a stable point around k = 50. Such performance stems from our combination of overlap and relevance scores along with the structural and semantic constraints. The obtained results highlight our model's robustness to distracting knowledge, allowing its use in data-rich scenarios where one needs to draw facts from extensive knowledge bases. PathNet also exhibits robustness to increasing distractors, but we consistently outperform it across all k configurations.
On the other hand, for smaller values of k our model is outperformed by transformer-based approaches, hinting that our model is more suitable for scenarios involving large knowledge bases such as the one presented in this work.
Table 5: Qualitative examples.

Question (1): Fanning can make a wood fire burn hotter because the fanning:
Correct Answer: adds more oxygen needed for burning.
PathNet Answer: provides the energy needed to keep the fire going. Explanations: (i) fanning a fire increases the oxygen near the fire, (ii) placing a heavy blanket over a fire can be used to keep oxygen from reaching a fire.
ExplanationLP Answer: increases the amount of wood there is to burn. Explanations: (i) more burning causes fire to be hotter, (ii) wood burns.
Diff-Explainer (ExplanationLP) Answer: adds more oxygen needed for burning. Explanations: (i) more burning causes fire to be hotter, (ii) fanning a fire increases the oxygen near the fire.

Question (2): Which type of graph would best display the changes in temperature over a 24 hour period?
Correct Answer: line graph.
PathNet Answer: circle/pie graph. Explanations: (i) a line graph is used for showing change; data over time.
ExplanationLP Answer: circle/pie graph. Explanations: (i) 1 day is equal to 24 hours, (ii) a circle graph; pie graph can be used to display percents; ratios.
Diff-Explainer (ExplanationLP) Answer: line graph. Explanations: (i) a line graph is used for showing change; data over time, (ii) 1 day is equal to 24 hours.

Question (3): Why has only one-half of the Moon ever been observed from Earth?
Correct Answer: The Moon rotates at the same rate that it revolves around Earth.
PathNet Answer: The Moon has phases that coincide with its rate of rotation. Explanations: (i) the moon revolving around; orbiting the Earth causes the phases of the moon, (ii) a new moon occurs 14 days after a full moon.
ExplanationLP Answer: The Moon does not rotate on its axis. Explanations: (i) the moon rotates on its axis, (ii) the dark half of the moon is not visible.
Diff-Explainer (ExplanationLP) Answer: The Moon is not visible during the day. Explanations: (i) the dark half of the moon is not visible, (ii) a complete revolution; orbit of the moon around the Earth takes 1; one month.

Qualitative Analysis
We selected qualitative examples that showcase how end-to-end fine-tuning can improve the quality of inference and present them in Table 5. We use ExplanationLP as the non-differentiable solver and Diff-Explainer (ExplanationLP), as they yield the highest performance in answer and explanation selection.
For Question (1), Diff-Explainer retrieves both explanations correctly and answers correctly. PathNet and ExplanationLP both retrieved at least one correct explanation but performed incorrect inference. We hypothesise that these two approaches were distracted by the lexical overlaps between question/answer and facts, while our approach is robust towards distractor terms. In Question (2), our model was only able to retrieve one explanation correctly and was distracted by lexical overlap into retrieving an irrelevant one. However, it was still able to answer correctly. In Question (3), all the approaches, including ours, answered the question incorrectly. Even though our approach retrieved at least one correct explanation, it was not able to combine the information to answer and was distracted by lexical noise. These shortcomings indicate that more work can be done, and that different constraints for combining facts can be explored.

Conclusion
We presented a novel framework for encoding explicit and controllable assumptions as an end-to-end learning framework for question answering. We empirically demonstrated how incorporating these constraints in broader Transformer-based architectures can improve answer and explanation selection. The presented framework adopts constraints from TupleILP and ExplanationLP, but Diff-Explainer can be extended to encode different constraints with varying degrees of complexity.
This approach can also be extended to handle other forms of multi-hop QA, including open-domain, cloze-style and answer generation. ILP has also been employed for relation extraction (Roth and Yih, 2004; Choi et al., 2006; Chen et al., 2014), semantic role labeling (Punyakanok et al., 2004; Koomen et al., 2005), sentiment analysis (Choi and Cardie, 2009) and explanation regeneration (Gupta and Srinivasaraghavan, 2020). The constraints presented in this approach can be adapted and improved to build explainable models for these tasks.
To the best of our knowledge, Diff-Explainer is the first work investigating the intersection of explicit constraints and latent neural representations. We hope this work will open the way for future lines of research on neuro-symbolic models, leading to more controllable, transparent and explainable NLP models.

Model Description
This section presents a detailed explanation of TupleILP and ExplanationLP.
TupleILP TupleILP uses Subject-Predicate-Object (SPO) tuples to align and construct the explanation graph. As shown in Figure 4C, the tuple graph is constructed and lexical overlaps are aligned to select the explanatory facts. The constraints are designed based on the position of the text within the tuple.
ExplanationLP Given hypothesis H1 from Figure 4A, the underlying concept the hypothesis attempts to test is the understanding of friction. Different ILP approaches would build the explanation graph differently. For example, ExplanationLP (Thayaparan et al., 2021) would classify the core scientific facts (F6-F8) as abstract facts, and the linking facts (F1-F5), which connect generic or abstract terms in the hypothesis, as grounding facts. The constraints are designed to emulate abstraction by moving from the concrete statement towards more abstract concepts via the grounding facts, as shown in Figure 4B.
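Both solvers ultimately rely on lexical overlap between hypotheses and facts to propose candidate edges of the explanation graph. The following is a minimal sketch of that alignment step (not the paper's implementation; the stopword list and regex tokenisation are simplified placeholders for illustration):

```python
import re

# Simplified stopword list -- a placeholder for a proper stopword resource.
STOPWORDS = {"a", "an", "the", "is", "of", "to", "when", "together"}

def terms(sentence):
    """Unique non-stopword terms of a sentence."""
    return set(re.findall(r"[a-z]+", sentence.lower())) - STOPWORDS

def lexical_overlap(s1, s2):
    """Shared non-stopword terms; a non-empty set suggests a candidate edge."""
    return terms(s1) & terms(s2)

hypothesis = ("Two sticks getting warm when rubbed together "
              "is an example of force producing heat")
fact = "friction is a force"
print(lexical_overlap(hypothesis, fact))  # → {'force'}
```

In the full framework, such overlaps determine which hypothesis-fact and fact-fact pairs are connected in the graph over which the constraints operate.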

Objective Function
In this section, we explain how the objective functions for TupleILP and ExplanationLP are designed for adoption within Diff-Explainer.
Given n candidate hypotheses and k candidate explanatory facts, A represents an adjacency matrix of dimension ((n + k) × (n + k)), where the first n rows and columns denote the candidate hypotheses, while the remaining rows and columns represent the candidate explanatory facts. The adjacency matrix denotes the graph's lexical connections between hypotheses and facts. Specifically, each entry A_ij contains the following values:
Given the relevance scoring functions, we construct the edge weights matrix W via a weighted function for each approach as follows:
TupleILP The weight function for Diff-Explainer with TupleILP constraints is:
ExplanationLP Given the Abstract KB (F_A) and Grounding KB (F_G), the weight function for Diff-Explainer with ExplanationLP constraints is as follows:
6.3 Constraints with Disciplined Parameterized Programming (DPP)
In order to adopt differentiable convex optimization layers, the constraints should be defined following the Disciplined Parameterized Programming (DPP) formalism (Agrawal et al., 2019b), which provides a set of conventions for constructing convex optimization problems. DPP consists of functions (or atoms) with a known curvature (affine, convex or concave) and per-argument monotonicities. In addition, DPP introduces Parameters, which are symbolic constants with unknown numerical values that are assigned during the solver run.
TupleILP We extract SPO tuples f_i^t = {f_i^S, f_i^P, f_i^O} for each fact f_i using an Open Information Extraction model (Stanovsky et al., 2018). From the hypothesis h_i we extract the set of unique terms t^{h_i} = {t_1^{h_i}, t_2^{h_i}, t_3^{h_i}, ..., t_l^{h_i}}, excluding stopwords.
In addition to the aforementioned constraints and the semidefinite constraints specified in Equation 7, we adopt part of the constraints from TupleILP (Khot et al., 2017). In order to implement the TupleILP constraints, we extract SPO tuples f_i^t = {f_i^S, f_i^P, f_i^O} for each fact f_i using an Open Information Extraction model (Stanovsky et al., 2018). From the hypotheses H we also extract the set of unique terms H_t = {t_1, t_2, t_3, ..., t_l}, excluding stopwords. The constraints are described in Table 6.
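To illustrate the inputs these constraints operate on, here is a toy sketch (illustrative only: the paper extracts tuples with the Open IE model of Stanovsky et al. (2018), whereas the tuples below are written by hand, and the stopword list is a placeholder):

```python
# Placeholder stopword list for illustration.
STOPWORDS = {"a", "an", "the", "is", "of", "to", "when"}

def unique_terms(hypotheses):
    """H_t: the set of unique non-stopword terms across the hypotheses."""
    terms = set()
    for h in hypotheses:
        terms |= set(h.lower().split()) - STOPWORDS
    return terms

# Hand-written SPO tuples f_i^t = (subject, predicate, object), one per fact.
fact_tuples = [
    ("friction", "is", "force"),
    ("friction", "causes", "temperature increase"),
]

H = ["friction is a force"]
H_t = unique_terms(H)
print(H_t)  # → {'friction', 'force'}
```

The TupleILP-style constraints then compare the terms in H_t against the subject, predicate and object slots of each fact tuple.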
ExplanationLP ExplanationLP constraints are described in Table 6.
(Figure 4A, hypotheses: H1 [✓] Two sticks getting warm when rubbed together is an example of force producing heat; H2 [✕] An apple falling from a tree branch is an example of force producing heat; H3 [✕] A wagon rolling across a yard when pulled is an example of force producing heat)
Active tuple must have an active subject:
Y T_Sθ ≥ E A_θ (19)
where A_θ is populated by the adjacency matrix A, and T_Sθ by the subject tuple matrix T_S with dimension ((n + k) × (n + k)), whose values are given by:

Figure 2: Overview of our approach, illustrating the end-to-end architectural diagram of Diff-Explainer for the provided example.

Algorithm 1: Training Diff-Explainer
Data: M_c ← Multi-hop inference constraints
Data: Ans_c ← Answer selection constraint
Data: Exp_c ← Explanation selection constraint
Data:

Figure 3: Comparison of accuracy for different numbers of retrieved facts.
(Figure 4A, facts: F1 [✓] a stick is an object; F2 [✓] friction is a force; F3 [✕] an apple is a fruit; F4 [✓] to rub together means to move against; F5 [✕] falling occurs due to gravity; F6 [✓] friction occurs when two object's surfaces move against each other; F7 [✓] friction causes the temperature of an object to increase; F8 [✕] magnetic attraction pulls two objects together)
Active tuple must have ≥ w_3 active fields:
Y T_Sθ + Y T_Pθ + Y T_Oθ ≥ w_3 (Y A_θ) (21)
where A_θ is populated by the adjacency matrix A, and T_Sθ, T_Pθ, T_Oθ by the subject, predicate and object matrices T_S, T_P, T_O respectively. Predicate and object tuples are converted into the T_P, T_O matrices analogously to T_S. Abstract fact matrix F_AB, where:

Table 1: Answer selection performance for the baselines and across different configurations of our approach on the WorldTree corpus.

Table 2: Answer selection performance on the ARC corpus with Diff-Explainer fine-tuned on answer selection.

Table 3: ARC challenge scores compared with other fully or partially explainable approaches trained only on the ARC dataset.

Table 4: F1 score for explanation selection on the WorldTree dev-set.

Table 5: Examples of predicted answers and explanations (only CENTRAL explanations) obtained from our model with different levels of fine-tuning.