Abstract
Justification is an explanation that supports the veracity assigned to a claim in fact-checking. However, the task of justification generation has previously been oversimplified as summarization of a fact-check article authored by fact-checkers. We instead propose a realistic approach that generates justifications based on retrieved evidence. We present a new benchmark dataset called ExClaim (for Explainable fact-checking of real-world Claims), and introduce JustiLM, a novel few-shot Justification generation method based on a retrieval-augmented Language Model that uses fact-check articles as an auxiliary resource during training only. Experiments show that JustiLM achieves promising performance in justification generation compared to strong baselines, and can also enhance veracity classification with a straightforward extension.
Code and dataset are released at https://github.com/znhy1024/JustiLM.
1 Introduction
Automated fact-checking typically encompasses several stages: identifying check-worthy claims, retrieving relevant evidence, determining the claim’s veracity using the retrieved evidence, and generating a justification for the veracity verdict (Guo et al., 2022). Despite a wealth of research focusing on the first three stages, justification generation has remained under-explored. Justifications present the essential evidence and rationales used to arrive at a claim’s veracity judgment, serving to convince readers and enhance the credibility of fact-checking systems. This explanatory process is of paramount importance in gaining users’ trust in automated fact-checking (Kotonya and Toni, 2020a; Atanasova et al., 2020).
Several methods have attempted to generate a verdict justification by summarizing fact-check articles previously authored by human fact-checkers (Kotonya and Toni, 2020b; Atanasova et al., 2020; Russo et al., 2023). However, a fact-check article is itself manually written to justify the verdict of a given claim, presenting and reasoning over digested evidence drawn from reference documents collected from multiple sources. Directly generating a summary from such an article as the justification therefore sidesteps the realistic challenges of evidence gathering and evidence-based reasoning for veracity assessment that we essentially face in the fact-checking task. More importantly, these existing methods are impractical because fact-check articles are not available for new claims that are yet to be checked (Guo et al., 2022). Table 1 shows an example illustrating the different types of information involved in fact-checking practice and their relationships. To justify the veracity of a claim, the information that can practically be used ought to be the retrieved reference documents containing evidence, rather than its fact-check article, which, being an outcome of the check, has not yet been written during the checking process.
An example claim along with the evidence documents, justification, and veracity. The title of each evidence document is italicized. The sentences in the fact-check article referring to evidence documents are marked in the same color as the corresponding documents, and the sentences that directly entail the justification are in bold.

In this paper, we propose a more realistic approach to justification generation based on a language model, which complies with the journalistic fact-checking process of well-known fact-checking organizations such as PolitiFact. Our goal is to produce high-quality justifications, drawing upon evidence gathered from diverse sources. To this end, we construct a benchmark dataset for Explainable fact-checking of real-world Claims, named ExClaim, derived from a public dataset, WatClaimCheck (Khan et al., 2022), which contains newsworthy claims along with their fact-check articles and reference documents. ExClaim provides a large searchable corpus by mixing the reference documents from all claims in WatClaimCheck. Additionally, it curates the verdict justification for each claim from its fact-check article, typically located in a concluding paragraph marked by cue phrases like “Our ruling” or “Our rating”. Furthermore, we develop a Justification Language Model called JustiLM for generating the rationales behind veracity judgments in a few-shot learning setting. Few-shot fine-tuning mitigates the training resource requirements and the dependence on high-end hardware, which is often financially prohibitive, while still enabling the model to achieve effectiveness comparable to state-of-the-art fully trained models. JustiLM uses fact-check articles as auxiliary information during training only, by fine-tuning a pre-trained Retrieval-Augmented Generation (RAG) model on our curated justification dataset. Leveraging fact-check articles for training enhances the model’s proficiency in generating rationales based on evidence and articulating them in its generated content. Our contributions are threefold:
We propose JustiLM, the first realistic justification generation method based on a retrieval-augmented language model that is trained end-to-end for explainable fact checking of real-world claims, leveraging fact-check articles as auxiliary information for model training only.
We construct ExClaim, a new benchmark derived from the WatClaimCheck dataset (Khan et al., 2022) for explainable fact-checking, which contains 6,951 real-world claims and their corresponding veracity labels and human-written justifications, together with a large searchable corpus of 957,949 chunk-level documents for fine-grained evidence retrieval.
JustiLM outperforms In-Context Learning (ICL)-enabled language models, including Flan-T5 and Llama2, as well as the state-of-the-art few-shot RAG model Atlas. JustiLM also shows promising performance compared to the latest GPT-4 model. A straightforward extension of JustiLM for joint veracity prediction and justification generation improves the veracity prediction task by large margins.
2 Related Work
2.1 Explanations for Fact-checking
Explanations for fact-checking claims have gained significant prominence in recent times, particularly due to the prevalent use of black-box models in automated fact-checking systems (Atanasova et al., 2020; Guo et al., 2022). Several methods have emerged to address this issue, utilizing various techniques to provide human-readable explanations. One stream of research leverages attention weights to highlight salient parts of the retrieved evidence as explanations (Popat et al., 2018; Ma et al., 2019; Yang et al., 2019; Shu et al., 2019; Lu and Li, 2020). Another stream adopts logic-based rules, such as knowledge graphs and natural logic relations designed by human experts (Ahmadi et al., 2019; Gad-Elrab et al., 2019; Vedula and Parthasarathy, 2021; Krishna et al., 2022a), where explanations are obtained by tracing the rule path that leads to the veracity of the claim. However, these explanations are not presented in natural language, rendering them less accessible to general users. Furthermore, such rule-based systems encounter difficulties with real-world claims that may not conform to predefined rules. In contrast, our work places a strong emphasis on generating textual justifications that are readily understandable to users, avoiding manual rule definitions.
A few studies have attempted to automatically generate textual justifications by summarizing fact-check articles (Kotonya and Toni, 2020b; Atanasova et al., 2020; Russo et al., 2023). Atanasova et al. (2020) employ DistilBERT (Sanh et al., 2019) to extract sentences from fact-check articles to form justifications. Kotonya and Toni (2020b) propose a two-step process, first using Sentence-BERT (Reimers and Gurevych, 2019) to extract sentences from fact-check articles and then using the BERTSUM model (Liu and Lapata, 2019) for abstractive justification generation based on the extracted sentences. Russo et al. (2023) explore several existing extractive (Erkan and Radev, 2004; Reimers and Gurevych, 2019) and abstractive (Raffel et al., 2020; Zhang et al., 2020; Shleifer and Rush, 2020) summarization approaches for summarizing fact-check articles. These summarization methods have inherent practical limitations, including complete reliance on fact-check articles (i.e., detailed human justifications) as input, which are hardly available at deployment time, and complete omission of automatic evidence search and evidence-based reasoning. Different from these approaches, our method only assumes that fact-check articles are available during model training and that the key evidence exists within a large searchable corpus. Our approach therefore generates justifications by harnessing information from retrieved reference documents during inference, which is a more realistic solution for real-world scenarios. Similarly, Khan et al. (2022) infer claim veracity based on retrieved textual references, while Yao et al. (2023a) retrieve evidence for multi-modal fact-checking and generate explanations for predicted veracity labels using the BART model (Lewis et al., 2020a); both methods are stage-wise and trained on the full dataset. In contrast, we base our approach on the latest RAG framework, which is trained end-to-end and generates justifications by using fact-check articles to distill supervisory signals for training.
2.2 Few-shot Fact-checking
The need for few-shot learning is exacerbated by the continuously increasing computational and storage requirements of language model training. However, the application of few-shot learning techniques to fact-checking has been relatively underexplored. Existing methods for few-shot fact-checking focus only on the so-called fact verification task (Lee et al., 2021; Zeng and Zubiaga, 2022; Zeng and Gao, 2023; Yue et al., 2023; Pan et al., 2023; Zhang and Gao, 2023) by feeding a few instances together with gold evidence into the model to predict the veracity of a claim. Different from these methods, our work centers on generating justifications that substantiate the veracity of a claim based on the retrieved evidence. Importantly, we do not assume the availability of annotated evidence. Instead, we require the system to retrieve pertinent evidence, which conforms to a more realistic and challenging scenario.
2.3 Retrieval-augmented Language Models
Equipping language models (LMs) with external memory has been shown to enhance their performance on knowledge-intensive NLP tasks (Chen et al., 2017; Thorne et al., 2018; Guu et al., 2020; Lewis et al., 2020b; Sachan et al., 2021; Izacard and Grave, 2021b; Borgeaud et al., 2022; Izacard et al., 2023). Typically, a retriever retrieves relevant documents from a large corpus, which enriches the input of the language model and contributes to the final output. However, due to the high cost of acquiring query-document annotations and training retrievers, many implementations rely on off-the-shelf retrievers such as TF-IDF and BM25 (Jones, 2004; Robertson et al., 1994), which use term-matching techniques. In this setup, only the parameters of the LM are fine-tuned.
Recent research has demonstrated the advantages of jointly training the retriever and the LM in an end-to-end manner, which leverages supervision signals from the LM to train the retriever (Guu et al., 2020; Lewis et al., 2020b; Sachan et al., 2021; Izacard and Grave, 2021b; Izacard et al., 2023). Moreover, considering the remarkable performance of large language models (LLMs) on various few-shot NLP tasks, some studies suggest enhancing LLMs with retrievers or web search engines (Mallen et al., 2023; Si et al., 2023; Yu et al., 2023; Shi et al., 2023; Zhang and Gao, 2023). For example, REPLUG (Shi et al., 2023) optimizes the retriever by minimizing the KL divergence between the retrieval likelihood and the black-box LLM likelihood over retrieved documents. However, there are inherent limitations in the interaction between the retriever and black-box LLMs, such as the restricted ability to provide or access model-internal information. We refer readers to Mialon et al. (2023) for a comprehensive survey of retrieval-augmented LMs.
3 Task Formulation
Let C = {(x, z, y)} be a fact-checking dataset of real-world news claims associated with a textual knowledge corpus 𝒟. Each instance is composed of a claim x, its ground-truth justification y, and a fact-check article z. C is divided into a training set and a test set, and only instances in the training set are associated with fact-check articles, where available.
Given a claim x and the corpus 𝒟, the goal of justification generation is to produce a sequence of tokens, denoted as ŷ, that serves as an explanation for the veracity verdict rendered on the claim, using the evidence retrieved from the corpus. In the few-shot setting, we randomly select K instances from the training set, following a setup similar to that employed in previous studies on fact verification (Lee et al., 2021; Liu et al., 2022; Zeng and Gao, 2023), and we do not assume the availability of a development set, as this aligns with a more realistic scenario with limited data resources.
4 ExClaim Dataset
Existing fact-checking datasets based on real-world claims have limitations for justification generation, because the provided evidence sources might not cover the evidence documents that fact-checkers actually rely on when writing justifications. For example, some datasets (Vlachos and Riedel, 2014; Wang, 2017; Alhindi et al., 2018) only provide metadata such as speaker, party, and date, without a sizeable knowledge corpus for finding specific evidence. Some studies (Popat et al., 2016; Baly et al., 2018; Augenstein et al., 2019; Gupta and Srikumar, 2021; Yang et al., 2022; Hu et al., 2022) use web search to gather evidence documents, which can retrieve information from non-authoritative sources or leak the ground truth by inadvertently including articles in which other organizations verify the same claims or share the fact-check information (Khan et al., 2022). More notably, certain studies (Hanselowski et al., 2019; Kotonya and Toni, 2020a; Atanasova et al., 2020; Ostrowski et al., 2021; Russo et al., 2023) regard fact-check articles as a primary source of evidence, a practice that may not align with realistic fact-checking procedures.
We use the WatClaimCheck dataset (Khan et al., 2022), which provides real-world claims along with the text of reference documents cited by fact-check articles. However, WatClaimCheck is constructed for veracity classification and does not provide ground-truth justifications. For our task, we construct ExClaim based on WatClaimCheck by additionally extracting justifications from fact-check articles based on cue phrases such as “Our ruling” or “Our rating”, following previous works (Alhindi et al., 2018; Augenstein et al., 2019; Kotonya and Toni, 2020a), and by removing the instances that do not contain such justification content. After extracting the justifications, we also remove them from the fact-check articles.
Table 2 presents summary statistics of the ExClaim dataset, which contains a total of 6,951 real-world claims and justifications (5,964 for training and 987 for testing). The data pose two challenges: 1) A single reference document is generally much longer than a fact-check article, easily exceeding the context window of most text generation models (e.g., 512 tokens for T5 (Raffel et al., 2020) or 1,024 tokens for BART (Lewis et al., 2020a)). In particular, each claim may correspond to multiple reference documents from different sources, leading to excessively long evidence text. 2) There is no passage- or sentence-level annotation in reference documents and fact-check articles. Since fact-checkers generally refer to only a few pieces of text in the reference documents when writing justifications, most information in a reference document tends to be irrelevant for generating the justification. To address these issues, we split each document into disjoint 100-word chunks following previous work (Lee et al., 2019; Karpukhin et al., 2020; Lewis et al., 2020b; Izacard et al., 2023), resulting in a large textual knowledge corpus comprising 957,949 chunk-level documents from which systems can search for fine-grained evidence text. In the rest of the paper, we refer to these short text chunks as “reference documents” or simply “documents”.
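For illustration, the following is a minimal sketch of the chunking step; the function name and the whitespace tokenization are our own simplifications, not the dataset's release code.

```python
def split_into_chunks(text: str, chunk_size: int = 100) -> list:
    """Split a document into disjoint chunks of at most `chunk_size` words.

    Simplified sketch: words are delimited by whitespace, and the last
    chunk may be shorter than `chunk_size`.
    """
    words = text.split()
    return [
        " ".join(words[i:i + chunk_size])
        for i in range(0, len(words), chunk_size)
    ]

# Building a chunk-level corpus from raw reference documents (placeholder inputs).
raw_documents = ["<full text of reference document 1>", "<full text of reference document 2>"]
corpus = [chunk for doc in raw_documents for chunk in split_into_chunks(doc)]
```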
Statistics of the ExClaim dataset. †: Note that fact-check articles in the test set are not used in our method, but exclusively utilized by baselines that rely on fact-check articles.
| | Split | # Instances | Avg. # Tokens |
|---|---|---|---|
| Claim | Train | 5,964 | 25 |
| | Test | 987 | 25 |
| Fact-check Article | Train | 5,964 | 1,102 |
| | Test† | 987 | 1,091 |
| Reference Documents | Train | 40,089 | 2,656 |
| | Test | 6,647 | 2,404 |
| Justification | Train | 5,964 | 129 |
| | Test | 987 | 131 |
5 Methodology
We base our approach on the retrieval-augmented generation (RAG) framework (Lewis et al., 2020b; Sachan et al., 2021; Izacard and Grave, 2021b; Izacard et al., 2023), which contains a retriever for fine-grained evidence retrieval and an LM for textual justification generation. As shown in Figure 1, the retriever takes the claim text as input and retrieves the top-N chunk-level documents from the textual knowledge corpus, and the LM conditions on these documents together with the claim to generate the justification. The retriever and the LM can be jointly trained within a single RAG framework, which makes it possible to utilize fact-check articles as an auxiliary resource that provides supervisory signals during training, aiming to enhance the quality of the generated justification. We employ Atlas (Izacard et al., 2023) as our backbone model for two main reasons: 1) its strong few-shot learning ability on knowledge-intensive tasks when its retriever and LM are jointly trained; 2) its flexibility for incorporating fact-check articles into the training process.
The architecture of JustiLM. Gray solid arrows indicate the inference process without the fact-check article z. Red dashed arrows indicate the training process of the backbone model, where the ground-truth justification provides supervisory signals to train both the retriever and the LM. Blue dashed arrows indicate the training process with the distillation of z as supervisory signals. The document encoder is fixed during training, while the other modules are trainable. QE: Query Encoder; DE: Document Encoder; Enc: Encoder; Dec: Decoder.
5.1 Retriever
Given a claim x, the retriever should return the documents that help the LM generate a better justification. To enable training of the retriever, Atlas utilizes a dense retriever named Contriever (Izacard et al., 2022), which is pre-trained using the MoCo contrastive loss (He et al., 2020). Contriever is a dual-encoder architecture in which a pre-trained query encoder Ec and document encoder Ed encode the claim x and each document dj, respectively. The document embeddings can be pre-computed to build an index using FAISS (Johnson et al., 2021) for fast retrieval. Documents are ranked by the similarity score s(x, dj) = Ec(x)⊤Ed(dj), the dot product of the embeddings of the claim x and the document dj.
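For concreteness, below is a minimal sketch of this dual-encoder retrieval step, assuming document embeddings have already been computed by a Contriever-style encoder; the helper names are illustrative and this is not the Atlas implementation.

```python
import numpy as np
import faiss  # exact inner-product search over pre-computed embeddings


def build_index(doc_embeddings: np.ndarray) -> faiss.IndexFlatIP:
    """Build an exact inner-product index over pre-computed document embeddings E_d(d_j)."""
    index = faiss.IndexFlatIP(doc_embeddings.shape[1])
    index.add(doc_embeddings.astype(np.float32))
    return index


def retrieve(query_embedding: np.ndarray, index: faiss.IndexFlatIP, top_n: int = 20):
    """Return (scores, ids) of the top-N documents ranked by s(x, d) = E_c(x)^T E_d(d)."""
    scores, ids = index.search(query_embedding.astype(np.float32).reshape(1, -1), top_n)
    return scores[0], ids[0]

# Usage sketch (embed_query / doc_embeddings stand in for the Contriever encoders):
# index = build_index(doc_embeddings)            # E_d applied to all chunks, pre-computed
# scores, ids = retrieve(embed_query(claim), index, top_n=20)
```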
To mitigate the burden of re-computing embeddings for all documents when training the retriever, Atlas (Izacard et al., 2023) only updates the parameters of the query encoder while freezing the document encoder, which still shows promising results in the few-shot setting. We therefore employ the document encoder for encoding reference documents and the query encoder for encoding other inputs. Since no direct supervision is available to train the retriever, Atlas proposes a Perplexity Distillation loss that leverages supervisory signals from the LM. The intuition is that documents that help the LM generate lower-perplexity outputs should be ranked higher (Izacard et al., 2023).
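To make this weak supervision concrete, the sketch below shows one way a perplexity-distillation-style objective can be written: the retriever's distribution over the retrieved documents is pushed, via KL divergence, toward a target derived from how likely each document makes the target output under the LM. The temperature and tensor shapes are our assumptions; see Izacard et al. (2023) for the exact formulation.

```python
import torch
import torch.nn.functional as F


def perplexity_distillation_loss(
    retriever_scores: torch.Tensor,    # (N,) similarity scores s(x, d_j) for the top-N documents
    lm_log_likelihoods: torch.Tensor,  # (N,) log p_LM(y | x, d_j) of the target y given each document
    temperature: float = 1.0,
) -> torch.Tensor:
    """KL(target || retriever): documents that make the target more likely get higher target mass."""
    target = F.softmax(lm_log_likelihoods / temperature, dim=-1).detach()  # no gradient into the LM
    log_retriever = F.log_softmax(retriever_scores / temperature, dim=-1)
    return F.kl_div(log_retriever, target, reduction="sum")
```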
5.2 Language Model
The language model conditions on the top-N documents returned by the retriever, together with the claim x, to generate the justification. To aggregate evidence from multiple documents efficiently and effectively, Atlas employs a T5 encoder-decoder model (Raffel et al., 2020) with the Fusion-in-Decoder (FiD) modification (Izacard and Grave, 2021b). Each retrieved document dj is encoded independently by the encoder, with the claim x prepended to it. All encoder outputs are then concatenated, and the decoder takes this concatenation as input, performing cross-attention to fuse the evidence and generate the output. The training objective is the standard language modeling loss, which encourages the LM to assign higher probability to the target sequence y given the claim x and the top-N retrieved documents.
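The following is a simplified sketch of FiD-style generation with a Hugging Face T5 model: each (claim, document) pair is encoded independently, the encoder states are concatenated, and the decoder cross-attends over the fused sequence. The checkpoint name and input format are illustrative, not the exact Atlas configuration.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

tokenizer = T5Tokenizer.from_pretrained("t5-base")              # illustrative checkpoint, not Atlas's
model = T5ForConditionalGeneration.from_pretrained("t5-base")


def fid_generate(claim: str, documents: list, max_new_tokens: int = 200) -> str:
    # 1) Encode each (claim, document) pair independently.
    inputs = tokenizer(
        [f"claim: {claim} document: {d}" for d in documents],
        return_tensors="pt", padding=True, truncation=True, max_length=512,
    )
    encoder_states = model.encoder(
        input_ids=inputs.input_ids, attention_mask=inputs.attention_mask
    ).last_hidden_state                                          # (N, L, d_model)
    # 2) Concatenate the per-document states into one long evidence sequence.
    fused = encoder_states.reshape(1, -1, model.config.d_model)  # (1, N*L, d_model)
    fused_mask = inputs.attention_mask.reshape(1, -1)            # (1, N*L)
    # 3) The decoder cross-attends over the fused sequence and generates the justification.
    output_ids = model.generate(
        encoder_outputs=BaseModelOutput(last_hidden_state=fused),
        attention_mask=fused_mask,
        max_new_tokens=max_new_tokens,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```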
5.3 Distillation Techniques
Although directly summarizing fact-check articles z can generate justifications of reasonable quality, as in previous work (Kotonya and Toni, 2020a; Atanasova et al., 2020), z is by no means available during inference for new claims in real-world deployment, as discussed in §1, making the previous methods impractical. We propose a realistic approach to address this limitation: distilling information from z as auxiliary supervisory signals for the training phase only. We introduce two types of techniques based on the granularity of distillation from fact-check articles. The first is article-level distillation, which utilizes aggregated information from the entire article z. The second is chunk-level distillation, where we split each article z into multiple disjoint 100-word chunks z1, …, zM and utilize the individual information of each chunk zi. Both types of distillation can be applied to train the retriever and the LM.
5.3.1 Article-level Distillation
Article-level distillation operates on the entirety of a fact-check article, utilizing the global alignment between the fact-check article z and the retrieved documents DN as supervisory signals for model training. The basic idea is that the more similar DN and z are, the easier it is for the LM to generate a justification based on DN that closely approximates one generated based on z. This alignment serves two main purposes. Firstly, the similarity between DN and z can act as a supervisory signal, guiding the retriever to rank higher those documents in DN that resemble z. Secondly, the justification generated by the LM based on z can be used as a supervision signal that encourages the LM, when using DN, to generate justifications similar to those generated based on z. Next, we discuss two training losses that serve these purposes.
Retrieval Loss.
Generation Loss.
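The exact loss definitions are given by the paper's equations and are not reproduced here. Purely as an illustration of the intuition above, the sketch below realizes an article-level retrieval loss that pushes the retriever distribution toward documents similar to z, and a generation loss that distills a teacher output conditioned on z into the student conditioned on DN. The similarity inputs, temperature, and KL formulation are our assumptions rather than the paper's exact objectives.

```python
import torch
import torch.nn.functional as F


def article_level_retrieval_loss(
    retriever_scores: torch.Tensor,        # (N,) scores s(x, d_j) of the retrieved documents
    doc_article_similarity: torch.Tensor,  # (N,) similarity of each d_j to the whole article z
    temperature: float = 1.0,
) -> torch.Tensor:
    """Push the retrieval distribution toward documents that resemble the fact-check article z."""
    target = F.softmax(doc_article_similarity / temperature, dim=-1).detach()
    log_pred = F.log_softmax(retriever_scores / temperature, dim=-1)
    return F.kl_div(log_pred, target, reduction="sum")


def article_level_generation_loss(
    student_logits: torch.Tensor,  # (T, V) LM logits conditioned on (x, D_N)
    teacher_logits: torch.Tensor,  # (T, V) LM logits conditioned on (x, z), treated as a teacher
) -> torch.Tensor:
    """Encourage the justification generated from D_N to match the one generated from z."""
    teacher = F.softmax(teacher_logits, dim=-1).detach()
    return F.kl_div(F.log_softmax(student_logits, dim=-1), teacher, reduction="batchmean")
```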
5.3.2 Chunk-level Distillation
Chunk-level distillation is performed at the granularity of individual chunks of the fact-check article, leveraging the alignment between chunks and documents to provide supervisory signals for model training. The intuition is that different chunks of the fact-check article could be derived from rearranging or modifying specific text spans sourced from the reference documents. Further, the chunks may correspond to certain parts of the ground-truth justification y. Thus, the chunks can be seen as “connections” between DN and y, and aligning the retrieved documents DN with the chunks intuitively aids the model in learning the mapping from DN to y, hence improving its performance. However, no chunk-level annotation is available, which poses an important challenge for training. We design two training techniques to address this challenge for chunk-level distillation in both the retriever and the LM.
Retrieval Loss.
Generation Loss.
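Again, the paper's exact objectives are not reproduced here. As one hedged illustration of how chunk-level signals could be used without chunk-level annotation, the sketch below soft-aligns each article chunk zi to the retrieved documents by embedding similarity and distills the aggregated alignment into the retriever; this is an assumption-laden stand-in, not the paper's formulation.

```python
import torch
import torch.nn.functional as F


def chunk_level_retrieval_loss(
    retriever_scores: torch.Tensor,  # (N,) scores s(x, d_j) of the retrieved documents
    chunk_embeddings: torch.Tensor,  # (M, H) embeddings of the article chunks z_1..z_M
    doc_embeddings: torch.Tensor,    # (N, H) embeddings of the retrieved documents d_1..d_N
    temperature: float = 1.0,
) -> torch.Tensor:
    """Soft-align each chunk to the retrieved documents and distill the aggregated alignment."""
    # (M, N) chunk-to-document similarities; softmax over documents for each chunk.
    alignment = F.softmax(chunk_embeddings @ doc_embeddings.T / temperature, dim=-1)
    target = alignment.mean(dim=0).detach()             # aggregate over chunks -> (N,)
    log_pred = F.log_softmax(retriever_scores / temperature, dim=-1)
    return F.kl_div(log_pred, target, reduction="sum")
```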
6 Experiments and Results
6.1 Evaluation Metrics
To assess the consistency of generated justifications with the ground truth, we employ a spectrum of metrics that balances our evaluation between factual accuracy and stylistic diversity of verbal expression: ROUGE (Lin, 2004) counts the number of overlapping units (e.g., n-grams and word sequences) between output justifications and ground truths. MAUVE (Pillutla et al., 2021) measures the divergence between output justifications and the ground truths, which can reflect whether the output is fluent and coherent with the ground truth (Xie et al., 2023; Krishna et al., 2022b; Gao et al., 2023; Xu et al., 2023). SummaCC extends SummaC (Laban et al., 2022) to evaluate coverage and factual consistency by checking entailment between the output justifications and the ground truth. It aggregates NLI scores over pairs of the entire output justification and each sentence in the ground truth for coverage (Scialom et al., 2021; Gao et al., 2023) and, conversely, over pairs of the entire ground-truth justification and each sentence in the output justification for consistency (Laban et al., 2022).
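To make the SummaCC aggregation concrete, here is a simplified sketch built on a generic NLI entailment scorer. The `entailment_prob` callable, the naive sentence splitting, and the way the two directions are combined are our assumptions; this is not the official SummaC implementation.

```python
from typing import Callable


def summa_cc(
    output_justification: str,
    ground_truth: str,
    entailment_prob: Callable[[str, str], float],  # placeholder: P(hypothesis entailed by premise)
) -> float:
    """Combine a coverage score and a consistency score based on NLI entailment."""
    split = lambda text: [s.strip() for s in text.split(".") if s.strip()]  # naive sentence split
    # Coverage: each ground-truth sentence should be entailed by the whole output justification.
    gt_sents = split(ground_truth)
    coverage = sum(entailment_prob(output_justification, s) for s in gt_sents) / max(len(gt_sents), 1)
    # Consistency: each output sentence should be entailed by the whole ground-truth justification.
    out_sents = split(output_justification)
    consistency = sum(entailment_prob(ground_truth, s) for s in out_sents) / max(len(out_sents), 1)
    return (coverage + consistency) / 2  # simple average of the two directions (our assumption)
```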
6.2 Fallacy of Fact-Check Summarization
Experimental Setup.
1) Full training: We include two existing models, ExplainMT (Atanasova et al., 2020) and ExplainerFC (Kotonya and Toni, 2020b). ExplainMT is extractive, while ExplainerFC is extractive-abstractive. We partition the training set of ExClaim into 5,000 instances for training and 964 for validation. We train the two models to summarize fact-check articles and test them with either fact-check articles or evidence documents retrieved with BM25 (Robertson et al., 1994) as input. 2) Few-shot training: We train the RAG model Atlas (Izacard et al., 2023) with few shots using fact-check articles as input and test it using either fact-check articles or documents retrieved by its pre-trained retriever Contriever. In this setting, Contriever is kept fixed during fine-tuning since the LM’s input is fact-check articles. We use 30 randomly sampled shots from the training split and report results averaged over 3 trials with different seeds.
Results.
As shown in Table 3, in both settings, using retrieved documents as input dramatically degrades performance compared to inputting fact-check articles. This suggests that the fact-check article summarization approach struggles to generalize to retrieved documents, especially in the few-shot setting, indicating the impracticality of previous approaches and the importance of the more realistic task formulation outlined in §3. That is, models need to generate justifications based on retrieved evidence instead of fact-check articles, which are not available for new claims at inference time.
Results of justification generation methods trained on Fact-check Article (F.C. Article) and tested on Fact-check Article / Retrieved Documents (Retr. Docs). Para.: Parameters. Standard deviation is in parentheses.
| Method | #Para. | Test | ROUGE-1 | ROUGE-2 | ROUGE-L | SummaCC | MAUVE |
|---|---|---|---|---|---|---|---|
| ExplainMT (full-dataset) (Atanasova et al., 2020) | 132M | F.C. Article | 35.01(−) | 22.13(−) | 21.25(−) | 22.70(−) | 5.59(−) |
| | | Retr. Docs | 19.33(−) | 9.55(−) | 17.59(−) | 9.34(−) | 5.27(−) |
| ExplainerFC (full-dataset) (Kotonya and Toni, 2020b) | 340M | F.C. Article | 62.10(−) | 38.03(−) | 54.25(−) | 50.67(−) | 14.63(−) |
| | | Retr. Docs | 47.16(−) | 24.88(−) | 44.13(−) | 35.82(−) | 10.07(−) |
| Atlas (few-shot) (Izacard et al., 2023) | ∼3B | F.C. Article | 40.93(0.97) | 26.71(1.15) | 33.98(1.01) | 29.72(1.22) | 28.25(2.46) |
| | | Retr. Docs | 28.14(0.87) | 13.91(1.31) | 21.87(1.12) | 12.64(0.87) | 25.37(0.69) |
6.3 Few-shot Justification Generation
6.3.1 Baselines
1) Lead-4 (Nallapati et al., 2017) selects as the justification the first sentence of each of the top-4 documents retrieved by BM25. 2) Retriever + ICL-enabled LMs: We use BM25 as the sparse retriever and Contriever (Izacard et al., 2022) as the dense retriever, and choose Flan-T5 (11B) (Chung et al., 2022), Llama2 (70B) (Touvron et al., 2023), and GPT-4 (OpenAI, 2023) as the ICL-enabled LMs. We prompt each model to generate justifications by concatenating few-shot training instances with a test instance. 3) Atlas (Izacard et al., 2023) is the SoTA RAG model with strong few-shot ability, consisting of a trainable dense retriever Contriever and an LM-adapted variant of T5 (Lester et al., 2021) with the FiD (Izacard and Grave, 2021b) modification to increase the number of retrieved documents. We also include a non-joint training setting by replacing the retriever with BM25.
6.3.2 Experimental Setup
For our method JustiLM, we randomly sample 30 instances from the training set for fine-tuning. We use Atlas (Izacard et al., 2023) with its released pre-trained checkpoint of 3B parameters as our backbone model. Following the Atlas paper, we retrieve the top-20 documents for each instance. We set the number of training steps to 100, the batch size to 8, and the learning rate to 4 × 10−5 with linear decay and 5 warmup steps for both the LM and the retriever.
For the distillation techniques used to train the LM, we begin by fine-tuning the LM to take fact-check articles as auxiliary input and generate justifications, which provides a warmup for the LM. For BM25 + ICL-enabled LMs, we use the Pyserini toolkit to build the BM25 model. For Flan-T5, we use the code and pre-trained checkpoints from HuggingFace Transformers. We use the original code and pre-trained checkpoints of Llama2, and the API service of GPT-4 from OpenAI. Given the different length constraints of these LMs, we aim to maximize the utilization of their specific input capacities by adjusting the number of shots and/or the number of retrieved documents to fill their input context windows. We prioritize ensuring that these models have access to as many of the top-20 retrieved documents as possible, because effective generation requires an adequate amount of information, with the secondary goal of maximizing the number of few-shot examples. Specifically, we use 1-shot ICL with the top-10 documents for Flan-T5, 2-shot ICL with the top-20 documents for Llama2, and 3-shot ICL with the top-20 documents for GPT-4.
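For illustration, below is a minimal sketch of how such an ICL prompt could be assembled from few-shot demonstrations and retrieved documents; the field labels and formatting are our own, not the exact prompts used in these experiments.

```python
def build_icl_prompt(demonstrations, test_claim, test_documents):
    """Concatenate few-shot demonstrations with the test instance into one prompt string."""
    def render(claim, documents, justification=""):
        doc_block = "\n".join(f"Document {i + 1}: {d}" for i, d in enumerate(documents))
        return f"Claim: {claim}\n{doc_block}\nJustification: {justification}"

    # Each demonstration: {"claim": ..., "documents": [...], "justification": ...}
    parts = [render(d["claim"], d["documents"], d["justification"]) for d in demonstrations]
    parts.append(render(test_claim, test_documents))  # empty justification for the model to complete
    return "\n\n".join(parts)
```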
For fair and robust comparison, we perform experiments three times, with training instances sampled using different random seeds. We report the mean and standard deviation of each metric over the three runs in all experiments. The seeds and training instances are kept the same across different models. All the experiments use a server with 8 NVIDIA Tesla-V100 32GB GPUs.
6.3.3 Main Results
The results of few-shot justification generation methods are reported in Table 4a. Lead-4, which directly presents retrieved documents as the justification, does not yield satisfactory results, since simply stacking evidence does not produce a clear explanation of the rationale.
Few-shot justification generation results on test set (a) and new test set (b). Standard deviation is in parentheses.
| Method | #Parameters | ROUGE-1 | ROUGE-2 | ROUGE-L | SummaCC | MAUVE |
|---|---|---|---|---|---|---|
| Lead-4 (Nallapati et al., 2017) | – | 22.72(−) | 5.72(−) | 14.11(−) | 2.26(−) | 7.95(−) |
| BM25 (Robertson et al., 1994) + Flan-T5 (Chung et al., 2022) | 11B | 27.99(2.39) | 14.14(1.06) | 20.74(1.66) | 14.55(0.90) | 12.42(1.22) |
| BM25 + Llama2 (Touvron et al., 2023) | 70B | 31.45(0.51) | 12.36(0.25) | 20.72(0.22) | 13.05(0.38) | 7.88(0.15) |
| BM25 + GPT-4 (OpenAI, 2023) | Unknown | 39.72(1.97) | 17.12(1.97) | 26.18(2.26) | 24.98(2.49) | 14.73(2.86) |
| Contriever (Izacard et al., 2022) + Flan-T5 | ∼11B | 23.75(1.91) | 11.34(1.17) | 18.11(1.48) | 9.93(0.29) | 12.07(0.90) |
| Contriever + Llama2 | ∼70B | 31.28(0.51) | 11.52(0.82) | 20.42(0.70) | 11.06(0.14) | 7.91(0.09) |
| Contriever + GPT-4 | Unknown | 36.83(1.37) | 14.10(1.66) | 23.36(1.75) | 20.07(2.37) | 9.85(0.96) |
| Atlas (Izacard et al., 2023), no joint training | 3B | 31.42(1.61) | 16.53(0.86) | 24.67(1.00) | 13.55(0.54) | 25.19(4.37) |
| Atlas, joint training | ∼3B | 31.91(1.78) | 17.81(1.19) | 25.60(1.16) | 13.81(1.11) | 25.51(2.08) |
| JustiLM (Ours), gret + glm | ∼3B | 33.48(1.33) | 18.59(0.79) | 27.12(0.81) | 15.04(1.27) | 20.29(2.00) |
| JustiLM, gret + clm | ∼3B | 36.70(0.77) | 19.23(0.84) | 28.39(0.75) | 14.80(0.45) | 32.99(3.33) |
| JustiLM, cret + glm | ∼3B | 36.51(1.01) | 18.67(1.00) | 27.94(0.96) | 14.77(0.19) | 37.08(1.53) |
| JustiLM, cret + clm | ∼3B | 36.30(0.91) | 18.68(0.96) | 27.97(0.99) | 14.69(0.48) | 35.30(1.09) |

(a) On the original test set with 987 claims indicated in Table 2.
| Method | #Parameters | ROUGE-1 | ROUGE-2 | ROUGE-L | SummaCC | MAUVE |
|---|---|---|---|---|---|---|
| Lead-4 (Nallapati et al., 2017) | – | 21.87(−) | 3.95(−) | 12.61(−) | 1.70(−) | 6.62(−) |
| BM25 (Robertson et al., 1994) + Flan-T5 (Chung et al., 2022) | 11B | 22.86(2.27) | 7.63(0.70) | 14.74(1.52) | 10.94(2.37) | 7.00(0.17) |
| BM25 + Llama2 (Touvron et al., 2023) | 70B | 31.01(0.29) | 9.64(0.32) | 18.73(0.17) | 11.49(0.69) | 6.99(0.60) |
| BM25 + GPT-4 (OpenAI, 2023) | Unknown | 38.28(1.44) | 13.74(1.75) | 23.36(2.20) | 25.10(2.29) | 7.47(1.30) |
| Contriever (Izacard et al., 2022) + Flan-T5 | ∼11B | 20.44(1.27) | 7.93(0.48) | 14.45(0.85) | 10.18(2.03) | 8.24(0.48) |
| Contriever + Llama2 | ∼70B | 31.01(0.84) | 9.81(0.73) | 19.07(0.63) | 10.75(0.52) | 6.62(0.54) |
| Contriever + GPT-4 | Unknown | 35.93(1.09) | 12.07(1.51) | 21.46(1.79) | 21.79(2.22) | 6.25(0.37) |
| Atlas (Izacard et al., 2023), no joint training | 3B | 29.76(0.98) | 13.40(0.34) | 22.16(0.32) | 10.78(0.55) | 12.56(1.56) |
| Atlas, joint training | ∼3B | 30.78(1.95) | 15.75(1.72) | 23.84(1.48) | 12.20(0.45) | 14.09(2.34) |
| JustiLM (Ours), gret + glm | ∼3B | 32.76(0.89) | 17.40(0.65) | 26.61(0.61) | 14.75(1.45) | 10.57(0.94) |
| JustiLM, gret + clm | ∼3B | 35.55(0.31) | 17.84(0.48) | 27.30(0.21) | 14.11(1.40) | 16.78(4.64) |
| JustiLM, cret + glm | ∼3B | 35.51(0.51) | 17.21(0.70) | 26.53(0.06) | 14.30(0.40) | 20.02(7.39) |
| JustiLM, cret + clm | ∼3B | 35.48(0.59) | 17.52(0.86) | 26.92(0.57) | 13.99(0.49) | 19.17(7.04) |

(b) On the new test set with 348 claims published later than the claims from the WatClaimCheck dataset used for training.
Both Flan-T5 and Llama2 outperform Lead-4, demonstrating the LMs’ ability to generate justifications based on retrieved evidence. Flan-T5 performs comparably to Llama2 in ROUGE and SummaCC scores and better in MAUVE, despite having far fewer parameters. The reasons are likely two-fold: 1) Flan-T5’s instruction fine-tuning on 1.8K tasks, which effectively enhances pre-trained language models (Sanh et al., 2022; Chung et al., 2022); and 2) its fine-tuning on Chain-of-Thought (CoT) data (Wei et al., 2022), which aligns with the common presentation of ground-truth justifications that provide rationales leading to the veracity conclusion, as exemplified in Table 1.
Incorporating ICL-enabled LMs with the dense retriever Contriever does not show improvement over using the sparse retriever BM25. Dense retrievers trained on extensive in-domain datasets like MS-MARCO (Nguyen et al., 2016) are often surpassed by sparse retrievers when applied to new domains without large annotated datasets (Thakur et al., 2021; Izacard et al., 2022). While Contriever is a strong unsupervised retriever for bridging this gap, BM25 remains competitive (Izacard et al., 2022).
When only the LM of Atlas is trained, it demonstrates superior overall performance compared to Flan-T5 and Llama2, despite having far fewer parameters. This finding indicates that merely relying on the implicit knowledge of LMs without parameter updates is insufficient when the LM is not large enough. Joint training of the retriever and the LM leads to further performance gains, confirming its benefit in the few-shot setting.
Compared to Atlas, JustiLM makes improvements across metrics, indicating that utilizing fact-check articles as auxiliary training signals enhances justification quality. With our proposed distillation techniques, JustiLM considerably improves all ROUGE scores. Compared to Atlas, the combination of article-level distillation on the retriever and chunk-level distillation on the LM increases ROUGE-1, ROUGE-2, and ROUGE-L by 15.0%, 7.97%, and 10.9%, respectively, suggesting that JustiLM generates justifications that are more similar to those written by fact-checkers. Furthermore, 3 out of 4 combinations of distillation techniques outperform Atlas in MAUVE, with the highest gain being 45%, suggesting that JustiLM’s justifications are more fluent and coherent with the ground truths. This can be attributed to our distillation method allowing the model to learn from fact-check articles, which are much more informative and detailed than the explanatory justifications. Lastly, JustiLM effectively enhances the SummaCC score, indicating improvements in the factual consistency of generated justifications.
GPT-4 demonstrates exceptionally strong ability in providing factually consistent responses and outperforms the other ICL-enabled methods, Flan-T5 and Llama2, across all metrics. In comparison, JustiLM falls slightly below GPT-4 in ROUGE-1 and SummaCC but outperforms GPT-4 in ROUGE-2/L and MAUVE. This highlights its effectiveness, especially considering its small model size and independence from the intensive compute and storage resources required by very large models. Moreover, its ease of fine-tuning with new training data provides significant flexibility in addressing the ever-changing landscape of misinformation.
6.3.4 Generalization on New Claims
To address the concern that pre-trained LMs may have seen the evaluation data during pre-training, we investigate how different methods perform on a test set of claims that emerged after the models’ training. Since the WatClaimCheck dataset exclusively covers claims made before July 2021 (Khan et al., 2022) and the pre-training data of Llama2 is cut off at September 2022, we gather a new set of claims made between October 2022 and September 2023, yielding a new test set of 348 instances, each with its associated reference documents and justification. Following the same steps detailed in §4, the newly collected reference documents are added to the corpus for model retrieval. As shown in Table 4b, all methods show a performance drop on the new test set. Nonetheless, the findings from the original test set still hold on the new test data. Additionally, compared to the baseline methods, the relatively mild performance drop of JustiLM suggests the stronger generalizability and robustness of our distillation techniques.
6.3.5 Ablation on Distillation Techniques
Table 5 reports the results of ablations on our distillation techniques. We observe that distillation during LM training yields greater improvements than distillation during retriever training. This is expected, considering that the LM benefits from direct supervision from ground-truth justifications during training, while the retriever relies on weak supervision from the LM and the distillation of fact-check articles. Additionally, the LM has far more parameters than the retriever (3 billion versus 110 million), so it tends to capture more knowledge from the fact-check articles during distillation, leading to substantial performance improvements.
Results of ablations on different distillation techniques. Parentheses enclose standard deviation.
| Component | Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | SummaCC | MAUVE |
|---|---|---|---|---|---|---|
| Retriever | gret | 32.13(0.99) | 16.45(0.39) | 25.15(0.59) | 14.53(0.23) | 26.53(3.38) |
| Retriever | cret | 31.29(1.53) | 17.26(0.94) | 25.19(1.15) | 13.77(1.41) | 19.17(1.57) |
| LM | glm | 36.30(1.80) | 19.23(1.05) | 28.52(1.04) | 14.92(0.73) | 27.09(7.20) |
| LM | clm | 37.03(0.80) | 18.89(0.90) | 28.29(0.79) | 14.56(0.69) | 34.16(3.73) |
6.4 Joint Veracity-Justification Performance
In this section, we demonstrate that JustiLM can be easily extended for joint veracity prediction and justification generation. We follow Khan et al. (2022) to map the original veracity labels assigned by fact-checking websites, resulting in 388, 532, and 67 instances for the false, mixture, and true classes in the test split, respectively. Such class imbalance is consistent with the report by Khan et al. (2022). To mitigate the impact of imbalanced class distribution, we balance the 30 training shots across the three classes by randomly sampling 10 instances per class from the training set.
We make the LM generate the justification and the veracity label at the same time. For veracity label prediction, let ycls,i denote a candidate veracity label; its predicted score is the likelihood the LM assigns to it conditioned on the claim and the retrieved documents, following Liu et al. (2022). We rank all classes by their predicted scores and select the top-ranked class. During training, we obtain the prediction probabilities by applying a softmax over the predicted scores and use cross-entropy as the loss function.
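A minimal sketch of this label-scoring step is given below, assuming the summed token log-probabilities of each verbalized label are available from the LM; the length normalization is our own assumption rather than a detail confirmed by the text.

```python
import torch
import torch.nn.functional as F


def predict_veracity(
    label_log_likelihoods: torch.Tensor,  # (C,) summed token log-probs per verbalized label
    label_lengths: torch.Tensor,          # (C,) token count of each verbalized label
    gold_index: int = -1,
):
    """Rank candidate labels by LM score; return the prediction and, optionally, the training loss."""
    scores = label_log_likelihoods / label_lengths  # length normalization is our assumption
    predicted = int(torch.argmax(scores))
    loss = None
    if gold_index >= 0:
        # Softmax over label scores + cross-entropy, as described in the text.
        loss = F.cross_entropy(scores.unsqueeze(0), torch.tensor([gold_index]))
    return predicted, loss
```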
Table 6 presents the results. Atlas-CLS, which directly predicts the veracity label with Atlas, shows a limited improvement in macro-F1 over the Majority baseline. This suggests that predicting the veracity of real-world claims remains challenging for the original RAG model in a few-shot setting. When performing joint veracity prediction and justification generation with LM training, our method achieves a substantial boost in verdict prediction. Specifically, we obtain absolute improvements of 18.19 and 15.52 in macro-F1 using the article-level and chunk-level techniques, respectively. This indicates that justification generation can help veracity prediction by consolidating evidence from retrieved documents. We also find that jointly training JustiLM with the veracity prediction task does not improve justification generation, which is consistent with the findings of Atanasova et al. (2020). We conjecture that it remains challenging for the model to boost both tasks simultaneously with few training instances. Potential solutions could leverage a larger multi-task training dataset, such as T0 (Sanh et al., 2022), or an independent veracity classifier jointly trained with the retriever and the LM; however, both options require additional data and computational resources. We leave this for future studies.
Results of joint veracity prediction and justification generation. Parentheses enclose standard deviation.
| Method | macro-F1 | ROUGE-1 | ROUGE-2 | ROUGE-L | SummaCC | MAUVE |
|---|---|---|---|---|---|---|
| Majority | 23.34(−) | – | – | – | – | – |
| Atlas-CLS | 25.81(0.46) | – | – | – | – | – |
| JustiLM-glm | 44.00(1.51) | 32.52(1.39) | 18.20(0.61) | 26.34(0.88) | 14.76(1.17) | 18.68(1.80) |
| JustiLM-clm | 41.33(4.49) | 35.87(1.02) | 19.52(0.68) | 28.22(0.86) | 15.02(0.95) | 32.98(2.96) |
6.5 Case Study
Table 7 presents example justifications generated by JustiLM, the strong ICL baseline GPT-4, and the few-shot RAG model Atlas. Atlas’s justification captures that the GOP bill does not change the law, but fails to highlight the key point that women still have viable avenues to address pay discrimination. Both GPT-4 and JustiLM successfully refute the claim by providing that crucial point.
An example of generated justifications by different methods compared to the ground-truth justification.
Claim | Says Wisconsin women facing pay discrimination can’t “do something about it” under bill passed by Republicans. |
Ground Truth | Falk said women facing pay discrimination can’t “do something about it” under a bill passed by Wisconsin Republicans. But of the four legal options available under the current law, only one would be eliminated by the bill. We rate Falk’s statement False. |
Atlas | Falk said that under a Republican-backed bill, women facing pay discrimination can’t “do something about it.” Falk said that under the GOP bill, women facing pay discrimination can’t “do something about it.” But the bill doesn’t change the law. It simply makes it harder for women to file lawsuits. Falk said that under the GOP bill, women facing pay discrimination can’t “do something about it.” |
GPT-4 | A statement by Kathleen Falk, a former Dane County executive, claims that women in Wisconsin would no longer have “a right to do something” about pay discrimination on the job if a bill is signed by Governor Walker. The bill in question would take away the right of women who suffer pay discrimination to sue in state court. However, there are still other avenues for employment discrimination victims, including filing a complaint with the state Equal Rights Division, filing a complaint with the federal Equal Employment Opportunity Commission, and filing a lawsuit in federal court. Therefore, while the bill may limit one avenue for action, it does not completely prevent women from taking action against pay discrimination. |
JustiLM | Falk said that women facing pay discrimination can’t “do something about it” under a bill passed by Republicans. The measure would make it harder on victims of employment discrimination because it is easier to sue in state court than in federal court. But eliminating the state lawsuit option wouldn’t affect the three other avenues of legal recourse available to workplace discrimination victims. And Falk’s claim was women facing pay discrimination would have no options at all. |
More specifically, Atlas falls short in delivering a convincing and comprehensive justification due to its tendency to produce incomplete and repetitive responses. In contrast, GPT-4, as the SoTA LLM, impresses with its ability to generate a well-rounded justification, but its output is lengthy and less focused. JustiLM, on the other hand, highlights the key points for fact-checking the claim with a precise and refined justification. Despite its relatively small model size, JustiLM may not always offer the same level of detail as GPT-4, but it produces concise and accurate justifications that closely resemble the ground truth, making it promising and valuable for users seeking quick and trustworthy fact-check explanations.
7 Discussion
The original long-form reference documents and fact-check articles contain no passage- or sentence-level annotations, which are costly to obtain, so we have no ground truth for training and evaluating an evidence retrieval model. Since specific evidence is buried in these long documents, directly using them for training would introduce a considerable amount of irrelevant text. While we mitigate this challenge by splitting each original reference document into disjoint 100-word chunks for retrieval, we believe that acquiring fine-grained evidence annotations would benefit both training and evaluation.
In our experimental setup, evidence retrieval is conducted under the assumption that the evidence needed to fact-check a given claim exists in the retrieval corpus. In a real-world search scenario, however, gold evidence may be absent from the corpus, and it would be valuable to investigate how justification generation methods perform in this more challenging setting, for instance by varying the ratio of gold reference documents in the retrieval corpus.
Additionally, while our experiments include the NLI-based metric SummaCC, providing an automated evaluation of the factuality of generated justifications, we believe that a sound human evaluation should involve professional fact-checkers. Such an evaluation, not conducted here, requires close collaboration with fact-checking organizations and particular networking and setup, such as integration with their existing workflows and incentives for them to participate; this could warrant a separate study in itself and is part of our future plan.
As the SoTA LLM, GPT-4 shows strong ability in generating factually consistent and informative justifications; developing justification methods based on such powerful API-based LLMs is therefore beneficial. However, these black-box LLMs impose strict constraints on access to their internal information, which poses important open challenges for interacting with them deeply and providing supervision signals to the retriever.
In this work, we address the justification generation task with a realistic approach that generates justifications based on retrieved evidence using an end-to-end retrieval-augmented language model. Furthermore, incorporating our distillation techniques into the RAG model Atlas yields a marked improvement in performance, affirming that using fact-check articles to provide supervision signals during training can strongly enhance justification generation.
8 Conclusion and Future Work
We propose JustiLM, a justification generation language model for realistic fact-checking of real-world news claims, where justifications are generated based on evidence retrieved from a large textual corpus, and we introduce a new benchmark dataset, ExClaim, for this task. JustiLM leverages fact-check articles as auxiliary resources during training to distill article-level and chunk-level training signals that guide justification writing. Experimental results show that JustiLM outperforms ICL-enabled Flan-T5 and Llama2, as well as the SoTA few-shot RAG model Atlas, and demonstrates comparable and promising performance relative to GPT-4.
In the future, we will explore adapting various LLM-based reasoning methods (e.g., CoT [Wei et al., 2022], ToT [Yao et al., 2023b], and GoT [Besta et al., 2023]) into JustiLM to enhance its reasoning ability for justification generation, aiming to help the LM provide better signals for guiding evidence retrieval and to improve reasoning over retrieved evidence during generation. We also plan to develop a human evaluation scheme involving fact-checking experts to provide a more comprehensive assessment of machine-generated justifications.
Acknowledgments
We would like to thank the anonymous reviewers and action editor Fei Liu for their helpful suggestions. We are also grateful to Alessandro Moschitti for his valuable comments and discussion.
References