Abstract
Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as “Obama is a __ by profession”. These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as “Obama worked as a __ ” may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know. We have released the code and the resulting LM Prompt And Query Archive (LPAQA) at https://github.com/jzbjyb/LPAQA.
1 Introduction
Recent years have seen the primary role of language models (LMs) transition from generating or evaluating the fluency of natural text (Mikolov and Zweig, 2012; Merity et al., 2018; Melis et al., 2018; Gamon et al., 2005) to being a powerful tool for text understanding. This understanding has mainly been achieved through the use of language modeling as a pre-training task for feature extractors, where the hidden vectors learned through a language modeling objective are then used in down-stream language understanding systems (Dai and Le, 2015; Melamud et al., 2016; Peters et al., 2018; Devlin et al., 2019).
Interestingly, it is also becoming apparent that LMs themselves can be used as a tool for text understanding by formulating queries in natural language and either generating textual answers directly (McCann et al., 2018; Radford et al., 2019), or assessing multiple choices and picking the most likely one (Zweig and Burges, 2011; Rajani et al., 2019). For example, LMs have been used to answer factoid questions (Radford et al., 2019), answer common sense queries (Trinh and Le, 2018; Sap et al., 2019), or extract factual knowledge about relations between entities (Petroni et al., 2019; Baldini Soares et al., 2019). Regardless of the end task, the knowledge contained in LMs is probed by providing a prompt, and letting the LM either generate the continuation of a prefix (e.g., “Barack Obama was born in __”), or predict missing words in a cloze-style template (e.g., “Barack Obama is a __ by profession”).
However, while this paradigm has been used to achieve a number of intriguing results regarding the knowledge expressed by LMs, these results usually rely on prompts that were manually created based on the intuition of the experimenter. Such manually created prompts (e.g., “Barack Obama was born in ___”) might be sub-optimal because LMs might have learned the target knowledge from substantially different contexts (e.g., “The birth place of Barack Obama is Honolulu, Hawaii.”) during training. It is therefore quite possible that a fact the LM does know cannot be retrieved because the prompt is not an effective query for it. Existing results are thus simply a lower bound on the extent of knowledge contained in LMs, and in fact, LMs may be even more knowledgeable than these initial results indicate. In this paper we ask the question: “How can we tighten this lower bound and get a more accurate estimate of the knowledge contained in state-of-the-art LMs?” This is interesting both scientifically, as a probe of the knowledge that LMs contain, and from an engineering perspective, as it will result in higher recall when using LMs as part of a knowledge extraction system.
In particular, we focus on the setting of Petroni et al. (2019) who examine extracting knowledge regarding the relations between entities (definitions in § 2). We propose two automatic methods to systematically improve the breadth and quality of the prompts used to query the existence of a relation (§ 3). Specifically, as shown in Figure 1, these are mining-based methods inspired by previous relation extraction methods (Ravichandran and Hovy, 2002), and paraphrasing-based methods that take a seed prompt (either manually created or automatically mined), and paraphrase it into several other semantically similar expressions. Further, because different prompts may work better when querying for different subject-object pairs, we also investigate lightweight ensemble methods to combine the answers from different prompts together (§ 4).
We experiment on the LAMA benchmark (Petroni et al., 2019), which is an English-language benchmark devised to test the ability of LMs to retrieve relations between entities (§ 5). We first demonstrate that improved prompts significantly improve accuracy on this task, with the one-best prompt extracted by our method raising accuracy from 31.1% to 34.1% on BERT-base (Devlin et al., 2019), with similar gains being obtained with BERT-large as well. We further demonstrate that using a diversity of prompts through ensembling further improves accuracy to 39.6%. We perform extensive analysis and ablations, gleaning insights both about how to best query the knowledge stored in LMs and about potential directions for incorporating knowledge into LMs themselves. Finally, we have released the resulting LM Prompt And Query Archive (LPAQA) to facilitate future experiments on probing knowledge contained in LMs.
2 Knowledge Retrieval from LMs
Retrieving factual knowledge from LMs is quite different from querying standard declarative knowledge bases (KBs). In standard KBs, users formulate their information needs as a structured query defined by the KB schema and query language. For example, SELECT ?y WHERE {wd:Q76 wdt:P19 ?y} is a SPARQL query to search the birth place of Barack_Obama. In contrast, LMs must be queried by natural language prompts, such as “Barack Obama was born in ___”, and the word assigned the highest probability in the blank will be returned as the answer. Unlike deterministic queries on KBs, this provides no guarantees of correctness or success.
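As a concrete illustration of this kind of cloze-style querying, the following minimal sketch uses the HuggingFace transformers fill-mask pipeline (our own example setup, not the paper's tooling) to rank candidate fillers for a manually written prompt:

```python
from transformers import pipeline

# Query a masked LM with a cloze-style prompt and inspect its top predictions.
# bert-base-uncased is used here purely for illustration.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prediction in unmasker("Barack Obama was born in [MASK]."):
    # each prediction carries the predicted token and its probability
    print(prediction["token_str"], round(prediction["score"], 3))
```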
In previous work (McCann et al., 2018; Radford et al., 2019; Petroni et al., 2019), the prompt used for each relation has been a single one, manually defined based on the intuition of the experimenter. As noted in the introduction, this method has no guarantee of being optimal, and thus we propose methods that learn effective prompts from a small set of training data consisting of gold subject-object pairs for each relation.
3 Prompt Generation
First, we tackle prompt generation: the task of generating a set of prompts for each relation r, where at least some of the prompts effectively trigger LMs to predict ground-truth objects. We employ two practical methods to either mine prompt candidates from a large corpus (§ 3.1) or diversify a seed prompt through paraphrasing (§ 3.2).
3.1 Mining-based Generation
Our first method is inspired by template-based relation extraction methods (Agichtein and Gravano, 2000; Ravichandran and Hovy, 2002), which are based on the observation that words in the vicinity of the subject x and object y in a large corpus often describe the relation r. Based on this intuition, we first identify all the Wikipedia sentences that contain both subjects and objects of a specific relation r using the assumption of distant supervision, then propose two methods to extract prompts.
Middle-word Prompts
Following the observation that words in the middle of the subject and object are often indicative of the relation, we directly use those words as prompts. For example, “Barack Obama was born in Hawaii” is converted into a prompt “x was born in y” by replacing the subject and the object with placeholders.
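A minimal sketch of middle-word prompt mining, assuming distantly supervised sentences and subject-object pairs are already available for a relation (the variable names and frequency-based ranking are our own simplifications):

```python
from collections import Counter

def mine_middle_word_prompts(pairs, sentences):
    """Collect middle-word prompts for one relation via distant supervision."""
    counts = Counter()
    for sentence in sentences:
        for subj, obj in pairs:
            i, j = sentence.find(subj), sentence.find(obj)
            if i == -1 or j == -1 or i >= j:  # require subject before object
                continue
            middle = sentence[i + len(subj):j].strip()
            if middle:
                counts[f"x {middle} y"] += 1
    # most frequent candidate prompts first
    return [prompt for prompt, _ in counts.most_common()]

# "Barack Obama was born in Hawaii" with the pair ("Barack Obama", "Hawaii")
# yields the prompt "x was born in y".
```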
Dependency-based Prompts
Toutanova et al. (2015) note that in cases where the words expressing the relation do not appear between the subject and the object (e.g., “The capital of France is Paris”, where “capital” precedes the subject “France”), templates based on syntactic analysis of the sentence can be more effective for relation extraction. We follow this insight in our second strategy for prompt creation, which parses sentences with a dependency parser to identify the shortest dependency path between the subject and object, then uses the phrase spanning from the leftmost word to the rightmost word in the dependency path as a prompt. For instance, the dependency path in the above example is “France of capital is Paris”, where the leftmost and rightmost words are “capital” and “Paris”, giving a prompt of “capital of x is y”.
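A sketch of this dependency-based strategy using spaCy and networkx (our choice of tools; single-token subject and object mentions are assumed for brevity):

```python
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")

def dependency_prompt(sentence, subj, obj):
    """Prompt from the span covering the shortest dependency path."""
    doc = nlp(sentence)
    graph = nx.Graph([(tok.i, child.i) for tok in doc for child in tok.children])
    s = next(t.i for t in doc if t.text == subj)
    o = next(t.i for t in doc if t.text == obj)
    path = nx.shortest_path(graph, source=s, target=o)
    left, right = min(path), max(path)  # leftmost/rightmost words on the path
    words = ["x" if t.i == s else "y" if t.i == o else t.text
             for t in doc[left:right + 1]]
    return " ".join(words)

# dependency_prompt("The capital of France is Paris", "France", "Paris")
# -> roughly "capital of x is y"
```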
Notably, these mining-based methods do not rely on any manually created prompts, and can thus be flexibly applied to any relation where we can obtain a set of subject-object pairs. This will result in diverse prompts, covering a wide variety of ways that the relation may be expressed in text. However, it may also be prone to noise, as many prompts acquired in this way may not be very indicative of the relation (e.g., “x,y”), even if they are frequent.
3.2 Paraphrasing-based Generation
Our second method for generating prompts is more targeted—it aims to improve lexical diversity while remaining relatively faithful to the original prompt. Specifically, we do so by paraphrasing the original prompt into other semantically similar or identical expressions. For example, if our original prompt is “x shares a border with y”, it may be paraphrased into “x has a common border with y” and “x adjoins y”. This is conceptually similar to query expansion techniques used in information retrieval that reformulate a given query to improve retrieval performance (Carpineto and Romano, 2012).
Although many methods could be used for paraphrasing (Romano et al., 2006; Bhagat and Ravichandran, 2008), we follow the simple method of using back-translation (Sennrich et al., 2016; Mallinson et al., 2017) to first translate the initial prompt into B candidates in another language, each of which is then back-translated into B candidates in the original language. We then rank the $B^2$ candidates based on their round-trip probability $P_{\text{forward}}(\bar{t} \mid \hat{t}) \cdot P_{\text{backward}}(t \mid \bar{t})$, where $\hat{t}$ is the initial prompt, $\bar{t}$ is the translated prompt in the other language, and $t$ is the final prompt, and keep the top T prompts.
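A sketch of the ranking step, with hypothetical translate_en_de / translate_de_en functions standing in for the actual translation models (each assumed to return B candidates with log-probabilities):

```python
def paraphrase_prompts(seed_prompt, translate_en_de, translate_de_en, B=7, T=40):
    """Back-translate a seed prompt and keep the top-T round-trip candidates."""
    scored = {}
    for german, logp_fwd in translate_en_de(seed_prompt, beam=B):
        for english, logp_bwd in translate_de_en(german, beam=B):
            # round-trip log-probability: forward + backward
            score = logp_fwd + logp_bwd
            scored[english] = max(score, scored.get(english, float("-inf")))
    ranked = sorted(scored, key=scored.get, reverse=True)
    return ranked[:T]
```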
4 Prompt Selection and Ensembling
In the previous section, we described methods to generate a set of candidate prompts for a particular relation r. Each of these prompts may be more or less effective at eliciting knowledge from the LM, and thus it is necessary to decide how to use these generated prompts at test time. In this section, we describe three methods to do so.
4.1 Top-1 Prompt Selection
4.2 Rank-based Ensemble
Next we examine methods that use not only the top-1 prompt, but instead combine multiple prompts together. The advantage of this is that the LM may have observed different entity pairs in different contexts within its training data, and having a variety of prompts may allow for elicitation of knowledge that appeared in these different contexts.
4.3 Optimized Ensemble
5 Main Experiments
5.1 Experimental Settings
In this section, we assess the extent to which our prompts can improve fact prediction performance, raising the lower bound on the knowledge we can discern is contained in LMs.
Dataset
As data, we use the T-REx subset (ElSahar et al., 2018) of the LAMA benchmark (Petroni et al., 2019), which has a broader set of 41 relations (compared with the Google-RE subset, which only covers 3). Each relation is associated with at most 1000 subject-object pairs from Wikidata, and a single manually designed prompt. To learn to mine prompts (§ 3.1), rank prompts (§ 4.2), or learn ensemble weights (§ 4.3), we create a separate training set of subject-object pairs, also from Wikidata, for each relation, with no overlap with the T-REx dataset. We denote this training set T-REx-train. For consistency with the T-REx dataset in LAMA, T-REx-train is also restricted to single-token objects. To investigate the generality of our method, we also report the performance of our methods on the Google-RE subset, which takes a similar form to T-REx but is relatively small and only covers three relations.
Pörner et al. (2019) note that some facts in LAMA can be recalled solely based on surface forms of entities, without memorizing facts. They filter out those easy-to-guess facts and create a more difficult benchmark, denoted as LAMA-UHN. We also conduct experiments on the T-REx subset of LAMA-UHN (i.e., T-REx-UHN) to investigate whether our methods can still obtain improvements on this harder benchmark. Dataset statistics are summarized in Table 1.
Table 1: Dataset statistics.

| Properties | T-REx | T-REx-UHN | T-REx-train |
| --- | --- | --- | --- |
| #sub-obj pairs | 830.2 | 661.1 | 948.7 |
| #unique subjects | 767.8 | 600.8 | 880.1 |
| #unique objects | 150.9 | 120.5 | 354.6 |
| object entropy | 3.6 | 3.4 | 4.4 |
Models
As for the models to probe, in our main experiments we use the standard BERT-base and BERT-large models (Devlin et al., 2019). We also perform some experiments with other pre-trained models enhanced with external entity representations, namely, ERNIE (Zhang et al., 2019) and KnowBert (Peters et al., 2019), which we believe may do better on recall of entities.
Evaluation Metrics
Methods
We experiment with different methods for prompt generation and selection/ensembling, and compare them with the manually designed prompts used in Petroni et al. (2019). Majority refers to predicting the majority object for each relation, as mentioned above. Man is the baseline from Petroni et al. (2019) that only uses the manually designed prompts for retrieval. Mine (§ 3.1) uses the prompts mined from Wikipedia through both middle words and dependency paths, and Mine+Man combines them with the manual prompts. Mine+Para (§ 3.2) paraphrases the highest-ranked mined prompt for each relation, while Man+Para uses the manual one instead.
The prompts are combined either by averaging the log probabilities of the TopK highest-ranked prompts (§ 4.2) or by weighting them with the weights learned through optimization (§ 4.3; Opti.). Oracle represents an upper bound on the performance of the generated prompts, where a fact is judged as correct if any one of the prompts allows the LM to successfully predict the object.
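The combination step can be sketched as follows (our own notation): uniform weights over the top-K prompts recover the rank-based average, while learned weights correspond to the optimized ensemble, with scores combined in log space as described in the notes.

```python
import numpy as np

def ensemble_prediction(log_probs, weights=None):
    """Combine per-prompt log-probabilities over the vocabulary.

    log_probs: array of shape (num_prompts, vocab_size) for one subject-relation query.
    weights:   per-prompt weights summing to 1; defaults to a uniform average.
    """
    log_probs = np.asarray(log_probs)
    if weights is None:
        weights = np.full(len(log_probs), 1.0 / len(log_probs))
    combined = weights @ log_probs          # log-linear combination
    return int(np.argmax(combined))         # index of the predicted object token
```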
Implementation Details
We use the T = 40 most frequent prompts generated through either mining or paraphrasing in all experiments, and the number of candidates in back-translation is set to B = 7. We remove prompts that contain only stop words or punctuation, or that are longer than 10 words, to reduce noise. We use the round-trip English-German neural machine translation models pre-trained on WMT’19 (Ng et al., 2019) for back-translation, as English-German is one of the most highly resourced language pairs. When optimizing ensemble parameters, we use Adam (Kingma and Ba, 2015) with default parameters and a batch size of 32.
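A possible implementation of the prompt filter described above (the stop-word list and the treatment of the x/y placeholders are our assumptions, not the authors' code):

```python
import string
from nltk.corpus import stopwords  # requires nltk.download("stopwords")

STOPWORDS = set(stopwords.words("english"))

def keep_prompt(prompt, max_words=10):
    """Drop prompts longer than max_words or containing only stop words/punctuation."""
    tokens = [t for t in prompt.split() if t not in ("x", "y")]
    if len(tokens) > max_words:
        return False
    stripped = [t.strip(string.punctuation).lower() for t in tokens]
    return any(t and t not in STOPWORDS for t in stripped)
```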
5.2 Evaluation Results
Table 2: Micro-averaged accuracy (%) on the T-REx subset.

| Prompts | Top1 | Top3 | Top5 | Opti. | Oracle |
| --- | --- | --- | --- | --- | --- |
| BERT-base (Man=31.1) | | | | | |
| Mine | 31.4 | 34.2 | 34.7 | 38.9 | 50.7 |
| Mine+Man | 31.6 | 35.9 | 35.1 | 39.6 | 52.6 |
| Mine+Para | 32.7 | 34.0 | 34.5 | 36.2 | 48.1 |
| Man+Para | 34.1 | 35.8 | 36.6 | 37.3 | 47.9 |
| BERT-large (Man=32.3) | | | | | |
| Mine | 37.0 | 37.0 | 36.4 | 43.7 | 54.4 |
| Mine+Man | 39.4 | 40.6 | 38.4 | 43.9 | 56.1 |
| Mine+Para | 37.8 | 38.6 | 38.6 | 40.1 | 51.8 |
| Man+Para | 35.9 | 37.3 | 38.0 | 38.8 | 50.0 |
Table 3: Macro-averaged accuracy (%) on the T-REx subset.

| Prompts | Top1 | Top3 | Top5 | Opti. | Oracle |
| --- | --- | --- | --- | --- | --- |
| BERT-base (Man=22.8) | | | | | |
| Mine | 20.7 | 22.7 | 23.9 | 25.7 | 36.2 |
| Mine+Man | 21.3 | 23.8 | 24.8 | 26.6 | 38.0 |
| Mine+Para | 21.2 | 22.4 | 23.0 | 23.6 | 34.1 |
| Man+Para | 22.8 | 23.8 | 24.6 | 25.0 | 34.9 |
| BERT-large (Man=25.7) | | | | | |
| Mine | 26.4 | 26.3 | 25.9 | 30.1 | 40.7 |
| Mine+Man | 28.1 | 28.3 | 27.3 | 30.7 | 42.2 |
| Mine+Para | 26.2 | 27.1 | 27.0 | 27.1 | 38.3 |
| Man+Para | 25.9 | 27.8 | 28.3 | 28.0 | 39.3 |
Single Prompt Experiments
When only one prompt is used (in the first Top1 column in both tables), the best of the proposed prompt generation methods increases micro-averaged accuracy from 31.1% to 34.1% on BERT-base, and from 32.3% to 39.4% on BERT-large. This demonstrates that the manually created prompts are a somewhat weak lower bound; there are other prompts that further improve the ability to query knowledge from LMs. Table 4 shows some of the mined prompts that resulted in a large performance gain compared with the manual ones. For the relation religion, “x who converted to y” improved 60.0% over the manually defined prompt “x is affiliated with the y religion”, and for the relation subclass_of, “x is a type of y” raised the accuracy by 22.7% over “x is a subclass of y”. It can be seen that the largest gains from using mined prompts seem to occur in cases where the manually defined prompt is more complicated syntactically (e.g., the former), or when it uses less common wording (e.g., the latter) than the mined prompt.
Table 4: Mined prompts that led to large accuracy gains over the manual prompts.

| ID | Relations | Manual Prompts | Mined Prompts | Acc. Gain |
| --- | --- | --- | --- | --- |
| P140 | religion | x is affiliated with the y religion | x who converted to y | +60.0 |
| P159 | headquarters location | The headquarter of x is in y | x is based in y | +4.9 |
| P20 | place of death | x died in y | x died at his home in y | +4.6 |
| P264 | record label | x is represented by music label y | x recorded for y | +17.2 |
| P279 | subclass of | x is a subclass of y | x is a type of y | +22.7 |
| P39 | position held | x has the position of y | x is elected y | +7.9 |
Prompt Ensembling
Next we turn to experiments that use multiple prompts to query the LM. Comparing the single-prompt results in column 1 to the ensembled results in the following three columns, we can see that ensembling multiple prompts almost always leads to better performance. The simple average used in Top3 and Top5 outperforms Top1 across different prompt generation methods. The optimized ensemble further raises micro-averaged accuracy to 38.9% and 43.7% on BERT-base and BERT-large respectively, outperforming the rank-based ensemble by a large margin. These two sets of results demonstrate that diverse prompts can indeed query the LM in different ways, and that the optimization-based method is able to find weights that effectively combine different prompts together.
We list the learned weights of top-3 mined prompts and accuracy gain over only using the top-1 prompt in Table 5. Weights tend to concentrate on one particular prompt, and the other prompts serve as complements. We also depict the performance of the rank-based ensemble method with respect to the number of prompts in Figure 2. For mined prompts, top-2 or top-3 usually gives us the best results, while for paraphrased prompts, top-5 is the best. Incorporating more prompts does not always improve accuracy, a finding consistent with the rapidly decreasing weights learned by the optimization-based method. The gap between Oracle and Opti. indicates that there is still space for improvement using better ensemble methods.
Table 5: Learned weights of the top-3 mined prompts and accuracy gain over only using the top-1 prompt.

| ID | Relations | Prompts and Weights | Acc. Gain |
| --- | --- | --- | --- |
| P127 | owned by | x is owned by y (0.485); x was acquired by y (0.151); x division of y (0.151) | +7.0 |
| P140 | religion | x who converted to y (0.615); y tirthankara x (0.190); y dedicated to x (0.110) | +12.2 |
| P176 | manufacturer | y introduced the x (0.594); y announced the x (0.286); x attributed to the y (0.111) | +7.0 |
Mining vs. Paraphrasing
For the rank-based ensembles (Top1, 3, 5), prompts generated by paraphrasing usually perform better than mined prompts, while for the optimization-based ensemble (Opti.), mined prompts perform better. We conjecture this is because mined prompts exhibit more variation compared to paraphrases, and proper weighting is of central importance. This difference in the variation can be observed in the average edit distance between the prompts of each class, which is 3.27 and 2.73 for mined and paraphrased prompts respectively. However, the improvement led by ensembling paraphrases is still significant over just using one prompt (Top1 vs. Opti.), raising micro-averaged accuracy from 32.7% to 36.2% on BERT-base, and from 37.8% to 40.1% on BERT-large. This indicates that even small modifications to prompts can result in relatively large changes in predictions. Table 6 demonstrates cases where modification of one word (either function or content word) leads to significant accuracy improvements, indicating that large-scale LMs are still brittle to small changes in the ways they are queried.
Table 6: One-word prompt modifications that lead to accuracy gains (original → modified where recoverable).

| ID | Modifications | Acc. Gain |
| --- | --- | --- |
| P413 | x plays in → at y position | +23.2 |
| P495 | x was created → made in y | +10.8 |
| P495 | x was → is created in y | +10.0 |
| P361 | x is a part of y | +2.7 |
| P413 | x plays in y position | +2.2 |
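The prompt-variation comparison above uses the average pairwise edit distance between prompts; a minimal sketch of that computation follows (token-level distance is our assumption, as the paper does not specify the granularity):

```python
from itertools import combinations

def edit_distance(a, b):
    """Token-level Levenshtein distance between two prompts."""
    a, b = a.split(), b.split()
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def average_pairwise_distance(prompts):
    pairs = list(combinations(prompts, 2))
    return sum(edit_distance(a, b) for a, b in pairs) / len(pairs)
```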
Middle-word vs. Dependency-based
We compare the performance of only using middle-word prompts and concatenating them with dependency-based prompts in Table 7. The improvements confirm our intuition that words belonging to the dependency path but not in the middle of the subject and object are also indicative of the relation.
Micro vs. Macro
Comparing Tables 2 and 3, we can see that macro-averaged accuracy is much lower than micro-averaged accuracy, indicating that macro-averaged accuracy is a more challenging metric that evaluates how many unique objects LMs know. Our optimization-based method improves macro-averaged accuracy from 22.8% to 25.7% on BERT-base, and from 25.7% to 30.1% on BERT-large. This again confirms the effectiveness of ensembling multiple prompts, but the gains are somewhat smaller. Notably, in our optimization-based methods, the ensemble weights are optimized on each example in the training set, which is more conducive to optimizing micro-averaged accuracy. Optimization to improve macro-averaged accuracy is potentially an interesting direction for future work that may result in prompts more generally applicable to different types of objects.
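As a sketch of the distinction between the two metrics (our reading; the exact definitions follow the paper's evaluation setup):

```python
from collections import defaultdict

def micro_accuracy(examples):
    """examples: list of (gold_object, predicted_object) pairs for one relation."""
    return sum(gold == pred for gold, pred in examples) / len(examples)

def macro_accuracy(examples):
    """Average accuracy per unique gold object, so frequent objects do not dominate."""
    by_object = defaultdict(list)
    for gold, pred in examples:
        by_object[gold].append(gold == pred)
    return sum(sum(hits) / len(hits) for hits in by_object.values()) / len(by_object)
```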
Performance of Different LMs
In Table 8, we compare BERT with ERNIE and KnowBert, which are enhanced with external knowledge by explicitly incorporating entity embeddings. ERNIE outperforms BERT by 1 point even with the manually defined prompts, but our prompt generation methods further emphasize the difference between the two models, with the highest accuracy numbers differing by 4.2 points using the Mine+Man method. This indicates that if LMs are queried effectively, the differences between highly performant models may become clearer. KnowBert underperforms BERT on LAMA, which is opposite to the observation made in Peters et al. (2019). This is probably because multi-token subjects/objects are used to evaluate KnowBert in Peters et al. (2019), while LAMA contains only single-token objects.
Table 8: Performance of BERT, ERNIE, and KnowBert with different prompt methods.

| Model | Man | Mine | Mine+Man | Mine+Para | Man+Para |
| --- | --- | --- | --- | --- | --- |
| BERT | 31.1 | 38.9 | 39.6 | 36.2 | 37.3 |
| ERNIE | 32.1 | 42.3 | 43.8 | 40.1 | 41.1 |
| KnowBert | 26.2 | 34.1 | 34.6 | 31.9 | 32.1 |
LAMA-UHN Evaluation
Results on the LAMA-UHN benchmark are reported in Table 9. Although overall performance drops dramatically compared to the original LAMA benchmark (Table 2), the optimized ensembles still outperform manual prompts by a large margin, indicating that our methods are effective at retrieving knowledge that cannot be inferred from surface forms alone.
5.3 Analysis
Next, we perform further analysis to better understand what type of prompts proved most suitable for facilitating retrieval of knowledge from LMs.
Prediction Consistency by Prompt
Performance on Google-RE
We also report the performance of the optimized ensemble on the Google-RE subset in Table 10. Again, ensembling diverse prompts improves accuracy for both the BERT-base and BERT-large models. The gains are somewhat smaller than those on the T-REx subset, which might be because there are only three relations, and one of them (predicting the birth_date of a person) is particularly hard, to the extent that only one prompt yields non-zero accuracy.
POS-based Analysis
Next, we try to examine which types of prompts tend to be effective in the abstract by examining the part-of-speech (POS) patterns of prompts that successfully extract knowledge from LMs. In open information extraction systems (Banko et al., 2007), manually defined patterns are often leveraged to filter out noisy relational phrases. For example, ReVerb (Fader et al., 2011) incorporates three syntactic constraints listed in Table 11 to improve the coherence and informativeness of the mined relational phrases. To test whether these patterns are also indicative of the ability of a prompt to retrieve knowledge from LMs, we use these three patterns to group prompts generated by our methods into four clusters, where the “other” cluster contains prompts that do not match any pattern. We then calculate the rank of each prompt within the extracted prompts, and plot the distribution of ranks using box plots in Figure 4. We can see that the average rank of prompts matching these patterns is better than that of prompts in the “other” group, confirming our intuition that good prompts should conform to these patterns. Some of the best-performing prompts’ POS signatures are “x VBD VBN IN y” (e.g., “x was born in y”) and “x VBZ DT NN IN y” (e.g., “x is the capital of y”).
Cross-model Consistency
Finally, it is of interest to know whether the prompts that we are extracting are highly tailored to a specific model, or whether they can generalize across models. To do so, we use two settings: One compares BERT-base and BERT-large, the same model architecture with different sizes; the other compares BERT-base and ERNIE, different model architectures with a comparable size. In each setting, we compare when the optimization-based ensembles are trained on the same model, or when they are trained on one model and tested on the other. As shown in Tables 12 and 13, we found that in general there is usually some drop in performance in the cross-model scenario (third and fifth columns), but the losses tend to be small, and the highest performance when querying BERT-base is actually achieved by the weights optimized on BERT-large. Notably, the best accuracies of 40.1% and 42.2% (Table 12) and 39.5% and 40.5% (Table 13) with the weights optimized on the other model are still much higher than those obtained by the manual prompts, indicating that optimized prompts still afford large gains across models. Another interesting observation is that the drop in performance on ERNIE (last two columns in Table 13) is larger than that on BERT-large (last two columns in Table 12) using weights optimized on BERT-base, indicating that models sharing the same architecture benefit more from the same prompts.
Table 12: Cross-model evaluation between BERT-base and BERT-large: accuracy when ensemble weights are trained on the same model versus on the other model.

| Prompts | Test: base, Train: base | Test: base, Train: large | Test: large, Train: large | Test: large, Train: base |
| --- | --- | --- | --- | --- |
| Mine | 38.9 | 38.7 | 43.7 | 42.2 |
| Mine+Man | 39.6 | 40.1 | 43.9 | 42.2 |
| Mine+Para | 36.2 | 35.6 | 40.1 | 39.0 |
| Man+Para | 37.3 | 35.6 | 38.8 | 37.5 |
Table 13: Cross-model evaluation between BERT and ERNIE: accuracy when ensemble weights are trained on the same model versus on the other model.

| Prompts | Test: BERT, Train: BERT | Test: BERT, Train: ERNIE | Test: ERNIE, Train: ERNIE | Test: ERNIE, Train: BERT |
| --- | --- | --- | --- | --- |
| Mine | 38.9 | 38.0 | 42.3 | 38.7 |
| Mine+Man | 39.6 | 39.5 | 43.8 | 40.5 |
| Mine+Para | 36.2 | 34.2 | 40.1 | 39.0 |
| Man+Para | 37.3 | 35.2 | 41.1 | 40.3 |
Linear vs. Log-linear Combination
6 Omitted Design Elements
6.1 LM-aware Prompt Generation
We used this method to refine all the mined and manual prompts on the T-REx-train dataset, and display their performance on the T-REx dataset in Table 14. After fine-tuning, the oracle performance increased significantly, while the ensemble performances (both rank-based and optimization-based) dropped slightly. This indicates that LM-aware fine-tuning has the potential to discover better prompts, but some portion of the refined prompts may have over-fit to the training set upon which they were optimized.
6.2 Forward and Backward Probabilities
Finally, given class imbalance and the propensity of the model to over-predict the majority object, we examine a method to encourage the model to predict subject-object pairs that are more strongly aligned. Inspired by the maximum mutual information objective used in Li et al. (2016a), we add the backward log probability of each prompt to our optimization-based scoring function in Equation 3. Due to the large search space for objects, we turn to an approximation approach that only computes backward probability for the most probable B objects given by the forward probability at both training and test time. As shown in Table 15, the improvement resulting from backward probability is small, indicating that a diversity-promoting scoring function might not be necessary for knowledge retrieval from LMs.
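A sketch of this re-scoring idea (backward_log_prob is a hypothetical callable returning the log-probability of the subject given the prompt with a candidate object filled in):

```python
import numpy as np

def rescore_with_backward(forward_log_probs, backward_log_prob, top_b=10):
    """Re-rank the top-B candidate objects by adding a backward log-probability."""
    forward_log_probs = np.asarray(forward_log_probs)
    candidates = np.argsort(forward_log_probs)[::-1][:top_b]
    scores = {int(obj): forward_log_probs[obj] + backward_log_prob(int(obj))
              for obj in candidates}
    return max(scores, key=scores.get)  # object id with the best combined score
```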
7 Related Work
Much work has focused on understanding the internal representations in neural NLP models (Belinkov and Glass, 2019), either by using extrinsic probing tasks to examine whether certain linguistic properties can be predicted from those representations (Shi et al., 2016; Linzen et al., 2016; Belinkov et al., 2017), or by ablations to the models to investigate how behavior varies (Li et al., 2016b; Smith et al., 2017). For contextualized representations in particular, a broad suite of NLP tasks are used to analyze both syntactic and semantic properties, providing evidence that contextualized representations encode linguistic knowledge in different layers (Hewitt and Manning, 2019; Tenney et al., 2019a; Tenney et al., 2019b; Jawahar et al., 2019; Goldberg, 2019).
Different from analyses probing the representations themselves, our work follows Petroni et al. (2019); Pörner et al. (2019) in probing for factual knowledge. They use manually defined prompts, which may be under-estimating the true performance obtainable by LMs. Concurrently to this work, Bouraoui et al. (2020) made a similar observation that using different prompts can help better extract relational knowledge from LMs, but they use models explicitly trained for relation extraction whereas our methods examine the knowledge included in LMs without any additional training.
Orthogonally, some previous works integrate external knowledge bases so that the language generation process is explicitly conditioned on symbolic knowledge (Ahn et al., 2016; Yang et al., 2017; Logan et al., 2019; Hayashi et al., 2020). Similar extensions have been applied to pre-trained LMs like BERT, where contextualized representations are enhanced with entity embeddings (Zhang et al., 2019; Peters et al., 2019; Pörner et al., 2019). In contrast, we focus on better knowledge retrieval through prompts from LMs as-is, without modifying them.
8 Conclusion
In this paper, we examined the importance of the prompts used in retrieving factual knowledge from language models. We propose mining-based and paraphrasing-based methods to systematically generate diverse prompts to query specific pieces of relational knowledge. Those prompts, when combined together, improve factual knowledge retrieval accuracy by 8%, outperforming manually designed prompts by a large margin. Our analysis indicates that LMs are indeed more knowledgeable than initially indicated by previous results, but they are also quite sensitive to how we query them. This indicates potential future directions such as (1) more robust LMs that can be queried in different ways but still return similar results, (2) methods to incorporate factual knowledge in LMs, and (3) further improvements in optimizing methods to query LMs for knowledge. Finally, we have released all our learned prompts to the community as the LM Prompt and Query Archive (LPAQA), available at: https://github.com/jzbjyb/LPAQA.
Acknowledgments
This work was supported by a gift from Bosch Research and NSF award no. 1815287. We would like to thank Paul Michel, Hiroaki Hayashi, Pengcheng Yin, and Shuyan Zhou for their insightful comments and suggestions.
Notes
Some models we use in this paper, e.g., BERT (Devlin et al., 2019), are bi-directional, and do not directly define a probability distribution over text, which is the underlying definition of an LM. Nonetheless, we call them LMs for simplicity.
We can also go the other way around by filling in the objects and predicting the missing subjects. Since our focus is on improving prompts, we choose to be consistent with Petroni et al. (2019) to make a fair comparison, and leave exploring other settings to future work. Also notably, Petroni et al. (2019) only use objects consisting of a single token, so we only need to predict one word for the missing slot.
We restrict to masked LMs in this paper because the missing slot might not be the last token in the sentence and computing this probability in traditional left-to-right LMs using Bayes’ theorem is not tractable.
Intuitively, because we are combining together scores in the log space, this has the effect of penalizing objects that are very unlikely given any certain prompt in the collection. We also compare with linear combination in ablations in § 5.3.
In LAMA, it is called “P@1.” There might be multiple correct answers for some cases, e.g., a person speaking multiple languages, but we only use one ground truth. We will leave exploring more advanced evaluation methods to future work.
We use the ranking position of a prompt to represent its quality instead of its accuracy because accuracy distributions of different relations might span different ranges, making accuracy not directly comparable across relations.
In theory, this algorithm can be applied to both masked LMs like BERT and traditional left-to-right LMs, since the masked probability can be computed using Bayes’ theorem for traditional LMs. However, in practice, due to the large size of vocabulary, it can only be approximated with beam search, or computed with more complicated continuous optimization algorithms (Hoang et al., 2017).
References
Author notes
The first two authors contributed equally.