Abstract
Probing has become a go-to methodology for interpreting and analyzing deep neural models in natural language processing. However, there is still a lack of understanding of the limitations and weaknesses of various types of probes. In this work, we suggest a strategy for input-level intervention on naturalistic sentences. Using our approach, we intervene on the morpho-syntactic features of a sentence, while keeping the rest of the sentence unchanged. Such an intervention allows us to causally probe pre-trained models. We apply our naturalistic causal probing framework to analyze the effects of grammatical gender and number on contextualized representations extracted from three pre-trained models in Spanish, the multilingual versions of BERT, RoBERTa, and GPT-2. Our experiments suggest that naturalistic interventions lead to stable estimates of the causal effects of various linguistic properties. Moreover, our experiments demonstrate the importance of naturalistic causal probing when analyzing pre-trained models.
https://github.com/rycolab/naturalistic-causal-probing

1 Introduction
Contextualized word representations are a byproduct of pre-trained neural language models and have led to improvements in performance on a myriad of downstream natural language processing (NLP) tasks (Joshi et al., 2019; Kondratyuk, 2019; Zellers et al., 2019; Brown et al., 2020). Despite this performance improvement, though, it is still not obvious to researchers how these representations encode linguistic information. One prominent line of work attempts to shed light on this topic through probing (Alain and Bengio, 2017), also referred to as auxiliary prediction (Adi et al., 2017) or diagnostic classification (Hupkes et al., 2018). In machine learning parlance, a probe is a supervised classifier that is trained to predict a property of interest from the target model’s representations. If the probe manages to predict the property with high accuracy, one may conclude that these representations encode information about the probed property.
While widely used, probing is not without its limitations.1 For instance, probing a pre-trained model for grammatical gender can only tell us whether information about gender is present in the representations;2 it cannot, however, tell us how, or whether, the model actually uses information about gender in its predictions (Ravichander et al., 2021; Elazar et al., 2021; Ravfogel et al., 2021; Lasri et al., 2022). Furthermore, supervised probing cannot tell us whether the property under consideration is directly encoded in the representations, or whether it can be recovered from the representations alone due to spurious correlations among various linguistic properties. In other words, while we might find correlations between a probed property and representations through supervised probing techniques, we cannot uncover causal relationships between them.
In this work, we propose a new strategy for input-level intervention on naturalistic data to obtain what we call naturalistic counterfactuals, which we then use to perform causal probing. Through such input-level interventions, we can ascertain whether a particular linguistic property has a causal effect on a model's representations. A number of prior papers have attempted to tease apart causal dependencies using either input-level or representation-level interventions. For instance, work on representational counterfactuals has investigated causal dependencies via interventions on neural representations. While quite versatile, representation-level interventions make it hard—if not impossible—to determine whether we are only intervening on our property of interest. Another proposed method, templated counterfactuals, does perform an input-level intervention strategy, which is guaranteed to only affect the probed property. Under such an approach, the researcher first creates a number of templated sentences (either manually or automatically), which they then fill with a set of minimal-pair words to generate counterfactual examples. However, template-based interventions are limited by design: They do not reflect the diversity of sentences present in natural language, and, thus, lead to biased estimates of the measured causal effects. Naturalistic counterfactuals improve upon template-based interventions in that they lead to unbiased estimates of the causal effect.
In our first set of experiments, we employ naturalistic causal probing to estimate the average treatment effect (ATE) of two morpho-syntactic features—namely, number and grammatical gender—on a noun’s contextualized representation. We show the estimated ATE’s stability across corpora. In our second set of experiments, we find that a noun’s grammatical gender and its number are encoded by a small number of directions in three pre-trained models’ representations: BERT, RoBERTa, and GPT-2.3 We further use naturalistic counterfactuals to causally investigate gender bias in RoBERTa. We find that RoBERTa is much more likely to predict the adjective hermoso(a) (beautiful) for feminine nouns and racional (rational) for masculine. This suggests that RoBERTa is indeed gender-biased in its adjective predictions.
Finally, through our naturalistic counterfactuals, we show that correlational probes overestimate the presence of certain linguistic properties. We compare the performance of correlational probes on two versions of our dataset: one unaltered and one augmented with naturalistic counterfactuals. While correlational probes achieve very high (above 90%) performance when predicting gender from sentence-level representations, they only perform close to chance (around 60%) on the augmented data. Together, our results demonstrate the importance of a naturalistic causal approach to probing.
2 Probing
There are several types of probing methods that have been proposed for the analysis of NLP models, and there are many possible taxonomies of those methods. For the purposes of this paper, we divide previously proposed probing models into two groups: correlational and causal probes. On one hand, correlational probes attempt to uncover whether a probed property is present in a model’s representations. On the other hand, causal probes, roughly speaking, attempt to uncover how a model encodes and makes use of a specific probed property. We compare and contrast correlational and causal probing techniques in this section.
2.1 Correlational Probing
Correlational probing is any attempt to correlate the input representations with the probed property of interest. Under correlational probing, the performance of a probe is viewed as the degree to which a model encodes information in its representations about some probed property (Alain and Bengio, 2017). At various times, correlational results have been used to claim that language models have knowledge of various morphological, syntactic, and semantic phenomena (Adi et al., 2017; Ettinger et al., 2016; Belinkov et al., 2017; Conneau et al., 2018, inter alia). Yet the validity of these claims has been a subject of debate (Saphra and Lopez, 2019; Hewitt and Liang, 2019; Pimentel et al., 2020a, 2020b; Voita and Titov, 2020).
2.2 Causal Probing
A more recent line of work aims to answer the question: What is the causal relationship between the property of interest and the probed model’s representations? In natural language, however, answering this question is not straightforward: sentences typically contain confounding factors that render analyses tedious. To circumvent this problem, most work in causal probing relies on interventions, that is, the act of setting a variable of interest to a fixed value (Pearl, 2009). Importantly, this must be done without altering any of this variable’s causal parents, thereby keeping their probability distributions fixed.4 As a byproduct, these interventions generate counterfactuals, namely, examples where a specific property of interest is changed while everything else is held constant. Counterfactuals can then be used to perform a causal analysis. Prior probing papers have proposed methods using both representational and templated counterfactuals, as we discuss next.
Representational Counterfactuals.
A few recent causal probing papers perform interventions directly on a model’s representations (Giulianelli et al., 2018; Feder et al., 2021; Vig et al., 2020; Tucker et al., 2021; Ravfogel et al., 2021; Lasri et al., 2022; Ravfogel et al., 2022a). For example, Elazar et al. (2021) use iterative null space projection (INLP; Ravfogel et al., 2020) to remove an analyzed property’s information, for example, part of speech, from the representations. Although representational interventions can be applied to situations where other forms of intervention are not feasible, it is often impossible to make sure only the information about the probed property is removed or changed.5 In the absence of this guarantee, any causal conclusion should be viewed with caution.
Templated Counterfactuals.
Other work (Vig et al., 2020; Finlayson et al., 2021), like us, has leveraged input-level interventions. However, in these cases, the interventions are carried out using templated minimal-pair sentences, which differ only with respect to a single analyzed property. Using these minimal pairs, they estimate the effect of an input-level intervention on individual attention heads and neurons. One benefit of template-based approaches is that they create a highly controlled environment, which guarantees that the intervention is done correctly, and which may lead to insights that would be impossible to gain from natural data. However, since the templates are typically designed to analyze a specific property, they cover a narrow set of linguistic phenomena, which may not reflect the complexity of language in naturalistic data.
Naturalistic Counterfactuals.
In this paper, following Zmigrod et al. (2019), we propose a new and less complex strategy to perform input-level interventions by creating naturalistic counterfactuals that are not derived from templates. Instead, we derive the counterfactuals from the dependency structure of the sentence. By creating counterfactuals on the fly using a dependency parse, we avoid the biases of manually creating templates. Furthermore, our approach guarantees that we only intervene on the specific linguistic property of interest, for example, changing the grammatical gender or number of a noun.
3 The Causal Framework
The question of interest in this paper is how contextualized representations are causally affected by a morpho-syntactic feature such as gender or number. To see how our method works, it is easiest to start with an example. Let’s consider the following pair of Spanish sentences:
- (1)
El programador talentoso escribió el código.
the.m.sg programmer.m.sg talented.m.sg wrote the code.
The talented programmer wrote the code.
- (2)
La programadora talentosa escribió el código.
the.f.sg programmer.f.sg talented.f.sg wrote the code.
The talented programmer wrote the code.
The meaning of these sentences is equivalent up to the gender of the noun programador, whose feminine form is programadora. However, more than just this one word changes from (1) to (2): The definite article el changes to la and the adjective talentoso changes to talentosa. In the terminology of this paper, we will refer to programador as the focus noun, as it is the noun whose grammatical properties we are going to change. We will refer to the changing of (1) to (2) as a syntactic intervention on the focus noun. Informally, a syntactic intervention may be thought of as taking place in two steps. First, we swap the focus noun (programador) with another noun that is equivalent up to a single grammatical property. In this case, we swap programador with programadora, which differs only in its gender marking. Second, we reinflect the sentence so that all necessary words grammatically agree with the new focus noun. The result of a syntactic intervention is a pair of sentences that differ minimally, that is, only with respect to this one grammatical property (Figure 1). Another way of framing the syntactic intervention is as a counterfactual: What would (1) have looked like if programador had been feminine? The rest of this section focuses on formalizing the notion of a syntactic intervention and discussing how to use such interventions in a causal inference framework for probing.
A Note on Inanimate Nouns.
When estimating the effect of grammatical gender here, we restrict our investigation to animate nouns, for example, programadora/programador (feminine/masculine programmer). The grammatical gender of inanimate nouns is lexicalized, meaning that each noun is assigned a single gender; for example, puente (bridge) is masculine. In other words, not every lemma has a non-zero probability of occurring with each gender, which violates a condition called positivity in the causal inference literature. Thus, we cannot perform an intervention on the grammatical gender of those words, but would instead need to perform an intervention on the lemma itself. We refer to Gonen et al. (2019) for an analysis of the effect of gender on inanimate nouns' representations. Note that a similar lexicalization can also be observed in a few animate nouns, for example, madre/padre (mother/father). In such cases, to separate the lemma from gender, we assume that these words share a hypothetical lemma, which in our example represents parenthood; combining that lemma with a gender yields the specific forms (e.g., madre/padre).
3.1 The Causal Model
We now describe a causal model that will allow us to more formally discuss syntactic interventions.
Notation and Variables.
We denote random variables with upper-case letters and their instances with lower-case letters. We bold sequences: bold lower-case letters represent a sequence of words and bold upper-case letters represent a sequence of random variables. Let $\mathbf{f} = \langle f_1, \ldots, f_T \rangle$ be a sentence (of length $T$) where each $f_t$ is a word form. In addition, let $\mathbf{r} = \langle r_1, \ldots, r_T \rangle$ be the list of contextual representations, where each $r_t \in \mathbb{R}^h$; $\mathbf{r}$ is in one-to-one correspondence with the sentence $\mathbf{f}$, that is, $r_t$ is $f_t$'s contextual representation. Furthermore, let $\boldsymbol{\ell} = \langle \ell_1, \ldots, \ell_T \rangle$ be a list of lemmata and $\mathbf{m} = \langle m_1, \ldots, m_T \rangle$ a list of morpho-syntactic features co-indexed with $\mathbf{f}$; $\ell_t$ is the lemma of $f_t$ and $m_t$ is its morpho-syntactic features. We call $\widetilde{\mathbf{m}} = \langle m_{t_1}, \ldots, m_{t_K} \rangle$ the minimal list of morpho-syntactic features, where each $t_k$ is an index between 1 and $T$. In essence, we drop the features of tokens whose morphology is dependent on other tokens' morphology. In our example (1), this means we only include the morpho-syntactic features of programador and código, thus $\widetilde{\mathbf{m}} = \langle m_2, m_6 \rangle$.6 We denote the morpho-syntactic feature of interest as $m^*$, which, in this work, represents either the gender $g^*$ or the number $n^*$ of the focus noun. We further denote the lemma of the focus noun as $\ell^*$.
Causal Assumptions.
Our causal model is introduced in Figure 2. It encodes the causal relationships between U, L, M, F, and R. Explicitly, we assume the following causal relationships (sketched in code after this list):
- M and L are causally dependent on U: The underlying meaning that the writer of a sentence wants to convey determines the lemmata and morpho-syntactic features used;
- In general, $L_t$ can causally affect $M_t$: Take the gender of inanimate nouns as an example, where the lemma determines the gender;
- F is causally dependent on L and M: Word forms are a combination of lemmata and morpho-syntactic features;
- R is causally dependent on F: Contextualized representations are obtained by processing the sentences through the probed model.
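These assumptions induce a directed acyclic graph over the variables above. As a minimal sketch (not the paper's code), the graph and the edge-cutting behavior of the do(·) operator used below can be written as:

```python
# Sketch: the causal graph implied by the assumptions above, with edges
# running parent -> child. U: underlying meaning; L: lemmata; M: morpho-
# syntactic features; F: word forms; R: contextualized representations.
EDGES = {("U", "L"), ("U", "M"), ("L", "M"), ("L", "F"), ("M", "F"), ("F", "R")}

def do(edges: set, node: str) -> set:
    """Intervening on `node` removes every edge pointing into it, so its
    value is set exogenously instead of being inherited from its causal
    parents; e.g., do(M) cuts both U -> M and L -> M."""
    return {(parent, child) for (parent, child) in edges if child != node}

print(sorted(do(EDGES, "M")))
# [('F', 'R'), ('L', 'F'), ('M', 'F'), ('U', 'L')]
```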
Dependency Trees. A sentence's dependency tree tells us which words' morpho-syntactic features are determined, through agreement, by the focus noun's; these are the dependent features we drop from the minimal list above and re-inflect after an intervention.
Abstract Causal Model.
We can now simplify the causal model from Figure 2 into Figure 3. For simplicity, we isolate the lemma and morpho-syntactic feature of interest L* and M* and aggregate the other lemmata and morpho-syntactic features into an abstract variable, which we call Z and refer to as the context. Furthermore, we only show the aggregation of word forms and representations as F and R in the abstract model. We will assume for now, and in most of our experiments, that the output of the causal model (R in Figure 3) represents the contextualized representation of the focus noun. However, as we generalize later, the output of the causal model can be any function of word forms F, such as: The representation of other words in the sentence, the probability distribution assigned by the model to a masked word, or even the output of a downstream task. We note that Figure 3 can be easily re-expanded into Figure 2 for any specific utterance by using its dependency tree.
3.2 Naturalistic Counterfactuals
In causal inference literature, the do(·) operator represents an intervention on a causal diagram. For instance, we might want to intervene on the gender of the focus noun (thus using gender G* as the morpho-syntactic feature of interest M*). Concretely, in our example (Figure 2), do(G* = fem) means intervening on the causal graph by removing all the causal edges going into G* from U and L* and setting G*’s value to a specific realization fem. The result of this intervention on a sampled sentence f is a new counterfactual sentence f′. As our causal graph suggests, the relationship between words in a sentence is complex, occurring at multiple levels of abstraction; swapping the gender of a single word—while leaving all other words unchanged—may not result in grammatical text. Consequently, one must approach the creation of counterfactuals in natural language with caution. Specifically, we rely on syntactic interventions to generate our naturalistic counterfactuals.
Syntactic Intervention.
We develop a heuristic algorithm to perform our interventions, shown in Appendix B. Given a sentence and its dependency tree, the algorithm generates a counterfactual version of the sentence, that is, it approximates the do(·) operation. The algorithm processes the dependency tree of each sentence recursively, in a depth-first manner. In each iteration, if the node being processed is a noun, it is marked as the focus noun7 and a new copy of the sentence is created, which will be the base of the counterfactual sentence. Then, the intervention is performed, altering the focus noun and all dependent tokens in the copied sentence.8 A Python sketch of this procedure follows examples (3) and (4) below. Notably, when we syntactically intervene on the grammatical gender or number of a noun, we do not alter potentially incompatible semantic contexts. Take sentence (3) as an example, where the focus noun is mujer and we intervene on gender. Its counterfactual sentence (4) is semantically odd and unlikely, but still meaningful. We can thus estimate the causal effect of grammatical gender on the contextual representations, breaking the correlation between morpho-syntax and semantics.
- (3)
La mujer dio a luz a 6 bebés.
the.f.sg woman.f.sg gave birth to 6 babies.
The woman gave birth to 6 babies.
- (4)
El hombre dio a luz a 6 bebés.
the.m.sg man.m.sg gave birth to 6 babies.
The man gave birth to 6 babies.
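As referenced above, the following is a minimal, self-contained sketch of the intervention step. The toy lexicon and generate_form helper are illustrative stand-ins for the rule-based re-inflection functions of Appendix B, not the paper's implementation:

```python
from dataclasses import dataclass, replace

@dataclass
class Token:
    form: str
    lemma: str
    pos: str     # UD part-of-speech tag
    feats: dict  # e.g., {"Gender": "Masc", "Number": "Sing"}
    head: int    # index of the syntactic head (-1 for the root)

# Toy morphological generator, for illustration only.
TOY_LEXICON = {
    ("el", "Fem"): "la", ("el", "Masc"): "el",
    ("programador", "Fem"): "programadora", ("programador", "Masc"): "programador",
    ("talentoso", "Fem"): "talentosa", ("talentoso", "Masc"): "talentoso",
}

def generate_form(lemma: str, feats: dict) -> str:
    return TOY_LEXICON.get((lemma, feats.get("Gender")), lemma)

def reinflect(token: Token, feature: str, value: str) -> Token:
    """Flip one morpho-syntactic feature and regenerate the surface form."""
    feats = {**token.feats, feature: value}
    return replace(token, feats=feats, form=generate_form(token.lemma, feats))

def intervene(sentence: list, focus: int, feature: str, value: str) -> list:
    """Approximate do(feature=value) on the focus noun: swap the noun, then
    re-inflect every token that agrees with it, leaving the rest of the
    sentence (and hence the semantic context) unchanged."""
    out = list(sentence)
    out[focus] = reinflect(sentence[focus], feature, value)
    for i, tok in enumerate(sentence):
        if tok.head == focus and tok.pos in {"DET", "ADJ"} and feature in tok.feats:
            out[i] = reinflect(tok, feature, value)
    return out

# Example (1) -> example (2); capitalization handling omitted:
sent = [
    Token("El", "el", "DET", {"Gender": "Masc", "Number": "Sing"}, head=1),
    Token("programador", "programador", "NOUN", {"Gender": "Masc", "Number": "Sing"}, head=3),
    Token("talentoso", "talentoso", "ADJ", {"Gender": "Masc", "Number": "Sing"}, head=1),
    Token("escribió", "escribir", "VERB", {}, head=-1),
    Token("el", "el", "DET", {"Gender": "Masc", "Number": "Sing"}, head=5),
    Token("código", "código", "NOUN", {"Gender": "Masc", "Number": "Sing"}, head=3),
]
print(" ".join(t.form for t in intervene(sent, focus=1, feature="Gender", value="Fem")))
# la programadora talentosa escribió el código
```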
3.3 Measuring Causal Effects
In this section, we define the causal effect of a morpho-syntactic feature. We then present estimators for these values in the following section. While we focus on grammatical gender here, our derivations are similarly applicable to number and other morpho-syntactic features.
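As a sketch, using the notation of §3.1 and the standard causal-inference definitions (the paper's own equation numbering is not reproduced here): the individual treatment effect (ITE) of gender for a fixed lemma-context pair, and the average treatment effect (ATE) obtained by averaging it, can be written as:

```latex
% ITE: the effect of flipping the focus noun's gender for one (lemma, context) pair
\mathrm{ITE}(\ell^*, \mathbf{z}) =
    \mathbb{E}\left[\mathbf{R} \mid \mathrm{do}(G^* = \mathrm{fem}),\, \ell^*, \mathbf{z}\right]
  - \mathbb{E}\left[\mathbf{R} \mid \mathrm{do}(G^* = \mathrm{msc}),\, \ell^*, \mathbf{z}\right]

% ATE: the ITE in expectation over naturally occurring (lemma, context) pairs
\psi = \mathbb{E}_{(\ell^*, \mathbf{z}) \sim p(\ell^*, \mathbf{z})}
       \left[\mathrm{ITE}(\ell^*, \mathbf{z})\right]
```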
4 Approximating the ATE
In this section, we show how to estimate Equation (6) from a finite corpus of sentences.
4.1 Naïve Estimator
4.2 Paired Estimator
4.3 A Closer Look at our Estimators
A closer look at our paired estimator in Equation (9) shows that it is an unbiased Monte Carlo estimator of the ATE presented in Equation (6). In short, if we assume our corpus was sampled from the target distribution, we can use this corpus as samples ℓ*,z ∼ p(ℓ*,z). For each ℓ*,z pair, we can then generate sentences with both msc and fem grammatical genders to estimate the ATE.
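A minimal numerical sketch of the two estimators (hypothetical inputs, not the paper's code): reps holds the focus-noun representations of the observed corpus, while the paired variant receives each sentence's two counterfactual versions aligned row by row:

```python
import numpy as np

def ate_naive(reps: np.ndarray, genders: np.ndarray) -> np.ndarray:
    """Unadjusted difference in means over the observed corpus (cf. note 11):
    feminine examples minus masculine examples. Confounded by anything that
    correlates with gender in the data."""
    return reps[genders == "fem"].mean(axis=0) - reps[genders == "msc"].mean(axis=0)

def ate_paired(reps_fem: np.ndarray, reps_msc: np.ndarray) -> np.ndarray:
    """Monte Carlo estimate of the ATE: reps_fem[i] and reps_msc[i] come
    from the same (lemma, context) pair, with the focus noun's gender set
    to fem and msc via the syntactic intervention."""
    return (reps_fem - reps_msc).mean(axis=0)
```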
On a separate note, template-based approaches allow the researcher to investigate causal effects by using minimal pairs of sentences, each of which can be used to estimate an ITE (as in Equation (3)). And, by averaging them, they provide an estimate of the ATE (as in Equation (7)). However, these minimal pairs are either manually written or automatically collected using template structures. Therefore, they cover a narrow (and potentially biased) set of structures, arguably not following a naturalistic distribution. In other words, their corpus cannot be assumed to be sampled according to the distribution p(ℓ*,z).12 In practice, templated counterfactuals approximate the treatment effect using an approach identical to the paired estimator's, up to a change of distribution. This change of distribution, however, may lead to biased estimates of the ATE.
5 Dataset
We use two Spanish UD treebanks (Nivre et al., 2020) in our experiments: Spanish-GSD (McDonald et al., 2013) and Spanish-AnCora (Taulé et al., 2008). We only analyze gender on animate nouns and use the Open Multilingual WordNet (Gonzalez-Agirre et al., 2012) to mark nouns for animacy. Corpus statistics for the datasets can be found in Table 1.
Table 1: Corpus statistics, split by the portions used for train/dev and test.

| Dataset | train | dev | test | Gender: msc | Gender: fem | Number: sing | Number: plur |
|---|---|---|---|---|---|---|---|
| AnCora | ✓ | ✓ | ✗ | 1,029 | 203 | 14,602 | 6,692 |
| AnCora | ✗ | ✗ | ✓ | 107 | 21 | 1,540 | 693 |
| GSD | ✓ | ✓ | ✗ | 403 | 135 | 9,141 | 3,993 |
5.1 Evaluating Counterfactual Sentences
To evaluate our syntactic intervention algorithm (introduced in §3.2), we randomly sample a subset of 100 sentences from our datasets. These samples are evenly distributed across the two datasets (AnCora and GSD), morpho-syntactic features (gender and number), and categories within each feature (masculine, feminine, singular, and plural). A native Spanish speaker assessed the grammaticality of the sampled sentences. Our syntactic intervention algorithm was able to accurately generate counterfactuals for 73% of the sentences.13 The accuracies for the gender and number interventions are 76% and 70%, respectively. Given the subtleties involved in disentangling syntax from semantics (discussed above) and the complex sentence structures found in naturalistic data, we believe this error rate is within an acceptable range and leave improvements to future work.
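For reference, the Wald interval cited in note 13 follows from the normal approximation to the binomial, with $\hat{p} = 0.73$ and $n = 100$:

```latex
\hat{p} \pm z_{0.975}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}
  = 0.73 \pm 1.96\sqrt{\frac{0.73 \times 0.27}{100}}
  \approx 0.73 \pm 0.087
  \;\Rightarrow\; [64\%,\, 82\%]
```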
5.2 Template-Based Dataset
To compare our approach to templated counterfactuals, we translate two datasets for measuring gender bias: Winogender (Rudinger et al., 2018) and WinoBias (Zhao et al., 2018). As shown by Stanovsky et al. (2019), simply translating these templates to Spanish leads to biased translations, where professions are translated stereotypically and the context is ignored. Following Stanovsky et al., we thus put either handsome or pretty before nouns to enforce the gender constraint after translation. Consider, for instance, the sentence: "The developer was unable to communicate with the writer because he only understands the code." We rewrite it as "The handsome developer …". Similarly, if the pronoun were she, we would write "The pretty developer …". As an extra constraint, we want to ensure that the gender of the writer stays the same before and after the intervention. Therefore, we make two copies of the sentence: One where writer is translated as escritora (feminine writer), enforced by replacing writer with pretty writer, and one where writer is translated as escritor (masculine writer), enforced by replacing writer with handsome writer. We translate the resulting pairs of sentences using the Google Translate API and drop the sentences with wrong gender translations. In the end, we obtain 2,740 minimal pairs.
6 Insights From ATE Estimators
In the following experiments, we first use the estimators introduced in §4 to approximate the ATE of number and grammatical gender on contextualized representations. We look at how stable these ATE estimates are across datasets, and whether they change across words with different parts of speech. We then analyze whether the ATE (as an expected value) is an accurate description of how representations actually change in individual sentences. Finally, we compute the ATE of gender on the probability of predicting specific adjectives in a sentence, thereby measuring the causal effect of gender on adjective prediction.
6.1 Variations Across ATEs
Variation Across Datasets.
Using our ATE estimators, we compute the average treatment effect of both gender and number on BERT's contextualized representations (Devlin et al., 2019) of focus nouns.14 We compute both the ψpaired and ψnaïve estimators. Figure 4 presents their cosine similarities. We observe high cosine similarities between paired estimators across datasets,15 but lower cosine similarities with the naïve estimator. This suggests that, while the causal effect is stable across treebanks, the correlational effect is more susceptible to variations in the datasets, for example, semantic variations due to the domain from which the treebanks were sampled.
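The stability comparison itself is straightforward. A sketch, assuming ate_a and ate_b are two ATE estimates, e.g., ψpaired computed on AnCora and on GSD (hypothetical variable names):

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two ATE estimates in R^h; values near 1
    mean the two corpora agree on the direction of the causal effect."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# e.g., cosine(ate_a, ate_b) is close to 1 for paired estimators across
# treebanks, and lower when comparing against the naive estimator (Figure 4).
```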
Templated vs. Naturalistic Counterfactuals.
As an extra baseline, we estimate the ATE using a paired estimator with the template-based dataset introduced in §5.2. We observe a low cosine similarity between our naturalistic ATE estimates and the template-based ones. This shows that sentences from template-based datasets are substantially different from naturalistic datasets, and thus fail to provide unbiased estimates in naturalistic settings.
Variation Across Part-of-Speech Tags.
Using the same approach, we additionally compute the ATEs on adjectives and determiners. Figure 5 presents our naïve and paired ATE estimates, computed on words with different parts of speech. These results suggest that gender and number do not affect the focus noun and its dependent words in the same way. While the ATEs on focus nouns and adjectives are strongly aligned, the cosine similarity between the ATEs on focus nouns and determiners is smaller.16
6.2 Masked Language Modeling Predictions
We now analyze the effect of our morpho-syntactic features on masked language modeling predictions. Specifically, we analyze RoBERTa (Conneau et al., 2020)17 in these experiments, as it performs better than BERT on masked prediction. We thus look at how grammatical gender and number affect the probability that RoBERTa assigns to each word in its output vocabulary.
We now look at how probability assignments change as a function of our interventions. Specifically, Table 2 shows Jensen–Shannon divergences between MProbs(·) computed on top of different representations. We can make a number of observations based on this table. First, for gender, these distributions change more when predicting determiners and focus nouns than adjectives. We speculate that this may be because many Spanish adjectives are syncretic, that is, they have the same inflected form for masculine and feminine (e.g., inteligente [intelligent], or profesional [professional]). Second, the distributions change more after an intervention on number than on gender. Third, when we use either of our estimators to approximate the counterfactual representation, the divergences are greatly reduced. These results show that the ATE values do describe (at least to some extent) the change of representations in individual sentences.
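As a sketch of the divergence computation: assume p and q are a model's masked-prediction distributions MProbs(·) over its vocabulary, computed before and after an intervention (e.g., for a masked determiner):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def jsd(p: np.ndarray, q: np.ndarray) -> float:
    """Jensen-Shannon divergence between two prediction distributions.
    SciPy's jensenshannon returns the JS *distance* (the square root of
    the divergence), so we square it."""
    return float(jensenshannon(p, q, base=2) ** 2)

# Sanity checks: identical distributions diverge by 0; disjoint ones by 1.
p = np.array([0.7, 0.2, 0.1])
assert jsd(p, p) < 1e-12
assert abs(jsd(np.array([1.0, 0.0]), np.array([0.0, 1.0])) - 1.0) < 1e-9
```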
6.3 Gender Bias in Adjectives
7 Insights From Naturalistic Counterfactuals
In the following experiments, we rely on a dataset augmented with naturalistic counterfactuals. We first explore the geometry of the encoded morpho-syntactic features. We then run a more classic correlational probing experiment, highlighting the importance of a causal framework when analyzing representations.
7.1 Geometry of Morpho-Syntactic Features
In this experiment, we follow Bolukbasi et al.’s (2016) methodology to isolate the subspace capturing our morpho-syntactic features’ information. First, we create a matrix with the representations of all focus nouns in our counterfactually augmented dataset. Second, we pair each noun’s representation with its counterfactual representation (after the intervention). Third, we center the matrix of representations by subtracting each pair’s mean. Finally, we perform principal component analysis on this new matrix.
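Below is a minimal sketch of this pair-centered PCA, assuming reps and reps_cf are row-aligned matrices holding each focus noun's representation and its counterfactual counterpart:

```python
import numpy as np
from sklearn.decomposition import PCA

def feature_subspace(reps: np.ndarray, reps_cf: np.ndarray, k: int = 10):
    """Pair-centered PCA in the style of Bolukbasi et al. (2016):
    subtracting each pair's mean removes all variation shared by the two
    members of a pair, so the remaining variance is (approximately) the
    variance caused by the intervened feature."""
    pair_mean = (reps + reps_cf) / 2.0
    centered = np.vstack([reps - pair_mean, reps_cf - pair_mean])
    pca = PCA(n_components=k).fit(centered)
    return pca.components_, pca.explained_variance_ratio_
```

The first row of components_ is then the candidate gender (or number) direction whose alignment with ψpaired is discussed below.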
As Figure 7 shows, in BERT and RoBERTa, the first principal component explains close to 20% of the variance caused by gender and number. In GPT-2 (Radford et al., 2019),19 more than half of the variance is captured by the first one or two principal components.20 This result is in line with prior work (e.g., Biasion et al., 2020, on Italian word embeddings), and suggests that these morpho-syntactic features are linearly encoded in the representations.
To further explore the gender and number subspaces, we project a random sample of 20 sentences (along with their counterfactuals) onto the first principal component. Figure 7 (bottom) shows that the three models we probe can (at least to a large extent) differentiate both morpho-syntactic features using a single dimension. Notably, this first principal component is strongly aligned with the estimate ψpaired; they have a cosine similarity of roughly 0.99 in all these settings.
7.2 Analysis of Correlational Probing
We now use a dataset augmented with naturalistic counterfactuals to empirically evaluate the entanglement of correlation and causation discussed in §2, which arises when using diagnostic probes to probe the representations. Again, we probe three contextual representations: BERT, RoBERTa, and GPT-2. We train logistic regressors (LogRegProbe) and support vector machines (SVMProbe) to predict either gender or number of the focus noun from its contextual representation. Further, we probe the representations in two positions: the focus noun and the [CLS] token (or a sentence’s last token, for GPT-2).21
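As a sketch of this setup, with scikit-learn classifiers as assumed stand-ins for LogRegProbe and SVMProbe:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

def train_probe(reps: np.ndarray, labels: np.ndarray, kind: str = "logreg"):
    """Diagnostic probe: a supervised classifier mapping a contextual
    representation (e.g., of the focus noun or the [CLS] token) to a
    morpho-syntactic label (gender or number)."""
    probe = LogisticRegression(max_iter=5000) if kind == "logreg" else LinearSVC()
    return probe.fit(reps, labels)

# Evaluating the same trained probe on the original test set and on the
# counterfactually augmented one exposes reliance on spurious correlations:
# acc_orig = probe.score(reps_test, labels_test)
# acc_cf   = probe.score(reps_test_cf, labels_test_cf)
```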
The accuracy of correlational probes on the original dataset is shown in Figure 8 (green points). Both gender and number probes reach near-perfect accuracy on focus nouns' representations. Furthermore, all correlational gender probes reach high accuracy on [CLS] representations, suggesting that gender can be reliably recovered from them.
Next, we evaluate the trained probes on counterfactually augmented test sets (shown as yellow points in Figure 8). We see that there is a drop in performance in all settings; more specifically, the accuracy of probes on [CLS] representations drops significantly when evaluated on the counterfactual test set. This suggests that previous results using correlational probes overestimate the extent to which gender and number can be predicted from the representations.
Finally, we also train supervised probes on a counterfactually augmented dataset in order to study whether we can achieve the levels of performance attested in the literature (shown as gray points in Figure 8). Since these probes are trained on a dataset augmented with counterfactuals, they are not as susceptible to spurious correlations; we thus call them the causal probes. Although there is a considerable improvement in accuracy, there is still a large gap between correlational and causal probes’ accuracies. Together, these results imply that correlational probes are sensitive to spurious correlations in the data (such as the semantic context in which nouns appear), and do not learn to predict grammatical gender robustly.
8 Conclusion
We propose a heuristic algorithm for syntactic intervention which, when applied to naturalistic data, allows us to create naturalistic counterfactuals. Although similar analyses have been run by prior work, using either templated or representational counterfactuals (Elazar et al., 2021; Vig et al., 2020; Bolukbasi et al., 2016, inter alia), our syntactic intervention approach allows us to run these analyses on naturalistic data. We further discuss how to use these counterfactuals in a causal setting to probe for morpho-syntax. Experimentally, we first showed that ATE estimates are more robust to dataset differences than either our naïve (correlational) estimator or template-based approaches. Second, we showed that the ATE can (at least partially) predict how representations will be affected by an intervention on gender or number. Third, we employed our ATE framework to study gender bias, finding a list of adjectives that are biased towards one or the other gender. Fourth, we found that the variation of gender and number can be captured by a few principal axes in the nouns' representations. And, finally, we highlighted the importance of causal analyses when probing: When evaluated on counterfactually augmented data, correlational probe results drop significantly.
Ethical Concerns
Pre-trained models often encode gender bias. The adjective bias experiments in this work can provide further insights into the extent to which these biases are encoded in multilingual pre-trained models. As our paper focuses on (grammatical) gender as a morpho-syntactic feature, it adopts a binary notion of gender, which is not representative of the full spectrum of human gender expression. Most of the analysis in this paper focuses on measuring grammatical gender, not gender bias. We thus advise caution when interpreting the findings from this work. Nonetheless, we hope the causal structure formalized here, together with our analyses, can be of use to bias mitigation techniques in the future (e.g., Liang et al., 2020).
A List of Adjectives
We use 30 different Spanish adjectives in our experiments: hermoso/hermosa (beautiful), sexy (sexy), molesto/molesta (upset), bonito/bonita (pretty), delicado/delicada (delicate), rápido/rápida (fast), joven (young), inteligente (intelligent), divertido/divertida (funny), fuerte (strong), duro/dura (hard), alegre (cheerful), protegido/protegida (protected), excelente (excellent), nuevo/nueva (new), serio/seria (serious), sensible (sensitive), profesional (professional), emocional (emotional), independiente (independent), fantástico/fantástica (fantastic), brutal (brutal), malo/mala (bad), bueno/buena (good), horrible (horrible), triste (sad), amable (nice), tranquilo/tranquila (quiet), rico/rica (rich), racional (rational).
B Algorithm for Heuristic Intervention
C Theory
Acknowledgments
We would like to thank Shauli Ravfogel for feedback on a preliminary draft and Damián Blasi for analyzing the errors made by our naturalistic counterfactual algorithm. We would also like to thank the action editor and the anonymous reviewers for their insightful feedback during the review process. Afra Amini is supported by an ETH AI Center doctoral fellowship. Ryan Cotterell acknowledges support from the SNSF through the "The Forgotten Role of Inductive Bias in Interpretability" project.
Notes
1. See Belinkov (2021) for an overview.

3. We study the Spanish version of these models, if it exists, or the multilingual version if there is no Spanish version.

4. Consider a set of three random variables with a causal structure X → Y → Z (where X causes Y, which causes Z). If we simply conditioned on Y = 1, we would be left with the conditional distribution p(x, z ∣ Y = 1) = p(x ∣ Y = 1) p(z ∣ Y = 1). If we perform an intervention do(Y = 1), on the other hand, we are left with the distribution p(x, z ∣ do(Y = 1)) = p(x) p(z ∣ Y = 1); thus, X's distribution is not altered by Y.

5. There are, however, methods to mitigate this issue; e.g., Ravfogel et al. (2022b) recently proposed an improved (adversarial) method to remove information from a set of representations that greatly reduces the number of removed dimensions.

6. In this work, we only focus on two morpho-syntactic features: gender and number. To analyze other features, the minimal list should be expanded—e.g., to analyze verb tense, m4 should be added to the list.

7. Specifically, for gender interventions we only mark a noun as the focus if it is an animate noun.

8. This is a simplified version of the algorithm, where we omit the rule-based re-inflection functions for nouns, adjectives, and determiners. We also handle contractions, such as a + el → al, which is not shown in this pseudo-code.

9. We overload tgt(·) to receive either F or .

10. A backdoor path is a causal path from an analyzed variable to its effect which contains an arrow into the treatment (i.e., an arrow going backwards). For instance, consider random variables with a causal structure Y → X, X → Z, and Y → Z (where Y causes X, and both X and Y cause Z). Then X ← Y → Z forms a backdoor path (Definition 3; Pearl, 2009).

11. This is referred to as the naïve or unadjusted estimator in the literature (Hernán and Robins, 2020).

12. This becomes clear when we take a look at the sentences in one such template-based dataset. For instance, all sentences in the Winogender dataset (Rudinger et al., 2018)—used by Vig et al. (2020)—have very similar sentential structures. Such biases, however, are not necessarily problematic and might be imposed by design to analyze specific phenomena.

13. Approximating our estimate of this accuracy with a normal distribution, we obtain a 95% confidence interval (Wald interval) ranging from 64% to 82% (Brown et al., 2001).

14. More specifically, bert-base-multilingual-cased in the Transformers library (Wolf et al., 2020).

15. To make sure that the imbalance in the dataset before intervention does not have a significant effect on the results, we create a balanced version of the dataset, where we observe similar results.

16. Relatedly, Lasri et al. (2022) recently showed that BERT encodes number differently on nouns and verbs.

17. More specifically, we use xlm-roberta-base.

18. When an adjective in the list has two forms depending on gender (e.g., hermosa/hermoso), we sum the probabilities of the masculine and feminine forms.

19. More specifically, we use gpt2-small-spanish.

20. These results are not due to the randomness of a finite sample of high-dimensional vectors, nor to the structure of the model. To show this, we present two random baselines in Figure 7: random vectors of the same size (green traces) and representations extracted from models with randomized weights (gray traces).

21. BERT and RoBERTa treat [CLS] as a special token whose representation is supposed to aggregate information from the whole input sentence. In GPT-2, the last token in a sentence should similarly contain information about all its previous tokens.
Author notes
Action Editor: Miguel Ballesteros