Abstract
In behavioral testing, system functionalities underrepresented in the standard evaluation setting (with a held-out test set) are validated through controlled input-output pairs. Optimizing performance on the behavioral tests during training (behavioral learning) would improve coverage of phenomena not sufficiently represented in the i.i.d. data and could lead to seemingly more robust models. However, there is the risk that the model narrowly captures spurious correlations from the behavioral test suite, leading to overestimation and misrepresentation of model performance—one of the original pitfalls of traditional evaluation.
In this work, we introduce BeLUGA, an analysis method for evaluating behavioral learning considering generalization across dimensions of different granularity levels. We optimize behavior-specific loss functions and evaluate models on several partitions of the behavioral test suite controlled to leave out specific phenomena. An aggregate score measures generalization to unseen functionalities (or overfitting). We use BeLUGA to examine three representative NLP tasks (sentiment analysis, paraphrase identification, and reading comprehension) and compare the impact of a diverse set of regularization and domain generalization methods on generalization performance.1
1 Introduction
The standard paradigm for evaluating natural language processing (NLP) models is to compute correctness metrics on a held-out test set from the same distribution as the training set (Linzen, 2020). If the test set is large and diverse, this may be a good measure of average performance, but it fails to account for the worst-case performance (Sagawa et al., 2020). By exploiting correlations in the training data, models work well in most cases but fail in those where the correlations do not hold (Niven and Kao, 2019; McCoy et al., 2019; Zellers et al., 2019), leading to overestimation of model performance in the wild (Ribeiro et al., 2020). Furthermore, standard evaluation does not indicate the sources of model failure (Wu et al., 2019) and disregards important model properties such as fairness (Ma et al., 2021).
Behavioral testing (Röttger et al., 2021; Ribeiro et al., 2020) has been proposed as a complementary evaluation framework, where model capabilities are systematically validated by examining the model's responses to specific stimuli. This is done through test suites composed of input-output pairs where the input addresses specific linguistic or social phenomena and the output is the expected behavior given the input. The suites can be seen as controlled challenge datasets (Belinkov and Glass, 2019) aligned with human intuitions about how the agent should perform the task (Linzen, 2020).
In this work, we understand test suites as a hierarchy of functionality classes, functionalities, and test cases (Röttger et al., 2021). Functionality classes stand at the highest level, capturing system capabilities like fairness, robustness and negation. They are composed of functionalities that target finer-grained facets of the capability. For example, a test suite for sentiment analysis can include the functionality “negation of positive statement should be negative” inside the Negation class. Finally, each functionality is composed of test cases, the input-output pairs used to validate model behavior. For the functionality above, an example test case could be the input “The movie was not good” and the expected output “negative”, under the assumption that the non-negated sentence is positive.
Though behavioral test suites identify model weaknesses, the question of what to do with such feedback is not trivial. While test suite creators argue that these tools can aid the development of better models (Röttger et al., 2021) and lead to improvements in the tested tasks (Ribeiro et al., 2020), how to act on the feedback concretely is not discussed.
One common approach is fine-tuning on data targeting the failure cases, which previous work has shown can improve performance in these same cases (Malon et al., 2022; Liu et al., 2019; McCoy et al., 2019). But this practice overlooks the possibility of models overfitting to the covered tests and consequently overestimates model performance. Even if one takes care to split the behavioral test cases into disjoint sets for training and testing, models can still leverage data artifacts such as word-label co-occurrences to achieve seemingly good performance that is over-optimistic and does not align with out-of-distribution (OOD) performance.
This creates the following dilemma: Either one does not use the feedback from test suites for model development and loses the chance to improve model trustworthiness; or one uses it to address model shortcomings (e.g., by training on similar data)—and runs the risk of overfitting to the covered cases. Prior work (Luz de Araujo and Roth, 2022; Rozen et al., 2019) has addressed this in part by employing structured cross-validation, where a model is trained and evaluated on different sets of phenomena. However, the analyses have been so far restricted to limited settings where only one task, training configuration and test type is examined. Moreover, these studies have not examined how different regularization and generalization mechanisms influence generalization.
In this paper, we introduce BeLUGA, a general method for Behavioral Learning Unified Generalization Analysis. By training and evaluating on several partitions of test suite and i.i.d. data, we measure model performance on unseen phenomena, such as held-out functionalities and functionality classes. This structured cross-validation approach yields scores that better characterize model performance on uncovered behavioral tests than the ones obtained by over-optimistic i.i.d. evaluation.
Our main contributions are:
- (1)
We design BeLUGA, an analysis method to measure the effect of behavioral learning. It handles different kinds of behavior measures, operationalized by labeled or perturbation-based tests. To that end we propose loss functions that optimize the expected behavior of three test types: Minimum functionality, invariance, and directional expectation tests (Ribeiro et al., 2020).
- (2)
We extend previous work on behavioral learning by exploring two training configurations in addition to fine-tuning on suite data (Luz de Araujo and Roth, 2022; Liu et al., 2019): Training on a mixture of i.i.d. and suite data; and training on i.i.d. data followed by fine-tuning on the data mixture.
- (3)
We design aggregate metrics that measure generalization across axes of different levels of granularity. From finer to coarser: Generalization within functionalities, to different functionalities and to different functionality classes.
- (4)
We compare the generalization capabilities of a range of regularization techniques and domain generalization algorithms for three representative NLP tasks (sentiment analysis, paraphrase identification, and reading comprehension).
This work is not a recommendation to train on behavioral test data, but an exploration of what happens if data targeting the same set of phenomena as the tests is used for model training. We find that naive optimization and evaluation do yield over-optimistic scenarios: Fine-tuning on suite data results in large improvements for seen functionalities, though at the same time performance on i.i.d. data and unseen functionalities can degrade, with some models adopting degenerate solutions that pass the tests but lead to catastrophic i.i.d. performance. Including i.i.d. as well as test suite samples was found to prevent this, mitigating i.i.d. performance degradation—with even improvements in particular cases—and yielding higher scores for unseen functionalities as well.
2 Background
2.1 Behavioral Testing
We consider a joint distribution $p$ over an input space $\mathcal{X}$ and corresponding label space $\mathcal{Y}$, and assume access to an i.i.d. dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{n}$ composed of $n$ examples, split into disjoint train, validation, and test sets $\mathcal{D}_{\text{train}}$, $\mathcal{D}_{\text{val}}$, and $\mathcal{D}_{\text{test}}$. We also assume access to a behavioral test suite $\mathcal{T}$, composed of $m$ test cases partitioned into $n_{\text{func}}$ disjoint functionalities $\mathcal{F} = \{F_1, \dots, F_{n_{\text{func}}}\}$. Each functionality belongs to one of $n_{\text{class}}$ functionality classes $C_1, \dots, C_{n_{\text{class}}}$, such that $n_{\text{class}} < n_{\text{func}} < m$.
Each test case belongs to a functionality $F_j \in \mathcal{F}$ and is described by a pair $(X, b)$, where $X$ is a list of inputs and $b$ is an expectation function: It takes a model's predictions for all inputs in $X$ and outputs 1 if the model behaves as expected and 0 otherwise.
The above taxonomy, by Röttger et al. (2021), describes the hierarchy of concepts in behavioral testing: Functionality classes correspond to coarse properties (e.g., negation) and are composed of finer-grained functionalities; these assess facets of the coarse property (e.g., negation of positive sentiment should be negative) and are operationalized by individual input-output pairs, the test cases. These concepts align with two of the generalization axes we explore in this work, functionality and functionality class generalization (§ 3.3).
We additionally follow the terminology created by Ribeiro et al. (2020), which defines three test types, according to their evaluation mechanism: Minimum Functionality, Invariance, and Directional Expectation tests. When used for model training, each of them requires a particular optimization strategy (§ 3.2).
Minimum Functionality Test (MFT): MFTs are input-label pairs designed to check specific system behavior: X has only one element, x, and the expectation function checks if the model output given x is equal to some label y. Thus, they have the same form as the i.i.d. examples.
2.2 Behavioral Learning
3 BeLUGA
BeLUGA is an analysis method to estimate how training on test suite data impacts generalization to seen and unseen phenomena. Given an i.i.d. dataset $\mathcal{D}$, a test suite $\mathcal{T}$, and a training configuration $\chi$ (§ 3.1), BeLUGA trains on several controlled splits of suite data and outputs scores that use performance on unseen phenomena as a proxy measure (§ 3.3) for generalization.
3.1 Training Configurations
We split $\mathcal{T}$ into three disjoint splits $\mathcal{T}_{\text{train}}$, $\mathcal{T}_{\text{val}}$, and $\mathcal{T}_{\text{test}}$, such that each split contains cases from all functionalities, and define four training configurations regarding whether and how we use $\mathcal{T}_{\text{train}}$ (a code sketch of the four configurations follows the list):
IID: The standard training approach that uses only i.i.d. data for training ($\mathcal{D}_{\text{train}}$). It serves as a baseline to contrast performance of the three following suite-augmented configurations.
IID→T: A two-step approach where the PLM is first fine-tuned on $\mathcal{D}_{\text{train}}$ and then on $\mathcal{T}_{\text{train}}$. This is the setting examined in prior work on behavioral learning (§ 2.2), which has been shown to lead to deterioration of i.i.d. dataset ($\mathcal{D}$) performance (Luz de Araujo and Roth, 2022).
To assess the impact of including i.i.d. samples in the behavioral learning procedure, we define two additional configurations:
IID +T: The PLM is fine-tuned on a mixture of suite and i.i.d. data ($\mathcal{D}_{\text{train}} \cup \mathcal{T}_{\text{train}}$).
IID→(IID +T): The PLM is first fine-tuned on $\mathcal{D}_{\text{train}}$ and then on $\mathcal{D}_{\text{train}} \cup \mathcal{T}_{\text{train}}$.
By contrasting the performance of these configurations on $\mathcal{D}_{\text{test}}$ and $\mathcal{T}_{\text{test}}$, we assess the impact of behavioral learning on both i.i.d. and test suite data distributions.
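For concreteness, the four configurations can be written down as sequences of training stages, as in the sketch below. The encoding and names are ours, not the paper's code; "D" stands for the i.i.d. training split $\mathcal{D}_{\text{train}}$ and "T" for the suite training split $\mathcal{T}_{\text{train}}$.

```python
# Hypothetical encoding of the training configurations: each configuration is a
# list of stages, and each stage lists the data sources used for fine-tuning.
TRAINING_CONFIGS = {
    "IID":          [["D"]],              # i.i.d. data only (baseline)
    "IID->T":       [["D"], ["T"]],       # fine-tune on D, then on T
    "IID+T":        [["D", "T"]],         # one stage on the mixture
    "IID->(IID+T)": [["D"], ["D", "T"]],  # fine-tune on D, then on the mixture
}
```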
3.2 Behavior Optimization
Since each test type describes and expects different behavior, BeLUGA optimizes type-specific loss functions:
MFT: As MFTs are formally equivalent to i.i.d. data (input-label pairs), they are treated as such: We randomly divide them into mini-batches and optimize the cross-entropy between model predictions and labels.
DIR: Batch construction follows the INV procedure: The DIRs are randomly divided into mini-batches of unperturbed–perturbed input pairs, where the perturbed input is randomly sampled during training.
The optimization objective depends on the comparison function $\delta$. For a given $\delta$, we define a corresponding error measure $\epsilon_\delta \in [0,1]$. For example, if the expectation is that prediction confidence should not increase, then $\epsilon_\delta = \max(0, \hat{p}_{\text{pert}} - \hat{p}_{\text{orig}})$, where $\hat{p}_{\text{orig}}$ and $\hat{p}_{\text{pert}}$ are the confidences of the original and perturbed predictions. This way, $\epsilon_\delta$ increases with the confidence increase and is zero otherwise.
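As an illustration, a minimal PyTorch sketch of these type-specific objectives is given below, assuming classifiers that output logits. The function names and the exact form of the DIR error are ours, chosen to match the description above; they are not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def mft_loss(logits, labels):
    # MFTs are ordinary input-label pairs: standard cross-entropy.
    return F.cross_entropy(logits, labels)

def dir_loss_confidence_should_not_increase(orig_logits, pert_logits):
    # Error measure in [0, 1]: how much the confidence of the originally
    # predicted class grew after the perturbation (zero if it did not grow).
    orig_probs = torch.softmax(orig_logits, dim=-1)
    pert_probs = torch.softmax(pert_logits, dim=-1)
    pred_class = orig_probs.argmax(dim=-1, keepdim=True)
    orig_conf = orig_probs.gather(-1, pred_class).squeeze(-1)
    pert_conf = pert_probs.gather(-1, pred_class).squeeze(-1)
    return torch.clamp(pert_conf - orig_conf, min=0.0).mean()
```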
3.3 Cross-functional Analysis
Test suites have limited coverage: The set of covered functionalities is only a subset of the phenomena of interest, $\mathcal{F} \subset \mathcal{F}^{*}$, where $\mathcal{F}^{*}$ is the hypothetical set of all functionalities. For example, the test suite for sentiment analysis provided by Ribeiro et al. (2020) has a functionality that tests for invariance to people's names—the sentiment of the sentence "I do not like Mary's favourite movie" should not change if "Mary" is changed to "Maria". However, the equally valid functionality that tests for invariance to organizations' names is not in the suite. Training and evaluating on the same set of functionalities can thus lead to overestimating performance: Models may overfit to covered functionalities but fail catastrophically on non-covered ones.
BeLUGA computes several measures of model performance that address generalization from $\mathcal{T}_{\text{train}}$ to $\mathcal{T}_{\text{test}}$ and from $\mathcal{F}$ to $\mathcal{F}^{*}$. We do not assume access to test cases for non-covered phenomena, so we use held-out sets of functionalities as proxies for generalization to $\mathcal{F}^{*}$.
i.i.d. Data: To score performance on $\mathcal{D}_{\text{test}}$, we use the canonical evaluation metric for the specific dataset. We detail the metrics used for each examined task3 in Section 4.1. We denote the i.i.d. score as $s_{\text{iid}}$.
We vary the set of functionalities used for training and testing to construct different evaluation scenarios:
Unseen Evaluation: No test cases are seen during training. This is equivalent to the use of behavioral test suites without behavioral learning: We compute the pass rates using the predictions of an IID model.
Seen Evaluation: $\mathcal{T}_{\text{train}}$ is used for training. We compute the pass rate on $\mathcal{T}_{\text{test}}$ using the predictions of suite-augmented models. This score measures how well the fine-tuning procedure generalizes to test cases of covered functionalities: Even though all functionalities are seen during training, the particular test cases evaluated are not the same as the ones used for training.
Generalization to Non-Covered Phenomena: To estimate performance on non-covered phenomena, we construct an $l$-subset partition $U = \{U_1, \dots, U_l\}$ of the set of functionalities $\mathcal{F}$. For each $U_i$, we use $\mathcal{F} \setminus U_i$ for training and then compute the pass rates for the functionalities in $U_i$. That is, we fine-tune the model on a set of functionalities and evaluate it on the remaining (unseen) functionalities. Since $U$ is a partition of $\mathcal{F}$, by the end of the procedure there will be a pass rate for each functionality (a code sketch of this procedure follows the list of partitions below).
We consider three different partitions, depending on the considered generalization proxy:
(1) Functionality generalization: A partition with $n_{\text{func}}$ subsets, each corresponding to a single held-out functionality: $U_i = \{F_i\}$. We consider this a proxy of performance on non-covered functionalities ($\mathcal{F}^{*} \setminus \mathcal{F}$).
(2) Functionality class generalization: A partition with $n_{\text{class}}$ subsets, each containing the functionalities of a held-out functionality class: $U_i = \{F_j \mid F_j \in C_i\}$. We consider this to be a proxy of performance on non-covered functionality classes.
(3) Test type generalization: A partition with three subsets, each containing the functionalities of a held-out test type (MFT, INV, or DIR). We use this measure to examine generalization across different test types.
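A minimal sketch of the leave-one-subset-out procedure above is shown below. All names are hypothetical placeholders; in particular, train_fn and pass_rate_fn stand in for the paper's training and evaluation routines.

```python
def cross_functional_pass_rates(functionalities, partition, train_fn, pass_rate_fn):
    """For each held-out subset U_i, train on the remaining functionalities and
    record the pass rate of every functionality in U_i.

    functionalities: dict mapping functionality name -> list of test cases
    partition: list of sets of functionality names (one per functionality,
               per functionality class, or per test type)
    train_fn: callable taking a list of training test cases, returns a model
    pass_rate_fn: callable taking (model, test_cases), returns a pass rate
    """
    pass_rates = {}
    for held_out in partition:
        train_cases = [case
                       for name, cases in functionalities.items()
                       if name not in held_out
                       for case in cases]
        model = train_fn(train_cases)
        for name in held_out:
            pass_rates[name] = pass_rate_fn(model, functionalities[name])
    return pass_rates
```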
3.4 Metrics
For model comparison purposes, BeLUGA outputs the average pass rate (the arithmetic mean of the nfunc pass rates) as the aggregated metric for test suite correctness. Since one of the motivations for behavioral testing is its fine-grained results, BeLUGA also reports the individual pass rates.
In total, BeLUGA computes five aggregated suite scores, each corresponding to an evaluation scenario:
Gstandard: The baseline score of a model only trained on i.i.d. data: If the other scores are lower, then fine-tuning on test suite data degraded overall model performance.
Gseen: Performance on seen functionalities. This score can give a false sense of model performance since it does not account for model overfitting to the seen functionalities: Spurious correlations within functionalities and functionality classes can be exploited to get deceivingly high scores.
Gfunc: Measure of generalization to unseen functionalities. It is a more realistic measure of model quality, but since functionalities correlate within a functionality class, the score may still offer a false sense of quality.
Gclass: Measure of generalization to unseen functionality classes. This is the most challenging generalization setting, as the model cannot exploit correlations within functionalities and functionality classes.
Gtype: Measure of generalization to unseen test types. This score is of a more technical interest: It can offer insights into how different training signals affect each other (e.g., if training with MFTs supports performance on INVs and vice-versa).
Each aggregate G score combines $s_{\text{iid}}$ and the corresponding average pass rate through the harmonic mean. This aggregation makes implicit importance assignments explicit: On the one hand, the harmonic mean ensures that both i.i.d. and suite performance are important due to its sensitivity to low scores; on the other, different phenomena are weighted differently, as i.i.d. performance has a bigger influence on the final score than each single functionality pass rate.
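Under this reading, the aggregation can be sketched in a few lines. The function below is ours and assumes an unweighted mean over functionality pass rates, with both scores expressed as fractions.

```python
from statistics import harmonic_mean, mean

def aggregate_g_score(s_iid, functionality_pass_rates):
    # Average the per-functionality pass rates, then take the harmonic mean
    # with the i.i.d. score: a low value on either side drags the score down.
    suite_score = mean(functionality_pass_rates)
    return harmonic_mean([s_iid, suite_score])

# Example: a strong i.i.d. score combined with one failing functionality.
print(aggregate_g_score(0.92, [0.95, 0.88, 0.10]))
```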
4 Experiments on Cross-functional Analysis
4.1 Tasks
We experiment with three classification tasks that correspond to the test suites made available4 by Ribeiro et al. (2020): Sentiment analysis (SENT), paraphrase identification (PARA), and reading comprehension (READ).5 Tables 1 and 2 summarize the i.i.d. and test suite datasets, respectively, and show representative examples.
Table 1: Examples for each i.i.d. dataset. The number of train/validation/test samples is 67k/436/436, 363k/20k/20k, and 87k/5k/5k for SST-2, QQP, and SQuAD, respectively.
Table 2: Examples for each test suite. We color-code perturbations as red/green for deletions/additions. The number of train/validation/test samples is 89k/44k/44k, 103k/51k/51k, and 35k/17k/17k for the SENT, PARA, and READ test suites, respectively.
Sentiment Analysis (SENT): As the i.i.d. dataset for sentiment analysis, we use the Stanford Sentiment Treebank (SST-2) (Socher et al., 2013). We use the version made available in the GLUE benchmark (Wang et al., 2018), where the task is to assign binary labels (negative/positive sentiment) to sentences. The test set labels are not publicly available, so we split the original validation set in half as our validation and test sets. The canonical metric for the dataset is accuracy.
The SENT suite contains 68k MFTs, 9k DIRs, and 8k INVs. It covers functionality classes such as semantic role labeling (SRL), named entity recognition (NER), and fairness. The MFTs were template-generated, while the DIRs and INVs were either template-generated or obtained from perturbing a dataset of unlabeled airline tweets. Therefore, there is a domain mismatch between the i.i.d. data (movie reviews) and the suite data (tweets about airlines).
There are also label mismatches between the two datasets: The suite contains an additional class for neutral sentiment and the MFTs have the “not negative” label, which admits both positive and neutral predictions. We follow Ribeiro et al. (2020) and consider predictions with probability of positive sentiment within [1/3,2/3] as neutral.6
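A small sketch of this label mapping is shown below, assuming a binary model whose positive-class probability is available; the thresholds follow the rule above and the function name is ours.

```python
def to_suite_label(p_positive):
    """Map a binary sentiment model's positive-class probability to the suite's
    three-way label set, treating predictions in [1/3, 2/3] as neutral."""
    if p_positive < 1 / 3:
        return "negative"
    if p_positive <= 2 / 3:
        return "neutral"
    return "positive"

# "Not negative" MFTs then accept both "neutral" and "positive" predictions.
```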
There are two types of comparison for DIRs, regarding either sentiment or prediction confidence. In the former case, the prediction for a perturbed input is expected to be either not more negative or not more positive when compared with the prediction for the original input. In the latter, the confidence of the original prediction is expected to either not increase or not decrease, regardless of the sentiment. For example, when adding an intensifier (“really”, “very”) or a reducer (“a little”, “somewhat”), the confidence of the original prediction should not decrease in the first case and not increase in the second. On the other hand, if a perturbation adds a positive or negative phrase to the original input, the positive probability should not go down (up) for the first (second) case.
Paraphrase Identification (PARA): We use Quora Question Pairs (QQP) (Iyer et al., 2017) as the i.i.d. dataset. It is composed of question pairs from the website Quora with annotation for whether a pair of questions is semantically equivalent (duplicates or not duplicates). The test set labels are not available, hence we split the original validation set into two sets for validation and testing. The canonical metrics are accuracy and the F1 score of the duplicate class.
The PARA suite contains 46k MFTs, 13k DIRs, and 3k INVs, with functionality classes such as co-reference resolution, logic, and negation. All MFTs are template generated,7 while the INVs and DIRs are obtained from perturbing QQP data.
The DIRs are similar to MFTs: Perturbed question pairs are either duplicate or not duplicate. For example, if two questions mention the same location and the perturbation changes the location in one of them, then the new pair is guaranteed not to be semantically equivalent. Thus, the comparison function δ checks if the perturbed predictions correspond to the expected label; the original prediction is not used for evaluation. So during training, we treat them as MFTs: We construct mini-batches of perturbed samples and corresponding labels and minimize the cross-entropy between predictions and labels.
Reading Comprehension (READ): The i.i.d. dataset for READ is the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016), composed of excerpts from Wikipedia articles with crowdsourced questions and answers. The task is to, given a text passage (context) and a question about it, extract the context span that contains the answer. Once again, the test set labels are not publicly available and we repeat our splitting approach for SENT and PARA. The canonical metrics are exact string match (EM) (percentage of predictions that match ground truth answers exactly) and the more lenient F1 score, which measures average token overlap between predictions and ground truth answers.
The READ suite contains 10k MFTs and 2k INVs, with functionality classes such as vocabulary and taxonomy. The MFTs are template generated, while the INVs are obtained from perturbing SQuAD data.
Invariance training in READ has one complication, since the task is to extract the answer span by predicting the start and end positions. Naively using the originally predicted positions would not work because the answer position may have changed after the perturbation. For example, let us take the original context-question pair (C: Paul traveled from Chicago to New York, Q: Where did Paul travel to?) and perturb it so that Chicago is changed to Los Angeles. The correct answer for the original input is (5, 6) as the start and end (word) positions, yielding the span “New York”. Applying these positions to the perturbed input would extract “to New”. Instead, we only compare the model outputs for the positions that correspond to the common ground of original and perturbed inputs. In the example, these are the outputs for the tokens “Paul”, “traveled”, “from”, “to”, “New”, and “York”. We minimize the cross-entropy between this restricted set of outputs for the original and perturbed inputs. This penalizes changes in prediction for equivalent tokens (e.g., the probability of “Paul” being the start of the answer is 0.1 for the original input but 0.15 for the perturbed).
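One way to implement this restricted comparison is sketched below for the start-position logits (end positions are handled identically). The index tensors, the renormalization over the shared tokens, and the use of the original distribution as a detached soft target are our choices for illustration; the paper only specifies that the cross-entropy is computed over the common-ground outputs.

```python
import torch

def restricted_invariance_loss(orig_start_logits, pert_start_logits,
                               orig_common_idx, pert_common_idx):
    """Penalize changes in the start-position distribution over the tokens shared
    by the original and perturbed contexts.

    orig_common_idx / pert_common_idx: LongTensors holding the positions of the
    shared tokens in each input, aligned one-to-one.
    """
    # Restrict both outputs to the common ground and renormalize over it.
    orig_logp = torch.log_softmax(orig_start_logits[orig_common_idx], dim=-1)
    pert_logp = torch.log_softmax(pert_start_logits[pert_common_idx], dim=-1)
    # Cross-entropy with the original distribution as the soft target
    # (detached here by choice, so only the perturbed branch is updated).
    return -(orig_logp.exp().detach() * pert_logp).sum()
```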
4.2 Generalization Methods
We use BeLUGA to compare several techniques used to improve generalization:
L2: We apply a stronger-than-typical ℓ2-penalty coefficient of λ = 0.1.
Dropout: We triple the dropout rate for all fully connected layers and attention probabilities from the default value of 0.1 to 0.3.
LP: Instead of fine-tuning on suite data, we apply linear probing (LP), where the encoder parameters are frozen, and only the classification head parameters are updated. Previous work (Kumar et al., 2022) has found this to generalize better than full fine-tuning.
LP-FT: We experiment with linear probing followed by fine-tuning, which Kumar et al. (2022) have shown to combine the benefits of fine-tuning (in-distribution performance) and linear-probing (out-of-distribution performance).
IRM: Invariant Risk Minimization (Arjovsky et al., 2019) is a framework for OOD generalization that leverages different training environments to learn feature-label correlations that are invariant across the environments, under the assumption that such features are not spuriously correlated with the labels.
Group-DRO: Group Distributionally Robust Optimization (Sagawa et al., 2020) is an algorithm that minimizes not the average training loss but the highest loss across the different training environments. This is assumed to prevent the model from adopting spurious correlations, as long as such correlations do not hold in one of the environments.
Fish: Fish (Shi et al., 2022) is a domain generalization algorithm that maximizes the inner product between gradients from different training environments, under the assumption that this leads models to learn features invariant across environments.
For the last three methods, we treat the different functionalities as different environments. For the IID +T and IID→(IID +T) settings, we consider the i.i.d. data as an additional environment. In the multi-step training configurations (IID→T and IID→(IID +T)), we only apply the techniques during the second step: When training only with i.i.d. data we employ vanilla gradient descent, since we are interested in the generalization effect of using suite data.
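As an illustration of how an environment-based method plugs into this setup, the sketch below applies the "minimize the highest environment loss" idea described above, with one environment per functionality (plus one for the i.i.d. data in the mixed configurations). It is a simplified reading of Group-DRO: the original algorithm of Sagawa et al. (2020) maintains exponentially weighted group weights rather than taking a hard maximum, and the function below is not the paper's implementation.

```python
import torch

def group_dro_step(model, optimizer, env_batches, loss_fn):
    """One update on the worst environment loss.

    env_batches: dict mapping environment name (e.g., a functionality or "iid")
                 to an (inputs, targets) mini-batch.
    loss_fn: callable mapping (model outputs, targets) to a scalar loss.
    """
    env_losses = [loss_fn(model(inputs), targets)
                  for inputs, targets in env_batches.values()]
    worst_loss = torch.stack(env_losses).max()
    optimizer.zero_grad()
    worst_loss.backward()
    optimizer.step()
    return worst_loss.item()
```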
4.3 Experimental Setting
We use pre-trained BERT models (Devlin et al., 2019) for all tasks. We follow Ribeiro et al. (2020) and use BERT-base for SENT and PARA and BERT-large for READ. All our experiments use AdamW (Loshchilov and Hutter, 2019) as the optimizer. When fine-tuning on i.i.d. data, we use the same hyper-parameters as the ones reported for models available on Hugging Face’s model zoo.8 When fine-tuning on test suite data, we run a grid search over a range of values for batch size, learning rate and number of epochs.9 We select the configuration that performed best on the validation data. To maintain the same compute budget across all methods, we do not tune method-specific hyper-parameters. We instead use values shown to work well in the original papers and previous work (Dranker et al., 2021).
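The grid search can be sketched as below. The grid values are the ones listed in the hyper-parameter note (batch sizes {2, 3} for READ), and train_fn / val_score_fn are hypothetical stand-ins for the fine-tuning and validation-scoring routines.

```python
from itertools import product

BATCH_SIZES = [8, 16]                # {2, 3} for READ
LEARNING_RATES = [2e-5, 3e-5, 5e-5]
NUM_EPOCHS = [1, 2, 3]

def grid_search(train_fn, val_score_fn):
    """Return the (batch size, learning rate, epochs) triple with the best
    validation score, together with that score."""
    best_cfg, best_score = None, float("-inf")
    for bs, lr, epochs in product(BATCH_SIZES, LEARNING_RATES, NUM_EPOCHS):
        model = train_fn(batch_size=bs, learning_rate=lr, num_epochs=epochs)
        score = val_score_fn(model)
        if score > best_score:
            best_cfg, best_score = (bs, lr, epochs), score
    return best_cfg, best_score
```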
5 Results and Observations
5.1 i.i.d. and Generalization Scores
Table 3 exhibits i.i.d. and aggregate G scores for all tasks, training configurations, and generalization methods. Figure 1 presents pass rates of individual functionalities.
Table 3: i.i.d. test set performance and generalization measures (in %) of each examined method for all tasks and training configurations. The Avg. column shows the average G score across all tasks and generalization measures. We show scores significantly above and below the IID baseline (first row; its suite scores are Gstandard) in green and red, respectively, and write the best score for each column in bold. When a score is not significantly different from its baseline counterpart, we show it in black. We use two-tailed binomial testing when comparing i.i.d. performances, and randomization testing (Yeh, 2000) when comparing G scores, setting 0.05 as the significance level.
Figure 1: Average and individual pass rates for all tasks, methods, and training configurations. From first to third row: Results for SENT, PARA, and READ. From first to fourth column: Seen evaluation, functionality generalization, functionality class generalization, and test type generalization scores. The y-axis corresponds to all training configuration-method pairs; the x-axis shows the average functionality pass rate followed by the individual pass rates. The blue horizontal and vertical lines demarcate different training configurations and functionality classes, respectively. The colors on the x-axis designate the different test types: Blue for MFTs, red for INVs, and green for DIRs.
Seen Performance: Fine-tuning on test suite data led to improvements for all tasks: The Gseen scores are generally higher than the baseline scores (first row in Table 3).
That is, models were able to generalize across test cases from covered functionalities (from $\mathcal{T}_{\text{train}}$ to $\mathcal{T}_{\text{test}}$) while retaining reasonable i.i.d. data performance. In some specific training configuration-method combinations this was not the case. We discuss this below when we compare methods and report the degenerate solutions.
Generalization Performance: For any given configuration-method pair, Gseen is higher than Gfunc, Gclass, and Gtype, indicating a generalization gap between seen and unseen functionalities. Furthermore, for all tasks, average (across methods) Gfunc is higher than average Gclass, which is higher than average Gtype,10 indicating that generalization gets harder as one moves from unseen functionalities to unseen functionality classes and test types. This aligns with previous work (Luz de Araujo and Roth, 2022), in which hate speech detection models are found to generalize within—but not across—functionality classes.
Improvements over the IID baseline were task-dependent. Almost all configuration-method pairs achieved Gfunc (22 of 24) and Gclass (20 of 24) scores significantly higher than the IID baseline for SENT, with improvements over the baseline as high as 18.44 and 12.84 percentage points (p.p.) for each metric, respectively. For PARA, improving over Gclass proved much harder—only seven configuration-method pairs could do so. Increases in score were also less pronounced, the best Gfunc and Gclass scores being 6.91 and 2.19 p.p. above the baseline. READ showed both the rarest and the smallest improvements, with a third of the approaches significantly improving functionality generalization and none significantly improving functionality class generalization. Improvements in each case were as high as 4.70 and 0.51 percentage points over the baseline.
i.i.d. Performance: Fine-tuning on test suite data only (IID→T configuration) reduced performance for all tasks’ i.i.d. test sets. Fine-tuning on both suite and i.i.d. examples (IID +T and IID→ (IID +T)) helped retain—or improve—performance in some cases, but decreases were still more common. The IID→(IID +T) configuration was the most robust regarding i.i.d. scores, with an average change (compared to the IID baseline) of −1.43/−0.50/−1.73 for SENT/PARA/READ.
5.2 Training Configuration and Method Comparison
Using a mixture of i.i.d. and suite samples proved essential to retain i.i.d. performance: The overall scores (average over methods and i.i.d. test sets) for each configuration are 67.52, 76.33, and 87.98 for IID→T, IID +T, and IID→(IID +T), respectively.
That said, the environment-based generalization algorithms (IRM, DRO, and Fish) struggled in the IID +T configuration, underperforming when compared with the other methods. We hypothesize that in these scenarios models simply do not see enough i.i.d. data, as we treat it as just one more environment among many others (as many as 54 in PARA). LP also achieves subpar scores, even though i.i.d. data is not undersampled. The problem here is the frozen feature encoder, as BERT features are not good enough without fine-tuning on i.i.d. task data—as was done in the other configurations, with clear benefits for LP.
No individual method performed best for all scores and tasks. That said, IID→(IID +T) with L2, LP, LP-FT, or Fish achieved Gfunc and Gclass scores higher than or not significantly different from the baseline in all tasks, though IID→(IID +T) with dropout was the best when the score is averaged over all tasks and generalization measures. Considering this same metric, IID→(IID +T) was the most consistently good configuration, with all methods improving over the average IID baseline.
5.3 DIR Applicability
We have found that DIRs, as used for SENT, have limited applicability for both testing and training. The reason for that is that models are generally very confident about their predictions: The average prediction confidence for the test suite predictions is 0.97 for the IID model. On the evaluation side, this makes some DIRs impossible to fail: The confidence cannot get higher and fail “not more confident” expectations. On the training side, DIRs do not add much of a training signal, as the training loss is near zero from the very beginning.11
We see an additional problem with DIRs in the SENT setting: They conflate prediction confidence with sentiment intensity. Though prediction confidence may correlate with sentiment intensity, uncertainty also signals difficulty and ambiguity (Swayamdipta et al., 2020). Consequently, sentiment intensity tests may not be measuring the intended phenomena. One alternative would be to disentangle the two factors: Using prediction values only for confidence-based tests, and sentiment intensity tests only for sentiment analysis tasks with numeric or fine-grained labels.
5.4 Negative Transfer
Though Gclass scores are generally lower than Gfunc scores, this is not always the case for the pass rates of individual functionalities. When there are contrastive functionalities within a class—those whose test cases have similar surface form but entirely different expected behaviors—it is very difficult to generalize from one to the other.
For example, the SRL class in PARA contains the functionalities “order does not matter for symmetric relations” and “order does matter for asymmetric relations” (functionalities 41 and 42 in the second row of Figure 1). Their test cases are generated by nearly identical templates where the only change is the relation placeholder. Examples from the first and second functionalities would include (Q1: Is Natalie dating Sophia? Q2: Is Sophia dating Natalie?) and (Q1: Is Matthew lying to Nicole? Q2: Is Nicole lying to Matthew?) respectively. Though their surface forms are similar, they have opposite labels: duplicate and not duplicate.
To compute Gfunc, a model is trained with samples from one functionality and evaluated on samples from the other. Consequently, the surface form will be spuriously correlated with the label seen during training and models may blindly assign it to the question pairs that fit the template. This would work well for the seen functionality, but samples from the unseen one would be entirely misclassified. Conversely, when computing the Gclass score, the model will not have been trained on either of the functionalities and will not have the chance to adopt the heuristic, leading to better unseen pass rates.
5.5 Degenerate Solutions
Settings where the Gtype score is higher than the baseline are much rarer than for the other measures, happening only in one case for SENT (IID→T with dropout) and never for READ. One explanation is that training only on perturbation-based tests (with no MFTs) can lead to degenerate solutions, such as passing all tests by always predicting the same class.
To assess if that was the case, we examined the predictions on the SST-2 test set of the IID→T vanilla model fine-tuned only on DIRs and INVs. We have found that 95.18% of the i.i.d. data points were predicted as negative, though the ground truth frequency for that label is 47.25%. When examining the predictions for MFTs, the contrast is even starker: 0.29% of the predictions were negative, with the ground truth frequency being 43.42%. These results show that the model has, indeed, adopted the degenerate solution. Interestingly, it predicts different classes depending on the domain, almost always predicting negative for i.i.d. data and positive for suite data.
The gap between Gclass and Gtype scores in PARA is not as severe, possibly due to the supervised signal in its DIRs. Since these tests expect inputs to correspond to specific labels—as opposed to DIRs for SENT, which check for changes in prediction confidence—always predicting the same class would not be a good solution. Indeed, when examining the predictions on the QQP test set of the vanilla IID→T model fine-tuned with no MFT data, we see that 58.70% of question pairs are predicted as not duplicate, which is similar to the ground truth frequency, 63.25%. The same is true when checking the predictions for MFTs: 64.47% of the data points are predicted as not duplicate, against a ground truth frequency of 52.46%.
The READ scenario is more complex—instead of categories, spans are extracted. Manual inspection showed that some IID→T models adopted degenerate solutions (e.g., extracting the first word, a full stop or the empty span as the answer), even when constrained by the MFT supervised signal. Interestingly, the degenerate solutions were applied only for INV tests (where such invariant predictions work reasonably) and i.i.d. examples (where they do not). On the other hand, these models were able to handle the MFTs well, obtaining near-perfect pass rates and thus high average scores even though i.i.d. performance is catastrophic. The first grid of the third row in Figure 1 illustrates this: The high scores are shown in the first column, and the MFT pass rates in the columns with blue x-axis numbers.
5.6 Summary Interpretation of the Results
Figure 1
Figure 1 supports fine-grained analyses that consider performance on individual functionalities in each generalization scenario. One can interpret it horizontally to assess the functionality pass rates for a particular method. For example, the bottom left grid, representing seen results for READ, shows that IID +T with LP behaves poorly on almost all functionalities, confirming the importance of fine-tuning BERT pre-trained features (§ 5.2).
Alternatively, one can interpret it vertically to assess performance and generalization trends for individual functionalities. For example, models generalized well to functionality 21 of the READ suite (second grid of the bottom row), with most methods improving over the IID baseline. However, under the functionality class evaluation scenario (third grid of the bottom row), improvements for functionality 21 are much rarer. That is, the models were able to generalize to functionality 21 as long as they were fine-tuned on cases from functionalities from the same class (20 and 22).12
Such fine-grained analyses show the way for more targeted explorations of generalization (e.g., why do models generalize to functionality 21 but not to functionality 20?), which can guide subsequent data annotation, selection and creation efforts, and shed light on model limitations.
Table 3
For i.i.d. results, we refer to the SST2, QQP, and SQuAD columns. These show that the suite-augmented configurations and methods (all rows below and including IID→T Vanilla) generally hurt i.i.d. performance. However, improvements can be found for some methods in the IID +T and IID→(IID +T) configurations. Takeaway: Fine-tuning on behavioral tests degrades general model performance, which can be mitigated by jointly fine-tuning on i.i.d. samples and behavioral tests.
For performance concerning seen functionalities, we refer to the Gseen columns. Generalization scores concerning unseen functionalities, functionality classes, and test types can be found in the Gfunc, Gclass, and Gtype columns. Across all tasks, training configurations, and methods, the Gseen scores are higher than the others. Takeaway: Evaluating only on the seen functionalities (Liu et al., 2019; Malon et al., 2022) is overoptimistic—improving performance on seen cases may come at the expense of degradation on unseen cases. This is detected by the underperforming generalization scores.
Previous work on generalization in behavioral learning (Luz de Araujo and Roth, 2022; Rozen et al., 2019) corresponds to the IID→T Vanilla row. It shows deterioration of i.i.d. scores, poor generalization in some cases, and lower average performance compared with the IID baseline. However, our experiments with additional methods (all rows below IID→T Vanilla), show that some configuration-method combinations improve the average performance. Takeaway: While naive behavioral learning generalizes poorly, more sophisticated algorithms can lead to improvements. BeLUGA is a method that detects and measures further algorithmic improvements.
6 Related Work
Traditional NLP benchmarks (Wang et al., 2018, 2019) are composed of text corpora that reflect the naturally occurring language distribution, which may fail to sufficiently capture rarer, but important phenomena (Belinkov and Glass, 2019). Moreover, since these benchmarks are commonly split into identically distributed train and test sets, spurious correlations in the former will generally hold for the latter. This may lead to the obfuscation of unintended behaviors, such as the adoption of heuristics that work well for the data distribution but not in general (Linzen, 2020; McCoy et al., 2019). To account for these shortcomings, complementary evaluation methods have been proposed, such as using dynamic benchmarks (Kiela et al., 2021) and behavioral test suites (Kirk et al., 2022; Röttger et al., 2021; Ribeiro et al., 2020).
A line of work has explored how training on challenge and test suite data affects model performance by fine-tuning on examples from specific linguistic phenomena and evaluating on other samples from the same phenomena (Malon et al., 2022; Liu et al., 2019). This is equivalent to our seen evaluation scenario, and thus cannot distinguish between models with good generalization and those that have overfitted to the seen phenomena. We account for that with our additional generalization measures, computed using only data from held-out phenomena.
Other efforts have also used controlled data splits to examine generalization: McCoy et al. (2019) have trained and evaluated on data from disjoint sets of phenomena relevant for Natural Language Inference (NLI); Rozen et al. (2019) have split challenge data according to sentence length and constituency parsing tree depth, creating a distribution shift between training and evaluation data; Luz de Araujo and Roth (2022) employ a cross-functional analysis of generalization in hate speech detection. Though these works address the issue of overfitting to seen phenomena, their analyses are restricted to specific tasks and training configurations. Our work gives a more comprehensive view of generalization of behavioral learning by examining different tasks, training configurations, test types, and metrics. Additionally, we use this setting as an opportunity to compare the generalization impact of both simple regularization mechanisms and state-of-the-art domain generalization algorithms.
7 Conclusion
We have presented BeLUGA, a framework for cross-functional analysis of generalization in NLP systems that both makes explicit the desired system traits and allows for quantifying and examining several axes of generalization. While in this work we have used BeLUGA to analyze data from behavioral suites, it can be applied in any setting where one has access to data structured into meaningful groups (e.g., demographic data, linguistic phenomena, domains).
We have shown that, while model performance for seen phenomena greatly improves after fine-tuning on test suite data, the generalization scores reveal a more nuanced view, in which the actual benefit is less pronounced and depends on the task and training configuration-method combination. We have found the IID→(IID +T) configuration to result in the most consistent improvements. Conversely, some methods struggle in the IID→T and IID +T settings by overfitting to the suite or underfitting i.i.d. data, respectively. In these cases, a model can practically ace all tests while failing badly on i.i.d. data, which reinforces the importance of considering both i.i.d. and test suite performance when comparing systems; BeLUGA's aggregate scores account for this.
These results show that naive behavioral learning has unintended consequences, which the IID→(IID +T) configuration mitigates to some degree. There is still much room for improvement, though, especially if generalization to unseen types of behavior is desired. Through BeLUGA, progress in that direction is measurable, and further algorithmic improvements might make behavioral learning an option to ensure desirable behaviors and preserve general performance and generalizability of the resulting models. We do not recommend training on behavioral tests in the current technological state. Instead, we show a way to improve research on reconciling the qualitative guidance of behavioral tests with desired generalization in NLP models.
Acknowledgments
We thank the anonymous reviewers and action editors for the helpful suggestions and detailed comments. We also thank Matthias Aßenmacher, Luisa März, Anastasiia Sedova, Andreas Stephan, Lukas Thoma, Yuxi Xia, and Lena Zellinger for the valuable discussions and feedback. This research has been funded by the Vienna Science and Technology Fund (WWTF) [10.47379/VRG19008] “Knowledge-infused Deep Learning for Natural Language Processing”.
Notes
Our code is available on https://github.com/peluz/beluga.
Note that any number of perturbed inputs could be used, but using only one allows fitting more test cases in a mini-batch if its size is kept constant.
We refer to the i.i.d. data as the dataset as opposed to the task. The task is more abstract, and it comes with a corresponding behavioral test suite.
These test suites were originally proposed for model evaluation. Every design choice we describe regarding optimization (e.g., loss functions and label encodings) is ours.
When training, we encode “neutral” and “not negative” labels as [1/2, 1/2] and [1/3, 2/3], respectively. One alternative is to create two additional classes for such cases, but this would prevent the use of the classification head fine-tuned on i.i.d. data (which is annotated with binary labels).
The test cases from functionality “Order does matter for asymmetric relations” (e.g., Q1: Is Rachel faithful to Christian?, Q2: Is Christian faithful to Rachel?) were originally labeled as duplicates. This seems to be unintended, so we change their label to not duplicates.
Available on https://huggingface.co/. The model names are textattack/bert-base-uncased-SST-2 (SENT), textattack/bert-base-uncased-QQP (PARA), and bert-large-uncased-whole-word-masking-finetuned-squad (READ).
Batch size: {2, 3} for READ and {8, 16} for the others; learning rate: {2e-5, 3e-5, 5e-5}; number of epochs: {1, 2, 3}.
SENT: 85.97/78.15/69.54, PARA: 75.04/72.22/71.55, READ: 49.23/46.66/43.46.
Confidence regularization (Yu et al., 2021) could potentially increase DIR’s usefulness for training and evaluation purposes.
These functionalities assess co-reference resolution capabilities: 20 and 21 have test cases with personal and possessive pronouns, respectively; 22 tests whether the model distinguishes “former” from “latter”.
References
Action Editor: Mihai Surdeanu