Cross-functional Analysis of Generalization in Behavioral Learning

In behavioral testing, system functionalities underrepresented in the standard evaluation setting (with a held-out test set) are validated through controlled input-output pairs. Optimizing performance on the behavioral tests during training (behavioral learning) would improve coverage of phenomena not sufficiently represented in the i.i.d. data and could lead to seemingly more robust models. However, there is the risk that the model narrowly captures spurious correlations from the behavioral test suite, leading to overestimation and misrepresentation of model performance, one of the original pitfalls of traditional evaluation. In this work, we introduce BELUGA, an analysis method for evaluating behavioral learning considering generalization across dimensions of different granularity levels. We optimize behavior-specific loss functions and evaluate models on several partitions of the behavioral test suite controlled to leave out specific phenomena. An aggregate score measures generalization to unseen functionalities (or over-fitting). We use BELUGA to examine three representative NLP tasks (sentiment analysis, paraphrase identification, and reading comprehension) and compare the impact of a diverse set of regularization and domain generalization methods on generalization performance.


Introduction
The standard paradigm for evaluating natural language processing (NLP) models is to compute correctness metrics on a held-out test set from the same distribution as the training set (Linzen, 2020). If the test set is large and diverse, this may be a good measure of average performance, but it fails to account for the worst-case performance (Sagawa et al., 2020). By exploiting correlations in the training data, models work well in most cases but fail in those where the correlations do not hold (Niven and Kao, 2019; McCoy et al., 2019; Zellers et al., 2019), leading to overestimation of model performance in the wild (Ribeiro et al., 2020). Furthermore, standard evaluation does not indicate the sources of model failure (Wu et al., 2019) and disregards important model properties such as fairness (Ma et al., 2021). Our code is available at https://github.com/peluz/beluga.
Behavioural testing (Röttger et al., 2021; Ribeiro et al., 2020) has been proposed as a complementary evaluation framework, where model capabilities are systematically validated by examining the model's responses to specific stimuli. This is done through test suites composed of input-output pairs, where the input addresses specific linguistic or social phenomena and the output is the expected behaviour given the input. The suites can be seen as controlled challenge datasets (Belinkov and Glass, 2019) aligned with human intuitions about how the agent should perform the task (Linzen, 2020).
In this work, we understand test suites as a hierarchy of functionality classes, functionalities, and test cases (Röttger et al., 2021). Functionality classes stand at the highest level, capturing system capabilities like fairness, robustness and negation. They are composed of functionalities that target finer-grained facets of the capability. For example, a test suite for sentiment analysis can include the functionality "negation of positive statement should be negative" inside the Negation class. Finally, each functionality is composed of test cases, the input-output pairs used to validate model behaviour. For the functionality above, an example test case could be the input "The movie was not good" and the expected output "negative", under the assumption that the non-negated sentence is positive.
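This three-level hierarchy can be sketched as a simple data model. The classes and field names below are illustrative, not taken from any released implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    text: str       # input, e.g. "The movie was not good"
    expected: str   # expected behaviour, e.g. the label "negative"

@dataclass
class Functionality:
    name: str
    cases: List[TestCase] = field(default_factory=list)

@dataclass
class FunctionalityClass:
    name: str
    functionalities: List[Functionality] = field(default_factory=list)

# The Negation example from the text, rendered in this data model.
negation = FunctionalityClass(
    name="Negation",
    functionalities=[
        Functionality(
            name="negation of positive statement should be negative",
            cases=[TestCase("The movie was not good", "negative")],
        )
    ],
)
```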
Though behavioural test suites identify model weaknesses, the question of what to do with such feedback is not trivial. While test suite creators argue that these tools can aid the development of better models (Röttger et al., 2021) and lead to improvements in the tested tasks (Ribeiro et al., 2020), how to act on the feedback concretely is not discussed.
One common approach is fine-tuning on data targeting the failure cases, which previous work has shown can improve performance on these same cases (Malon et al., 2022; Liu et al., 2019; McCoy et al., 2019). But this practice overlooks the possibility of models overfitting to the covered tests and consequently overestimates model performance. Even if one takes care to split the behavioural test cases into disjoint sets for training and testing, models can still leverage data artifacts such as word-label co-occurrences to achieve seemingly good performance that is over-optimistic and does not align with out-of-distribution (OOD) performance.
This creates the following dilemma: either one does not use the feedback from test suites for model development and loses the chance to improve model trustworthiness; or one uses it to address model shortcomings (e.g., by training on similar data) and runs the risk of overfitting to the covered cases. Prior work (Luz de Araujo and Roth, 2022; Rozen et al., 2019) has addressed this in part by employing structured cross-validation, where a model is trained and evaluated on different sets of phenomena. However, the analyses have so far been restricted to limited settings where only one task, training configuration and test type is examined. Moreover, these studies have not examined how different regularisation and generalisation mechanisms influence generalisation.
In this paper, we introduce BELUGA, a general method for Behavioural Learning Unified Generalisation Analysis. By training and evaluating on several partitions of test suite and i.i.d. data, we measure model performance on unseen phenomena, such as held-out functionalities and functionality classes. This structured cross-validation approach yields scores that better characterise model performance on uncovered behavioural tests than the ones obtained by over-optimistic i.i.d. evaluation.
Our main contributions are: (1) We design BELUGA, an analysis method to measure the effect of behavioural learning. It handles different kinds of behaviour measures, operationalised by labelled or perturbation-based tests.
To that end, we propose loss functions that optimise the expected behaviour of three test types: minimum functionality, invariance and directional expectation tests (Ribeiro et al., 2020).
(2) We extend previous work on behavioural learning by exploring two training configurations in addition to fine-tuning on suite data (Luz de Araujo and Roth, 2022; Liu et al., 2019): training on a mixture of i.i.d. and suite data; and training on i.i.d. data followed by fine-tuning on the data mixture.
(3) We design aggregate metrics that measure generalisation across axes of different levels of granularity.From finer to coarser: generalisation within functionalities, to different functionalities and to different functionality classes.
(4) We compare the generalisation capabilities of a range of regularisation techniques and domain generalisation algorithms for three representative NLP tasks (sentiment analysis, paraphrase identification and reading comprehension).
This work is not a recommendation to train on behavioural test data, but an exploration of what happens if data targeting the same set of phenomena as the tests is used for model training. We find that naive optimisation and evaluation do yield over-optimistic scenarios: fine-tuning on suite data results in large improvements for seen functionalities, though at the same time performance on i.i.d. data and unseen functionalities can degrade, with some models adopting degenerate solutions that pass the tests but lead to catastrophic i.i.d. performance. Including i.i.d. as well as test suite samples was found to prevent this, mitigating i.i.d. performance degradation (with even improvements in particular cases) and yielding higher scores for unseen functionalities as well.

Behavioural testing
We consider a joint distribution p over an input space X and corresponding label space Y, and assume access to an i.i.d. dataset D of input-label pairs drawn from p, split into D_train and D_test. The terminology of Röttger et al. (2021) describes the hierarchy of concepts in behavioural testing: functionality classes correspond to coarse properties (e.g., negation) and are composed of finer-grained functionalities; these assess facets of the coarse property (e.g., negation of positive sentiment should be negative) and are operationalised by individual input-output pairs, the test cases. These concepts align with two of the generalisation axes we explore in this work, functionality and functionality class generalisation (§ 3.3).
We additionally follow the terminology created by Ribeiro et al. (2020), which defines three test types according to their evaluation mechanism: Minimum Functionality, Invariance and Directional Expectation tests. When used for model training, each of them requires a particular optimisation strategy (§ 3.2).
Minimum Functionality test (MFT): MFTs are input-label pairs designed to check specific system behaviour: X has only one element, x, and the expectation function checks if the model output given x is equal to some label y. Thus, they have the same form as the i.i.d. samples: input-label pairs.

Invariance test (INV): The input is a tuple X = (x_0, x_1, ..., x_{|X|-1}), where x_0 is an original input and each x_i is a label-preserving perturbation of x_0. Letting ŷ_i denote the model prediction for x_i, the expectation function checks whether ŷ_i = ŷ_0 for all i ∈ {1, ..., |X| − 1}. That is, the expectation function checks if model predictions are invariant to the perturbations.

Directional Expectation test (DIR): The form for input X is similar to the INV case, but instead of label-preserving transformations, x_0 is perturbed in a way that changes the prediction in a task-dependent predictable way, e.g. prediction confidence should not increase. Given a task-dependent comparison function δ, the expectation function checks whether δ(ŷ_0, ŷ_i) holds for all i ∈ {1, ..., |X| − 1}. For example, if the expectation is that prediction confidence should not increase, then δ(ŷ_0, ŷ_i) checks that the confidence of ŷ_i does not exceed that of ŷ_0.

Beyond the i.i.d. test set, the model can be additionally evaluated using test suite T, which gives a finer-grained performance measure over each functionality.
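The three evaluation mechanisms can be sketched as expectation functions over class-probability vectors. This is a hypothetical minimal rendering; the comparison function δ is task-dependent in the paper:

```python
# Predictions are lists of class probabilities; argmax gives the label.
def argmax(p):
    return max(range(len(p)), key=lambda j: p[j])

def mft_passes(pred, label):
    # MFT: the prediction for the single input must equal the gold label.
    return argmax(pred) == label

def inv_passes(preds):
    # INV: preds[0] is for the original input; all perturbed predictions
    # must assign the same label as the original.
    ref = argmax(preds[0])
    return all(argmax(p) == ref for p in preds[1:])

def dir_passes(preds, delta):
    # DIR: a task-dependent comparison delta(y0, yi) must hold for every
    # original-perturbed pair.
    return all(delta(preds[0], p) for p in preds[1:])

# Example delta: prediction confidence should not increase.
not_more_confident = lambda y0, yi: yi[argmax(y0)] <= y0[argmax(y0)] + 1e-9
```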

Behavioural learning
In behavioural learning, samples from T are used for training in a two-step approach: a pre-trained language model (PLM) (Devlin et al., 2019) is first fine-tuned on examples from D_train, and then fine-tuned further on examples from T (Luz de Araujo and Roth, 2022; Liu et al., 2019).

BELUGA
BELUGA is an analysis method to estimate how training on test suite data impacts generalisation to seen and unseen phenomena. Given an i.i.d. dataset D, a test suite T, and a training configuration χ (§ 3.1), BELUGA trains on several controlled splits of suite data and outputs scores that use performance on unseen phenomena as a proxy measure for generalisation (§ 3.3).
That is, BELUGA can be formalised as a function f parametrised by D, T, and χ that returns a set of metrics: M = f(D, T, χ). By including measures of performance on i.i.d. data and on seen and unseen sets of phenomena, these metrics offer a more comprehensive and realistic view of how the training data affected model capabilities and shed light on failure cases that would be obfuscated by other evaluation schemes.

Training configurations
We split T into three disjoint splits T_train, T_val and T_test, such that each split contains cases from all functionalities, and define four training configurations regarding whether and how we use T_train. IID: The standard training approach that uses only i.i.d. data for training (D_train). It serves as a baseline to contrast the performance of the three following suite-augmented configurations.
IID→T: A two-step approach where first the PLM is fine-tuned on D_train and then on T_train. This is the setting examined in prior work on behavioural learning (§ 2.2), which has been shown to lead to deterioration of i.i.d. dataset (D_test) performance (Luz de Araujo and Roth, 2022).
To assess the impact of including i.i.d. samples in the behavioural learning procedure, we define two additional configurations. IID+T: The PLM is fine-tuned on a mixture of suite and i.i.d. data (D_train ∪ T_train).

IID→(IID+T):
The PLM is first fine-tuned on D train and then on D train ∪ T train .
By contrasting the performance on D_test and T_test of these configurations, we assess the impact of behavioural learning on both the i.i.d. and test suite data distributions.
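The four configurations differ only in which data each training stage sees. A sketch of that mapping, where `iid` and `suite` stand in for D_train and T_train (the function name and encoding are illustrative):

```python
# Each configuration is a sequence of training stages; each stage is the
# pool of data the model is fine-tuned on during that stage.
def stages(config, iid, suite):
    return {
        "IID":          [iid],               # baseline: i.i.d. data only
        "IID->T":       [iid, suite],        # two steps: i.i.d., then suite
        "IID+T":        [iid + suite],       # single step on the mixture
        "IID->(IID+T)": [iid, iid + suite],  # i.i.d., then the mixture
    }[config]
```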

Behaviour optimisation
Since each test type describes and expects different behaviour, BELUGA optimises type-specific loss functions. MFT: As MFTs are formally equivalent to i.i.d. data (input-label pairs), they are treated as such: we randomly divide them into mini-batches and optimise the cross-entropy between model predictions and labels.
INV: We randomly divide INVs into mini-batches composed of unperturbed-perturbed input pairs. For each training update, we randomly select one perturbed version (of several possible) for each original input. We enforce invariance by minimising the cross-entropy between model predictions over perturbed-unperturbed input pairs:

L_INV = − Σ_{j=1}^{c} ŷ_0[j] log ŷ_i[j],

where c is the number of classes. This penalises models that are not invariant to the perturbations (Eq. 1), since the global minimum of the loss is the point where the predictions are the same. DIR: Batch construction follows the INV procedure: the DIRs are randomly divided into mini-batches of unperturbed-perturbed input pairs, and the perturbed input is randomly sampled during training.
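A minimal sketch of this invariance penalty, assuming the loss is the cross-entropy between the two predicted distributions with the original prediction ŷ_0 treated as the target:

```python
import math

def inv_loss(p_orig, p_pert):
    # Cross-entropy H(p_orig, p_pert) = -sum_j p_orig[j] * log(p_pert[j]);
    # it is smallest when the perturbed prediction matches the original one,
    # so divergence under perturbation is penalised.
    return -sum(po * math.log(pp) for po, pp in zip(p_orig, p_pert))
```

Identical prediction pairs yield a lower loss than divergent ones, which is the behaviour the text describes.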
The optimisation objective depends on the comparison function δ. For a given δ, we define a corresponding error measure ε_δ. For example, if the expectation is that prediction confidence should not increase, then ε_δ(ŷ_0, ŷ_i) = max(0, ŷ_i[c*] − ŷ_0[c*]), where c* is the class predicted for the original input. This way, ε_δ increases with confidence increase and is zero otherwise.
We minimise the following loss: L_DIR = −log(1 − ε_δ(ŷ_0, ŷ_i)). Intuitively, if ε_δ = 0, the loss is zero. Conversely, the loss increases with the error measure, growing unboundedly as ε_δ gets closer to 1.
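Putting the two pieces together, a hedged sketch of the DIR objective. The −log(1 − ε) form is an assumption consistent with the stated behaviour at ε = 0 and as ε approaches 1:

```python
import math

def eps_not_more_confident(p_orig, p_pert):
    # Error measure for the "confidence should not increase" expectation:
    # zero unless the perturbed confidence on the original top class grows,
    # so only test violations are penalised.
    c = max(range(len(p_orig)), key=lambda j: p_orig[j])  # c* in the text
    return max(0.0, p_pert[c] - p_orig[c])

def dir_loss(eps):
    # Zero when the expectation holds; grows without bound as eps -> 1.
    return -math.log(1.0 - eps)
```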

Cross-functional analysis
Test suites have limited coverage: the set of covered functionalities is only a subset of the phenomena of interest, T ⊂ P, where P is the hypothetical set of all functionalities. For example, the test suite for sentiment analysis provided by Ribeiro et al. (2020) has a functionality that tests for invariance to people's names: the sentiment of the sentence "I do not like Mary's favourite movie" should not change if "Mary" is changed to "Maria". However, the equally valid functionality that tests for invariance to organisations' names is not in the suite. Training and evaluating on the same set of functionalities can lead to overestimating performance: models may overfit to covered functionalities but fail catastrophically on non-covered ones.
BELUGA computes several measures of model performance that address generalisation from T_train to T_test and from T_train to P. We do not assume access to test cases for non-covered phenomena, so we use held-out sets of functionalities as proxies for generalisation to P.

I.i.d. data:
To score performance on D_test, we use the canonical evaluation metric for the specific dataset. We detail the metrics used for each examined task in Section 4.1. We denote the i.i.d. score as s_iid.
Test suite data: We compute the pass rate s_{F_i} of each functionality F_i ∈ T:

s_{F_i} = |{t ∈ F_i : t passes}| / |F_i|,

where a test case t passes if its expectation holds for the model predictions Ŷ given the inputs in X. In other words, the pass rate is simply the proportion of successful test cases.
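With expectation functions rendered as predicates over predictions, the pass rate is just a mean. A hypothetical rendering:

```python
def pass_rate(expectations, predictions):
    # Each expectation is a predicate over the corresponding prediction;
    # the pass rate is the proportion of test cases whose predicate holds.
    passed = sum(exp(pred) for exp, pred in zip(expectations, predictions))
    return passed / len(expectations)
```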
We vary the set of functionalities used for training and testing to construct different evaluation scenarios. Unseen evaluation: No test cases are seen during training. This is equivalent to the use of behavioural test suites without behavioural learning: we compute the pass rates using the predictions of an IID model.
Seen evaluation: T_train is used for training. We compute the pass rate on T_test using the predictions of suite-augmented models. This score measures how well the fine-tuning procedure generalises to test cases of covered functionalities: even though all functionalities are seen during training, the particular test cases evaluated (t ∈ T_test) are not the same as the ones used for training (T_train ∩ T_test = ∅).
Generalisation to non-covered phenomena: To estimate performance on non-covered phenomena, we construct an l-subset partition of the set of functionalities, U := {U_i}_{i=1}^{l}. For each U_i, we use T_train \ U_i for training and then compute the pass rates for the functionalities in U_i. That is, we fine-tune the model on a set of functionalities and evaluate it on the remaining (unseen) functionalities. Since U is a partition of T, by the end of the procedure there is a pass rate for each functionality.
We consider three different partitions, depending on the considered generalisation proxy: (1) Functionality generalisation: a partition with n_func subsets, each corresponding to a held-out functionality: U_i = {F_i}. We consider this a proxy of performance on non-covered functionalities: F ∈ P \ T.
(2) Functionality class generalisation: a partition with n_class subsets, each corresponding to a held-out functionality class: U_i = {F | F belongs to class C_i}. We consider this to be a proxy of performance on non-covered functionality classes: C ⊂ P \ T.
(3) Test type generalisation: a partition with three subsets, each corresponding to a held-out test type: U_i = {F | F has type i}, i ∈ {MFT, INV, DIR}. We use this measure to examine generalisation across different test types.
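All three partitions can be built with one grouping function over suite metadata. The tuple encoding of a functionality as (name, class, test type) is illustrative:

```python
from collections import defaultdict

def partition(suite, axis):
    # Group functionalities by the value of one metadata field; each group
    # becomes one held-out subset U_i of the partition.
    groups = defaultdict(list)
    for func in suite:
        groups[func[axis]].append(func)
    return list(groups.values())

# Toy suite: (functionality name, functionality class, test type).
suite = [
    ("neg. of positive", "Negation", "MFT"),
    ("neg. of negative", "Negation", "MFT"),
    ("change names",     "NER",      "INV"),
]
by_func  = partition(suite, 0)  # n_func singleton subsets
by_class = partition(suite, 1)  # one subset per functionality class
by_type  = partition(suite, 2)  # one subset per test type
```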

Metrics
For model comparison purposes, BELUGA outputs the average pass rate (the arithmetic mean of the n_func pass rates) as the aggregated metric for test suite correctness. Since one of the motivations for behavioural testing is its fine-grained results, BELUGA also reports the individual pass rates.
In total, BELUGA computes five aggregated suite scores, each corresponding to an evaluation scenario:

s^T_standard: The baseline score of a model trained only on i.i.d. data: if the other scores are lower, then fine-tuning on test suite data degraded overall model performance.

s^T_seen: Performance on seen functionalities. This score can give a false sense of model performance since it does not account for model overfitting to the seen functionalities: spurious correlations within functionalities and functionality classes can be exploited to get deceivingly high scores.

s^T_func: Measure of generalisation to unseen functionalities. It is a more realistic measure of model quality, but since functionalities correlate within a functionality class, the score may still offer a false sense of quality.

s^T_class: Measure of generalisation to unseen functionality classes. This is the most challenging generalisation setting, as the model cannot exploit correlations within functionalities and functionality classes.
s^T_type: Measure of generalisation to unseen test types.

Comprehensive generalisation score: Since performance on i.i.d. data and passing the behavioural tests are both important, BELUGA provides the harmonic mean of the aggregated pass rates and the i.i.d. score as an additional metric for model comparison:

G = 2 · s_iid · s^T / (s_iid + s^T). (8)

There are five G scores (G_standard, G_seen, G_func, G_class and G_type), each corresponding to plugging either s^T_standard, s^T_seen, s^T_func, s^T_class or s^T_type into Eq. 8. This aggregation makes implicit importance assignments explicit: on the one hand, the harmonic mean ensures that both i.i.d. and suite performance are important due to its sensitivity to low scores; on the other, different phenomena are weighted differently, as i.i.d. performance has a bigger influence on the final score than each single functionality pass rate.
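A sketch of this aggregation, assuming the standard two-term harmonic mean:

```python
def g_score(s_iid, s_suite):
    # Harmonic mean of the i.i.d. score and an aggregated pass rate.
    # It is sensitive to low values: both terms must be high for a high G.
    return 2 * s_iid * s_suite / (s_iid + s_suite)
```

Note how a single near-zero term drags the score down, which is exactly the property the text appeals to.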

Experiments on cross-functional analysis

Tasks
We experiment with three classification tasks that correspond to the test suites made available by Ribeiro et al. (2020): sentiment analysis (SENT), paraphrase identification (PARA) and reading comprehension (READ). Tables 1 and 2 summarise and show representative examples from the i.i.d. and test suite datasets, respectively. Sentiment analysis (SENT): As the i.i.d. dataset for sentiment analysis, we use the Stanford Sentiment Treebank (SST-2) (Socher et al., 2013). We use the version made available in the GLUE benchmark (Wang et al., 2018), where the task is to assign binary labels (negative/positive sentiment) to sentences. The test set labels are not publicly available, so we split the original validation set in half as our validation and test sets. The canonical metric for the dataset is accuracy.
The SENT suite contains 68k MFTs, 9k DIRs and 8k INVs. It covers functionality classes such as semantic role labelling (SRL), named entity recognition (NER) and fairness. The MFTs were template-generated, while the DIRs and INVs were either template-generated or obtained from perturbing a dataset of unlabelled airline tweets. Therefore, there is a domain mismatch between the i.i.d. data (movie reviews) and the suite data (tweets about airlines).
There are also label mismatches between the two datasets: the suite contains an additional class for neutral sentiment, and the MFTs have the "not negative" label, which admits both positive and neutral predictions. We follow Ribeiro et al. (2020) and consider predictions with probability of positive sentiment within [1/3, 2/3] as neutral. There are two types of comparison for DIRs, regarding either sentiment or prediction confidence. In the former case, the prediction for a perturbed input is expected to be either not more negative or not more positive when compared with the prediction for the original input. In the latter, the confidence of the original prediction is expected to either not increase or not decrease, regardless of the sentiment. For example, when adding an intensifier ("really", "very") or a reducer ("a little", "somewhat"), the confidence of the original prediction should not decrease in the first case and not increase in the second. On the other hand, if a perturbation adds a positive or negative phrase to the original input, the positive probability should not go down (up) for the first (second) case.
More formally, each prediction ŷ is a two-dimensional vector where the first and second components are the confidence for negative (ŷ[0]) and positive (ŷ[1]) sentiment, respectively. Let c* denote the component with highest confidence in the original prediction: c* := argmax_j ŷ_0[j]. Then, the comparison function δ can take one of four forms (not more negative, not more positive, not more confident and not less confident):

δ(ŷ_0, ŷ_i) := ŷ_i[0] ≤ ŷ_0[0],
δ(ŷ_0, ŷ_i) := ŷ_i[1] ≤ ŷ_0[1],
δ(ŷ_0, ŷ_i) := ŷ_i[c*] ≤ ŷ_0[c*],
δ(ŷ_0, ŷ_i) := ŷ_i[c*] ≥ ŷ_0[c*],

each corresponding to an error measure ε_δ:

ε_δ(ŷ_0, ŷ_i) := max(0, ŷ_i[0] − ŷ_0[0]),
ε_δ(ŷ_0, ŷ_i) := max(0, ŷ_i[1] − ŷ_0[1]),
ε_δ(ŷ_0, ŷ_i) := max(0, ŷ_i[c*] − ŷ_0[c*]),
ε_δ(ŷ_0, ŷ_i) := max(0, ŷ_0[c*] − ŷ_i[c*]).

We compute the max because only test violations should be penalised.
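The four error measures translate directly into code. A sketch over two-component probability vectors, matching the negative/positive layout described above:

```python
def c_star(y0):
    # Index of the component with highest confidence in the original
    # prediction: 0 for negative, 1 for positive.
    return 0 if y0[0] >= y0[1] else 1

def err_not_more_negative(y0, yi):
    return max(0.0, yi[0] - y0[0])

def err_not_more_positive(y0, yi):
    return max(0.0, yi[1] - y0[1])

def err_not_more_confident(y0, yi):
    c = c_star(y0)
    return max(0.0, yi[c] - y0[c])

def err_not_less_confident(y0, yi):
    c = c_star(y0)
    return max(0.0, y0[c] - yi[c])
```

Each function is zero unless the corresponding expectation is violated, so only violations contribute to the loss.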
Paraphrase identification (PARA): We use Quora Question Pairs (QQP) (Iyer et al., 2017) as the i.i.d. dataset, where the task is to decide whether a pair of questions are duplicates. The DIRs are similar to MFTs: perturbed question pairs are either duplicate or not duplicate. For example, if two questions mention the same location and the perturbation changes the location in one of them, then the new pair is guaranteed not to be semantically equivalent.7 Thus, the comparison function δ checks if the perturbed predictions correspond to the expected label; the original prediction is not used for evaluation. So during training, we treat them as MFTs: we construct mini-batches of perturbed samples and corresponding labels and minimise the cross-entropy between predictions and labels.

7: The test cases from functionality "Order does matter for asymmetric relations" (e.g., Q1: Is Rachel faithful to Christian?, Q2: Is Christian faithful to Rachel?) were originally labelled as duplicates. This seems to be unintended, so we change their label to not duplicates.

Reading comprehension (READ):
The i.i.d. dataset for READ is the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016), composed of excerpts from Wikipedia articles with crowdsourced questions and answers. The task is, given a text passage (context) and a question about it, to extract the context span that contains the answer. Once again, the test set labels are not publicly available, and we repeat our splitting approach from SENT and PARA. The canonical metrics are exact string match (EM), the percentage of predictions that match ground truth answers exactly, and the more lenient F1 score, which measures average token overlap between predictions and ground truth answers.
The READ suite contains 10k MFTs and 2k INVs, with functionality classes such as vocabulary and taxonomy. The MFTs are template-generated, while the INVs are obtained from perturbing SQuAD data.
Invariance training in READ has one complication, since the task is to extract the answer span by predicting its start and end positions. Naively using the originally predicted positions would not work, because the answer position may have changed after the perturbation. For example, take the original context-question pair (C: Paul travelled from Chicago to New York, Q: Where did Paul travel to?) and perturb it so that Chicago is changed to Los Angeles. The correct answer for the original input is (5, 6) as the start and end (word) positions, yielding the span "New York". Applying these positions to the perturbed input would extract "to New". Instead, we only compare the model outputs for the positions that correspond to the common ground of original and perturbed inputs; in the example, the outputs for the tokens "Paul", "travelled", "from", "to", "New" and "York". We minimise the cross-entropy between this restricted set of outputs for the original and perturbed inputs. This penalises changes in prediction for equivalent tokens (e.g., the probability of "Paul" being the start of the answer is 0.1 for the original input but 0.15 for the perturbed one).
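The common-ground restriction can be sketched as selecting the token positions shared by the two contexts. This is a simplified, positionally aligned version: the paper's Chicago to Los Angeles perturbation changes the token count, so a real implementation needs a proper alignment, whereas the sketch uses a same-length substitution ("Boston", a hypothetical example) so a positional comparison suffices:

```python
def common_positions(orig_tokens, pert_tokens):
    # Keep positions whose tokens are identical in both inputs; positions
    # touched by the perturbation are excluded from the invariance loss.
    return [i for i, (a, b) in enumerate(zip(orig_tokens, pert_tokens))
            if a == b]

orig = ["Paul", "travelled", "from", "Chicago", "to", "New", "York"]
pert = ["Paul", "travelled", "from", "Boston", "to", "New", "York"]
```

The cross-entropy between start/end distributions would then be computed only over the returned positions.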

Generalisation methods
We use BELUGA to compare several techniques used to improve generalisation. L2: We apply a stronger-than-typical ℓ2-penalty coefficient of λ = 0.1.
Dropout: We triple the dropout rate for all fully connected layers and attention probabilities from the default value of 0.1 to 0.3.
LP: Instead of fine-tuning on suite data, we apply linear probing (LP), where the encoder parameters are frozen, and only the classification head parameters are updated.Previous work (Kumar et al., 2022) has found this to generalise better than full fine-tuning.
LP-FT: We experiment with linear probing followed by fine-tuning, which Kumar et al. (2022) have shown to combine the benefits of fine-tuning (in-distribution performance) and linear-probing (out-of-distribution performance).
Invariant risk minimisation (IRM) (Arjovsky et al., 2019), a framework for OOD generalisation that leverages different training environments to learn feature-label correlations that are invariant across the environments, under the assumption that such features are not spuriously correlated with the labels.
Group distributionally robust optimisation (Group-DRO) (Sagawa et al., 2020), an algorithm that minimises not the average training loss, but the highest loss across the different training environments.This is assumed to prevent the model from adopting spurious correlations as long as such correlations do not hold on one of the environments.
Fish (Shi et al., 2022), an algorithm for domain generalisation that maximises the inner product between gradients from different training environments, under the assumption that this leads models to learn features invariant across environments.
For the last three methods, we treat the different functionalities as different environments. For the IID+T and IID→(IID+T) settings, we consider the i.i.d. data as an additional environment. In the multi-step training configurations (IID→T and IID→(IID+T)), we only apply the techniques during the second step: when training only with i.i.d. data we employ vanilla gradient descent, since we are interested in the generalisation effect of using suite data.
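As an illustration of the environment-based objectives, the Group-DRO idea reduces to optimising the worst per-environment loss instead of the average. The per-functionality losses here are placeholder numbers, not results from the paper:

```python
def erm_loss(env_losses):
    # Standard empirical risk: average loss across environments.
    return sum(env_losses) / len(env_losses)

def group_dro_loss(env_losses):
    # Group-DRO objective: the highest loss across environments, so the
    # worst-performing functionality (environment) drives the update.
    return max(env_losses)
```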

Experimental setting
We use pre-trained BERT models (Devlin et al., 2019) for all tasks. We follow Ribeiro et al. (2020) and use BERT-base for SENT and PARA and BERT-large for READ. All our experiments use AdamW (Loshchilov and Hutter, 2019) as the optimiser. When fine-tuning on i.i.d. data, we use the same hyper-parameters as the ones reported for models available on Hugging Face's model zoo. When fine-tuning on test suite data, we run a grid search over a range of values for batch size, learning rate and number of epochs, and select the configuration that performed best on T_val. To maintain the same compute budget across all methods, we do not tune method-specific hyper-parameters. We instead use values shown to work well in the original papers and previous work (Dranker et al., 2021).

Results and observations

I.i.d. and generalisation scores

Seen performance: Fine-tuning on test suite data led to improvements for all tasks: the G_seen scores are generally higher than the baseline scores (first row in Table 3). That is, models were able to generalise across test cases from covered functionalities (from T_train to T_test) while retaining reasonable i.i.d. data performance. In some specific training configuration-method combinations this was not the case; we discuss this below when we compare methods and report the degenerate solutions.
Generalisation performance: For any given configuration-method pair, G_seen is higher than G_func, G_class and G_type, indicating a generalisation gap between seen and unseen functionalities. Furthermore, for all tasks, average (across methods) G_func is higher than average G_class, which is higher than average G_type, indicating that generalisation gets harder as one moves from unseen functionalities to unseen functionality classes and test types. This aligns with previous work (Luz de Araujo and Roth, 2022), in which hate speech detection models are found to generalise within, but not across, functionality classes.
Improvements over the IID baseline were task-dependent. Almost all configuration-method pairs achieved G_func (22 of 24) and G_class (20 of 24) scores significantly higher than the IID baseline for SENT, with improvements over the baseline as high as 18.44 and 12.84 percentage points (p.p.) for each metric, respectively. For PARA, improving over G_class proved much harder: only seven configuration-method pairs could do so. Increases in score were also less pronounced, the best G_func and G_class scores being 6.91 and 2.19 p.p. above the baseline. READ was the task with both rarer and subtler improvements, with a third of the approaches significantly improving functionality scores and none significantly improving functionality class scores.

That said, the environment-based generalisation algorithms (IRM, Group-DRO and Fish) struggled in the IID+T configuration, underperforming when compared with the other methods. We hypothesise that in these scenarios models simply do not see enough i.i.d. data, as we treat it as just one more environment among many others (as many as 54 in PARA). LP also achieves subpar scores, even though i.i.d. data is not undersampled. The problem here is the frozen feature encoder, as BERT features are not good enough without fine-tuning on i.i.d. task data, as was done in the other configurations, with clear benefits for LP.
No individual method performed best for all scores and tasks. That said, IID→(IID+T) with L2, LP, LP-FT or Fish was able to achieve G_func and G_class scores higher than or not significantly different from the baseline in all tasks, though IID→(IID+T) with dropout was the best when the score is averaged over all tasks and generalisation measures. Considering this same metric, IID→(IID+T) was the most consistently good configuration, with all methods improving over the average IID baseline.

DIR applicability
We have found that DIRs, as used for SENT, have limited applicability for both testing and training. The reason is that models are generally very confident about their predictions: the average prediction confidence on the test suite is 0.97 for the IID model. On the evaluation side, this makes some DIRs impossible to fail: the confidence cannot get any higher, so "not more confident" expectations are trivially satisfied. On the training side, DIRs do not add much of a training signal, as the training loss is near zero from the very beginning. 11
We see an additional problem with DIRs in the SENT setting: they conflate prediction confidence with sentiment intensity. Though prediction confidence may correlate with sentiment intensity, uncertainty also signals difficulty and ambiguousness (Swayamdipta et al., 2020). Consequently, sentiment intensity tests may not measure the intended phenomena. One alternative would be to disentangle the two factors: using prediction values only for confidence-based tests, and sentiment intensity tests only for sentiment analysis tasks with numeric or fine-grained labels.
Table 3: I.i.d. test set performance and generalisation measures (in %) of each examined method for all tasks and training configurations. The Avg. column shows the average G score across all tasks and generalisation measures. We show scores significantly above and below the IID baseline (first row; suite scores are G standard) in green and red, respectively, and write the best score for each column in bold weight. When the score is not significantly different from the baseline counterpart, we show it in black. We use two-tailed binomial testing when comparing the i.i.d. performances, and randomisation testing (Yeh, 2000) when comparing G scores, setting 0.05 as the significance level.
11 Confidence regularisation (Yu et al., 2021) could potentially increase DIRs' usefulness for training and evaluation purposes.
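As an illustration of why near-saturated confidence makes such DIRs trivial, a "not more confident" expectation could be checked as in the following sketch (the helper name and interface are our own, not from the paper's implementation):

```python
def passes_not_more_confident(orig_probs, pert_probs, tol=0.0):
    """Check a 'not more confident' DIR: after a perturbation, the model's
    confidence in the originally predicted class should not increase.

    orig_probs, pert_probs: per-class probability vectors.
    tol: slack allowed before the expectation counts as failed.
    """
    # class predicted for the original input
    c_star = max(range(len(orig_probs)), key=lambda i: orig_probs[i])
    # the test passes if confidence in that class did not grow
    return pert_probs[c_star] <= orig_probs[c_star] + tol
```

With an original confidence of 0.97, the perturbed confidence has almost no room to exceed it, so the test passes nearly by construction, which is the limitation discussed above.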

Negative transfer
Though G class scores are generally lower than G func scores, this is not always the case for the pass rates of individual functionalities. When there are contrastive functionalities within a class, i.e. functionalities whose test cases have similar surface form but entirely different expected behaviours, it is very difficult to generalise from one to the other.
For example, the SRL class in PARA contains the functionalities "order does not matter for symmetric relations" and "order does matter for asymmetric relations" (functionalities 41 and 42 in the second row of Fig. 1). Example cases for the first and second functionalities would include (Q1: Is Natalie dating Sophia? Q2: Is Sophia dating Natalie?) and (Q1: Is Matthew lying to Nicole? Q2: Is Nicole lying to Matthew?), respectively. Though their surface forms are similar, they have opposite labels: duplicate and not duplicate.
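The contrast between the two functionalities can be made concrete with a small template sketch (the predicate lists below are illustrative, not the suite's actual templates):

```python
# Illustrative templates for two contrastive PARA functionalities.
SYMMETRIC = ["dating", "married to"]       # order does not matter -> duplicate
ASYMMETRIC = ["lying to", "working for"]   # order matters -> not duplicate

def make_pair(name_a, name_b, predicate):
    """Generate a question pair with swapped argument order and its label."""
    q1 = f"Is {name_a} {predicate} {name_b}?"
    q2 = f"Is {name_b} {predicate} {name_a}?"
    label = "duplicate" if predicate in SYMMETRIC else "not duplicate"
    return q1, q2, label
```

Both functionalities produce pairs with the same surface template; only the predicate determines the label, which is exactly the spurious cue a model can latch onto.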
To compute s T func, a model is trained with samples from one functionality and evaluated on samples from the other. Consequently, the surface form will be spuriously correlated with the label seen during training, and models may blindly assign that label to the question pairs that fit the template. This works well for the seen functionality, but samples from the unseen one are entirely misclassified. Conversely, when computing the s T class score, the model will not have been trained on either of the functionalities and will not have had the chance to adopt the heuristic, leading to better unseen pass rates.
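The splitting scheme behind this evaluation can be sketched as a leave-one-functionality-out generator (a simplification of the paper's setup, with our own function name):

```python
def leave_one_functionality_out(suite):
    """Yield cross-functional splits: train on all functionalities except
    one and evaluate on the held-out one.

    suite: dict mapping functionality name -> list of test cases.
    Yields (train_cases, held_out_name, held_out_cases) tuples.
    """
    for held_out in suite:
        # pool the cases of every functionality except the held-out one
        train = [case for func, cases in suite.items()
                 if func != held_out for case in cases]
        yield train, held_out, suite[held_out]
```

A surface-form heuristic fit on the training pool then gets no chance to adapt to the held-out functionality, which is what separates genuine generalisation from memorisation of the seen templates.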

Degenerate solutions
Settings where the G type score is higher than the baseline are much rarer than for the other measures, happening only in one case for SENT (IID→T with dropout) and never for READ. One explanation is that training only on perturbation-based tests (with no MFTs) can lead to degenerate solutions, such as passing all tests by always predicting the same class.
To assess whether that was the case, we examined the predictions on the SST-2 test set of the IID→T vanilla model fine-tuned only on DIRs and INVs. We found that 95.18% of the i.i.d. data points were predicted as negative, though the ground truth frequency for that label is 47.25%. When examining the predictions for MFTs, the results are even starker: 0.29% of the predictions were negative, with the ground truth frequency being 43.42%. These results show that the model has indeed adopted the degenerate solution. Interestingly, it predicts different classes depending on the domain, almost always predicting negative for i.i.d. data and positive for suite data.
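This kind of sanity check amounts to comparing predicted-label frequencies with the ground truth. A minimal sketch (helper names and the 30 p.p. threshold are our own choices, not from the paper):

```python
from collections import Counter

def label_frequencies(labels):
    """Relative frequency of each label, in percent."""
    counts = Counter(labels)
    total = len(labels)
    return {lab: 100 * n / total for lab, n in counts.items()}

def looks_degenerate(pred_freqs, true_freqs, label, gap=30.0):
    """Flag a possible degenerate solution when the predicted frequency of
    `label` deviates from the ground truth by more than `gap` p.p."""
    return abs(pred_freqs.get(label, 0.0) - true_freqs.get(label, 0.0)) > gap
```

For example, 95.18% predicted negative against a 47.25% ground-truth frequency triggers the flag, whereas PARA's 58.70% versus 63.25% (discussed below) would not.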
The gap between G class and G type scores in PARA is not as severe, possibly due to the supervised signal in its DIRs. Since these tests expect inputs to correspond to specific labels, as opposed to DIRs for SENT, which check for changes in prediction confidence, always predicting the same class would not be a good solution. Indeed, when examining the predictions on the QQP test set of the vanilla IID→T model fine-tuned with no MFT data, we see that 58.70% of question pairs are predicted as not duplicate, which is similar to the ground truth frequency of 63.25%. The same is true when checking the predictions for MFTs: 64.47% of the data points are predicted as not duplicate, against a ground truth frequency of 52.46%.
The READ scenario is more complex: instead of categories, spans are extracted. Manual inspection showed that some IID→T models adopted degenerate solutions (e.g. extracting the first word, a full stop or the empty span as the answer), even when constrained by the MFT supervised signal. Interestingly, the degenerate solutions were applied only to INV tests (where such invariant predictions work reasonably well) and i.i.d. examples (where they do not). On the other hand, these models were able to handle the MFTs well, obtaining near-perfect scores and achieving high s T seen scores even though i.i.d. performance is catastrophic. The first grid of the third row in Fig. 1 illustrates this: the high s T seen scores are shown in the first column, and the MFT pass rates in the columns with blue x-axis numbers.
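The degenerate extractions we observed can be flagged with simple heuristics; the following sketch encodes the three patterns mentioned above (the function is illustrative and not part of BELUGA):

```python
def degenerate_span(context, answer):
    """Heuristically flag the degenerate span extractions observed for READ:
    the empty span, a lone full stop, or just the first word of the context."""
    answer = answer.strip()
    if answer in ("", "."):
        return True
    words = context.split()
    first_word = words[0] if words else ""
    return answer == first_word
```

Such a check only flags candidates for manual inspection; a first-word answer can occasionally be correct, so it is a screening heuristic rather than a verdict.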

Summary interpretation of the results
Figure 1 supports fine-grained analyses that consider performance on individual functionalities in each generalisation scenario. One can read it horizontally to assess the functionality pass rates for a particular method. For example, the bottom left grid, representing seen results for READ, shows that IID+T with LP behaves poorly on almost all functionalities, confirming the importance of fine-tuning BERT pre-trained features (§ 5.2).
Alternatively, one can read it vertically to assess performance and generalisation trends for individual functionalities. For example, models generalised well to functionality 21 of the READ suite (second grid of the bottom row), with most methods improving over the IID baseline. However, under the functionality class evaluation scenario (third grid of the bottom row), improvements for functionality 21 are much rarer. That is, the models were able to generalise to functionality 21 only as long as they were fine-tuned on cases from functionalities of the same class (20 and 22).12
Such fine-grained analyses point the way to more targeted explorations of generalisation (e.g. why do models generalise to functionality 21 but not to functionality 20?), which can guide subsequent data annotation, selection and creation efforts, and shed light on model limitations.
Table 3: For i.i.d. results, we refer to the SST-2, QQP and SQuAD columns. These show that the suite-augmented configurations and methods (all rows below and including IID→T Vanilla) generally hurt i.i.d. performance. However, improvements can be found for some methods in the IID+T and IID→(IID+T) configurations. Takeaway: fine-tuning on behavioural tests degrades general model performance, which can be mitigated by jointly fine-tuning on i.i.d. samples and behavioural tests.
For performance on seen functionalities, we refer to the G seen columns. Generalisation scores for unseen functionalities, functionality classes and test types can be found in the G func, G class and G type columns. Across all tasks, training configurations and methods, the G seen scores are higher than the others. Takeaway: evaluating only on the seen functionalities (Liu et al., 2019; Malon et al., 2022) is overoptimistic: improving performance on seen cases may come at the expense of degradation on unseen cases. This is detected by the lower generalisation scores.
Previous work on generalisation in behavioural learning (Luz de Araujo and Roth, 2022; Rozen et al., 2019) corresponds to the IID→T Vanilla row. It shows deterioration of i.i.d. scores, poor generalisation in some cases, and lower average performance compared with the IID baseline. However, our experiments with additional methods (all rows below IID→T Vanilla) show that some configuration-method combinations improve the average performance. Takeaway: while naive behavioural learning generalises poorly, more sophisticated algorithms can lead to improvements. BELUGA detects and measures further algorithmic improvements.

Related work
Traditional NLP benchmarks (Wang et al., 2018, 2019) are composed of text corpora that reflect the naturally occurring language distribution, which may fail to sufficiently capture rarer but important phenomena (Belinkov and Glass, 2019). Moreover, since these benchmarks are commonly split into identically distributed train and test sets, spurious correlations in the former will generally hold for the latter. This may lead to the obfuscation of unintended behaviours, such as the adoption of heuristics that work well for the data distribution but not in general (Linzen, 2020; McCoy et al., 2019). To account for these shortcomings, complementary evaluation methods have been proposed, such as dynamic benchmarks (Kiela et al., 2021) and behavioural test suites (Kirk et al., 2022; Röttger et al., 2021; Ribeiro et al., 2020).
A line of work has explored how training on challenge and test suite data affects model performance by fine-tuning on examples from specific linguistic phenomena and evaluating on other samples from the same phenomena (Malon et al., 2022; Liu et al., 2019). This is equivalent to our seen evaluation scenario, and thus cannot distinguish between models with good generalisation and those that have overfitted to the seen phenomena. We account for that with our additional generalisation measures, computed using only data from held-out phenomena.
Other efforts have also used controlled data splits to examine generalisation: McCoy et al. (2019) trained and evaluated on data from disjoint sets of phenomena relevant for Natural Language Inference (NLI); Rozen et al. (2019) split challenge data according to sentence length and constituency parse tree depth, creating a distribution shift between training and evaluation data; Luz de Araujo and Roth (2022) employ a cross-functional analysis of generalisation in hate speech detection. Though these works address the issue of overfitting to seen phenomena, their analyses are restricted to specific tasks and training configurations. Our work gives a more comprehensive view of generalisation in behavioural learning by examining different tasks, training configurations, test types and metrics. Additionally, we use this setting as an opportunity to compare the generalisation impact of both simple regularisation mechanisms and state-of-the-art domain generalisation algorithms.

Conclusion
We have presented BELUGA, a framework for cross-functional analysis of generalisation in NLP systems that both makes explicit the desired system traits and allows for quantifying and examining several axes of generalisation. While in this work we have used BELUGA to analyse data from behavioural suites, it can be applied in any setting where one has access to data structured into meaningful groups (e.g. demographic data, linguistic phenomena, domains).
We have shown that, while model performance on seen phenomena greatly improves after fine-tuning on test suite data, the generalisation scores reveal a more nuanced picture, in which the actual benefit is less pronounced and depends on the task and the training configuration-method combination. We have found the IID→(IID+T) configuration to yield the most consistent improvements. Conversely, some methods struggle in the IID→T and IID+T settings by overfitting to the suite or underfitting i.i.d. data, respectively. In these cases, a model practically aces all tests yet fails badly on i.i.d. data, which reinforces the importance of considering both i.i.d. and test suite performance when comparing systems; BELUGA's aggregate scores account for this.
These results show that naive behavioural learning has unintended consequences, which the IID→(IID+T) configuration mitigates to some degree. There is still much room for improvement, though, especially if generalisation to unseen types of behaviour is desired. Through BELUGA, progress in that direction is measurable, and further algorithmic improvements might make behavioural learning an option to ensure desirable behaviours while preserving the general performance and generalisability of the resulting models. We do not recommend training on behavioural tests in the current technological state. Instead, we show a way to advance research on reconciling the qualitative guidance of behavioural tests with the desired generalisation of NLP models.
where c* := argmax ŷ_o and ŷ[c*] denotes the predicted probability for class c*. Evaluation: Given a model family Θ and a loss function ℓ : Θ × (X × Y) → R+, the standard learning goal is to find the model θ ∈ Θ that minimises the loss over the training examples, θ* = argmin_{θ ∈ Θ} Σ_{(x,y) ∈ D_train} ℓ(θ, (x, y)); model correctness is evaluated using one or more metrics over the examples in D_test.

Figure 1 :
Figure 1: Average and individual pass rates for all tasks, methods and training configurations. From first to third row: results for SENT, PARA and READ. From first to fourth column: seen evaluation, functionality generalisation, functionality class generalisation, and test type generalisation scores. The y-axis corresponds to all training configuration-method pairs; the x-axis shows the average functionality pass rate followed by the individual pass rates. The blue horizontal and vertical lines demarcate different training configurations and functionality classes, respectively. The colors on the x-axis designate the different test types: blue for MFTs, red for INVs and green for DIRs.
split into disjoint train, validation and test sets D_train, D_val and D_test. We also assume access to a behavioural test suite T, composed of m test cases {t_i} (i = 1, ..., m) partitioned into n_func disjoint functionalities {F_i} (i = 1, ..., n_func). Each functionality belongs to one of n_class functionality classes {C_i} (i = 1, ..., n_class), such that n_class < n_func < m. Each test case belongs to a functionality, t ∈ F_i, and is described by a pair (X, b), where X is a list with |X| inputs and the expectation function b : R^(|X|×|Y|) → {0, 1} takes a model's predictions for all |X| inputs and outputs 1 if the model behaves as expected and 0 otherwise. The taxonomy of test types follows Röttger et al. (2021). Invariance test (INV): INVs are designed to check for invariance to certain input transformations. The input list X consists of an original input x_o and |X|−1 perturbed inputs (x_i) derived from x_o; given model predictions Ŷ, the expectation is that the perturbations do not change the prediction. G type: measure of generalisation to unseen test types. This score is of a more technical interest: it can offer insights into how different training signals affect each other (e.g. whether training with MFTs supports performance on INVs and vice versa).
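The suite structure described above (test cases with expectation functions, grouped into functionalities, which in turn belong to classes) could be represented as in the following sketch; the type names are our own and not taken from the BELUGA codebase:

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class TestCase:
    inputs: List[str]                       # X: original (and perturbed) inputs
    expectation: Callable[[Sequence], int]  # b: predictions -> 1 (pass) / 0 (fail)

@dataclass
class Functionality:
    name: str
    functionality_class: str                # each F_i belongs to one class C_j
    cases: List[TestCase]

def invariance_expectation(predictions):
    """b for an INV test: pass iff all predictions match the original's."""
    return int(all(p == predictions[0] for p in predictions))
```

An INV test case would pair an original input and its perturbations with `invariance_expectation`, so the suite can be evaluated uniformly by calling each case's expectation function on the model's predictions.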

Table 1: Examples for each i.i.d. dataset. The number of train/validation/test samples is 67k/436/436, 363k/20k/20k and 87k/5k/5k for SST-2, QQP and SQuAD, respectively.
QQP: Q1: Who is king of sports? Q2: Who is the king? (Not duplicate)
QQP: Q1: How much does it cost to build an basic Android app in India? Q2: How much does it cost to build an Android app in India? (Duplicate)
SQuAD: C: Solar energy may be used in a water stabilisation pond to treat waste [...] although algae may produce toxic chemicals that make the water unusable. Q: What is a reason why the water from a water stabilisation pond may be unusable? (algae may produce toxic chemicals)
as the i.i.d. dataset. It is composed of question pairs from the website Quora with annotations for semantic equivalence.
READ: Robustness, typos should not change prediction (INV). C: Somewhere around a billion years ago, a free-living cyanobacterium entered an early eukaryotic cell [...] Q: What kind → Wha tkind of cell did cyanobacteria enter long ago? (Same prediction)
READ: Negation, negations in question matter for prediction (MFT). C: Maria is an intern. Austin is an editor. Q: Who is not an intern? (Austin)

Table 2 :
Examples for each test suite. We color-code perturbations as red/green for deletions/additions. The number of train/validation/test samples is 89k/44k/44k, 103k/51k/51k and 35k/17k/17k for the SENT, PARA and READ test suites, respectively.
whether a pair of questions is semantically equivalent (duplicate or not duplicate). The test set labels are not available, hence we split the original validation set into two sets for validation and testing. The canonical metrics are accuracy and the F1 score of the duplicate class. The PARA suite contains 46k MFTs, 13k DIRs and 3k INVs, with functionality classes such as coreference resolution, logic and negation. All MFTs are template generated, 7 while the INVs and DIRs are obtained from perturbing QQP data.

Table 3
10 SENT: 85.97/78.15/69.54, PARA: 75.04/72.22/71.55, READ: 49.23/46.66/43.46.
generalisation. Improvements in each case were as high as 4.70 and 0.51 p.p. over the baseline.
I.i.d. performance: Fine-tuning on test suite data only (IID→T configuration) reduced performance on all tasks' i.i.d. test sets. Fine-tuning on both suite and i.i.d. examples (IID+T and IID→(IID+T)) helped retain, or improve, performance in some cases, but decreases were still more common. The IID→(IID+T) configuration was the most robust regarding i.i.d. scores, with an average change (compared to the IID baseline) of −1.43/−0.50/−1.73 p.p. for SENT/PARA/READ.