The Impact of Word Splitting on the Semantic Content of Contextualized Word Representations

When deriving contextualized word representations from language models, a decision needs to be made on how to obtain one for out-of-vocabulary (OOV) words that are segmented into subwords. What is the best way to represent these words with a single vector, and are these representations of worse quality than those of in-vocabulary words? We carry out an intrinsic evaluation of embeddings from different models on semantic similarity tasks involving OOV words. Our analysis reveals, among other interesting findings, that the quality of representations of words that are split is often, but not always, worse than that of the embeddings of known words. Their similarity values, however, must be interpreted with caution.


Introduction
With the appearance of pre-trained language models (PLMs) such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), there has been an interest in extracting, analyzing, and using contextualized word representations derived from these models, for example to understand how well they represent the meaning of words (Garí Soler et al., 2019) or to predict diachronic semantic change (Giulianelli et al., 2020).
Most modern PLMs, however, operate at the subword level: they rely on a subword tokenization algorithm to represent their input, like WordPiece (Schuster and Nakajima, 2012; Wu et al., 2016) or Byte Pair Encoding (BPE) (Sennrich et al., 2016). This way of representing words has advantages: with a fixed, reasonably-sized vocabulary, models can account for out-of-vocabulary words by splitting them into smaller units. When it comes to obtaining representations for words, a subword vocabulary implies that not all words are created equal. Words that have to be split ("split-words") need special treatment, different from words that have a dedicated embedding ("full-words").
There are reasons to believe that the semantics of split-words is more poorly represented than that of full-words. First, it is generally assumed that longer tokens tend to contain more semantic information about a word (Church, 2020) because they are more discriminative. The subword representations making up split-words must be able to encode the semantics of all the words they can be part of. It has also been noted that tokenization algorithms tend to split words in a way that disregards language morphology (Hofmann et al., 2021), and that some of them favor segmentations with more subword units than necessary (Church, 2020). In fact, a more morphology-aware segmentation seems to correlate with better results on downstream NLP tasks (Bostrom and Durrett, 2020).
In this study, we investigate the impact that word splitting (and the way we decide to deal with it) has on the quality of contextualized word representations. We rely on the task of lexical semantic similarity estimation, which has traditionally been used to intrinsically evaluate different types of word representations (Landauer and Dumais, 1997; Hill et al., 2015). We set out to answer two main questions:
• What is the best strategy to combine contextualized subword representations into a contextualized word-level representation?
• Given a good strategy, how does the quality of split-word representations compare to that of full-word representations?
We design experiments that allow us to answer these and related questions for BERT and other English models. Contrary to previous work, where the quality of the lexico-semantic knowledge encoded in word representations is analyzed regardless of the words' tokenization (Wiedemann et al., 2019; Bommasani et al., 2020; Vulić et al., 2020), we analyze the quality of the similarity estimates for split- and full-words separately, and do so in both an inter-word and a within-word similarity setting. See Figure 1 for an example of an experimental setting we consider. We uncover several interesting, and sometimes unexpected, tendencies: for example, that when it comes to polysemous nouns, OOV words are better represented than in-vocabulary ones; and that similarity values between two split-words are generally higher than between two full-words. We additionally contribute a new WordNet-based word similarity dataset with a large representation of split-words.

Background
Subword tokenization algorithms were first proposed by Schuster and Nakajima (2012) and became widespread after the adaptation of Byte Pair Encoding to word segmentation (Gage, 1994; Sennrich et al., 2016). Given a specified vocabulary size, these algorithms create a vocabulary such that the most frequent character sequences in a given corpus can be represented with a single token. Unambiguous detokenization (i.e., recovering the original sequence) can be ensured in different ways. For example, when BERT's tokenizer splits an unknown word into multiple subwords, all but the first are marked with "##"; we will refer to these as "sub-tokens" (as opposed to "full-tokens", which do not start with "##").
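To make the "##" convention concrete, here is a minimal sketch of greedy longest-match-first segmentation in the WordPiece style. The toy vocabulary below is invented for illustration; real tokenizers use learned vocabularies of roughly 30,000 tokens, and actual implementations differ in details.

```python
def wordpiece_tokenize(word, vocab):
    """Greedy longest-match-first segmentation, WordPiece-style.

    Continuation pieces are stored in the vocabulary with a '##' prefix,
    which guarantees unambiguous detokenization.
    """
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        piece = None
        while end > start:
            candidate = word[start:end]
            if start > 0:                   # non-initial pieces carry '##'
                candidate = "##" + candidate
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:                   # no piece matches: unknown word
            return ["[UNK]"]
        tokens.append(piece)
        start = end
    return tokens

# Toy vocabulary (an assumption, for illustration only)
vocab = {"over", "night", "##night", "stay", "##s"}
print(wordpiece_tokenize("overnight", vocab))  # ['over', '##night']
print(wordpiece_tokenize("stays", vocab))      # ['stay', '##s']
```

Note how the same surface string ("night") becomes a full-token or a sub-token depending on its position, which is the distributional difference discussed in Section 2.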
Subword tokenization presented itself as a good compromise between character-level and word-level models, balancing the trade-off between vocabulary size and sequence length. Character-based representations are generally better than subword-based models at morphology, part-of-speech (PoS) tagging, and at handling noisy input and out-of-domain words; but the latter are generally better at handling semantics and syntax (Keren et al., 2022; Durrani et al., 2019; Li et al., 2021a). Because of these advantages, most modern PLMs rely on subword tokenization: BERT uses WordPiece; RoBERTa, XLM (Conneau and Lample, 2019), GPT-2 (Radford et al., 2019) and GPT-3 (Brown et al., 2020) use BPE or some variant; T5 (Raffel et al., 2020) relies on SentencePiece (Kudo and Richardson, 2018).
Several works have pointed out that splitting words may be detrimental for certain tasks, especially if segmentation is not done in a linguistically correct way. Bostrom and Durrett (2020) compare two subword tokenization algorithms, BPE and unigramLM (Kudo, 2018), and find that the latter, which aligns better with morphology, also yields better results on question answering, textual entailment, and named entity recognition. Work on machine translation has shown benefits from using linguistically-informed tokenization (Huck et al., 2017; Mager et al., 2022) as well as algorithms that favor segmentation into fewer tokens (Gallé, 2019). In fact, Rust et al. (2021) note that multilingual BERT's (mBERT) tokenizer segments much more in some languages than in others, and they demonstrate that a dedicated monolingual tokenizer plays a crucial role in mBERT's performance on numerous NLP tasks. Similarly, Mutuvi et al. (2022) show that increased fertility (i.e., the average number of tokens generated for every word) and the number of split-words correlate negatively with mBERT's performance on epidemiologic watch through multilingual event extraction. However, the effect that (over)splitting words, or doing so disregarding their morphology, has on similarity remains unclear.
Nayak et al. (2020) explore a similar question to ours using the BERT model, but compare the similarity between a word's representation and that of its sub-token counterpart (e.g., night with ##night). We argue, however, that even if they represent the same string, sub-tokens and full-tokens have different distributions, and the similarity between them is not necessarily expected to be high. Their experiments additionally involve a modification of the tokenizer. We instead compare representations of whole words using the models' default tokenization, and we work with representations of words extracted from sentential contexts rather than in isolation.
Multiple approaches have been proposed to improve on the weak aspects of vanilla subword tokenization, such as the representation of rare, out-of-domain, or misspelled words (Schick and Schütze, 2020b; Hong et al., 2021; Benamar et al., 2022), and its poor alignment with morphological structure (Hofmann et al., 2021). Hofmann et al. (2022) devise FLOTA, a simple segmentation method that can be used with pre-trained models without the need to re-train a new model or tokenizer. It consists in segmenting words by prioritizing the longest available substrings, omitting part of the word in some cases. FLOTA was shown to match the actual morphological segmentation of words more closely than the default BERT, GPT-2 and XLNet tokenizers, and yielded improved performance on a topic-based text classification task. El Boukkouri et al. (2020) propose CharacterBERT, a modified BERT model with a character-level CNN intended for building representations of complex tokens. The model improves BERT's performance on several tasks in the medical domain. We test the FLOTA method and the CharacterBERT model in our experiments to investigate their advantages when it comes to lexical semantic similarity.
The split-words in our study are existing words (we do not include misspelled terms) with a generally low frequency. There has been extensive work in NLP focused on improving representations of rare words, which often yield lower-quality predictions than more frequent words (Luong et al., 2013; Bojanowski et al., 2017; Herbelot and Baroni, 2017; Prokhorov et al., 2019), also in BERT (Schick and Schütze, 2020b). Our goal is not to study the quality of rare word representations per se, but rather the effect of the splitting procedure on the quality of similarity estimates. Given the strong link between splitting and frequency, we also include an analysis controlling for this factor.

Similarity Tasks and Data
We evaluate the representations' lexical semantic content on two similarity tasks. In this section we describe the creation of an inter-word similarity dataset (§3.1) as well as the dataset used in our within-word similarity experiments (§3.2).

Inter-Word: the SPLIT-SIM Dataset
We want a dataset annotated with inter-word similarities which allows us to compare similarity estimation quality in three different scenarios: when neither word in a pair is split (0-SPLIT), when only one word in a pair is split (1-SPLIT), and when both words are split (2-SPLIT). We refer to these situations, defined with respect to a given tokenizer, as "split-types".
Factors affecting similarity It is well known that, even in out-of-context (OOC) settings (i.e., when comparing word types and not word instances), BERT similarity predictions are more reliable when obtained from a context than in isolation (Vulić et al., 2020). However, as shown in Garí Soler and Apidianaki (2021), representations reflect the sense distribution found in the contexts used, as well as the words' degree of polysemy. Additionally, it is desirable to take PoS into account, because the quality of similarities obtained with BERT varies across PoS (Garí Soler et al., 2022). To control for all these factors affecting similarity estimates, we conduct separate analyses for words of different nature: monosemous nouns (M-N), monosemous verbs (M-V), polysemous nouns (P-N) and polysemous verbs (P-V). The number of senses of a word with a specific PoS is determined with WordNet (Fellbaum, 1998).
Limitations of existing datasets Existing context-dependent (i.e., not OOC) inter-word similarity datasets, like CoSimLex (Armendariz et al., 2020) and the Stanford Contextual Word Similarity dataset (SCWS) (Huang et al., 2012), do not have a large enough representation of split-words: with BERT's default tokenization, 97% and 85% of inter-word pairs, respectively, are of type 0-SPLIT. OOC word similarity datasets do not meet our criteria either. In SimLex-999 (Hill et al., 2015) and WS353 (Agirre et al., 2009), 96% and 95% of pairs are 0-SPLIT. CARD-660 (Pilehvar et al., 2018), which specifically targets rare words, has a better distribution of split-types, but it contains a large number of multi-word expressions (MWEs) and lacks PoS information. The Rare Word (RW) dataset (Luong et al., 2013) also specializes in rare words and has a larger coverage of 1- and 2-SPLIT pairs, but we do not use it because of its low inter-annotator agreement and the annotation consistency problems described in Pilehvar et al. (2018).
Therefore, and since it is more convenient to obtain similarity annotations out of context rather than in context, we create a dataset of OOC word similarity, SPLIT-SIM. It consists of four separate subsets, one for each type of word. Each subset has a balanced representation of split-types.

Word selection and sentence extraction
We use WordNet to create SPLIT-SIM. We first identify all words in WordNet which are not MWEs, numbers or proper nouns, and which are at least two characters long. After this filtering, we find 28,563 monosemous nouns, 12,903 polysemous nouns, 3,888 monosemous verbs and 4,518 polysemous verbs.
We search for sentences containing these words in the c4 corpus (Raffel et al., 2020), from which we will derive contextualized word representations. We PoS-tag sentences using nltk (Bird et al., 2009). Importantly, we only select sentences that contain the lemma form of a word with the correct PoS. This ensures that a word will be tokenized in the same way (and belong to the same split-type) in all its contexts, and avoids BERT's word form bias (Laicher et al., 2021). We only keep words for which we could find at least ten sentences that are between 5 and 50 words long. If more were found, we randomly select ten sentences from among the first 100 occurrences.
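The selection criteria above can be sketched as a simple filter. Function and parameter names are ours, and the lemma check is reduced to exact token matching; the real pipeline additionally verifies the PoS tag with nltk.

```python
import random

def select_sentences(candidates, lemma, min_len=5, max_len=50,
                     min_count=10, pool_size=100, sample_size=10, seed=0):
    """Keep sentences of acceptable length that contain the exact lemma form,
    then sample from the first `pool_size` matches.

    Returns None when the word has too few usable sentences, in which case
    it is discarded from the dataset.
    """
    matches = []
    for sent in candidates:
        words = sent.split()
        if min_len <= len(words) <= max_len and lemma in words:
            matches.append(sent)
        if len(matches) == pool_size:   # stop after the first 100 occurrences
            break
    if len(matches) < min_count:
        return None
    rng = random.Random(seed)
    return rng.sample(matches, min(sample_size, len(matches)))
```

Requiring the exact lemma form in every selected sentence is what guarantees a word keeps a single split-type across all of its contexts.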
Pair creation We rely on WUP (Wu and Palmer, 1994), a WordNet-based similarity measure, as our reference similarity value. WUP similarity takes into account the depth (the path length to the root node) of the two senses to be compared (s1 and s2), as well as that of their "least common subsumer" (LCS). In general, the deeper the LCS is, the higher the similarity between s1 and s2. WUP similarities are only available for nouns and verbs. We prefer WUP over other WordNet path-based similarity measures like LCH (Leacock et al., 1998) and path similarity because it conveniently ranges from 0 to 1 and its distribution aligns with the intuition that most randomly obtained pairs would have a low semantic similarity. WUP is not as good as human judgments, but it correlates reasonably well with them (Yang et al., 2019a). Table 1 shows the measure's correlation with manual similarity judgments by PoS. We consider it to be a good enough approximation for our purposes of comparing performance across split-types and representation strategies. As an alternative non-WordNet-based similarity metric to compare against WUP, we also use the similarity of FastText embeddings (Bojanowski et al., 2017) as a control.
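WUP can be written as sim(s1, s2) = 2 · depth(LCS) / (depth(s1) + depth(s2)), where depth counts steps from the root. A minimal sketch over a toy taxonomy (the hierarchy below is invented for illustration; in practice one would use WordNet, e.g. via nltk):

```python
def depth(node, parent):
    """Depth of a node, with the root at depth 1."""
    d = 1
    while node in parent:       # walk up to the root
        node = parent[node]
        d += 1
    return d

def ancestors(node, parent):
    """The node itself plus its ancestors, ordered from deepest to root."""
    chain = [node]
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain

def wup(s1, s2, parent):
    """Wu-Palmer similarity: 2*depth(LCS) / (depth(s1) + depth(s2))."""
    a1 = ancestors(s1, parent)
    a2 = set(ancestors(s2, parent))
    lcs = next(n for n in a1 if n in a2)   # deepest common ancestor
    return 2 * depth(lcs, parent) / (depth(s1, parent) + depth(s2, parent))

# Toy taxonomy (invented): entity > animal > {dog, cat}; entity > artifact
parent = {"animal": "entity", "artifact": "entity",
          "dog": "animal", "cat": "animal"}
print(wup("dog", "cat", parent))       # 2*2/(3+3) ~ 0.667
print(wup("dog", "artifact", parent))  # 2*1/(3+2) = 0.4
```

The two calls illustrate the property used in the text: a deeper LCS ("animal") yields a higher similarity than a shallower one ("entity").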
We exhaustively pair all words in each subset and calculate their WUP similarity. We select a portion of all pairs ensuring that the full spectrum of similarity values is represented: for each split-type, we randomly sample the same number of word pairs in each 0.2-sized similarity score interval. Due to data availability, this number is different for each subset. For the creation of the dataset, the split-type is determined using BERT's default tokenization. Table 2 contains statistics on the full dataset composition. Example pairs from the dataset can be found in Table 3.

Controlling for frequency
In our experiments we also want to control for frequency, since split-words tend to be rarer than full-words. We calculate the frequencies of words in SPLIT-SIM with the wordfreq Python package (Speer, 2022) and report them in Table 4. Frequencies are low overall, especially those of monosemous split-words. To mitigate the potential effect of frequency differences, we find the narrowest possible frequency range that is still represented with enough word pairs in every split-type. We determine this range to be [2.25, 3.75). We create a smaller version of SPLIT-SIM, which we call "balanced", with pairs that include only words within this frequency interval. Another aspect to take into account is the difference in frequency between the two words in a pair, which we call ∆f. ∆f is highest in 1-SPLIT pairs (up to 2.19 in M-V, compared to 0.67 in the corresponding 0-SPLIT), but it is much lower overall in the balanced dataset because of the narrower frequency range.
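A sketch of the balancing step, assuming Zipf-scale frequencies like those returned by wordfreq's `zipf_frequency`. The toy frequency table is invented for illustration.

```python
def in_band(freq, lo=2.25, hi=3.75):
    """Zipf-frequency band used for the 'balanced' SPLIT-SIM subset."""
    return lo <= freq < hi

def balanced_pairs(pairs, zipf):
    """Keep only pairs whose two words both fall in the frequency band,
    and report the per-pair frequency gap (delta-f)."""
    kept = []
    for w1, w2 in pairs:
        if in_band(zipf[w1]) and in_band(zipf[w2]):
            kept.append((w1, w2, abs(zipf[w1] - zipf[w2])))
    return kept

# Invented Zipf frequencies, for illustration only
zipf = {"carry": 4.9, "haul": 3.4, "schlep": 2.3, "tote": 2.6}
print(balanced_pairs([("carry", "haul"), ("haul", "schlep"),
                      ("schlep", "tote")], zipf))
```

Here the pair containing "carry" (Zipf 4.9, above the band) is dropped, while the two pairs of mid-frequency words survive, each annotated with its ∆f.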

Within-Word
Similarly to the inter-word setting, for within-word similarity we want to distinguish between 0-, 1- and 2-SPLIT pairs. An important factor that can influence within-word similarity estimates is whether pairs compare the same word form (SAME) or different morphological forms of the word (DIFF). 1-SPLIT pairs are all necessarily of type DIFF, but 0- and 2-SPLIT pairs can be of either type (e.g., {carry} vs {carries}; {multi, ##ply} vs {multi, ##ply, ##ing}).
We choose the Word-in-Context (WiC) dataset (Pilehvar and Camacho-Collados, 2019) for its convenient representation of all split-types. WiC contains pairs of word instances that have the same (T) or a different (F) meaning. We use the training and development sets, whose labels (which we take as a reference) are publicly available. They consist of a total of 6,066 pairs, which we rearrange for our purposes. We use as training data all 0-SPLIT pairs found in the original training set. For evaluation, we use the 0-SPLIT pairs in the original development set, and all 1-SPLIT and 2-SPLIT pairs found in both sets. Table 5 contains details about the composition of the dataset, such as the proportion of T and F labels. Note that, again, numbers differ depending on the tokenizer used (BERT's or XLNet's).
WiC is smaller than SPLIT-SIM and offers a less controlled, but more realistic, environment. For example, 2-SPLIT pairs involve words with low frequency and few senses, which results in an overrepresentation of T pairs in this class. We did not use other within-word similarity datasets such as Usim (Erk et al., 2009, 2013) or DWUG (Schlechtweg et al., 2021), because they contain a small number of 1- and 2-SPLIT pairs (91 and 4 in Usim), or these involve very few distinct lemmas (14 and 12 in DWUG).

Models
We run all our experiments with representations extracted from the BERT (base, uncased) model in the transformers library (Wolf et al., 2020) and the general CharacterBERT model (hereafter CBERT). The two are trained on a comparable amount of tokens (3.3B and 3.4B, respectively), which include English Wikipedia. BERT is also trained on BookCorpus (Zhu et al., 2015), and CBERT on OpenWebText (Gokaslan and Cohen, 2019). For comparison, we also include ELECTRA base (Clark et al., 2020) and XLNet (base, cased) (Yang et al., 2019b) in our analysis. ELECTRA is trained on the same data as BERT and uses exactly the same architecture, tokenizer and vocabulary (30,522 tokens), but with a more efficient discriminative pre-training approach. XLNet relies on the SentencePiece implementation of unigramLM and has a 32,000-token vocabulary. It is a Transformer-based model pre-trained on 32.89B tokens with the task of Permutation Language Modeling. We choose these models because they are newer and better than BERT (e.g., on GLUE (Wang et al., 2018), among other benchmarks) and because of their wide use. XLNet allows us to investigate the effect of word splitting in models relying on different tokenizers. We experiment with all layers of the models. In inter-word experiments, a word representation is obtained by averaging the contextualized word representations from each of the 10 sentences.

Input Treatment
Here we describe the different ways in which input data is processed before feeding it to the models.
Tokenization We use the models' default tokenization. We additionally experiment with the FLOTA tokenizer (Hofmann et al., 2022) in combination with BERT. FLOTA has a hyperparameter controlling the number of iterations, k ∈ N. With lower k, portions of words are more likely to be omitted. We set k to 3, as this value obtained the best results on text classification (Hofmann et al., 2022).
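The longest-substring-first idea behind FLOTA can be sketched as follows. This is a simplified illustration, not the reference implementation: at each of (at most) k iterations the longest vocabulary entry found anywhere in the still-uncovered part of the word is selected, and characters left uncovered after k iterations are dropped, which is how parts of words can end up omitted. The toy vocabulary is invented.

```python
def flota_like(word, vocab, k=3):
    """Simplified sketch of FLOTA-style longest-substring-first segmentation."""
    chars = list(word)          # None marks already-covered positions
    found = []                  # (start_index, piece)
    for _ in range(k):
        best = None
        for start in range(len(chars)):
            for end in range(len(chars), start, -1):
                if any(c is None for c in chars[start:end]):
                    continue    # span overlaps an already-selected piece
                piece = "".join(chars[start:end])
                if piece in vocab and (best is None or len(piece) > len(best[1])):
                    best = (start, piece)
        if best is None:
            break
        start, piece = best
        found.append(best)
        for i in range(start, start + len(piece)):
            chars[i] = None
    return [p for _, p in sorted(found)]   # emit pieces in surface order

# Toy vocabulary, invented for illustration
vocab = {"under", "stand", "able", "und", "erst"}
print(flota_like("understandable", vocab))  # ['under', 'stand', 'able']
```

With k = 3 the word is fully covered by its three longest pieces; with k = 1 only "under" would survive and the rest of the word would be omitted.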
Lemmatization In the WiC dataset, the word instances to be compared may have different surface forms. One way of restricting the influence of word form on BERT representations is through lemmatization (Laicher et al., 2021). We replace the target word instance with its lemma before extracting its representation. We refer to this setting as LM. This procedure is not relevant for SPLIT-SIM, where all instances are already in lemma form.

Split-Words Representation Strategy
We compare different strategies for pooling a single word embedding from the representations of a split-word's multiple subwords.

Average (AVG)
The embeddings of all subwords forming a word are averaged to obtain a word representation. This is the most commonly used strategy for representing split-words (Wiedemann et al., 2019; Garí Soler et al., 2019; Liu et al., 2020; Montariol and Allauzen, 2021, inter alia). Bommasani et al. (2020) tested max, min and mean pooling, as well as using the representation of the last token. We only use mean pooling (AVG) from their work because they found it to work best for OOC word similarity.
Weighted average (WAVG) A word is represented with a weighted average of all its subword representations. Weights are assigned according to subword length: for example, a subword that makes up 70% of a word's characters is weighted with 0.7.

Longest (LNG)
Only the representation of the longest subword is used. Like WAVG, this approach reflects the intuition that longer pieces carry more information about the meaning of a word.
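The three pooling strategies can be sketched with plain-Python vectors (the 2-d toy embeddings below are invented; real contextualized embeddings have 768 dimensions in BERT base):

```python
def avg(vectors):
    """Mean-pool subword embeddings (AVG)."""
    n = len(vectors)
    return [sum(dims) / n for dims in zip(*vectors)]

def wavg(pieces, vectors):
    """Length-weighted average (WAVG): a subword covering 70% of the
    word's characters contributes with weight 0.7."""
    lengths = [len(p.lstrip("#")) for p in pieces]   # ignore '##' markers
    total = sum(lengths)
    weights = [l / total for l in lengths]
    return [sum(w * v for w, v in zip(weights, dims))
            for dims in zip(*vectors)]

def lng(pieces, vectors):
    """Longest-subword strategy (LNG): keep only the embedding of the
    longest piece."""
    longest = max(range(len(pieces)),
                  key=lambda i: len(pieces[i].lstrip("#")))
    return vectors[longest]

pieces = ["multi", "##ply"]            # BERT-style segmentation
vectors = [[1.0, 0.0], [0.0, 1.0]]     # toy 2-d embeddings
print(avg(vectors))                    # [0.5, 0.5]
print(wavg(pieces, vectors))           # [0.625, 0.375]  (weights 5/8, 3/8)
print(lng(pieces, vectors))            # [1.0, 0.0]
```

Note how LNG discards the second subword entirely, which is consistent with the lower performance it obtains in Table 7.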

Prediction and Evaluation
The similarity between two words or word instances is calculated as the cosine similarity between their representations. For experiments on SPLIT-SIM, the evaluation metric is Spearman's ρ.
For within-word experiments, we train a logistic regression classifier that uses the cosine between two word instance representations as its only feature. We evaluate the classifier based on its accuracy.
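A sketch of the prediction step: cosine similarity between two pooled representations, and, for the within-word task, a one-feature classifier. With a single feature, logistic regression effectively learns a similarity threshold; we show a fixed threshold here for simplicity rather than an actual fit.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def same_meaning(u, v, threshold=0.5):
    """One-feature classification sketch: in the paper this threshold is
    learned by a logistic regression trained on 0-SPLIT pairs."""
    return cosine(u, v) >= threshold

print(cosine([1.0, 0.0], [1.0, 0.0]))   # 1.0
print(cosine([1.0, 0.0], [0.0, 1.0]))   # 0.0
print(same_meaning([1.0, 0.2], [0.9, 0.3]))
```

The threshold value 0.5 is an arbitrary placeholder; as Section 5 shows, similarity distributions differ across split-types, so a single threshold is not equally appropriate for all of them.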

Results and Analysis
In this section we analyze the results obtained on the SPLIT-SIM (§5.1) and WiC (§5.2) datasets.

Inter-Word
We start with a look at the results of each method on each SPLIT-SIM subset as a whole. The rest of this section is organized around the main questions we aim to answer. Table 6 presents the correlations obtained by the different representation types and strategies on the full dataset. We report the highest correlation found across all layers. The best model on all subsets is clearly XLNet with the LNG or WAVG strategies. ELECTRA (with WAVG) is the second best on most subsets. Correlations obtained against FastText cosine similarities reflect, with few exceptions, the same tendencies observed in this section (results are presented in Appendix A).

What is the best strategy to represent split-words?
Table 7 shows the Spearman's correlations obtained by the different pooling methods on the three split-types. The best layer is selected separately for each split-type, model and strategy. We can see that the best strategy for each model tends to be stable across datasets. AVG is the preferred strategy overall, followed by WAVG, which, in ELECTRA and XLNet, performs almost on par with AVG. Using the longest subword (LNG) results in considerably lower performance across models and data subsets, presumably because some important information is excluded from the representation. CBERT obtains good results (comparable to or better than BERT's) on monosemous nouns (M-N), but on other kinds of words it generally lags behind.
FLOTA performance The use of the FLOTA tokenizer systematically decreases BERT's performance. We believe there are two main reasons behind this outcome. First, similarly to LNG, FLOTA sometimes omits parts of words. We investigate this by comparing its performance on pairs where both words were left complete (COM) to that on pairs where some word is incomplete (INCM). We present results in Table 8. We observe that, indeed, in most cases, performance is worse when parts of words are omitted. However, this is not the only factor at play, since performance on COM is still lower than when using BERT's default tokenizer. The second reason, we believe, is that FLOTA tokenization differs from the tokenization used in BERT's pre-training. FLOTA was originally evaluated on a supervised text classification task (Hofmann et al., 2022), while we do not fine-tune the model for similarity estimation with the new tokenization. Additionally, classification was done relying on a sequence-level token representation (e.g., [CLS] in BERT). It is possible that FLOTA tokenization provides an advantage when considering full sequences which does not translate into an improvement in the similarity between individual word token representations.
Given its poor results compared to BERT, in what follows, we omit FLOTA from our discussion.

Is performance on pairs involving split-words worse than on 0-SPLIT?
In Table 7 we can see that, as expected, in most subsets (M-N, M-V and P-V) performance is worse on pairs involving split-words. This is, however, not true of polysemous nouns (P-N), where the similarities obtained with all models are of better or comparable quality on 1- and 2-SPLIT pairs. With CBERT, performance on 2-SPLIT pairs is never significantly lower than on 0-SPLIT pairs.
Lower correlation of polysemous words Correlations obtained on polysemous words are overall lower than on monosemous words, particularly so in the 0-SPLIT case. Worse performance on polysemous words can be expected for two main reasons. First, WUP between polysemous words is determined as the maximum similarity attested over all their sense pairings, while cosine similarity takes into account all the contexts provided, as well as the accumulated lexical knowledge about the word contained in the representation. Second, the specific sense distribution found in the randomly selected contexts may also have an impact on the final results (particularly if, e.g., the sense relevant for the comparison is missing).
1-SPLIT vs 2-SPLIT Another interesting observation is that, in most cases, performance on 1-SPLIT pairs is lower than on 2-SPLIT pairs. We identify two main factors that explain this result.
One is the fact that in 1-SPLIT pairs, the two words are represented using different strategies (the plain representation vs {AVG|WAVG|LNG}). In fact, exceptions to this observation concern almost exclusively the LNG pooling strategy. LNG does not involve any arithmetic operation, which makes the representations of the split- and full-word in a 1-SPLIT pair more comparable to each other. Another explanation is the difference in frequency between the words (∆f), which tends to be larger in 1-SPLIT than in 0- and 2-SPLIT pairs. We explore this possibility in our frequency analysis below.
In the remaining inter-word experiments, we focus our observations on the better (and simpler) AVG strategy.

Frequency-related analysis
As explained in Section 3.1, frequency and word splitting are strongly related. The experiments presented in this section help us understand how the tendencies observed so far are linked to or affected by word frequency.

Controlling for frequency
The lower correlations obtained on 1- and 2-SPLIT pairs in most subsets could simply be due to the lower frequency of split-words, and not necessarily to the fact that they are split. To verify this, we evaluate the models' predictions on word pairs found in the balanced SPLIT-SIM. Results are presented in Table 9. When comparing 0-SPLIT pairs to pairs involving split-words, we observe the same tendencies as in the full version of SPLIT-SIM: for monosemous words and polysemous verbs, word splitting has a negative effect on word representations. There are, however, some differences in the significance of results, particularly in P-V, due in part to the much smaller sample size of this dataset.
It is important to note that split-types are strongly determined by word frequency. In natural conditions (i.e., without controlling for frequency), we expect to encounter the patterns found in Table 7.
The effect of ∆f In Table 9, we can see that, in a dataset with lower and better balanced ∆f values, 1-SPLIT pairs are no longer at a disadvantage and obtain results that are most of the time superior to those of 2-SPLIT pairs. We run an additional analysis to study the effect of different ∆f values. We divide the pairs in each subset and split-type according to whether their ∆f is below or above a threshold t = 0.25, ensuring that all sets compared have at least 100 pairs. Results, omitted for brevity, show that pairs with lower ∆f obtain almost systematically better results than those with higher ∆f. This confirms that a disparity in the frequency levels of the words compared also has a negative effect on similarity estimation.
The effect of frequency on similarity estimation To investigate how estimation quality varies with frequency, we divide the data in every subset and split-type into two sets, L (low) and H (high), based on individually determined frequency thresholds. Using different thresholds does not allow us to fairly compare across data subsets and split-types, but it ensures that both classes (L and H) are always well-represented and balanced. The frequency of a word pair is calculated as the average frequency of the two words in it. To prevent L and H from containing pairs of similar frequency, their thresholds are 0.25 apart. We only include pairs with a ∆f of at most 1. M-V is excluded from this analysis because of its small size. Table 10 (top section) shows the results of this analysis. Very often, correlations are higher on the sets of pairs with lower average frequency (L). This is surprising because, as explained in Section 2, rare words are typically problematic in NLP. Works investigating the representation of rare words in BERT, however, either test it through prompting (Schick and Schütze, 2020b), on "rarified" downstream tasks (Schick and Schütze, 2020a), or on word similarity but without providing contexts (Li et al., 2021b). We believe the observed result is due to a combination of multiple factors, both contextual and lexical. First, the contexts used to extract representations provide information about the word's meaning. If we compare results to a setting where words are presented without context (lower part of Table 10), the tendency is indeed softened, but not completely reversed, meaning that context alone does not fully explain this result. Lower-frequency words are also more often morphologically complex than higher-frequency ones. This is the case in our dataset.
In the case of split-words, morphological complexity may be an advantage that helps the model understand word meaning through word splitting. Another factor contributing to this result may be the degree of polysemy. We have seen in Table 7 that similarity estimation tends to be of better quality on monosemous words than on polysemous words. However, a definite explanation of the observed results would require additional analyses which are beyond the scope of this study.

Further Analysis
How do results change across layers for every split-type? Figure 2 shows BERT's AVG performance on each split-type of every subset across model layers. In M-N, M-V and P-V, we observe that at earlier layers the quality of the similarity estimates involving split-words is lower than that of 0-SPLIT pairs. However, as information advances through the network and the context is processed, their quality improves at a higher rate than that of 0-SPLIT, which remains more stable. This suggests that split-words benefit from the contextualization process taking place in the Transformer layers more than full-words do. This makes sense, since sub-tokens are highly ambiguous (i.e., they can be part of multiple words), so more context processing is needed for the model to represent their meaning well. In a similar vein, the initial advantage of 0-SPLIT pairs is more pronounced in monosemous words, which is expected, as context is less crucial for understanding their meaning. In P-N, the situation is different: 0-SPLIT pairs behave in a similar way to 1- and 2-SPLIT pairs from the very first layers. We verify whether this could be due to non-split polysemous nouns in P-N being particularly ambiguous. We obtain their number of senses, and we also check how many split-words in WordNet they are part of following BERT's tokenization (e.g., the word "station" is part of {station, ##ery}). These figures, however, are higher in P-V, so this hypothesis is not confirmed.
We also note that performance for the different split-types usually peaks at different layers. This highlights the need to carefully select the layer to use depending on the word's tokenization.
The same tendencies are observed with ELECTRA and XLNet. In CBERT, results are much more stable across layers.
Is a correct morphological segmentation important for the representations' semantic content? As explained in Section 2, the morphological awareness of a tokenizer has a positive effect on results in NLP tasks. Here we verify whether it is also beneficial for word similarity prediction. We use MorphoLex, a database containing morphological information (e.g., segmentation into roots and affixes) on 70,000 English words. We consider that a split-word in SPLIT-SIM is incorrectly segmented if one or more of the roots of the word have been split (e.g., saltshaker: {salts, ##hak, ##er}). We compare the performance on word pairs involving an incorrectly segmented word (INC) to that of pairs where the root(s) are fully preserved in both words (COR), regardless of whether the tokens containing the root contain other affixes (e.g., {marina, ##te}). Note that MorphoLex does not fully cover the vocabulary in SPLIT-SIM. We exclude M-V from this analysis because of the insufficient amount of known COR pairs (4 in 2-SPLIT following BERT's tokenization). All other comparisons involve at least 149 pairs. Results are presented in Table 11. They confirm that, in subword-based models, when tokenization aligns with morphology, representations are almost always of better quality than when it does not. The results obtained with CBERT, evaluated according to BERT's tokenization, highlight that the same set of INC pairs is not necessarily harder to represent than COR for a model that does not rely on subword tokenization.
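A minimal sketch of the segmentation check described above, under the assumption that a root counts as preserved when it appears intact inside a single subword (the helper and the roots used in the examples are illustrative, not the exact implementation or MorphoLex entries):

```python
def root_is_split(tokens, root):
    """True if `root` does not survive intact inside any single subword.
    tokens: BERT-style subword list, where '##' marks continuation pieces.
    The root may be embedded in a larger subword together with affixes,
    as in {marina, ##te} for the root 'marina'."""
    return not any(root in t.lstrip('#') for t in tokens)
```

A pair would then be labeled INC if `root_is_split` holds for any root of either word, and COR if every root of both words is preserved.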
Do similarity predictions vary across split-types? In Figure 3 we show the histogram of similarities calculated with BERT AVG using the best overall layer (cf. Table 6). We observe that similarity values are found in different, though overlapping, ranges depending on the split-type. 2-SPLIT pairs exhibit a clearly higher average similarity than 0- and 1-SPLIT pairs. Similarities in 1-SPLIT tend to be the lowest, but the difference is smaller. This does not correspond to the distribution of gold WUP similarities, which, due to our data collection process, does not differ across split-types. A possible partial explanation is that sub-token (##) representations are generally closer together because they share distributional properties. The same phenomenon is found in all models tested (ELECTRA, XLNet and CBERT), but is less pronounced in nouns in XLNet. This observation has important implications for similarity interpretation, and it discourages comparison across split-types even when considering words of the same degree of polysemy and PoS. A similarity score that may be considered high for one split-type may be just average for another.
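For reference, the split-type of a pair follows directly from the tokenizer output; a minimal sketch with hypothetical tokenizations:

```python
def split_type(tokens_w1, tokens_w2):
    """Number of words in the pair that were split into subwords:
    0 (both full-words), 1, or 2 (both split-words)."""
    return sum(len(t) > 1 for t in (tokens_w1, tokens_w2))
```

Grouping pairs by this value and plotting each group's predicted similarities gives histograms like those in Figure 3.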
Does the number of subwords have an impact on the representations' semantic content? We saw in Section 2 that oversplitting words has negative consequences on certain NLP tasks. We investigate the effect that the number of subwords has on similarity predictions. We start from the hypothesis that the more subwords a word is split into, the worse the performance will be. This is based on the intuition that shorter subwords are not able to encode as much lexical semantic information as longer ones. We count the total number of subwords in each word pair and re-calculate correlations separately on sets of word pairs with few (−) or many (+) subwords. In 1-SPLIT, "−" is defined as 3 subwords, and in 2-SPLIT as 5 or fewer. We make sure that every set contains at least 1,000 pairs. Results are presented in Table 12.
Our expectations are only met in about half of the cases, particularly in P-N. Surprisingly, similarity estimations from BERT tend to be more accurate when words are split into a larger number of tokens, even though the tokenization in + is more often morphologically incorrect than in −. Results from other models are mixed. Since only the first subword in a split-word is a full-token (i.e., it does not begin with ## in BERT), one difference between words split into few or many pieces is the ratio of full-tokens to sub-tokens. When using the AVG strategy, on "−" split-words the first subword (a full-token) has a large impact on the final representation, and this impact is reduced as the number of subwords increases. We investigate whether this difference has something to do with the results obtained with BERT.
To do so, we test two more word representation strategies: o1, where we omit the first subword (the full-token), and oL, where we omit the last subword (a sub-token). If mixing the two kinds of subwords (sub-tokens and full-tokens) is detrimental to the final representation, we expect o1 to obtain better results than oL. Results from these two strategies could be affected by the morphological structure of words in SPLIT-SIM (e.g., o1 could perform better than oL on words with a prefix). To control for this, we only run this analysis on word pairs consisting of two simplexes (according to MorphoLex). We exclude M-V because of the insufficient (< 100) amount of pairs available in each class.
Results of this analysis are shown in Table 13. In most cases, particularly in M-N, the o1 strategy, which excludes the only full-token in the word, obtains a better performance than oL. This suggests that, in the BERT model, the first token is less useful when building a representation. This is surprising, because English tends to place disambiguatory cues at the beginning of words (Pimentel et al., 2021), and because the first subword is often the longest one. The intuition that representations of longer tokens contain more semantic information is, thus, not confirmed.
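The pooling strategies compared here (AVG, o1, oL) can be sketched as follows (an illustrative implementation, not the exact code used in the experiments):

```python
import numpy as np

def pool(subword_vecs, strategy="AVG"):
    """Pool the subword vectors of a split-word into one word vector.
    subword_vecs: array of shape (n_subwords, dim), n_subwords >= 2.
    AVG: mean over all subwords; o1: omit the first subword (the only
    full-token); oL: omit the last subword (a sub-token)."""
    v = np.asarray(subword_vecs, dtype=float)
    if strategy == "AVG":
        return v.mean(axis=0)
    if strategy == "o1":
        return v[1:].mean(axis=0)
    if strategy == "oL":
        return v[:-1].mean(axis=0)
    raise ValueError(f"unknown strategy: {strategy}")
```

Since o1 and oL each drop exactly one subword, comparing them isolates the contribution of the full-token from that of a sub-token.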

Within-Word
In this section we present the results on the WiC dataset. In Table 14, we report the best accuracy obtained by every model on different split-types. We observe that the best performance is achieved on the full set of 2-SPLIT pairs (ALL). This can be explained by the label distribution in 2-SPLIT, where most pairs are of type T (cf. Table 5). We have seen in Section 5.1 that AVG representations for these pairs have higher similarity values, and we confirm this is the case, too, in the within-word setting (see Figure 4). In fact, in the case of BERT AVG, only 18 out of 97 F 2-SPLIT word pairs were correctly guessed. To have a fairer comparison with 0-SPLIT pairs, where labels are more balanced, we recalculate accuracy on 1- and 2-SPLIT pairs, randomly subsampling as many T pairs as the number of available F pairs (BAL). These results are shown in the same table. From them, we conclude that accuracy on 1- and 2-SPLIT pairs is actually lower than that on 0-SPLIT. This is not true of CBERT, however, which performs equally well across split-types and is the best option for 2-SPLIT pairs. As we can see in Figure 4, the similarities it assigns to 2-SPLIT are in a similar range to 0-SPLIT in this within-word setting.
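The BAL recalculation can be sketched as follows (a minimal illustration; `pairs` holds hypothetical gold-label/prediction tuples):

```python
import random

def balanced_accuracy(pairs, seed=0):
    """pairs: list of (gold, prediction) tuples with labels 'T'/'F'.
    Randomly subsample T pairs down to the number of F pairs, then
    compute accuracy on the balanced subset."""
    rng = random.Random(seed)
    t = [p for p in pairs if p[0] == 'T']
    f = [p for p in pairs if p[0] == 'F']
    t = rng.sample(t, len(f))           # balance the label distribution
    sub = t + f
    return sum(gold == pred for gold, pred in sub) / len(sub)
```

Because the subsample is random, in practice one would average over several resamplings; the sketch shows a single draw for simplicity.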
When it comes to the pooling strategy for representing split-words, AVG is still often the best one, but LNG also obtains good results. When comparing instances of the same word, contextual information is more important than word identity, so omitting part of a word does not have such a negative impact as in the inter-word setting.
In Table 15, we look at the results of AVG on the original data and when replacing target words with their lemmas (LM), separately on SAME vs DIFF pairs. There is a large gap in accuracy between SAME and DIFF 2-SPLIT pairs, with DIFF pairs obtaining worse results with all models tested except XLNet. 0-SPLIT pairs, on the contrary, are generally less affected by this parameter. While using the lemma is clearly helpful for 1-SPLIT pairs, it does not show a consistent pattern of improvement in the other split-types. We also observe that the average similarities for SAME pairs are higher than for DIFF pairs (e.g., BERT in 0-SPLIT: 0.62 (SAME), 0.54 (DIFF)).

Discussion
We have seen that, when examined separately, word pairs involving split-words often obtain worse-quality similarity estimations than those consisting of full-words; but this depends on the type of word: split polysemous nouns are better represented than non-split ones. This holds across the models and tokenizers tested, and also when evaluating on words in a narrower frequency range. This shows that word splitting has a negative effect on the representation of many words. We have also seen that in normal conditions, performance on 1-SPLIT is generally the worst, due mainly to a larger disparity in the frequencies of the words in a pair. Our analysis has also confirmed the hypothesis that words that are split in a way that preserves their morphology obtain better-quality similarity estimates than words where segmentation splits the word's root(s).
We have noted that similarities for the different split-types are found in different ranges; notably, similarities between two split-words tend to be higher than similarities in 0- and 1-SPLIT pairs. Naturally, this has an effect on the correlation calculated on the full dataset, which is lower than when considering each split-type separately. It would be interesting to develop a similarity measure that allows comparison across split-types, which could rely on information from the rest of the sentence, like BERTScore (Zhang et al., 2020). Another simple way to make similarities comparable would be to bring 2-SPLIT similarities into the 0-SPLIT similarity range by subtracting the average similarity value obtained in 0-SPLIT. The best value to use, however, may vary depending on the application.
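One way to implement the shift suggested above, under the interpretation that the quantity subtracted is the difference between the average 2-SPLIT and 0-SPLIT similarities, is:

```python
def calibrate(sims_2split, mean_2split, mean_0split):
    """Naive calibration sketch: shift 2-SPLIT similarity scores into
    the 0-SPLIT range by subtracting the difference between the two
    split-types' mean similarities."""
    offset = mean_2split - mean_0split
    return [s - offset for s in sims_2split]
```

This only recenters the distributions; as noted, the best correction may vary by application (and, presumably, by model and layer).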
One surprising finding relates to the impact of the number of subwords: similarity estimations are not always more reliable on words involving fewer tokens. This was especially the case for BERT, where we saw that the first token is generally the least useful in building a representation. Given the tendency for the first token to be the longest, this has put the other strategies tested (WAVG and LNG) at a disadvantage.
From our within-word experiments we confirm that word form is reflected in the representations and has a strong impact on similarity, but this does not necessarily mean that comparing words with distinct morphological properties (e.g., singular vs plural) would be detrimental in the inter-word setting. In the within-word setting, SAME pairs compare two equal word forms, whose representations at the initial (static) embedding layer are identical. DIFF pairs, instead, start off with different static embeddings, which results in an overall lower similarity. In SPLIT-SIM, all comparisons are made, by definition, between different words. The fact that two words have different morphological properties may thus have a smaller impact on results.
Most of our findings are consistent between the two kinds of task (inter- and within-word) and across models. One exception is CBERT, which does not assign higher similarities to 2-SPLIT pairs when comparing instances of the same word; another is the LNG strategy, which is more useful within-word than inter-word. AVG is, however, the best strategy overall. One direction for future work would be to find a pooling method that closes the gap in performance between split-types.
Our experiments involve only one language (English) and a single evaluation metric and similarity measure (Spearman's correlation and cosine similarity), although our methodology is not restricted to these choices. Extending this work to more languages is also possible, but less straightforward, due to the need for suitable datasets.

Conclusion
We have compared the contextualized representations of words that are segmented into subwords to those of words that have a dedicated embedding in BERT and other models. We have done so through an intrinsic evaluation relying on similarity estimation. Our findings are relevant for any NLP practitioner working with contextualized word representations, and particularly for applications relying on word similarity: (i) out of the tested strategies for split-word representation, averaging subword embeddings is the best one, with few exceptions; (ii) the quality of split-word representations is often worse than that of full-words, although this depends on the kind of words considered; (iii) similarity values obtained for split-word pairs are generally higher than similarity estimations involving full-words; (iv) the best layers to use differ across split-types; (v) a higher number of tokens does not necessarily, as intuitively thought, decrease representation quality; (vi) in the within-word setting, word form has a negative impact on results when words are split.
Our results also point to specific aspects to which future research and improvement efforts should be directed. We make our SPLIT-SIM dataset available.

A Results with FastText
We choose FastText as a control because of its good results on word similarity, and because it can generate embeddings for all words. 91.8% of all pairs in SPLIT-SIM have both words present in the FastText vocabulary. Table 16 contains the results. The main tendencies observed in Sections 5.1.1 and 5.1.2 are found in these results too: AVG is the best overall strategy, and predictions on 1- and 2-SPLIT pairs are almost consistently of lower quality than on 0-SPLIT pairs. We also observe a couple of discrepancies with respect to WUP: correlations are higher overall, which makes sense, as FastText is also a model that learns representations from text, and all models (including FastText) have been trained on Wikipedia data. Another important difference is the relative performance of 0-SPLIT and 2-SPLIT in P-N. While with WUP P-N is the only dataset where splitting words is not detrimental to similarity estimation, this is not the case with FastText. However, we note that the difference in performance between 0-SPLIT and 2-SPLIT is much smaller in P-N than in the other subsets.

Figure 1 :
Figure 1: Example of one of our settings, where we calculate the cosine similarity between the representations of an OOV word and a known word. We test different ways of creating one embedding for an OOV word (§4), such as AVG and LNG, on two similarity tasks (§3).

Figure 2 :
Figure 2: BERT AVG results by layer and split-type on every SPLIT-SIM subset.

Figure 3 :
Figure 3: Distribution of predicted similarity values by BERT (AVG) across split-types in SPLIT-SIM.

Figure 4 :
Figure 4: Average similarity values obtained on WiC (BAL) with the AVG strategy.

Table 1 :
Spearman's ρ between WUP similarity and human judgments from existing word similarity datasets.

Table 2 :
Composition of the SPLIT-SIM dataset (full and balanced versions) according to two different tokenizers.

Table 3 :
Example word pairs from SPLIT-SIM (M-N subset) with their BERT tokenization.

Table 4 :
Average frequencies in each SPLIT-SIM subset (BERT tokenization). Values are the base-10 logarithm of the number of times a word appears per billion words. For reference, the frequencies of can, dog, oatmeal and myxomatosis are 6.46, 5.10, 3.37 and 1.61.

Table 5 :
WiC statistics: number of word pairs of different types and number of unique lemmas with different tokenizers.

Table 8 :
Results obtained with FLOTA tokenization on pairs where words were fully preserved (COM) and where at least one word had a portion omitted (INCM).

Table 10 :
Results on pairs with low (L) and high (H) frequency using 10 (top) and no (bottom) contexts.

Table 11 :
Spearman's ρ (× 100) on pairs with an incorrectly segmented word (INC) and pairs where the root(s) of both words are preserved (COR).

Table 14 :
Accuracy obtained on WiC on the full subsets (ALL) and balancing T/F labels in 1- and 2-SPLIT (BAL). The best result per model and split-type in BAL subsets is boldfaced.

Table 15 :
Accuracy on WiC pairs with the SAME vs DIFF surface form.