Abstract
Complementary to finding good general word embeddings, an important question for representation learning is to find dynamic word embeddings, for example, across time or domain. Current methods do not offer a way to use or predict information on structure between sub-corpora, time or domain and dynamic embeddings can only be compared after post-alignment. We propose novel word embedding methods that provide general word representations for the whole corpus, domain-specific representations for each sub-corpus, sub-corpus structure, and embedding alignment simultaneously. We present an empirical evaluation on New York Times articles and two English Wikipedia datasets with articles on science and philosophy. Our method, called Word2Vec with Structure Prediction (W2VPred), provides better performance than baselines in terms of the general analogy tests, domain-specific analogy tests, and multiple specific word embedding evaluations as well as structure prediction performance when no structure is given a priori. As a use case in the field of Digital Humanities we demonstrate how to raise novel research questions for high literature from the German Text Archive.
1 Introduction
Word embeddings (Mikolov et al., 2013b; Pennington et al., 2014) are a powerful tool for word-level representation in a vector space that captures semantic and syntactic relations between words. They have been successfully used in many applications such as text classification (Joulin et al., 2016) and machine translation (Mikolov et al., 2013a). Word embeddings highly depend on their training corpus. For example, technical terms used in scientific documents can have a different meaning in other domains, and words can change their meaning over time—“apple” did not mean a tech company before Apple Inc. was founded. On the other hand, such local or domain-specific representations are also not independent of each other, because most words are expected to have a similar meaning across domains.
There are many situations where a given target corpus is considered to have some structure. For example, when analyzing news articles, one can expect that articles published in 2000 and 2001 are more similar to each other than the ones from 2000 and 2010. When analyzing scientific articles, uses of technical terms are expected to be similar in articles on similar fields of science. This implies that the structure of a corpus can be a useful side resource for obtaining better word representation.
Various approaches to analyze semantic shifts in text have been proposed where typically first individual static embeddings are trained and then aligned afterwards (e.g., Kulkarni et al., 2015; Hamilton et al., 2016; Kutuzov et al., 2018; Tahmasebi et al., 2018). As most word embeddings are invariant with respect to rotation and scaling, it is necessary to map word embeddings from different training procedures into the same vector space in order to compare them. This procedure is usually called alignment, for which orthogonal Procrustes can be applied as has been used in Hamilton et al. (2016).
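Post-hoc alignment via orthogonal Procrustes can be sketched as follows (a minimal NumPy illustration, not the exact code of the cited works): given two embedding matrices A and B whose rows correspond to the same vocabulary, the orthogonal map R minimizing ||AR − B||_F is R = UVᵀ, where UΣVᵀ is the SVD of AᵀB.

```python
import numpy as np

def procrustes_align(A, B):
    """Rotate embedding matrix A onto B with an orthogonal map.

    Solves min_R ||A R - B||_F subject to R^T R = I via the SVD of
    A^T B. Rows of A and B must correspond to the same vocabulary.
    """
    U, _, Vt = np.linalg.svd(A.T @ B)
    return A @ (U @ Vt)

# toy check: B is an exactly rotated copy of A, so alignment recovers it
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 8))
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal matrix
B = A @ Q
print(np.allclose(procrustes_align(A, B), B, atol=1e-8))  # prints True
```

In practice the two embedding spaces differ by more than a rotation, so the alignment is only approximate, which is one motivation for training jointly aligned embeddings instead.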
Recently, new methods to train diachronic word embeddings have been proposed where the alignment process is integrated in the training process. Bamler and Mandt (2017) propose a Bayesian approach that extends the skip-gram model (Mikolov et al., 2013b). Rudolph and Blei (2018) analyze dynamic changes in word embeddings based on exponential family embeddings. Yao et al. (2018) propose Dynamic Word2Vec where word embeddings for each year of the New York Times corpus are trained based on individual positive point-wise information matrices and aligned simultaneously.
We argue that apart from diachronic word embeddings there is a need to train dynamic word embeddings that not only capture temporal shifts in language but for instance also semantic shifts between domains or regional differences. It is important that those embeddings can be trained on small datasets. We therefore propose two generalizations of Dynamic Word2Vec. Our first method is called Word2Vec with Structure Constraint (W2VConstr), where domain-specific embeddings are learned under regularization with any kind of structure. This method performs well when a respective graph structure is given a priori. For more general cases where no structure information is given, we propose our second method, called Word2Vec with Structure Prediction (W2VPred), where domain-specific embeddings and sub-corpora structure are learned at the same time. W2VPred simultaneously solves three central problems that arise with word embedding representations:
Words in the sub-corpora are embedded in the same vector space, and are therefore directly comparable without post-alignment.
The different representations are trained simultaneously on the whole corpus as well as on the sub-corpora, which makes embeddings for both general and domain-specific words robust, due to the information exchange between sub-corpora.
The estimated graph structure can be used for confirmatory evaluation when a reasonable prior structure is given. W2VPred together with W2VConstr identifies the cases where the given structure is not ideal, and suggests a refined structure which leads to an improved embedding performance; we call this method Word2Vec with Denoised Structure Constraint. When no structure is given, W2VPred provides insights on the structure of sub-corpora, for example, similarity between authors or scientific domains.
All our methods rely on static word embeddings, as opposed to the now widely used contextualized word embeddings. Since we learn one representation per slice, such as a year or an author, and thus consider a much broader context than contextualized embeddings, we are able to find a meaningful structure between the corresponding slices. Another main advantage is that our methods do not require any pre-training and can be run on a single GPU.
We test our methods on 4 different datasets with different structures (sequences, trees, and general graphs), domains (news, wikipedia, high literature), and languages (English and German). We show on numerous established evaluation methods that W2VConstr and W2VPred significantly outperform baseline methods with regard to general as well as domain-specific embedding quality. We also show that W2VPred is able to predict the structure of a given corpus, outperforming all baselines. Additionally, we show robust heuristics to select hyperparameters based on proxy measurements in a setting where the true structure is not known. Finally, we show how W2VPred can be used in an explorative setting to raise novel research questions in the field of Digital Humanities. Our code is available at https://github.com/stephaniebrandl/domain-word-embeddings.
2 Related Work
Various approaches to track, detect, and quantify semantic shifts in text over time have been proposed (Kim et al., 2014; Kulkarni et al., 2015; Hamilton et al., 2016; Zhang et al., 2016; Marjanen et al., 2019).
This research is driven by the hypothesis that semantic shifts occur, for example, over time (Bleich et al., 2016) and viewpoints (Azarbonyad et al., 2017), in political debates (Reese and Lewis, 2009), or caused by cultural developments (Lansdall-Welfare et al., 2017). Analysing those shifts can be crucial in political and social studies but also in literary studies, as we show in Section 5.
Typically, methods first train individual static embeddings for different timestamps, and then align them afterwards (e.g., Kulkarni et al., 2015; Hamilton et al., 2016; Kutuzov et al., 2018; Devlin et al., 2019; Jawahar and Seddah, 2019; Hofmann et al., 2020; and a comprehensive survey by Tahmasebi et al., 2018). Other approaches, which deal with more general structure (Azarbonyad et al., 2017; Gonen et al., 2020) and more general applications (Zeng et al., 2017; Shoemark et al., 2019), also rely on post-alignment of static word embeddings (Grave et al., 2019). With the rise of larger language models such as BERT (Devlin et al., 2019) and, with that, contextualized embeddings, a part of the research question has shifted towards detecting language change in contextualized word embeddings (e.g., Jawahar and Seddah, 2019; Hofmann et al., 2020).
3 Methods
By generalizing DW2V, we propose two methods, one for the case where sub-corpora structure is given as prior knowledge, and the other for the case where no structure is given a priori. We also argue that combining both methods can improve the performance in cases where some prior information is available but not necessarily reliable.
3.1 Word2Vec with Structure Constraint
3.2 Word2Vec with Structure Prediction
The structure loss (6) with the similarity matrix W updated by Eqs. 7 and 8 constrains the distances between embeddings according to the similarity structure that is at the same time estimated from the distances between embeddings. We call this variant Word2Vec with Structure Prediction (W2VPred). Effectively, W serves as a weighting factor that strengthens connections between close embeddings.
3.3 Word2Vec with Denoised Structure Constraint
We propose a third method that combines W2VConstr and W2VPred for the scenario where W2VConstr results in poor word embeddings because the a priori structure is not optimal. In this case, we suggest applying W2VPred and consider the resulting structure as an input for W2VConstr. This procedure needs prior knowledge of the dataset and a human-in-the-loop to interpret the predicted structure by W2VPred in order to add or remove specific edges in the new ground truth structure. In the experiment section, we will condense the predicted structure by W2VPred into a sparse, denoised ground truth structure that is meaningful. We call this method Word2Vec with Denoised Structure Constraint (W2VDen).
3.4 Optimization
We solve the problem (5) iteratively for each embedding Ut, given the other embeddings {Ut′}t′≠t are fixed. We define one epoch as complete when {Ut} has been updated for all t. We applied gradient descent with Adam (Kingma and Ba, 2014), using the default values for the exponential decay rates given in the original paper and a learning rate of 0.1. The learning rate was reduced to 0.05 after 100 epochs and to 0.01 after 500 epochs, with a total number of 1000 epochs. Both models were implemented in PyTorch. W2VPred updates W by Eqs. 7 and 8 after every iteration.
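The learning-rate schedule described above can be written as a simple step function; the exact boundary behavior at epochs 100 and 500 is our reading of the text.

```python
def learning_rate(epoch):
    """Step schedule used with Adam: 0.1 initially, 0.05 after
    epoch 100, and 0.01 after epoch 500 (1000 epochs in total)."""
    if epoch < 100:
        return 0.1
    if epoch < 500:
        return 0.05
    return 0.01
```

Together with the block-coordinate scheme (updating each Ut in turn while holding the others fixed), this fully specifies the optimization loop.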
4 Experiments on Benchmark Data
We conducted four experiments starting with well-known settings and datasets and incrementally moving to new datasets with different structures. The first experiment focuses on the general embedding quality, the second one presents results on domain-specific embeddings, the third one evaluates the method’s ability to predict structure and the fourth one shows the method’s performance on various word similarity tasks. In the following subsections, we will first describe the data, preprocessing, and then the results. Further details on implementation and hyperparameters can be found in Appendix A.
4.1 Datasets
We evaluated our methods on the following three benchmark datasets.
New York Times (NYT):
The New York Times dataset1 (NYT) contains headlines, lead texts, and paragraphs of English news articles published online and offline between January 1990 and June 2016 with a total of 100,945 documents. We grouped the dataset by years with 1990-1998 as the train set and 1999-2016 as the test set.
Wikipedia Field of Science and Technology (WikiFoS):
We selected categories from the OECD’s list of Fields of Science and Technology2 and downloaded the corresponding articles from the English Wikipedia. The resulting dataset, Wikipedia Field of Science and Technology (WikiFoS), contains four clusters, each consisting of one main category and three subcategories, with 226,386 unique articles in total (see Table 1). We published the dataset at https://huggingface.co/datasets/millawell/wikipedia_field_of_science. Articles belonging to multiple categories3 were randomly assigned to a single category in order to avoid similarity due to overlapping texts rather than structural similarity. In each category, we randomly chose 1/3 of the articles as the train set, and the remaining 2/3 were used as the test set.
| Category | #Articles |
|---|---|
| Natural Sciences | 8536 |
| Chemistry | 19164 |
| Computer Science | 11201 |
| Biology | 10988 |
| Engineering & Technology | 20091 |
| Civil Engineering | 17797 |
| Electrical & Electronic Engineering | 6809 |
| Mechanical Engineering | 4978 |
| Social Sciences | 17347 |
| Business & Economics | 14747 |
| Law | 13265 |
| Psychology | 5788 |
| Humanities | 15066 |
| Literature & Languages | 24800 |
| History & Archaeology | 16453 |
| Religion & Philosophy & Ethics | 19356 |
Wikipedia Philosophy (WikiPhil):
Based on Wikipedia’s definition of categories in philosophy, we selected 5 main categories and their 2 largest subcategories each (see Table 2). Categories and subcategories are based on the definition given by Wikipedia. We downloaded 41,603 unique articles in total from the English Wikipedia. Similarly to WikiFoS, the articles belonging to multiple categories were randomly assigned to a single category, and the articles in each category were divided into a train set (1/3) and a test set (2/3).
| Category | #Articles |
|---|---|
| Logic | 3394 |
| Concepts in Logic | 1455 |
| History of Logic | 76 |
| Aesthetics | 7349 |
| Philosophers of Art | 30 |
| Literary Criticism | 3826 |
| Ethics | 5842 |
| Moral Philosophers | 170 |
| Social Philosophy | 3816 |
| Epistemology | 3218 |
| Epistemologists | 372 |
| Cognition | 8504 |
| Metaphysics | 1779 |
| Ontology | 796 |
| Philosophy of Mind | 976 |
4.2 Preprocessing
We lemmatized all tokens, that is, assigned their base forms with spaCy4 and grouped the data by years (for NYT) or categories (for WikiPhil and WikiFoS). For each dataset, we defined one individual vocabulary where we considered the 20,000 most frequent (lemmatized) words of the entire dataset that are also within the 20,000 most frequent words in at least 3 independent slices, that is, years or categories. This way, we filtered out “trend” words that are of significance only within a very short time period/only a few categories. The 100 most frequent words were filtered out as stop words. We set the symmetric context window (the number of words before and after a specific word considered as context for the PPMI matrix) to 5.
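The vocabulary-filtering rule above can be sketched as follows (a minimal illustration of the described rule, not the authors' code; tie-breaking among equally frequent words is an assumption):

```python
from collections import Counter

def build_vocab(slices, top_k=20000, min_slices=3, n_stopwords=100):
    """slices: list of token lists, one per year/category.

    Keep the top_k most frequent words of the whole corpus that are
    also among the top_k most frequent words in at least min_slices
    slices, then drop the n_stopwords globally most frequent words
    as stop words.
    """
    global_counts = Counter()
    per_slice_top = []
    for tokens in slices:
        c = Counter(tokens)
        global_counts.update(c)
        per_slice_top.append({w for w, _ in c.most_common(top_k)})

    global_top = [w for w, _ in global_counts.most_common(top_k)]
    stopwords = set(global_top[:n_stopwords])
    return [
        w for w in global_top
        if w not in stopwords
        and sum(w in s for s in per_slice_top) >= min_slices
    ]
```

This filters out "trend" words that are frequent only in a few slices while keeping words that are consistently frequent across the corpus.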
4.3 Ex1: General Embedding Performance
In our first experiment, we compare the quality of the word embeddings trained by W2VConstr and W2VPred with the embeddings trained by baseline methods, GloVe, Skip-Gram, CBOW and DW2V. For GloVe, Skip-Gram and CBOW, we computed one set of embeddings on the entire dataset. For DW2V, W2VConstr, and W2VPred, domain-specific embeddings {Ut} were averaged over all domains. We use the same vocabulary for all methods. For W2VConstr, we set the affinity matrix W as shown in the upper row of Figure 1, based on the a priori known structure, that is, diachronic structure for NYT, and the category structure in Tables 1 and 2 for WikiFoS and WikiPhil. The lower row of Figure 1 shows the learned structure by W2VPred.
Specifically, we set the ground-truth affinity as follows: for NYT, the affinity between slices t and t′ is nonzero if |t − t′| = 1, and zero otherwise; for WikiFoS and WikiPhil, it is nonzero if t is the parent category of t′ or vice versa, or if t and t′ are under the same parent category, and zero otherwise (see Tables 1 and 2 for the category structure of WikiFoS and WikiPhil, respectively, and the top row of Figure 1 for the visualization of the ground-truth affinity matrices).
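The two kinds of ground-truth affinity matrices can be sketched as follows (binary weights are an assumption; the paper's exact values for parent-child versus sibling connections may differ):

```python
import numpy as np

def chain_affinity(T):
    """Diachronic ground truth (NYT): adjacent years are connected."""
    W = np.zeros((T, T))
    for t in range(T - 1):
        W[t, t + 1] = W[t + 1, t] = 1.0
    return W

def tree_affinity(parent):
    """Category ground truth (WikiFoS/WikiPhil): parent-child and
    sibling slices are connected. parent[t] is the parent index of
    slice t, or None for a root."""
    T = len(parent)
    W = np.zeros((T, T))
    for t in range(T):
        for s in range(T):
            if t == s:
                continue
            if parent[s] == t or parent[t] == s:
                W[t, s] = 1.0  # parent-child edge
            elif parent[t] is not None and parent[t] == parent[s]:
                W[t, s] = 1.0  # siblings under the same parent
    return W
```

For NYT the resulting matrix is a symmetric band along the diagonal; for the Wikipedia datasets it is block-structured, matching the top row of Figure 1.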
We evaluate the embeddings on general analogies (Mikolov et al., 2013b) to capture the general meaning of a word. Table 3 shows the corresponding accuracies averaged across 10 runs with different random seeds.
For NYT, W2VConstr performs similarly to DW2V, which has essentially the same constraint term (LS in Eq. (6) for W2VConstr equals LD in Eq. (2) for DW2V up to scaling when W is set to the prior affinity matrix for NYT), and significantly outperforms the other baselines. W2VPred performs slightly worse than the best methods. For WikiFoS, W2VConstr and W2VPred outperform all baselines by a large margin. For WikiPhil, W2VConstr performs poorly (worse than GloVe), while W2VPred outperforms all other methods by a large margin. Standard deviations across the 10 runs are less than one for NYT (all methods and all n), slightly higher for WikiFoS, and highest for W2VConstr and W2VPred on WikiPhil (0.28-3.17).
These different behaviors can be explained by comparing the estimated (lower row) and the a priori given (upper row) affinity matrices in Figure 1. For NYT, the estimated affinity decays smoothly as the time difference between two slices increases. This implies that the a priori given diachronic structure is good enough to enhance the word embedding quality (for W2VConstr and DW2V), while estimating the affinity matrix (with W2VPred) slightly degrades the performance due to the larger number of unknown parameters to be estimated. For WikiFoS, the estimated affinity matrix is somewhat similar to the one given a priori, but less smooth than for NYT: instead of four clusters, we recognize two, consisting of the first two main categories (Natural Sciences and Engineering & Technology) and the last two (Social Sciences and Humanities), which is plausible given Table 1. In summary, W2VConstr and W2VPred outperform baseline methods when a suitable prior structure is given. Results on the WikiPhil dataset show a different tendency: The affinity estimated by W2VPred is very different from the prior structure, which implies that the corpus structure defined by Wikipedia is not suitable for learning word embeddings. As a result, W2VConstr performs even worse than GloVe. Overall, Table 3 shows that our proposed W2VPred performs robustly on all datasets. In Section 4.5.3, we further improve the performance by denoising the structure estimated by W2VPred for the case where a prior structure is not given or is unreliable.
4.4 Ex2: Domain-specific Embeddings
4.4.1 Quantitative Evaluation
Yao et al. (2018) introduced temporal analogy tests that allow us to assess the quality of word embeddings with respect to their temporal information. Unfortunately, domain-specific tests are only available for the NYT dataset. Table 4 shows temporal analogy test accuracies on the NYT dataset. As expected, GloVe, Skip-Gram, and CBOW perform poorly. We assume this is because the individual slices are too small to train reliable embeddings. The embeddings trained with DW2V and W2VConstr are learned collaboratively between slices due to the diachronic and structure terms and significantly improve the performance. Notably, W2VPred further improves the performance by learning a more suitable structure from the data. Indeed, the learned affinity matrix by W2VPred (see Figure 1a) suggests that not the diachronic structure used by DW2V but a smoother structure is optimal.
4.4.2 Qualitative Evaluation
Since no domain-specific analogy test is available for WikiFoS and WikiPhil, we qualitatively analyzed the domain-specific embeddings by inspecting nearest neighboring words. Table 5 shows the 5 nearest neighbors of the word “power” in the embedded spaces for the 4 main categories of WikiFoS trained by W2VPred, GloVe, and Skip-Gram. We averaged the embeddings obtained by W2VPred over the subcategories in each main category. Distances between words are measured by cosine similarity.
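Retrieving nearest neighbors by cosine similarity can be sketched as follows (a minimal NumPy illustration; the toy vocabulary and vectors in the test are invented):

```python
import numpy as np

def nearest_neighbors(word, vocab, emb, k=5):
    """Return the k words whose embeddings have the highest cosine
    similarity to `word`. `emb` is a (len(vocab), dim) float array."""
    idx = vocab.index(word)
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ normed[idx]   # cosine similarities to the query
    sims[idx] = -np.inf           # exclude the query word itself
    order = np.argsort(-sims)[:k]
    return [vocab[i] for i in order]
```

Running this query in each domain-specific embedding space yields a column of Table 5.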
| Nat. Sci | Eng&Tech | Soc. Sci | Hum | GloVe | Skip-Gram |
|---|---|---|---|---|---|
| generator | generator | powerful | powerful | control | Power |
| PV | inverter | control | control | supply | inverter |
| thermoelectric | alternator | wield | counterbalance | capacity | mover |
| inverter | converter | drive | drive | system | electricity |
| converter | electric | generator | supreme | internal | thermoelectric |
We see that W2VPred correctly captured the domain-specific meaning of “power”: In Natural Sciences and Engineering & Technology the word is used in a physical context, for example, in combination with generators, which is the closest word in both categories. In Social Sciences and Humanities, on the other hand, the nearest words are “powerful” and “control”, which, in combination, indicates that it refers to “the ability to control something or someone”.5 The embedding trained by GloVe shows a very general meaning of power with no clear tendency towards a physical or political context, whereas Skip-Gram shows a tendency towards the physical meaning. We observed many similar examples, for example, charge:electrical-legal, performance:quality-acting, resistance:physical-social, race:championship-ethnicity.
As another example, in the NYT corpus, Figure 2 shows the evolution of the word blackberry, which can mean either the fruit or the tech company. We selected the two slices (2000 & 2012) with the largest pairwise distance for blackberry, and chose the top-5 neighboring words from each year. The figure plots the cosine similarities between blackberry and the neighboring words. The time series shows how the word blackberry evolved from being mostly associated with the fruit towards the company, and back to the fruit. This can be connected to the release of the BlackBerry smartphone in 2002 and the decrease in sales numbers after 2011.6,7 Interestingly, the word apple stays relatively close during the entire time period, as its word vector, like that of blackberry, reflects both meanings, a fruit and a tech company.
4.5 Ex3: Structure Prediction
This subsection discusses the structure prediction performance by W2VPred. We first evaluate the prediction performance by using the a priori affinity structure as the ground-truth structure. The results of this experiment should be interpreted with care, because we have already seen in Section 4.3 that the given a priori affinity does not necessarily reflect the similarity structure of the slices in the corpus, in particular for WikiPhil. We then analyze the correlation between the embedding quality and the structure prediction performance by W2VPred, in order to evaluate the a priori affinity as the ground-truth in each dataset. Finally, we apply W2VDen which combines the benefits of both W2VConstr and W2VPred for the case where the prior structure is not suitable.
4.5.1 Structure Prediction Performance
Here, we evaluate the structure prediction accuracy of W2VPred with the a priori given affinity matrix D ∈ ℝ^(T×T) (shown in the upper row of Figure 1) as the ground truth. We report recall@k averaged over all domains.
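The recall@k evaluation can be sketched as follows (the exact per-slice averaging and tie-breaking are our reading of the description, not the authors' code):

```python
import numpy as np

def recall_at_k(W_pred, W_true, k):
    """For each slice, the fraction of its ground-truth neighbors
    that appear among the k most similar slices under W_pred,
    averaged over slices with at least one ground-truth neighbor."""
    T = W_pred.shape[0]
    scores = []
    for t in range(T):
        true_nb = set(np.flatnonzero(W_true[t] > 0)) - {t}
        if not true_nb:
            continue
        pred = W_pred[t].astype(float).copy()
        pred[t] = -np.inf                 # never count the slice itself
        top_k = set(np.argsort(-pred)[:k])
        scores.append(len(true_nb & top_k) / len(true_nb))
    return float(np.mean(scores))
```

A perfect prediction of the ground-truth structure yields recall@k = 1 once k reaches the maximum number of ground-truth neighbors per slice.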
We compare our W2VPred with Burrows’ Delta (Burrows, 2002) and other baseline methods based on the GloVe, Skip-Gram, and CBOW embeddings. Burrows’ Delta is a commonly used method in stylometrics to analyze the similarity between corpora, for example, for identifying the authors of anonymously published documents. The baseline methods based on GloVe, Skip-Gram, and CBOW simply learn the domain-specific embeddings separately, and the distances between the slices are evaluated by Eq. 4.
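Burrows' Delta in its standard formulation compares documents by the z-scores of the relative frequencies of the most frequent words; the Delta between two documents is the mean absolute difference of their z-scores. A minimal sketch:

```python
import numpy as np

def burrows_delta(freqs):
    """freqs: (n_docs, n_words) relative frequencies of the most
    frequent words across documents. Returns the (n_docs, n_docs)
    Delta matrix: mean absolute difference of column-wise z-scores."""
    z = (freqs - freqs.mean(axis=0)) / freqs.std(axis=0)
    n_docs = freqs.shape[0]
    delta = np.zeros((n_docs, n_docs))
    for i in range(n_docs):
        for j in range(n_docs):
            delta[i, j] = np.mean(np.abs(z[i] - z[j]))
    return delta
```

Low Delta values indicate stylistically similar slices, which is why the measure serves as a structure-prediction baseline here.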
Table 6 shows recall@k (averaged over ten trials). As in the analogy tests, the best methods are in gray cells according to the Wilcoxon test. We see that W2VPred significantly outperforms the baseline methods for NYT and WikiFoS. For WikiPhil, we will further analyze the affinity structure in the following section.
4.5.2 Assessment of Prior Structure
In the following, we reevaluate the aforementioned prior affinity matrix for WikiPhil (see Figure 1). To this end, we analyze the correlation between embedding quality and structure prediction performance and find that a suitable ground-truth affinity matrix is necessary to train good word embeddings with W2VConstr. We trained W2VPred with different parameter settings for (λ, τ) on the train set, and applied the global analogy tests and the structure prediction performance evaluation (with the prior structure as the ground truth). For λ and τ, we considered log-scaled parameters in the ranges [2^-2, 2^12] and [2^4, 2^12], respectively, and report correlation values on NYT, WikiFoS, and WikiPhil in Table 7.
In NYT and WikiFoS, we observe clear positive correlations between the embedding quality and the structure prediction performance, which implies that the estimated structure closer to the ground truth enhances the embedding quality. The Pearson correlation coefficients are 0.58 and 0.65, respectively (both with p < 0.05).
For WikiPhil, by contrast, Table 7 shows no clear positive correlation. Indeed, the Pearson correlation coefficient is even negative (−0.19), which implies that the prior structure for WikiPhil is not suitable and is even harmful for the word embedding performance. This result is consistent with the poor performance of W2VConstr on WikiPhil in Section 4.3.
4.5.3 Structure Discovery by W2VDen
The good performance of W2VPred on WikiPhil in Section 4.3 suggests that W2VPred has captured a suitable structure of WikiPhil. Here, we analyze the learned structure, and polish it with additional side information.
Figure 3 (left) shows the dendrogram of categories in WikiPhil obtained from the affinity matrix W learned by W2VPred. We see that the two pairs Ethics-Social Philosophy and Cognition- Epistemology are grouped together, and both pairs also belong to the same cluster in the original structure. We also see the grouping of Epistemologists, Moral Philosophers, History of Logic, and Philosophers of Art. This was at first glance surprising because they belong to four different clusters in the prior structure. However, looking into the articles revealed that this is a logical consequence from the fact that the articles in those categories are almost exclusively about biographies of philosophers, and are therefore written in a distinctive style compared to all other slices.
To confirm that the discovered structure captures the semantic sub-corpora structure, we defined a new structure for WikiPhil based on our findings above, shown in Figure 3 (right), and also a new structure for WikiFoS. A minor characteristic of the structure predicted by W2VPred, compared to the assumed structure, is that the sub-corpora Humanities and Social Sciences, as well as Natural Sciences and Engineering & Technology, are somewhat closer to each other than other combinations of sub-corpora, which is also intuitively plausible. We therefore connected each of these pairs at their root nodes and then applied W2VDen. The general analogy test performance of W2VDen is given in Table 3. For WikiFoS, the improvement is only slightly significant for n = 5 and n = 10 and not significant for n = 1, which implies that the structure we previously assumed for WikiFoS already works well. W2VDen is thus a general-purpose method that can be applied to any of the datasets, but it is especially useful when there is a mismatch between the assumed structure and the structure predicted by W2VPred. For WikiPhil, W2VDen further improves on W2VPred, which already outperforms all other methods by a large margin. The correlation between embedding quality and structure prediction performance (with the denoised estimated affinity matrix as the ground truth) is shown in Table 7. The Pearson correlation is still negative (−0.14), but no longer statistically significant (p = 0.11).
4.6 Ex4: Evaluation in Word Similarity Tasks
We further evaluate word embeddings on various word similarity tasks where human-annotated similarity between words is compared with the cosine similarity in the embedding space, as proposed in Faruqui and Dyer (2014). Table 8 shows the correlation coefficients between the human-annotated similarity and the embedding cosine similarity, where, again, the best method and the runner-ups (if not significantly outperformed) are highlighted.8 We observe that W2VPred outperforms the other methods on 7 out of 12 datasets for NYT, and W2VConstr on 8 out of 12 for WikiFoS. For WikiPhil, since we already know that W2VConstr with the given affinity matrix does not improve embedding performance, we instead evaluated W2VDen, which outperforms the other methods on 9 out of 12 datasets. In addition, W2VPred performs comparably to the best method across all experiments.
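The word similarity evaluation above is typically scored with the Spearman rank correlation between human ratings and cosine similarities. A minimal sketch (the rank-correlation helper ignores ties, and the toy data in the test is invented):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation (no tie handling, for illustration):
    Pearson correlation of the rank vectors of x and y."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

def word_similarity_score(pairs, human, vocab, emb):
    """pairs: list of (w1, w2); human: gold similarity ratings.
    Correlates human ratings with embedding cosine similarities."""
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    cos = [normed[vocab.index(a)] @ normed[vocab.index(b)]
           for a, b in pairs]
    return spearman(np.array(cos), np.array(human))
```

Higher scores indicate that the embedding geometry better reflects human similarity judgments.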
We also apply QVEC, which measures component-wise correlation between distributed word embeddings, as we use them throughout the paper, and linguistic word vectors based on WordNet (Fellbaum, 1998). High correlation values indicate high saliency of linguistic properties and thus serve as an intrinsic evaluation method that has been shown to highly correlate with downstream task performance (Tsvetkov et al., 2015). Results are shown in Table 9, where we observe that W2VConstr (as well as W2VDen for WikiPhil) outperforms all baseline methods, except CBOW in NYT, on all datasets, and W2VPred performs comparably with the best method.
4.7 Summarizing Discussion
In this section, we have shown a good performance of W2VConstr and W2VPred in terms of global and domain-specific embedding quality on news articles (NYT) and articles from Wikipedia (WikiFoS, WikiPhil). We have also shown that W2VPred is able to extract the underlying sub-corpora structure from NYT and WikiFoS.
On the WikiPhil dataset, the following observations implied that the prior sub-corpora structure, based on the Wikipedia’s definition, was not suitable for analyzing semantic relations:
Accordingly, we analyzed the learned structure by W2VPred, and further refined it by denoising with human intervention. Specifically, we analyzed the dendrogram from Figure 3, and found that 4 categories are grouped together that we originally assumed to belong to 4 different clusters. We further validated our reasoning by applying W2VDen with the structure shown in Figure 3 resulting in the best embedding performance (see Table 3).
This procedure poses an opportunity to obtain good global and domain-specific embeddings and to extract, or validate if given a priori, the underlying sub-corpora structure by using W2VConstr and W2VPred. Namely, we first train W2VPred, and also W2VConstr if prior structure information is available. If both methods similarly improve the embeddings in comparison with the methods without using any structure information, we acknowledge that the prior structure is at least useful for word embedding performance. If W2VPred performs well, while W2VConstr performs poorly, we doubt that the given prior structure is suitable, and update the learned structure by W2VPred. When no prior structure is given, we simply apply W2VPred to learn the structure.
We can furthermore refine the learned structure with side information, which results in a clean and human-interpretable structure. Here W2VDen is used to validate the new structure and to provide enhanced word embeddings. In our experiment on the WikiPhil dataset, the embeddings obtained this way significantly outperformed all other methods. The improvement over W2VPred is probably due to the fewer degrees of freedom of W2VConstr: once a reasonable structure is known, the embeddings can be trained more accurately with the fixed affinity matrix.
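The decision procedure above can be sketched as follows; the function name and scoring inputs are hypothetical stand-ins for the actual training runs and analogy-test evaluations:

```python
def choose_structure(score_pred, score_constr, score_baseline,
                     learned_structure, prior_structure=None):
    """Sketch of the structure-selection procedure described above.

    score_pred / score_constr / score_baseline: embedding quality
    (e.g., analogy-test accuracy) of W2VPred, W2VConstr, and a
    structure-free baseline such as plain Word2Vec.
    Returns the structure to use for further analysis.
    """
    if prior_structure is None:
        # No prior structure given: use the one learned by W2VPred.
        return learned_structure
    if score_pred > score_baseline and score_constr > score_baseline:
        # Both structured methods help: the prior structure is at least
        # useful for embedding performance, so keep it.
        return prior_structure
    if score_pred > score_baseline:
        # W2VPred helps but W2VConstr does not: doubt the prior
        # structure and adopt the learned one (optionally denoised and
        # validated with W2VDen).
        return learned_structure
    # Neither method helps: structure information seems uninformative.
    return None
```

In the WikiPhil case above, this procedure takes the middle branch: W2VPred performs well while W2VConstr does not, so the learned structure replaces the prior one.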
5 Application on Digital Humanities
We propose an application of W2VPred to the field of Digital Humanities, and develop an example more specifically related to Computational Literary Studies. In the renewal of literary studies brought about by the development and implementation of computational methods, questions of authorship attribution and genre attribution are key to formulating a structured critique of the classical design of literary history, and of Cultural Heritage approaches at large. In particular, the investigation of historical person networks, knowledge distribution, and intellectual circles has been shown to benefit significantly from computational methods (Baillot, 2018; Moretti, 2005). Hence, our method, with its capability to reveal connections between sub-corpora (such as authors' works), can be successfully applied to these types of research questions. Here, the use of quantitative and statistical models can lead to new, hitherto unfathomed insights. A corpus-based statistical approach to literature also entails a form of emancipation from literary history in that it makes it possible to shift perspectives, e.g., to reconsider established author-based or genre-based approaches.
To this end, we applied W2VPred to high literature texts (Belletristik) from the lemmatized versions of the DTA (German Text Archive), a corpus selection containing the 20 most represented authors of the DTA text collection for the period 1770-1900. We used W2VPred to predict the connections between these authors, with λ = 512 and τ = 1024 (the same setting as for WikiFoS).
As a measure of comparison, we extracted the year of publication as established by the DTA, identified the place of work for each author,9 and categorized each publication into one of three genre categories (ego document, verse, and fiction). Ego documents are texts written in the first person that document personal experience in their historical context. They include letters, diaries, and memoirs, and have gained momentum as a primary source in historical research and literary studies over the past decades. We created pairwise distance matrices for all authors based on the spatial, temporal, and genre information. The temporal distance was defined as the absolute difference between the average publication years, the spatial distance as the geodesic distance between the average coordinates of the work places of each author, and the genre distance as the cosine distance between the genre proportions of each author. For each author, we correlated linear combinations of this (normalized) spatio-temporal-genre prior knowledge with the structure found by our method, which we show in Figure 4.
Reference Dimensions
In this visualization we want to compare the pairwise distance matrix that our method predicted with the distance matrices that can be obtained by meta data available in the DTA corpus—the reference dimensions:
Temporal difference between authors. We collect the publication year of each title in the corpus and compute the average publication year for each author. The temporal distance between one author At1 and another author At2 is computed as |At1 − At2|, the absolute difference of the average publication years.
Spatial difference between authors. We query the German Integrated Authority File for the authors' different work places and extract them as longitude and latitude coordinates on the earth's surface. We compute the average coordinates for each author by converting the coordinates into the Cartesian system and taking the average in each dimension. Then, we convert the averages back into the latitude/longitude system. The spatial distance between two authors is computed as the geodesic distance as implemented in GeoPy.10
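Averaging coordinates through Cartesian space can be sketched as follows; the haversine function here is only an illustrative stand-in for GeoPy's more precise geodesic distance, which accounts for the earth's ellipsoidal shape:

```python
import math

def average_coordinates(coords):
    """Average (lat, lon) pairs by converting to Cartesian coordinates,
    averaging each dimension, and converting back, as described above."""
    xs, ys, zs = [], [], []
    for lat, lon in coords:
        la, lo = math.radians(lat), math.radians(lon)
        xs.append(math.cos(la) * math.cos(lo))
        ys.append(math.cos(la) * math.sin(lo))
        zs.append(math.sin(la))
    x, y, z = (sum(v) / len(v) for v in (xs, ys, zs))
    return (math.degrees(math.atan2(z, math.hypot(x, y))),
            math.degrees(math.atan2(y, x)))

def great_circle_km(p, q, radius_km=6371.0):
    """Haversine great-circle distance in km between two (lat, lon)
    points; a simple stand-in for GeoPy's geodesic distance."""
    la1, lo1 = map(math.radians, p)
    la2, lo2 = map(math.radians, q)
    h = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(h))
```

Going through Cartesian space avoids the pitfalls of averaging angles directly, e.g., near the 180° meridian.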
Genre difference between authors. We manually categorized each title in the corpus into one of the three categories ego document, verse, and fiction. A genre representation for an author is the relative frequency of the respective genre for that author. The distance between one author Ag1 and another author Ag2 is computed as 1 − (Ag1 · Ag2)/(‖Ag1‖‖Ag2‖), the cosine distance.
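A minimal sketch of the genre representation and the cosine distance between two such representations (the helper names are illustrative):

```python
import math

GENRES = ("ego document", "verse", "fiction")

def genre_vector(titles):
    """Relative frequency of each genre among an author's titles."""
    counts = [sum(1 for t in titles if t == g) for g in GENRES]
    total = sum(counts)
    return [c / total for c in counts]

def cosine_distance(u, v):
    """1 - cos(u, v), the distance used between genre representations."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)
```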
Calculating the Correlations
For each author t, we denote the predicted distance to all other authors as Xt ∈ ℝT−1, where T is the number of authors. Yt ∈ ℝ(T−1)×3 denotes the distances from author t to all other authors in the three meta data dimensions: space, time, and genre. For the visualization, we seek the coefficients of the linear combination of Y that has the highest correlation with X. To this end, Non-Negative Canonical Correlation Analysis with one component is applied, using the MIFSR algorithm as described by Sigg et al. (2007).11 The coefficients are normalized to comply with the sum-to-one constraint for projection onto the 2-simplex.
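The quantity being optimized can be illustrated with a brute-force substitute for the MIFSR-based non-negative CCA: search non-negative weights summing to one (the 2-simplex) that maximize the Pearson correlation between X and Y @ w. This grid search is a hypothetical stand-in, not the algorithm of Sigg et al. (2007):

```python
import numpy as np

def best_simplex_weights(X, Y, steps=20):
    """Grid search on the 2-simplex for the non-negative, sum-to-one
    weight vector w maximizing corr(X, Y @ w).

    X: (n,) predicted distances to the other authors.
    Y: (n, 3) spatial / temporal / genre distances to the other authors.
    """
    best_w, best_r = None, -np.inf
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            w = np.array([i, j, steps - i - j]) / steps
            proj = Y @ w
            if proj.std() == 0:
                continue  # constant projection: correlation undefined
            r = np.corrcoef(X, proj)[0, 1]
            if r > best_r:
                best_w, best_r = w, r
    return best_w, best_r
```

The resulting weights are exactly what Figure 4 places on the simplex: the relative importance of the spatial, temporal, and genre dimensions for each author.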
For many authors, the strongest correlation occurs with a mostly temporal structure, while fewer authors correlate most strongly with the spatial or the genre model. Börne and Laukhard, who have a similar spatial weight and thereby form a spatial cluster, both resided in France at the time. As our findings suggest, the impact of French literature and culture on Laukhard's and Börne's writing deserves attention.
For Fontane, we do not observe a notable spatial proportion, which is surprising because his sub-corpus mostly consists of ego documents describing the history and geography of the area surrounding Berlin, his workplace. However, in contrast to the other authors residing in Berlin, his style is much more similar to that of a travel story. In W2VPred's predicted structure, the closest neighbor of Fontane is, in fact, Pückler (with a distance of .052), who also wrote travel stories.
In the case of Goethe, the maximum correlation, attained at a purely spatio-temporal combination, is relatively low, and, interestingly, the highest disagreement between W2VPred and the prior knowledge is between Schiller and Goethe. The spatio-temporal model represents a close proximity; however, in the structure found by W2VPred, the two authors are much more distant. In this case, the spatio-temporal properties are not sufficient to fully characterize an author's writing, and the genre distribution may be skewed due to the incomplete selection of works in the DTA and the limitations of the labeling scheme: in the context of the 19th century, it is often difficult to distinguish between ego documents and fiction.
Nonetheless, we want to stress the importance of analyzing both where the linguistic representation and structure captured by W2VPred are in line with these properties and where they disagree. Both agreement and disagreement between the prior knowledge and the linguistic representation found by W2VPred can help identify the appropriate ansatz for a literary analysis of an author.
6 Conclusion
We proposed novel methods to capture domain-specific semantics, which is essential in many NLP tasks: Word2Vec with Structure Constraint (W2VConstr) trains domain-specific word embeddings based on prior information on the affinity structure between sub-corpora; Word2Vec with Structure Prediction (W2VPred) goes one step further and predicts the structure while simultaneously learning domain-specific embeddings. Both methods outperform baseline methods in benchmark experiments with respect to embedding quality and structure prediction performance. Specifically, we showed that the embeddings provided by our methods are superior in terms of global and domain-specific analogy tests, word similarity tasks, and the QVEC evaluation, which is known to correlate highly with downstream performance. The predicted structure is also more accurate than that of baseline methods, including Burrows' Delta. We further proposed and successfully demonstrated a procedure, Word2Vec with Denoised Structure Constraint (W2VDen), that uses both W2VConstr and W2VPred to cope with the case where the prior structure information is not suitable for enhancing embeddings. Overall, we showed the benefits of our methods regardless of whether (reliable) structure information is given. Finally, we demonstrated how to use W2VPred to gain insight into the relations between 19th century authors from the German Text Archive and how to raise further research questions for high literature.
Acknowledgments
We thank Gilles Blanchard for valuable comments on the manuscript. We further thank Felix Herron for his support in the data collection process. DL and SN are supported by the German Ministry for Education and Research (BMBF) as BIFOLD - Berlin Institute for the Foundations of Learning and Data under grants 01IS18025A and 01IS18037A. SB was partially funded by the Platform Intelligence in News project, which is supported by Innovation Fund Denmark via the Grand Solutions program and by the European Union under the Grant Agreement no. 10106555, FairER. Views and opinions expressed are those of the author(s) only and do not necessarily reflect those of the European Union or European Research Executive Agency (REA). Neither the European Union nor REA can be held responsible for them.
A Implementation Details
A.1 Ex1
All word embeddings were trained with d = 50.
GloVe
We run GloVe experiments with α = 100 and minimum occurrence = 25.
Skip-Gram, CBOW
We use the Gensim (Řehůřek and Sojka, 2010) implementation of Skip-Gram and CBOW with min_alpha = 0.0001 and sample = 0.001 to subsample frequent words; for Skip-Gram, we use 5 negative samples and ns_component = 0.75.
Parameter Selection
The parameters λ and τ for DW2V, W2VConstr, and W2VPred were selected based on the performance in the analogy tests on the train set. In order to flatten the contributions from the n nearest neighbors (for n = 1, 5, 10), we rescaled the accuracies: for each n, accuracies are scaled so that the best and the worst method map to 1 and 0, respectively. Then, we computed their average and maximum.
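The rescaling can be sketched as follows, with hypothetical accuracy values:

```python
import numpy as np

# Hypothetical analogy accuracies: rows = methods, columns = n in (1, 5, 10).
acc = np.array([[0.30, 0.45, 0.55],
                [0.20, 0.40, 0.50],
                [0.25, 0.50, 0.60]])

# For each n, rescale so the best method maps to 1 and the worst to 0.
scaled = (acc - acc.min(axis=0)) / (acc.max(axis=0) - acc.min(axis=0))

# Selection criteria: the average and the maximum over n for each method.
avg_score = scaled.mean(axis=1)
max_score = scaled.max(axis=1)
```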
Analogies
Each analogy consists of two word pairs (e.g., countryA - capitalA; countryB - capitalB). We estimate the vector for the last word by capitalA - countryA + countryB, and check whether capitalB is contained in the n nearest neighbors of the resulting vector.
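A minimal sketch of this analogy test with a toy embedding; all names and vectors are illustrative, and cosine similarity is assumed for the nearest-neighbor search:

```python
import numpy as np

def analogy_hit(emb, country_a, capital_a, country_b, capital_b, n=5):
    """Check whether capital_b is among the n nearest neighbors (by
    cosine similarity) of capital_a - country_a + country_b."""
    query = emb[capital_a] - emb[country_a] + emb[country_b]
    query = query / np.linalg.norm(query)
    sims = {}
    for word, vec in emb.items():
        if word in (country_a, capital_a, country_b):
            continue  # exclude the three query words, as is standard
        sims[word] = float(query @ (vec / np.linalg.norm(vec)))
    top_n = sorted(sims, key=sims.get, reverse=True)[:n]
    return capital_b in top_n
```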
A.2 Ex2
Temporal Analogies
Each of the two word pairs consists of a year and a corresponding term, for example, 2000 - Bush; 2008 - Obama, and we evaluate the accuracy of inferring the last word by vector operations on the first three tokens in the embedded space. To apply these analogies, GloVe, Skip-Gram, and CBOW are trained individually on each year with the same vocabulary as W2VPred (same parameters for GloVe as before, with minimum occurrence = 10). For the other methods, DW2V, W2VConstr, and W2VPred, we can simply use the embeddings obtained in Section 4.3. Note that the parameters τ and λ were optimized based on the general analogy tests.
A.3 Ex3
Burrows
Burrows' Delta compares normalized bag-of-words features of documents and sub-corpora, and provides a distance measure between them. Its parameters specify which word frequencies are taken into account. We found that considering the 100th to the 300th most frequent words gives the best structure prediction performance on the train set.
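A hedged sketch of Burrows' Delta in this setting; the exact feature extraction used in the paper may differ, but the standard formulation is the mean absolute difference of per-word z-scores:

```python
import numpy as np

def burrows_delta(freqs, i, j, lo=100, hi=300):
    """Burrows' Delta between documents i and j.

    freqs: (n_docs, n_words) relative word frequencies, with columns
    sorted by descending corpus frequency.  Following the setting
    above, only the 100th to 300th most frequent words are considered
    by default.
    """
    f = freqs[:, lo:hi]
    mu, sigma = f.mean(axis=0), f.std(axis=0)
    z = (f - mu) / np.where(sigma == 0, 1.0, sigma)  # guard zero std
    return float(np.abs(z[i] - z[j]).mean())
```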
Recall@k
W2VPred
Hyperparameters for W2VPred were selected on the train set where we maximized the accuracy on the global analogy test as before.
Notes
We removed the dataset VERB-143, since we use lemmatized tokens and therefore cover only a very small part of this corpus. We acknowledge that the human-annotated similarity is not domain-specific and therefore not optimal for evaluating the domain-specific embeddings. However, we expect that this experiment provides another perspective on embedding quality.
via the German Integrated Authority Files Service (GND) where available, adding missing data points manually.
We use ϵ = .00001.
References
Author notes
Action Editor: Jacob Eisenstein
Authors contributed equally.