Abstract
Transformers are arguably the main workhorse in recent natural language processing research. By definition, a Transformer is invariant with respect to reordering of the input. However, language is inherently sequential and word order is essential to the semantics and syntax of an utterance. In this article, we provide an overview and theoretical comparison of existing methods to incorporate position information into Transformer models. The objectives of this survey are to (1) showcase that position information in Transformer is a vibrant and extensive research area; (2) enable the reader to compare existing methods by providing a unified notation and systematization of different approaches along important model dimensions; (3) indicate what characteristics of an application should be taken into account when selecting a position encoding; and (4) provide stimuli for future research.
1 Introduction
The Transformer model as introduced by Vaswani et al. (2017) has been found to perform well for many tasks, such as machine translation or language modeling. With the rise of pretrained language models (Peters et al. 2018; Howard and Ruder 2018; Devlin et al. 2019; Brown et al. 2020), Transformer models have become even more popular. As a result they are at the core of many state-of-the-art natural language processing models. A Transformer model consists of several layers, or blocks. Each layer is a self-attention (Vaswani et al. 2017) module followed by a feed-forward layer. Layer normalization and residual connections are additional components of a layer.
A plain Transformer model is invariant with respect to reordering of the input. However, text data is inherently sequential. Without position information the meaning of a sentence is not well-defined—for example, compare the sequence “the cat chases the dog” to the multi-set { the, the, dog, chases, cat }. Clearly it should be beneficial to incorporate this essential inductive bias into any model that processes text data.
Therefore, there is a range of different methods to incorporate position information into Transformer models. Adding position information can be done by using position embeddings, manipulating attention matrices, or alternative methods such as preprocessing the input with a recurrent neural network. Overall, there is a large variety of methods that add absolute and relative position information to Transformer models. Similarly, many papers analyze and compare a subset of position embedding variants. But, to the best of our knowledge, there is no broad overview of relevant work on position information in Transformers that systematically aggregates and categorizes existing approaches and analyzes the differences between them.
This survey gives an overview of existing work on incorporating and analyzing position information in Transformer models. Concretely, we provide a theoretical comparison of over 30 Transformer position models and a systematization of different approaches along important model dimensions, such as the number of learnable parameters, and elucidate their differences by means of a unified notation. The goal of this work is not to identify the best way to model position information in Transformer but rather to analyze existing works, and identify common components and blind spots of current research efforts. In summary, we aim to
- (1)
showcase that position information in Transformer is a vibrant and extensive research area;
- (2)
enable the reader to compare existing methods by providing a unified notation and systematization of different approaches along important model dimensions;
- (3)
provide stimuli for future research.
2 Background
2.1 Notation
Throughout this article we denote scalars with lowercase letters x ∈ ℝ, vectors with boldface lowercase letters x ∈ ℝd, and matrices with boldface uppercase letters X ∈ ℝt×d. We index vectors and matrices as follows: (xi)i=1,2,…,d = x and (Xij)i=1,2,…,t, j=1,2,…,d = X. Further, the i-th row of X is the vector Xi ∈ ℝd. The transpose is denoted as XT. When referring to positions we use r,s,t,…, whereas we use i,j,… to denote components of a vector. The maximum sequence length is called tmax.
2.2 Transformer Model
When considering an input U = (u1,u2,…,ut) that consists of t units, such as characters, subwords, or words, the embeddings are created by a lookup in the embedding matrix E ∈ ℝn×d, with n being the vocabulary size. More specifically, Ui = Eui is the embedding vector that corresponds to the unit ui. The matrix U is then (among other inputs) used as input to the Transformer model. In the case that t is smaller or larger than tmax, the input sequence is padded, that is, filled with special pad symbols, or truncated.
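As a concrete illustration of this lookup-and-pad step, consider the following NumPy sketch; the vocabulary size, dimensions, and the pad_id are illustrative assumptions rather than values from any specific model.

```python
import numpy as np

n, d, t_max = 100, 8, 6    # vocabulary size, embedding dimension, maximum length (illustrative)
pad_id = 0                 # assumed id of the special pad symbol
E = np.random.randn(n, d)  # embedding matrix E ∈ R^{n×d}

def embed(unit_ids):
    """Pad or truncate the id sequence to t_max, then look up unit embeddings."""
    ids = unit_ids[:t_max] + [pad_id] * max(0, t_max - len(unit_ids))
    return E[np.array(ids)]  # U ∈ R^{t_max×d}; row i corresponds to unit u_i

U = embed([5, 17, 3])        # a short input: padded with pad symbols
print(U.shape)               # (6, 8)
```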
2.3 Order Invariance
If we take a close look at the Transformer model, we see that it is invariant to reordering of the input. More specifically, consider any permutation matrix Pπ ∈ {0,1}tmax×tmax. When passing PπX to a Transformer layer, the output is simply permuted as well, as permutation matrices are orthogonal, that is, PπTPπ is the identity matrix. All remaining operations are position-wise and thus PπT(X) = T(PπX) for any input X. As language is inherently sequential, it is desirable to have PπT(X)≠T(PπX), which can be achieved by incorporating position information.
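This invariance can be verified numerically. The sketch below implements a single-head self-attention layer (omitting multiple heads, residual connections, and layer normalization) and checks that permuting the input rows merely permutes the output rows.

```python
import numpy as np

rng = np.random.default_rng(0)
t, d = 5, 4
X = rng.normal(size=(t, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X):
    A = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))  # attention matrix A
    return A @ (X @ Wv)

P = np.eye(t)[rng.permutation(t)]                    # permutation matrix P_pi
# Without position information, permuting the input only permutes the output.
print(np.allclose(self_attention(P @ X), P @ self_attention(X)))  # True
```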
2.4 Encoder–Decoder
There are different set-ups for using a Transformer model. One common possibility is to have an encoder only. For example, BERT (Devlin et al. 2019) uses a Transformer model T(X) as encoder to perform masked language modeling. In contrast, a traditional sequence-to-sequence approach can be materialized by adding a decoder. The decoder works almost identically to the encoder with two exceptions: (1) The upper triangle of the attention matrix A is usually masked in order to block information flow from future positions during the decoding process. (2) The output of the encoder is integrated through a cross-attention layer inserted before the feed-forward layer. See Vaswani et al. (2017) for more details. The differences between an encoder and encoder–decoder architecture are mostly irrelevant for the injection of position information and many architectures rely just on encoder layers. Thus for the sake of simplicity we will talk about Transformer encoder blocks in general for the majority of the article. See §4.4 for position encodings that are tailored for encoder–decoder architectures.
3 Recurring Concepts in Position Information Models
Although there are a variety of approaches to integrate position information into Transformers, there are some recurring ideas, which we outline in this section. Based on these concepts we also provide a clustering of the considered position information models (see Table 1).
Table 1. Overview of the considered position information models.

| Data Structure | Model | Ref. Point | Inject. Met. | Learnable | Recurring | Unbound | #Param |
|---|---|---|---|---|---|---|---|
| Sequence | Transformer w/ emb. (Vaswani et al. 2017) | A | ape | ✔ | ✖ | ✖ | |
| | BERT (Devlin et al. 2019) | | | | | | |
| | Reformer (Kitaev, Kaiser, and Levskaya 2020) | | | | | | |
| | FLOATER (Liu et al. 2020) | A | ape | ✔ | ✔ | ✔ | 0 or more |
| | Shortformer (Press, Smith, and Lewis 2021) | A | ape | ✖ | ✔ | ✔ | 0 |
| | Wang et al. (2020) | A | – | ✔ | ✖ | ✔ | 2nd |
| | Shaw, Uszkoreit, and Vaswani (2018) (abs) | A | mam | ✔ | ✔ | ✖ | 2tmax²dl |
| | Shaw, Uszkoreit, and Vaswani (2018) (rel) | R | mam | ✔ | ✔ | ✖ | 2(2tmax − 1)dl |
| | T5 (Raffel et al. 2020) | | | | | | (2tmax − 1)h |
| | Huang et al. (2020) | | | | | | dlh(2tmax − 1) |
| | DeBERTa (He et al. 2021) | B | Both | ✔ | ✔ | ✖ | 3tmaxd |
| | Transformer XL (Dai et al. 2019) | R | mam | ✔ | ✔ | ✔ | 2d + d²lh |
| | TENER (Yan et al. 2019) | | | | | | 2dlh |
| | DA-Transformer (Wu, Wu, and Huang 2021) | | | | | | 2h |
| | TUPE (Ke, He, and Liu 2021) | B | mam | ✔ | ✖ | ✖ | 2d² + tmax(d + 2) |
| | RNN-Transf. (Neishi and Yoshinaga 2019) | R | – | ✔ | ✖ | ✔ | 6d² + 3d |
| | SPE (Liutkus et al. 2021) | R | mam | ✔ | ✔ | ✖ | 3Kdh + ld |
| | Transformer w/ sin. (Vaswani et al. 2017) | A | ape | ✖ | ✖ | ✔ | 0 |
| | Li et al. (2019) | | | | | | |
| | Takase and Okazaki (2019) | | | | | | |
| | Oka et al. (2020) | | | | | | |
| | Universal Transf. (Dehghani et al. 2019) | A | ape | ✖ | ✔ | ✔ | 0 |
| | DiSAN (Shen et al. 2018) | R | mam | ✖ | ✔ | ✔ | 0 |
| | Rotary (Su et al. 2021) | | | | | | |
| Tree | SPR-abs (Wang et al. 2019) | A | ape | ✖ | ✖ | ✔ | 0 |
| | SPR-rel (Wang et al. 2019) | R | mam | ✔ | ✖ | ✖ | 2(2tmax + 1)d |
| | TPE (Shiv and Quirk 2019) | A | ape | ✔ | ✖ | ✖ | |
| Graph | Struct. Transformer (Zhu et al. 2019) | R | mam | ✔ | ✔ | ✔ | 5d² + (d + 1)dr |
| | Graph Transformer (Cai and Lam 2020) | | | | | | 7d² + 3d |
| | Graformer (Schmitt et al. 2021) | R | mam | ✔ | ✔ | ✖ | 2(Dmax + 1)h |
| | Graph Transformer (Dwivedi and Bresson 2020) | A | ape | ✖ | ✖ | ✔ | 0 |
| | graph-bert (Zhang et al. 2020) | B | | | | | |
3.1 Reference Point: Absolute vs. Relative Position Encoding
Absolute position encodings directly encode the position of a unit within a sentence. Another approach is to encode the position of a unit relative to other units. This makes sense intuitively: in sentences like “The cat chased the dog.” and “Suddenly, the cat chased the dog.”, the change in absolute positions due to the added word “Suddenly” causes only a small semantic change, whereas the relative position of “cat” and “dog” is decisive for the meaning of the sentences. The advantage of relative position encoding is that it is invariant with respect to such shifts.
Despite this advantage, it has never been shown conclusively that relative position encoding outperforms an absolute one and thus both systems continue to co-exist—even in the most recent works (see Table 1). Learnable relative position embeddings do have the undeniable disadvantage that they have to consider twice as many different positions (for relative positions to the right and to the left of a word). Hence, in general, they need to train and store more parameters. Table 1 refers to this distinction as reference point.
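A toy computation illustrates the shift invariance described above; the sentences and indices are purely illustrative.

```python
# Absolute positions of "cat" and "dog" in the two example sentences.
sent_a = "the cat chased the dog".split()
sent_b = "suddenly the cat chased the dog".split()

abs_a = (sent_a.index("cat"), sent_a.index("dog"))   # (1, 4)
abs_b = (sent_b.index("cat"), sent_b.index("dog"))   # (2, 5): all absolute positions shifted
rel_a = abs_a[1] - abs_a[0]                          # 3
rel_b = abs_b[1] - abs_b[0]                          # 3: the relative offset is unchanged
print(abs_a, abs_b, rel_a == rel_b)
```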
3.2 Injection Method
As ape results in a modification of the attention matrix, ape and mam are highly interlinked. Still, we make a distinction between these two approaches for multiple reasons: (1) While adding position embeddings results, among other effects, in a modified attention matrix, mam only modifies the attention matrix. (2) ape involves learning embeddings for position information, whereas mam is often interpreted as adding or multiplying scalar biases to the attention matrix A (see Figure 2). (3) ape is often tied to individual positions, and interactions between two positions are computed by the model based on these parameters. In contrast, mam often directly models the interaction of two positions.
Note that methods using relative position encodings exclusively rely on mam rather than ape. Intuitively, this makes sense because relative position encodings consider pairs of positions and their relation to each other, and an attention matrix already models pairs of positions and their interaction. Thus, incorporating positional information into the attention matrix is a straightforward approach. Furthermore, ape relies on the fact that every input unit can be assigned a unique position embedding. As each unit has a different relative position to every other unit, ape is inherently incompatible with relative position information. Although absolute position information is, in principle, compatible with mam (see Shaw, Uszkoreit, and Vaswani 2018), there is also a strong correlation between absolute position encodings and ape. Presumably, this is because absolute position encodings consider only a single time step at a time, which makes it more intuitive to model position information at the unit level and directly assign position embeddings one unit at a time.
Table 1 distinguishes ape and mam position information models in its column injection method.
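The distinction can be made concrete with a schematic NumPy sketch (single attention head, arbitrary dimensions): ape adds a position embedding to the unit embeddings before the attention computation, whereas mam leaves the input untouched and instead adds, in this example, a scalar bias per relative distance to the attention logits.

```python
import numpy as np

rng = np.random.default_rng(0)
t, d = 6, 8
U = rng.normal(size=(t, d))           # unit embeddings
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

# --- ape: add (learned or sinusoidal) absolute position embeddings to the input ---
P = rng.normal(size=(t, d))           # one embedding per absolute position
X = U + P
A_ape = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))

# --- mam: leave the input untouched, modify the attention matrix instead ---
b = rng.normal(size=2 * t - 1)        # one scalar bias per relative distance in [-(t-1), t-1]
rel = np.arange(t)[None, :] - np.arange(t)[:, None]   # rel[s, r] = r - s
A_mam = softmax((U @ Wq) @ (U @ Wk).T / np.sqrt(d) + b[rel + t - 1])
```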
3.3 Recurring Integration
In theory, there are many possibilities for integrating position information into a Transformer model, but in practice the information is either integrated in the input, at each attention matrix, or directly before the output. When position information is added at the beginning, it only affects the first layer and thus has to be propagated to upper layers indirectly. Therefore, the more direct approach of reminding the model of position information in each layer seems preferable. Then again, this approach also denies the model the flexibility of choosing how strongly position information should influence word representations in higher layers.
Often, ape is only added at the beginning, and mam approaches are used for each layer and attention head. There is, however, no theoretical reason to pair these approaches in that manner. The recurring column in Table 1 marks those approaches where position information is added in each layer anew.
3.4 Fundamental Model Properties
Besides the aforementioned distinctions, we include three other properties per model in Table 1: (1) Learnable indicates whether the position information model is learned from data. (2) #Param provides the number of (trainable) parameters the position information model uses. (3) Unbound concerns the (theoretical) ability of a position information model to generalize beyond the longest input it has seen during training.
On the one hand, learnable position embeddings give the model more flexibility to adapt the position representations to the task. On the other hand, they also add parameters, which can lead to overfitting. For trainable position information models, there is a notable trend toward cutting the number of parameters while maintaining good performance; see, for example, Raffel et al. (2020). Whereas the number of parameters is a fundamental property of any machine learning model, the unbound property is specific to models handling position information. Although it is highly desirable for a model to be able to handle input of any length, bounded schemes often suffice in practice. Schemes that clip large position values, so that distant positions are no longer distinguished, are also considered bounded even though such a model can process inputs of any length.
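A minimal sketch of such a clipping scheme (the clipping radius k is an arbitrary choice): relative distances beyond ±k are mapped to the same bucket, so the embedding table stays bounded while inputs of any length can still be processed.

```python
import numpy as np

def clipped_relative_positions(t, k=4):
    """Map relative distances r - s to 2k + 1 buckets by clipping at ±k.

    Inputs longer than anything seen during training can still be processed,
    but distances beyond ±k are no longer distinguished."""
    rel = np.arange(t)[None, :] - np.arange(t)[:, None]
    return np.clip(rel, -k, k) + k        # indices into a (2k + 1)-row embedding table

print(clipped_relative_positions(3, k=1))
# [[1 2 2]
#  [0 1 2]
#  [0 0 1]]
```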
4 Current Position Information Models
In this section we provide an overview of current position information models. Note that we use the term position information model to refer to a method that integrates position information; the term position encoding refers to a position ID associated with units, for example, positions numbered from 0 to t or relative distances between units. A position embedding then refers to a numerical vector associated with a position encoding.
For the sake of clarity and easier reading, we structure this overview into subsections. To this end, we systematize position information models along two dimensions: reference point and topic. We chose reference point because it reflects a foundational design choice of each position information model. This dimension can take the values absolute, relative, or both. As the second dimension, we chose the prevalent topic of each paper, that is, generic, sinusoidal, graphs, decoder, crosslingual, or analysis. The objective of these categories is not to create a mutually exclusive or exhaustive classification. Given that each paper usually deals with multiple aspects of position information models and sometimes proposes multiple models, this would be challenging if not impossible. Rather, we want to guide the reader to find relevant papers quickly and thus decided on categorizing along topic, similar to keywords. For instance, a reader interested in encoder–decoder models or decoder-only models will find relevant papers under the topic “Decoder.” Conversely, papers listed under other topics might be applicable or relevant for decoders as well.
Table 2 shows which papers were assigned to which categories. The following sections deal with each topic and within each topic we discuss approaches with different reference points.
Table 2. Assignment of the discussed papers to topics and reference points.

| Topic | Absolute | Absolute & Relative | Relative |
|---|---|---|---|
| Generic | Devlin et al. (2019); Kitaev, Kaiser, and Levskaya (2020); Liu et al. (2020); Press, Smith, and Lewis (2021); Wang et al. (2020) | Shaw, Uszkoreit, and Vaswani (2018); Ke, He, and Liu (2021); Dufter, Schmitt, and Schütze (2020); He et al. (2021) | Dai et al. (2019); Raffel et al. (2020); Chang et al. (2021); Wu, Wu, and Huang (2021); Huang et al. (2020); Shen et al. (2018); Neishi and Yoshinaga (2019); Liutkus et al. (2021) |
| Sinusoidal | Vaswani et al. (2017); Dehghani et al. (2019); Li et al. (2019); Likhomanenko et al. (2021) | | Yan et al. (2019); Su et al. (2021) |
| Graphs | Shiv and Quirk (2019); Dwivedi and Bresson (2020) | Wang et al. (2019); Zhang et al. (2020) | Zhu et al. (2019); Cai and Lam (2020); Schmitt et al. (2021) |
| Decoder | Takase and Okazaki (2019); Oka et al. (2020); Bao et al. (2019) | | |
| Crossling. | Artetxe, Ruder, and Yogatama (2020); Ding, Wang, and Tao (2020); Liu et al. (2021a); Liu et al. (2021b) | | |
| Analysis | Yang et al. (2019); Wang and Chen (2020) | Rosendahl et al. (2019); Wang et al. (2021); Chen et al. (2021) | |
4.1 Generic
In this section we present the first topical cluster, called Generic. The papers discussed here exhibit a great variety, ranging from learned absolute position embeddings in the original Transformer paper (Vaswani et al. 2017) through complex-valued embeddings (Wang et al. 2020) to adding a recurrent neural network layer before the Transformer (Neishi and Yoshinaga 2019). These works do not stand out through a particular mathematical characteristic, such as using sinusoidal functions, or a specific theme, such as focusing on encoding graph structures, and therefore do not fit into one of the other categories. Thus, this section contains the most fundamental and original position information models as well as later ones that are equally general for processing sequential data structures. We first describe papers dealing with absolute position encodings, followed by methods that deal with relative ones.
4.1.1 Absolute Position Encodings
The original Transformer paper considered absolute position encodings. One of the two approaches proposed by Vaswani et al. (2017) follows Gehring et al. (2017) and learns a position embedding matrix corresponding to the absolute positions 1,2,…,tmax − 1,tmax in a sequence. This matrix is simply added to the unit embeddings U before they are fed to the Transformer model (ape).
In the simplest case, the position embeddings are randomly initialized and then adapted during training of the network (Gehring et al. 2017; Vaswani et al. 2017; Devlin et al. 2019). Gehring et al. (2017) find that adding position embeddings only helps marginally in a convolutional neural network. A Transformer model without any position information, however, performs much worse for some tasks—see for example Wang et al. 2019, Wang et al. 2021.
4.1.2 Relative Position Encodings
Chang et al. (2021) provide a theoretical link between the position information models proposed by Shaw, Uszkoreit, and Vaswani (2018) and Raffel et al. (2020) and convolutions. They find that combining these two relative position information models increases performance on natural language understanding tasks.
On several GLUE tasks (Wang et al. 2018) they find that the last two methods perform best.
The next approach is not directly related to relative position encodings, but it can be interpreted as using relative position information. Shen et al. (2018) propose Directional Self-Attention Networks (DiSAN). Besides other differences to plain self-attention, such as multidimensional attention, they notably mask out the upper/lower triangular matrix or the diagonal in A to achieve non-symmetric attention matrices. Allowing attention only in a specific direction does not add position information directly, but still makes the attention mechanism position-aware to some extent by enabling the model to distinguish directions.
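The following sketch illustrates the directional masking idea only; DiSAN’s multi-dimensional attention and other components are omitted.

```python
import numpy as np

def directional_mask(t, direction="forward"):
    """Return an additive mask that blocks attention in one direction.

    "forward": position s may only attend to positions r <= s (upper triangle blocked);
    "backward": position s may only attend to positions r >= s (lower triangle blocked)."""
    m = np.zeros((t, t))
    if direction == "forward":
        m[np.triu_indices(t, k=1)] = -np.inf
    else:
        m[np.tril_indices(t, k=-1)] = -np.inf
    return m

scores = np.random.randn(4, 4)
masked = scores + directional_mask(4, "forward")   # apply before the softmax
```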
Neishi and Yoshinaga (2019) argue that recurrent neural networks (RNN) in the form of gated recurrent units (GRU) (Cho et al. 2014) are able to encode relative positions. Thus they propose to replace position encodings by adding a single GRU layer on the input before feeding it to the Transformer (see Figure 5). With their model, called RNN-Transformer, they observe performance comparable to position embeddings; for longer sequences, however, the GRU yields better performance. Combining their approach with the method by Shaw, Uszkoreit, and Vaswani (2018) improves performance further, a method they call RR-Transformer.
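A schematic PyTorch version of this idea is shown below; the hyperparameters and module choices are illustrative and do not reproduce the exact configuration of Neishi and Yoshinaga (2019).

```python
import torch
import torch.nn as nn

class RNNTransformer(nn.Module):
    def __init__(self, vocab_size=1000, d=256, n_heads=4, n_layers=6):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)
        self.gru = nn.GRU(d, d, batch_first=True)      # replaces explicit position encodings
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, unit_ids):                        # unit_ids: (batch, t)
        x, _ = self.gru(self.emb(unit_ids))             # GRU hidden states carry order information
        return self.encoder(x)                          # no position embeddings added

model = RNNTransformer()
out = model(torch.randint(0, 1000, (2, 10)))            # (2, 10, 256)
```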
Relative position information models usually require the computation of the full attention matrix A because each cell depends on a different relative position interaction. Liutkus et al. (2021) propose an alternative called Stochastic Positional Encoding (SPE). By approximating relative position interactions as cross-covariance structures of correlated Gaussian processes, they make relative position encodings available to linear-complexity Transformers, such as the Performer (Choromanski et al. 2021), which do not compute the full attention matrix, as this would lead to quadratic complexity. Liutkus et al. (2021) describe two variants of SPE: sineSPE, which combines K learned sinusoidal components, and convSPE, which learns convolutional filters. Notably, they also propose a gating mechanism that controls with a learnable parameter how much the attention in each vector dimension depends on content vs. position information. The description of SPE in Table 1 is based on gated sineSPE. The experiments in Liutkus et al. (2021) show that SPE leads to performance improvements over absolute position encodings for tasks involving long-range dependencies (Tay et al. 2021b).
4.2 Sinusoidal
Likhomanenko et al. (2021) introduce continuous augmented positional embeddings and focus on making sinusoidal position embeddings work for other modalities such as vision or speech. More specifically, they propose converting discrete positions to a continuous range and suggest noise augmentations to prevent the model from picking up spurious correlations. Instead of using the position t in sinusoidal position embeddings, they create t′ using mean normalization followed by a series of three random augmentations: (1) global shift t′ = t + Δ, (2) local shift t′ = t + ϵ, and (3) global scaling t′ = λt, where Δ, ϵ, and λ are sampled from uniform distributions. Note that during inference only mean normalization is performed. As expected, they find their model to work well on vision and speech data. On natural language, it performs on par with sinusoidal position embeddings, with minor improvements, as measured on machine translation.
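The following sketch illustrates the augmentation pipeline; the sampling ranges for Δ, ϵ, and λ are placeholders rather than the hyperparameters used by Likhomanenko et al. (2021), and the sinusoidal embedding follows the standard formulation of Vaswani et al. (2017).

```python
import numpy as np

def sinusoidal(pos, d):
    """Standard sinusoidal embedding for (possibly continuous) positions."""
    i = np.arange(d // 2)
    angles = pos[:, None] / 10000 ** (2 * i / d)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def augmented_positions(t, train=True):
    pos = np.arange(t, dtype=float)
    pos = pos - pos.mean()                               # mean normalization (always applied)
    if train:                                            # augmentations only during training
        pos = pos + np.random.uniform(-5, 5)             # (1) global shift, Δ (placeholder range)
        pos = pos + np.random.uniform(-0.5, 0.5, size=t) # (2) local shift, ϵ per position
        pos = pos * np.random.uniform(0.8, 1.2)          # (3) global scaling, λ
    return pos

emb = sinusoidal(augmented_positions(10), d=64)          # (10, 64) augmented position embeddings
```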
4.3 Graphs
In the following section, we will take a look at position information models for graphs—more specifically, cases where Transformers have been used for genuine graph input as well as cases where the graph is used as a sentence representation (e.g., a dependency graph). We distinguish two types of graph position models according to the assumptions they make about the graph structure: positions in hierarchies (trees) and arbitrary graphs.
4.3.1 Hierarchies (Trees)
Wang et al. (2019) propose structural position representations (SPR) (see Figure 7). This means that instead of treating a sentence as a sequence of information, they perform dependency parsing and compute distances on the parse tree (dependency graph).1 We can distinguish two settings: (1) Analogously to absolute position encodings in sequences, where unit ut is assigned position t, absolute SPR assigns ut the position abs(ut) := dtree(ut, root), where root is the root of the dependency tree, that is, the main verb of the sentence, and dtree(x,y) is the path length between x and y in the tree. (2) For the relative SPR between the units ut and us, they define rel(ut,us) = abs(ut) − abs(us) if ut is on the path from us to the root or vice versa. Otherwise, they use rel(ut,us) = sgn(t − s)(abs(ut) + abs(us)). So SPR not only assumes the presence of a graph hierarchy but also needs a strict order to be defined on the graph nodes, because rel equally encodes sequential relative position. This makes SPR a suitable choice for working with dependency graphs but renders SPR incompatible with other tree structures.
Having defined the position of a node in a tree, Wang et al. (2019) inject their SPR via sinusoidal ape for absolute and via learned embeddings in combination with mam for relative positions. It is noteworthy that Wang et al. (2019) achieve their best performance by combining both variants of SPR with sequential position information and that SPR as sole sentence representation, that is, without additional sequential information, leads to a large drop in performance.
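The two SPR variants can be computed with a few lines of code; the dependency tree below is a hand-made toy example, and parsing itself is outside the scope of the sketch.

```python
# Toy dependency tree for "the cat chased the dog": head index per unit, -1 marks the root.
heads = {0: 1, 1: 2, 2: -1, 3: 4, 4: 2}   # "chased" (index 2) is the root

def abs_spr(t):
    """Absolute SPR: tree distance to the root."""
    depth = 0
    while heads[t] != -1:
        t, depth = heads[t], depth + 1
    return depth

def on_root_path(a, b):
    """Is a an ancestor of b (or vice versa)?"""
    def ancestors(x):
        out = {x}
        while heads[x] != -1:
            x = heads[x]
            out.add(x)
        return out
    return a in ancestors(b) or b in ancestors(a)

def rel_spr(t, s):
    """Relative SPR as defined by Wang et al. (2019)."""
    if on_root_path(t, s):
        return abs_spr(t) - abs_spr(s)
    sign = 1 if t > s else -1
    return sign * (abs_spr(t) + abs_spr(s))

print(abs_spr(1), rel_spr(1, 4))   # depth of "cat"; relative SPR between "cat" and "dog"
```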
Shiv and Quirk (2019) propose alternative absolute tree position encodings (TPE). They draw inspiration from the mathematical properties of sinusoidals but do not use them directly like Wang et al. (2019). Also unlike SPR, their position encodings consider the full path from a node to the root of the tree and not only its length, thus assigning every node a unique position. This is more in line with the spirit of absolute sequential position models (§4.1.1). The first version of TPE is parameter-free: The path from the root of a k-ary tree to some node is defined as the individual decisions that lead to the destination, that is, which of the k children is the next to be visited at each intermediate step. These decisions are encoded as one-hot vectors of size k. The whole path is simply the concatenation of these vectors (padded with 0s for shorter paths). In a second version, multiple instances of parameter-free TPE are concatenated and each one is weighted with a different learned parameter. After scaling and normalizing these vectors, they are added to the unit embeddings before the first Transformer layer (ape).
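A sketch of the parameter-free variant (the branching factor and maximum depth are illustrative):

```python
import numpy as np

def tree_position_encoding(path, k, max_depth):
    """Parameter-free TPE: one-hot encode each branching decision on the
    root-to-node path and concatenate, padding shorter paths with zeros.

    path: sequence of child indices in [0, k), e.g. (1, 0) = "second child, then its first child".
    """
    enc = np.zeros((max_depth, k))
    for depth, child in enumerate(path):
        enc[depth, child] = 1.0
    return enc.reshape(-1)                 # vector of size max_depth * k

print(tree_position_encoding((1, 0), k=2, max_depth=3))
# [0. 1. 1. 0. 0. 0.]
```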
4.3.2 Arbitrary Graphs
In contrast to the other approaches, Graformer explicitly models disconnected graphs (∞) and does not add any sequential position information. Unfortunately, Schmitt et al. (2021) do not evaluate Graformer on the same tasks as the other discussed approaches, which makes a performance comparison difficult.
All the approaches discussed so far have in common that they allow any node to compute attention over the complete set of nodes in the graph—similar to the global self-attention over tokens in the original Transformer—and that they inject the graph structure solely via a relative position encoding. Dwivedi and Bresson (2020) restrict attention in their graph Transformer to the local node neighborhood and therefore do not need to capture the graph structure by defining the relative position between nodes. Instead, they use an absolute ape model by adding Laplacian eigenvectors to the node embeddings before feeding them to the Transformer encoder. Just as sinusoidal position embeddings depend only on the (unchanging) order of words, Laplacian eigenvectors depend only on the (unchanging) graph structure. Thus, these position embeddings are parameter-free and can be precomputed for efficient processing. Again, however, an empirical comparison is impossible because Dwivedi and Bresson (2020) evaluate their model on node classification and graph regression, whereas the approaches discussed above are tested on graph-to-text generation.
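A minimal NumPy sketch of such Laplacian position embeddings on a toy undirected graph; further details of the original model (e.g., the handling of eigenvector sign ambiguity) are omitted.

```python
import numpy as np

def laplacian_position_embeddings(adj, k):
    """Return the k eigenvectors of the graph Laplacian with the smallest
    non-trivial eigenvalues; one k-dimensional position vector per node."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                                  # (unnormalized) graph Laplacian
    eigval, eigvec = np.linalg.eigh(lap)             # eigenvalues in ascending order
    return eigvec[:, 1:k + 1]                        # drop the trivial constant eigenvector

adj = np.array([[0, 1, 0, 0],                        # a small path graph 0-1-2-3
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
node_pos = laplacian_position_embeddings(adj, k=2)   # added to the node embeddings (ape)
```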
A parameter-free approach is described by Zhang et al. (2020). In their pretraining based on linkless subgraph batching, they combine different features of each node, both predefined information (such as node labels) and structural information (such as shortest path lengths), translate them to integers (the position encoding), and finally map them to real numbers via sinusoidal position embeddings (see §4.2). The final graph-bert model takes the overall sum as its input (ape).
4.4 Decoding
Bao et al. (2019) propose to predict the positions of word units in the decoder in order to allow for effective non-autoregressive decoding (see Figure 9). More specifically, they predict the target sentence length and a permutation from the decoder inputs and subsequently reorder the position embeddings in the decoder according to the predicted permutation. Their model, called PNAT, achieves performance improvements in machine translation.
4.5 Crosslingual
Unit order differs considerably across languages. English uses a subject–verb–object (SVO) ordering, but all possible orderings of S, V, and O have been argued to occur in the world’s languages. Also, whereas unit order is rather fixed in English, it varies considerably in other languages, for example, in German. This raises the question of whether it is useful to share position information across languages.
Per default, position embeddings are shared in multilingual models (Devlin et al. 2019; Conneau et al. 2020). Artetxe, Ruder, and Yogatama (2020) observe mixed results with language-specific position embeddings in the context of transferring monolingual models to multiple languages: for most languages it helps, but for some it seems harmful. They experimented with learned absolute position embeddings as proposed in Devlin et al. (2019).
Ding, Wang, and Tao (2020) use crosslingual position embeddings (XL PE): In the context of machine translation, they obtain reorderings of the source sentence and subsequently integrate both the original and reordered position encodings into the model and observe improvements on the machine translation task.
Liu et al. (2021a) find that position information hinders zero-shot crosslingual transfer in the context of machine translation. They remove a residual connection in a middle layer to break the propagation of position information, and thereby achieve large improvements in zero-shot translation.
Similarly, Liu et al. (2021b) find that unit order information harms crosslingual transfer, for example in a zero-shot transfer setting. They reduce position information by (1) removing the position embeddings, and replacing them with one-dimensional convolutions, that is, leveraging only local position information, (2) randomly shuffling the unit order in the source language, and (3) using position embeddings from a multilingual model and freezing them. Indeed they find that reducing order information with these three methods increases performance for crosslingual transfer.
4.6 Analysis
There is a range of work comparing and analyzing position information models. Rosendahl et al. (2019) analyze them in the context of machine translation. They find similar performance for absolute and relative encodings, but relative encodings are superior for long sentences. In addition, they find that the number of learnable parameters can often be reduced without performance loss.
Yang et al. (2019) evaluate the ability of recovering the original word positions after shuffling some input words. In a comparison of recurrent neural networks, Transformer models, and DiSAN (both with learned position embeddings), they find that RNN and DiSAN achieve similar performance on the word reordering task, whereas Transformer is worse. However, when trained on machine translation, Transformer performs best in the word reordering task.
Wang and Chen (2020) provide an in-depth analysis of what position embeddings in large pretrained language models learn. They compare the embeddings from BERT (Devlin et al. 2019), RoBERTa (Liu et al. 2019), GPT-2 (Radford et al. 2019), and sinusoidal embeddings. See Figure 10 for their analysis.
More recently, Wang et al. (2021) present an extensive analysis of position embeddings. They empirically compare 13 variants of position embeddings. Among other findings, they conclude that absolute position embeddings are favorable for classification tasks and relative embeddings perform better for span prediction tasks.
Chen et al. (2021) compare absolute and relative position embeddings as introduced by Ke, He, and Liu (2021). They slightly modify the formulation, add segment embeddings as used in the original BERT formulation (Devlin et al. 2019), and investigate sharing parameters across heads and layers. They find that the argued superiority of relative position embeddings might be due to the fact that these are added to each attention head. When applying the same procedure to absolute position embeddings, they obtain the best performance across a range of natural language understanding tasks.
We provide a high-level comparison of the discussed methods in Table 1. In this table we group similar approaches from a methodological point of view. The objective is to make comparisons easier and spot commonalities faster.
5 Conclusion
This article presented an overview of methods to inject position information into Transformer models. We hope our unified notation and systematic comparison will foster understanding and spark new ideas in this important research area. In this section, we outline limitations of our survey and possible directions for future research.
5.1 Limitations
While this article aims at providing an exhaustive and systematic overview of position information models that assists researchers in finding relevant work, we would like to point out the following limitations.
- (1)
There is a range of work proposing modifications to the core Transformer architecture that have an indirect effect on how position information is handled. In the following, we briefly describe three of these approaches. We decided not to include them in the main discussion because their focus is on modifying core components of the Transformer architecture, such as the dot-product attention mechanism, rather than specifically on how position information is modeled. Note that this survey focuses on position information in Transformers; including all architectures derived from the Transformer that change how position information is handled would quickly go beyond its scope. You, Sun, and Iyyer (2020) propose hard-coded Gaussian attention. More specifically, they replace the dot-product attention with fixed values based on a Gaussian distribution that is centered around the position t of the query. This can be interpreted as a locality bias where tokens around position t have the most influence. Related to this, Beltagy, Peters, and Cohan (2020) introduce a sliding-window attention that only attends to local context. Within the sliding window, they use the standard dot-product attention. This allows the model to process longer sequences efficiently. As a last example, we mention the Synthesizer (Tay et al. 2021a), which, in the extreme case, replaces the dot-product attention with random matrices. Here, positional information does not impact the attention matrix at all anymore.
- (2)
This article does not provide a quantitative comparison of different position information models for two reasons. First, the described models are used in a large number of different tasks and datasets. Picking a single experimental setting as comparison benchmark would bias the reader into thinking that some position information models are universally better than others while this might not be the case. Second, evaluating all models mentioned in this article on a fair and exhaustive set of tasks and datasets is computationally too expensive and constitutes a research effort on its own.
- (3)
This article presents position information models in a certain structure as outlined in Table 2 and compares them along selected dimensions in Table 1. We do not claim that these categories are exhaustive or mutually exclusive. Similarly, there exist countless alternative categorizations. We outlined the reasons why we decided on this particular presentation at the beginning of §3 and §4.
5.2 Future Work
There are many open questions and starting points for future work. We believe that the following areas are important topics for future work related to position information models.
- (1)
Can we use position information models to include more information about the structure of text? While there are many models for processing sequential and graph-based structures, there is a wide range of structural information in text that is not considered currently. Some examples include tables, document layout such as headlines, list enumerations, sentence order, and link structures. Can this structure be integrated with current position information models or are new methods required for representing document structure? Is including the document structure useful for downstream tasks such as document-level machine translation?
- (2)
The majority of the presented position information models are designed with word or subword tokenization in mind. From the beginnings of neural language models (Sutskever, Martens, and Hinton 2011) up to recent research (e.g., Lee, Cho, and Hofmann 2017; Xue et al. 2021), character- and byte-level processing has been a vibrant research area. Designing position information models specifically for character- or byte-level processing thus seems a logical next step. Future directions could make byte-level positions aware of the encoding structure or character-level positions aware of word structures.
- (3)
Some analysis papers such as Wang et al. (2021) are extensive and provide many insights. Still, many aspects and differences of the position information models are not fully understood. A promising future direction is to continue an empirical comparison of different position information models on more tasks, languages, and datasets.
- (4)
For many tasks, treating sentences as bag-of-words could be sufficient. Indeed, Wang et al. (2021) show that without position embeddings the performance drops for some tasks are marginal. Thus we consider it interesting to investigate which tasks require explicit position information.
Acknowledgments
This work was supported by the European Research Council (# 740516) and by the BMBF as part of the project MLWin (01IS18050). The first author was supported by the Bavarian Research Institute for Digital Transformation (bidt) through their fellowship program. We also gratefully acknowledge a Ph.D. scholarship awarded to the second author by the German Academic Scholarship Foundation (Studienstiftung des deutschen Volkes). We thank Nikita Datcenko for helpful discussions and valuable insights.
Notes
1. Dependency parsers usually do not operate on subwords. So subwords are assigned the position of their main word.
References
Author notes
First two authors contributed equally.
Now at Apple.
Now at celebrate company.
Action Editor: Rico Sennrich