## Abstract

Transformers have become a standard neural network architecture for many NLP problems, motivating theoretical analysis of their power in terms of formal languages. Recent work has shown that transformers with *hard* attention are quite limited in power (Hahn, 2020), as they can be simulated by constant-depth AND/OR circuits (Hao et al., 2022). However, hard attention is a strong assumption, which may complicate the relevance of these results in practice. In this work, we analyze the circuit complexity of transformers with *saturated* attention: a generalization of hard attention that more closely captures the attention patterns learnable in practical transformers. We first show that saturated transformers transcend the known limitations of hard-attention transformers. We then prove saturated transformers with floating-point values can be simulated by constant-depth threshold circuits, giving the class TC^{0} as an upper bound on the formal languages they recognize.

## 1 Introduction

Opening the “black box” (Alishahi et al., 2020) of the representations within neural networks is an important step towards building systems with robust and interpretable behavior. In NLP, one part of this question is analyzing the languages that networks can model, and the mechanisms they use to represent linguistic structure and dependencies.

One path toward this goal is via formal analysis of specific network architectures (Merrill, 2021); for example, recurrent neural networks (RNNs). Due to their autoregressive formulation, formal linguistic analysis of RNNs has often characterized their power by relating them to automata-theoretic classes of formal languages (Weiss et al., 2018; Peng et al., 2018; Merrill, 2019, inter alia). Recently, however, RNNs have largely been overtaken in NLP by a new class of models: transformers (Vaswani et al., 2017). Transformers are not autoregressive, and therefore less naturally resemble automata, posing challenges to characterizing their linguistic capacity or inductive biases in the same terms as RNNs. Instead, some recent work has related them to circuit complexity classes, a direction that we continue to pursue in this paper. Drawing on classical circuit lower bound results, Hao et al. (2022) and Hahn (2020) derive theoretical limitations of transformers with *hard* attention, meaning the attention distributions focus all probability mass on one index. Together, their results show that AC^{0}—the class of languages recognizable by constant-depth circuit families—upper bounds the formal languages hard-attention transformers can recognize.

However, hard attention is a strong assumption, making it unclear how these results transfer to practical transformers. For example, Bhattamishra et al. (2020) showed how transformers can solve synthetic counting tasks by using uniform attention patterns, which hard attention does not allow. Motivated by this potential disconnect between theory and practice, we aim to extend circuit-based analysis to transformers with saturated attention: a generalization of hard attention that has been argued to approximate attention patterns acquired through gradient descent (Merrill et al., 2021). Broadly speaking, saturated attention goes beyond hard attention in that it can “tie” across a subset of positions, rather than selecting just *one* position. The tied positions are then aggregated by averaging. Qualitatively, saturated attention heads can “count”: a capability observed in transformers in practice (Bhattamishra et al., 2020). Further, Merrill et al. (2021) show that transformer training dynamics lead attention heads in several pretrained transformers to approximate saturated attention. In summary, saturated attention strictly generalizes hard attention and should more closely reflect the attention patterns acquired in practical transformers.

Our main contributions are twofold. First, we show that saturated transformers can recognize languages outside AC^{0}. Then, as depicted in Table 1, we prove that transformers with floating point activations and saturated attention can only recognize formal languages in the circuit complexity class TC^{0}, constituting an upper bound for a more realistic model of transformers than past results with hard attention.

## 2 Roadmap

In §3, we formally define our model of the transformer, including defining saturated attention in contrast to hard attention. §4 introduces circuits in theoretical computer science and relevant complexity measures and classes for them.

In §5, we first briefly analyze saturated transformers with rational values where the embedding, scoring, and activation functions are allowed to be any size-preserving function. We find such transformers to be universally powerful. We also observe that when the positional embeddings are computed in time linear in the sequence length, saturated rational-valued transformers are exactly as powerful as the complexity class of their activation functions, because the full input sequence can be pooled to a single position, and an activation function can be used as an oracle over the full input sequence. However, this setup relies on the use of unrealistic embedding functions. To move to a more realistic model of computation, we then focus on saturated transformers whose values are restricted to be *floats*, which have a coarser granularity and, thus, cannot encode the full input sequence into a single position.

Building on results of Pérez et al. (2019), we demonstrate in §6 that saturated transformers with *float* activations transcend the theoretical limitations of hard-attention transformers. In particular, we will show that they can recognize the majority language, which lies outside AC^{0}. We experimentally validate that transformers can learn to recognize the majority language. Taken together, these results suggest that the very weak characterization of hard-attention transformers does not hold in practice for saturated or soft attention.

In §7, we show that, on input sequences of length *n*, the size of each state vector in a transformer over floats is $O(\log n)$ bits, similar to saturated LSTMs (cf. Merrill, 2019). Thus, the full transformer state at any given layer has size $O(n \log n)$, although each feedforward block can only locally access a small, $O(\log n)$ “piece”. Hence, while hierarchical representations can be implemented in a transformer (e.g., to process arbitrary-depth Dyck languages or reverse strings, as in Weiss et al. [2021]), our result implies that they must be *distributed* in some way across *n* state vectors, rather than represented compactly within a single vector.

Finally, in §8, we use the bounded size of transformer representations to upper bound the formal languages that can be recognized by saturated transformers with floating-point values. In particular, we show that such transformers can be simulated by constant-depth threshold circuits, and thus only recognize languages in TC^{0}. Informally, this suggests that moving from hard attention to saturated attention can be thought of as extending the implicit class of circuit gates available in the network to include threshold gates.

Our results make progress in the analysis of transformers by deriving upper bounds for a more realistic model of transformers than has previously been analyzed. RoBERTa, T5, and other pretrained transformers have been shown to be approximately saturated (Merrill et al., 2021), so our results imply that TC^{0} may be a meaningful upper bound on the computation expressible within such networks. Our analysis also motivates future work further refining the circuit characterization of saturated transformers, as well as comparing transformers with soft and saturated attention.

## 3 Definitions and Notation

We will often use *w* to refer to a string over any generic alphabet Σ, that is, *w* ∈ Σ^{*}. Semantically, *w* corresponds to the string a transformer receives as input. In contrast, we use *x* and other symbols to refer to binary strings in {0,1}^{*}. These binary strings will represent intermediate values within the transformer computation, rather than the raw input to the transformer.

### 3.1 Datatypes

Under our model, all values in the transformer are binary strings. In order to compute self attention and other operations over binary strings, we need to define datatypes describing the semantics of these binary strings as numbers. We will describe a semantics for binary strings as integers, as often comes up in circuit complexity. We then extend this to rational numbers and floats, which are necessary for representing the division operations that occur in attention heads within transformers.

##### Unsigned Integers

We interpret *x* ∈ {0,1}^{*} as an unsigned integer in the standard way, namely, the numerical value of *x* ∈ {0,1}^{n} is $\sum_{i=1}^{n} 2^{n-i} x_i$. We also define the standard arithmetic operations +_{ℤ}, ·_{ℤ}, <_{ℤ} over these values. For example, 101 +_{ℤ} 1 = 110.

##### Rationals

To interpret *r* ∈ {0,1}^{*} as a rational number, we first view it as a sign bit *s* along with a tuple of two unsigned integer substrings $\langle p, q \rangle$.^{1} The numerical value represented by *r* is $(-1)^{s} \cdot p/q$. We keep rationals in reduced form: let reduce(*p*,*q*) return $\langle s, t \rangle$ where *s* = *p*/gcd(*p*,*q*) and *t* = *q*/gcd(*p*,*q*). Then, we can define arithmetic operations over two rationals $r = \langle p, q \rangle$ and $r' = \langle p', q' \rangle$ in the standard way, for example (ignoring signs):
$$r +_{\mathbb{Q}} r' = \mathrm{reduce}(pq' + p'q,\, qq'), \qquad r \cdot_{\mathbb{Q}} r' = \mathrm{reduce}(pp',\, qq').$$

##### Floats

We define floats $F$ as the subset of the rationals where the denominator is constrained to be a power of 2.^{2} Multiplication and addition are defined as for ℚ, and are guaranteed to produce another float. Notably, division for floats is implemented by multiplying by an approximate multiplicative inverse, so it may be that $(x /_{F} y) \cdot_{\mathbb{Q}} y \neq x$. See Appendix A for a more formal discussion.
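This float semantics can be made concrete with a small model. The following Python sketch is our own illustration, not the paper's formalism from Appendix A: it represents floats as `Fraction`s with power-of-2 denominators, and the `precision` parameter and function names are assumptions made for this example.

```python
from fractions import Fraction

def to_float(x: Fraction, precision: int = 8) -> Fraction:
    """Round a rational to the nearest value whose denominator is the
    power of 2 given by 2**precision: a toy model of the float datatype."""
    scaled = round(x * 2**precision)
    return Fraction(scaled, 2**precision)

def float_div(x: Fraction, y: Fraction, precision: int = 8) -> Fraction:
    """Float division via an approximate multiplicative inverse."""
    inv = to_float(Fraction(1, 1) / y, precision)
    return to_float(x * inv, precision)

q = float_div(Fraction(1), Fraction(3))
assert q * Fraction(3) != Fraction(1)  # (x /_F y) ·_Q y ≠ x in general
```

Here `float_div(1, 3) * 3` yields 255/256 rather than 1, illustrating the inexactness of float division noted above.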

In §5, we will study transformers over rational values. From §6 onwards, we will then take the values in transformers to be floats unless otherwise stated. Going forward, we will generally omit datatype subscripts from operations where they are clear from context. We will sometimes write $D$ as a set in function signatures, for example, $f : D^{k} \to D^{k}$. In this usage, it refers to the set {0,1}^{*}, but it is often more intuitive to write the datatype shorthand (rather than {0,1}^{*}) to hint at the intended semantics of the functional arguments.

##### Size of Binary Strings

Under our model, integers, rationals, and floats are all abstractions built out of binary strings. For any *x* ∈ {0,1}^{*} (which can be interpreted semantically as an integer, float, or rational), we define its size $|x|$ as the total length of *x* measured in bits. We imagine a tuple $\langle p, q \rangle$ is encoded by padding *p*,*q* to the same length with leading 0’s, and interleaving bits from each sequence. This means the size of a rational is $2\max(|p|,|q|)+1$. For example, the integer 2 takes 2 bits to specify, while the float $\frac{1}{2}$ takes 5 bits (1 for the sign, 2 for the numerator, 2 for the denominator).
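To illustrate the size measure, here is a minimal Python sketch (the helper names are our own) computing the size of unsigned integers and of interleaved sign-plus-tuple encodings:

```python
def int_size(x: int) -> int:
    """Bits needed to write x as an unsigned binary string."""
    return max(1, x.bit_length())

def rational_size(p: int, q: int) -> int:
    """Size of a signed rational <p, q>: p and q are padded to equal
    length and interleaved, plus 1 sign bit: 2 * max(|p|, |q|) + 1."""
    return 2 * max(int_size(p), int_size(q)) + 1

assert int_size(2) == 2          # "10"
assert rational_size(1, 2) == 5  # the float 1/2 from the example above
```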

##### Size Preservation

We say that a function *f* : {0,1}^{*} → {0,1}^{*} is size-preserving iff there exist constants *c*,*n* such that for all inputs *x* with $n \le |x|$, $|f(x)| \le c \cdot |x|$. Let 𝒫 be the set of size-preserving functions. While size-preserving functions are defined here over binary strings, they can be equivalently applied over integers, rationals, and floats, since these datatypes, as we have defined them, are just binary strings.
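As a sanity check on this definition, the following sketch (ours, not from the paper) verifies empirically that unsigned-integer multiplication is size-preserving with *c* = 1 under the tuple encoding of §3.1, since $|x \cdot y| \le |x| + |y| \le 2\max(|x|,|y|) + 1$:

```python
def int_size(x: int) -> int:
    """Bits needed to write x as an unsigned binary string."""
    return max(1, x.bit_length())

def pair_size(x: int, y: int) -> int:
    """Size of the interleaved tuple encoding <x, y> from Section 3.1."""
    return 2 * max(int_size(x), int_size(y)) + 1

# |x * y| <= |x| + |y| <= |<x, y>|, so multiplication is size-preserving.
for x in range(1, 200):
    for y in range(1, 200):
        assert int_size(x * y) <= pair_size(x, y)
```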

### 3.2 Transformers

We define the following general transformer model, which can be parameterized to use different types of attention patterns and whose internal functions (e.g., feedforward blocks) can be computed by different function classes.

A *transformer* is a tuple $\langle \Sigma, D, \alpha, L, H, \varphi, \{s_{\ell,h}\}_{\ell,h=1}^{L,H}, \{f_{\ell}\}_{\ell=1}^{L} \rangle$ where:

- Σ is a finite input alphabet, that is, the set of token types in a formal language.
- $D$ is a scalar datatype, that is, a semantics for interpreting binary strings as numbers. We will generally consider $D = F$.
- *α* is an attention function that maps a vector of attention scores in $D^{n}$ (for any *n*) to a normalized probability distribution, also in $D^{n}$. In this paper we take *α* to be either hard (*η*) or saturated (*ζ*) attention; see §3.3.
- $L \in \mathbb{N}$ is the number of layers.
- $H \in \mathbb{N}$ is the number of heads.
- $\varphi : \Sigma \times \mathbb{N} \to D^{m}$ is a position-aware embedding function that maps a token and position to a vector, where *m* is a multiple of *H*.
- For each *ℓ*,*h*, the function $s_{\ell,h} : D^{m} \times D^{m} \to D$ assigns attention scores to pairs of values.
- For each *ℓ*, the function $f_{\ell} : D^{m} \times D^{m} \to D^{m}$ maps a previous layer value and attention head output to a new value vector.

On an input string *w* ∈ Σ^{n}, a transformer computes *L* layers of output sequences *v*_{ℓ,1},⋯ ,*v*_{ℓ,n} (for *ℓ* ≤ *L*), where each $v_{\ell,i} \in D^{m}$. In the 0th layer, each token *w*_{i} and its position *i* are embedded into a value *v*_{0,i}. Subsequent layers aggregate information from the previous value sequence *v*_{ℓ} using a *multi-head attention mechanism*, and output a new value sequence *v*_{ℓ+1}. More formally, these layers are structured as follows:

1. **Embedding Layer:** $v_{0,i} = \phi(w_i, i)$.
2. **Attention Head:** Each of the *H* attention heads in layer *ℓ* maps the full previous sequence into a new value via *s*_{ℓ,h} and then applies the attention function *α*:
$$a_{\ell,h,i,j} = s_{\ell,h}(v_{\ell,i}, v_{\ell,j}), \qquad b_{\ell+1,h,i} = \sum_{j=1}^{n} \alpha(a_{\ell,h,i,:})_j \cdot v_{\ell,j}.$$
Crucially, the semantics for addition and multiplication here (as well as in the computation of *α*) come from the datatype $D$.
3. **Activation Block:**^{3} $v_{\ell+1,i} = f_{\ell+1}(v_{\ell,i}, b_{\ell+1,:,i}).$
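The head computation can be sketched in a few lines of Python over exact `Fraction` values. This is our own illustration rather than an implementation from the paper; `uniform_alpha` is a stand-in for the attention functions *α* defined in §3.3, and the function names are hypothetical.

```python
from fractions import Fraction
from typing import Callable, List

Vec = List[Fraction]

def uniform_alpha(scores: List[Fraction]) -> List[Fraction]:
    """Stand-in attention function; Section 3.3 defines hard/saturated α."""
    n = len(scores)
    return [Fraction(1, n)] * n

def attention_head(values: List[Vec],
                   score: Callable[[Vec, Vec], Fraction],
                   alpha: Callable[[List[Fraction]], List[Fraction]],
                   i: int) -> Vec:
    """Compute b_i = sum_j alpha(a_{i,:})_j * v_j for a single head."""
    a = [score(values[i], values[j]) for j in range(len(values))]
    probs = alpha(a)
    dim = len(values[0])
    return [sum(probs[j] * values[j][k] for j in range(len(values)))
            for k in range(dim)]

# constant scores + uniform alpha average the value vectors
vals = [[Fraction(1), Fraction(0)],
        [Fraction(0), Fraction(1)],
        [Fraction(0), Fraction(1)]]
b = attention_head(vals, lambda u, v: Fraction(1), uniform_alpha, 0)
```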

### 3.3 Attention Functions

An attention function *α* maps a vector of scores $a \in D^{n}$ to a probability distribution over 1,⋯ ,*n*. Specifically, we consider two attention functions: *hard* attention *η*(*a*) and *saturated* attention *ζ*(*a*).

Hard attention collapses the attention scores to a one-hot distribution with all mass concentrated at one index. Let $M(a) = \{ i \mid a_i = \max_j a_j \}$.

**Definition 2 (Hard attention).** Define *η*(*a*) as
$$\eta(a)_j = \begin{cases} 1 & \text{if } j = \min M(a) \\ 0 & \text{otherwise.} \end{cases}$$

In contrast, saturated attention spreads probability mass evenly across “tied” scores.

**Definition 3 (Saturated attention).** Define *ζ*(*a*) as
$$\zeta(a)_j = \begin{cases} 1/|M(a)| & \text{if } j \in M(a) \\ 0 & \text{otherwise.} \end{cases}$$

Saturated attention can be derived by taking a large-norm limit of the network weights; a derivation can be found in Merrill (2019). Saturated attention reduces to hard attention when $|M(a)| = 1$, and attends uniformly when $|M(a)| = n$. Both hard and uniform attention can be implemented with numerical stability, motivating *weak* saturated (or “uniform”) attention:

**Definition 4 (Weak saturated attention).** Each head implements either hard attention (Definition 2) *or* the uniform pattern $\upsilon(a)_j = 1/n$.

In general, we will use “saturated attention” to refer to strong saturated attention and provide upper bounds for this setting. On the other hand, our lower bounds only use weak saturated attention, thereby showing that even weak saturated attention is more powerful than hard attention.
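The attention patterns of this subsection can be sketched directly in Python (our illustration; we assume, per the hard-attention definition, that ties break toward the lowest index):

```python
from fractions import Fraction

def hard_attention(a):
    """η(a): all mass on the lowest index achieving the max score."""
    j = min(i for i, x in enumerate(a) if x == max(a))
    return [Fraction(int(i == j)) for i in range(len(a))]

def saturated_attention(a):
    """ζ(a): mass spread uniformly over the maximal indices M(a)."""
    M = [i for i, x in enumerate(a) if x == max(a)]
    return [Fraction(1, len(M)) if i in M else Fraction(0)
            for i in range(len(a))]

def uniform_attention(a):
    """υ(a)_j = 1/n, the uniform pattern of weak saturated attention."""
    return [Fraction(1, len(a))] * len(a)

scores = [3, 1, 3, 2]
assert hard_attention(scores) == [1, 0, 0, 0]
assert saturated_attention(scores) == [Fraction(1, 2), 0, Fraction(1, 2), 0]
```

With a unique maximum, `saturated_attention` coincides with `hard_attention`; with all scores tied, it coincides with `uniform_attention`.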

### 3.4 Language Recognition

Finally, we define language recognition for transformers.

Write *v*_{ℓ,i}(*w*) for the value of *v*_{ℓ,i} on input string *w*.

**Definition 5 (Language recognition).** A transformer recognizes a language ℒ ⊆ Σ^{*} if there exists a $D$-valued affine transformation *W*,*b* such that, for all *w* ∈ Σ^{*},
$$W \cdot v_{L,1}(w) + b > 0 \iff w \in \mathcal{L}.$$

This says that the decision problem of recognizing ℒ must be linearly separable using the first value in the last layer of the transformer. In practice, the first token in a transformer is often set to CLS, and its output can be passed to a classifier during finetuning (Devlin et al., 2019). This inspires Definition 5. There are other potential ways to define language recognition and generation for transformers (Hewitt et al., 2020; Yao et al., 2021), but they do not lead to meaningful differences for our purposes.

Finally, we define $AHAT(D)$ as the set of languages recognizable by some saturated transformer over $D$, where the internal functions can be any size-preserving function.^{4}

Let $AHAT(D)$ be the set of languages ℒ such that there exists a transformer $\langle \Sigma, D, \zeta, L, H, \varphi, s_{\ell,h}, f_{\ell} \rangle$ that recognizes ℒ, where each *ϕ*,*s*_{ℓ,h},*f*_{ℓ} ∈ 𝒫.^{5}

We note that size preservation is a weak condition to assume about the internal functions in practical transformers: Because any linear-time-computable function is size-preserving, it is strictly weaker than assuming that the internal functions can be computed in linear time. To further justify this condition, we explicitly show in Appendix B that the component functions within transformers are size-preserving.

## 4 Circuit Complexity

Circuit complexity is a branch of computational complexity theory that studies circuit families as a model of computation.^{6} Intuitively, circuits are useful for formally studying the types of computational problems that can be efficiently solved with parallelism, as the depth of a circuit corresponds to the runtime of a program on an idealized, fully parallel computer. We review background on circuits, circuit families, and relevant complexity measures and classes.

##### Circuits

For a fixed *n*, a *circuit* is a computation graph, where leaves correspond to input bits *x*_{i} and their negations ¬*x*_{i}, and the internal nodes are logic gates (typically ∧ and ∨), with one labeled as the output node. The gates can conventionally be taken to have either binary or unbounded fan-in. The circuit computes a function *f* : {0,1}^{n} →{0,1} by substituting the input values into the leaf nodes, propagating the computation through the graph, and returning the value of the output node. Figure 1 shows an example circuit that takes inputs of length 5, and returns whether they contain the bigram 11.
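The example circuit can be emulated in a few lines of Python. This is our rendering of the idea described for Figure 1, with one AND gate per adjacent pair of input bits and a single OR output gate:

```python
def contains_11_circuit(x):
    """Depth-2 circuit for inputs of length 5: an OR gate over four
    AND gates, one per adjacent pair of input bits."""
    assert len(x) == 5
    ands = [x[i] and x[i + 1] for i in range(4)]  # AND layer
    return int(any(ands))                         # OR output gate

assert contains_11_circuit([0, 1, 1, 0, 0]) == 1
assert contains_11_circuit([1, 0, 1, 0, 1]) == 0
```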

##### Circuit Families

A *circuit family* is an ordered set of circuits {*C*_{n}}_{n∈ℕ} where each circuit is identified with a particular input size *n*. We say a circuit family recognizes a formal language ℒ ⊆ {0,1}^{*} iff, for all *w* ∈ {0,1}^{*}, $C_{|w|}(w) = 1 \iff w \in \mathcal{L}$.^{7}

##### Circuit Complexity

Two important notions of complexity for a circuit are its size and depth. The size of a circuit is the number of gates. The depth is the longest path from an input node to the output node. For a circuit family, both quantities can be expressed as functions of the input size *n*. A *circuit complexity class* is a set of formal languages that can be recognized by circuit families of a certain size, depth, and set of gates. In particular, we will discuss the classes AC^{0} and TC^{0}.

AC^{0} is the set of languages ℒ ⊆{0,1}^{*} such that there exists a circuit family recognizing ℒ with unbounded arity {∧,∨} gates, poly(*n*) size, and O(1) depth.

Intuitively, AC^{0} represents the class of problems that are highly parallelizable when the computational primitives are standard logic gates. In contrast, TC^{0} will also represent highly parallelizable computation, but when the gates are expanded to include *threshold gates*.

For a bitstring *x* ∈ {0,1}^{*}, define the threshold gate *θ*_{≥k}(*x*) to return 1 iff at least *k* bits in *x* are 1, and define *θ*_{≤k}(*x*) analogously. For example, *θ*_{≥3}(110011) = 1.
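A threshold gate is straightforward to emulate; the sketch below (with our own helper names) reproduces the worked example $\theta_{\ge 3}(110011) = 1$:

```python
def theta_geq(k, x):
    """Threshold gate θ_{>=k}: fires iff at least k input bits are 1."""
    return int(sum(x) >= k)

def theta_leq(k, x):
    """Threshold gate θ_{<=k}: fires iff at most k input bits are 1."""
    return int(sum(x) <= k)

assert theta_geq(3, [1, 1, 0, 0, 1, 1]) == 1  # θ_{≥3}(110011) = 1
```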

TC^{0} is the set of languages ℒ ⊆{0,1}^{*} such that there exists a circuit family recognizing ℒ with unbounded arity {∧,∨,*θ*} gates, poly(*n*) size, and O(1) depth.

It is known that AC^{0} ⊂ TC^{0} ⊆ NC^{1}, where NC^{1} denotes the languages recognizable by $O(\log n)$-depth circuits with bounded gate arity. Whether the containment between TC^{0} and NC^{1} is strict is an open question. Whereas parity and other basic regular languages are outside AC^{0} (Furst et al., 1981), TC^{0} properly contains parity, although it is unknown whether it contains *all* the regular languages. Between AC^{0} and TC^{0} lies the class ACC^{0} (Yao, 1990).

##### Uniformity

The circuit families we have defined above are *non-uniform*, meaning circuits for different input sizes are not constrained to have any relation to each other. Non-uniform circuit families can recognize some uncomputable languages, such as the language of strings 1^{k} such that Turing machine *k* does not halt on the null input (cf. Arora and Barak, 2009). In contrast, the *uniform* variants of circuit families are constrained such that a log-space Turing machine must output a string encoding of circuit *C*_{n} on the input string 1^{n}, forcing any language the circuit family can recognize to be computable. For these uniform classes (which we write with a *u* prefix), it is known that *u*TC^{0} ⊆ *u*NC^{1} ⊆ L ⊆ P, where L denotes log space. It is unknown whether *u*TC^{0} is restricted compared to general polynomial-time computation, but if we accept the common conjecture that one (if not all) of the above containments are strict, then *u*TC^{0} forms a restricted family of problems compared to P which, intuitively, are more parallelizable than other problems in P.

## 5 Aren’t Transformers Universal?

We now begin our analysis of saturated transformers. Hao et al. (2022) and Hahn (2020) were able to give upper bounds on the power of hard attention without imposing any constraints on the embedding, scoring, and activation functions. The same will not be the case with saturated attention: any bounds on transformers will require leveraging some properties constraining their internal functions. One property we use will be size preservation. We will first show, though, that size preservation is not enough on its own: Deriving a nontrivial upper bound will depend on subtle assumptions about the transformer’s datatype.

With rational values and size-preserving internal functions, we will show saturated transformers can recognize *any* formal language, namely, the class ALL = {ℒ∣ℒ ⊆{0,1}^{*}}. Our construction resembles the universal approximation construction of Yun et al. (2020), which relies on the ability of the transformer to uniquely encode the full input string into a single value vector. After the full sequence is encoded locally into a single vector, the activation block can be used as a black box to recognize any language.

**Theorem 1.** AHAT(ℚ) = ALL.

*Proof.* Consider any string *w* in any formal language ℒ ∈ ALL. We will omit *ℓ*,*h* subscripts. Let *p*_{i} denote the *i*th prime number. The embedding layer encodes each input token according to
$$\phi(w_i, i) = \begin{cases} 1/p_i & \text{if } w_i = 1 \\ 0 & \text{otherwise.} \end{cases}$$
Since $p_i \sim i \ln i$ by the prime number theorem (cf. Goldstein, 1973), the number of bits needed to represent the denominator of *ϕ*(*w*_{i},*i*) is $O(\log i)$. Since the input position *i* had size $\log i$, this implies *ϕ* is size-preserving.

Now, we define a single uniform attention head that sums across all *i*, outputting $\sum_{w_i=1} \frac{1}{p_i}$. The denominator *q* of this (reduced) sum is the product $\prod_{w_i=1} p_i$. Observe that *w*_{i} = 1 iff *p*_{i} divides *q*. Thus, we can define a function *f*′ that extracts the input sequence *w* from *q* by checking whether, for each *i*, *p*_{i} divides *q*. We let *g* be a function recognizing ℒ, and set *f* = *g* ∘ *f*′. The output of the transformer will now compute whether *w* ∈ ℒ, since *f*′ outputs an encoding of the original input sequence *w*, and *g* decides whether *w* ∈ ℒ. Note that any function solving a decision problem is size-preserving, hence *f* ∈ 𝒫. ∎
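The prime-encoding trick in this proof can be checked concretely. The sketch below is our own demonstration; it omits the uniform head's averaging factor for simplicity, and the helper names are hypothetical.

```python
from fractions import Fraction

def primes(n):
    """First n primes by trial division (fine for small n)."""
    ps, c = [], 2
    while len(ps) < n:
        if all(c % p for p in ps):
            ps.append(c)
        c += 1
    return ps

def encode(w):
    """The head's sum Σ_{w_i = 1} 1/p_i, as an exact rational."""
    ps = primes(len(w))
    return sum((Fraction(1, p) for p, b in zip(ps, w) if b), Fraction(0))

def decode(q, n):
    """Recover w from the denominator q: w_i = 1 iff p_i divides q."""
    return [1 if q % p == 0 else 0 for p in primes(n)]

w = [1, 0, 1, 1, 0]
s = encode(w)  # 1/2 + 1/5 + 1/7 = 59/70
assert decode(s.denominator, len(w)) == w
```

Since the primes are distinct, the sum never reduces away a factor *p*_{i}, so the input string is always recoverable from the denominator.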

Theorem 1 says that our transformer architecture parameterized with a rational datatype can recognize any formal language. But a construction of this form feels unrealistic for two reasons. First, it requires the embedding layer to implement an unconventional prime-based encoding scheme. Second, we are using the activation layer as a black box to recognize any language—even uncomputable ones! On the other hand, the feedforward subnetworks used in practice in transformers cannot even implement all computable functions when the weights are fixed independent of the sequence length *n*. We can get around both these issues by instead restricting the datatype to floats, which is the direction we will pursue in the remaining sections.^{8}

### 5.1 Resource-Bounded Transformers

In Appendix C, we develop an alternate perspective on the universality of transformers, showing that, if the embedding function is allowed to be computed in time linear in the sequence length, then the transformer’s complexity is equivalent to its activation functions’ complexity.

*If ϕ can be any function computable in time linear in n, and the scoring and activation functions can be computed in T*(*m*) *time on inputs of size m with* $T(m) \ge m$, *then the languages recognizable by the transformer are exactly* TIME(*T*(*m*)).

Appendix C contains a formal statement and proof. For example, allowing polynomial-time functions inside the transformer implies that the transformer will recognize exactly the complexity class P. A major unrealism about this setup is the assumption that *ϕ* can be an arbitrary function computable in time linear in *n*, motivating our main results in a more constrained setting in §8.

### 5.2 Discussion

We are not stating the results in this section as evidence that practical transformers are capable of universal or arbitrary polynomial computation. Rather, the unnaturalness of these constructions (specifically, the prime numbers based position encoding) motivates us to slightly constrain our model of the transformer in a realistic way: We will switch the datatype from rationals to floats, because even using only simple uniform attention, a model with rationals and unconstrained internal functions is universal. We will soon see that this realistic constraint prevents universal simulation, and in fact bounds the capacity of the saturated transformer within TC^{0}.

## 6 Beyond Hard Attention, with Floats

We now move to the setting of saturated transformers over floats. Hao et al. (2022) identified that hard-attention transformers can only recognize languages within AC^{0}. In contrast, saturated transformers over floats can recognize the “majority” language maj, which is known to lie outside AC^{0} (Furst et al., 1981). Pérez et al. (2019, Prop. 3.3) show how maj can be recognized by transformers. In Theorem 3, we offer a simpler construction that leverages only a single uniform attention head, rather than the stronger transformer model that Pérez et al. (2019) consider. Thus, this construction is achievable with saturated attention.

**Theorem 3.** $AHAT(F) \not\subseteq AC^{0}$.

*Proof.* Let *#*_{σ}(*w*) ∈ ℕ denote the number of *σ* tokens in string *w* ∈ {0,1}^{*}, and let *#*(*w*) denote a count vector where each element corresponds to some *σ* ∈ {0,1}. We define maj as follows:
$$\mathrm{maj} = \{ w \in \{0,1\}^{*} \mid \#_1(w) > \#_0(w) \}.$$
We construct a 1-layer transformer recognizing maj, omitting *ℓ*,*h* subscripts from *s*,*f*,*x*,*b*. Figure 2 gives the same construction in RASP (Weiss et al., 2021).

Let *x*_{i} = *ϕ*(*w*_{i},*i*) be a 1-hot encoding of *w*_{i}. For all *i*,*j*, set *s*(*x*_{i},*x*_{j}) = 1, resulting in a single head attending uniformly everywhere:
$$b_i = \frac{1}{n} \sum_{j=1}^{n} x_j = \frac{\#(w)}{n}.$$
Finally, we define *f*(*b*_{i}) to return whether $\#_1(w)/n > \#_0(w)/n$, which, for *n* > 0, is true iff *w* ∈ maj.

It is worth noting that this construction can be implemented using the conventional parameterization of *ϕ*, *s*, and *f* in real transformers (Vaswani et al., 2017). The uniform attention pattern can be implemented by setting all query and key attention parameters to 0. Then, we can use the affine transformation that aggregates the head outputs to compute a tuple $\langle t_1, t_2 \rangle$ such that *t*_{1} > *t*_{2} iff the same inequality holds between $\#_1(w)/n$ and $\#_0(w)/n$. Thus, a linear classifier can decide whether *t*_{1} > *t*_{2} to successfully recognize the language, as per Definition 5. ∎
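The full recognizer can be condensed into a short simulation. This Python sketch is our own condensed mirror of the construction (one-hot embeddings, a single uniform head, a comparison of the two averaged coordinates); the function name is hypothetical.

```python
from fractions import Fraction

def recognizes_maj(w):
    """1-layer saturated-transformer sketch for maj: a uniform head
    averages one-hot token embeddings, then a linear classifier
    compares the two coordinates of b = #(w)/n."""
    n = len(w)
    if n == 0:
        return False
    embed = {0: (1, 0), 1: (0, 1)}  # one-hot embedding φ
    # uniform attention: b = (1/n) Σ_j x_j = <#_0(w)/n, #_1(w)/n>
    b = [sum(Fraction(embed[t][k], n) for t in w) for k in range(2)]
    return b[1] > b[0]  # #_1(w)/n > #_0(w)/n

assert recognizes_maj([1, 1, 0]) is True
assert recognizes_maj([0, 1, 0, 1]) is False  # ties are not in maj
```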

### 6.1 Empirical Validation

In Figure 3, we show empirically that a 1-layer transformer can learn and generalize maj. This supports our argument that the theoretical limitations of hard-attention transformers do not apply to practical transformers. We train with three different types of positional encoding: *none*, meaning no positional information; *learned*, where each position gets a trainable embedding vector; and *sinusoidal*, the scheme of Vaswani et al. (2017). The model with no positional embeddings generalizes best, followed by the learned embeddings. It appears that while maj is within the capacity of the transformer, the standard sinusoidal positional embedding scheme provides the wrong inductive bias for learning it. This recalls the finding of Yao et al. (2021) that the choice of positional encodings seems to greatly impact the transformer’s ability to generalize formal language tasks to longer sequences.

## 7 Size of Transformer Values

The theoretical limits on hard-attention transformers were derived by Hao et al. (2022) by bounding the size in bits of the representation *v*_{ℓ,i} at each layer *ℓ* and position *i*. Specifically, they show that the value *v*_{ℓ,i} is representable in $O(\log n)$ bits on input sequences of length *n*. Thus, each value can only contain limited information about the input sequence, intuitively explaining their upper bound on hard-attention transformers. Inspired by their analysis, this section will show that, in a saturated transformer, each *v*_{ℓ,i} also has a size of $O(\log n)$ bits. Later, in §8, we will use this property to show that saturated transformers are limited in the formal languages they can recognize.

### 7.1 Size of Float Sums

How many bits does it take to represent the value of an attention head within a saturated transformer? As a naive bound, the output of a saturated attention head is specified by a float for each of the *n* values attended over from the last layer, which would take at least linearly many bits in *n*. However, this upper bound on its size is not tight. Instead, we will show that all head and activation values can be represented in $O(\log n)$ bits. Our analysis will rely heavily on the following lemma:

**Lemma 1.** Let *v*_{1},⋯ ,*v*_{n} be a sequence of floats, each with size at most *z*. Then there exists *c* such that $\sum_{i=1}^{n} v_i$ has size at most $4cz + 2\log n + 1$.

*Proof.* Let *p*_{i},*q*_{i} denote the numerator and denominator of the float *v*_{i}, respectively. Similarly, let *p*_{s},*q*_{s} be the numerator and denominator of the float $s = \sum_{i=1}^{n} v_i$. By assumption, there exists *c* such that each *p*_{i},*q*_{i} both have size ≤ *cz* for large enough *n*. We let $p_{\max} = \max_i p_i$ and analogously for *q*_{max}. Because all *q*_{i}’s are powers of 2, the common denominator is *q*_{s} = *q*_{max}, which has size ≤ *z*, and the numerator satisfies
$$p_s \le \sum_{i=1}^{n} p_i \cdot \frac{q_{\max}}{q_i} \le n \cdot p_{\max} \cdot q_{\max},$$
which has size at most $2cz + \log n$. Therefore, the float representing the sum has size at most
$$2\max(|p_s|, |q_s|) + 1 \le 2(2cz + \log n) + 1 = 4cz + 2\log n + 1. \;\;\blacksquare$$

In particular, we will use Lemma 1 to show that, when each of a sequence of *n* values has size $O(\log n)$, the sum will also have size $O(\log n)$.
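Lemma 1 can be checked empirically. The sketch below (ours, not from the paper) draws random floats with power-of-2 denominators and verifies the size bound with *c* = 1, which suffices here because each $|p_i|, |q_i| \le z$ directly:

```python
from fractions import Fraction
import random

def float_size(x: Fraction) -> int:
    """Size of a float <p, q> under the tuple encoding of Section 3.1:
    2 * max(|p|, |q|) + 1 bits."""
    bits = lambda v: max(1, abs(v).bit_length())
    return 2 * max(bits(x.numerator), bits(x.denominator)) + 1

random.seed(0)
for n in (4, 16, 64):
    # random positive floats: power-of-2 denominators, size at most z
    vs = [Fraction(random.randint(1, 15), 2 ** random.randint(0, 3))
          for _ in range(n)]
    z = max(float_size(v) for v in vs)
    s = sum(vs, Fraction(0))
    # Lemma 1 bound, taking c = 1 and bit_length(n) >= log2(n):
    assert float_size(s) <= 4 * z + 2 * n.bit_length() + 1
```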

### 7.2 Size of Transformer Values

We will now leverage Lemma 1 to show that the values are of bounded size in any transformer over floats with an elementwise-size-preserving attention function.

A function $\alpha : D^{n} \to D^{n}$ is elementwise-size-preserving if, for 1 ≤ *i* ≤ *n*, the function *x*_{i} ↦ *α*(*x*)_{i} is size-preserving (where $x \in D^{n}$).

Note that saturated attention satisfies this definition. We are ready to prove a theorem bounding the size of the representations in transformers with elementwise-size-preserving attention.

**Theorem 4.** *For any transformer over* $F$ *with* *ϕ*,*s*_{ℓ,h},*f*_{ℓ} ∈ 𝒫 *and* *α* *elementwise-size-preserving, for all* *ℓ* ≤ *L* *and* *i* ≤ *n*, *v*_{ℓ,i} *has size* $O(\log n)$.

*Proof.*

By induction over *ℓ*. The proof follows the definition of transformer computation in §3.2.

##### Base Case

Each token *w*_{i} ∈ Σ has size O(1), and *i* ∈ [*n*] has size $O(\log n)$. Since *ϕ* ∈ 𝒫, *v*_{0,i} = *ϕ*(*w*_{i},*i*) has size $O(\log n)$ for all *i*.

##### Inductive Case

Assume each *v*_{ℓ,i} has size $O(\log n)$. Since *s*_{ℓ+1,h} ∈ 𝒫, the score *a*_{ℓ+1,h,i,j} = *s*_{ℓ+1,h}(*v*_{ℓ,i}, *v*_{ℓ,j}) has size $O(\log n)$ for all *i*,*j*. Since *α* is elementwise-size-preserving, we can conclude that *α*(*a*_{ℓ+1,h,i,:})_{j} also has size $O(\log n)$ for all *h*,*i*,*j*. Multiplying two floats is size-preserving (cf. Appendix B), so *α*(*a*_{ℓ+1,h,i,:})_{j} · *v*_{ℓ,j} has size $O(\log n)$ for all *h*,*i*,*j*. We then apply Lemma 1 to conclude that *b*_{ℓ+1,h,i} has size $O(\log n)$. Finally, recalling that *v*_{ℓ+1,i} = *f*_{ℓ+1}(*v*_{ℓ,i}, *b*_{ℓ+1,:,i}), we conclude that *v*_{ℓ+1,i} has size $O(\log n)$ for all *i* by size preservation. ∎

###### Corollary 4.1.

*For any saturated transformer over* $F$ *with size-preserving internal functions, for all* *ℓ* ≤ *L* *and* *i* ≤ *n*, *v*_{ℓ,i} *has size* $O(logn)$.

Corollary 4.1 follows because saturated attention is elementwise-size-preserving. Softmax attention, on the other hand, is not guaranteed to fulfill this property, because it requires computing the exponential function. This technical challenge prevents generalizing our technique to soft attention.

### 7.3 Discussion

Similar to hard-attention transformers (Hao et al., 2022), the size of each vector representation in a saturated transformer over floats is $O(logn)$. This is enough memory for individual vectors to “count”, a behavior that has been observed in both LSTMs (Weiss et al., 2018) and transformers (Bhattamishra et al., 2020). On the other hand, $O(logn)$ space is not enough memory for individual vectors (for example, the CLS output) to encode arbitrarily large combinatorial objects like trees. However, transformers are *not* limited to computing in an “online” fashion where tokens are consumed sequentially, meaning that their effective state is *n* values of size $O(logn)$. Notably, trees with *n* leaves can be encoded in a distributed fashion across *n* values of size $O(logn)$. One construction for this is, at index *i*, to store *w*_{i} and *i*, along with a pointer *j* to the parent. Since *i*,*j* can both be represented in $logn$ bits, each vector uses only $O(logn)$ bits.
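The parent-pointer construction sketched above can be made concrete; this is a hedged illustration whose record layout and names are ours, not the paper's:

```python
import math

# Encode a tree over n positions as n records (w_i, i, parent_i): each
# index fits in ceil(log2(n)) bits, so each record uses O(log n) bits.
def encode_tree(tokens, parents):
    n = len(tokens)
    width = max(1, math.ceil(math.log2(n)))  # bits per index field
    assert all(0 <= p < n for p in parents)
    return [(w, i, p) for i, (w, p) in enumerate(zip(tokens, parents))], width

tokens = ["a", "b", "c", "d"]
parents = [0, 0, 0, 2]  # position 0 is its own parent (the root)
records, width = encode_tree(tokens, parents)
assert records[3] == ("d", 3, 2)
assert all(i < 2**width and p < 2**width for _, i, p in records)
```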

Additionally, the $O(logn)$ space bound has implications from the perspective of circuit complexity. While saturated attention cannot be simulated in AC^{0}, we will show in §8 that saturated transformers over $F$*can* be simulated by TC^{0} circuits.

## 8 Threshold Circuit Simulation

We have proved that each value vector in a saturated transformer over floats has $O(logn)$ size. Now, we show how this implies saturated transformers can be simulated by TC^{0} circuits. Our results heavily leverage the following lemmas:

###### Lemma 2.

Any function *f* : {0,1}^{c} →{0,1}^{d} can be computed by a Boolean circuit of depth 3 and size at most *d*(2^{c} + *c* + 1).

So that our results are self-contained, we reproduce a proof of this lemma in Appendix D. Applying Lemma 2 to a size-preserving function with at most $clogn$ input bits immediately yields:

###### Corollary 2.1.

*Any size-preserving function with at most* $clogn$ *input bits can be computed by a Boolean circuit of depth 3 and polynomial size*.

In other words, such functions can be computed with AC^{0} circuits. In addition, we will show that the sum of *n* floats of size at most $clogn$ can be computed by TC^{0} circuits.

###### Lemma 3.

Let *v*_{1},⋯ ,*v*_{n} be a sequence of floats, each with size at most $clogn$ for some *c*. Then the sum $\sum_{i=1}^n v_i$ is computable by a threshold circuit of constant depth and polynomial size.

*Proof.*

Let *p*_{i}, *q*_{i} be the numerator and denominator of *v*_{i}. We first compute *q*_{max}, the maximum *q*_{i}, using an AC^{0} circuit that compares all pairs *q*_{i}, *q*_{j}, and returns the first *q*_{i} such that *q*_{i} ≥ *q*_{j} for all *j*. We then use the fact that multiplication and right shift (*q*_{i} is a power of 2) are in TC^{0} to compute *r*_{i} = *p*_{i} · (*q*_{max}/*q*_{i}) for each *i*. Note that *q*_{max} and *q*_{i} are both powers of 2, so the division will be exact. Next, we leverage the fact that the sum of *n* integers of size $O(logn)$ is in TC^{0} (Kayal, 2015), in order to compute the numerator of the sum $p' = \sum_i r_i$. We select the denominator as *q′* = *q*_{max}. Finally, we can add an AC^{0} circuit that “reduces” the fraction by removing shared trailing zeros from *p′*, *q′*, which is possible by Corollary 2.1. Thus, we have constructed a TC^{0} circuit to compute the sum of *n* floats with size $O(logn)$.
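The proof's pipeline can be mirrored step by step in ordinary arithmetic. This is a sketch only — `tc0_style_sum` is our own name, and Python integers stand in for the circuit's bit vectors:

```python
def tc0_style_sum(terms):
    """Sum floats (p_i, k_i) ~ p_i / 2^{k_i}, mirroring the proof's steps."""
    k_max = max(k for _, k in terms)              # step 1: compute q_max
    rs = [p << (k_max - k) for p, k in terms]     # step 2: r_i = p_i * (q_max / q_i)
    p_sum = sum(rs)                               # step 3: iterated integer addition
    # Step 4: "reduce" by removing trailing zeros shared with the denominator.
    shift = min(k_max, (p_sum & -p_sum).bit_length() - 1) if p_sum else k_max
    return p_sum >> shift, k_max - shift

assert tc0_style_sum([(1, 1), (1, 2), (1, 2)]) == (1, 0)  # 1/2 + 1/4 + 1/4 = 1
assert tc0_style_sum([(3, 2)]) == (3, 2)                  # already reduced
```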

We now construct a TC^{0} circuit that simulates a saturated transformer over floats.

$AHAT(F) \subseteq TC^0$.

*Proof.*

For each *n*, we construct a TC^{0} circuit that simulates a saturated transformer on inputs of size *n*. We construct the circuit modularly, with one subcircuit for the attention mechanism, and another for the feedforward subnetwork.

##### Attention Head

We first construct a TC^{0} subcircuit that simulates the attention mechanism at position *i*. The head attends over vectors *v*_{1},⋯ ,*v*_{n}. For all *j*, *v*_{j} has size $O(logn)$ by Theorem 4. In parallel for each *j*, we compute the scores *a*_{i,j} = *s*(*v*_{i},*v*_{j}) with an AC^{0} circuit by Corollary 2.1. We then compute $a_{i,\max} \triangleq \max_j a_{i,j}$ with an AC^{0} circuit by comparing all *a*_{i,j} pairwise, and selecting the first *a*_{i,k} such that *a*_{i,k} ≥ *a*_{i,j} for all *j*. We then compute “masked” values *u*_{i,j} for each *j* via an AC^{0} circuit by Lemma 2, setting *u*_{i,j} = *v*_{j} if *a*_{i,j} ≥ *a*_{i,max} and *u*_{i,j} = 0 otherwise. By Lemma 1, the sum $s_i = \sum_j u_{i,j}$ has size $O(logn)$, and it is computable in TC^{0} as shown above. Now, we similarly define indicators *z*_{i,j} that are 1 if *a*_{i,j} ≥ *a*_{i,max} and 0 otherwise; summing the *z*_{i,j} instead of the *u*_{i,j}, we can use a TC^{0} circuit to compute $|M(a)|$: the number of *j* such that *a*_{i,j} ≥ *a*_{i,max}. Finally, since dividing floats is in TC^{0} (cf. Appendix A), we can compute the head output as $s_i/|M(a)|$, which has size $O(logn)$ by size preservation of division.
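Numerically, the subcircuit's three quantities — the max score, the argmax set $M(a)$, and the averaged output — amount to saturated attention at one position. This is a sketch with our own function name, not the paper's circuit construction:

```python
def saturated_attention_row(scores, values):
    """Average the values at positions achieving the maximum score."""
    a_max = max(scores)                                        # a_{i,max}
    members = [j for j, a in enumerate(scores) if a == a_max]  # M(a)
    s = sum(values[j] for j in members)                        # masked sum s_i
    return s / len(members)                                    # divide by |M(a)|

# Ties at the max score are averaged uniformly: (4.0 + 6.0) / 2.
assert saturated_attention_row([0, 2, 2, 1], [10.0, 4.0, 6.0, 8.0]) == 5.0
```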

##### Feedforward

As input, *f* receives *v*_{i} as well as *H* head outputs, all of which have size $O(logn)$. As the total size of the input is $O(logn)$, we can use Corollary 2.1 to compute the output of *f* with an AC^{0} circuit. The size of the output is $O(logn)$ by size preservation of *f*. The same idea holds for *ϕ* as well as the linear classification head.

We have simulated each transformer component with a TC^{0} subcircuit, completing the proof.

### 8.1 Discussion

Recall that, over rationals, we found that size-preserving saturated transformers could recognize any language. In contrast, we have now shown that using floating-point representations places such transformers within TC^{0}. In this paper, we have only considered non-uniform AC^{0} and TC^{0}, as opposed to the uniform variants of these classes, which are more closely connected to familiar formal language classes like the regular and context-free languages (cf. Cojocaru, 2016; Mahajan, 2007). As transformers satisfy some intuitive notion of uniformity, an open question is whether saturated transformers also fall into uniform TC^{0}.

## 9 Conclusion

Compared with hard attention, saturated attention adds theoretical power to transformers. We showed that saturated attention lets transformers recognize languages outside AC^{0}, which is the upper bound with hard attention. Further, while saturated transformers with rational values and size-preserving internal functions can recognize any language, we characterized the limits of size-preserving saturated transformers with *floats*. Specifically, saturated transformers with float values fall in TC^{0}, a more powerful circuit class than AC^{0}. Thus, going from hard to saturated attention can be understood as augmenting the model with threshold gates. This illustrates one way that the circuit complexity paradigm characterizes the power of transformers. Going forward, there are many interesting open questions that circuit analysis can answer, such as comparing the power of saturated and soft attention, and refining existing upper bounds for transformers in terms of uniform circuit families.

## Acknowledgments

Thanks to Yiding Hao, Dana Angluin, and Robert Frank for sharing an early draft of their work. We also appreciate helpful feedback from Dana Angluin, Matt Gardner, Yoav Goldberg, Michael Hahn, Kyle Richardson, and Roy Schwartz.

## Appendix A Float Division

Let / be truncated division between integers. We divide a float by an integer *p* by defining an approximate multiplicative inverse *p*^{−1}: its numerator is $2^{|p|}/p$ and its denominator is $2^{|p|}$. For division by a float ⟨*p*,*q*⟩, we simply apply the integer approach and then multiply by *q*. This yields numerator $(2^{|p|}/p) \cdot q$ and denominator $2^{|p|}$.
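Reading $|p|$ as the bit size of *p*, the approximate inverse can be sketched as follows — an illustration under that reading, not a verbatim definition from the paper:

```python
def approx_inverse(p: int):
    """Approximate 1/p as (2^{|p|} // p) / 2^{|p|}, with |p| = p's bit size."""
    bits = p.bit_length()
    return (1 << bits) // p, 1 << bits  # (numerator, denominator)

num, den = approx_inverse(3)  # |3| = 2 bits, so (4 // 3, 4) = (1, 4)
assert (num, den) == (1, 4)
# Truncation error is below 1/den, since the remainder of 2^{|p|} // p is < p.
assert abs(num / den - 1 / 3) < 1 / den
```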

The fact that float division is defined in terms of integer multiplication and division implies that it is size-preserving and can be simulated in TC^{0}, which we use in §8.

## Appendix B Justifying Size Preservation

We justify that feedforward neural networks are size-preserving over floats. Feedforward neural networks are made up of a fixed (with respect to *n*) number of addition, multiplication, division, ReLU, and square root (for layer norm) operations. Therefore, it suffices to show that these operations are all in $S(F)$.

For multiplication, the numerator is just *p*_{1} · *p*_{2}, which has size $\le 2|p_{\max}|$. Let the denominators be $q_1 = 2^{k_1}$ and $q_2 = 2^{k_2}$. Then the denominator of the product is $2^{k_1 + k_2}$, which has size $\le 2|q_{\max}|$.
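The multiplication case is mechanical enough to check directly. In this sketch, floats are pairs $(p, k)$ denoting $p/2^k$:

```python
def mul_floats(f1, f2):
    (p1, k1), (p2, k2) = f1, f2
    return p1 * p2, k1 + k2  # numerator p1*p2, denominator 2^{k1+k2}

p, k = mul_floats((3, 1), (5, 2))  # 3/2 * 5/4 = 15/8
assert (p, k) == (15, 3)
# Each component at most doubles in size relative to the larger input.
assert p.bit_length() <= 2 * max((3).bit_length(), (5).bit_length())
assert k <= 2 * max(1, 2)
```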

Division can be analyzed in terms of the approximate multiplicative inverse (Appendix A).^{9} Its numerator has size $\le |p| + 1 + |q| \le 2(|p| + |q|)$ for large enough input size. The denominator has size $\le |p| + 1 \le 2|p|$ for large enough input size.

Size preservation is trivially satisfied for ReLU, which cannot expand the size of the input.

To make layer norm work, we just need to analyze square root, which we define in a truncated fashion over integers. The square root of a rational, then, simply takes the truncated square roots of *p* and *q*. We have that $|\lfloor\sqrt{p}\rfloor| \le |p|$ and analogously for *q*.
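The square-root case can be verified exhaustively for small inputs, using Python's truncated integer square root as a stand-in:

```python
import math

# Truncated square root never increases bit size: isqrt(p) <= p for p >= 1,
# and bit length is monotone in the value.
for p in range(1, 4096):
    assert math.isqrt(p).bit_length() <= p.bit_length()
```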

## Appendix C Resource-Bounded Transformers

Size preservation is one way to characterize the constraints on transformers’ internal functions; a slightly different perspective is to fix *ϕ* and analyze how the language recognition abilities of the transformer change depending on the computational resources allotted to each *s*_{ℓ,h} and *f*_{ℓ}. In this section, we derive an alternate universality theorem in terms of time complexity classes. We will show that as long as *ϕ* is powerful enough, such transformers have equivalent time complexity to their activation functions.

Recall that a transformer is a tuple $\langle \Sigma, D, \alpha, L, H, \varphi, s_{\ell,h}, f_\ell \rangle$. In contrast to $AHAT(D)$ (cf. Definition 6), we will now work with a different class of transformer languages, $AHAT(D, T(m))$. We will allow the embedding functions to be linear in the sequence length, and explore the effect of varying the complexity of the other internal functions. Let FTIME(*T*(*m*)) be the set of functions computable by a Turing machine in *T*(*m*) time.^{10}

Let $AHAT(D, T(m))$ be the class of languages ℒ ⊆ Σ^{*} such that there exists a transformer $\langle \Sigma, D, \alpha, L, H, \varphi, s_{\ell,h}, f_\ell \rangle$ that recognizes ℒ, where *ϕ* runs in time linear in the sequence length *n*, and *s*_{ℓ,h}, *f*_{ℓ} ∈ FTIME(*T*(*m*)).

For any $T(m) \ge m$, we will show that transformers in $AHAT(D, T(m))$ have the complexity of their activation functions. Formally:

###### Theorem 2.

*For* $D \in \{F, \mathbb{Q}\}$ *and* $T(m) \ge m$, $AHAT(D, T(m)) = TIME(T(m))$.

*Proof.*

First, observe that $AHAT(D, T(m)) \subseteq TIME(T(m))$, since the embedding function and saturated attention can be computed in time linear in the input sequence length, and the other internal functions can be computed in FTIME(*T*(*m*)) by construction.

Next, we show the reverse inclusion by constructing a transformer with three heads that recognizes any ℒ ∈ TIME(*T*(*m*)); we omit *ℓ*, *h* subscripts. We define three components of the embedding function $\varphi : \Sigma \times \mathbb{N} \to D^3$, from which the heads below can recover the input string *w* and the sequence length *n*. Define three heads *b*_{1,i}, *b*_{2,i}, *b*_{3,i}. Without loss of generality, consider *b*_{h,i} to act on *ϕ*(*w*_{i}, *i*)_{h} alone, rather than the full embedding vector. *b*_{1,i} is defined as a uniform head, while *b*_{2,i} and *b*_{3,i} are computed with *s*_{h}(*v*_{i}, *v*_{j}) = *v*_{j}. We then use *f* to compute whether *w* ∈ ℒ. Let *p* be the function that extracts the numerator of a float or rational number, which is computable in O(*m*) time on a float of size *m*. Within *f*, we compute *u* = *p*(*b*_{1,i}). At this point, we proceed in two cases depending on the datatype $D$:

- **Rationals:** If $D=\mathbb{Q}$, then *u* is the binary string *w*. Any ℒ ∈ TIME(*T*(*m*)) has an indicator function *δ* ∈ FTIME(*T*(*m*)), which we now apply to recognize whether *w* ∈ ℒ.
- **Floats:** If $D=F$, then $u = (2^{|n|}/n) \cdot w$ as in Appendix A. Therefore, in linear time, we compute
  $$b_{2,i} \cdot b_{3,i} \cdot u = \frac{n}{2^{|n|}} \cdot \frac{2^{|n|} w}{n} = w,$$
  and feed *w* through *δ* as in the $D=\mathbb{Q}$ case.

So, $TIME(T(m)) \subseteq AHAT(D, T(m))$.

## Appendix D Proof from Hao et al. (2022)

###### Lemma 2.

Any function *f* : {0,1}^{c} →{0,1}^{d} can be computed by a Boolean circuit of depth 3 and size at most *d*(2^{c} + *c* + 1).

*Proof.*

The idea of the proof is to define *d* subcircuits of size at most 2^{c} + *c* + 1 that compute the *d* output bits of *f* in parallel. We will build a circuit that computes each output bit of *f* according to its representation in disjunctive normal form (DNF). We define a first layer of the circuit that computes the negation of each input, which takes *c* gates. The second layer then computes the value of each DNF term by computing a conjunction (∧ gate) over the corresponding literals or negated literals. Note that a formula of *c* variables has at most 2^{c} DNF terms. Finally, the third layer of the circuit computes a disjunction (∨ gate) over the values of all terms, yielding the output of *f*, and adding a single gate. In summary, we have shown how to compute each output bit with a circuit of size at most 2^{c} + *c* + 1, which implies the full function *f* can be computed by a circuit of size at most *d*(2^{c} + *c* + 1).
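The gate count in this construction is easy to audit programmatically. In this sketch, `dnf_gate_count` is our own helper tallying the subcircuit for a single output bit:

```python
from itertools import product

def dnf_gate_count(f, c):
    """Gates for one output bit: c NOT gates, one AND per true row, one OR."""
    and_gates = sum(1 for x in product([0, 1], repeat=c) if f(x))
    return c + and_gates + 1

parity = lambda x: sum(x) % 2          # has 2^{c-1} true rows (DNF terms)
count = dnf_gate_count(parity, 3)      # 4 odd-weight inputs out of 8
assert count == 3 + 4 + 1
assert count <= 2**3 + 3 + 1           # the lemma's 2^c + c + 1 bound
```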

## Notes

Under the hood, we imagine the pair $\langle p, q \rangle$ is encoded by padding *p* and *q* to the same length with 0’s and interweaving bits from each.
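For concreteness, the padding-and-interleaving encoding from this note might look like the following — a sketch, not the paper's exact bit layout:

```python
def interleave(p: int, q: int) -> int:
    """Pad p and q with zeros to a common width and interleave their bits."""
    width = max(p.bit_length(), q.bit_length(), 1)
    code = 0
    for b in range(width):
        code |= ((p >> b) & 1) << (2 * b)      # p's bits at even positions
        code |= ((q >> b) & 1) << (2 * b + 1)  # q's bits at odd positions
    return code

assert interleave(0b11, 0b01) == 0b0111  # p-bits 1,1 and q-bits 1,0 interleaved
```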

More generally, the denominator may be taken to have a prime factorization of bounded length, although we work with the power of 2 definition, which is both simpler and closely resembles conventional floating point datatypes.

Let *V*_{ℓ,h} be a head’s value matrix in the standard transformer parameterization. Then *f*_{ℓ} is computed by first multiplying each *b*_{ℓ,h,i} by *V*_{ℓ,h}, aggregating the multiple attention heads, and applying the feedforward subnetwork.

The name AHAT stands for “averaging hard attention transformer”, and is taken from Hao et al. (2022).

To apply size preservation to the embedding function *ϕ*, we consider the size of a token to be $log(|\Sigma |)$.

Similarly, for any alphabet Σ and ℒ ⊆ Σ^{*}, we interpret *w*_{i} as a one-hot vector over Σ and define the family to recognize ℒ iff, for all *w* ∈ Σ^{*}, $C_{|w| \cdot |\Sigma|}(w) = 1 \Leftrightarrow w \in \mathcal{L}$.

It may also be possible to derive tighter bounds for rational-valued transformers by imposing stronger constraints on the internal functions. However, with floats, we will see that size preservation is sufficient to derive a tighter characterization of transformers’ power. We leave this alternate direction to future work.

The exact multiplicative inverse $\langle p, q \rangle \mapsto \langle q, p \rangle$ over unconstrained rationals is also size-preserving. Thus, neural networks are size-preserving over both floats and rationals.

We write FTIME(*m*) instead of the conventional FTIME(*n*) to avoid confusion with the sequence length *n*.

## References


## Author notes

Action Editor: Mark Johnson