Strong Equivalence of TAG and CCG

Tree-adjoining grammar (TAG) and combinatory categorial grammar (CCG) are two well-established mildly context-sensitive grammar formalisms that are known to have the same expressive power on strings (i.e., they generate the same class of string languages). It is demonstrated that their expressive power on trees also essentially coincides. In fact, CCG without lexicon entries for the empty string and with only first-order rules of degree at most 2 already achieves this full expressive power.


Introduction
Combinatory categorial grammar (CCG) [27,28] is one of several grammar formalisms that were introduced as extensions of context-free grammars. In particular, CCG extends classical categorial grammar [3], which has the same expressivity as context-free grammar, by rules that are inspired by combinatory logic [5]. CCG is a mildly context-sensitive grammar formalism [14]; such formalisms are efficiently parsable (i.e., in polynomial time), have expressivity beyond the context-free languages, are able to express a limited amount of cross-serial dependencies, and have the constant growth property. Due to these features and its notion of syntactic categories, which is quite intuitive for natural languages, CCG has become widely applied in computational linguistics. Further, it can be enhanced with semantics through the lambda calculus [27].
CCG is based on a lexicon and a rule system. The lexicon assigns syntactic categories to the symbols of an input string and the rule system describes how neighboring categories can be combined to new categories. Each category has a target, which is similar to the return type of a function, and optionally, a number of arguments. Different from functions, each argument has a directionality that indicates if it is expected on the left or the right side. If repeated combination of categories leads to a (binary) derivation tree that comprises all input symbols and is rooted in an initial category, then the input string is accepted.
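The combination mechanism just described can be sketched in a few lines; the encoding of categories as (target, argument-tuple) pairs and the toy lexicon are illustrative, not the paper's formal definition.

```python
# A minimal sketch (not the paper's formal definition): a category is a
# pair (target, tuple of (slash, argument)); application combines two
# neighboring categories when the outermost argument matches.

def forward_apply(left, right):
    """a/b combined with b on its right yields a."""
    target, args = left
    if args and args[-1] == ('/', right):
        return (target, args[:-1])
    return None

def backward_apply(left, right):
    """b combined with a\\b on its right yields a."""
    target, args = right
    if args and args[-1] == ('\\', left):
        return (target, args[:-1])
    return None

# Toy lexicon (hypothetical): "Alice" -> np, "sleeps" -> s\np
np = ('np', ())
sleeps = ('s', (('\\', np),))
assert backward_apply(np, sleeps) == ('s', ())
```

Repeating such combinations on the category sequence assigned by the lexicon yields the (binary) derivation trees discussed above.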
When defining CCG, there are many degrees of freedom yielding a number of different variants [2,18,27,28]. This is a consequence of the linguistically motivated need to easily express specific structures that have been identified in a particular theory of syntax for a given natural language. However, we and others [18] are interested in the expressive power of CCG as generators of formal languages, since this allows us to disentangle the confusion of subtly different formalisms and identify the principal structures expressible by a common core of the formalisms. As linguistic structure calls for a representation that goes beyond strings, we aim for a characterization of expressive power in terms of the generated trees.
The most famous result on the expressive power of CCG is by Vijay-Shanker and Weir [30] and shows that tree-adjoining grammar (TAG), linear indexed grammar (LIG), head grammar (HG), and CCG generate the same string languages. An equivalent automaton model is the embedded push-down automaton [29]. In the definition of CCG used by Vijay-Shanker and Weir [30], the lexicon allows ε-entries, which assign syntactic categories to the empty string ε. Their rule system restricts rules to specific categories and limits the rule degree. CCG with unbounded rule degree are Turing-complete [22]. Prefix-closed CCG without target restrictions, in which the rules obey special closure properties, are less powerful. This even holds for multimodal CCG [17,18], which allow many types of directionality indicators (i.e. slashes beyond those for left and right).
When going beyond the level of string languages, there exist different notions of strong generative power. We regard two formalisms as strongly equivalent if their generated derivation tree languages coincide. Oftentimes, we will consider strong equivalence modulo relabeling. For example, the well-known local and regular tree grammars [10] are strongly equivalent modulo relabeling. On the other hand, Hockenmaier and Young [12] regard two formalisms as strongly equivalent if they capture the same sets of dependencies. Then there exist specific scrambling cases whose dependencies can be expressed by their CCG, but not by Lexicalized TAG (LTAG). Their CCG are syntactically more expressive than ours and allow type-raising, whereas the strong generative capacity (in our sense) of LTAG is strictly smaller than that of TAG [21]. The dependencies expressed by CCG without rule restrictions and TAG are shown to be incomparable by Koller and Kuhlmann [16]. It has been shown that CCG is able to generate exactly the separable permutations (i.e. permutations that can label the leaves of a binary tree such that the leaf labels of each subtree are a set of consecutive elements of the original order) of a specific "natural order of dominance", while TAG can also express non-separable permutations [26].
Returning to our notion of strong generative capacity, Kuhlmann, Maletti, and Schiffer [19,20] investigated the tree-generative capacity of CCG without ε-entries. The generated trees are always binary. CCG with application and first-degree composition rules generate exactly the regular tree languages [10]. Without the composition rules, only a proper subset can be generated. The languages of CCG rule trees (i.e. trees labeled by applied rules instead of categories) with bounded rule degree can also be generated by simple monadic context-free tree grammar (sCFTG).
For the converse direction we show that the tree languages generated by sCFTG can also be generated by CCG, which shows strong equivalence. This answers several open questions. Since sCFTG and TAG are strongly equivalent [15], our result also shows strong equivalence of CCG and TAG. In contrast to the construction of Vijay-Shanker and Weir [30], which relies heavily on ε-entries, our construction avoids them and shows that they do not increase the expressive power of CCG. Additionally, we only use rules up to degree 2 and first-order categories (i.e., arguments are atomic), which shows that larger rule degree or higher-order categories do not increase the expressive power.
Our construction proceeds roughly as follows. We begin with a spine grammar, which is a variant of sCFTG that is also strongly equivalent to TAG (up to relabeling). We encode its spines using a context-free grammar, which in turn can be represented by a special variant of push-down automata. Finally, the runs of the push-down automaton are simulated by a CCG such that the stack operations of the automaton are realized by adding and removing arguments of the categories.

Preliminaries
The nonnegative integers are N and the positive integers are N+. For every k ∈ N, we let [k] = {i ∈ N | 1 ≤ i ≤ k}. Given a set A, let P(A) = {A' | A' ⊆ A} be the power set of A. As usual, π_i : X_1 × · · · × X_n → X_i projects a tuple to its i-th component and is given by π_i(x_1, . . . , x_n) = x_i, where each X_j with j ∈ [n] is a set. An alphabet is a finite set of symbols. The set Σ* contains all strings over the alphabet Σ including the empty string ε. We let Σ+ = Σ* \ {ε}. The length of w ∈ Σ* is |w|, and concatenation is written as juxtaposition. The prefixes Pref(w) of a string w ∈ Σ* are {u ∈ Σ* | ∃v ∈ Σ* : w = uv}. A string language is a subset L ⊆ Σ*. Given a relation ⇒ ⊆ S × S, we let ⇒* be the reflexive, transitive closure of ⇒.

2.1. Representations of String Languages. We briefly recall three standard formalisms for representing string languages. We start with nondeterministic finite automata [13].
Definition 2.1. A nondeterministic finite automaton (NFA) A = (Q, Σ, δ, I, F ) is a tuple consisting of (i) finite sets Q and Σ of states and input symbols, respectively, (ii) a transition relation δ ⊆ Q × Σ × Q, and (iii) sets I, F ⊆ Q of initial and final states, respectively.
The transition relation can be extended to a function δ̂ : Q × Σ* → P(Q) given by δ̂(q, ε) = {q} and δ̂(q, aw) = ⋃_{(q,a,q') ∈ δ} δ̂(q', w) for all q ∈ Q, a ∈ Σ, and w ∈ Σ*. The language accepted by a given NFA A is defined as L(A) = {w ∈ Σ* | δ̂(q, w) ∩ F ≠ ∅ for some q ∈ I}. Given a string language L, if there exists an NFA A with L(A) = L, then we call L regular.
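A minimal executable sketch of this definition; the example automaton is illustrative.

```python
# Sketch of the extended transition function delta-hat and NFA acceptance.

def delta_hat(delta, q, w):
    """All states reachable from q after reading the string w."""
    states = {q}
    for a in w:
        states = {r for p in states for (p2, a2, r) in delta
                  if p2 == p and a2 == a}
    return states

def accepts(delta, I, F, w):
    """w is accepted if some initial state can reach a final state on w."""
    return any(delta_hat(delta, q, w) & F for q in I)

# NFA for strings over {a, b} that end in 'ab'
delta = {(0, 'a', 0), (0, 'b', 0), (0, 'a', 1), (1, 'b', 2)}
assert accepts(delta, {0}, {2}, "aab")
assert not accepts(delta, {0}, {2}, "aba")
```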

Definition 2.2. A context-free grammar (CFG) G = (N, Σ, S, P ) consists of (i) disjoint finite sets N and Σ of nonterminal and terminal symbols, respectively, (ii) a start nonterminal S ∈ N , and (iii) a finite set P ⊆ N × (N ∪ Σ)* of productions.
In the following let G = (N, Σ, S, P ) be a CFG. We write productions (n, r) as n → r. Given n → r ∈ P and u, v ∈ (N ∪ Σ)*, we write unv ⇒_G urv and say that unv derives urv. The language generated by G is L(G) = {w ∈ Σ* | S ⇒*_G w}. Given a string language L, if there exists a CFG G with L(G) = L, then we call L context-free.
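A small sketch of the derivation relation, with a bounded search standing in for full parsing; the string encoding of sentential forms (uppercase nonterminals) is illustrative.

```python
# Sketch of the derivation step unv => urv and a bounded check of S =>* w.

def step(P, s):
    """All strings obtained from s by rewriting one nonterminal occurrence."""
    return {s[:i] + r + s[i + 1:]
            for i, c in enumerate(s) for (n, r) in P if c == n}

def generates(P, S, w, depth=8):
    """Bounded check whether S =>* w; forms longer than w are pruned."""
    frontier = {S}
    for _ in range(depth):
        if w in frontier:
            return True
        frontier = {t for s in frontier for t in step(P, s) if len(t) <= len(w)}
    return w in frontier

P = [('S', 'aSb'), ('S', 'ab')]      # generates { a^n b^n | n >= 1 }
assert generates(P, 'S', 'aabb')
assert not generates(P, 'S', 'abab')
```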
Definition 2.3. A push-down automaton (PDA) A = (Q, Σ, Γ, δ, I, ⊥, F ) consists of (i) finite sets Q, Σ, and Γ of states, input symbols, and stack symbols, respectively, (ii) a bottom-of-stack symbol ⊥ ∉ Γ, (iii) a transition relation δ of transitions (q, a, γ, γ', q') that read an input symbol a and either push one stack symbol (γ = ε, γ' ∈ Γ), pop one stack symbol (γ ∈ Γ ∪ {⊥}, γ' = ε), or ignore the stack (γ = γ' = ε), and (iv) sets I, F ⊆ Q of initial and final states, respectively.

Given a PDA A = (Q, Σ, Γ, δ, I, ⊥, F ), let Conf_A = Q × Σ* × Γ*⊥ be the set of configurations. Intuitively speaking, in configuration ⟨q, w, α⟩ ∈ Conf_A the PDA A is in state q with stack contents α and still has to read the input string w. The move relation ⊢_A ⊆ Conf_A × Conf_A is given by ⟨q, aw, γα⟩ ⊢_A ⟨q', w, γ'α⟩ for every transition (q, a, γ, γ', q') ∈ δ, input rest w ∈ Σ*, and stack rest α. The configuration ⟨q, w, α⟩ is initial (respectively, final) if q ∈ I, w ∈ Σ+, and α = ⊥ (respectively, q ∈ F , w = ε, and α = ε). An accepting run is a sequence ξ_0, . . . , ξ_n ∈ Conf_A of configurations that are successively related by moves (i.e., ξ_{i−1} ⊢_A ξ_i for all i ∈ [n]), that starts with an initial configuration ξ_0, and finishes in a final configuration ξ_n. In other words, we start in an initial state with ⊥ on the stack and finish in a final state with the empty stack, and for each intermediate step there exists a transition. An input string w ∈ Σ+ is accepted by A if there exists an accepting run starting in ⟨q, w, ⊥⟩ with q ∈ I. The language accepted by the PDA A is the set of accepted input strings and thus given by L(A) = {w ∈ Σ+ | ⟨q, w, ⊥⟩ ⊢*_A ⟨q', ε, ε⟩ for some q ∈ I and q' ∈ F}. Note that our PDA are ε-free (in the sense that each transition induces moves that process an input symbol) and have limited stack access: in each move we can pop a symbol, push a symbol, or ignore the stack. Note that we explicitly exclude the case in which a symbol is popped and another symbol is pushed at the same time. However, this restriction has no influence on the expressive power (see [7, Corollary 12] for the weighted scenario; an instantiation of the result with the Boolean semiring yields the unweighted case). We also note that no moves are possible anymore once the stack is empty.
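The limited stack access described above can be sketched as follows; the transition encoding and the toy automaton are illustrative, not the paper's notation.

```python
# Sketch of the restricted moves: a transition (q, a, g, g2, q2) reads a
# and either pushes (g == ''), pops (g2 == ''), or ignores the stack
# (both ''); popping and pushing in the same move is excluded.

EPS = ''

def moves(delta, conf):
    q, w, stack = conf                      # stack[0] is the top symbol
    out = []
    if not w or not stack:                  # no moves on empty input or stack
        return out
    for (p, a, g, g2, p2) in delta:
        if p != q or a != w[0]:
            continue
        if g == EPS and g2 == EPS:          # ignore the stack
            out.append((p2, w[1:], stack))
        elif g == EPS:                      # push g2
            out.append((p2, w[1:], g2 + stack))
        elif g2 == EPS and stack[0] == g:   # pop g
            out.append((p2, w[1:], stack[1:]))
    return out

def accepts(delta, I, F, bottom, w):
    """Start with only the bottom symbol on the stack, accept with empty stack."""
    frontier = [(q, w, bottom) for q in I]
    while frontier:
        if any(q in F and rest == EPS and stack == EPS
               for (q, rest, stack) in frontier):
            return True
        frontier = [c2 for c in frontier for c2 in moves(delta, c)]
    return False

# Toy PDA for { a^n b^n | n >= 1 } with bottom symbol 'Z' (the text's ⊥)
delta = {('q0', 'a', EPS, EPS, 'q1'),   # first a: ignore the stack
         ('q1', 'a', EPS, 'A', 'q1'),   # further a's: push A
         ('q1', 'b', 'A', EPS, 'q1'),   # b: pop A
         ('q1', 'b', 'Z', EPS, 'qf')}   # last b: pop the bottom symbol
assert accepts(delta, {'q0'}, {'qf'}, 'Z', 'aabb')
assert not accepts(delta, {'q0'}, {'qf'}, 'Z', 'aab')
```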
2.2. Tree Languages. In this paper, we only deal with binary trees since the derivation trees of CCGs are binary. We therefore build trees over ranked sets Σ = Σ0 ∪ Σ1 ∪ Σ2. If Σ is an alphabet, then it is a ranked alphabet. For every k ∈ {0, 1, 2}, we say that symbol a ∈ Σk has rank k. We write T_{Σ2,Σ1}(Σ0) for the set of all trees over Σ, which is the smallest set T such that c(t_1, . . . , t_k) ∈ T for all k ∈ {0, 1, 2}, c ∈ Σk, and t_1, . . . , t_k ∈ T. As usual, we write just a for leaves a() with a ∈ Σ0. A tree language is a subset T ⊆ T_{Σ2,∅}(Σ0). Let T = T_{Σ2,Σ1}(Σ0). The map pos : T → P+([2]*) assigns Gorn tree addresses [11] to a tree, where P+(S) is the set of all nonempty subsets of S. It is given by pos(c(t_1, . . . , t_k)) = {ε} ∪ {iw | i ∈ [k], w ∈ pos(t_i)} for all k ∈ {0, 1, 2}, c ∈ Σk, and t_1, . . . , t_k ∈ T. The set of all leaf positions of t is defined as leaves(t) = {w ∈ pos(t) | w1 ∉ pos(t)}. Given a tree t ∈ T and a position w ∈ pos(t), we write t|_w and t(w) to denote the subtree rooted in w and the symbol at w, respectively. Additionally, we let t[t']_w be the tree obtained when replacing the subtree appearing in t at position w by the tree t' ∈ T. Finally, let yield : T → Σ0+ be inductively defined by yield(a) = a for all a ∈ Σ0 and yield(c(t_1, . . . , t_k)) = yield(t_1) · · · yield(t_k) for all k ∈ [2], c ∈ Σk, and t_1, . . . , t_k ∈ T. The special leaf symbol □ is reserved and is used to represent a hole in a tree. The set C_{Σ2,Σ1}(Σ0) of contexts contains all trees of T_{Σ2,Σ1}(Σ0 ∪ {□}), in which □ occurs exactly once. We write pos_□(C) to denote the unique position of □ in the context C ∈ C_{Σ2,Σ1}(Σ0). Moreover, given t ∈ T we simply write C[t] for C[t]_{pos_□(C)}.

2.3. Combinatory Categorial Grammar. In the following, we give a short introduction to CCG. Given an alphabet A of atoms or atomic categories and a set of slashes D = {/, \} indicating directionality, the set of categories is defined as C(A) = T_{D,∅}(A).
We usually write the categories in infix notation, and the slashes are left-associative by convention, so each category takes the form c = a |_1 c_1 · · · |_k c_k, where a ∈ A, |_i ∈ D, and c_i ∈ C(A) for all i ∈ [k]. The atom a is called the target of c and written as tar(c). The slash-category pairs |_i c_i are called arguments, and their number k is called the arity of c and denoted by ar(c). In addition, we write arg(c, i) for the i-th argument |_i c_i of c. In the sense of trees, the sequence of arguments is a context □ |_1 c_1 · · · |_k c_k. The set of argument contexts is denoted by A(A) ⊆ C_{D,∅}(A). We distinguish between two types of categories: in first-order categories, all arguments are atomic, whereas in higher-order categories, the arguments can have arguments themselves.
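These notions can be sketched with a simple encoding, illustrative rather than the paper's notation: a category is a pair of a target atom and a tuple of arguments.

```python
# Sketch: a category a |1 c1 ... |k ck as (target, tuple of (slash, category)).

def tar(c):
    return c[0]              # target atom

def ar(c):
    return len(c[1])         # arity = number of arguments

def arg(c, i):
    return c[1][i - 1]       # i-th argument (slash, category)

A = ('a', ())
# Higher-order category a/(a\a)\a (left-associative): the first argument
# /(a\a) is itself non-atomic, so this category is not first-order.
c = ('a', (('/', ('a', (('\\', A),))), ('\\', A)))
assert tar(c) == 'a' and ar(c) == 2 and arg(c, 2) == ('\\', A)
```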
Next, we describe how two neighboring categories can be combined. Intuitively, the direction of the slash determines on which side a category matching the argument is expected. Hence there are two types of rules. Contrary to the usual conventions for inference systems, in derivation figures we put the inputs (premises) below and the output (conclusion) above to make the shape of the derivation tree apparent. A rule of degree k with k ∈ N has one of the following two forms:

  a x / c   together with   c |_1 c_1 · · · |_k c_k   yields   a x |_1 c_1 · · · |_k c_k   (forward),
  c |_1 c_1 · · · |_k c_k   together with   a x \ c   yields   a x |_1 c_1 · · · |_k c_k   (backward),

where a ∈ A, c ∈ C(A) ∪ {y}, |_i ∈ D, and c_i ∈ C(A) ∪ {y_i} for all i ∈ [k]. Here, y, y_1, . . . , y_k are category variables that can match any category in C(A), and x is an argument context variable that can match any argument context in A(A). The category taking the argument (a x | c with | ∈ D) is called the primary category, the one providing it (c |_1 c_1 · · · |_k c_k) is called the secondary category, and they are combined to an output category (a x |_1 c_1 · · · |_k c_k). Given a rule r, we write sec(r) to refer to the secondary category. Rules of degree 0 will be referred to as application rules, while rules of higher degree are composition rules. We write R(A) for the set of all rules over A. A rule system is a pair Π = (A, R), where A is an alphabet and R ⊆ R(A) is a finite set of rules over A. Given a rule r ∈ R, we obtain a ground instance of it by replacing the variables y, y_1, . . . , y_k by concrete categories and the variable x by a concrete argument context. The ground instances of Π induce a relation →_Π ⊆ C(A)² × C(A), and we write c_1 c_2 →_Π c instead of ((c_1, c_2), c) ∈ →_Π. The relation →_Π extends to a relation ⇒_Π ⊆ (C(A)*)² on sequences of categories, given by α c_1 c_2 β ⇒_Π α c β for all α, β ∈ C(A)* and c_1 c_2 →_Π c. A combinatory categorial grammar (CCG) is a tuple G = (Σ, A, R, I, L) that consists of an alphabet Σ of input symbols, a rule system (A, R), a set I ⊆ A of initial categories, and a finite relation L ⊆ Σ × C(A) called lexicon. It is called k-CCG if each rule r ∈ R has degree at most k, where k ∈ N.
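A ground instance of a forward rule of degree k can be sketched as follows, using an illustrative (target, tuple-of-arguments) encoding of categories; the helper name and examples are hypothetical.

```python
# Sketch of a ground instance of a forward rule of degree k: primary
# a x / c, secondary c |1 c1 ... |k ck, output a x |1 c1 ... |k ck.

def combine(primary, secondary, k):
    """Forward rule of degree k; k = 0 is plain application."""
    ptgt, pargs = primary
    stgt, sargs = secondary
    if not pargs or pargs[-1][0] != '/' or k > len(sargs):
        return None
    ctgt, cargs = pargs[-1][1]                  # the argument category c
    if stgt != ctgt or sargs[:len(sargs) - k] != cargs:
        return None                             # secondary is not c |1 c1 ... |k ck
    return (ptgt, pargs[:-1] + sargs[len(sargs) - k:])

A = ('a', ())
B = ('b', ())
prim = ('s', (('/', ('b', (('/', A),))),))      # s/(b/a)
sec = ('b', (('/', A), ('/', B)))               # b/a/b
assert combine(prim, sec, 1) == ('s', (('/', B),))     # composition: s/b
assert combine(('s', (('/', B),)), B, 0) == ('s', ())  # application: s
```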
The CCG G generates the category sequences C_G ⊆ C(A)* and the string language L(G) ⊆ Σ*, given by C_G = {s ∈ C(A)* | s ⇒*_Π a for some a ∈ I} and L(G) = L⁻¹(C_G); i.e., the string language L(G) contains all strings that can be relabeled via the lexicon to a category sequence in C_G. A tree t ∈ T_{C(A),∅}(L(Σ)) is called a derivation tree of G if t(w1) t(w2) →_{(A,R)} t(w) for every w ∈ pos(t) \ leaves(t). We denote the set of all derivation trees of G by D(G).
A category relabeling ρ : C(A) → ∆ is a relabeling such that ρ(c) = ρ(c') for all categories c, c' ∈ C(A) with tar(c) = tar(c') and arg(c, ar(c)) = arg(c', ar(c')). The relabeled derivation trees T_ρ(G) ⊆ T_{∆2,∅}(∆0) are obtained by applying ρ to every node of the derivation trees in D(G). For example, consider the secondary category c_2 = c/a/c. It has target tar(c_2) = c and the two arguments /a and /c, so its arity is ar(c_2) = 2.
A derivation tree of G is depicted in Figure 1. We start at the bottom with categories taken from the lexicon in accordance with the input symbols. Then neighboring categories are combined until we arrive at the root with initial category ⊥, so the input word is accepted.

Moore Push-down Automata
We start by introducing a Moore variant of push-down automata [1] that is geared towards our needs and still accepts the context-free languages (of strings of length ≥ 2). It will be similar to the push-down Moore machines of Decker, Leucker, and Thoma [6]. Instead of processing input symbols as part of transitions (as in Mealy machines), Moore machines output a unique input symbol in each state [8]. Recall that for every set Γ we have Γ ≤1 = {ε} ∪ Γ and additionally let Γ ≥2 = {w ∈ Γ * | 2 ≤ |w|} be the strings of length at least 2.
Definition 3.1. A Moore push-down automaton (MPDA) is defined as a tuple A = (Q, Σ, Γ, δ, τ, I, F ) that consists of (i) finite sets Q, Σ, and Γ of states, input symbols, and stack symbols, respectively, (ii) a transition relation δ ⊆ Q × Γ≤1 × Γ≤1 × Q such that γ = ε or γ' = ε for every (q, γ, γ', q') ∈ δ, (iii) an output function τ : Q → Σ, and (iv) sets I, F ⊆ Q of initial and final states, respectively.
Due to the definition of δ, as for the PDA of Definition 2.3, in a single step we can either push or pop a single stack symbol or ignore the stack. In the following, let A = (Q, Σ, Γ, δ, τ, I, F ) be an MPDA. On the set Conf_A = Q × Γ* of configurations, the move relation ⊢_A ⊆ Conf_A × Conf_A is given by ⟨q, γα⟩ ⊢_A ⟨q', γ'α⟩ for every transition (q, γ, γ', q') ∈ δ and α ∈ Γ*, and a configuration ⟨q, α⟩ ∈ Conf_A is initial (respectively, final) if q ∈ I and α ∈ Γ (respectively, q ∈ F and α = ε). An accepting run is defined in the same manner as for PDA. However, note that contrary to PDA we can start with an arbitrary symbol on the stack. The language L(A) accepted by A contains exactly those strings w ∈ Σ*, for which there exists an accepting run ⟨q_0, α_0⟩, . . . , ⟨q_n, α_n⟩ such that w = τ(q_0) · · · τ(q_n). Thus, we accept the strings that are output symbol-by-symbol by the states attained during an accepting run. As usual, two MPDA are equivalent if they accept the same language. Since no initial configuration is final, each accepting run has length at least 2, so we can only accept strings of length at least 2. While we could adjust the model to remove this restriction, the presented version serves our later purposes best.

Theorem. A language L ⊆ Σ≥2 is context-free if and only if it is accepted by an MPDA.

Proof. The straightforward part of the proof is to show that each language accepted by an MPDA is context-free. For the converse, let L ⊆ Σ≥2 be context-free and A = (Q, Σ, Γ, δ, I, ⊥, F ) be a PDA such that L(A) = L. We assume without loss of generality that the initial states of A have no incoming transitions, that the final states of A have no outgoing transitions, and that all their incoming transitions pop ⊥. We will construct an MPDA A' with L(A') = L in the spirit of the classical conversion from Mealy to Moore machines. The main idea is to shift the input symbol a from the transition (q, a, γ, γ', q') ∈ δ to the target state q'. Additionally, since there is always one more configuration compared to the number of moves (and thus involved transitions) in an accepting run, the first move needs to be avoided in A'.
If the corresponding transition pushes a symbol to the stack, we have to store it in the target state of the transition. This state becomes an initial state of A'. To be able to discern if the stored symbol can be deleted, A' needs to be aware whether the stack currently contains only one symbol, since in A the symbol pushed in the first transition lies above the bottom symbol. Since we clearly cannot store the size of the current stack in the state, we need to mark the symbol at the bottom of the stack. Formally, we construct the MPDA A' = (Q', Σ, Γ', δ', π_2, I', F'). While transitions that ignore the stack (3.1) or push to the stack (3.2) can be adopted easily, we have three variants (3.3)-(3.5) of transitions that pop from the stack. If we have not reached the bottom of the stack yet, then we can pop symbols without problems (3.3). However, when only the initial stack symbol is left, we first have to remove the stored symbol (3.4) before we can pop the initial stack symbol (3.5).
Let w = a_1 · · · a_n with a_1, . . . , a_n ∈ Σ be an input string. First we assume that the first move of A pushes the symbol γ_1 to the stack, which then gets popped in the i-th move. Any such sequence of configurations

⟨q_1, a_1 · · · a_n, ⊥⟩ ⊢_A ⟨q_2, a_2 · · · a_n, γ_1⊥⟩ ⊢_A ⟨q_3, a_3 · · · a_n, γ_2γ_1⊥⟩ ⊢_A · · · ⊢_A ⟨q_i, a_i · · · a_n, γ_1⊥⟩ ⊢_A ⟨q_{i+1}, a_{i+1} · · · a_n, ⊥⟩ ⊢_A · · · ⊢_A ⟨q_n, a_n, ⊥⟩ ⊢_A ⟨q_{n+1}, ε, ε⟩

in A with q_1 ∈ I and q_{n+1} ∈ F yields a corresponding sequence of configurations of A', which is an accepting run of A', and vice versa.
On the other hand, if the first move of A ignores the stack, then A' simulates the remaining moves starting in the configuration whose state stores a_1 and whose stack consists of the marked bottom symbol. Thus, each string of length at least 2 accepted by A is also accepted by A' and vice versa, which proves the claim.

The MPDA A is pop-normalized if there exists a map pop : Γ → Q such that q' = pop(γ) for every transition (q, γ, ε, q') ∈ δ. In other words, for each stack symbol γ ∈ Γ there is a unique state pop(γ) that the MPDA enters whenever γ is popped from the stack.
Later on, we will simulate the runs of an MPDA in a CCG such that subsequent configurations are represented by subsequent primary categories. Pop transitions are modeled by removing the last argument of a category. Thus, the target state has to be stored in the previous argument. This argument is added when the according push transition is simulated, so at that point we already have to be aware in which state the MPDA will end up after popping the symbol again. This will be explained in more detail in Section 7.
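The stack-as-arguments idea in this paragraph can be sketched with a toy encoding (illustrative, not the actual construction of Section 7): a push adds an argument that already records the state to enter when it is popped.

```python
# Toy sketch: a configuration's stack kept as the argument list of a
# category (target, tuple of (symbol, pop_state)).

def push(cat, symbol, pop_state):
    """Push = add an argument; the pop state is fixed now, at push time."""
    target, args = cat
    return (target, args + ((symbol, pop_state),))

def pop(cat):
    """Pop = remove the last argument and recover the stored state."""
    target, args = cat
    (_, state) = args[-1]
    return state, (target, args[:-1])

c = push(('q0', ()), 'A', 'q2')
state, c2 = pop(c)
assert state == 'q2' and c2 == ('q0', ())
```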
We can easily establish this property by storing a state in each stack symbol. Each push transition is replaced by one variant for each state (i.e., we guess a state when pushing), but a symbol may only be popped if the state stored in it coincides with the target state.

Lemma. For every MPDA there exists an equivalent pop-normalized MPDA.

Proof. Given an MPDA A = (Q, Σ, Γ, δ, τ, I, F ), we extend each stack symbol by a state and let Γ' = Γ × Q as well as pop = π_2, i.e., pop((γ, q)) = q for all (γ, q) ∈ Γ'. All transitions that push a symbol to the stack also guess the state that is entered when that symbol is eventually popped; the MPDA A' is constructed accordingly. It is obvious that for every accepting run of A there is an accepting run of A', in which all the guesses were correct. Note that A' starts with an arbitrary symbol on the stack, so we can find a run where the second component of this symbol coincides with the final state that is reached by popping this symbol. Similarly, every accepting run of A' can be translated into an accepting run of A by projecting each stack symbol to its first component. Hence A and A' are equivalent, and A' is clearly pop-normalized.
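The proof's construction can be sketched on the level of transitions; the encoding (q, γ, γ', q') and the example transitions are illustrative.

```python
# Sketch of pop-normalization: every stack symbol is paired with the
# state entered when it is popped; push transitions guess that state.

EPS = ''

def pop_normalize(Q, delta):
    new_delta = set()
    for (q, g, g2, q2) in delta:
        if g == EPS and g2 == EPS:            # ignore the stack
            new_delta.add((q, EPS, EPS, q2))
        elif g == EPS:                        # push: one variant per guessed state
            for p in Q:
                new_delta.add((q, EPS, (g2, p), q2))
        else:                                 # pop: only if the guess was right
            new_delta.add((q, (g, q2), EPS, q2))
    return new_delta

def pop(symbol):                              # pop((gamma, p)) = p (= pi_2)
    return symbol[1]

delta = {('q0', EPS, 'A', 'q1'), ('q1', 'A', EPS, 'q2')}
nd = pop_normalize({'q0', 'q1', 'q2'}, delta)
assert ('q1', ('A', 'q2'), EPS, 'q2') in nd
# Every pop transition now enters exactly the state stored in the symbol.
assert all(pop(g) == q2 for (q, g, g2, q2) in nd if g != EPS)
```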
The next statement shows that we can provide a form of look-ahead on the output. In each new symbol we store the current as well as the next output symbol. We will briefly sketch why this look-ahead is necessary. Before constructing the CCG, the MPDA will be used to model a spine grammar. The next output symbol of the MPDA corresponds to the label of the parent node along a "spine" of a tree generated by the spine grammar. From this parent node we can determine the label of its other child. This information will be used in the CCG to control which secondary categories are allowed as neighboring combination partners.
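Corollary 3.5 below uses Next(L), whose formal definition is not restated in this excerpt; the following is therefore only a plausible reading (an assumption, not the paper's definition): each symbol is paired with its successor, and the last symbol with an end marker.

```python
# Assumed reading of the look-ahead annotation: pair each symbol of the
# string with the next one; '$' is a hypothetical end marker.

def next_annotate(w, end='$'):
    return [(w[i], w[i + 1] if i + 1 < len(w) else end)
            for i in range(len(w))]

assert next_annotate("abc") == [('a', 'b'), ('b', 'c'), ('c', '$')]
```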
Corollary 3.5. For every context-free language L ⊆ Σ≥2 there exists a pop-normalized MPDA A such that L(A) = Next(L).

Spine Grammars
Now we move on to representations of tree languages. We first recall context-free tree grammars (CFTG) [24], but only the monadic simple variant (sCFTG) [15]; i.e., all nonterminals are either nullary or unary, and productions are linear and nondeleting.
In the following let G = (N, Σ, S, P ) be an sCFTG. We write (n, r) ∈ P simply as n → r. Given t, u ∈ T_{Σ2,N1}(Σ0 ∪ N0), we let t ⇒_G u if there exist (n → r) ∈ P and a position w ∈ pos(t) such that (i) t|_w = n and u = t[r]_w with n ∈ N0, or (ii) t|_w = n(t') and u = t[r[t']]_w with n ∈ N1 and t' ∈ T_{Σ2,N1}(Σ0 ∪ N0). The tree language generated by G is T(G) = {t ∈ T_{Σ2,∅}(Σ0) | S ⇒*_G t}.

Spine grammars [9] were originally defined as a restriction of CFTG and possess the same expressive power as sCFTG, which follows from the normal form for spine grammars. Although sCFTG are more established, we elect to utilize spine grammars because of their essential notion of spines, and we use a variant of their normal form. Deviating from the original definition [9, Definition 3.2], we treat spine grammars as a restriction on sCFTG and equip the terminal symbols with a "spine direction" (instead of the nonterminals, which is not useful in sCFTG). By creating copies of binary terminal symbols it can be shown that both variants are equivalent modulo relabeling. More specifically, under our definition, each spine grammar is clearly itself an sCFTG, and for each sCFTG G there exist a spine grammar G' and a relabeling ρ such that ρ(T(G')) = T(G).

Definition 4.2. Let G = (N, Σ, S, P ) be an sCFTG. It is called a spine grammar if there exists a map d : Σ2 → {1, 2} such that wi ∈ Pref(pos_□(C)) with i = d(C(w)) for every production (n → C) ∈ P with n ∈ N1 and every w ∈ Pref(pos_□(C)) with C(w) ∈ Σ2.
Henceforth let G = (N, Σ, S, P ) be a spine grammar with map d : Σ2 → {1, 2}. Consider a production (n → C) ∈ P with n ∈ N1. The spine of C is simply the path from the root of C to the unique occurrence pos_□(C) of □. The special feature of a spine grammar is that the symbols along the spine indicate exactly in which direction the spine continues: since only the binary terminal symbols offer branching, the map d, called the spine direction, tells us for each binary terminal symbol σ ∈ Σ2 whether the spine continues to the left, in which case d(σ) = 1, or to the right, in which case d(σ) = 2. The spine direction applies to all instances of σ in all productions. We will use the term spine also to refer to the paths that follow the spine direction in a tree generated by a spine grammar. In this manner, each such tree can be decomposed into a set of spines.
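Following the spine direction through a tree can be sketched as follows; the (label, children) tuple encoding and the example tree are illustrative.

```python
# Sketch: the labels along the spine of a tree, following the spine
# direction d at binary nodes (d maps binary labels to 1 or 2).

def spine(t, d):
    label, children = t
    if len(children) == 2:
        return [label] + spine(children[d[label] - 1], d)
    if len(children) == 1:
        return [label] + spine(children[0], d)
    return [label]                      # leaf: the spine ends here

leaf = lambda a: (a, ())
t = ('s', (('b', (leaf('x'),)), leaf('y')))   # d(s) = 1: spine goes left
assert spine(t, {'s': 1}) == ['s', 'b', 'x']
```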
In the original definition, productions are not necessarily linear or nondeleting, and the nonterminals may have rank greater than 1. Nonterminals are equipped with a head that specifies the direction where the spine continues. The spine of the right-hand side of a production is the path from the root to the unique appearance of the variable that is in head direction of the (non-nullary) nonterminal on the left-hand side. All other variables on the left-hand side of productions have to appear as children of spinal nodes on the right-hand side if they appear at all.
In spine grammars in normal form the start nonterminal is isolated and cannot occur on the right-hand sides. The three production types of the normal form are illustrated in Figure 2.
Using a single start production followed by a number of chain and terminal productions, a nullary nonterminal n can be rewritten to a tree t that consists of a spine of terminals, where each non-spinal child is a nullary nonterminal. Formally, for every nullary nonterminal n ∈ N0 let I_G(n) = {t ∈ T_{Σ2,∅}(Σ0 ∪ N0) | n ⇒_G u ⇒*_{G'} t for some u}, where G' is the spine grammar G without start productions; i.e., G' = (N, Σ, S, P') with productions P' = {(n → r) ∈ P | n ∈ N1}. So we perform a single derivation step using the productions of G followed by any number of derivation steps using only productions of G'. The elements of I_G(n) are called spinal trees for n and their spine generator is n. By a suitable renaming of nonterminals we can always achieve that the spine generator does not occur in any of its spinal trees. Accordingly, the spine grammar G is normalized if it is in normal form and no spine generator occurs in any of its spinal trees. The tree in Figure 3a, in which the spines are marked by thick edges, is generated by the spine grammar G of our running example (Example 4.4). The spinal tree corresponding to the main spine of the depicted tree is shown in Figure 3b.
The following result is a variant of Theorem 1 of Fujiyoshi and Kasai [9].
Theorem 4.5. For every spine grammar there is a strongly equivalent normalized spine grammar.
The normal form of Fujiyoshi and Kasai differs from ours in a few respects; for instance, in their productions of type (iii) the start nonterminal is not excluded from the set of nonterminals that can be produced. When starting from their normal form, standard techniques can be used to modify the grammar such that all productions n → r with r = b_1(· · · (b_m(□)) · · · ) have m = 2, that the start nonterminal is isolated, and that no nullary nonterminal can derive a spinal tree containing the same nonterminal. We therefore assume that these conditions are already met.
Let G = (N, Σ, S, P ) be a spine grammar that is already in the desired form except for the set of productions P_1 = {n → b(a) | n, a ∈ N0, b ∈ N1} ⊆ P. Let G' = (N0 ∪ N1', Σ, S, P') with unary nonterminals N1' = N1 ∪ (N0 × Σ0) and suitably adapted productions. When a nonterminal is expanded to a non-trivial spine, the terminal symbol at the bottom of that spine is guessed. That symbol is immediately produced and stored in its parent nonterminal. If the original nonterminal corresponding to the parent can be replaced by the guessed terminal symbol in the original grammar, the parent can be removed instead, since the terminal symbol was already produced (see also [23, Section 5] for a similar construction). It is easy to verify that G' still generates the same tree language as G.
After these modifications, the set P contains collapsing productions a → □ with a ∈ N1 that are not allowed in our normal form. They are subsequently removed using the standard techniques for the removal of ε-productions and unit productions from CFG, yielding a spine grammar of the desired normal form.

Figure 4. Tree-adjoining grammar

Tree-adjoining Grammars
Before we proceed, we briefly introduce TAG and sketch how a spine grammar is obtained from it. TAG is a mildly context-sensitive grammar formalism that operates on a set of elementary trees, of which a subset is initial. To generate a tree, we start with an initial tree and successively splice elementary trees into nodes using adjunction operations. In an adjunction, we select a node, insert a new tree there, and reinsert the original subtree below the selected node at the distinguished and specially marked foot node of the inserted tree. We use the non-strict variant of TAG, in which the root and foot labels of the inserted tree need not coincide with the label of the replaced node to perform an adjunction. To control at which nodes adjunction is allowed, each node is equipped with two types of constraints: the selective adjunction constraint specifies a set of trees that can be adjoined, and the Boolean obligatory adjunction constraint specifies whether adjunction is mandatory. Only trees without obligatory adjunction constraints are part of the generated tree language. Figure 4 shows the elementary trees of an example TAG. Only tree 1 is initial, and foot nodes are marked by a superscript asterisk (·*) on the label. Whenever adjunction is forbidden (i.e., empty set as selective adjunction constraint and non-obligatory adjunction), we omit the constraints altogether. Otherwise, the constraints are put next to the label. For example, {2, 3}+ indicates that tree 2 or 3 must (+ = obligatory) be adjoined.
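The adjunction operation can be sketched as follows; the tuple encoding, the FOOT marker, and the example trees are illustrative, and adjunction constraints are ignored.

```python
# Sketch of (non-strict) adjunction: splice an auxiliary tree beta into
# the node at position w of t, reinserting the original subtree at
# beta's foot. Trees are (label, children) tuples; positions are tuples
# of child indices.

FOOT = '*'

def subtree(t, w):
    return t if not w else subtree(t[1][w[0]], w[1:])

def subst(t, w, s):
    """Replace the subtree of t at position w by s."""
    if not w:
        return s
    label, children = t
    return (label, tuple(subst(c, w[1:], s) if j == w[0] else c
                         for j, c in enumerate(children)))

def foot_pos(t, w=()):
    """Position of the unique foot node of an auxiliary tree."""
    label, children = t
    if label == FOOT:
        return w
    for i, c in enumerate(children):
        p = foot_pos(c, w + (i,))
        if p is not None:
            return p
    return None

def adjoin(t, w, beta):
    return subst(t, w, subst(beta, foot_pos(beta), subtree(t, w)))

t = ('s', (('a', ()),))
beta = ('s', (('b', ()), (FOOT, ())))
assert adjoin(t, (), beta) == ('s', (('b', ()), ('s', (('a', ()),))))
```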
We briefly sketch the transformation from TAG to sCFTG that was presented by Kepser and Rogers [15]. TAG is a notational variant of footed simple CFTG, in which all variables in right-hand sides of productions appear in order directly below a designated foot node. To obtain an sCFTG, the footed simple CFTG is first converted into a spine grammar, where the spine is the path from the root to the foot node, and then brought into normal form using the construction of Fujiyoshi and Kasai [9]. The spine grammar of Example 4.4 is strongly equivalent to the TAG shown in Figure 4.

Decomposition into Spines
We proceed with the construction starting from the normalized spine grammar G. First, we will construct a CFG that captures all information of G. It represents the spinal trees (from bottom to top) as strings and enriches the symbols with the spine generator (initialized by start productions and preserved by chain productions) and the non-spinal child (given by terminal productions). The order of these annotations depends on the spine direction of the symbol. The leftmost symbol of each generated string has only a spine generator annotated, since the bottom of the spine has no children. To simplify the notation, we write n^g for (n, g) ∈ N², α^n for (α, n) ∈ Σ0 × N, and σ^{n1,n2} for (σ, n1, n2) ∈ Σ2 × N².

Definition 6.1. Let the spine grammar G be normalized and let ⋄ ∉ N be a fresh start symbol. The spines S(G) = L(G') of G are the strings generated by the CFG G' = ({⋄} ∪ N², Σ', ⋄, P') with Σ' = (Σ0 × N) ∪ (Σ2 × N²) and productions P' = P'_0 ∪ P'_1 ∪ P'_2 obtained from the start, chain, and terminal productions of G, respectively.

Example 6.2. We list some corresponding productions of the spine grammar G (left) of Example 4.4 and the CFG G' (right) for its spines S(G).
Note that for each start production we obtain a single production, since the nonterminal on its left-hand side becomes the spine generator. On the other hand, each chain or terminal production has to be combined with all nonterminals, as we do not know the spine generator of the nonterminal on the left-hand side of the original production. When a string is derived, the spine generators are pulled through: they originate from start productions and are consistent throughout the string. The language generated by G′ is the set S(G) of spines. Note that each string generated by the CFG belongs to (Σ_0 × N)(Σ_2 × N^2)*. Next we define how to reassemble those spines to form trees again, which then relabel to the original trees generated by G. The operation given in the following definition describes how a string generated by the CFG can be transformed into a tree by attaching subtrees in the non-spinal direction of each symbol, whereby the non-spinal child annotation of the symbol and the spinal annotation of the root of the attached tree have to match.
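The consistency of the spine generator along an encoded spine can be checked mechanically (a hedged sketch; the symbol names, the `spine_dir` helper, and the toy data are invented for illustration):

```python
# Sketch: an encoded spine is [(alpha, n)] followed by triples
# (sigma, n1, n2); the annotation lying in spine direction must equal
# the spine generator introduced at the bottom.

def spine_generator(word, spine_dir):
    """Return the spine generator of an encoded spine, checking that it
    is constant; `spine_dir(sigma)` is 1 or 2, selecting which of the
    two annotations lies on the spine."""
    (_, gen) = word[0]
    for (sigma, n1, n2) in word[1:]:
        on_spine = n1 if spine_dir(sigma) == 1 else n2
        assert on_spine == gen, "spine generator must be constant"
    return gen

w = [("alpha", "s"), ("sigma", "s", "b"), ("delta", "s", "c")]
print(spine_generator(w, lambda sym: 1))   # → s
```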
The generator gen : (Σ_0 × N) ∪ (Σ_2 × N^2) → N is the nonterminal in spine direction; it is given by gen(α_n) = n and gen(σ_{n1,n2}) = n_{d(σ)}. For n ∈ N, let T_n = {t ∈ T | gen(t(ε)) = n} be those trees of T whose root label has n annotated in spine direction. We recursively define the tree language attach_T(w) ⊆ T_{Σ_2×N^2,∅}(Σ_0 × N) by attach_T(α_n) = {α_n} for all α_n ∈ Σ_0 × N, and by letting attach_T(w σ_{n1,n2}) contain the trees with root label σ_{n1,n2} whose child in spine direction d(σ) belongs to attach_T(w) and whose child in the other direction belongs to T_{n_{3−d(σ)}}, for all w ∈ A and σ_{n1,n2} ∈ Σ_2 × N^2. To obtain the tree language defined by G, it is necessary to apply this operation recursively on the set of spines.
We inductively define the tree language F(L) generated by L as the smallest tree language F with attach_F(w) ⊆ F for every w ∈ L.
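A single attach step can be made concrete as follows (a hedged sketch; the nested-list tree representation, helper names, and toy data are invented for illustration, and the formal definition above remains authoritative):

```python
# Sketch: turn one encoded spine into a tree by attaching, at every
# binary symbol, a subtree from `forests` in the non-spinal direction.
# Trees are nested lists [label, child1, child2].

def attach_one(word, forests, spine_dir):
    (alpha, gen) = word[0]
    tree = [(alpha, gen)]                       # bottom of the spine
    for (sigma, n1, n2) in word[1:]:
        off = n2 if spine_dir(sigma) == 1 else n1   # non-spinal annotation
        other = forests[off][0]                 # any tree whose root generator matches
        if spine_dir(sigma) == 1:               # spine continues in the first child
            tree = [(sigma, n1, n2), tree, other]
        else:
            tree = [(sigma, n1, n2), other, tree]
    return tree

forests = {"b": [[("beta", "b")]]}              # already-assembled side spines
spine = [("alpha", "s"), ("sigma", "s", "b")]
print(attach_one(spine, forests, lambda sym: 1))
# → [('sigma', 's', 'b'), [('alpha', 's')], [('beta', 'b')]]
```

The closure F then corresponds to iterating this step until the set of assembled trees no longer grows.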
Example 6.5. The CFG G′ of Example 6.2 generates the set of spines S(G), and F(S(G))_S contains the correctly assembled trees formed from these spines. Figure 3c shows a tree of F(S(G))_S, since the generator of the main spine is S = s, which is stored in spinal direction in the root label α2 a s. We can observe the correspondence of annotations in non-spinal direction and the spine generator of the respective child in the same direction.
Next we prove that F(S(G))_S and T(G) coincide modulo relabeling. This shows that the context-free language S(G) of spines completely describes the tree language T(G) generated by G. Theorem 6.6. Let the spine grammar G be normalized. Then π(F(S(G))_S) = T(G), where the relabeling π : (Σ_0 × N) ∪ (Σ_2 × N^2) → Σ_0 ∪ Σ_2 is given by π(α_n) = α and π(σ_{n1,n2}) = σ for all α ∈ Σ_0, σ ∈ Σ_2, and n, n_1, n_2 ∈ N. Proof. We prove a more general statement. Let G′ be the CFG constructed for G in Definition 6.1. Given n ∈ N_0, we show that the tree language π(F(S(G))_n) coincides with {t ∈ T_{Σ_2,∅}(Σ_0) | n ⇒*_G t}, which contains the trees that can be derived in G starting from the nullary nonterminal n. To this end, we show inclusion in both directions.
The inclusion (⊆) is proved by induction on the size of t ∈ F(S(G))_n. Clearly, t was constructed from a string w = α_n σ1_{n_{1,1},n_{1,2}} ··· σm_{n_{m,1},n_{m,2}} ∈ S(G) with spine generator n and n_{i,d(σ_i)} = n for all i ∈ {1, ..., m}. Hence, there is a derivation of w from the start nonterminal of G′. Each production applied during this derivation corresponds uniquely to a production of the spine grammar G. This yields a derivation n ⇒*_G t_w of a spinal tree t_w ∈ I_G(n), where the spine of t_w is labeled (from bottom to top) by π(w). Besides the spine, t_w contains only nullary nonterminals, which for i ∈ {1, ..., m} are attached below σ_i in the non-spinal direction 3 − d(σ_i) and are labeled by n_{i,3−d(σ_i)}, respectively. For better readability, let n_i = n_{i,3−d(σ_i)} in the following. Each nonterminal annotation n_i in w implies the attachment of a tree t_i ∈ F(S(G))_{n_i}. These attached trees are smaller than t, so we can use the induction hypothesis and conclude that there is a derivation n_i ⇒*_G π(t_i). Combining those derivations, we obtain a derivation n ⇒*_G π(t). To prove the other direction (⊇), we use induction on the length of the derivation n ⇒*_G t and show that there exists a tree t′ ∈ F(S(G))_n with π(t′) = t. To this end, we reorder the derivation such that a spinal tree s ∈ I_G(n) is derived first (i.e., n ⇒*_G s ⇒*_G t). Suppose that this spinal tree s has the nullary terminal symbol α at the bottom and contains m binary terminal symbols σ_1, ..., σ_m (from bottom to top). Let n_i be the non-spinal child of σ_i. It is attached in direction 3 − d(σ_i). Due to the construction of G′, there is a corresponding derivation of a string w ∈ S(G) from the start nonterminal of G′, where w has the form w = α_n σ1_{n_{1,1},n_{1,2}} ··· σm_{n_{m,1},n_{m,2}} with n_{i,d(σ_i)} = n and n_{i,3−d(σ_i)} = n_i for all i ∈ {1, ..., m}. The remaining nonterminals in s are replaced by subderivations n_i ⇒*_G t_i for all i ∈ {1, ..., m}. These subderivations are shorter than the overall derivation n ⇒*_G t, so by the induction hypothesis, there exist trees t′_i ∈ F(S(G))_{n_i} such that π(t′_i) = t_i for all i ∈ {1, ..., m}. Attaching those trees t′_i to w, we obtain the tree t′ ∈ F(S(G))_n. As required, we have π(t′) = t.
We thus have proved that π(F(S(G))_S) coincides with {t ∈ T_{Σ_2,∅}(Σ_0) | S ⇒*_G t}. Hence, π(F(S(G))_S) = T(G). Corollary 6.7. For every spine grammar G there exist a pop-normalized MPDA A and a finite set L_1 of strings of length 1 such that F(L(A) ∪ L_1)_S and T(G) coincide modulo relabeling. Proof. Without loss of generality (see Theorem 4.5), let G = (N, Σ, S, P) be a normalized spine grammar. Clearly, S(G) is a context-free subset of A_0 A_2* with A_0 = Σ_0 × N and A_2 = Σ_2 × N^2 by Definition 6.1. Corollary 3.5 yields a pop-normalized MPDA A such that L(A) = {w ∈ Next(S(G)) | |w| ≥ 2}. Moreover, we observe that L(A) ∪ L_1 ⊆ (A′_2 × A_0)(A′_2 × A_2)* with A′_2 = A_2 ∪ {ε}. Clearly, L(A) ∪ L_1 relabels to S(G) via the projection to the components of A_0 and A_2. Consider the ranked alphabet Σ given by the projection to the components of Σ_0 × N and Σ_2 × N^2. By Theorem 6.6, the tree language F(S(G))_S relabels to T(G), which proves that F(L(A) ∪ L_1)_S and T(G) coincide modulo relabeling.
Example 6.8. The MPDA constructed in Corollary 6.7 for the spine grammar G of Example 4.4 is depicted in Figure 5. Initial states are indicated by a start marker and final states are marked by a double circle. Push and pop stack operations are written with downward and upward arrows, respectively. The MPDA consists of two components. The larger one describes the main spine, and the smaller one describes the side spine. The distinction between the three stack symbols is necessary due to pop-normalization, and the distinction between q_1 and q′_1 (and similar states) is necessary because of the lookahead provided by Next(S(G)). For example, τ(q_1) = (γ2 s c, γ2 s c) and τ(q′_1) = (β2 s b, γ2 s c). Similarly, τ(p_1) = (z, z) and τ(p′_1) = (ε, z), where z = η2 e b. To completely capture the behavior of G, we additionally require the set L_1 = {(ε, α_a), (ε, β_b), (ε, β_e), (ε, γ_c)}, which contains the spines of length 1.
We simulate the accepting runs of A in the spines consisting of primary categories of the CCG. These primary spines are paths in a CCG derivation tree that start with a lexical category at a leaf of the derivation tree and consist of a (possibly empty) sequence of primary categories followed by a secondary or initial category. The shortest possible primary spine is a single lexical category that serves as a secondary or initial category. The main idea is that the primary categories on the primary spine store the current configuration of A. This is achieved by adding an additional argument for transitions that push a symbol, whereas for each pop transition, an argument is removed. The last argument stores the current state in its first component and the top of the stack in its second component. Each preceding argument stores, in its second component, a deeper stack symbol and, in its first component, the state the automaton returns to when the stack symbol stored in the following argument is popped. To implement the required transformations of consecutive primary categories, the secondary categories need to have a specific structure. This mandates that the categories at the top of a spine (which act as secondary categories unless they belong to the main spine) cannot store their corresponding automaton state in the first component of the last argument as usual, but instead utilize the third component of their target. Thus each argument stores, in its third component, the final state corresponding to its secondary combination partner. This third component also allows us to decide whether a category is primary: a category is primary if and only if the spine generator of the state stored in the first component of the last argument and the spine generator of the state stored in the last component of the target coincide.
This is possible since G is normalized, which yields that attaching spines have a spine generator that is different from the spine generator of the spine that they attach to.
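As a hedged illustration of this encoding (the argument-list model and the function name are invented for this sketch and are not part of the construction), an MPDA configuration can be read off the argument list of a primary category: the last argument carries the current state and the topmost stack symbol, earlier arguments carry the deeper stack symbols.

```python
# Sketch: decode an MPDA configuration from a category's argument list.
# An argument is a triple (state, stack_symbol, third_component); the
# category is modelled simply as the list of its arguments (target omitted).

def decode_configuration(arguments):
    """Return (current_state, stack) with the topmost symbol first."""
    state = arguments[-1][0]                         # first component of last argument
    stack = [arg[1] for arg in reversed(arguments)]  # second components, top first
    return state, stack

# Arguments as after the push transition in Example 7.3:
args = [("q3", "omega", "alpha"), ("q1'", "upsilon", "p1")]
print(decode_configuration(args))   # → ("q1'", ['upsilon', 'omega'])
```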
Definition 7.1. We define the CCG G_{A,L1} = (∆_0, A, R, I′, L) as follows. We use a_i to refer to the i-th component of an atom a ∈ A. In the rules R = ⋃_{|∈D} (R^|_1 ∪ R^|_2 ∪ R^|_3) we underline the primary category ax/b, which always needs to fulfill gen(a_3) = gen(b_1).
We listed only the forward rules, but for each forward rule there also exists a symmetric backward rule, yielding the full rule sets R^|_1, R^|_2, and R^|_3 for both slash directions. Let C_wf = {c ∈ C(A) | c well-formed} be the set of well-formed categories. Clearly I′ ⊆ C_wf. In addition, we introduce sets L̂_1 and Â of top-of-spine categories derived from the short strings of L_1 and the strings accepted by A, respectively, and use them to define the lexicon. Each atom of A consists of three components. The first component stores the current state of A (or the special symbol ⊥), the second component stores the current symbol at the top of the stack, and the third component stores the final state or the symbol corresponding to the combining category of the attaching side spine. With this intuition, the rule system directly implements the transitions of A.
The lexicon assigns categories to symbols that can label leaves, so these symbols are taken from the nullary terminal symbols. The assigned categories consist of a category that appears at the top of a spine and an additional argument for the initial state of an accepting run. The spines of length 1 are translated directly to secondary categories or initial categories.
Let us make two general observations that hold for all categories that appear in derivation trees of G_{A,L1}: (1) All categories are well-formed. This follows from the fact that only well-formed categories occur in the lexicon and all categories in the derivation trees consist of atoms and arguments that were already present in the lexicon [30, Lemma 3.1]. (2) All primary categories ax | b obey gen(a_3) = gen(b_1). This is directly required by the rule system. Finally, we describe how to relabel the derivation trees D(G_{A,L1}) of the CCG G_{A,L1}, which uses categories built from the input symbols of the MPDA A. Note that only well-formed categories occur in derivation trees. Primary and non-primary categories are relabeled differently. The relabeling ρ : C_wf → ∆ is defined for every c ∈ C_wf by ρ(ax | b) = τ(b_1) for all primary categories ax | b ∈ C_wf (i.e., gen(a_3) = gen(b_1)), and otherwise ρ(ax) = τ(a_3) for all initial and secondary categories ax ∈ C_wf.
The following property requires that the spine grammar G is normalized, so a spine never has the same spine generator as its attached spines. Proof. We have gen(a_3) = comb(a_1) by the definition of the atoms A. Additionally, we have gen(a_1) = gen(b_1) by the construction of the rule system, since a_1, b_1 ∈ Q occur in a single transition of A. However, the spine generator gen(a_1) never coincides with the spine generator comb(a_1) of an attaching spine due to the normalization of G. So comb(a_1) ≠ gen(a_1) and thus gen(a_3) = comb(a_1) ≠ gen(a_1) = gen(b_1).
We are now ready to describe the general form of primary spines of G_{A,L1}. Given a primary spine c_0 ··· c_n with n ≥ 1, read from the lexicon entry towards the root, we know that it starts with a lexicon entry c_0 = ax | b ∈ L(∆_0) and ends with the non-primary category ax, which as such cannot be modified further. Hence each of the categories c ∈ {c_0, ..., c_{n−1}} has the form ax |_1 b_1 ··· |_m b_m for some m ≥ 1 and atoms b_1, ..., b_m with first components q_1, ..., q_m. The category c_n is relabeled to τ(a_3), and each such c is relabeled to τ(q_m). Additionally, unless a_1 = ⊥, the first components of all atoms in ax have the same spine generator gen(a_1), and gen(q_1) = ··· = gen(q_m), but gen(a_1) ≠ gen(q_1). Finally, neighboring arguments are coupled in such a way that pop applied to the second component of an argument yields the first component of the preceding argument. This coupling is introduced in the rules of second degree and preserved by the other rules.
Using these observations, it can be proved that the primary spines of G_{A,L1} are relabeled to strings of Next(S(G)) and vice versa. Additionally, spines attach in essentially the same manner in the CCG as under F. This yields the result that, given a spine grammar, it is possible to construct a CCG that generates the same tree language. We prove the correctness of our construction in the following two subsections, but first we illustrate it by means of an example. Example 7.3. Figure 6 shows part of the derivation tree of the CCG G_{A,L1} that corresponds to the tree of Figure 3a, which is generated by the spine grammar G of Example 4.4. We use the following abbreviations: α = (ε, α_a), β = (ε, β_b), and γ = (ε, γ_c). The labeling of the depicted section is δ γ2 γ2 β2 for the main spine and β η2 for the side spine (see Figure 3a). The corresponding runs of A are ⟨q_0, ω⟩, ⟨q_1, ω⟩, ⟨q′_1, υω⟩, ⟨q_2, υω⟩ and ⟨p_0, χ⟩, ⟨p_1, ε⟩ (see Example 6.8, Figure 5). Let us observe how the transitions of A are simulated by G_{A,L1}. The first transition (q_0, ε, ε, q_1) on the main spine does not modify the stack. It is implemented by replacing the last argument /(q_0, ω, γ) by /(q_1, ω, γ). The next transition (q_1, ε, υ, q′_1) pushes the symbol υ onto the stack. The argument /(q_1, ω, γ) is thus replaced by the two arguments /(q_3, ω, α)/(q′_1, υ, p_1). As the stack grows, an additional argument with the new state and stack symbol is added. The previous argument stores pop(υ) = q_3 to ensure that we enter the correct state after popping υ. It also contains the previous, unchanged stack symbol ω. The pop transition (p_0, χ, ε, p_1) on the side spine run is realized by removing /(p_0, χ, β).
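The three kinds of argument manipulations in this example can be sketched as plain list rewrites (a hedged illustration only; the argument triples and helper names are invented here, and well-formedness conditions are omitted):

```python
# Sketch: simulate MPDA transitions on the argument list of a primary
# category, as in Example 7.3. Arguments are (state, stack_symbol, third).

def ignore_stack(args, new_state):
    """(q, e, e, q'): replace the state in the last argument."""
    (_, sym, third) = args[-1]
    return args[:-1] + [(new_state, sym, third)]

def push_symbol(args, new_state, new_sym, pop_state, third_old, third_new):
    """(q, e, g, q'): split the last argument into two; `pop_state` is
    the state entered when `new_sym` is popped again."""
    (_, sym, _) = args[-1]
    return args[:-1] + [(pop_state, sym, third_old),
                        (new_state, new_sym, third_new)]

def pop_symbol(args):
    """(q, g, e, q'): drop the last argument; the previous one already
    stores the correct return state."""
    return args[:-1]

spine = [("q0", "omega", "gamma")]
spine = ignore_stack(spine, "q1")                                   # (q0,e,e,q1)
spine = push_symbol(spine, "q1'", "upsilon", "q3", "alpha", "p1")   # (q1,e,v,q1')
print(spine)   # → [('q3', 'omega', 'alpha'), ("q1'", 'upsilon', 'p1')]
```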
The third components are required to relabel the non-primary categories. At the bottom of the main spine, c 1 = (⊥, ε, q 3 )/(q 0 , ω, γ) is a primary category because q 0 and q 3 are associated with the same spine generator s. Thus, c 1 gets relabeled to τ (q 0 ). However, for c 2 = (q 0 , ω, γ)/(q 1 , ω, γ) the spine generators of γ and of the output of q 1 are different (c and s). Hence it is a non-primary category and gets relabeled to γ.
Concerning the lexicon, c_1 is a lexical category due to the fact that (⊥, ε, q_3) ∈ A can appear at the top of a spine as an initial category with q_3 ∈ F in its third component, while the appended argument (q_0, ω, γ) represents an initial configuration of A. Similarly, c_2 is a well-formed secondary category of a rule and the third component of its target is in L_1. Therefore, it is an element of L̂_1, which is a subset of the lexicon.
Let us illustrate how the attachment of the side spine to the main spine is realized. The lexicon contains (q 1 , υ, p 1 ) /(q 2 , υ, α) /(p 0 , χ, β), of which the first two atoms are responsible for performing a transition on the main spine. This part cannot be modified since the rule system disallows it. The target stores the final state p 1 of the side spine run in its third component. The appended argument models the initial configuration of the side spine run starting in state p 0 with χ on the stack.

7.1. Relating CCG Spines and Automaton Runs. We assume the symbols that were introduced above. In particular, let A = (Q, ∆, Γ, δ, τ, I, F) be the pop-normalized MPDA, let L_1 = {w ∈ Next(S(G)) | |w| = 1} be the short strings not captured by A, and let G_{A,L1} = (∆_0, A, R, I′, L) be the constructed CCG. We start with discussing the spines before we move on to the discussion of how those spines attach to each other in the next subsection.
Lemma 7.4. Every primary input spine of a derivation tree of D(G A,L1 ) read from bottom to top is relabeled to a string w ∈ Next(S(G)).
Proof. We start with spines of length 1. Their single category is taken from the lexicon and is thus either an initial atomic category a or a secondary input category ax. Both of these categories are relabeled to a_3 ∈ L_1 ⊆ Next(S(G)). Now consider a primary input spine c_0 ··· c_n with n ≥ 1. We have to show that there is an accepting run of A corresponding to this spine. We have already described the general form of these spines: there exists a category bx, and for each i ∈ {0, ..., n} there exist m ∈ ℕ, slashes |_1, ..., |_m ∈ D, and atoms a_1, ..., a_m ∈ A such that the category c_i has the form bx |_1 a_1 ··· |_m a_m. In particular, we have c_0 = bx | a for some | ∈ D and a ∈ A, as well as c_n = bx. For better readability, we address the j-th component of atom a_i by a_{i,j} as an abbreviation for (a_i)_j, where i ∈ [m] and j ∈ [3]. We translate each category bx |_1 a_1 ··· |_m a_m to a configuration of A via the mapping conf : C(A) → Conf_A in the following manner.
The state of the configuration corresponding to c_n is the third component of the target b, whereas all other categories c_0, ..., c_{n−1} store the state in the first component of the last argument. The stack content is represented by the second components of the suffix |_1 a_1 ··· |_m a_m. Each category relabels to the input symbol produced by its respective stored state. Thus, if conf(c_0), ..., conf(c_n) is an accepting run of A, then it generates the same string that the spine is relabeled to.
It remains to show that conf(c_0), ..., conf(c_n) is actually an accepting run. Since c_0 is assigned by the lexicon and has a suffix behind bx consisting of a single argument, whereas c_n has an empty suffix, it is easy to see that c_0 and c_n correspond to an initial and a final configuration, respectively. Hence we only need to prove that the configurations corresponding to subsequent categories c_i and c_{i+1} with i ∈ {0, ..., n−1} are connected by valid moves. To this end, we distinguish three cases based on the rule that is used to derive the output category c_{i+1} from the primary category c_i; we spell out the case of a rule from R^|_1, the push and pop cases via R^|_2 and R^|_3 being handled analogously. Let c_i = bx |_1 a_1 ··· |_m a_m with |_m = | and c_{i+1} = bx |_1 a_1 ··· |_{m−1} a_{m−1} |_m a′_m. These categories correspond to the configurations ⟨a_{m,1}, a_{m,2} ··· a_{1,2}⟩ and ⟨a′_{m,1}, a′_{m,2} a_{m−1,2} ··· a_{1,2}⟩, respectively. The definition of R^|_1 implies a′_{m,2} = a_{m,2} as well as the existence of the transition (a_{m,1}, ε, ε, a′_{m,1}) ∈ δ of A that enables a valid move.
We now turn our attention to the converse direction. More precisely, we show that, given a string w = w_0 ··· w_n ∈ Next(S(G)), we can find a primary spine c_0 ··· c_n of G_{A,L1} (i.e., a sequence of primary categories starting at a category that belongs to L(∆_0) and ending in a non-primary category) that gets relabeled to it. Further, for this spine we have some freedom in the selection of the topmost category: for every valid secondary or initial category in which the third component of the target outputs w_n, we can choose either this category or another category that differs only in the third component but still outputs w_n. Additionally, the third component of the last argument in the suffix (so in all categories except for c_n) can be chosen freely from the set of strings of length 1 or final states with the correct spine generator. This will be of great importance when we combine these spines into a complete derivation tree.
Lemma 7.5. For each string w ∈ L 1 and category c ∈ {ax ∈ L(∆ 0 ) | a 3 = w} there is a primary spine of G A,L1 that is labeled by c and relabeled to w.
Proof. The set C = {ax ∈ L(∆ 0 ) | a 3 = w} is clearly a subset of L(∆ 0 ) and each ax ∈ C is either a secondary category or an initial category by the construction of L. In either case ax is relabeled to a 3 = w and cannot be modified. Consequently, these categories themselves constitute complete primary spines of length 1.
Proof. We describe how to construct the primary spine c_0 ··· c_{n−1} ĉ by induction on i ∈ {0, ..., n} such that it additionally obeys the following invariants: (i) the categories are well-formed, and (ii) subsequent arguments |_{j−1} a_{j−1} |_j a_j in the suffix (i.e., the argument sequence after ĉ in each category ĉ |_1 a_1 ··· |_m a_m) are coupled in such a way that pop((a_j)_2) = (a_{j−1})_1 for all j ∈ [m] \ {1}. We already noted that the suffix of a category stores the stack in the second components such that the second component of the last argument contains the topmost stack symbol. Also note that all states q_0, ..., q_n have the same spine generator.
In the induction step, we assume that c_{i−1} already fulfills the conditions, and we let (q_{i−1}, γ, γ′, q_i) ∈ δ be a transition that permits the move ⟨q_{i−1}, γ_{i−1}⟩ ⊢_A ⟨q_i, γ_i⟩. We again distinguish three cases for the construction of a suitable category c_i: (1) Ignore stack: Suppose that γ = γ′ = ε. Let c_{i−1} = ĉ |_1 a_1 ··· |_m a_m. We apply a rule of R^{|_m}_1 and obtain c_i = ĉ |_1 a_1 ··· |_{m−1} a_{m−1} |′ b, where |′ = slash(q_i) and b = (q_i, (a_m)_2, e_i). Since b_1 = q_i, the category c_i gets relabeled to τ(q_i). The stack symbol is not changed, so the category is well-formed and subsequent arguments are still coupled. Additionally, neither the stack size nor the arity of the category has changed.
(2) Push symbol: Suppose that γ = ε and γ′ ≠ ε. Moreover, let j ∈ {i+1, ..., n} be minimal such that γ_j = γ_{i−1} (i.e., j is the index of the configuration in which the pushed symbol γ′ has been removed again). Finally, let c_{i−1} = ĉ |_1 a_1 ··· |_m a_m. We apply a rule of R^{|_m}_2 to obtain c_i = ĉ |_1 a_1 ··· |_{m−1} a_{m−1} | b |′ b′ with | = slash(q_j), |′ = slash(q_i), b = (q_j, (a_m)_2, e_j), and b′ = (q_i, γ′, e_i). Note that q_j = pop(γ′). Hence the category c_i gets relabeled to τ(q_i). The mentioned conditions ensure that c_i is well-formed and obeys the second invariant. The increase in stack size is properly accounted for by the increased arity of c_i.
(3) Pop symbol: Suppose that γ ≠ ε and γ′ = ε. We further distinguish between the cases i < n and i = n. We start with i < n. Suppose that c_{i−1} = ĉ |_1 a_1 ··· |_m a_m. Note that m ≥ 2. Since c_{i−1} obeys the invariants, subsequent arguments in the suffix are coupled, so we have (a_{m−1})_1 = pop((a_m)_2) = pop(γ) = q_i. Additionally, (a_{m−1})_3 = e_i as prepared in the corresponding push transition. We apply a rule of R^{|_m}_3 to obtain c_i = ĉ |_1 a_1 ··· |_{m−1} a_{m−1}, which is trivially well-formed and still obeys the second invariant. It relabels to τ(q_i) as required due to (a_{m−1})_1 = q_i. The stack size and arity both decrease by 1.
For i = n we have c_{n−1} = ĉ |_1 a_1, since the stack size is necessarily 1. We again apply a rule of R^{|_1}_3 and obtain the category ĉ. This category is trivially well-formed and also trivially fulfills the second invariant. Additionally, it relabels to τ(q_n) since it is a secondary category and tar(ĉ)_3 = q_n.
We observe that the restrictions on arguments in the categories c_i for i ∈ {0, ..., n} also hold for the arguments (and for the target) of the secondary categories that are needed to perform the category transformations corresponding to the automaton run. All of these secondary categories can themselves be chosen as the category at the top of an appropriate primary spine (unless the third component of the target constitutes an unreachable state of A). This will be relevant in the next step, in which we combine the spines to obtain a complete derivation tree. 7.2. Combining Spines. We continue to use the introduced symbols. Moreover, let D′(G_{A,L1}) be the set of derivation trees of G_{A,L1} whose root is labeled by a category of L̂_1 ∪ Â. In other words, these are exactly the derivation trees whose root nodes are labeled by top-of-spine categories. We will show that ρ(D′(G_{A,L1})) = F(Next(S(G))).
Proof. We prove the statement by induction on the size of t ∈ D′(G_{A,L1}). Let c_0 ··· c_n be the primary spine of t that starts at a lexicon entry c_0 ∈ L(∆_0) and ends at the root (i.e., c_n = t(ε)). By Lemma 7.4, this spine gets relabeled to a string w = w_0 ··· w_n ∈ Next(S(G)). Except for the root category c_n, each of the spinal categories c_0, ..., c_{n−1} gets combined with a secondary category that is itself the root of a subtree t′ ∈ D′(G_{A,L1}). Since t′ is a proper subtree of t, we can utilize the induction hypothesis to conclude that ρ(t′) ∈ F(Next(S(G))). It remains to show that each such tree fulfills the requirements necessary to attach it to the spine. Suppose that the primary category is c_i = ax | b, so it can only be combined with a secondary category of the form by, which gets relabeled to ρ(by) = τ(b_3). Suppose further that ρ(c_i) = ((σ′, n′_1, n′_2), (σ, n_1, n_2)). Clearly, gen(b_3) = comb(b_1), where gen(b_3) is the spine generator at the root of ρ(t′), and comb(b_1) = n′_{3−d(σ′)} is the generator of the non-spinal child of the succeeding parent symbol ρ(c_{i+1}). Since they coincide, the attachment of ρ(t′) is possible, and the directionality of the attachment is 3 − d(σ′), which is guaranteed by the requirement | = slash(b) for the argument | b. We conclude that all attachments of subtrees are consistent with the definition of F(Next(S(G))). Thus, ρ(t) ∈ F(Next(S(G))).
Proof. Indeed, we prove the following statement for all t ∈ F(Next(S(G))). If |t| = 1, then for each c ∈ {ax ∈ L̂_1 | a_3 = t} there is a tree t′ ∈ D′(G_{A,L1}) such that t′(ε) = c and ρ(t′) = t. If |t| > 1, then for each c ∈ {ax ∈ Â | a_3 = q_n}, where q_n is the final state of an accepting run of A corresponding to the main spine of t, there is a tree t′ ∈ D′(G_{A,L1}) such that t′(ε) = c and ρ(t′) = t. We perform an induction on the size of t.
In the induction base, t consists of a single node t ∈ L_1. By Lemma 7.5, all categories c ∈ {ax ∈ L(∆_0) | a_3 = t} = {ax ∈ L̂_1 | a_3 = t} are complete primary spines that get relabeled to t. By the definition of the lexicon, this set is nonempty for all t ∈ L_1. Thus, t ∈ ρ(D′(G_{A,L1})).
In the induction step, let t ∈ F(Next(S(G))) be a tree with |t| > 1. We identify the main spine labeled by w = w_0 ··· w_n ∈ Next(S(G)) that was used to create t. This string w is generated by an accepting run (⟨q_0, α_0⟩, ..., ⟨q_n, ε⟩) of A. Likewise, there exists a primary spine c_0 ··· c_n of G_{A,L1} that gets relabeled to w, and we can choose the category c_n at the top of the spine freely from the set {ax | ax | b ∈ L(∆_0), a_3 = q_n} = {ax ∈ Â | a_3 = q_n} according to Lemma 7.6. Similarly, we can also freely choose the third component of the last argument of each category c_0, ..., c_{n−1}. Consider an arbitrary 0 ≤ i ≤ n−1 and let w_i = ((σ′, n′_1, n′_2), (σ, n_1, n_2)); the case of w_i = ((σ′, n′_1, n′_2), (α, n)) is analogous. Below the parent node that is labeled by w_{i+1}, in the direction 3 − d(σ′), a tree t′ ∈ F(Next(S(G))) with spine generator n′_{3−d(σ′)} stored in its root label is attached. Let q be the final state of an accepting run of A for the main spine of t′ when |t′| > 1, or let q = t′ when |t′| = 1. Moreover, suppose that c_i = ax | b with b_1 = q_i, so the required secondary category has the shape by. By the induction hypothesis, there exists t″ ∈ D′(G_{A,L1}) such that t″(ε) = by ∈ L̂_1 ∪ Â, ρ(t″) = t′, and b_3 = q. This choice of b_3 is possible for c_i provided that gen(b_3) = comb(b_1), which is the case since comb(b_1) = comb(q_i) = n′_{3−d(σ′)} = gen(q) = gen(b_3). The directionality 3 − d(σ′) of the attachment of t′ is guaranteed by the relationship | = slash(b), which holds since c_i is well-formed. In conclusion, for each attached subtree t′ ∈ F(Next(S(G))) of t we can find a suitable t″ ∈ D′(G_{A,L1}) whose root category can be combined with the neighboring primary category of the spine c_0 ··· c_n. Putting the primary spine and the derivation trees for the subtrees together yields a tree in D′(G_{A,L1}). Its root c_n can be chosen freely from the desired set {ax | ax | b ∈ L(∆_0), a_3 = q_n}.
Theorem 7.9. T_ρ(G_{A,L1}) = F(Next(S(G)))_S. Proof. Recall that the initial atomic categories of G_{A,L1} are I′ = {(⊥, ε, f) ∈ A | gen(f) = S}. Let t ∈ F(Next(S(G)))_S. By Lemma 7.8, there is a tree t′ ∈ D(G_{A,L1}) with ρ(t′) = t, whose root category can be any category from {ax ∈ Â | a_3 = f}, where f is the final state of an accepting run for the main spine of t, or from {ax ∈ L̂_1 | a_3 = t} if t consists of a single node. Hence we can select the category t′(ε) = (⊥, ε, f) in the former and t′(ε) = (⊥, ε, t) in the latter case. Since both of these categories are initial, we obtain ρ(t′) ∈ T_ρ(G_{A,L1}). Now let t ∈ T_ρ(G_{A,L1}). Then there is a tree t′ ∈ D(G_{A,L1}) with ρ(t′) = t and t′(ε) ∈ I′ = {(⊥, ε, f) ∈ A | gen(f) = S}. By Lemma 7.7, we also have t ∈ F(Next(S(G))). Because the third component f of the root category t′(ε) satisfies gen(f) = S, we obtain t ∈ F(Next(S(G)))_S.
Together with Corollary 6.7, this concludes the proof of the following main theorem.
Theorem 7.10. Given a spine grammar G, we can construct a CCG that generates T(G) up to relabeling.

Strong Equivalence
In this section we will show that CCG and TAG are strongly equivalent modulo relabeling. We will also cover the implications regarding the role of ε-entries, rule degree, and the use of first-order categories.
For the converse inclusion of Theorem 7.10 we utilize a result by Kuhlmann, Maletti, and Schiffer [20, Theorem 29]. It states that for every CCG G there exists an sCFTG that generates the rule trees of G. While derivation trees are labeled by categories, rule trees are labeled by lexicon entries at leaves and by the applied rules (instead of the output categories) at inner nodes. Rule trees are a natural encoding of derivation trees that uses only a finite set of labels. As each rule indicates the target and last argument of its output category, rule trees can be relabeled in the same manner as derivation trees. For completeness' sake we restate the following definition [20, Definition 22]. Definition 8.1. Let G = (Σ, A, R, I, L) be a CCG and T = T_{R,∅}(L(Σ)). A tree t ∈ T is a rule tree if cat(t) ∈ I, where the partial map cat : T → C(A) is inductively defined by (i) cat(a) = a for all lexicon entries a ∈ L(Σ); (ii) for a node labeled by a forward rule with output axy, primary input ax/b, and secondary input by, cat of the node is azy for all trees t_1, t_2 ∈ T with cat(t_1) = az/b and cat(t_2) = by; and (iii) symmetrically, for a node labeled by a backward rule with output axy, secondary input by, and primary input ax\b, cat of the node is azy for all trees t_1, t_2 ∈ T with cat(t_1) = by and cat(t_2) = az\b. The set of all rule trees of G is denoted by R(G).
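The cat() recursion can be sketched on a toy category representation (a hedged illustration with invented syntax, restricted to plain application rules rather than the full rule format of Definition 8.1): categories are flat strings like "X/Y", a rule tree is nested tuples, and a tree is a rule tree when cat() lands in the initial categories.

```python
# Sketch: evaluate cat() bottom-up on a rule tree. Leaves are lexicon
# entries; inner nodes ("fwd", t1, t2) / ("bwd", t1, t2) recompute the
# output category from the children, failing if the rule does not apply.

def cat(t):
    if isinstance(t, str):
        return t                                  # lexicon entry
    op, t1, t2 = t
    c1, c2 = cat(t1), cat(t2)
    if op == "fwd" and c1.endswith("/" + c2):     # X/Y  Y  =>  X
        return c1[: -len(c2) - 1]
    if op == "bwd" and c2.endswith("\\" + c1):    # Y  X\Y  =>  X
        return c2[: -len(c1) - 1]
    raise ValueError("rule not applicable")

# (S\NP)/NP applied forward to NP, then combined backward with NP:
t = ("bwd", "NP", ("fwd", "S\\NP/NP", "NP"))
print(cat(t))   # → S
```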
This leads us to the second main theorem, which states that CCGs and sCFTGs are strongly equivalent up to relabeling. Proof. Let G be a CCG. Then its rule tree language R(G) can be generated by an sCFTG G′ [20, Theorem 29]. The tree language T_ρ(G) accepted by G is the set of derivation trees D(G) relabeled by ρ. The relabeling ρ can be transferred to the rule tree language R(G), since it only depends on the target and the last argument of each category, which can both be determined from the output category of the rule label of the respective rule tree node. Conversely, given an sCFTG G′, we can first convert it into an equivalent spine grammar and then construct a CCG that is equivalent (up to relabeling) to G′ by Theorem 7.10.
Kepser and Rogers [15] proved that TAGs and sCFTGs are strongly equivalent, which shows that they are also strongly equivalent (up to relabeling) to CCGs. Clearly, from strong equivalence we can conclude weak equivalence as well (without the relabeling, since the lexicon provides the relabeling). Weak equivalence was famously proven by Vijay-Shanker and Weir [30], but Theorem 3 of Kuhlmann, Koller, and Satta [18] shows that the original construction is incorrect. However, Weir [31] provides an alternative construction and proof. Our contribution provides a stronger form (and proof) of this old equivalence result. It avoids the ε-entries on which the original construction heavily relies. An ε-entry is a category assigned to the empty string; these interspersed categories form the main building block in the original constructions. The necessity of these ε-entries [30] is an interesting and important question that naturally arises and has been asked by Kuhlmann, Koller, and Satta [18]. We settle this question and demonstrate that they can be avoided. Proof. The weak equivalence of CCG and TAG is clear from the previous corollary. Similarly, each ε-free language generated by a CCG without ε-entries can trivially also be generated by a CCG with ε-entries. For the converse direction, let G be a CCG with ε-entries. We convert it into a spine grammar G′ in normal form accepting the rule tree language of G. A standard ε-removal construction yields a weakly equivalent spine grammar G″, which can be converted into a strongly equivalent CCG (up to relabeling). This constructed CCG accepts the same ε-free string language as the original CCG G that utilized ε-entries.
The tree expressive power of CCGs with restricted rule degrees has already been investigated by Kuhlmann, Maletti, and Schiffer [19,20]. It has been shown that 0-CCGs accept a proper subset of the regular tree languages [10], whereas 1-CCGs accept exactly the regular tree languages. It remained open whether there is a k such that k-CCGs and (k + 1)-CCGs have the same expressive power. Our construction establishes that 2-CCGs are as expressive as k-CCGs for arbitrary k ≥ 2. Another consequence of our construction is that first-order categories are sufficient.
Corollary 8.5. 2-CCGs with first-order categories have the same expressive power as k-CCGs with k > 2.
Proof. We only argue the nontrivial inclusion. Let G be a CCG whose categories have arbitrary order and whose rules have degree at most k. We construct the sCFTG G′ generating the rule tree language R(G). Next, we construct the CCG G′′ that generates the same tree language as G′ modulo relabeling. By construction, G′′ uses only first-order categories and rules of degree at most 2. As already argued, the rule trees can be relabeled to obtain the relabeled tree language generated by G.

Conclusion
We presented a translation from spine grammar to CCG. Due to the strong equivalence of spine grammar and TAG [15], we can also construct a strongly equivalent CCG for each TAG. Together with the translation from CCG to sCFTG [19,20], this proves the strong equivalence of TAG and CCG, which means that both formalisms generate the same derivation trees modulo relabeling. Our construction uses CCG rules of degree at most 2, only first-order categories, lexicon entries of arity at most 3, and no ε-entries in the lexicon. Such CCGs thus have full expressive power. Avoiding ε-entries is particularly interesting because they violate the Principle of Adjacency [27, p. 54], which is a fundamental linguistic principle underlying CCG and requires that all combining categories correspond to phonologically realized counterparts in the input and are string-adjacent. Their elimination is performed by trimming them from the sCFTG obtained from a CCG with ε-entries and translating the trimmed sCFTG back to a CCG using our construction.
Translating CCG to sCFTG [19,20] yields sCFTGs whose size is exponential in a CCG-specific constant, which depends on the maximal arity of secondary categories and of lexicon entries. Note that the maximal arity of secondary categories can exceed the rule degree because it is also affected by the maximal arity of categories in lexical arguments. Our construction increases the grammar size only polynomially, which can be verified step by step. Overall, a k-CCG can be converted into an equivalent 2-CCG without ε-entries in time and space exponential in the grammar-specific constant and polynomial in the size of the grammar. However, we expect that the construction can be improved to be exponential only in the maximum rule degree k, since this runtime is achievable for the closely related task of CCG parsing [25].